public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-07-29  7:43 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-07-29  7:43 UTC (permalink / raw)
  To: gentoo-commits

commit:     ad2063531d83af73c11b1b391813693ffd031fd6
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun May 11 19:53:33 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Jul 29 07:41:06 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ad206353

Create the 6.16 branch with genpatches

Enable link security restrictions by default
sparc: Address -Warray-bounds warnings
prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
Bluetooth: Check key sizes only when Secure Simple Pairing is
enabled. See bug #686758
sign-file: full functionality with modern LibreSSL
libbpf: workaround -Wmaybe-uninitialized false positive
Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO)
Add Gentoo Linux support config settings and defaults
menuconfig: Allow sorting the entries alphabetically

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  32 +++
 ...ble-link-security-restrictions-by-default.patch |  17 ++
 1700_sparc-address-warray-bound-warnings.patch     |  17 ++
 1730_parisc-Disable-prctl.patch                    |  51 +++++
 ...zes-only-if-Secure-Simple-Pairing-enabled.patch |  37 ++++
 2901_permit-menuconfig-sorting.patch               | 219 +++++++++++++++++++++
 2920_sign-file-patch-for-libressl.patch            |  16 ++
 ...workaround-Wmaybe-uninitialized-false-pos.patch |  98 +++++++++
 3000_Support-printing-firmware-info.patch          |  14 ++
 9 files changed, 501 insertions(+)

diff --git a/0000_README b/0000_README
index 90189932..2e9aa3cc 100644
--- a/0000_README
+++ b/0000_README
@@ -42,7 +42,39 @@ EXPERIMENTAL
 
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  1700_sparc-address-warray-bound-warnings.patch
+From:   https://github.com/KSPP/linux/issues/109
+Desc:   Address -Warray-bounds warnings
+
+Patch:  1730_parisc-Disable-prctl.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
+Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+
+Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch:  2901_permit-menuconfig-sorting.patch
+From:   https://lore.kernel.org/
+Desc:   menuconfig: Allow sorting the entries alphabetically
+
+Patch:  2920_sign-file-patch-for-libressl.patch
+From:   https://bugs.gentoo.org/717166
+Desc:   sign-file: full functionality with modern LibreSSL
+
+Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
+From:   https://lore.kernel.org/bpf/
+Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
+
+Patch:  3000_Support-printing-firmware-info.patch
+From:   https://bugs.gentoo.org/732852
+Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
 
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 00000000..e8c30157
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,17 @@
+--- a/fs/namei.c	2022-01-23 13:02:27.876558299 -0500
++++ b/fs/namei.c	2022-03-06 12:47:39.375719693 -0500
+@@ -1020,10 +1020,10 @@ static inline void put_link(struct namei
+ 		path_put(&last->link);
+ }
+ 
+-static int sysctl_protected_symlinks __read_mostly;
+-static int sysctl_protected_hardlinks __read_mostly;
+-static int sysctl_protected_fifos __read_mostly;
+-static int sysctl_protected_regular __read_mostly;
++static int sysctl_protected_symlinks __read_mostly = 1;
++static int sysctl_protected_hardlinks __read_mostly = 1;
++int sysctl_protected_fifos __read_mostly = 1;
++int sysctl_protected_regular __read_mostly = 1;
+ 
+ #ifdef CONFIG_SYSCTL
+ static struct ctl_table namei_sysctls[] = {

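For reference, the four knobs this patch flips are ordinary sysctls under fs.*, so the baked-in defaults can be verified from userspace. A minimal sketch (standard /proc/sys paths; the helper name is illustrative, not kernel API):

#include <stdio.h>

/* Read a single integer sysctl; returns -1 if it cannot be read. */
static int read_sysctl(const char *path)
{
	FILE *f = fopen(path, "r");
	int val = -1;

	if (f) {
		if (fscanf(f, "%d", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	/* With this patch (and no later override), all four print 1. */
	printf("protected_symlinks=%d\n",
	       read_sysctl("/proc/sys/fs/protected_symlinks"));
	printf("protected_hardlinks=%d\n",
	       read_sysctl("/proc/sys/fs/protected_hardlinks"));
	printf("protected_fifos=%d\n",
	       read_sysctl("/proc/sys/fs/protected_fifos"));
	printf("protected_regular=%d\n",
	       read_sysctl("/proc/sys/fs/protected_regular"));
	return 0;
}
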
diff --git a/1700_sparc-address-warray-bound-warnings.patch b/1700_sparc-address-warray-bound-warnings.patch
new file mode 100644
index 00000000..f9393555
--- /dev/null
+++ b/1700_sparc-address-warray-bound-warnings.patch
@@ -0,0 +1,17 @@
+--- a/arch/sparc/mm/init_64.c	2022-05-24 16:48:40.749677491 -0400
++++ b/arch/sparc/mm/init_64.c	2022-05-24 16:55:15.511356945 -0400
+@@ -3052,11 +3052,11 @@ static inline resource_size_t compute_ke
+ static void __init kernel_lds_init(void)
+ {
+ 	code_resource.start = compute_kern_paddr(_text);
+-	code_resource.end   = compute_kern_paddr(_etext - 1);
++	code_resource.end   = compute_kern_paddr(_etext) - 1;
+ 	data_resource.start = compute_kern_paddr(_etext);
+-	data_resource.end   = compute_kern_paddr(_edata - 1);
++	data_resource.end   = compute_kern_paddr(_edata) - 1;
+ 	bss_resource.start  = compute_kern_paddr(__bss_start);
+-	bss_resource.end    = compute_kern_paddr(_end - 1);
++	bss_resource.end    = compute_kern_paddr(_end) - 1;
+ }
+ 
+ static int __init report_memory(void)

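The warning fix above is an operand-order change: convert first, subtract after. A minimal sketch of the idea, assuming the usual zero-sized linker-symbol declaration (compute_kern_paddr here is a stand-in, not the sparc implementation):

#include <stdio.h>

/* _etext is a linker-provided, zero-sized symbol marking the end of .text. */
extern char _etext[];

/* Illustrative stand-in for the sparc helper. */
static unsigned long compute_kern_paddr(const void *addr)
{
	return (unsigned long)addr;
}

static unsigned long text_end_paddr(void)
{
	/*
	 * The old form, compute_kern_paddr(_etext - 1), does pointer
	 * arithmetic outside the zero-sized symbol, which GCC's
	 * -Warray-bounds flags. The patched form converts the symbol
	 * first, then subtracts as plain integer arithmetic.
	 */
	return compute_kern_paddr(_etext) - 1;
}

int main(void)
{
	printf("text ends at %#lx\n", text_end_paddr());
	return 0;
}
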
diff --git a/1730_parisc-Disable-prctl.patch b/1730_parisc-Disable-prctl.patch
new file mode 100644
index 00000000..f892d6a1
--- /dev/null
+++ b/1730_parisc-Disable-prctl.patch
@@ -0,0 +1,51 @@
+From 339b41ec357c24c02ed4aed6267dbfd443ee1e8e Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Mon, 13 Nov 2023 16:06:18 +0100
+Subject: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+
+systemd-254 tries to use prctl(PR_SET_MDWE) for systemd's
+MemoryDenyWriteExecute functionality, but fails on PA-RISC/HPPA which
+still needs executable stacks.
+
+Temporarily disable prctl(PR_SET_MDWE) by returning -ENODEV on parisc
+for now. Note that we can't return -EINVAL since systemd will then try
+to use seccomp instead.
+
+Reported-by: Sam James <sam@gentoo.org>
+Signed-off-by: Helge Deller <deller@gmx.de>
+Link: https://lore.kernel.org/all/875y2jro9a.fsf@gentoo.org/
+Link: https://github.com/systemd/systemd/issues/29775.
+Cc: <stable@vger.kernel.org> # v6.3+
+---
+ kernel/sys.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 420d9cb9cc8e2..8e3eaf650d07d 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -2700,10 +2700,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
+ 		break;
+ #endif
+ 	case PR_SET_MDWE:
+-		error = prctl_set_mdwe(arg2, arg3, arg4, arg5);
++		if (IS_ENABLED(CONFIG_PARISC))
++			error = -EINVAL;
++		else
++			error = prctl_set_mdwe(arg2, arg3, arg4, arg5);
+ 		break;
+ 	case PR_GET_MDWE:
+-		error = prctl_get_mdwe(arg2, arg3, arg4, arg5);
++		if (IS_ENABLED(CONFIG_PARISC))
++			error = -EINVAL;
++		else
++			error = prctl_get_mdwe(arg2, arg3, arg4, arg5);
+ 		break;
+ 	case PR_SET_VMA:
+ 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
+-- 
+cgit

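For context, the feature being gated is requested from userspace via prctl(2). A minimal sketch of the caller's side (PR_SET_MDWE and PR_MDWE_REFUSE_EXEC_GAIN are the uapi values; the fallback defines only cover older headers). With this patch applied, the call fails on parisc instead of enabling MDWE:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/prctl.h>

#ifndef PR_SET_MDWE
#define PR_SET_MDWE			65
#define PR_MDWE_REFUSE_EXEC_GAIN	(1UL << 0)
#endif

int main(void)
{
	/* Deny this process any future writable-and-executable mappings. */
	if (prctl(PR_SET_MDWE, PR_MDWE_REFUSE_EXEC_GAIN, 0, 0, 0) == -1)
		printf("PR_SET_MDWE failed: %s\n", strerror(errno));
	else
		printf("MDWE enabled\n");
	return 0;
}
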
diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 00000000..394ad48f
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+Encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing, which means the key size check only makes sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice, encryption was
+optional, and thus the key size check causes an issue if it is not
+bound to using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 			return 0;
+ 	}
+ 
+-	if (hci_conn_ssp_enabled(conn) &&
+-	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++	/* If Secure Simple Pairing is not enabled, then legacy connection
++	 * setup is used and no encryption or key sizes can be enforced.
++	 */
++	if (!hci_conn_ssp_enabled(conn))
++		return 1;
++
++	if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
+ 	/* The minimum encryption key size needs to be enforced by the
+-- 
+2.20.1

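The hunk above reorders the checks so that legacy (non-SSP) links skip enforcement entirely. As a simplified stand-alone sketch of that control flow (struct conn, the flag names, and the key-size constant are stand-ins, not the hci_conn API):

#include <stdbool.h>

struct conn {
	bool ssp_enabled;	/* both sides use Secure Simple Pairing */
	bool encrypted;
	int  enc_key_size;
};

#define MIN_KEY_SIZE 7		/* stand-in for HCI's minimum */

/* Returns 1 if the link mode is acceptable, 0 otherwise. */
static int check_link_mode(const struct conn *c)
{
	if (!c->ssp_enabled)
		return 1;	/* legacy pairing: nothing to enforce */
	if (!c->encrypted)
		return 0;	/* SSP links must be encrypted */
	return c->enc_key_size >= MIN_KEY_SIZE;
}
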
diff --git a/2901_permit-menuconfig-sorting.patch b/2901_permit-menuconfig-sorting.patch
new file mode 100644
index 00000000..1ceade0c
--- /dev/null
+++ b/2901_permit-menuconfig-sorting.patch
@@ -0,0 +1,219 @@
+From git@z Thu Jan  1 00:00:00 1970
+Subject: [PATCH] menuconfig: Allow sorting the entries alphabetically
+From: Ivan Orlov <ivan.orlov0322@gmail.com>
+Date: Fri, 16 Aug 2024 15:18:31 +0100
+Message-Id: <20240816141831.104085-1-ivan.orlov0322@gmail.com>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 7bit
+
+Implement functionality which allows sorting the Kconfig entries
+alphabetically if the user decides to. It may help find the desired
+entry faster, so the user will spend less time looking through the
+list.
+
+The sorting is done on the dialog_list elements in the 'dialog_menu'
+function, so on the option "representation" layer. The sorting can be
+enabled/disabled by pressing the '>' key. The labels are sorted in the
+following way:
+
+1. Put all entries into the array (from the linked list)
+2. Sort them alphabetically using qsort and custom comparator
+3. Restore the items linked list structure
+
+I know that this modification includes ugly heuristics for extracting
+the actual label text from "    [ ] Some-option"-like expressions (to
+be able to alphabetically compare the labels), and I would be happy to
+discuss alternative solutions.
+
+Signed-off-by: Ivan Orlov <ivan.orlov0322@gmail.com>
+---
+ scripts/kconfig/lxdialog/dialog.h  |  5 +-
+ scripts/kconfig/lxdialog/menubox.c |  7 ++-
+ scripts/kconfig/lxdialog/util.c    | 79 ++++++++++++++++++++++++++++++
+ scripts/kconfig/mconf.c            |  9 +++-
+ 4 files changed, 97 insertions(+), 3 deletions(-)
+
+diff --git a/scripts/kconfig/lxdialog/dialog.h b/scripts/kconfig/lxdialog/dialog.h
+index f6c2ebe6d1f9..a036ed8cb43c 100644
+--- a/scripts/kconfig/lxdialog/dialog.h
++++ b/scripts/kconfig/lxdialog/dialog.h
+@@ -58,6 +58,8 @@
+ #define ACS_DARROW 'v'
+ #endif
+ 
++#define KEY_ACTION_SORT 11
++
+ /* error return codes */
+ #define ERRDISPLAYTOOSMALL (KEY_MAX + 1)
+ 
+@@ -127,6 +129,7 @@ void item_set_selected(int val);
+ int item_activate_selected(void);
+ void *item_data(void);
+ char item_tag(void);
++void sort_items(void);
+ 
+ /* item list manipulation for lxdialog use */
+ #define MAXITEMSTR 200
+@@ -196,7 +199,7 @@ int dialog_textbox(const char *title, const char *tbuf, int initial_height,
+ 		   int initial_width, int *_vscroll, int *_hscroll,
+ 		   int (*extra_key_cb)(int, size_t, size_t, void *), void *data);
+ int dialog_menu(const char *title, const char *prompt,
+-		const void *selected, int *s_scroll);
++		const void *selected, int *s_scroll, bool sort);
+ int dialog_checklist(const char *title, const char *prompt, int height,
+ 		     int width, int list_height);
+ int dialog_inputbox(const char *title, const char *prompt, int height,
+diff --git a/scripts/kconfig/lxdialog/menubox.c b/scripts/kconfig/lxdialog/menubox.c
+index 6e6244df0c56..4cba15f967c5 100644
+--- a/scripts/kconfig/lxdialog/menubox.c
++++ b/scripts/kconfig/lxdialog/menubox.c
+@@ -161,7 +161,7 @@ static void do_scroll(WINDOW *win, int *scroll, int n)
+  * Display a menu for choosing among a number of options
+  */
+ int dialog_menu(const char *title, const char *prompt,
+-		const void *selected, int *s_scroll)
++		const void *selected, int *s_scroll, bool sort)
+ {
+ 	int i, j, x, y, box_x, box_y;
+ 	int height, width, menu_height;
+@@ -181,6 +181,9 @@ int dialog_menu(const char *title, const char *prompt,
+ 
+ 	max_choice = MIN(menu_height, item_count());
+ 
++	if (sort)
++		sort_items();
++
+ 	/* center dialog box on screen */
+ 	x = (getmaxx(stdscr) - width) / 2;
+ 	y = (getmaxy(stdscr) - height) / 2;
+@@ -408,6 +411,8 @@ int dialog_menu(const char *title, const char *prompt,
+ 			delwin(menu);
+ 			delwin(dialog);
+ 			goto do_resize;
++		case '>':
++			return KEY_ACTION_SORT;
+ 		}
+ 	}
+ 	delwin(menu);
+diff --git a/scripts/kconfig/lxdialog/util.c b/scripts/kconfig/lxdialog/util.c
+index 964139c87fcb..cc87ddd69c10 100644
+--- a/scripts/kconfig/lxdialog/util.c
++++ b/scripts/kconfig/lxdialog/util.c
+@@ -563,6 +563,85 @@ void item_reset(void)
+ 	item_cur = &item_nil;
+ }
+ 
++/*
++ * Function skips a part of the label to get the actual label text
++ * (without the '[ ]'-like prefix).
++ */
++static char *skip_spec_characters(char *s)
++{
++	bool unbalanced = false;
++
++	while (*s) {
++		if (isalnum(*s) && !unbalanced) {
++			break;
++		} else if (*s == '[' || *s == '<' || *s == '(') {
++			/*
++			 * '[', '<' or '(' means that we need to look for
++			 * closure
++			 */
++			unbalanced = true;
++		} else if (*s == '-') {
++			/*
++			 * Labels could start with "-*-", so '-' here either
++			 * opens or closes the "checkbox"
++			 */
++			unbalanced = !unbalanced;
++		} else if (*s == '>' || *s == ']' || *s == ')') {
++			unbalanced = false;
++		}
++		s++;
++	}
++	return s;
++}
++
++static int compare_labels(const void *a, const void *b)
++{
++	struct dialog_list *el1 = *((struct dialog_list **)a);
++	struct dialog_list *el2 = *((struct dialog_list **)b);
++
++	return strcasecmp(skip_spec_characters(el1->node.str),
++			  skip_spec_characters(el2->node.str));
++}
++
++void sort_items(void)
++{
++	struct dialog_list **arr;
++	struct dialog_list *cur;
++	size_t n, i;
++
++	n = item_count();
++	if (n == 0)
++		return;
++
++	/* Copy all items from linked list into array */
++	cur = item_head;
++	arr = malloc(sizeof(*arr) * n);
++
++	if (!arr) {
++		/* Don't have enough memory, so don't do anything */
++		return;
++	}
++
++	for (i = 0; i < n; i++) {
++		arr[i] = cur;
++		cur = cur->next;
++	}
++
++	qsort(arr, n, sizeof(struct dialog_list *), compare_labels);
++
++	/* Restore the linked list structure from the sorted array */
++	for (i = 0; i < n; i++) {
++		if (i < n - 1)
++			arr[i]->next = arr[i + 1];
++		else
++			arr[i]->next = NULL;
++	}
++
++	item_head = arr[0];
++
++	free(arr);
++}
++
+ void item_make(const char *fmt, ...)
+ {
+ 	va_list ap;
+diff --git a/scripts/kconfig/mconf.c b/scripts/kconfig/mconf.c
+index 3887eac75289..8a961a41cae4 100644
+--- a/scripts/kconfig/mconf.c
++++ b/scripts/kconfig/mconf.c
+@@ -749,6 +749,7 @@ static void conf_save(void)
+ 	}
+ }
+ 
++static bool should_sort;
+ static void conf(struct menu *menu, struct menu *active_menu)
+ {
+ 	struct menu *submenu;
+@@ -774,9 +775,15 @@ static void conf(struct menu *menu, struct menu *active_menu)
+ 		dialog_clear();
+ 		res = dialog_menu(prompt ? prompt : "Main Menu",
+ 				  menu_instructions,
+-				  active_menu, &s_scroll);
++				  active_menu, &s_scroll, should_sort);
+ 		if (res == 1 || res == KEY_ESC || res == -ERRDISPLAYTOOSMALL)
+ 			break;
++
++		if (res == KEY_ACTION_SORT) {
++			should_sort = !should_sort;
++			continue;
++		}
++
+ 		if (item_count() != 0) {
+ 			if (!item_activate_selected())
+ 				continue;
+-- 
+2.34.1
+

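The list-sort strategy described in the commit message (copy into an array, qsort, re-link) is self-contained enough to demo outside lxdialog. A minimal sketch with a stand-in node type in place of struct dialog_list:

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

/* Stand-in for struct dialog_list: just a label and a next pointer. */
struct node {
	const char *str;
	struct node *next;
};

static int cmp(const void *a, const void *b)
{
	const struct node *x = *(struct node *const *)a;
	const struct node *y = *(struct node *const *)b;

	return strcasecmp(x->str, y->str);
}

static struct node *sort_list(struct node *head, size_t n)
{
	struct node **arr;
	size_t i;

	if (n == 0)
		return head;
	arr = malloc(n * sizeof(*arr));
	if (!arr)
		return head;	/* no memory: leave the list untouched */

	for (i = 0; i < n; i++, head = head->next)
		arr[i] = head;			/* 1. linked list -> array */
	qsort(arr, n, sizeof(*arr), cmp);	/* 2. sort alphabetically */
	for (i = 0; i + 1 < n; i++)		/* 3. re-link in sorted order */
		arr[i]->next = arr[i + 1];
	arr[n - 1]->next = NULL;
	head = arr[0];
	free(arr);
	return head;
}

int main(void)
{
	struct node c = { "zram", NULL };
	struct node b = { "ACPI", &c };
	struct node a = { "Networking", &b };
	struct node *p;

	for (p = sort_list(&a, 3); p; p = p->next)
		printf("%s\n", p->str);	/* ACPI, Networking, zram */
	return 0;
}
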
diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 00000000..e6ec017d
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c	2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c	2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+  * signing with anything other than SHA1 - so we're stuck with that if such is
+  * the case.
+  */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+-	OPENSSL_VERSION_NUMBER < 0x10000000L || \
+-	defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++	( defined(LIBRESSL_VERSION_NUMBER) \
++	&& (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++	OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7

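The new condition keeps the PKCS#7 fallback only where CMS is genuinely missing. LIBRESSL_VERSION_NUMBER follows the OpenSSL 0xMNNFF00f layout, so the 0x3010000fL bound above corresponds to LibreSSL 3.1.0 (assumed here to be the first release with usable CMS). A compile-time sketch of the same gate:

#include <stdio.h>
#include <openssl/opensslv.h>

/* Same gate as the patch: fall back to PKCS#7 only when CMS is absent. */
#if defined(OPENSSL_NO_CMS) || \
	(defined(LIBRESSL_VERSION_NUMBER) && \
	 LIBRESSL_VERSION_NUMBER < 0x3010000fL) || \
	OPENSSL_VERSION_NUMBER < 0x10000000L
#define USE_PKCS7
#endif

int main(void)
{
#ifdef USE_PKCS7
	puts("signing via PKCS#7");
#else
	puts("signing via CMS");
#endif
	return 0;
}
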
diff --git a/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch b/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
new file mode 100644
index 00000000..af5e117f
--- /dev/null
+++ b/2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
@@ -0,0 +1,98 @@
+From git@z Thu Jan  1 00:00:00 1970
+Subject: [PATCH v2] libbpf: workaround -Wmaybe-uninitialized false positive
+From: Sam James <sam@gentoo.org>
+Date: Fri, 09 Aug 2024 18:26:41 +0100
+Message-Id: <8f5c3b173e4cb216322ae19ade2766940c6fbebb.1723224401.git.sam@gentoo.org>
+MIME-Version: 1.0
+Content-Type: text/plain; charset="utf-8"
+Content-Transfer-Encoding: 8bit
+
+In `elf_close`, we get this with GCC 15 -O3 (at least):
+```
+In function ‘elf_close’,
+    inlined from ‘elf_close’ at elf.c:53:6,
+    inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
+elf.c:57:9: warning: ‘elf_fd.elf’ may be used uninitialized [-Wmaybe-uninitialized]
+   57 |         elf_end(elf_fd->elf);
+      |         ^~~~~~~~~~~~~~~~~~~~
+elf.c: In function ‘elf_find_func_offset_from_file’:
+elf.c:377:23: note: ‘elf_fd.elf’ was declared here
+  377 |         struct elf_fd elf_fd;
+      |                       ^~~~~~
+In function ‘elf_close’,
+    inlined from ‘elf_close’ at elf.c:53:6,
+    inlined from ‘elf_find_func_offset_from_file’ at elf.c:384:2:
+elf.c:58:9: warning: ‘elf_fd.fd’ may be used uninitialized [-Wmaybe-uninitialized]
+   58 |         close(elf_fd->fd);
+      |         ^~~~~~~~~~~~~~~~~
+elf.c: In function ‘elf_find_func_offset_from_file’:
+elf.c:377:23: note: ‘elf_fd.fd’ was declared here
+  377 |         struct elf_fd elf_fd;
+      |                       ^~~~~~
+```
+
+In reality, our use is fine, it's just that GCC doesn't model errno
+here (see linked GCC bug). Suppress -Wmaybe-uninitialized accordingly.
+
+Link: https://gcc.gnu.org/PR114952
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+v2: Fix Clang build.
+
+Range-diff against v1:
+1:  3ebbe7a4e93a ! 1:  8f5c3b173e4c libbpf: workaround -Wmaybe-uninitialized false positive
+    @@ tools/lib/bpf/elf.c: long elf_find_func_offset(Elf *elf, const char *binary_path
+      	return ret;
+      }
+      
+    ++#if !defined(__clang__)
+     +#pragma GCC diagnostic push
+     +/* https://gcc.gnu.org/PR114952 */
+     +#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
+    ++#endif
+      /* Find offset of function name in ELF object specified by path. "name" matches
+       * symbol name or name@@LIB for library functions.
+       */
+    @@ tools/lib/bpf/elf.c: long elf_find_func_offset_from_file(const char *binary_path
+      	elf_close(&elf_fd);
+      	return ret;
+      }
+    ++#if !defined(__clang__)
+     +#pragma GCC diagnostic pop
+    ++#endif
+      
+      struct symbol {
+      	const char *name;
+
+ tools/lib/bpf/elf.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c
+index c92e02394159..7058425ca85b 100644
+--- a/tools/lib/bpf/elf.c
++++ b/tools/lib/bpf/elf.c
+@@ -369,6 +369,11 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
+ 	return ret;
+ }
+ 
++#if !defined(__clang__)
++#pragma GCC diagnostic push
++/* https://gcc.gnu.org/PR114952 */
++#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
++#endif
+ /* Find offset of function name in ELF object specified by path. "name" matches
+  * symbol name or name@@LIB for library functions.
+  */
+@@ -384,6 +389,9 @@ long elf_find_func_offset_from_file(const char *binary_path, const char *name)
+ 	elf_close(&elf_fd);
+ 	return ret;
+ }
++#if !defined(__clang__)
++#pragma GCC diagnostic pop
++#endif
+ 
+ struct symbol {
+ 	const char *name;
+-- 
+2.45.2
+

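The pragma pair generalizes to any spot where GCC's flow analysis cannot see a constraint the code relies on. A self-contained sketch of the pattern (the function body is contrived just to show the shape; Clang does not recognize this GCC warning group in pragmas, hence the guard, mirroring the patch):

#include <stdio.h>

#if !defined(__clang__)
#pragma GCC diagnostic push
/* GCC cannot always prove 'out' is written before the read below. */
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
#endif
static int demo(int ok)
{
	int out;	/* deliberately not pre-initialised */

	if (ok)
		out = 42;
	return ok ? out : -1;	/* 'out' is only read when it was set */
}
#if !defined(__clang__)
#pragma GCC diagnostic pop
#endif

int main(void)
{
	printf("%d\n", demo(1));
	return 0;
}
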
diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 00000000..a630cfbe
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c	2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c	2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+ 
+ 	ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ 					offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++        printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 

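The hunk is the whole mechanism: one config-gated printk in the common firmware-request path. The same effect written with the kernel's pr_*() helpers would look like the sketch below; purely illustrative, as the patch as shipped keeps the explicit printk(KERN_NOTICE ...):

#include <linux/printk.h>

/* Config-gated load notice, equivalent in effect to the patch hunk. */
static inline void gentoo_fw_notice(const char *name)
{
#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
	pr_notice("Loading firmware: %s\n", name);
#else
	(void)name;	/* keep the signature stable when disabled */
#endif
}
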


* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-16  3:07 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-16  3:07 UTC (permalink / raw)
  To: gentoo-commits

commit:     25785ddfcbfb6071830cc117721d921f56150f61
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 02:59:33 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 03:01:47 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=25785ddf

Linux patch 6.16.1

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |     5 +
 1000_linux-6.16.1.patch | 26788 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 26793 insertions(+)

diff --git a/0000_README b/0000_README
index 2e9aa3cc..3d350c4c 100644
--- a/0000_README
+++ b/0000_README
@@ -42,6 +42,11 @@ EXPERIMENTAL
 
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
+
+Patch:  1000_linux-6.16.1.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.1
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1000_linux-6.16.1.patch b/1000_linux-6.16.1.patch
new file mode 100644
index 00000000..e580a27e
--- /dev/null
+++ b/1000_linux-6.16.1.patch
@@ -0,0 +1,26788 @@
+diff --git a/.gitignore b/.gitignore
+index bf5ee6e01cd42a..929054df5212d6 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -114,6 +114,7 @@ modules.order
+ !.gitignore
+ !.kunitconfig
+ !.mailmap
++!.pylintrc
+ !.rustfmt.toml
+ 
+ #
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 07e22ba5bfe343..f6d317e1674d6a 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -633,6 +633,14 @@
+ 			named mounts. Specifying both "all" and "named" disables
+ 			all v1 hierarchies.
+ 
++	cgroup_v1_proc=	[KNL] Show also missing controllers in /proc/cgroups
++			Format: { "true" | "false" }
++			/proc/cgroups lists only v1 controllers by default.
++			This compatibility option enables listing also v2
++			controllers (whose v1 code is not compiled!), so that
++			semi-legacy software can check this file to decide
++			about usage of v2 (sic) controllers.
++
+ 	cgroup_favordynmods= [KNL] Enable or Disable favordynmods.
+ 			Format: { "true" | "false" }
+ 			Defaults to the value of CONFIG_CGROUP_FAVOR_DYNMODS.
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index 440e4ae74e4432..03b1efa6d3b2b1 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -238,9 +238,9 @@ usrjquota=<file>	 Appoint specified file and type during mount, so that quota
+ grpjquota=<file>	 information can be properly updated during recovery flow,
+ prjjquota=<file>	 <quota file>: must be in root directory;
+ jqfmt=<quota type>	 <quota type>: [vfsold,vfsv0,vfsv1].
+-offusrjquota		 Turn off user journalled quota.
+-offgrpjquota		 Turn off group journalled quota.
+-offprjjquota		 Turn off project journalled quota.
++usrjquota=		 Turn off user journalled quota.
++grpjquota=		 Turn off group journalled quota.
++prjjquota=		 Turn off project journalled quota.
+ quota			 Enable plain user disk quota accounting.
+ noquota			 Disable all plain disk quota option.
+ alloc_mode=%s		 Adjust block allocation policy, which supports "reuse"
+diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
+index 348c6ad548f501..d1ee5307160f4c 100644
+--- a/Documentation/netlink/specs/ethtool.yaml
++++ b/Documentation/netlink/specs/ethtool.yaml
+@@ -2107,9 +2107,6 @@ operations:
+ 
+       do: &module-eeprom-get-op
+         request:
+-          attributes:
+-            - header
+-        reply:
+           attributes:
+             - header
+             - offset
+@@ -2117,6 +2114,9 @@ operations:
+             - page
+             - bank
+             - i2c-address
++        reply:
++          attributes:
++            - header
+             - data
+       dump: *module-eeprom-get-op
+     -
+diff --git a/Makefile b/Makefile
+index 478f2004602da0..d18dae20b7af39 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/microchip/sam9x7.dtsi b/arch/arm/boot/dts/microchip/sam9x7.dtsi
+index b217a908f52534..114449e9072065 100644
+--- a/arch/arm/boot/dts/microchip/sam9x7.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x7.dtsi
+@@ -45,11 +45,13 @@ cpu@0 {
+ 	clocks {
+ 		slow_xtal: clock-slowxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "slow_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 
+ 		main_xtal: clock-mainxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "main_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/microchip/sama7d65.dtsi b/arch/arm/boot/dts/microchip/sama7d65.dtsi
+index d08d773b1cc578..f96b073a7db515 100644
+--- a/arch/arm/boot/dts/microchip/sama7d65.dtsi
++++ b/arch/arm/boot/dts/microchip/sama7d65.dtsi
+@@ -38,11 +38,13 @@ cpu0: cpu@0 {
+ 	clocks {
+ 		main_xtal: clock-mainxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "main_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 
+ 		slow_xtal: clock-slowxtal {
+ 			compatible = "fixed-clock";
++			clock-output-names = "slow_xtal";
+ 			#clock-cells = <0>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi b/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi
+index 29d2f86d5e34a7..f4c45e964daf8f 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6ul-kontron-bl-common.dtsi
+@@ -168,7 +168,6 @@ &uart2 {
+ 	pinctrl-0 = <&pinctrl_uart2>;
+ 	linux,rs485-enabled-at-boot-time;
+ 	rs485-rx-during-tx;
+-	rs485-rts-active-low;
+ 	uart-has-rtscts;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi b/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi
+index 597f20be82f1ee..62e555bf6a71d9 100644
+--- a/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi
++++ b/arch/arm/boot/dts/nxp/vf/vfxxx.dtsi
+@@ -603,7 +603,7 @@ usbmisc1: usb@400b4800 {
+ 
+ 			ftm: ftm@400b8000 {
+ 				compatible = "fsl,ftm-timer";
+-				reg = <0x400b8000 0x1000 0x400b9000 0x1000>;
++				reg = <0x400b8000 0x1000>, <0x400b9000 0x1000>;
+ 				interrupts = <44 IRQ_TYPE_LEVEL_HIGH>;
+ 				clock-names = "ftm-evt", "ftm-src",
+ 					"ftm-evt-counter-en", "ftm-src-counter-en";
+diff --git a/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts b/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts
+index 16b567e3cb4722..b4fdcf9c02b500 100644
+--- a/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts
++++ b/arch/arm/boot/dts/ti/omap/am335x-boneblack.dts
+@@ -35,7 +35,7 @@ &gpio0 {
+ 		"P9_18 [spi0_d1]",
+ 		"P9_17 [spi0_cs0]",
+ 		"[mmc0_cd]",
+-		"P8_42A [ecappwm0]",
++		"P9_42A [ecappwm0]",
+ 		"P8_35 [lcd d12]",
+ 		"P8_33 [lcd d13]",
+ 		"P8_31 [lcd d14]",
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index c60104dc158529..df5afe601e4a57 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -206,7 +206,7 @@ static int ctr_encrypt(struct skcipher_request *req)
+ 	while (walk.nbytes > 0) {
+ 		const u8 *src = walk.src.virt.addr;
+ 		u8 *dst = walk.dst.virt.addr;
+-		int bytes = walk.nbytes;
++		unsigned int bytes = walk.nbytes;
+ 
+ 		if (unlikely(bytes < AES_BLOCK_SIZE))
+ 			src = dst = memcpy(buf + sizeof(buf) - bytes,
+diff --git a/arch/arm/mach-s3c/gpio-samsung.c b/arch/arm/mach-s3c/gpio-samsung.c
+index 206a492fbaf50c..3ee4ad969cc22e 100644
+--- a/arch/arm/mach-s3c/gpio-samsung.c
++++ b/arch/arm/mach-s3c/gpio-samsung.c
+@@ -516,7 +516,7 @@ static void __init samsung_gpiolib_add(struct samsung_gpio_chip *chip)
+ 		gc->direction_input = samsung_gpiolib_2bit_input;
+ 	if (!gc->direction_output)
+ 		gc->direction_output = samsung_gpiolib_2bit_output;
+-	if (!gc->set)
++	if (!gc->set_rv)
+ 		gc->set_rv = samsung_gpiolib_set;
+ 	if (!gc->get)
+ 		gc->get = samsung_gpiolib_get;
+diff --git a/arch/arm64/boot/dts/exynos/google/gs101.dtsi b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+index 48c691fd0a3ae4..94aa0ffb9a9760 100644
+--- a/arch/arm64/boot/dts/exynos/google/gs101.dtsi
++++ b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+@@ -155,6 +155,7 @@ ananke_cpu_sleep: cpu-ananke-sleep {
+ 				idle-state-name = "c2";
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x0010000>;
++				local-timer-stop;
+ 				entry-latency-us = <70>;
+ 				exit-latency-us = <160>;
+ 				min-residency-us = <2000>;
+@@ -164,6 +165,7 @@ enyo_cpu_sleep: cpu-enyo-sleep {
+ 				idle-state-name = "c2";
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x0010000>;
++				local-timer-stop;
+ 				entry-latency-us = <150>;
+ 				exit-latency-us = <190>;
+ 				min-residency-us = <2500>;
+@@ -173,6 +175,7 @@ hera_cpu_sleep: cpu-hera-sleep {
+ 				idle-state-name = "c2";
+ 				compatible = "arm,idle-state";
+ 				arm,psci-suspend-param = <0x0010000>;
++				local-timer-stop;
+ 				entry-latency-us = <235>;
+ 				exit-latency-us = <220>;
+ 				min-residency-us = <3500>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 21bcd82fd092f2..8287a7f66ed372 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -294,6 +294,8 @@ &usdhc3 {
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+ 	pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ 	pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++	assigned-clocks = <&clk IMX8MM_CLK_USDHC3>;
++	assigned-clock-rates = <400000000>;
+ 	bus-width = <8>;
+ 	non-removable;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index 67a99383a63247..917b7d0007a7a3 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -305,6 +305,8 @@ &usdhc3 {
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+ 	pinctrl-1 = <&pinctrl_usdhc3_100mhz>;
+ 	pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
++	assigned-clocks = <&clk IMX8MN_CLK_USDHC3>;
++	assigned-clock-rates = <400000000>;
+ 	bus-width = <8>;
+ 	non-removable;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc-dev.dts b/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc-dev.dts
+index 55b8c5c14fb4f3..d5fa9a8d414ec8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc-dev.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc-dev.dts
+@@ -102,11 +102,6 @@ &gpio1 {
+ 		    <&pinctrl_gpio13>;
+ };
+ 
+-&gpio3 {
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&pinctrl_lvds_dsi_sel>;
+-};
+-
+ &gpio4 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_gpio4>, <&pinctrl_gpio6>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc.dtsi
+index 22f6daabdb90a2..11fd5360ab9040 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-toradex-smarc.dtsi
+@@ -320,6 +320,8 @@ &gpio2 {
+ };
+ 
+ &gpio3 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&pinctrl_lvds_dsi_sel>;
+ 	gpio-line-names = "ETH_0_INT#", /* 0 */
+ 			  "SLEEP#",
+ 			  "",
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+index 568d24265ddf8e..12de7cf1e8538e 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
+@@ -301,7 +301,7 @@ &gpio2 {
+ &gpio3 {
+ 	gpio-line-names =
+ 		"", "", "", "", "", "", "m2_rst", "",
+-		"", "", "", "", "", "", "m2_gpio10", "",
++		"", "", "", "", "", "", "m2_wdis2#", "",
+ 		"", "", "", "", "", "", "", "",
+ 		"", "", "", "", "", "", "", "";
+ };
+@@ -310,7 +310,7 @@ &gpio4 {
+ 	gpio-line-names =
+ 		"", "", "m2_off#", "", "", "", "", "",
+ 		"", "", "", "", "", "", "", "",
+-		"", "", "m2_wdis#", "", "", "", "", "",
++		"", "", "m2_wdis1#", "", "", "", "", "",
+ 		"", "", "", "", "", "", "", "rs485_en";
+ };
+ 
+@@ -811,14 +811,14 @@ pinctrl_hog: hoggrp {
+ 			MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09	0x40000040 /* DIO0 */
+ 			MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11	0x40000040 /* DIO1 */
+ 			MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02	0x40000040 /* M2SKT_OFF# */
+-			MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18	0x40000150 /* M2SKT_WDIS# */
++			MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18	0x40000150 /* M2SKT_WDIS1# */
+ 			MX8MP_IOMUXC_SD1_DATA4__GPIO2_IO06	0x40000040 /* M2SKT_PIN20 */
+ 			MX8MP_IOMUXC_SD1_STROBE__GPIO2_IO11	0x40000040 /* M2SKT_PIN22 */
+ 			MX8MP_IOMUXC_SD2_CLK__GPIO2_IO13	0x40000150 /* PCIE1_WDIS# */
+ 			MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14	0x40000150 /* PCIE3_WDIS# */
+ 			MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18	0x40000150 /* PCIE2_WDIS# */
+ 			MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06	0x40000040 /* M2SKT_RST# */
+-			MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14	0x40000040 /* M2SKT_GPIO10 */
++			MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14	0x40000150 /* M2KST_WDIS2# */
+ 			MX8MP_IOMUXC_SAI3_TXD__GPIO5_IO01	0x40000104 /* UART_TERM */
+ 			MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31	0x40000104 /* UART_RS485 */
+ 			MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00	0x40000104 /* UART_HALF */
+diff --git a/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi b/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
+index 2cabdae2422739..09385b058664c3 100644
+--- a/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx93-tqma9352.dtsi
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: (GPL-2.0-or-later OR MIT)
+ /*
+- * Copyright (c) 2022 TQ-Systems GmbH <linux@ew.tq-group.com>,
++ * Copyright (c) 2022-2025 TQ-Systems GmbH <linux@ew.tq-group.com>,
+  * D-82229 Seefeld, Germany.
+  * Author: Markus Niebel
+  */
+@@ -110,11 +110,11 @@ buck1: BUCK1 {
+ 				regulator-ramp-delay = <3125>;
+ 			};
+ 
+-			/* V_DDRQ - 1.1 LPDDR4 or 0.6 LPDDR4X */
++			/* V_DDRQ - 0.6 V for LPDDR4X */
+ 			buck2: BUCK2 {
+ 				regulator-name = "BUCK2";
+ 				regulator-min-microvolt = <600000>;
+-				regulator-max-microvolt = <1100000>;
++				regulator-max-microvolt = <600000>;
+ 				regulator-boot-on;
+ 				regulator-always-on;
+ 				regulator-ramp-delay = <3125>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8976.dtsi b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+index e2ac2fd6882fcf..2a30246384700d 100644
+--- a/arch/arm64/boot/dts/qcom/msm8976.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+@@ -1331,6 +1331,7 @@ blsp1_dma: dma-controller@7884000 {
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,controlled-remotely;
+ 		};
+ 
+ 		blsp1_uart1: serial@78af000 {
+@@ -1451,6 +1452,7 @@ blsp2_dma: dma-controller@7ac4000 {
+ 			clock-names = "bam_clk";
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,controlled-remotely;
+ 		};
+ 
+ 		blsp2_uart2: serial@7af0000 {
+diff --git a/arch/arm64/boot/dts/qcom/qcs615.dtsi b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+index bb8b6c3ebd03f0..e5d118c755e640 100644
+--- a/arch/arm64/boot/dts/qcom/qcs615.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+@@ -1902,6 +1902,7 @@ replicator@604a000 {
+ 
+ 			clocks = <&aoss_qmp>;
+ 			clock-names = "apb_pclk";
++			status = "disabled";
+ 
+ 			in-ports {
+ 				port {
+@@ -2461,6 +2462,9 @@ cti@6c13000 {
+ 
+ 			clocks = <&aoss_qmp>;
+ 			clock-names = "apb_pclk";
++
++			/* Not all required clocks can be enabled from the OS */
++			status = "fail";
+ 		};
+ 
+ 		cti@6c20000 {
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 45f536633f6449..f682a53e83e5be 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -5571,8 +5571,8 @@ remoteproc_gpdsp0: remoteproc@20c00000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 768 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_gpdsp0_in 0 0>,
+-					      <&smp2p_gpdsp0_in 2 0>,
+ 					      <&smp2p_gpdsp0_in 1 0>,
++					      <&smp2p_gpdsp0_in 2 0>,
+ 					      <&smp2p_gpdsp0_in 3 0>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -5614,8 +5614,8 @@ remoteproc_gpdsp1: remoteproc@21c00000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 624 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_gpdsp1_in 0 0>,
+-					      <&smp2p_gpdsp1_in 2 0>,
+ 					      <&smp2p_gpdsp1_in 1 0>,
++					      <&smp2p_gpdsp1_in 2 0>,
+ 					      <&smp2p_gpdsp1_in 3 0>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -5755,8 +5755,8 @@ remoteproc_cdsp0: remoteproc@26300000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp0_in 0 IRQ_TYPE_EDGE_RISING>,
+-					      <&smp2p_cdsp0_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp0_in 1 IRQ_TYPE_EDGE_RISING>,
++					      <&smp2p_cdsp0_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp0_in 3 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -5887,8 +5887,8 @@ remoteproc_cdsp1: remoteproc@2a300000 {
+ 
+ 			interrupts-extended = <&intc GIC_SPI 798 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp1_in 0 IRQ_TYPE_EDGE_RISING>,
+-					      <&smp2p_cdsp1_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp1_in 1 IRQ_TYPE_EDGE_RISING>,
++					      <&smp2p_cdsp1_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_cdsp1_in 3 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "wdog", "fatal", "ready",
+ 					  "handover", "stop-ack";
+@@ -6043,8 +6043,8 @@ remoteproc_adsp: remoteproc@30000000 {
+ 
+ 			interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+-					      <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++					      <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+ 					      <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "wdog", "fatal", "ready", "handover",
+ 					  "stop-ack";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 01e727b021ec58..3afb69921be363 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -3526,18 +3526,18 @@ spmi_bus: spmi@c440000 {
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		sram@146aa000 {
++		sram@14680000 {
+ 			compatible = "qcom,sc7180-imem", "syscon", "simple-mfd";
+-			reg = <0 0x146aa000 0 0x2000>;
++			reg = <0 0x14680000 0 0x2e000>;
+ 
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			ranges = <0 0 0x146aa000 0x2000>;
++			ranges = <0 0 0x14680000 0x2e000>;
+ 
+-			pil-reloc@94c {
++			pil-reloc@2a94c {
+ 				compatible = "qcom,pil-reloc-info";
+-				reg = <0x94c 0xc8>;
++				reg = <0x2a94c 0xc8>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 3bc8471c658bda..6ee97cfecc705c 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -5081,18 +5081,18 @@ spmi_bus: spmi@c440000 {
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		sram@146bf000 {
++		sram@14680000 {
+ 			compatible = "qcom,sdm845-imem", "syscon", "simple-mfd";
+-			reg = <0 0x146bf000 0 0x1000>;
++			reg = <0 0x14680000 0 0x40000>;
+ 
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 
+-			ranges = <0 0 0x146bf000 0x1000>;
++			ranges = <0 0 0x14680000 0x40000>;
+ 
+-			pil-reloc@94c {
++			pil-reloc@3f94c {
+ 				compatible = "qcom,pil-reloc-info";
+-				reg = <0x94c 0xc8>;
++				reg = <0x3f94c 0xc8>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index a8eb4c5fe99fe6..5edcfb83c61a95 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -8548,7 +8548,7 @@ timer {
+ 			     <GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+-	thermal-zones {
++	thermal_zones: thermal-zones {
+ 		aoss0-thermal {
+ 			thermal-sensors = <&tsens0 0>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/x1p42100.dtsi b/arch/arm64/boot/dts/qcom/x1p42100.dtsi
+index 27f479010bc330..9af9e707f982fe 100644
+--- a/arch/arm64/boot/dts/qcom/x1p42100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1p42100.dtsi
+@@ -18,6 +18,7 @@
+ /delete-node/ &cpu_pd10;
+ /delete-node/ &cpu_pd11;
+ /delete-node/ &pcie3_phy;
++/delete-node/ &thermal_zones;
+ 
+ &gcc {
+ 	compatible = "qcom,x1p42100-gcc", "qcom,x1e80100-gcc";
+@@ -79,3 +80,558 @@ pcie3_phy: phy@1bd4000 {
+ 		status = "disabled";
+ 	};
+ };
++
++/* While physically present, this controller is left unconfigured and unused */
++&tsens3 {
++	status = "disabled";
++};
++
++/ {
++	thermal-zones {
++		aoss0-thermal {
++			thermal-sensors = <&tsens0 0>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-0-top-thermal {
++			thermal-sensors = <&tsens0 1>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-0-btm-thermal {
++			thermal-sensors = <&tsens0 2>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-1-top-thermal {
++			thermal-sensors = <&tsens0 3>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-1-btm-thermal {
++			thermal-sensors = <&tsens0 4>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-2-top-thermal {
++			thermal-sensors = <&tsens0 5>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-2-btm-thermal {
++			thermal-sensors = <&tsens0 6>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-3-top-thermal {
++			thermal-sensors = <&tsens0 7>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu0-3-btm-thermal {
++			thermal-sensors = <&tsens0 8>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpuss0-top-thermal {
++			thermal-sensors = <&tsens0 9>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpuss0-btm-thermal {
++			thermal-sensors = <&tsens0 10>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		mem-thermal {
++			thermal-sensors = <&tsens0 11>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <0>;
++					type = "critical";
++				};
++			};
++		};
++
++		video-thermal {
++			thermal-sensors = <&tsens0 12>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		aoss1-thermal {
++			thermal-sensors = <&tsens1 0>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-0-top-thermal {
++			thermal-sensors = <&tsens1 1>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-0-btm-thermal {
++			thermal-sensors = <&tsens1 2>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-1-top-thermal {
++			thermal-sensors = <&tsens1 3>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-1-btm-thermal {
++			thermal-sensors = <&tsens1 4>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-2-top-thermal {
++			thermal-sensors = <&tsens1 5>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-2-btm-thermal {
++			thermal-sensors = <&tsens1 6>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-3-top-thermal {
++			thermal-sensors = <&tsens1 7>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpu1-3-btm-thermal {
++			thermal-sensors = <&tsens1 8>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpuss1-top-thermal {
++			thermal-sensors = <&tsens1 9>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		cpuss1-btm-thermal {
++			thermal-sensors = <&tsens1 10>;
++
++			trips {
++				trip-point0 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		aoss2-thermal {
++			thermal-sensors = <&tsens2 0>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		nsp0-thermal {
++			thermal-sensors = <&tsens2 1>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		nsp1-thermal {
++			thermal-sensors = <&tsens2 2>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		nsp2-thermal {
++			thermal-sensors = <&tsens2 3>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		nsp3-thermal {
++			thermal-sensors = <&tsens2 4>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		gpuss-0-thermal {
++			polling-delay-passive = <200>;
++
++			thermal-sensors = <&tsens2 5>;
++
++			cooling-maps {
++				map0 {
++					trip = <&gpuss0_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++				};
++			};
++
++			trips {
++				gpuss0_alert0: trip-point0 {
++					temperature = <95000>;
++					hysteresis = <1000>;
++					type = "passive";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		gpuss-1-thermal {
++			polling-delay-passive = <200>;
++
++			thermal-sensors = <&tsens2 6>;
++
++			cooling-maps {
++				map0 {
++					trip = <&gpuss1_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++				};
++			};
++
++			trips {
++				gpuss1_alert0: trip-point0 {
++					temperature = <95000>;
++					hysteresis = <1000>;
++					type = "passive";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		gpuss-2-thermal {
++			polling-delay-passive = <200>;
++
++			thermal-sensors = <&tsens2 7>;
++
++			cooling-maps {
++				map0 {
++					trip = <&gpuss2_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++				};
++			};
++
++			trips {
++				gpuss2_alert0: trip-point0 {
++					temperature = <95000>;
++					hysteresis = <1000>;
++					type = "passive";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		gpuss-3-thermal {
++			polling-delay-passive = <200>;
++
++			thermal-sensors = <&tsens2 8>;
++
++			cooling-maps {
++				map0 {
++					trip = <&gpuss3_alert0>;
++					cooling-device = <&gpu THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++				};
++			};
++
++			trips {
++				gpuss3_alert0: trip-point0 {
++					temperature = <95000>;
++					hysteresis = <1000>;
++					type = "passive";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		camera0-thermal {
++			thermal-sensors = <&tsens2 9>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++
++		camera1-thermal {
++			thermal-sensors = <&tsens2 10>;
++
++			trips {
++				trip-point0 {
++					temperature = <90000>;
++					hysteresis = <2000>;
++					type = "hot";
++				};
++
++				trip-point1 {
++					temperature = <115000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
++			};
++		};
++	};
++};
+diff --git a/arch/arm64/boot/dts/renesas/Makefile b/arch/arm64/boot/dts/renesas/Makefile
+index aa7f996c0546c8..f9f70f181d105d 100644
+--- a/arch/arm64/boot/dts/renesas/Makefile
++++ b/arch/arm64/boot/dts/renesas/Makefile
+@@ -96,6 +96,7 @@ dtb-$(CONFIG_ARCH_R8A779G0) += r8a779g2-white-hawk-single-ard-audio-da7212.dtb
+ 
+ DTC_FLAGS_r8a779g3-sparrow-hawk += -Wno-spi_bus_bridge
+ dtb-$(CONFIG_ARCH_R8A779G0) += r8a779g3-sparrow-hawk.dtb
++dtb-$(CONFIG_ARCH_R8A779G0) += r8a779g3-sparrow-hawk-fan-pwm.dtbo
+ r8a779g3-sparrow-hawk-fan-pwm-dtbs := r8a779g3-sparrow-hawk.dtb r8a779g3-sparrow-hawk-fan-pwm.dtbo
+ dtb-$(CONFIG_ARCH_R8A779G0) += r8a779g3-sparrow-hawk-fan-pwm.dtb
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/px30-evb.dts b/arch/arm64/boot/dts/rockchip/px30-evb.dts
+index d93aaac7a42f15..bfd724b73c9a76 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-evb.dts
++++ b/arch/arm64/boot/dts/rockchip/px30-evb.dts
+@@ -483,8 +483,7 @@ &isp {
+ 
+ 	ports {
+ 		port@0 {
+-			mipi_in_ucam: endpoint@0 {
+-				reg = <0>;
++			mipi_in_ucam: endpoint {
+ 				data-lanes = <1 2>;
+ 				remote-endpoint = <&ucam_out>;
+ 			};
+diff --git a/arch/arm64/boot/dts/rockchip/px30-pp1516.dtsi b/arch/arm64/boot/dts/rockchip/px30-pp1516.dtsi
+index 3f9a133d7373a1..b4bd4e34747ca0 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-pp1516.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30-pp1516.dtsi
+@@ -444,8 +444,7 @@ &isp {
+ 
+ 	ports {
+ 		port@0 {
+-			mipi_in_ucam: endpoint@0 {
+-				reg = <0>;
++			mipi_in_ucam: endpoint {
+ 				data-lanes = <1 2>;
+ 				remote-endpoint = <&ucam_out>;
+ 			};
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index feabdadfa440f9..8220c875415f52 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -1271,8 +1271,6 @@ ports {
+ 
+ 			port@0 {
+ 				reg = <0>;
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi b/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi
+index ea051362fb2651..59b75c91bbb7f0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3528-pinctrl.dtsi
+@@ -98,42 +98,42 @@ eth_pins: eth-pins {
+ 
+ 	fephy {
+ 		/omit-if-no-ref/
+-		fephym0_led_dpx: fephym0-led_dpx {
++		fephym0_led_dpx: fephym0-led-dpx {
+ 			rockchip,pins =
+ 				/* fephy_led_dpx_m0 */
+ 				<4 RK_PB5 2 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym0_led_link: fephym0-led_link {
++		fephym0_led_link: fephym0-led-link {
+ 			rockchip,pins =
+ 				/* fephy_led_link_m0 */
+ 				<4 RK_PC0 2 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym0_led_spd: fephym0-led_spd {
++		fephym0_led_spd: fephym0-led-spd {
+ 			rockchip,pins =
+ 				/* fephy_led_spd_m0 */
+ 				<4 RK_PB7 2 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym1_led_dpx: fephym1-led_dpx {
++		fephym1_led_dpx: fephym1-led-dpx {
+ 			rockchip,pins =
+ 				/* fephy_led_dpx_m1 */
+ 				<2 RK_PA4 5 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym1_led_link: fephym1-led_link {
++		fephym1_led_link: fephym1-led-link {
+ 			rockchip,pins =
+ 				/* fephy_led_link_m1 */
+ 				<2 RK_PA6 5 &pcfg_pull_none>;
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		fephym1_led_spd: fephym1-led_spd {
++		fephym1_led_spd: fephym1-led-spd {
+ 			rockchip,pins =
+ 				/* fephy_led_spd_m1 */
+ 				<2 RK_PA5 5 &pcfg_pull_none>;
+@@ -779,7 +779,7 @@ rgmii_miim: rgmii-miim {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_rx_bus2: rgmii-rx_bus2 {
++		rgmii_rx_bus2: rgmii-rx-bus2 {
+ 			rockchip,pins =
+ 				/* rgmii_rxd0 */
+ 				<3 RK_PA3 2 &pcfg_pull_none>,
+@@ -790,7 +790,7 @@ rgmii_rx_bus2: rgmii-rx_bus2 {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_tx_bus2: rgmii-tx_bus2 {
++		rgmii_tx_bus2: rgmii-tx-bus2 {
+ 			rockchip,pins =
+ 				/* rgmii_txd0 */
+ 				<3 RK_PA1 2 &pcfg_pull_none_drv_level_2>,
+@@ -801,7 +801,7 @@ rgmii_tx_bus2: rgmii-tx_bus2 {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_rgmii_clk: rgmii-rgmii_clk {
++		rgmii_rgmii_clk: rgmii-rgmii-clk {
+ 			rockchip,pins =
+ 				/* rgmii_rxclk */
+ 				<3 RK_PA5 2 &pcfg_pull_none>,
+@@ -810,7 +810,7 @@ rgmii_rgmii_clk: rgmii-rgmii_clk {
+ 		};
+ 
+ 		/omit-if-no-ref/
+-		rgmii_rgmii_bus: rgmii-rgmii_bus {
++		rgmii_rgmii_bus: rgmii-rgmii-bus {
+ 			rockchip,pins =
+ 				/* rgmii_rxd2 */
+ 				<3 RK_PA7 2 &pcfg_pull_none>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts b/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts
+index 9f6ccd9dd1f7aa..ea722be2acd314 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3528-radxa-e20c.dts
+@@ -278,6 +278,7 @@ &saradc {
+ &sdhci {
+ 	bus-width = <8>;
+ 	cap-mmc-highspeed;
++	mmc-hs200-1_8v;
+ 	no-sd;
+ 	no-sdio;
+ 	non-removable;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528.dtsi b/arch/arm64/boot/dts/rockchip/rk3528.dtsi
+index d1c72b52aa4e66..7f78409cb558c4 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3528.dtsi
+@@ -445,7 +445,7 @@ uart0: serial@ff9f0000 {
+ 			clocks = <&cru SCLK_UART0>, <&cru PCLK_UART0>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 8>, <&dmac 9>;
++			dmas = <&dmac 9>, <&dmac 8>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -457,7 +457,7 @@ uart1: serial@ff9f8000 {
+ 			clocks = <&cru SCLK_UART1>, <&cru PCLK_UART1>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 10>, <&dmac 11>;
++			dmas = <&dmac 11>, <&dmac 10>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -469,7 +469,7 @@ uart2: serial@ffa00000 {
+ 			clocks = <&cru SCLK_UART2>, <&cru PCLK_UART2>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 12>, <&dmac 13>;
++			dmas = <&dmac 13>, <&dmac 12>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -481,7 +481,7 @@ uart3: serial@ffa08000 {
+ 			clocks = <&cru SCLK_UART3>, <&cru PCLK_UART3>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 14>, <&dmac 15>;
++			dmas = <&dmac 15>, <&dmac 14>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -493,7 +493,7 @@ uart4: serial@ffa10000 {
+ 			clocks = <&cru SCLK_UART4>, <&cru PCLK_UART4>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 16>, <&dmac 17>;
++			dmas = <&dmac 17>, <&dmac 16>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -505,7 +505,7 @@ uart5: serial@ffa18000 {
+ 			clocks = <&cru SCLK_UART5>, <&cru PCLK_UART5>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 18>, <&dmac 19>;
++			dmas = <&dmac 19>, <&dmac 18>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -517,7 +517,7 @@ uart6: serial@ffa20000 {
+ 			clocks = <&cru SCLK_UART6>, <&cru PCLK_UART6>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 20>, <&dmac 21>;
++			dmas = <&dmac 21>, <&dmac 20>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+@@ -529,7 +529,7 @@ uart7: serial@ffa28000 {
+ 			clocks = <&cru SCLK_UART7>, <&cru PCLK_UART7>;
+ 			clock-names = "baudclk", "apb_pclk";
+ 			interrupts = <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dmac 22>, <&dmac 23>;
++			dmas = <&dmac 23>, <&dmac 22>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts b/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts
+index 6756403111e704..0a93853cdf43c5 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3576-rock-4d.dts
+@@ -641,14 +641,16 @@ hym8563: rtc@51 {
+ 
+ &mdio0 {
+ 	rgmii_phy0: ethernet-phy@1 {
+-		compatible = "ethernet-phy-ieee802.3-c22";
++		compatible = "ethernet-phy-id001c.c916";
+ 		reg = <0x1>;
+ 		clocks = <&cru REFCLKO25M_GMAC0_OUT>;
++		assigned-clocks = <&cru REFCLKO25M_GMAC0_OUT>;
++		assigned-clock-rates = <25000000>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&rtl8211f_rst>;
+ 		reset-assert-us = <20000>;
+ 		reset-deassert-us = <100000>;
+-		reset-gpio = <&gpio2 RK_PB5 GPIO_ACTIVE_LOW>;
++		reset-gpios = <&gpio2 RK_PB5 GPIO_ACTIVE_LOW>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/st/stm32mp251.dtsi b/arch/arm64/boot/dts/st/stm32mp251.dtsi
+index 8d87865850a7a6..74c5f85b800f93 100644
+--- a/arch/arm64/boot/dts/st/stm32mp251.dtsi
++++ b/arch/arm64/boot/dts/st/stm32mp251.dtsi
+@@ -150,7 +150,7 @@ timer {
+ 			     <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_LOW)>;
+-		always-on;
++		arm,no-tick-in-suspend;
+ 	};
+ 
+ 	soc@0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+index fa55c43ca28dc8..2e5e25a8ca868d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-j722s-common-main.dtsi
+@@ -259,7 +259,7 @@ secure_proxy_sa3: mailbox@43600000 {
+ 
+ 	main_pmx0: pinctrl@f4000 {
+ 		compatible = "pinctrl-single";
+-		reg = <0x00 0xf4000 0x00 0x2ac>;
++		reg = <0x00 0xf4000 0x00 0x2b0>;
+ 		#pinctrl-cells = <1>;
+ 		pinctrl-single,register-width = <32>;
+ 		pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62p-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62p-verdin.dtsi
+index 226398c37fa9b3..24b233de2bf4ec 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62p-verdin.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62p-verdin.dtsi
+@@ -426,14 +426,14 @@ AM62PX_IOPAD(0x00f4, PIN_INPUT, 7) /* (Y20) VOUT0_DATA15.GPIO0_60 */ /* WIFI_SPI
+ 	/* Verdin PWM_3_DSI as GPIO */
+ 	pinctrl_pwm3_dsi_gpio: main-gpio1-16-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62PX_IOPAD(0x01b8, PIN_OUTPUT, 7) /* (E20) SPI0_CS1.GPIO1_16 */ /* SODIMM 19 */
++			AM62PX_IOPAD(0x01b8, PIN_INPUT, 7) /* (E20) SPI0_CS1.GPIO1_16 */ /* SODIMM 19 */
+ 		>;
+ 	};
+ 
+ 	/* Verdin SD_1_CD# */
+ 	pinctrl_sd1_cd: main-gpio1-48-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62PX_IOPAD(0x0240, PIN_INPUT, 7) /* (D23) MMC1_SDCD.GPIO1_48 */ /* SODIMM 84 */
++			AM62PX_IOPAD(0x0240, PIN_INPUT_PULLUP, 7) /* (D23) MMC1_SDCD.GPIO1_48 */ /* SODIMM 84 */
+ 		>;
+ 	};
+ 
+@@ -717,8 +717,8 @@ AM62PX_MCU_IOPAD(0x0010, PIN_INPUT, 7) /* (D10) MCU_SPI0_D1.MCU_GPIO0_4 */ /* SO
+ 	/* Verdin I2C_3_HDMI */
+ 	pinctrl_mcu_i2c0: mcu-i2c0-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62PX_MCU_IOPAD(0x0044, PIN_INPUT, 0) /* (E11) MCU_I2C0_SCL */ /* SODIMM 59 */
+-			AM62PX_MCU_IOPAD(0x0048, PIN_INPUT, 0) /* (D11) MCU_I2C0_SDA */ /* SODIMM 57 */
++			AM62PX_MCU_IOPAD(0x0044, PIN_INPUT_PULLUP, 0) /* (E11) MCU_I2C0_SCL */ /* SODIMM 59 */
++			AM62PX_MCU_IOPAD(0x0048, PIN_INPUT_PULLUP, 0) /* (D11) MCU_I2C0_SDA */ /* SODIMM 57 */
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts b/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
+index f63c101b7d61a1..129524eb5b9123 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-rdk.dts
+@@ -322,6 +322,8 @@ AM64X_IOPAD(0x0040, PIN_OUTPUT, 7)	/* (U21) GPMC0_AD1.GPIO0_16 */
+ &icssg0_mdio {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&icssg0_mdio_pins_default &clkout0_pins_default>;
++	assigned-clocks = <&k3_clks 157 123>;
++	assigned-clock-parents = <&k3_clks 157 125>;
+ 	status = "okay";
+ 
+ 	icssg0_phy1: ethernet-phy@1 {
+diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h
+index f50660603ecf5d..5bc432234d3aba 100644
+--- a/arch/arm64/include/asm/gcs.h
++++ b/arch/arm64/include/asm/gcs.h
+@@ -58,7 +58,7 @@ static inline u64 gcsss2(void)
+ 
+ static inline bool task_gcs_el0_enabled(struct task_struct *task)
+ {
+-	return current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
++	return task->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
+ }
+ 
+ void gcs_set_el0_mode(struct task_struct *task);
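
The gcs.h hunk above fixes a parameter-shadowing bug: the helper took a task argument but queried current, so asking about another task's GCS state silently answered for the caller. A minimal user-space sketch of the same bug pattern; the names here are illustrative, not the kernel's:

    #include <stdbool.h>
    #include <stdio.h>

    #define PR_SHADOW_STACK_ENABLE 0x1

    struct task { unsigned long gcs_el0_mode; };

    /* Stand-in for the kernel's per-CPU "current" pointer. */
    static struct task *current;

    /* Buggy form: ignores its argument, reports the caller's state. */
    static bool gcs_enabled_buggy(struct task *task)
    {
        return current->gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
    }

    /* Fixed form: reports the state of the task actually asked about. */
    static bool gcs_enabled_fixed(struct task *task)
    {
        return task->gcs_el0_mode & PR_SHADOW_STACK_ENABLE;
    }

    int main(void)
    {
        struct task a = { .gcs_el0_mode = PR_SHADOW_STACK_ENABLE };
        struct task b = { .gcs_el0_mode = 0 };

        current = &a;   /* the caller has GCS enabled ...          */
                        /* ... but the question is about task b    */
        printf("buggy: %d fixed: %d\n",
               gcs_enabled_buggy(&b), gcs_enabled_fixed(&b));
        return 0;
    }

With the caller enabled and the queried task disabled, the buggy form prints 1 where the fixed form prints 0.
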
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 3e41a880b06239..cb7be34f69292b 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -1149,6 +1149,8 @@ static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
+ 	 * System registers listed in the switch are not saved on every
+ 	 * exit from the guest but are only saved on vcpu_put.
+ 	 *
++	 * SYSREGS_ON_CPU *MUST* be checked before using this helper.
++	 *
+ 	 * Note that MPIDR_EL1 for the guest is set by KVM via VMPIDR_EL2 but
+ 	 * should never be listed below, because the guest cannot modify its
+ 	 * own MPIDR_EL1 and MPIDR_EL1 is accessed for VCPU A from VCPU B's
+@@ -1200,6 +1202,8 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
+ 	 * System registers listed in the switch are not restored on every
+ 	 * entry to the guest but are only restored on vcpu_load.
+ 	 *
++	 * SYSREGS_ON_CPU *MUST* be checked before using this helper.
++	 *
+ 	 * Note that MPIDR_EL1 for the guest is set by KVM via VMPIDR_EL2 but
+ 	 * should never be listed below, because the MPIDR should only be set
+ 	 * once, before running the VCPU, and never changed later.
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index a2faf0049dab1a..76f32e424065e5 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -80,7 +80,7 @@ obj-y					+= head.o
+ always-$(KBUILD_BUILTIN)		+= vmlinux.lds
+ 
+ ifeq ($(CONFIG_DEBUG_EFI),y)
+-AFLAGS_head.o += -DVMLINUX_PATH="\"$(realpath $(objtree)/vmlinux)\""
++AFLAGS_head.o += -DVMLINUX_PATH="\"$(abspath vmlinux)\""
+ endif
+ 
+ # for cleaning
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 08b7042a2e2d47..3e1baff5e88d92 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -307,13 +307,13 @@ static int copy_thread_gcs(struct task_struct *p,
+ 	p->thread.gcs_base = 0;
+ 	p->thread.gcs_size = 0;
+ 
++	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
++	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
++
+ 	gcs = gcs_alloc_thread_stack(p, args);
+ 	if (IS_ERR_VALUE(gcs))
+ 		return PTR_ERR((void *)gcs);
+ 
+-	p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
+-	p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
+-
+ 	return 0;
+ }
+ 
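
The copy_thread_gcs() reorder matters because the allocator appears to consult the child's gcs_el0_mode when deciding whether a shadow stack is needed, so the mode must be inherited before the allocation call, not after. A user-space sketch of the initialize-before-use ordering, under that assumption:

    #include <stdio.h>
    #include <stdlib.h>

    struct thread { unsigned long mode; void *stack; };

    /* Allocates a shadow stack only if the task's mode asks for one,
     * so the mode must already be filled in when this runs. */
    static int alloc_shadow_stack(struct thread *t)
    {
        if (!t->mode)
            return 0;               /* nothing to do */
        t->stack = malloc(4096);
        return t->stack ? 0 : -1;
    }

    static int copy_thread(struct thread *child, const struct thread *parent)
    {
        child->stack = NULL;

        /* Inherit the parent's configuration first ... */
        child->mode = parent->mode;

        /* ... then let the allocator act on it. In the opposite
         * order the allocator would always see mode == 0 here. */
        return alloc_shadow_stack(child);
    }

    int main(void)
    {
        struct thread parent = { .mode = 1 }, child;

        copy_thread(&child, &parent);
        printf("child stack %s\n", child.stack ? "allocated" : "missing");
        free(child.stack);
        return 0;
    }
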
+diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
+index 6a2a899a344e64..9f5d837cc03f01 100644
+--- a/arch/arm64/kvm/hyp/exception.c
++++ b/arch/arm64/kvm/hyp/exception.c
+@@ -26,7 +26,8 @@ static inline u64 __vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
+ 
+ 	if (unlikely(vcpu_has_nv(vcpu)))
+ 		return vcpu_read_sys_reg(vcpu, reg);
+-	else if (__vcpu_read_sys_reg_from_cpu(reg, &val))
++	else if (vcpu_get_flag(vcpu, SYSREGS_ON_CPU) &&
++		 __vcpu_read_sys_reg_from_cpu(reg, &val))
+ 		return val;
+ 
+ 	return __vcpu_sys_reg(vcpu, reg);
+@@ -36,7 +37,8 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+ {
+ 	if (unlikely(vcpu_has_nv(vcpu)))
+ 		vcpu_write_sys_reg(vcpu, val, reg);
+-	else if (!__vcpu_write_sys_reg_to_cpu(val, reg))
++	else if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU) ||
++		 !__vcpu_write_sys_reg_to_cpu(val, reg))
+ 		__vcpu_assign_sys_reg(vcpu, reg, val);
+ }
+ 
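
The kvm_host.h comments and the exception.c hunk above share one rule: the per-CPU register copy is only authoritative while the SYSREGS_ON_CPU flag is set, so the fast path must be gated on the flag with a fallback to the in-memory copy. A generic sketch of that guard-the-cache pattern, assuming nothing about KVM internals:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct vcpu {
        bool regs_on_cpu;    /* is the hardware copy currently valid? */
        uint64_t hw_reg;     /* stand-in for the live CPU register    */
        uint64_t mem_reg;    /* in-memory shadow copy                 */
    };

    static uint64_t read_reg(const struct vcpu *v)
    {
        /* The fast path is only legal while the hardware copy is
         * live; otherwise the memory copy is authoritative. */
        if (v->regs_on_cpu)
            return v->hw_reg;
        return v->mem_reg;
    }

    static void write_reg(struct vcpu *v, uint64_t val)
    {
        if (v->regs_on_cpu)
            v->hw_reg = val;
        else
            v->mem_reg = val;
    }

    int main(void)
    {
        struct vcpu v = { .regs_on_cpu = false, .hw_reg = 1, .mem_reg = 2 };

        write_reg(&v, 42);   /* lands in mem_reg, not the stale hw_reg */
        printf("%llu\n", (unsigned long long)read_reg(&v));
        return 0;
    }
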
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index 477f1580ffeaa8..e482181c66322a 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -48,8 +48,7 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
+ 
+ static u64 __compute_hcr(struct kvm_vcpu *vcpu)
+ {
+-	u64 guest_hcr = __vcpu_sys_reg(vcpu, HCR_EL2);
+-	u64 hcr = vcpu->arch.hcr_el2;
++	u64 guest_hcr, hcr = vcpu->arch.hcr_el2;
+ 
+ 	if (!vcpu_has_nv(vcpu))
+ 		return hcr;
+@@ -68,10 +67,21 @@ static u64 __compute_hcr(struct kvm_vcpu *vcpu)
+ 		if (!vcpu_el2_e2h_is_set(vcpu))
+ 			hcr |= HCR_NV1;
+ 
++		/*
++		 * Nothing in HCR_EL2 should impact running in hypervisor
++		 * context, apart from bits we have defined as RESx (E2H,
++		 * HCD and co), or that cannot be set directly (the EXCLUDE
++		 * bits). Given that we OR the guest's view with the host's,
++		 * we can use the 0 value as the starting point, and only
++		 * use the config-driven RES1 bits.
++		 */
++		guest_hcr = kvm_vcpu_apply_reg_masks(vcpu, HCR_EL2, 0);
++
+ 		write_sysreg_s(vcpu->arch.ctxt.vncr_array, SYS_VNCR_EL2);
+ 	} else {
+ 		host_data_clear_flag(VCPU_IN_HYP_CONTEXT);
+ 
++		guest_hcr = __vcpu_sys_reg(vcpu, HCR_EL2);
+ 		if (guest_hcr & HCR_NV) {
+ 			u64 va = __fix_to_virt(vncr_fixmap(smp_processor_id()));
+ 
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index da8b89dd291053..58f838b310bc50 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -412,6 +412,7 @@ static void push_callee_regs(struct jit_ctx *ctx)
+ 		emit(A64_PUSH(A64_R(23), A64_R(24), A64_SP), ctx);
+ 		emit(A64_PUSH(A64_R(25), A64_R(26), A64_SP), ctx);
+ 		emit(A64_PUSH(A64_R(27), A64_R(28), A64_SP), ctx);
++		ctx->fp_used = true;
+ 	} else {
+ 		find_used_callee_regs(ctx);
+ 		for (i = 0; i + 1 < ctx->nr_used_callee_reg; i += 2) {
+diff --git a/arch/m68k/Kconfig.debug b/arch/m68k/Kconfig.debug
+index 30638a6e8edcb3..d036f903864c26 100644
+--- a/arch/m68k/Kconfig.debug
++++ b/arch/m68k/Kconfig.debug
+@@ -10,7 +10,7 @@ config BOOTPARAM_STRING
+ 
+ config EARLY_PRINTK
+ 	bool "Early printk"
+-	depends on !(SUN3 || M68000 || COLDFIRE)
++	depends on MMU_MOTOROLA
+ 	help
+ 	  Write kernel log output directly to a serial port.
+ 	  Where implemented, output goes to the framebuffer as well.
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index f11ef9f1f56fcf..521cbb8a150c99 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -16,25 +16,10 @@
+ #include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+ 
+-asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+-
+-static void __ref debug_cons_write(struct console *c,
+-				   const char *s, unsigned n)
+-{
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+-      defined(CONFIG_COLDFIRE))
+-	if (MACH_IS_MVME147)
+-		mvme147_scc_write(c, s, n);
+-	else if (MACH_IS_MVME16x)
+-		mvme16x_cons_write(c, s, n);
+-	else
+-		debug_cons_nputs(s, n);
+-#endif
+-}
++asmlinkage void __init debug_cons_nputs(struct console *c, const char *s, unsigned int n);
+ 
+ static struct console early_console_instance = {
+ 	.name  = "debug",
+-	.write = debug_cons_write,
+ 	.flags = CON_PRINTBUFFER | CON_BOOT,
+ 	.index = -1
+ };
+@@ -44,6 +29,12 @@ static int __init setup_early_printk(char *buf)
+ 	if (early_console || buf)
+ 		return 0;
+ 
++	if (MACH_IS_MVME147)
++		early_console_instance.write = mvme147_scc_write;
++	else if (MACH_IS_MVME16x)
++		early_console_instance.write = mvme16x_cons_write;
++	else
++		early_console_instance.write = debug_cons_nputs;
+ 	early_console = &early_console_instance;
+ 	register_console(early_console);
+ 
+@@ -51,20 +42,15 @@ static int __init setup_early_printk(char *buf)
+ }
+ early_param("earlyprintk", setup_early_printk);
+ 
+-/*
+- * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be called
+- * after init sections are discarded (for platforms that use it).
+- */
+-#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+-      defined(CONFIG_COLDFIRE))
+-
+ static int __init unregister_early_console(void)
+ {
+-	if (!early_console || MACH_IS_MVME16x)
+-		return 0;
++	/*
++	 * debug_cons_nputs() defined in arch/m68k/kernel/head.S cannot be
++	 * called after init sections are discarded (for platforms that use it).
++	 */
++	if (early_console && early_console->write == debug_cons_nputs)
++		return unregister_console(early_console);
+ 
+-	return unregister_console(early_console);
++	return 0;
+ }
+ late_initcall(unregister_early_console);
+-
+-#endif
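
The early_printk.c rework replaces a compile-time #ifdef dispatcher with choosing the .write callback once at registration time; the unregister path can then compare the function pointer to decide whether the init-section backend was in use. A sketch of that runtime-dispatch shape, with a console-like struct standing in for the kernel's:

    #include <stdio.h>

    struct console {
        const char *name;
        void (*write)(struct console *c, const char *s, unsigned n);
    };

    static void scc_write(struct console *c, const char *s, unsigned n)
    { fwrite(s, 1, n, stdout); }    /* pretend: MVME147 SCC */

    static void rom_write(struct console *c, const char *s, unsigned n)
    { fwrite(s, 1, n, stdout); }    /* pretend: init-section ROM output */

    static int machine_is_mvme147;  /* runtime probe result */

    static struct console early = { .name = "debug" };

    static void setup_early_console(void)
    {
        /* Pick the backend once, at runtime, instead of re-testing
         * the machine type (or an #ifdef) on every write. */
        early.write = machine_is_mvme147 ? scc_write : rom_write;
    }

    static void teardown(void)
    {
        /* The init-section backend must not outlive init: compare
         * the pointer to know whether unregistration is required. */
        if (early.write == rom_write)
            early.write = NULL;     /* stand-in for unregister_console() */
    }

    int main(void)
    {
        setup_early_console();
        early.write(&early, "hello\n", 6);
        teardown();
        return 0;
    }
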
+diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
+index 852255cf60dec1..ba22bc2f3d6d86 100644
+--- a/arch/m68k/kernel/head.S
++++ b/arch/m68k/kernel/head.S
+@@ -3263,8 +3263,8 @@ func_return	putn
+  *	turns around and calls the internal routines.  This routine
+  *	is used by the boot console.
+  *
+- *	The calling parameters are:
+- *		void debug_cons_nputs(const char *str, unsigned length)
++ *	The function signature is -
++ *		void debug_cons_nputs(struct console *c, const char *s, unsigned int n)
+  *
+  *	This routine does NOT understand variable arguments only
+  *	simple strings!
+@@ -3273,8 +3273,8 @@ ENTRY(debug_cons_nputs)
+ 	moveml	%d0/%d1/%a0,%sp@-
+ 	movew	%sr,%sp@-
+ 	ori	#0x0700,%sr
+-	movel	%sp@(18),%a0		/* fetch parameter */
+-	movel	%sp@(22),%d1		/* fetch parameter */
++	movel	%sp@(22),%a0		/* char *s */
++	movel	%sp@(26),%d1		/* unsigned int n */
+ 	jra	2f
+ 1:
+ #ifdef CONSOLE_DEBUG
+diff --git a/arch/mips/alchemy/common/gpiolib.c b/arch/mips/alchemy/common/gpiolib.c
+index 411f70ceb762ae..194034eba75fd6 100644
+--- a/arch/mips/alchemy/common/gpiolib.c
++++ b/arch/mips/alchemy/common/gpiolib.c
+@@ -40,9 +40,11 @@ static int gpio2_get(struct gpio_chip *chip, unsigned offset)
+ 	return !!alchemy_gpio2_get_value(offset + ALCHEMY_GPIO2_BASE);
+ }
+ 
+-static void gpio2_set(struct gpio_chip *chip, unsigned offset, int value)
++static int gpio2_set(struct gpio_chip *chip, unsigned offset, int value)
+ {
+ 	alchemy_gpio2_set_value(offset + ALCHEMY_GPIO2_BASE, value);
++
++	return 0;
+ }
+ 
+ static int gpio2_direction_input(struct gpio_chip *chip, unsigned offset)
+@@ -68,10 +70,12 @@ static int gpio1_get(struct gpio_chip *chip, unsigned offset)
+ 	return !!alchemy_gpio1_get_value(offset + ALCHEMY_GPIO1_BASE);
+ }
+ 
+-static void gpio1_set(struct gpio_chip *chip,
++static int gpio1_set(struct gpio_chip *chip,
+ 				unsigned offset, int value)
+ {
+ 	alchemy_gpio1_set_value(offset + ALCHEMY_GPIO1_BASE, value);
++
++	return 0;
+ }
+ 
+ static int gpio1_direction_input(struct gpio_chip *chip, unsigned offset)
+@@ -97,7 +101,7 @@ struct gpio_chip alchemy_gpio_chip[] = {
+ 		.direction_input	= gpio1_direction_input,
+ 		.direction_output	= gpio1_direction_output,
+ 		.get			= gpio1_get,
+-		.set			= gpio1_set,
++		.set_rv			= gpio1_set,
+ 		.to_irq			= gpio1_to_irq,
+ 		.base			= ALCHEMY_GPIO1_BASE,
+ 		.ngpio			= ALCHEMY_GPIO1_NUM,
+@@ -107,7 +111,7 @@ struct gpio_chip alchemy_gpio_chip[] = {
+ 		.direction_input	= gpio2_direction_input,
+ 		.direction_output	= gpio2_direction_output,
+ 		.get			= gpio2_get,
+-		.set			= gpio2_set,
++		.set_rv			= gpio2_set,
+ 		.to_irq			= gpio2_to_irq,
+ 		.base			= ALCHEMY_GPIO2_BASE,
+ 		.ngpio			= ALCHEMY_GPIO2_NUM,
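
The gpiolib hunk above is part of a tree-wide migration from a void set() callback to an int-returning one (wired up via the chip's set_rv member here) so drivers can report failure to set a line. The mechanical shape of the conversion, with illustrative names:

    #include <stdio.h>

    struct gpio_chip;

    /* Old-style callback: no way to report an error. */
    typedef void (*set_fn)(struct gpio_chip *chip, unsigned offset, int value);

    /* New-style callback: 0 on success, negative errno on failure. */
    typedef int  (*set_rv_fn)(struct gpio_chip *chip, unsigned offset, int value);

    struct gpio_chip {
        set_fn    set;      /* legacy slot  */
        set_rv_fn set_rv;   /* preferred slot */
    };

    static void hw_poke(unsigned offset, int value)
    { printf("gpio %u <- %d\n", offset, value); }

    /* Converted driver method: same body, plus an explicit return. */
    static int my_set(struct gpio_chip *chip, unsigned offset, int value)
    {
        hw_poke(offset, value);
        return 0;
    }

    int main(void)
    {
        struct gpio_chip chip = { .set_rv = my_set };

        return chip.set_rv(&chip, 3, 1);
    }
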
+diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
+index 76f3b9c0a9f0ce..347126dc010dd5 100644
+--- a/arch/mips/mm/tlb-r4k.c
++++ b/arch/mips/mm/tlb-r4k.c
+@@ -508,6 +508,60 @@ static int __init set_ntlb(char *str)
+ 
+ __setup("ntlb=", set_ntlb);
+ 
++/* Initialise all TLB entries with unique values */
++static void r4k_tlb_uniquify(void)
++{
++	int entry = num_wired_entries();
++
++	htw_stop();
++	write_c0_entrylo0(0);
++	write_c0_entrylo1(0);
++
++	while (entry < current_cpu_data.tlbsize) {
++		unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
++		unsigned long asid = 0;
++		int idx;
++
++		/* Skip wired MMID to make ginvt_mmid work */
++		if (cpu_has_mmid)
++			asid = MMID_KERNEL_WIRED + 1;
++
++		/* Check for match before using UNIQUE_ENTRYHI */
++		do {
++			if (cpu_has_mmid) {
++				write_c0_memorymapid(asid);
++				write_c0_entryhi(UNIQUE_ENTRYHI(entry));
++			} else {
++				write_c0_entryhi(UNIQUE_ENTRYHI(entry) | asid);
++			}
++			mtc0_tlbw_hazard();
++			tlb_probe();
++			tlb_probe_hazard();
++			idx = read_c0_index();
++			/* No match or match is on current entry */
++			if (idx < 0 || idx == entry)
++				break;
++			/*
++			 * If we hit a match, we need to try again with
++			 * a different ASID.
++			 */
++			asid++;
++		} while (asid < asid_mask);
++
++		if (idx >= 0 && idx != entry)
++			panic("Unable to uniquify TLB entry %d", idx);
++
++		write_c0_index(entry);
++		mtc0_tlbw_hazard();
++		tlb_write_indexed();
++		entry++;
++	}
++
++	tlbw_use_hazard();
++	htw_start();
++	flush_micro_tlb();
++}
++
+ /*
+  * Configure TLB (for init or after a CPU has been powered off).
+  */
+@@ -547,7 +601,7 @@ static void r4k_tlb_configure(void)
+ 	temp_tlb_entry = current_cpu_data.tlbsize - 1;
+ 
+ 	/* From this point on the ARC firmware is dead.	 */
+-	local_flush_tlb_all();
++	r4k_tlb_uniquify();
+ 
+ 	/* Did I tell you that ARC SUCKS?  */
+ }
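
The new r4k_tlb_uniquify() writes a unique, non-matching tag into every TLB slot, but probes first so the synthetic EntryHI value cannot collide with an entry the firmware left behind, retrying with a different ASID on a hit. A simplified software model of that probe-and-retry loop (the toy TLB and values are invented for illustration):

    #include <stdio.h>

    #define TLB_SIZE  8
    #define ASID_MAX  16

    /* Toy TLB: each slot holds a (vpn, asid) tag. */
    static struct { unsigned vpn, asid; } tlb[TLB_SIZE] = {
        [2] = { .vpn = 100, .asid = 0 },    /* stale firmware entry */
    };

    /* Probe: return the matching index, or -1 for no match. */
    static int tlb_probe(unsigned vpn, unsigned asid)
    {
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].vpn == vpn && tlb[i].asid == asid)
                return i;
        return -1;
    }

    int main(void)
    {
        for (int entry = 0; entry < TLB_SIZE; entry++) {
            unsigned vpn = 100 + entry;  /* stand-in for UNIQUE_ENTRYHI */
            unsigned asid = 0;
            int idx;

            /* Bump the ASID until the tag matches nothing, or only
             * matches the slot we are about to overwrite anyway. */
            do {
                idx = tlb_probe(vpn, asid);
                if (idx < 0 || idx == entry)
                    break;
                asid++;
            } while (asid < ASID_MAX);

            if (idx >= 0 && idx != entry) {
                fprintf(stderr, "cannot uniquify entry %d\n", entry);
                return 1;
            }
            tlb[entry].vpn = vpn;
            tlb[entry].asid = asid;
        }
        puts("all entries unique");
        return 0;
    }
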
+diff --git a/arch/powerpc/configs/ppc6xx_defconfig b/arch/powerpc/configs/ppc6xx_defconfig
+index f96f8ed9856cae..bb359643ddc118 100644
+--- a/arch/powerpc/configs/ppc6xx_defconfig
++++ b/arch/powerpc/configs/ppc6xx_defconfig
+@@ -252,7 +252,6 @@ CONFIG_NET_SCH_DSMARK=m
+ CONFIG_NET_SCH_NETEM=m
+ CONFIG_NET_SCH_INGRESS=m
+ CONFIG_NET_CLS_BASIC=m
+-CONFIG_NET_CLS_TCINDEX=m
+ CONFIG_NET_CLS_ROUTE4=m
+ CONFIG_NET_CLS_FW=m
+ CONFIG_NET_CLS_U32=m
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index ca7f7bb2b47869..2b5f3323e1072d 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -1139,6 +1139,7 @@ int eeh_unfreeze_pe(struct eeh_pe *pe)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(eeh_unfreeze_pe);
+ 
+ 
+ static struct pci_device_id eeh_reset_ids[] = {
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index 7efe04c68f0fe3..dd50de91c43834 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -257,13 +257,12 @@ static void eeh_pe_report_edev(struct eeh_dev *edev, eeh_report_fn fn,
+ 	struct pci_driver *driver;
+ 	enum pci_ers_result new_result;
+ 
+-	pci_lock_rescan_remove();
+ 	pdev = edev->pdev;
+ 	if (pdev)
+ 		get_device(&pdev->dev);
+-	pci_unlock_rescan_remove();
+ 	if (!pdev) {
+ 		eeh_edev_info(edev, "no device");
++		*result = PCI_ERS_RESULT_DISCONNECT;
+ 		return;
+ 	}
+ 	device_lock(&pdev->dev);
+@@ -304,8 +303,9 @@ static void eeh_pe_report(const char *name, struct eeh_pe *root,
+ 	struct eeh_dev *edev, *tmp;
+ 
+ 	pr_info("EEH: Beginning: '%s'\n", name);
+-	eeh_for_each_pe(root, pe) eeh_pe_for_each_dev(pe, edev, tmp)
+-		eeh_pe_report_edev(edev, fn, result);
++	eeh_for_each_pe(root, pe)
++		eeh_pe_for_each_dev(pe, edev, tmp)
++			eeh_pe_report_edev(edev, fn, result);
+ 	if (result)
+ 		pr_info("EEH: Finished:'%s' with aggregate recovery state:'%s'\n",
+ 			name, pci_ers_result_name(*result));
+@@ -383,6 +383,8 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ 	if (!edev)
+ 		return;
+ 
++	pci_lock_rescan_remove();
++
+ 	/*
+ 	 * The content in the config space isn't saved because
+ 	 * the blocked config space on some adapters. We have
+@@ -393,14 +395,19 @@ static void eeh_dev_restore_state(struct eeh_dev *edev, void *userdata)
+ 		if (list_is_last(&edev->entry, &edev->pe->edevs))
+ 			eeh_pe_restore_bars(edev->pe);
+ 
++		pci_unlock_rescan_remove();
+ 		return;
+ 	}
+ 
+ 	pdev = eeh_dev_to_pci_dev(edev);
+-	if (!pdev)
++	if (!pdev) {
++		pci_unlock_rescan_remove();
+ 		return;
++	}
+ 
+ 	pci_restore_state(pdev);
++
++	pci_unlock_rescan_remove();
+ }
+ 
+ /**
+@@ -647,9 +654,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	if (any_passed || driver_eeh_aware || (pe->type & EEH_PE_VF)) {
+ 		eeh_pe_dev_traverse(pe, eeh_rmv_device, rmv_data);
+ 	} else {
+-		pci_lock_rescan_remove();
+ 		pci_hp_remove_devices(bus);
+-		pci_unlock_rescan_remove();
+ 	}
+ 
+ 	/*
+@@ -665,8 +670,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	if (rc)
+ 		return rc;
+ 
+-	pci_lock_rescan_remove();
+-
+ 	/* Restore PE */
+ 	eeh_ops->configure_bridge(pe);
+ 	eeh_pe_restore_bars(pe);
+@@ -674,7 +677,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	/* Clear frozen state */
+ 	rc = eeh_clear_pe_frozen_state(pe, false);
+ 	if (rc) {
+-		pci_unlock_rescan_remove();
+ 		return rc;
+ 	}
+ 
+@@ -709,7 +711,6 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
+ 	pe->tstamp = tstamp;
+ 	pe->freeze_count = cnt;
+ 
+-	pci_unlock_rescan_remove();
+ 	return 0;
+ }
+ 
+@@ -843,10 +844,13 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 		{LIST_HEAD_INIT(rmv_data.removed_vf_list), 0};
+ 	int devices = 0;
+ 
++	pci_lock_rescan_remove();
++
+ 	bus = eeh_pe_bus_get(pe);
+ 	if (!bus) {
+ 		pr_err("%s: Cannot find PCI bus for PHB#%x-PE#%x\n",
+ 			__func__, pe->phb->global_number, pe->addr);
++		pci_unlock_rescan_remove();
+ 		return;
+ 	}
+ 
+@@ -1094,10 +1098,15 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 		eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
+ 		eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
+ 
+-		pci_lock_rescan_remove();
+-		pci_hp_remove_devices(bus);
+-		pci_unlock_rescan_remove();
++		bus = eeh_pe_bus_get(pe);
++		if (bus)
++			pci_hp_remove_devices(bus);
++		else
++			pr_err("%s: PCI bus for PHB#%x-PE#%x disappeared\n",
++				__func__, pe->phb->global_number, pe->addr);
++
+ 		/* The passed PE should no longer be used */
++		pci_unlock_rescan_remove();
+ 		return;
+ 	}
+ 
+@@ -1114,6 +1123,8 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
+ 			eeh_clear_slot_attention(edev->pdev);
+ 
+ 	eeh_pe_state_clear(pe, EEH_PE_RECOVERING, true);
++
++	pci_unlock_rescan_remove();
+ }
+ 
+ /**
+@@ -1132,6 +1143,7 @@ void eeh_handle_special_event(void)
+ 	unsigned long flags;
+ 	int rc;
+ 
++	pci_lock_rescan_remove();
+ 
+ 	do {
+ 		rc = eeh_ops->next_error(&pe);
+@@ -1171,10 +1183,12 @@ void eeh_handle_special_event(void)
+ 
+ 			break;
+ 		case EEH_NEXT_ERR_NONE:
++			pci_unlock_rescan_remove();
+ 			return;
+ 		default:
+ 			pr_warn("%s: Invalid value %d from next_error()\n",
+ 				__func__, rc);
++			pci_unlock_rescan_remove();
+ 			return;
+ 		}
+ 
+@@ -1186,7 +1200,9 @@ void eeh_handle_special_event(void)
+ 		if (rc == EEH_NEXT_ERR_FROZEN_PE ||
+ 		    rc == EEH_NEXT_ERR_FENCED_PHB) {
+ 			eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
++			pci_unlock_rescan_remove();
+ 			eeh_handle_normal_event(pe);
++			pci_lock_rescan_remove();
+ 		} else {
+ 			eeh_for_each_pe(pe, tmp_pe)
+ 				eeh_pe_for_each_dev(tmp_pe, edev, tmp_edev)
+@@ -1199,7 +1215,6 @@ void eeh_handle_special_event(void)
+ 				eeh_report_failure, NULL);
+ 			eeh_set_channel_state(pe, pci_channel_io_perm_failure);
+ 
+-			pci_lock_rescan_remove();
+ 			list_for_each_entry(hose, &hose_list, list_node) {
+ 				phb_pe = eeh_phb_pe_get(hose);
+ 				if (!phb_pe ||
+@@ -1218,7 +1233,6 @@ void eeh_handle_special_event(void)
+ 				}
+ 				pci_hp_remove_devices(bus);
+ 			}
+-			pci_unlock_rescan_remove();
+ 		}
+ 
+ 		/*
+@@ -1228,4 +1242,6 @@ void eeh_handle_special_event(void)
+ 		if (rc == EEH_NEXT_ERR_DEAD_IOC)
+ 			break;
+ 	} while (rc != EEH_NEXT_ERR_NONE);
++
++	pci_unlock_rescan_remove();
+ }
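
The recurring move in the eeh_driver.c hunks is hoisting pci_lock_rescan_remove() out of the leaf helpers and into the top-level event handlers, so the bus pointer obtained at the start of recovery cannot be invalidated by a concurrent hot-remove between the old, smaller critical sections. A generic sketch of that lock-scope widening with pthreads (all names invented):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t topology_lock = PTHREAD_MUTEX_INITIALIZER;
    static int bus_present = 1;

    /* Leaf helpers now assume the caller already holds the lock. */
    static int  lookup_bus(void)      { return bus_present; }
    static void remove_devices(void)  { puts("removing devices"); }

    /* Before: each helper locked and unlocked around itself, leaving
     * windows where the bus could vanish between steps.  After: one
     * critical section covers the whole event. */
    static void handle_event(void)
    {
        pthread_mutex_lock(&topology_lock);

        if (!lookup_bus()) {
            fprintf(stderr, "bus disappeared\n");
            pthread_mutex_unlock(&topology_lock);
            return;
        }
        remove_devices();

        pthread_mutex_unlock(&topology_lock);
    }

    int main(void)
    {
        handle_event();
        return 0;
    }

Note the one deliberate exception visible above: the special-event loop drops and retakes the lock around its call to eeh_handle_normal_event(), since that nested handler now takes the same lock itself.
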
+diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
+index d283d281d28e8b..e740101fadf3b1 100644
+--- a/arch/powerpc/kernel/eeh_pe.c
++++ b/arch/powerpc/kernel/eeh_pe.c
+@@ -671,10 +671,12 @@ static void eeh_bridge_check_link(struct eeh_dev *edev)
+ 	eeh_ops->write_config(edev, cap + PCI_EXP_LNKCTL, 2, val);
+ 
+ 	/* Check link */
+-	if (!edev->pdev->link_active_reporting) {
+-		eeh_edev_dbg(edev, "No link reporting capability\n");
+-		msleep(1000);
+-		return;
++	if (edev->pdev) {
++		if (!edev->pdev->link_active_reporting) {
++			eeh_edev_dbg(edev, "No link reporting capability\n");
++			msleep(1000);
++			return;
++		}
+ 	}
+ 
+ 	/* Wait the link is up until timeout (5s) */
+diff --git a/arch/powerpc/kernel/pci-hotplug.c b/arch/powerpc/kernel/pci-hotplug.c
+index 9ea74973d78d5a..6f444d0822d820 100644
+--- a/arch/powerpc/kernel/pci-hotplug.c
++++ b/arch/powerpc/kernel/pci-hotplug.c
+@@ -141,6 +141,9 @@ void pci_hp_add_devices(struct pci_bus *bus)
+ 	struct pci_controller *phb;
+ 	struct device_node *dn = pci_bus_to_OF_node(bus);
+ 
++	if (!dn)
++		return;
++
+ 	phb = pci_bus_to_host(bus);
+ 
+ 	mode = PCI_PROBE_NORMAL;
+diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
+index 213aa26dc8b337..979487da65223d 100644
+--- a/arch/powerpc/platforms/pseries/dlpar.c
++++ b/arch/powerpc/platforms/pseries/dlpar.c
+@@ -404,6 +404,45 @@ get_device_node_with_drc_info(u32 index)
+ 	return NULL;
+ }
+ 
++static struct device_node *
++get_device_node_with_drc_indexes(u32 drc_index)
++{
++	struct device_node *np = NULL;
++	u32 nr_indexes, index;
++	int i, rc;
++
++	for_each_node_with_property(np, "ibm,drc-indexes") {
++		/*
++		 * First element in the array is the total number of
++		 * DRC indexes returned.
++		 */
++		rc = of_property_read_u32_index(np, "ibm,drc-indexes",
++				0, &nr_indexes);
++		if (rc)
++			goto out_put_np;
++
++		/*
++		 * Retrieve DRC index from the list and return the
++		 * device node if matched with the specified index.
++		 */
++		for (i = 0; i < nr_indexes; i++) {
++			rc = of_property_read_u32_index(np, "ibm,drc-indexes",
++							i+1, &index);
++			if (rc)
++				goto out_put_np;
++
++			if (drc_index == index)
++				return np;
++		}
++	}
++
++	return NULL;
++
++out_put_np:
++	of_node_put(np);
++	return NULL;
++}
++
+ static int dlpar_hp_dt_add(u32 index)
+ {
+ 	struct device_node *np, *nodes;
+@@ -423,10 +462,19 @@ static int dlpar_hp_dt_add(u32 index)
+ 		goto out;
+ 	}
+ 
++	/*
++	 * Recent FW provides ibm,drc-info property. So search
++	 * for the user specified DRC index from ibm,drc-info
++	 * property. If this property is not available, search
++	 * in the indexes array from ibm,drc-indexes property.
++	 */
+ 	np = get_device_node_with_drc_info(index);
+ 
+-	if (!np)
+-		return -EIO;
++	if (!np) {
++		np = get_device_node_with_drc_indexes(index);
++		if (!np)
++			return -EIO;
++	}
+ 
+ 	/* Next, configure the connector. */
+ 	nodes = dlpar_configure_connector(cpu_to_be32(index), np);
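
The new fallback parses ibm,drc-indexes, a u32 array whose first cell is the number of entries that follow, as the hunk's own comments describe. A user-space sketch of walking such a count-prefixed array (property contents here are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Count-prefixed array: element 0 says how many entries follow. */
    static const uint32_t drc_indexes[] = { 3, 0x1000, 0x1001, 0x1002 };

    static int find_drc_index(uint32_t wanted)
    {
        uint32_t nr = drc_indexes[0];

        /* Entries live at positions 1..nr, mirroring the i+1 offset
         * in the kernel's of_property_read_u32_index() loop. */
        for (uint32_t i = 0; i < nr; i++)
            if (drc_indexes[i + 1] == wanted)
                return (int)i;
        return -1;
    }

    int main(void)
    {
        printf("0x1001 -> slot %d\n", find_drc_index(0x1001));
        printf("0x2000 -> slot %d\n", find_drc_index(0x2000));
        return 0;
    }
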
+diff --git a/arch/riscv/boot/dts/sophgo/sg2044-cpus.dtsi b/arch/riscv/boot/dts/sophgo/sg2044-cpus.dtsi
+index 2a4267078ce6b4..6a35ed8f253c0a 100644
+--- a/arch/riscv/boot/dts/sophgo/sg2044-cpus.dtsi
++++ b/arch/riscv/boot/dts/sophgo/sg2044-cpus.dtsi
+@@ -38,6 +38,7 @@ cpu0: cpu@0 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu0_intc: interrupt-controller {
+@@ -73,6 +74,7 @@ cpu1: cpu@1 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu1_intc: interrupt-controller {
+@@ -108,6 +110,7 @@ cpu2: cpu@2 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu2_intc: interrupt-controller {
+@@ -143,6 +146,7 @@ cpu3: cpu@3 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu3_intc: interrupt-controller {
+@@ -178,6 +182,7 @@ cpu4: cpu@4 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu4_intc: interrupt-controller {
+@@ -213,6 +218,7 @@ cpu5: cpu@5 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu5_intc: interrupt-controller {
+@@ -248,6 +254,7 @@ cpu6: cpu@6 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu6_intc: interrupt-controller {
+@@ -283,6 +290,7 @@ cpu7: cpu@7 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu7_intc: interrupt-controller {
+@@ -318,6 +326,7 @@ cpu8: cpu@8 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu8_intc: interrupt-controller {
+@@ -353,6 +362,7 @@ cpu9: cpu@9 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu9_intc: interrupt-controller {
+@@ -388,6 +398,7 @@ cpu10: cpu@10 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu10_intc: interrupt-controller {
+@@ -423,6 +434,7 @@ cpu11: cpu@11 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu11_intc: interrupt-controller {
+@@ -458,6 +470,7 @@ cpu12: cpu@12 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu12_intc: interrupt-controller {
+@@ -493,6 +506,7 @@ cpu13: cpu@13 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu13_intc: interrupt-controller {
+@@ -528,6 +542,7 @@ cpu14: cpu@14 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu14_intc: interrupt-controller {
+@@ -563,6 +578,7 @@ cpu15: cpu@15 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu15_intc: interrupt-controller {
+@@ -598,6 +614,7 @@ cpu16: cpu@16 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu16_intc: interrupt-controller {
+@@ -633,6 +650,7 @@ cpu17: cpu@17 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu17_intc: interrupt-controller {
+@@ -668,6 +686,7 @@ cpu18: cpu@18 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu18_intc: interrupt-controller {
+@@ -703,6 +722,7 @@ cpu19: cpu@19 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu19_intc: interrupt-controller {
+@@ -738,6 +758,7 @@ cpu20: cpu@20 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu20_intc: interrupt-controller {
+@@ -773,6 +794,7 @@ cpu21: cpu@21 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu21_intc: interrupt-controller {
+@@ -808,6 +830,7 @@ cpu22: cpu@22 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu22_intc: interrupt-controller {
+@@ -843,6 +866,7 @@ cpu23: cpu@23 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu23_intc: interrupt-controller {
+@@ -878,6 +902,7 @@ cpu24: cpu@24 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu24_intc: interrupt-controller {
+@@ -913,6 +938,7 @@ cpu25: cpu@25 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu25_intc: interrupt-controller {
+@@ -948,6 +974,7 @@ cpu26: cpu@26 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu26_intc: interrupt-controller {
+@@ -983,6 +1010,7 @@ cpu27: cpu@27 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu27_intc: interrupt-controller {
+@@ -1018,6 +1046,7 @@ cpu28: cpu@28 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu28_intc: interrupt-controller {
+@@ -1053,6 +1082,7 @@ cpu29: cpu@29 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu29_intc: interrupt-controller {
+@@ -1088,6 +1118,7 @@ cpu30: cpu@30 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu30_intc: interrupt-controller {
+@@ -1123,6 +1154,7 @@ cpu31: cpu@31 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu31_intc: interrupt-controller {
+@@ -1158,6 +1190,7 @@ cpu32: cpu@32 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu32_intc: interrupt-controller {
+@@ -1193,6 +1226,7 @@ cpu33: cpu@33 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu33_intc: interrupt-controller {
+@@ -1228,6 +1262,7 @@ cpu34: cpu@34 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu34_intc: interrupt-controller {
+@@ -1263,6 +1298,7 @@ cpu35: cpu@35 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu35_intc: interrupt-controller {
+@@ -1298,6 +1334,7 @@ cpu36: cpu@36 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu36_intc: interrupt-controller {
+@@ -1333,6 +1370,7 @@ cpu37: cpu@37 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu37_intc: interrupt-controller {
+@@ -1368,6 +1406,7 @@ cpu38: cpu@38 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu38_intc: interrupt-controller {
+@@ -1403,6 +1442,7 @@ cpu39: cpu@39 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu39_intc: interrupt-controller {
+@@ -1438,6 +1478,7 @@ cpu40: cpu@40 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu40_intc: interrupt-controller {
+@@ -1473,6 +1514,7 @@ cpu41: cpu@41 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu41_intc: interrupt-controller {
+@@ -1508,6 +1550,7 @@ cpu42: cpu@42 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu42_intc: interrupt-controller {
+@@ -1543,6 +1586,7 @@ cpu43: cpu@43 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu43_intc: interrupt-controller {
+@@ -1578,6 +1622,7 @@ cpu44: cpu@44 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu44_intc: interrupt-controller {
+@@ -1613,6 +1658,7 @@ cpu45: cpu@45 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu45_intc: interrupt-controller {
+@@ -1648,6 +1694,7 @@ cpu46: cpu@46 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu46_intc: interrupt-controller {
+@@ -1683,6 +1730,7 @@ cpu47: cpu@47 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu47_intc: interrupt-controller {
+@@ -1718,6 +1766,7 @@ cpu48: cpu@48 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu48_intc: interrupt-controller {
+@@ -1753,6 +1802,7 @@ cpu49: cpu@49 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu49_intc: interrupt-controller {
+@@ -1788,6 +1838,7 @@ cpu50: cpu@50 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu50_intc: interrupt-controller {
+@@ -1823,6 +1874,7 @@ cpu51: cpu@51 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu51_intc: interrupt-controller {
+@@ -1858,6 +1910,7 @@ cpu52: cpu@52 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu52_intc: interrupt-controller {
+@@ -1893,6 +1946,7 @@ cpu53: cpu@53 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu53_intc: interrupt-controller {
+@@ -1928,6 +1982,7 @@ cpu54: cpu@54 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu54_intc: interrupt-controller {
+@@ -1963,6 +2018,7 @@ cpu55: cpu@55 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu55_intc: interrupt-controller {
+@@ -1998,6 +2054,7 @@ cpu56: cpu@56 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu56_intc: interrupt-controller {
+@@ -2033,6 +2090,7 @@ cpu57: cpu@57 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu57_intc: interrupt-controller {
+@@ -2068,6 +2126,7 @@ cpu58: cpu@58 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu58_intc: interrupt-controller {
+@@ -2103,6 +2162,7 @@ cpu59: cpu@59 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu59_intc: interrupt-controller {
+@@ -2138,6 +2198,7 @@ cpu60: cpu@60 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu60_intc: interrupt-controller {
+@@ -2173,6 +2234,7 @@ cpu61: cpu@61 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu61_intc: interrupt-controller {
+@@ -2208,6 +2270,7 @@ cpu62: cpu@62 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu62_intc: interrupt-controller {
+@@ -2243,6 +2306,7 @@ cpu63: cpu@63 {
+ 					       "zvfbfmin", "zvfbfwma", "zvfh",
+ 					       "zvfhmin";
+ 			riscv,cbom-block-size = <64>;
++			riscv,cbop-block-size = <64>;
+ 			riscv,cboz-block-size = <64>;
+ 
+ 			cpu63_intc: interrupt-controller {
+diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
+index 2e1b646f0d6130..cce6a38ea54f2a 100644
+--- a/arch/riscv/kvm/vcpu_onereg.c
++++ b/arch/riscv/kvm/vcpu_onereg.c
+@@ -23,7 +23,7 @@
+ #define KVM_ISA_EXT_ARR(ext)		\
+ [KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
+ 
+-/* Mapping between KVM ISA Extension ID & Host ISA extension ID */
++/* Mapping between KVM ISA Extension ID & guest ISA extension ID */
+ static const unsigned long kvm_isa_ext_arr[] = {
+ 	/* Single letter extensions (alphabetically sorted) */
+ 	[KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
+@@ -35,7 +35,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
+ 	[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
+ 	[KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,
+ 	/* Multi letter extensions (alphabetically sorted) */
+-	[KVM_RISCV_ISA_EXT_SMNPM] = RISCV_ISA_EXT_SSNPM,
++	KVM_ISA_EXT_ARR(SMNPM),
+ 	KVM_ISA_EXT_ARR(SMSTATEEN),
+ 	KVM_ISA_EXT_ARR(SSAIA),
+ 	KVM_ISA_EXT_ARR(SSCOFPMF),
+@@ -112,6 +112,36 @@ static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
+ 	return KVM_RISCV_ISA_EXT_MAX;
+ }
+ 
++static int kvm_riscv_vcpu_isa_check_host(unsigned long kvm_ext, unsigned long *guest_ext)
++{
++	unsigned long host_ext;
++
++	if (kvm_ext >= KVM_RISCV_ISA_EXT_MAX ||
++	    kvm_ext >= ARRAY_SIZE(kvm_isa_ext_arr))
++		return -ENOENT;
++
++	*guest_ext = kvm_isa_ext_arr[kvm_ext];
++	switch (*guest_ext) {
++	case RISCV_ISA_EXT_SMNPM:
++		/*
++		 * Pointer masking effective in (H)S-mode is provided by the
++		 * Smnpm extension, so that extension is reported to the guest,
++		 * even though the CSR bits for configuring VS-mode pointer
++		 * masking on the host side are part of the Ssnpm extension.
++		 */
++		host_ext = RISCV_ISA_EXT_SSNPM;
++		break;
++	default:
++		host_ext = *guest_ext;
++		break;
++	}
++
++	if (!__riscv_isa_extension_available(NULL, host_ext))
++		return -ENOENT;
++
++	return 0;
++}
++
+ static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext)
+ {
+ 	switch (ext) {
+@@ -219,13 +249,13 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
+ 
+ void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu)
+ {
+-	unsigned long host_isa, i;
++	unsigned long guest_ext, i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) {
+-		host_isa = kvm_isa_ext_arr[i];
+-		if (__riscv_isa_extension_available(NULL, host_isa) &&
+-		    kvm_riscv_vcpu_isa_enable_allowed(i))
+-			set_bit(host_isa, vcpu->arch.isa);
++		if (kvm_riscv_vcpu_isa_check_host(i, &guest_ext))
++			continue;
++		if (kvm_riscv_vcpu_isa_enable_allowed(i))
++			set_bit(guest_ext, vcpu->arch.isa);
+ 	}
+ }
+ 
+@@ -607,18 +637,15 @@ static int riscv_vcpu_get_isa_ext_single(struct kvm_vcpu *vcpu,
+ 					 unsigned long reg_num,
+ 					 unsigned long *reg_val)
+ {
+-	unsigned long host_isa_ext;
+-
+-	if (reg_num >= KVM_RISCV_ISA_EXT_MAX ||
+-	    reg_num >= ARRAY_SIZE(kvm_isa_ext_arr))
+-		return -ENOENT;
++	unsigned long guest_ext;
++	int ret;
+ 
+-	host_isa_ext = kvm_isa_ext_arr[reg_num];
+-	if (!__riscv_isa_extension_available(NULL, host_isa_ext))
+-		return -ENOENT;
++	ret = kvm_riscv_vcpu_isa_check_host(reg_num, &guest_ext);
++	if (ret)
++		return ret;
+ 
+ 	*reg_val = 0;
+-	if (__riscv_isa_extension_available(vcpu->arch.isa, host_isa_ext))
++	if (__riscv_isa_extension_available(vcpu->arch.isa, guest_ext))
+ 		*reg_val = 1; /* Mark the given extension as available */
+ 
+ 	return 0;
+@@ -628,17 +655,14 @@ static int riscv_vcpu_set_isa_ext_single(struct kvm_vcpu *vcpu,
+ 					 unsigned long reg_num,
+ 					 unsigned long reg_val)
+ {
+-	unsigned long host_isa_ext;
+-
+-	if (reg_num >= KVM_RISCV_ISA_EXT_MAX ||
+-	    reg_num >= ARRAY_SIZE(kvm_isa_ext_arr))
+-		return -ENOENT;
++	unsigned long guest_ext;
++	int ret;
+ 
+-	host_isa_ext = kvm_isa_ext_arr[reg_num];
+-	if (!__riscv_isa_extension_available(NULL, host_isa_ext))
+-		return -ENOENT;
++	ret = kvm_riscv_vcpu_isa_check_host(reg_num, &guest_ext);
++	if (ret)
++		return ret;
+ 
+-	if (reg_val == test_bit(host_isa_ext, vcpu->arch.isa))
++	if (reg_val == test_bit(guest_ext, vcpu->arch.isa))
+ 		return 0;
+ 
+ 	if (!vcpu->arch.ran_atleast_once) {
+@@ -648,10 +672,10 @@ static int riscv_vcpu_set_isa_ext_single(struct kvm_vcpu *vcpu,
+ 		 */
+ 		if (reg_val == 1 &&
+ 		    kvm_riscv_vcpu_isa_enable_allowed(reg_num))
+-			set_bit(host_isa_ext, vcpu->arch.isa);
++			set_bit(guest_ext, vcpu->arch.isa);
+ 		else if (!reg_val &&
+ 			 kvm_riscv_vcpu_isa_disable_allowed(reg_num))
+-			clear_bit(host_isa_ext, vcpu->arch.isa);
++			clear_bit(guest_ext, vcpu->arch.isa);
+ 		else
+ 			return -EINVAL;
+ 		kvm_riscv_vcpu_fp_reset(vcpu);
+@@ -1009,16 +1033,15 @@ static int copy_fp_d_reg_indices(const struct kvm_vcpu *vcpu,
+ static int copy_isa_ext_reg_indices(const struct kvm_vcpu *vcpu,
+ 				u64 __user *uindices)
+ {
++	unsigned long guest_ext;
+ 	unsigned int n = 0;
+-	unsigned long isa_ext;
+ 
+ 	for (int i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
+ 		u64 size = IS_ENABLED(CONFIG_32BIT) ?
+ 			   KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
+ 		u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_ISA_EXT | i;
+ 
+-		isa_ext = kvm_isa_ext_arr[i];
+-		if (!__riscv_isa_extension_available(NULL, isa_ext))
++		if (kvm_riscv_vcpu_isa_check_host(i, &guest_ext))
+ 			continue;
+ 
+ 		if (uindices) {
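
The new kvm_riscv_vcpu_isa_check_host() separates two IDs the old code conflated: the extension advertised to the guest and the host extension that actually provides it, with guest Smnpm backed by host Ssnpm per the comment above. A condensed sketch of that mapping helper (enum values and table are invented):

    #include <stdbool.h>
    #include <stdio.h>

    enum ext { EXT_SMNPM, EXT_SSNPM, EXT_SSTC, EXT_MAX };

    static bool host_has[EXT_MAX] = {
        [EXT_SSNPM] = true,     /* host implements Ssnpm, not Smnpm */
        [EXT_SSTC]  = true,
    };

    /* Resolve the guest-visible extension, then check that the host
     * extension which backs it is actually available. */
    static int check_host(enum ext guest, enum ext *out)
    {
        enum ext host;

        *out = guest;
        switch (guest) {
        case EXT_SMNPM:
            host = EXT_SSNPM;   /* guest Smnpm is provided by Ssnpm */
            break;
        default:
            host = guest;
            break;
        }
        return host_has[host] ? 0 : -1;
    }

    int main(void)
    {
        enum ext g;

        printf("SMNPM usable: %s\n", check_host(EXT_SMNPM, &g) ? "no" : "yes");
        printf("SSTC  usable: %s\n", check_host(EXT_SSTC,  &g) ? "no" : "yes");
        return 0;
    }
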
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index da8337e63a3e23..e124d1f1cf760a 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -384,7 +384,7 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
+ 		kernel_start = round_down(kernel_end - kernel_size, THREAD_SIZE);
+ 		boot_debug("Randomization range: 0x%016lx-0x%016lx\n", vmax - kaslr_len, vmax);
+ 		boot_debug("kernel image:        0x%016lx-0x%016lx (kaslr)\n", kernel_start,
+-			   kernel_size + kernel_size);
++			   kernel_start + kernel_size);
+ 	} else if (vmax < __NO_KASLR_END_KERNEL || vsize > __NO_KASLR_END_KERNEL) {
+ 		kernel_start = round_down(vmax - kernel_size, THREAD_SIZE);
+ 		boot_debug("kernel image:        0x%016lx-0x%016lx (constrained)\n", kernel_start,
+diff --git a/arch/s390/crypto/hmac_s390.c b/arch/s390/crypto/hmac_s390.c
+index 93a1098d9f8d3d..58444da9b004cd 100644
+--- a/arch/s390/crypto/hmac_s390.c
++++ b/arch/s390/crypto/hmac_s390.c
+@@ -290,6 +290,7 @@ static int s390_hmac_export(struct shash_desc *desc, void *out)
+ 	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+ 	unsigned int bs = crypto_shash_blocksize(desc->tfm);
+ 	unsigned int ds = bs / 2;
++	u64 lo = ctx->buflen[0];
+ 	union {
+ 		u8 *u8;
+ 		u64 *u64;
+@@ -301,9 +302,10 @@ static int s390_hmac_export(struct shash_desc *desc, void *out)
+ 	else
+ 		memcpy(p.u8, ctx->param, ds);
+ 	p.u8 += ds;
+-	put_unaligned(ctx->buflen[0], p.u64++);
++	lo += bs;
++	put_unaligned(lo, p.u64++);
+ 	if (ds == SHA512_DIGEST_SIZE)
+-		put_unaligned(ctx->buflen[1], p.u64);
++		put_unaligned(ctx->buflen[1] + (lo < bs), p.u64);
+ 	return err;
+ }
+ 
+@@ -316,14 +318,16 @@ static int s390_hmac_import(struct shash_desc *desc, const void *in)
+ 		const u8 *u8;
+ 		const u64 *u64;
+ 	} p = { .u8 = in };
++	u64 lo;
+ 	int err;
+ 
+ 	err = s390_hmac_sha2_init(desc);
+ 	memcpy(ctx->param, p.u8, ds);
+ 	p.u8 += ds;
+-	ctx->buflen[0] = get_unaligned(p.u64++);
++	lo = get_unaligned(p.u64++);
++	ctx->buflen[0] = lo - bs;
+ 	if (ds == SHA512_DIGEST_SIZE)
+-		ctx->buflen[1] = get_unaligned(p.u64);
++		ctx->buflen[1] = get_unaligned(p.u64) - (lo < bs);
+ 	if (ctx->buflen[0] | ctx->buflen[1])
+ 		ctx->gr0.ikp = 1;
+ 	return err;
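
The hmac_s390 export now includes the already-processed key block in the byte count: export adds one block size (bs) to the low 64-bit word and carries into the high word, and import subtracts it again, with the lo < bs comparison detecting wraparound in both directions. The arithmetic in isolation:

    #include <stdint.h>
    #include <stdio.h>

    /* 128-bit byte counter split across two u64 words. */
    struct count128 { uint64_t lo, hi; };

    static void add_bs(struct count128 *c, uint64_t bs)
    {
        c->lo += bs;
        c->hi += (c->lo < bs);  /* carry: the add wrapped iff lo < bs */
    }

    static void sub_bs(struct count128 *c, uint64_t bs)
    {
        uint64_t lo = c->lo;    /* test the pre-subtraction value */

        c->lo -= bs;
        c->hi -= (lo < bs);     /* borrow under the same condition */
    }

    int main(void)
    {
        struct count128 c = { .lo = UINT64_MAX - 10, .hi = 0 };

        add_bs(&c, 128);        /* wraps: hi becomes 1 */
        printf("lo=%llu hi=%llu\n",
               (unsigned long long)c.lo, (unsigned long long)c.hi);
        sub_bs(&c, 128);        /* and back again */
        printf("lo=%llu hi=%llu\n",
               (unsigned long long)c.lo, (unsigned long long)c.hi);
        return 0;
    }
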
+diff --git a/arch/s390/crypto/sha.h b/arch/s390/crypto/sha.h
+index d757ccbce2b4f4..cadb4b13622ae5 100644
+--- a/arch/s390/crypto/sha.h
++++ b/arch/s390/crypto/sha.h
+@@ -27,6 +27,9 @@ struct s390_sha_ctx {
+ 			u64 state[SHA512_DIGEST_SIZE / sizeof(u64)];
+ 			u64 count_hi;
+ 		} sha512;
++		struct {
++			__le64 state[SHA3_STATE_SIZE / sizeof(u64)];
++		} sha3;
+ 	};
+ 	int func;		/* KIMD function to use */
+ 	bool first_message_part;
+diff --git a/arch/s390/crypto/sha3_256_s390.c b/arch/s390/crypto/sha3_256_s390.c
+index 4a7731ac6bcd6c..03bb4f4bab7015 100644
+--- a/arch/s390/crypto/sha3_256_s390.c
++++ b/arch/s390/crypto/sha3_256_s390.c
+@@ -35,23 +35,33 @@ static int sha3_256_init(struct shash_desc *desc)
+ static int sha3_256_export(struct shash_desc *desc, void *out)
+ {
+ 	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
+-	struct sha3_state *octx = out;
++	union {
++		u8 *u8;
++		u64 *u64;
++	} p = { .u8 = out };
++	int i;
+ 
+ 	if (sctx->first_message_part) {
+-		memset(sctx->state, 0, sizeof(sctx->state));
+-		sctx->first_message_part = 0;
++		memset(out, 0, SHA3_STATE_SIZE);
++		return 0;
+ 	}
+-	memcpy(octx->st, sctx->state, sizeof(octx->st));
++	for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
++		put_unaligned(le64_to_cpu(sctx->sha3.state[i]), p.u64++);
+ 	return 0;
+ }
+ 
+ static int sha3_256_import(struct shash_desc *desc, const void *in)
+ {
+ 	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
+-	const struct sha3_state *ictx = in;
+-
++	union {
++		const u8 *u8;
++		const u64 *u64;
++	} p = { .u8 = in };
++	int i;
++
++	for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
++		sctx->sha3.state[i] = cpu_to_le64(get_unaligned(p.u64++));
+ 	sctx->count = 0;
+-	memcpy(sctx->state, ictx->st, sizeof(ictx->st));
+ 	sctx->first_message_part = 0;
+ 	sctx->func = CPACF_KIMD_SHA3_256;
+ 
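
The sha3 export/import rework stops memcpy'ing raw internal state and instead converts each word explicitly, so the exported format stays stable regardless of host endianness and buffer alignment. A sketch of the export half, using GCC/Clang builtins in place of the kernel's helpers:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Convert a word stored little-endian into the CPU-native value,
     * mirroring the kernel's le64_to_cpu(). */
    static uint64_t le64_to_native(uint64_t v)
    {
    #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        return __builtin_bswap64(v);
    #else
        return v;
    #endif
    }

    static void export_state(const uint64_t *le_state, size_t n, uint8_t *out)
    {
        for (size_t i = 0; i < n; i++) {
            uint64_t v = le64_to_native(le_state[i]);

            memcpy(out + 8 * i, &v, 8);  /* unaligned-safe store */
        }
    }

    int main(void)
    {
        uint64_t state[2] = { 0x0123456789abcdefULL, 0x1122334455667788ULL };
        uint8_t buf[16];

        export_state(state, 2, buf);
        for (int i = 0; i < 16; i++)
            printf("%02x", buf[i]);
        putchar('\n');
        return 0;
    }
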
+diff --git a/arch/s390/crypto/sha3_512_s390.c b/arch/s390/crypto/sha3_512_s390.c
+index 018f02fff44469..a5c9690eecb19a 100644
+--- a/arch/s390/crypto/sha3_512_s390.c
++++ b/arch/s390/crypto/sha3_512_s390.c
+@@ -34,24 +34,33 @@ static int sha3_512_init(struct shash_desc *desc)
+ static int sha3_512_export(struct shash_desc *desc, void *out)
+ {
+ 	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
+-	struct sha3_state *octx = out;
+-
++	union {
++		u8 *u8;
++		u64 *u64;
++	} p = { .u8 = out };
++	int i;
+ 
+ 	if (sctx->first_message_part) {
+-		memset(sctx->state, 0, sizeof(sctx->state));
+-		sctx->first_message_part = 0;
++		memset(out, 0, SHA3_STATE_SIZE);
++		return 0;
+ 	}
+-	memcpy(octx->st, sctx->state, sizeof(octx->st));
++	for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
++		put_unaligned(le64_to_cpu(sctx->sha3.state[i]), p.u64++);
+ 	return 0;
+ }
+ 
+ static int sha3_512_import(struct shash_desc *desc, const void *in)
+ {
+ 	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
+-	const struct sha3_state *ictx = in;
+-
++	union {
++		const u8 *u8;
++		const u64 *u64;
++	} p = { .u8 = in };
++	int i;
++
++	for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
++		sctx->sha3.state[i] = cpu_to_le64(get_unaligned(p.u64++));
+ 	sctx->count = 0;
+-	memcpy(sctx->state, ictx->st, sizeof(ictx->st));
+ 	sctx->first_message_part = 0;
+ 	sctx->func = CPACF_KIMD_SHA3_512;
+ 
+diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h
+index 395b02d6a13374..352108727d7e62 100644
+--- a/arch/s390/include/asm/ap.h
++++ b/arch/s390/include/asm/ap.h
+@@ -103,7 +103,7 @@ struct ap_tapq_hwinfo {
+ 			unsigned int accel :  1; /* A */
+ 			unsigned int ep11  :  1; /* X */
+ 			unsigned int apxa  :  1; /* APXA */
+-			unsigned int	   :  1;
++			unsigned int slcf  :  1; /* Cmd filtering avail. */
+ 			unsigned int class :  8;
+ 			unsigned int bs	   :  2; /* SE bind/assoc */
+ 			unsigned int	   : 14;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index f244c5560e7f62..5c9789804120ab 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -719,6 +719,11 @@ static void __init memblock_add_physmem_info(void)
+ 	memblock_set_node(0, ULONG_MAX, &memblock.memory, 0);
+ }
+ 
++static void __init setup_high_memory(void)
++{
++	high_memory = __va(ident_map_size);
++}
++
+ /*
+  * Reserve memory used for lowcore.
+  */
+@@ -951,6 +956,7 @@ void __init setup_arch(char **cmdline_p)
+ 
+ 	free_physmem_info();
+ 	setup_memory_end();
++	setup_high_memory();
+ 	memblock_dump_all();
+ 	setup_memory();
+ 
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index b449fd2605b03e..d2f6f1f6d2fcb9 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -173,11 +173,6 @@ void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+ 	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
+ 
+ 	call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
+-	/*
+-	 * THPs are not allowed for KVM guests. Warn if pgste ever reaches here.
+-	 * Turn to the generic pte_free_defer() version once gmap is removed.
+-	 */
+-	WARN_ON_ONCE(mm_has_pgste(mm));
+ }
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+ 
+diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
+index 448dd6ed1069b7..f48ef361bc8315 100644
+--- a/arch/s390/mm/vmem.c
++++ b/arch/s390/mm/vmem.c
+@@ -64,13 +64,12 @@ void *vmem_crst_alloc(unsigned long val)
+ 
+ pte_t __ref *vmem_pte_alloc(void)
+ {
+-	unsigned long size = PTRS_PER_PTE * sizeof(pte_t);
+ 	pte_t *pte;
+ 
+ 	if (slab_is_available())
+-		pte = (pte_t *) page_table_alloc(&init_mm);
++		pte = (pte_t *)page_table_alloc(&init_mm);
+ 	else
+-		pte = (pte_t *) memblock_alloc(size, size);
++		pte = (pte_t *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ 	if (!pte)
+ 		return NULL;
+ 	memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE);
+diff --git a/arch/sh/Makefile b/arch/sh/Makefile
+index cab2f9c011a8db..7b420424b6d7c4 100644
+--- a/arch/sh/Makefile
++++ b/arch/sh/Makefile
+@@ -103,16 +103,16 @@ UTS_MACHINE		:= sh
+ LDFLAGS_vmlinux		+= -e _stext
+ 
+ ifdef CONFIG_CPU_LITTLE_ENDIAN
+-ld-bfd			:= elf32-sh-linux
+-LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64 --oformat $(ld-bfd)
++ld_bfd			:= elf32-sh-linux
++LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS		+= -EL
+ else
+-ld-bfd			:= elf32-shbig-linux
+-LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64+4 --oformat $(ld-bfd)
++ld_bfd			:= elf32-shbig-linux
++LDFLAGS_vmlinux		+= --defsym jiffies=jiffies_64+4 --oformat $(ld_bfd)
+ KBUILD_LDFLAGS		+= -EB
+ endif
+ 
+-export ld-bfd
++export ld_bfd
+ 
+ # Mach groups
+ machdir-$(CONFIG_SOLUTION_ENGINE)		+= mach-se
+diff --git a/arch/sh/boot/compressed/Makefile b/arch/sh/boot/compressed/Makefile
+index 8bc319ff54bf93..58df491778b29a 100644
+--- a/arch/sh/boot/compressed/Makefile
++++ b/arch/sh/boot/compressed/Makefile
+@@ -27,7 +27,7 @@ endif
+ 
+ ccflags-remove-$(CONFIG_MCOUNT) += -pg
+ 
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(IMAGE_OFFSET) -e startup \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(IMAGE_OFFSET) -e startup \
+ 		   -T $(obj)/../../kernel/vmlinux.lds
+ 
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+@@ -51,7 +51,7 @@ $(obj)/vmlinux.bin.lzo: $(obj)/vmlinux.bin FORCE
+ 
+ OBJCOPYFLAGS += -R .empty_zero_page
+ 
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+ 
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/vmlinux.bin.$(suffix_y) FORCE
+ 	$(call if_changed,ld)
+diff --git a/arch/sh/boot/romimage/Makefile b/arch/sh/boot/romimage/Makefile
+index c7c8be58400cd9..17b03df0a8de4d 100644
+--- a/arch/sh/boot/romimage/Makefile
++++ b/arch/sh/boot/romimage/Makefile
+@@ -13,7 +13,7 @@ mmcif-obj-$(CONFIG_CPU_SUBTYPE_SH7724)	:= $(obj)/mmcif-sh7724.o
+ load-$(CONFIG_ROMIMAGE_MMCIF)		:= $(mmcif-load-y)
+ obj-$(CONFIG_ROMIMAGE_MMCIF)		:= $(mmcif-obj-y)
+ 
+-LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(load-y) -e romstart \
++LDFLAGS_vmlinux := --oformat $(ld_bfd) -Ttext $(load-y) -e romstart \
+ 		   -T $(obj)/../../kernel/vmlinux.lds
+ 
+ $(obj)/vmlinux: $(obj)/head.o $(obj-y) $(obj)/piggy.o FORCE
+@@ -24,7 +24,7 @@ OBJCOPYFLAGS += -j .empty_zero_page
+ $(obj)/zeropage.bin: vmlinux FORCE
+ 	$(call if_changed,objcopy)
+ 
+-LDFLAGS_piggy.o := -r --format binary --oformat $(ld-bfd) -T
++LDFLAGS_piggy.o := -r --format binary --oformat $(ld_bfd) -T
+ 
+ $(obj)/piggy.o: $(obj)/vmlinux.scr $(obj)/zeropage.bin arch/sh/boot/zImage FORCE
+ 	$(call if_changed,ld)
+diff --git a/arch/um/drivers/rtc_user.c b/arch/um/drivers/rtc_user.c
+index 51e79f3148cd40..67912fcf7b2864 100644
+--- a/arch/um/drivers/rtc_user.c
++++ b/arch/um/drivers/rtc_user.c
+@@ -28,7 +28,7 @@ int uml_rtc_start(bool timetravel)
+ 	int err;
+ 
+ 	if (timetravel) {
+-		int err = os_pipe(uml_rtc_irq_fds, 1, 1);
++		err = os_pipe(uml_rtc_irq_fds, 1, 1);
+ 		if (err)
+ 			goto fail;
+ 	} else {
+diff --git a/arch/x86/boot/cpuflags.c b/arch/x86/boot/cpuflags.c
+index 916bac09b464da..63e037e94e4c03 100644
+--- a/arch/x86/boot/cpuflags.c
++++ b/arch/x86/boot/cpuflags.c
+@@ -106,5 +106,18 @@ void get_cpuflags(void)
+ 			cpuid(0x80000001, &ignored, &ignored, &cpu.flags[6],
+ 			      &cpu.flags[1]);
+ 		}
++
++		if (max_amd_level >= 0x8000001f) {
++			u32 ebx;
++
++			/*
++			 * The X86_FEATURE_COHERENCY_SFW_NO feature bit is in
++			 * the virtualization flags entry (word 8) and set by
++			 * scattered.c, so the bit needs to be explicitly set.
++			 */
++			cpuid(0x8000001f, &ignored, &ebx, &ignored, &ignored);
++			if (ebx & BIT(31))
++				set_bit(X86_FEATURE_COHERENCY_SFW_NO, cpu.flags);
++		}
+ 	}
+ }
+diff --git a/arch/x86/boot/startup/sev-shared.c b/arch/x86/boot/startup/sev-shared.c
+index 7a706db87b932f..ac7dfd21ddd4d5 100644
+--- a/arch/x86/boot/startup/sev-shared.c
++++ b/arch/x86/boot/startup/sev-shared.c
+@@ -810,6 +810,13 @@ static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
+ 		if (ret)
+ 			sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
+ 	}
++
++	/*
++	 * If validating memory (making it private) and affected by the
++	 * cache-coherency vulnerability, perform the cache eviction mitigation.
++	 */
++	if (validate && !has_cpuflag(X86_FEATURE_COHERENCY_SFW_NO))
++		sev_evict_cache((void *)vaddr, 1);
+ }
+ 
+ /*
+diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
+index 7543a8b52c67c2..eb80445e8e5d38 100644
+--- a/arch/x86/coco/sev/core.c
++++ b/arch/x86/coco/sev/core.c
+@@ -358,10 +358,31 @@ static void svsm_pval_pages(struct snp_psc_desc *desc)
+ 
+ static void pvalidate_pages(struct snp_psc_desc *desc)
+ {
++	struct psc_entry *e;
++	unsigned int i;
++
+ 	if (snp_vmpl)
+ 		svsm_pval_pages(desc);
+ 	else
+ 		pval_pages(desc);
++
++	/*
++	 * If not affected by the cache-coherency vulnerability there is no need
++	 * to perform the cache eviction mitigation.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_COHERENCY_SFW_NO))
++		return;
++
++	for (i = 0; i <= desc->hdr.end_entry; i++) {
++		e = &desc->entries[i];
++
++		/*
++		 * If validating memory (making it private) perform the cache
++		 * eviction mitigation.
++		 */
++		if (e->operation == SNP_PAGE_STATE_PRIVATE)
++			sev_evict_cache(pfn_to_kaddr(e->gfn), e->pagesize ? 512 : 1);
++	}
+ }
+ 
+ static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 286d509f9363bc..4597ef6621220e 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -218,6 +218,7 @@
+ #define X86_FEATURE_FLEXPRIORITY	( 8*32+ 1) /* "flexpriority" Intel FlexPriority */
+ #define X86_FEATURE_EPT			( 8*32+ 2) /* "ept" Intel Extended Page Table */
+ #define X86_FEATURE_VPID		( 8*32+ 3) /* "vpid" Intel Virtual Processor ID */
++#define X86_FEATURE_COHERENCY_SFW_NO	( 8*32+ 4) /* SNP cache coherency software workaround not needed */
+ 
+ #define X86_FEATURE_VMMCALL		( 8*32+15) /* "vmmcall" Prefer VMMCALL to VMCALL */
+ #define X86_FEATURE_XENPV		( 8*32+16) /* Xen paravirtual guest */
+diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
+index 162ebd73a6981e..cbe19e6690801a 100644
+--- a/arch/x86/include/asm/hw_irq.h
++++ b/arch/x86/include/asm/hw_irq.h
+@@ -92,8 +92,6 @@ struct irq_cfg {
+ 
+ extern struct irq_cfg *irq_cfg(unsigned int irq);
+ extern struct irq_cfg *irqd_cfg(struct irq_data *irq_data);
+-extern void lock_vector_lock(void);
+-extern void unlock_vector_lock(void);
+ #ifdef CONFIG_SMP
+ extern void vector_schedule_cleanup(struct irq_cfg *);
+ extern void irq_complete_move(struct irq_cfg *cfg);
+@@ -101,12 +99,16 @@ extern void irq_complete_move(struct irq_cfg *cfg);
+ static inline void vector_schedule_cleanup(struct irq_cfg *c) { }
+ static inline void irq_complete_move(struct irq_cfg *c) { }
+ #endif
+-
+ extern void apic_ack_edge(struct irq_data *data);
+-#else	/*  CONFIG_IRQ_DOMAIN_HIERARCHY */
++#endif /* CONFIG_IRQ_DOMAIN_HIERARCHY */
++
++#ifdef CONFIG_X86_LOCAL_APIC
++extern void lock_vector_lock(void);
++extern void unlock_vector_lock(void);
++#else
+ static inline void lock_vector_lock(void) {}
+ static inline void unlock_vector_lock(void) {}
+-#endif	/* CONFIG_IRQ_DOMAIN_HIERARCHY */
++#endif
+ 
+ /* Statistics */
+ extern atomic_t irq_err_count;
+diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
+index 8d50e3e0a19b9c..9e0c37ea267e05 100644
+--- a/arch/x86/include/asm/kvm-x86-ops.h
++++ b/arch/x86/include/asm/kvm-x86-ops.h
+@@ -49,7 +49,6 @@ KVM_X86_OP(set_idt)
+ KVM_X86_OP(get_gdt)
+ KVM_X86_OP(set_gdt)
+ KVM_X86_OP(sync_dirty_debug_regs)
+-KVM_X86_OP(set_dr6)
+ KVM_X86_OP(set_dr7)
+ KVM_X86_OP(cache_reg)
+ KVM_X86_OP(get_rflags)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index f7af967aa16fdb..7e45a20d3ebc3f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1680,6 +1680,11 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
+ 	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
+ }
+ 
++enum kvm_x86_run_flags {
++	KVM_RUN_FORCE_IMMEDIATE_EXIT	= BIT(0),
++	KVM_RUN_LOAD_GUEST_DR6		= BIT(1),
++};
++
+ struct kvm_x86_ops {
+ 	const char *name;
+ 
+@@ -1730,7 +1735,6 @@ struct kvm_x86_ops {
+ 	void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ 	void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ 	void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
+-	void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value);
+ 	void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
+ 	void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+ 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
+@@ -1761,7 +1765,7 @@ struct kvm_x86_ops {
+ 
+ 	int (*vcpu_pre_run)(struct kvm_vcpu *vcpu);
+ 	enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu,
+-						  bool force_immediate_exit);
++						  u64 run_flags);
+ 	int (*handle_exit)(struct kvm_vcpu *vcpu,
+ 		enum exit_fastpath_completion exit_fastpath);
+ 	int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
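
The kvm_x86_run_flags change above turns the old force_immediate_exit
bool into a u64 bitmap, so extra per-run requests such as
KVM_RUN_LOAD_GUEST_DR6 can be added without touching the vcpu_run()
signature again. A toy sketch of the pattern (hypothetical names, not
KVM code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum run_flags {
	RUN_FORCE_IMMEDIATE_EXIT = 1u << 0,
	RUN_LOAD_GUEST_DR6	 = 1u << 1,
};

static void vcpu_run(uint64_t run_flags)
{
	bool force_immediate_exit = run_flags & RUN_FORCE_IMMEDIATE_EXIT;

	if (run_flags & RUN_LOAD_GUEST_DR6)
		puts("load guest DR6 before entry");
	if (force_immediate_exit)
		puts("arm an immediate exit");
}

int main(void)
{
	vcpu_run(RUN_FORCE_IMMEDIATE_EXIT | RUN_LOAD_GUEST_DR6);
	return 0;
}
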
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 5cfb5d74dd5f58..c29127ac626ac4 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -419,6 +419,7 @@
+ #define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI	(1UL << 12)
+ #define DEBUGCTLMSR_FREEZE_IN_SMM_BIT	14
+ #define DEBUGCTLMSR_FREEZE_IN_SMM	(1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT)
++#define DEBUGCTLMSR_RTM_DEBUG		BIT(15)
+ 
+ #define MSR_PEBS_FRONTEND		0x000003f7
+ 
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index a631f7d7c0c0bc..14d7e0719dd5ee 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -621,6 +621,24 @@ int rmp_make_shared(u64 pfn, enum pg_level level);
+ void snp_leak_pages(u64 pfn, unsigned int npages);
+ void kdump_sev_callback(void);
+ void snp_fixup_e820_tables(void);
++
++static inline void sev_evict_cache(void *va, int npages)
++{
++	volatile u8 val __always_unused;
++	u8 *bytes = va;
++	int page_idx;
++
++	/*
++	 * For SEV guests, a read from the first/last cache-lines of a 4K page
++	 * using the guest key is sufficient to cause a flush of all cache-lines
++	 * associated with that 4K page without incurring all the overhead of a
++	 * full CLFLUSH sequence.
++	 */
++	for (page_idx = 0; page_idx < npages; page_idx++) {
++		val = bytes[page_idx * PAGE_SIZE];
++		val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
++	}
++}
+ #else
+ static inline bool snp_probe_rmptable_info(void) { return false; }
+ static inline int snp_rmptable_init(void) { return -ENOSYS; }
+@@ -636,6 +654,7 @@ static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENODEV
+ static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
+ static inline void kdump_sev_callback(void) { }
+ static inline void snp_fixup_e820_tables(void) {}
++static inline void sev_evict_cache(void *va, int npages) {}
+ #endif
+ 
+ #endif
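
sev_evict_cache() above relies on an SEV hardware property spelled out
in its comment: reading the first and last cache line of a 4K page with
the guest key flushes all cache lines of that page, far cheaper than a
full CLFLUSH sequence. The access pattern itself, as a standalone
userspace sketch (illustrative; the eviction effect only exists on the
affected hardware):

#include <stddef.h>

#define PAGE_SZ 4096

static void touch_page_edges(const unsigned char *va, int npages)
{
	volatile unsigned char sink;

	for (int i = 0; i < npages; i++) {
		sink = va[(size_t)i * PAGE_SZ];			/* first line */
		sink = va[(size_t)i * PAGE_SZ + PAGE_SZ - 1];	/* last line */
	}
}

int main(void)
{
	static unsigned char buf[4 * PAGE_SZ];

	touch_page_edges(buf, 4);
	return 0;
}
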
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index f4d3abb12317a7..f2721801d8d423 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1124,6 +1124,20 @@ early_param("nospectre_v1", nospectre_v1_cmdline);
+ 
+ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
+ 
++/* Depends on spectre_v2 mitigation selected already */
++static inline bool cdt_possible(enum spectre_v2_mitigation mode)
++{
++	if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) ||
++	    !IS_ENABLED(CONFIG_MITIGATION_RETPOLINE))
++		return false;
++
++	if (mode == SPECTRE_V2_RETPOLINE ||
++	    mode == SPECTRE_V2_EIBRS_RETPOLINE)
++		return true;
++
++	return false;
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "RETBleed: " fmt
+ 
+@@ -1251,6 +1265,14 @@ static void __init retbleed_select_mitigation(void)
+ 			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ 		else
+ 			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
++	} else if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
++		/* Final mitigation depends on spectre-v2 selection */
++		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED))
++			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
++		else if (boot_cpu_has(X86_FEATURE_IBRS))
++			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
++		else
++			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+ 	}
+ }
+ 
+@@ -1259,27 +1281,16 @@ static void __init retbleed_update_mitigation(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
+ 		return;
+ 
+-	if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
+-		goto out;
+-
+-	/*
+-	 * retbleed=stuff is only allowed on Intel.  If stuffing can't be used
+-	 * then a different mitigation will be selected below.
+-	 *
+-	 * its=stuff will also attempt to enable stuffing.
+-	 */
+-	if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF ||
+-	    its_mitigation == ITS_MITIGATION_RETPOLINE_STUFF) {
+-		if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
+-			pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
+-			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+-		} else {
+-			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+-				pr_info("Retbleed mitigation updated to stuffing\n");
++	/* ITS can also enable stuffing */
++	if (its_mitigation == ITS_MITIGATION_RETPOLINE_STUFF)
++		retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
+ 
+-			retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
+-		}
++	if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF &&
++	    !cdt_possible(spectre_v2_enabled)) {
++		pr_err("WARNING: retbleed=stuff depends on retpoline\n");
++		retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+ 	}
++
+ 	/*
+ 	 * Let IBRS trump all on Intel without affecting the effects of the
+ 	 * retbleed= cmdline option except for call depth based stuffing
+@@ -1298,15 +1309,11 @@ static void __init retbleed_update_mitigation(void)
+ 			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+ 				pr_err(RETBLEED_INTEL_MSG);
+ 		}
+-		/* If nothing has set the mitigation yet, default to NONE. */
+-		if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO)
+-			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+ 	}
+-out:
++
+ 	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+ }
+ 
+-
+ static void __init retbleed_apply_mitigation(void)
+ {
+ 	bool mitigate_smt = false;
+@@ -1453,6 +1460,7 @@ static void __init its_update_mitigation(void)
+ 		its_mitigation = ITS_MITIGATION_OFF;
+ 		break;
+ 	case SPECTRE_V2_RETPOLINE:
++	case SPECTRE_V2_EIBRS_RETPOLINE:
+ 		/* Retpoline+CDT mitigates ITS */
+ 		if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF)
+ 			its_mitigation = ITS_MITIGATION_RETPOLINE_STUFF;
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index b4a1f6732a3aad..6b868afb26c319 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -48,6 +48,7 @@ static const struct cpuid_bit cpuid_bits[] = {
+ 	{ X86_FEATURE_PROC_FEEDBACK,		CPUID_EDX, 11, 0x80000007, 0 },
+ 	{ X86_FEATURE_AMD_FAST_CPPC,		CPUID_EDX, 15, 0x80000007, 0 },
+ 	{ X86_FEATURE_MBA,			CPUID_EBX,  6, 0x80000008, 0 },
++	{ X86_FEATURE_COHERENCY_SFW_NO,		CPUID_EBX, 31, 0x8000001f, 0 },
+ 	{ X86_FEATURE_SMBA,			CPUID_EBX,  2, 0x80000020, 0 },
+ 	{ X86_FEATURE_BMEC,			CPUID_EBX,  3, 0x80000020, 0 },
+ 	{ X86_FEATURE_TSA_SQ_NO,		CPUID_ECX,  1, 0x80000021, 0 },
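
The scattered.c entry above routes CPUID leaf 0x8000001f, EBX bit 31
into the synthetic feature word. The same bit can be probed from
userspace; a small sketch using the GCC/clang <cpuid.h> helper
(assumed available on x86 toolchains, illustrative only):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x8000001f not available");
		return 1;
	}
	printf("COHERENCY_SFW_NO: %s\n",
	       ebx & (1u << 31) ? "set" : "clear");
	return 0;
}
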
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 9ed29ff10e59d0..10721a12522694 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -256,26 +256,59 @@ static __always_inline void handle_irq(struct irq_desc *desc,
+ 		__handle_irq(desc, regs);
+ }
+ 
+-static __always_inline int call_irq_handler(int vector, struct pt_regs *regs)
++static struct irq_desc *reevaluate_vector(int vector)
+ {
+-	struct irq_desc *desc;
+-	int ret = 0;
++	struct irq_desc *desc = __this_cpu_read(vector_irq[vector]);
++
++	if (!IS_ERR_OR_NULL(desc))
++		return desc;
++
++	if (desc == VECTOR_UNUSED)
++		pr_emerg_ratelimited("No irq handler for %d.%u\n", smp_processor_id(), vector);
++	else
++		__this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
++	return NULL;
++}
++
++static __always_inline bool call_irq_handler(int vector, struct pt_regs *regs)
++{
++	struct irq_desc *desc = __this_cpu_read(vector_irq[vector]);
+ 
+-	desc = __this_cpu_read(vector_irq[vector]);
+ 	if (likely(!IS_ERR_OR_NULL(desc))) {
+ 		handle_irq(desc, regs);
+-	} else {
+-		ret = -EINVAL;
+-		if (desc == VECTOR_UNUSED) {
+-			pr_emerg_ratelimited("%s: %d.%u No irq handler for vector\n",
+-					     __func__, smp_processor_id(),
+-					     vector);
+-		} else {
+-			__this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
+-		}
++		return true;
+ 	}
+ 
+-	return ret;
++	/*
++	 * Reevaluate with vector_lock held to prevent a race against
++	 * request_irq() setting up the vector:
++	 *
++	 * CPU0				CPU1
++	 *				interrupt is raised in APIC IRR
++	 *				but not handled
++	 * free_irq()
++	 *   per_cpu(vector_irq, CPU1)[vector] = VECTOR_SHUTDOWN;
++	 *
++	 * request_irq()		common_interrupt()
++	 *				  d = this_cpu_read(vector_irq[vector]);
++	 *
++	 * per_cpu(vector_irq, CPU1)[vector] = desc;
++	 *
++	 *				  if (d == VECTOR_SHUTDOWN)
++	 *				    this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
++	 *
++	 * This requires that the same vector on the same target CPU is
++	 * handed out or that a spurious interrupt hits that CPU/vector.
++	 */
++	lock_vector_lock();
++	desc = reevaluate_vector(vector);
++	unlock_vector_lock();
++
++	if (!desc)
++		return false;
++
++	handle_irq(desc, regs);
++	return true;
+ }
+ 
+ /*
+@@ -289,7 +322,7 @@ DEFINE_IDTENTRY_IRQ(common_interrupt)
+ 	/* entry code tells RCU that we're not quiescent.  Check it. */
+ 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "IRQ failed to wake up RCU");
+ 
+-	if (unlikely(call_irq_handler(vector, regs)))
++	if (unlikely(!call_irq_handler(vector, regs)))
+ 		apic_eoi();
+ 
+ 	set_irq_regs(old_regs);
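
The irq.c rework above is the classic recheck-under-lock shape: a
lockless fast-path lookup, then on failure one retry with vector_lock
held so that a concurrent request_irq() publishing the descriptor
cannot be missed. A minimal pthread rendering of that shape
(hypothetical names, not kernel code):

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;
static void *vector_table[256];

static void *lookup_vector(int vector)
{
	void *desc = __atomic_load_n(&vector_table[vector], __ATOMIC_ACQUIRE);

	if (desc)				/* fast path, no lock */
		return desc;

	pthread_mutex_lock(&vector_lock);	/* slow path: recheck */
	desc = vector_table[vector];
	pthread_mutex_unlock(&vector_lock);
	return desc;
}

int main(void)
{
	pthread_mutex_lock(&vector_lock);	/* publisher side */
	vector_table[42] = &vector_table;
	pthread_mutex_unlock(&vector_lock);

	return lookup_vector(42) == NULL;
}
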
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index ab9b947dbf4f9c..be8c43049f4d39 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4389,9 +4389,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
+ 	guest_state_exit_irqoff();
+ }
+ 
+-static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+-					  bool force_immediate_exit)
++static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ {
++	bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 	bool spec_ctrl_intercepted = msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL);
+ 
+@@ -4438,10 +4438,13 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ 	svm_hv_update_vp_id(svm->vmcb, vcpu);
+ 
+ 	/*
+-	 * Run with all-zero DR6 unless needed, so that we can get the exact cause
+-	 * of a #DB.
++	 * Run with all-zero DR6 unless the guest can write DR6 freely, so that
++	 * KVM can get the exact cause of a #DB.  Note, loading guest DR6 from
++	 * KVM's snapshot is only necessary when DR accesses won't exit.
+ 	 */
+-	if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
++	if (unlikely(run_flags & KVM_RUN_LOAD_GUEST_DR6))
++		svm_set_dr6(vcpu, vcpu->arch.dr6);
++	else if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
+ 		svm_set_dr6(vcpu, DR6_ACTIVE_LOW);
+ 
+ 	clgi();
+@@ -5252,7 +5255,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ 	.set_idt = svm_set_idt,
+ 	.get_gdt = svm_get_gdt,
+ 	.set_gdt = svm_set_gdt,
+-	.set_dr6 = svm_set_dr6,
+ 	.set_dr7 = svm_set_dr7,
+ 	.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
+ 	.cache_reg = svm_cache_reg,
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index d1e02e567b571f..c85cbce6d2f6ef 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -175,12 +175,12 @@ static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
+ 	return vmx_vcpu_pre_run(vcpu);
+ }
+ 
+-static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
++static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ {
+ 	if (is_td_vcpu(vcpu))
+-		return tdx_vcpu_run(vcpu, force_immediate_exit);
++		return tdx_vcpu_run(vcpu, run_flags);
+ 
+-	return vmx_vcpu_run(vcpu, force_immediate_exit);
++	return vmx_vcpu_run(vcpu, run_flags);
+ }
+ 
+ static int vt_handle_exit(struct kvm_vcpu *vcpu,
+@@ -489,14 +489,6 @@ static void vt_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt)
+ 	vmx_set_gdt(vcpu, dt);
+ }
+ 
+-static void vt_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
+-{
+-	if (is_td_vcpu(vcpu))
+-		return;
+-
+-	vmx_set_dr6(vcpu, val);
+-}
+-
+ static void vt_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+ {
+ 	if (is_td_vcpu(vcpu))
+@@ -943,7 +935,6 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ 	.set_idt = vt_op(set_idt),
+ 	.get_gdt = vt_op(get_gdt),
+ 	.set_gdt = vt_op(set_gdt),
+-	.set_dr6 = vt_op(set_dr6),
+ 	.set_dr7 = vt_op(set_dr7),
+ 	.sync_dirty_debug_regs = vt_op(sync_dirty_debug_regs),
+ 	.cache_reg = vt_op(cache_reg),
+diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
+index ec79aacc446f08..d584c0e2869217 100644
+--- a/arch/x86/kvm/vmx/tdx.c
++++ b/arch/x86/kvm/vmx/tdx.c
+@@ -1025,20 +1025,20 @@ static void tdx_load_host_xsave_state(struct kvm_vcpu *vcpu)
+ 				DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI | \
+ 				DEBUGCTLMSR_FREEZE_IN_SMM)
+ 
+-fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
++fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ {
+ 	struct vcpu_tdx *tdx = to_tdx(vcpu);
+ 	struct vcpu_vt *vt = to_vt(vcpu);
+ 
+ 	/*
+-	 * force_immediate_exit requires vCPU entering for events injection with
+-	 * an immediately exit followed. But The TDX module doesn't guarantee
+-	 * entry, it's already possible for KVM to _think_ it completely entry
+-	 * to the guest without actually having done so.
+-	 * Since KVM never needs to force an immediate exit for TDX, and can't
+-	 * do direct injection, just warn on force_immediate_exit.
++	 * WARN if KVM wants to force an immediate exit, as the TDX module does
++	 * not guarantee entry into the guest, i.e. it's possible for KVM to
++	 * _think_ it completed entry to the guest and forced an immediate exit
++	 * without actually having done so.  Luckily, KVM never needs to force
++	 * an immediate exit for TDX (KVM can't do direct event injection, so
++	 * just WARN and continue on).
+ 	 */
+-	WARN_ON_ONCE(force_immediate_exit);
++	WARN_ON_ONCE(run_flags);
+ 
+ 	/*
+ 	 * Wait until retry of SEPT-zap-related SEAMCALL completes before
+@@ -1048,7 +1048,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+ 	if (unlikely(READ_ONCE(to_kvm_tdx(vcpu->kvm)->wait_for_sept_zap)))
+ 		return EXIT_FASTPATH_EXIT_HANDLED;
+ 
+-	trace_kvm_entry(vcpu, force_immediate_exit);
++	trace_kvm_entry(vcpu, run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT);
+ 
+ 	if (pi_test_on(&vt->pi_desc)) {
+ 		apic->send_IPI_self(POSTED_INTR_VECTOR);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 191a9ed0da2271..91fbddbbc3ba7c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2186,6 +2186,10 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
+ 	    (host_initiated || intel_pmu_lbr_is_enabled(vcpu)))
+ 		debugctl |= DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI;
+ 
++	if (boot_cpu_has(X86_FEATURE_RTM) &&
++	    (host_initiated || guest_cpu_cap_has(vcpu, X86_FEATURE_RTM)))
++		debugctl |= DEBUGCTLMSR_RTM_DEBUG;
++
+ 	return debugctl;
+ }
+ 
+@@ -5606,12 +5610,6 @@ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+ 	set_debugreg(DR6_RESERVED, 6);
+ }
+ 
+-void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
+-{
+-	lockdep_assert_irqs_disabled();
+-	set_debugreg(vcpu->arch.dr6, 6);
+-}
+-
+ void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+ {
+ 	vmcs_writel(GUEST_DR7, val);
+@@ -7323,8 +7321,9 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ 	guest_state_exit_irqoff();
+ }
+ 
+-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
++fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ {
++	bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	unsigned long cr3, cr4;
+ 
+@@ -7369,6 +7368,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+ 		vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]);
+ 	vcpu->arch.regs_dirty = 0;
+ 
++	if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
++		set_debugreg(vcpu->arch.dr6, 6);
++
+ 	/*
+ 	 * Refresh vmcs.HOST_CR3 if necessary.  This must be done immediately
+ 	 * prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index b4596f65123282..0b4f5c5558d0da 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -21,7 +21,7 @@ void vmx_vm_destroy(struct kvm *kvm);
+ int vmx_vcpu_precreate(struct kvm *kvm);
+ int vmx_vcpu_create(struct kvm_vcpu *vcpu);
+ int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
+-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
++fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
+ void vmx_vcpu_free(struct kvm_vcpu *vcpu);
+ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+@@ -133,7 +133,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void tdx_vcpu_free(struct kvm_vcpu *vcpu);
+ void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+ int tdx_vcpu_pre_run(struct kvm_vcpu *vcpu);
+-fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
++fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
+ void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
+ void tdx_vcpu_put(struct kvm_vcpu *vcpu);
+ bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 93636f77c42d8a..05de6c5949a470 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10785,6 +10785,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		dm_request_for_irq_injection(vcpu) &&
+ 		kvm_cpu_accept_dm_intr(vcpu);
+ 	fastpath_t exit_fastpath;
++	u64 run_flags;
+ 
+ 	bool req_immediate_exit = false;
+ 
+@@ -11029,8 +11030,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		goto cancel_injection;
+ 	}
+ 
+-	if (req_immediate_exit)
++	run_flags = 0;
++	if (req_immediate_exit) {
++		run_flags |= KVM_RUN_FORCE_IMMEDIATE_EXIT;
+ 		kvm_make_request(KVM_REQ_EVENT, vcpu);
++	}
+ 
+ 	fpregs_assert_state_consistent();
+ 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+@@ -11048,7 +11052,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		set_debugreg(vcpu->arch.eff_db[3], 3);
+ 		/* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
+ 		if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+-			kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);
++			run_flags |= KVM_RUN_LOAD_GUEST_DR6;
+ 	} else if (unlikely(hw_breakpoint_active())) {
+ 		set_debugreg(DR7_FIXED_1, 7);
+ 	}
+@@ -11067,8 +11071,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
+ 			     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
+ 
+-		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu,
+-						       req_immediate_exit);
++		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu, run_flags);
+ 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
+ 			break;
+ 
+@@ -11080,6 +11083,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 			break;
+ 		}
+ 
++		run_flags = 0;
++
+ 		/* Note, VM-Exits that go down the "slow" path are accounted below. */
+ 		++vcpu->stat.exits;
+ 	}
+diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
+index bf8dab18be9747..2fdc1f1f5adb95 100644
+--- a/arch/x86/mm/extable.c
++++ b/arch/x86/mm/extable.c
+@@ -122,13 +122,12 @@ static bool ex_handler_sgx(const struct exception_table_entry *fixup,
+ static bool ex_handler_fprestore(const struct exception_table_entry *fixup,
+ 				 struct pt_regs *regs)
+ {
+-	regs->ip = ex_fixup_addr(fixup);
+-
+ 	WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
+ 		  (void *)instruction_pointer(regs));
+ 
+ 	fpu_reset_from_exception_fixup();
+-	return true;
++
++	return ex_handler_default(fixup, regs);
+ }
+ 
+ /*
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 4806b867e37d08..dec1cd4f1f5b6e 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4966,6 +4966,60 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
+ 	return ret;
+ }
+ 
++/*
++ * Switch back to the elevator type stored in the xarray.
++ */
++static void blk_mq_elv_switch_back(struct request_queue *q,
++		struct xarray *elv_tbl)
++{
++	struct elevator_type *e = xa_load(elv_tbl, q->id);
++
++	/* elv_update_nr_hw_queues() unfreezes the queue. */
++	elv_update_nr_hw_queues(q, e);
++
++	/* Drop the reference acquired in blk_mq_elv_switch_none. */
++	if (e)
++		elevator_put(e);
++}
++
++/*
++ * Stores elevator type in xarray and sets current elevator to none. It uses
++ * q->id as an index to store the elevator type into the xarray.
++ */
++static int blk_mq_elv_switch_none(struct request_queue *q,
++		struct xarray *elv_tbl)
++{
++	int ret = 0;
++
++	lockdep_assert_held_write(&q->tag_set->update_nr_hwq_lock);
++
++	/*
++	 * Accessing q->elevator without holding q->elevator_lock is safe here
++	 * because we're called from nr_hw_queue update which is protected by
++	 * set->update_nr_hwq_lock in the writer context. So, scheduler update/
++	 * switch code (which acquires the same lock in the reader context)
++	 * can't run concurrently.
++	 */
++	if (q->elevator) {
++
++		ret = xa_insert(elv_tbl, q->id, q->elevator->type, GFP_KERNEL);
++		if (WARN_ON_ONCE(ret))
++			return ret;
++
++		/*
++		 * Before we switch the elevator to 'none', take a reference to
++		 * the elevator module so that no one can remove it while the
++		 * nr_hw_queues update is running. The reference is dropped
++		 * again when we switch back to the original elevator.
++		 */
++		__elevator_get(q->elevator->type);
++
++		elevator_set_none(q);
++	}
++	return ret;
++}
++
+ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 							int nr_hw_queues)
+ {
+@@ -4973,6 +5027,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 	int prev_nr_hw_queues = set->nr_hw_queues;
+ 	unsigned int memflags;
+ 	int i;
++	struct xarray elv_tbl;
+ 
+ 	lockdep_assert_held(&set->tag_list_lock);
+ 
+@@ -4984,6 +5039,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 		return;
+ 
+ 	memflags = memalloc_noio_save();
++
++	xa_init(&elv_tbl);
++
+ 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+ 		blk_mq_debugfs_unregister_hctxs(q);
+ 		blk_mq_sysfs_unregister_hctxs(q);
+@@ -4992,11 +5050,17 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 	list_for_each_entry(q, &set->tag_list, tag_set_list)
+ 		blk_mq_freeze_queue_nomemsave(q);
+ 
+-	if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0) {
+-		list_for_each_entry(q, &set->tag_list, tag_set_list)
+-			blk_mq_unfreeze_queue_nomemrestore(q);
+-		goto reregister;
+-	}
++	/*
++	 * Switch IO scheduler to 'none', cleaning up the data associated
++	 * with the previous scheduler. We will switch back once we are done
++	 * updating the new sw to hw queue mappings.
++	 */
++	list_for_each_entry(q, &set->tag_list, tag_set_list)
++		if (blk_mq_elv_switch_none(q, &elv_tbl))
++			goto switch_back;
++
++	if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)
++		goto switch_back;
+ 
+ fallback:
+ 	blk_mq_update_queue_map(set);
+@@ -5016,12 +5080,11 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 		}
+ 		blk_mq_map_swqueue(q);
+ 	}
+-
+-	/* elv_update_nr_hw_queues() unfreeze queue for us */
++switch_back:
++	/* blk_mq_elv_switch_back() unfreezes the queue for us. */
+ 	list_for_each_entry(q, &set->tag_list, tag_set_list)
+-		elv_update_nr_hw_queues(q);
++		blk_mq_elv_switch_back(q, &elv_tbl);
+ 
+-reregister:
+ 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+ 		blk_mq_sysfs_register_hctxs(q);
+ 		blk_mq_debugfs_register_hctxs(q);
+@@ -5029,6 +5092,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 		blk_mq_remove_hw_queues_cpuhp(q);
+ 		blk_mq_add_hw_queues_cpuhp(q);
+ 	}
++
++	xa_destroy(&elv_tbl);
++
+ 	memalloc_noio_restore(memflags);
+ 
+ 	/* Free the excess tags when nr_hw_queues shrink. */
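
Conceptually, the two helpers added above park each queue's elevator
type in a table keyed by q->id, run the nr_hw_queues update with the
scheduler set to none, and then reinstate whatever was parked. The same
flow with a plain array standing in for the xarray (hypothetical types,
sketch only):

#include <stdio.h>

#define MAX_QUEUES 8

struct queue {
	int id;
	const char *elevator;
};

static const char *parked[MAX_QUEUES];

static void switch_none(struct queue *q)
{
	parked[q->id] = q->elevator;	/* remember the old type */
	q->elevator = "none";
}

static void switch_back(struct queue *q)
{
	if (parked[q->id])
		q->elevator = parked[q->id];
}

int main(void)
{
	struct queue q = { .id = 3, .elevator = "mq-deadline" };

	switch_none(&q);
	/* ... remap software to hardware queues here ... */
	switch_back(&q);
	printf("restored: %s\n", q.elevator);
	return 0;
}
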
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index a000daafbfb489..1a82980d52e93a 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -181,6 +181,8 @@ static void blk_atomic_writes_update_limits(struct queue_limits *lim)
+ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
+ {
+ 	unsigned int boundary_sectors;
++	unsigned int atomic_write_hw_max_sectors =
++			lim->atomic_write_hw_max >> SECTOR_SHIFT;
+ 
+ 	if (!(lim->features & BLK_FEAT_ATOMIC_WRITES))
+ 		goto unsupported;
+@@ -202,6 +204,10 @@ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
+ 			 lim->atomic_write_hw_max))
+ 		goto unsupported;
+ 
++	if (WARN_ON_ONCE(lim->chunk_sectors &&
++			atomic_write_hw_max_sectors > lim->chunk_sectors))
++		goto unsupported;
++
+ 	boundary_sectors = lim->atomic_write_hw_boundary >> SECTOR_SHIFT;
+ 
+ 	if (boundary_sectors) {
+@@ -336,12 +342,19 @@ int blk_validate_limits(struct queue_limits *lim)
+ 	lim->max_discard_sectors =
+ 		min(lim->max_hw_discard_sectors, lim->max_user_discard_sectors);
+ 
++	/*
++	 * When discard is not supported, discard_granularity should be reported
++	 * as 0 to userspace.
++	 */
++	if (lim->max_discard_sectors)
++		lim->discard_granularity =
++			max(lim->discard_granularity, lim->physical_block_size);
++	else
++		lim->discard_granularity = 0;
++
+ 	if (!lim->max_discard_segments)
+ 		lim->max_discard_segments = 1;
+ 
+-	if (lim->discard_granularity < lim->physical_block_size)
+-		lim->discard_granularity = lim->physical_block_size;
+-
+ 	/*
+ 	 * By default there is no limit on the segment boundary alignment,
+ 	 * but if there is one it can't be smaller than the page size as
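
The new discard_granularity handling above reduces to a small pure
function: zero when discard is unsupported, otherwise at least the
physical block size. Restated in isolation (sketch only):

static unsigned int discard_granularity(unsigned int max_discard_sectors,
					unsigned int granularity,
					unsigned int physical_block_size)
{
	if (!max_discard_sectors)
		return 0;	/* no discard support: report 0 */
	return granularity > physical_block_size ?
	       granularity : physical_block_size;
}

int main(void)
{
	return discard_granularity(0, 512, 4096);	/* yields 0 */
}
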
+diff --git a/block/blk.h b/block/blk.h
+index 37ec459fe65629..fae7653a941f5a 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -321,7 +321,7 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
+ 
+ bool blk_insert_flush(struct request *rq);
+ 
+-void elv_update_nr_hw_queues(struct request_queue *q);
++void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e);
+ void elevator_set_default(struct request_queue *q);
+ void elevator_set_none(struct request_queue *q);
+ 
+diff --git a/block/elevator.c b/block/elevator.c
+index a960bdc869bcd5..88f8f36bed9818 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -689,21 +689,21 @@ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx)
+  * The I/O scheduler depends on the number of hardware queues, this forces a
+  * reattachment when nr_hw_queues changes.
+  */
+-void elv_update_nr_hw_queues(struct request_queue *q)
++void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e)
+ {
+ 	struct elv_change_ctx ctx = {};
+ 	int ret = -ENODEV;
+ 
+ 	WARN_ON_ONCE(q->mq_freeze_depth == 0);
+ 
+-	mutex_lock(&q->elevator_lock);
+-	if (q->elevator && !blk_queue_dying(q) && blk_queue_registered(q)) {
+-		ctx.name = q->elevator->type->elevator_name;
++	if (e && !blk_queue_dying(q) && blk_queue_registered(q)) {
++		ctx.name = e->elevator_name;
+ 
++		mutex_lock(&q->elevator_lock);
+ 		/* force to reattach elevator after nr_hw_queue is updated */
+ 		ret = elevator_switch(q, &ctx);
++		mutex_unlock(&q->elevator_lock);
+ 	}
+-	mutex_unlock(&q->elevator_lock);
+ 	blk_mq_unfreeze_queue_nomemrestore(q);
+ 	if (!ret)
+ 		WARN_ON_ONCE(elevator_change_done(q, &ctx));
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index bc84a07c924c3b..2f06e6b4f601ee 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -347,6 +347,12 @@ static int ahash_do_req_chain(struct ahash_request *req,
+ 	if (crypto_ahash_statesize(tfm) > HASH_MAX_STATESIZE)
+ 		return -ENOSYS;
+ 
++	if (!crypto_ahash_need_fallback(tfm))
++		return -ENOSYS;
++
++	if (crypto_hash_no_export_core(tfm))
++		return -ENOSYS;
++
+ 	{
+ 		u8 state[HASH_MAX_STATESIZE];
+ 
+@@ -954,6 +960,10 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
+ 	    base->cra_reqsize > MAX_SYNC_HASH_REQSIZE)
+ 		return -EINVAL;
+ 
++	if (base->cra_flags & CRYPTO_ALG_NEED_FALLBACK &&
++	    base->cra_flags & CRYPTO_ALG_NO_FALLBACK)
++		return -EINVAL;
++
+ 	err = hash_prepare_alg(&alg->halg);
+ 	if (err)
+ 		return err;
+@@ -962,7 +972,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
+ 	base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
+ 
+ 	if ((base->cra_flags ^ CRYPTO_ALG_REQ_VIRT) &
+-	    (CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_VIRT))
++	    (CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_VIRT) &&
++	    !(base->cra_flags & CRYPTO_ALG_NO_FALLBACK))
+ 		base->cra_flags |= CRYPTO_ALG_NEED_FALLBACK;
+ 
+ 	if (!alg->setkey)
+diff --git a/crypto/krb5/selftest.c b/crypto/krb5/selftest.c
+index 2a81a6315a0d0b..4519c572d37ef5 100644
+--- a/crypto/krb5/selftest.c
++++ b/crypto/krb5/selftest.c
+@@ -152,6 +152,7 @@ static int krb5_test_one_prf(const struct krb5_prf_test *test)
+ 
+ out:
+ 	clear_buf(&result);
++	clear_buf(&prf);
+ 	clear_buf(&octet);
+ 	clear_buf(&key);
+ 	return ret;
+diff --git a/drivers/base/auxiliary.c b/drivers/base/auxiliary.c
+index dba7c8e13a531f..6bdefebf36091c 100644
+--- a/drivers/base/auxiliary.c
++++ b/drivers/base/auxiliary.c
+@@ -399,6 +399,7 @@ static void auxiliary_device_release(struct device *dev)
+ {
+ 	struct auxiliary_device *auxdev = to_auxiliary_dev(dev);
+ 
++	of_node_put(dev->of_node);
+ 	kfree(auxdev);
+ }
+ 
+@@ -435,6 +436,7 @@ struct auxiliary_device *auxiliary_device_create(struct device *dev,
+ 
+ 	ret = auxiliary_device_init(auxdev);
+ 	if (ret) {
++		of_node_put(auxdev->dev.of_node);
+ 		kfree(auxdev);
+ 		return NULL;
+ 	}
+diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
+index 66ce6b81c7d938..8fc7761397bd7a 100644
+--- a/drivers/block/mtip32xx/mtip32xx.c
++++ b/drivers/block/mtip32xx/mtip32xx.c
+@@ -2040,11 +2040,12 @@ static int mtip_hw_ioctl(struct driver_data *dd, unsigned int cmd,
+  * @dir      Direction (read or write)
+  *
+  * return value
+- *	None
++ *	0	The IO completed successfully.
++ *	-ENOMEM	The DMA mapping failed.
+  */
+-static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
+-			      struct mtip_cmd *command,
+-			      struct blk_mq_hw_ctx *hctx)
++static int mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
++			     struct mtip_cmd *command,
++			     struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct mtip_cmd_hdr *hdr =
+ 		dd->port->command_list + sizeof(struct mtip_cmd_hdr) * rq->tag;
+@@ -2056,12 +2057,14 @@ static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
+ 	unsigned int nents;
+ 
+ 	/* Map the scatter list for DMA access */
+-	nents = blk_rq_map_sg(rq, command->sg);
+-	nents = dma_map_sg(&dd->pdev->dev, command->sg, nents, dma_dir);
++	command->scatter_ents = blk_rq_map_sg(rq, command->sg);
++	nents = dma_map_sg(&dd->pdev->dev, command->sg,
++			   command->scatter_ents, dma_dir);
++	if (!nents)
++		return -ENOMEM;
+ 
+-	prefetch(&port->flags);
+ 
+-	command->scatter_ents = nents;
++	prefetch(&port->flags);
+ 
+ 	/*
+ 	 * The number of retries for this command before it is
+@@ -2112,11 +2115,13 @@ static void mtip_hw_submit_io(struct driver_data *dd, struct request *rq,
+ 	if (unlikely(port->flags & MTIP_PF_PAUSE_IO)) {
+ 		set_bit(rq->tag, port->cmds_to_issue);
+ 		set_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags);
+-		return;
++		return 0;
+ 	}
+ 
+ 	/* Issue the command to the hardware */
+ 	mtip_issue_ncq_command(port, rq->tag);
++
++	return 0;
+ }
+ 
+ /*
+@@ -3315,7 +3320,9 @@ static blk_status_t mtip_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	blk_mq_start_request(rq);
+ 
+-	mtip_hw_submit_io(dd, rq, cmd, hctx);
++	if (mtip_hw_submit_io(dd, rq, cmd, hctx))
++		return BLK_STS_IOERR;
++
+ 	return BLK_STS_OK;
+ }
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 2592bd19ebc151..6463d0e8d0cef7 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1473,7 +1473,17 @@ static int nbd_start_device(struct nbd_device *nbd)
+ 		return -EINVAL;
+ 	}
+ 
+-	blk_mq_update_nr_hw_queues(&nbd->tag_set, config->num_connections);
++retry:
++	mutex_unlock(&nbd->config_lock);
++	blk_mq_update_nr_hw_queues(&nbd->tag_set, num_connections);
++	mutex_lock(&nbd->config_lock);
++
++	/* if another code path updated nr_hw_queues, retry until it succeeds */
++	if (num_connections != config->num_connections) {
++		num_connections = config->num_connections;
++		goto retry;
++	}
++
+ 	nbd->pid = task_pid_nr(current);
+ 
+ 	nbd_parse_flags(nbd);
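
The nbd hunk above drops config_lock around the blocking
blk_mq_update_nr_hw_queues() call and loops until the connection count
it applied matches the current config again. The retry shape as a
pthread sketch (hypothetical names, not kernel code):

#include <pthread.h>

static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;
static int configured = 2;

static void update_nr_hw_queues(int n)
{
	(void)n;	/* expensive, may sleep; must not hold the lock */
}

static void start_device(void)
{
	int applied;

	pthread_mutex_lock(&config_lock);
	applied = configured;
	for (;;) {
		pthread_mutex_unlock(&config_lock);
		update_nr_hw_queues(applied);
		pthread_mutex_lock(&config_lock);
		if (applied == configured)	/* stable: done */
			break;
		applied = configured;		/* raced: apply again */
	}
	/* ... continue device setup with config_lock held ... */
	pthread_mutex_unlock(&config_lock);
}

int main(void)
{
	start_device();
	return 0;
}
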
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 9fd284fa76dcd7..3e60558bf52595 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -216,6 +216,9 @@ struct ublk_device {
+ 	struct completion	completion;
+ 	unsigned int		nr_queues_ready;
+ 	unsigned int		nr_privileged_daemon;
++	struct mutex cancel_mutex;
++	bool canceling;
++	pid_t	ublksrv_tgid;
+ };
+ 
+ /* header of ublk_params */
+@@ -1515,6 +1518,7 @@ static int ublk_ch_open(struct inode *inode, struct file *filp)
+ 	if (test_and_set_bit(UB_STATE_OPEN, &ub->state))
+ 		return -EBUSY;
+ 	filp->private_data = ub;
++	ub->ublksrv_tgid = current->tgid;
+ 	return 0;
+ }
+ 
+@@ -1529,6 +1533,7 @@ static void ublk_reset_ch_dev(struct ublk_device *ub)
+ 	ub->mm = NULL;
+ 	ub->nr_queues_ready = 0;
+ 	ub->nr_privileged_daemon = 0;
++	ub->ublksrv_tgid = -1;
+ }
+ 
+ static struct gendisk *ublk_get_disk(struct ublk_device *ub)
+@@ -1578,6 +1583,7 @@ static int ublk_ch_release(struct inode *inode, struct file *filp)
+ 	 * All requests may be inflight, so ->canceling may not be set, set
+ 	 * it now.
+ 	 */
++	ub->canceling = true;
+ 	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+ 		struct ublk_queue *ubq = ublk_get_queue(ub, i);
+ 
+@@ -1706,23 +1712,18 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
+ 	}
+ }
+ 
+-/* Must be called when queue is frozen */
+-static void ublk_mark_queue_canceling(struct ublk_queue *ubq)
++static void ublk_start_cancel(struct ublk_device *ub)
+ {
+-	spin_lock(&ubq->cancel_lock);
+-	if (!ubq->canceling)
+-		ubq->canceling = true;
+-	spin_unlock(&ubq->cancel_lock);
+-}
+-
+-static void ublk_start_cancel(struct ublk_queue *ubq)
+-{
+-	struct ublk_device *ub = ubq->dev;
+ 	struct gendisk *disk = ublk_get_disk(ub);
++	int i;
+ 
+ 	/* Our disk has been dead */
+ 	if (!disk)
+ 		return;
++
++	mutex_lock(&ub->cancel_mutex);
++	if (ub->canceling)
++		goto out;
+ 	/*
+ 	 * Now we are serialized with ublk_queue_rq()
+ 	 *
+@@ -1731,8 +1732,12 @@ static void ublk_start_cancel(struct ublk_queue *ubq)
+ 	 * touch completed uring_cmd
+ 	 */
+ 	blk_mq_quiesce_queue(disk->queue);
+-	ublk_mark_queue_canceling(ubq);
++	ub->canceling = true;
++	for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
++		ublk_get_queue(ub, i)->canceling = true;
+ 	blk_mq_unquiesce_queue(disk->queue);
++out:
++	mutex_unlock(&ub->cancel_mutex);
+ 	ublk_put_disk(disk);
+ }
+ 
+@@ -1805,8 +1810,7 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
+ 	if (WARN_ON_ONCE(task && task != io->task))
+ 		return;
+ 
+-	if (!ubq->canceling)
+-		ublk_start_cancel(ubq);
++	ublk_start_cancel(ubq->dev);
+ 
+ 	WARN_ON_ONCE(io->cmd != cmd);
+ 	ublk_cancel_cmd(ubq, pdu->tag, issue_flags);
+@@ -1933,6 +1937,7 @@ static void ublk_reset_io_flags(struct ublk_device *ub)
+ 		ubq->canceling = false;
+ 		ubq->fail_io = false;
+ 	}
++	ub->canceling = false;
+ }
+ 
+ /* device can only be started after all IOs are ready */
+@@ -2513,7 +2518,7 @@ static void ublk_deinit_queues(struct ublk_device *ub)
+ 
+ 	for (i = 0; i < nr_queues; i++)
+ 		ublk_deinit_queue(ub, i);
+-	kfree(ub->__queues);
++	kvfree(ub->__queues);
+ }
+ 
+ static int ublk_init_queues(struct ublk_device *ub)
+@@ -2524,7 +2529,7 @@ static int ublk_init_queues(struct ublk_device *ub)
+ 	int i, ret = -ENOMEM;
+ 
+ 	ub->queue_size = ubq_size;
+-	ub->__queues = kcalloc(nr_queues, ubq_size, GFP_KERNEL);
++	ub->__queues = kvcalloc(nr_queues, ubq_size, GFP_KERNEL);
+ 	if (!ub->__queues)
+ 		return ret;
+ 
+@@ -2580,6 +2585,7 @@ static void ublk_cdev_rel(struct device *dev)
+ 	ublk_deinit_queues(ub);
+ 	ublk_free_dev_number(ub);
+ 	mutex_destroy(&ub->mutex);
++	mutex_destroy(&ub->cancel_mutex);
+ 	kfree(ub);
+ }
+ 
+@@ -2729,6 +2735,9 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
+ 	if (wait_for_completion_interruptible(&ub->completion) != 0)
+ 		return -EINTR;
+ 
++	if (ub->ublksrv_tgid != ublksrv_pid)
++		return -EINVAL;
++
+ 	mutex_lock(&ub->mutex);
+ 	if (ub->dev_info.state == UBLK_S_DEV_LIVE ||
+ 	    test_bit(UB_STATE_USED, &ub->state)) {
+@@ -2933,6 +2942,7 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
+ 		goto out_unlock;
+ 	mutex_init(&ub->mutex);
+ 	spin_lock_init(&ub->lock);
++	mutex_init(&ub->cancel_mutex);
+ 
+ 	ret = ublk_alloc_dev_number(ub, header->dev_id);
+ 	if (ret < 0)
+@@ -3003,6 +3013,7 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
+ 	ublk_free_dev_number(ub);
+ out_free_ub:
+ 	mutex_destroy(&ub->mutex);
++	mutex_destroy(&ub->cancel_mutex);
+ 	kfree(ub);
+ out_unlock:
+ 	mutex_unlock(&ublk_ctl_mutex);
+@@ -3227,6 +3238,9 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
+ 	pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__,
+ 		 header->dev_id);
+ 
++	if (ub->ublksrv_tgid != ublksrv_pid)
++		return -EINVAL;
++
+ 	mutex_lock(&ub->mutex);
+ 	if (ublk_nosrv_should_stop_dev(ub))
+ 		goto out_unlock;
+@@ -3357,8 +3371,9 @@ static int ublk_ctrl_quiesce_dev(struct ublk_device *ub,
+ 	if (ub->dev_info.state != UBLK_S_DEV_LIVE)
+ 		goto put_disk;
+ 
+-	/* Mark all queues as canceling */
++	/* Mark the device as canceling */
+ 	blk_mq_quiesce_queue(disk->queue);
++	ub->canceling = true;
+ 	for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+ 		struct ublk_queue *ubq = ublk_get_queue(ub, i);
+ 
+diff --git a/drivers/block/zloop.c b/drivers/block/zloop.c
+index 553b1a713ab915..a423228e201ba3 100644
+--- a/drivers/block/zloop.c
++++ b/drivers/block/zloop.c
+@@ -700,6 +700,8 @@ static void zloop_free_disk(struct gendisk *disk)
+ 	struct zloop_device *zlo = disk->private_data;
+ 	unsigned int i;
+ 
++	blk_mq_free_tag_set(&zlo->tag_set);
++
+ 	for (i = 0; i < zlo->nr_zones; i++) {
+ 		struct zloop_zone *zone = &zlo->zones[i];
+ 
+@@ -1080,7 +1082,6 @@ static int zloop_ctl_remove(struct zloop_options *opts)
+ 
+ 	del_gendisk(zlo->disk);
+ 	put_disk(zlo->disk);
+-	blk_mq_free_tag_set(&zlo->tag_set);
+ 
+ 	pr_info("Removed device %d\n", opts->id);
+ 
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 06016ac3965c39..6aceecf5a13def 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -889,7 +889,7 @@ int btintel_send_intel_reset(struct hci_dev *hdev, u32 boot_param)
+ 
+ 	params.boot_param = cpu_to_le32(boot_param);
+ 
+-	skb = __hci_cmd_sync(hdev, 0xfc01, sizeof(params), &params,
++	skb = __hci_cmd_sync(hdev, BTINTEL_HCI_OP_RESET, sizeof(params), &params,
+ 			     HCI_INIT_TIMEOUT);
+ 	if (IS_ERR(skb)) {
+ 		bt_dev_err(hdev, "Failed to send Intel Reset command");
+@@ -1287,7 +1287,7 @@ static void btintel_reset_to_bootloader(struct hci_dev *hdev)
+ 	params.boot_option = 0x00;
+ 	params.boot_param = cpu_to_le32(0x00000000);
+ 
+-	skb = __hci_cmd_sync(hdev, 0xfc01, sizeof(params),
++	skb = __hci_cmd_sync(hdev, BTINTEL_HCI_OP_RESET, sizeof(params),
+ 			     &params, HCI_INIT_TIMEOUT);
+ 	if (IS_ERR(skb)) {
+ 		bt_dev_err(hdev, "FW download error recovery failed (%ld)",
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index 1d12c4113c669e..431998049e686b 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -52,6 +52,8 @@ struct intel_tlv {
+ 	u8 val[];
+ } __packed;
+ 
++#define BTINTEL_HCI_OP_RESET	0xfc01
++
+ #define BTINTEL_CNVI_BLAZARI		0x900
+ #define BTINTEL_CNVI_BLAZARIW		0x901
+ #define BTINTEL_CNVI_GAP		0x910
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index f4e3fb54fe766d..7f789937a764ba 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -928,11 +928,13 @@ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ 	case BTINTEL_PCIE_INTEL_HCI_RESET1:
+ 		if (btintel_pcie_in_op(data)) {
+ 			submit_rx = true;
++			signal_waitq = true;
+ 			break;
+ 		}
+ 
+ 		if (btintel_pcie_in_iml(data)) {
+ 			submit_rx = true;
++			signal_waitq = true;
+ 			data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
+ 			break;
+ 		}
+@@ -1955,16 +1957,19 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ 			struct hci_command_hdr *cmd = (void *)skb->data;
+ 			__u16 opcode = le16_to_cpu(cmd->opcode);
+ 
+-			/* When the 0xfc01 command is issued to boot into
+-			 * the operational firmware, it will actually not
+-			 * send a command complete event. To keep the flow
++			/* When the BTINTEL_HCI_OP_RESET command is issued to
++			 * boot into the operational firmware, it will actually
++			 * not send a command complete event. To keep the flow
+ 			 * control working inject that event here.
+ 			 */
+-			if (opcode == 0xfc01)
++			if (opcode == BTINTEL_HCI_OP_RESET)
+ 				btintel_pcie_inject_cmd_complete(hdev, opcode);
+ 		}
+-		/* Firmware raises alive interrupt on HCI_OP_RESET */
+-		if (opcode == HCI_OP_RESET)
++
++		/* Firmware raises alive interrupt on HCI_OP_RESET or
++		 * BTINTEL_HCI_OP_RESET
++		 */
++		if (opcode == HCI_OP_RESET || opcode == BTINTEL_HCI_OP_RESET)
+ 			data->gp0_received = false;
+ 
+ 		hdev->stat.cmd_tx++;
+@@ -1995,25 +2000,24 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ 	}
+ 
+ 	if (type == BTINTEL_PCIE_HCI_CMD_PKT &&
+-	    (opcode == HCI_OP_RESET || opcode == 0xfc01)) {
++	    (opcode == HCI_OP_RESET || opcode == BTINTEL_HCI_OP_RESET)) {
+ 		old_ctxt = data->alive_intr_ctxt;
+ 		data->alive_intr_ctxt =
+-			(opcode == 0xfc01 ? BTINTEL_PCIE_INTEL_HCI_RESET1 :
++			(opcode == BTINTEL_HCI_OP_RESET ? BTINTEL_PCIE_INTEL_HCI_RESET1 :
+ 				BTINTEL_PCIE_HCI_RESET);
+ 		bt_dev_dbg(data->hdev, "sent cmd: 0x%4.4x alive context changed: %s  ->  %s",
+ 			   opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
+ 			   btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+-		if (opcode == HCI_OP_RESET) {
+-			ret = wait_event_timeout(data->gp0_wait_q,
+-						 data->gp0_received,
+-						 msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+-			if (!ret) {
+-				hdev->stat.err_tx++;
+-				bt_dev_err(hdev, "No alive interrupt received for %s",
+-					   btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+-				ret = -ETIME;
+-				goto exit_error;
+-			}
++		ret = wait_event_timeout(data->gp0_wait_q,
++					 data->gp0_received,
++					 msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
++		if (!ret) {
++			hdev->stat.err_tx++;
++			bt_dev_err(hdev, "Timeout on alive interrupt (%u ms). Alive context: %s",
++				   BTINTEL_DEFAULT_INTR_TIMEOUT_MS,
++				   btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++			ret = -ETIME;
++			goto exit_error;
+ 		}
+ 	}
+ 	hdev->stat.byte_tx += skb->len;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f9eeec0aed57da..e8977fff421226 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -516,6 +516,10 @@ static const struct usb_device_id quirks_table[] = {
+ 	{ USB_DEVICE(0x0bda, 0xb850), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3600), .driver_info = BTUSB_REALTEK },
+ 
++	/* Realtek 8851BU Bluetooth devices */
++	{ USB_DEVICE(0x3625, 0x010b), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	/* Realtek 8852AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0x2852), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+@@ -2594,12 +2598,12 @@ static int btusb_send_frame_intel(struct hci_dev *hdev, struct sk_buff *skb)
+ 			else
+ 				urb = alloc_ctrl_urb(hdev, skb);
+ 
+-			/* When the 0xfc01 command is issued to boot into
+-			 * the operational firmware, it will actually not
+-			 * send a command complete event. To keep the flow
++			/* When the BTINTEL_HCI_OP_RESET command is issued to
++			 * boot into the operational firmware, it will actually
++			 * not send a command complete event. To keep the flow
+ 			 * control working inject that event here.
+ 			 */
+-			if (opcode == 0xfc01)
++			if (opcode == BTINTEL_HCI_OP_RESET)
+ 				inject_cmd_complete(hdev, opcode);
+ 		} else {
+ 			urb = alloc_ctrl_urb(hdev, skb);
+@@ -3802,6 +3806,8 @@ static int btusb_hci_drv_supported_altsettings(struct hci_dev *hdev, void *data,
+ 
+ 	/* There are at most 7 alt (0 - 6) */
+ 	rp = kmalloc(sizeof(*rp) + 7, GFP_KERNEL);
++	if (!rp)
++		return -ENOMEM;
+ 
+ 	rp->num = 0;
+ 	if (!drvdata->isoc)
+diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c
+index d22fbb7f9fc5e1..9b353c3d644210 100644
+--- a/drivers/bluetooth/hci_intel.c
++++ b/drivers/bluetooth/hci_intel.c
+@@ -1029,12 +1029,12 @@ static struct sk_buff *intel_dequeue(struct hci_uart *hu)
+ 		struct hci_command_hdr *cmd = (void *)skb->data;
+ 		__u16 opcode = le16_to_cpu(cmd->opcode);
+ 
+-		/* When the 0xfc01 command is issued to boot into
+-		 * the operational firmware, it will actually not
+-		 * send a command complete event. To keep the flow
+-		 * control working inject that event here.
++		/* When the BTINTEL_HCI_OP_RESET command is issued to boot into
++		 * the operational firmware, it will actually not send a command
++		 * complete event. To keep the flow control working inject that
++		 * event here.
+ 		 */
+-		if (opcode == 0xfc01)
++		if (opcode == BTINTEL_HCI_OP_RESET)
+ 			inject_cmd_complete(hu->hdev, opcode);
+ 	}
+ 
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 589cb672231643..92bd133e7c456e 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -593,8 +593,8 @@ static const struct mhi_pci_dev_info mhi_foxconn_dw5932e_info = {
+ 	.sideband_wake = false,
+ };
+ 
+-static const struct mhi_pci_dev_info mhi_foxconn_t99w515_info = {
+-	.name = "foxconn-t99w515",
++static const struct mhi_pci_dev_info mhi_foxconn_t99w640_info = {
++	.name = "foxconn-t99w640",
+ 	.edl = "qcom/sdx72m/foxconn/edl.mbn",
+ 	.edl_trigger = true,
+ 	.config = &modem_foxconn_sdx72_config,
+@@ -920,9 +920,9 @@ static const struct pci_device_id mhi_pci_id_table[] = {
+ 	/* DW5932e (sdx62), Non-eSIM */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0f9),
+ 		.driver_data = (kernel_ulong_t) &mhi_foxconn_dw5932e_info },
+-	/* T99W515 (sdx72) */
++	/* T99W640 (sdx72) */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe118),
+-		.driver_data = (kernel_ulong_t) &mhi_foxconn_t99w515_info },
++		.driver_data = (kernel_ulong_t) &mhi_foxconn_t99w640_info },
+ 	/* DW5934e(sdx72), With eSIM */
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe11d),
+ 		.driver_data = (kernel_ulong_t) &mhi_foxconn_dw5934e_info },
+diff --git a/drivers/char/hw_random/mtk-rng.c b/drivers/char/hw_random/mtk-rng.c
+index b7fa1bc1122bd3..d09a4d81376669 100644
+--- a/drivers/char/hw_random/mtk-rng.c
++++ b/drivers/char/hw_random/mtk-rng.c
+@@ -143,7 +143,9 @@ static int mtk_rng_probe(struct platform_device *pdev)
+ 	dev_set_drvdata(&pdev->dev, priv);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+-	devm_pm_runtime_enable(&pdev->dev);
++	ret = devm_pm_runtime_enable(&pdev->dev);
++	if (ret)
++		return ret;
+ 
+ 	dev_info(&pdev->dev, "registered RNG driver\n");
+ 
+diff --git a/drivers/clk/at91/sam9x7.c b/drivers/clk/at91/sam9x7.c
+index cbb8b220f16bcd..ffab32b047a017 100644
+--- a/drivers/clk/at91/sam9x7.c
++++ b/drivers/clk/at91/sam9x7.c
+@@ -61,44 +61,44 @@ static const struct clk_master_layout sam9x7_master_layout = {
+ 
+ /* Fractional PLL core output range. */
+ static const struct clk_range plla_core_outputs[] = {
+-	{ .min = 375000000, .max = 1600000000 },
++	{ .min = 800000000, .max = 1600000000 },
+ };
+ 
+ static const struct clk_range upll_core_outputs[] = {
+-	{ .min = 600000000, .max = 1200000000 },
++	{ .min = 600000000, .max = 960000000 },
+ };
+ 
+ static const struct clk_range lvdspll_core_outputs[] = {
+-	{ .min = 400000000, .max = 800000000 },
++	{ .min = 600000000, .max = 1200000000 },
+ };
+ 
+ static const struct clk_range audiopll_core_outputs[] = {
+-	{ .min = 400000000, .max = 800000000 },
++	{ .min = 600000000, .max = 1200000000 },
+ };
+ 
+ static const struct clk_range plladiv2_core_outputs[] = {
+-	{ .min = 375000000, .max = 1600000000 },
++	{ .min = 800000000, .max = 1600000000 },
+ };
+ 
+ /* Fractional PLL output range. */
+ static const struct clk_range plla_outputs[] = {
+-	{ .min = 732421, .max = 800000000 },
++	{ .min = 400000000, .max = 800000000 },
+ };
+ 
+ static const struct clk_range upll_outputs[] = {
+-	{ .min = 300000000, .max = 600000000 },
++	{ .min = 300000000, .max = 480000000 },
+ };
+ 
+ static const struct clk_range lvdspll_outputs[] = {
+-	{ .min = 10000000, .max = 800000000 },
++	{ .min = 175000000, .max = 550000000 },
+ };
+ 
+ static const struct clk_range audiopll_outputs[] = {
+-	{ .min = 10000000, .max = 800000000 },
++	{ .min = 0, .max = 300000000 },
+ };
+ 
+ static const struct clk_range plladiv2_outputs[] = {
+-	{ .min = 366210, .max = 400000000 },
++	{ .min = 200000000, .max = 400000000 },
+ };
+ 
+ /* PLL characteristics. */
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index 934e53a96dddac..00bf799964c61a 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -118,7 +118,7 @@ static const struct axi_clkgen_limits axi_clkgen_zynqmp_default_limits = {
+ 
+ static const struct axi_clkgen_limits axi_clkgen_zynq_default_limits = {
+ 	.fpfd_min = 10000,
+-	.fpfd_max = 300000,
++	.fpfd_max = 450000,
+ 	.fvco_min = 600000,
+ 	.fvco_max = 1200000,
+ };
+diff --git a/drivers/clk/davinci/psc.c b/drivers/clk/davinci/psc.c
+index b48322176c2101..f3ee9397bb0c78 100644
+--- a/drivers/clk/davinci/psc.c
++++ b/drivers/clk/davinci/psc.c
+@@ -277,6 +277,11 @@ davinci_lpsc_clk_register(struct device *dev, const char *name,
+ 
+ 	lpsc->pm_domain.name = devm_kasprintf(dev, GFP_KERNEL, "%s: %s",
+ 					      best_dev_name(dev), name);
++	if (!lpsc->pm_domain.name) {
++		clk_hw_unregister(&lpsc->hw);
++		kfree(lpsc);
++		return ERR_PTR(-ENOMEM);
++	}
+ 	lpsc->pm_domain.attach_dev = davinci_psc_genpd_attach_dev;
+ 	lpsc->pm_domain.detach_dev = davinci_psc_genpd_detach_dev;
+ 	lpsc->pm_domain.flags = GENPD_FLAG_PM_CLK;
+diff --git a/drivers/clk/imx/clk-imx95-blk-ctl.c b/drivers/clk/imx/clk-imx95-blk-ctl.c
+index cc2ee2be18195f..86bdcd21753102 100644
+--- a/drivers/clk/imx/clk-imx95-blk-ctl.c
++++ b/drivers/clk/imx/clk-imx95-blk-ctl.c
+@@ -342,8 +342,10 @@ static int imx95_bc_probe(struct platform_device *pdev)
+ 	if (!clk_hw_data)
+ 		return -ENOMEM;
+ 
+-	if (bc_data->rpm_enabled)
+-		pm_runtime_enable(&pdev->dev);
++	if (bc_data->rpm_enabled) {
++		devm_pm_runtime_enable(&pdev->dev);
++		pm_runtime_resume_and_get(&pdev->dev);
++	}
+ 
+ 	clk_hw_data->num = bc_data->num_clks;
+ 	hws = clk_hw_data->hws;
+@@ -383,8 +385,10 @@ static int imx95_bc_probe(struct platform_device *pdev)
+ 		goto cleanup;
+ 	}
+ 
+-	if (pm_runtime_enabled(bc->dev))
++	if (pm_runtime_enabled(bc->dev)) {
++		pm_runtime_put_sync(&pdev->dev);
+ 		clk_disable_unprepare(bc->clk_apb);
++	}
+ 
+ 	return 0;
+ 
+@@ -395,9 +399,6 @@ static int imx95_bc_probe(struct platform_device *pdev)
+ 		clk_hw_unregister(hws[i]);
+ 	}
+ 
+-	if (bc_data->rpm_enabled)
+-		pm_runtime_disable(&pdev->dev);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/clk/renesas/rzv2h-cpg.c b/drivers/clk/renesas/rzv2h-cpg.c
+index bcc496e8cbcda3..fb39e6446b269a 100644
+--- a/drivers/clk/renesas/rzv2h-cpg.c
++++ b/drivers/clk/renesas/rzv2h-cpg.c
+@@ -381,6 +381,7 @@ rzv2h_cpg_ddiv_clk_register(const struct cpg_core_clk *core,
+ 		init.ops = &rzv2h_ddiv_clk_divider_ops;
+ 	init.parent_names = &parent_name;
+ 	init.num_parents = 1;
++	init.flags = CLK_SET_RATE_PARENT;
+ 
+ 	ddiv->priv = priv;
+ 	ddiv->mon = cfg_ddiv.monbit;
+diff --git a/drivers/clk/spacemit/ccu-k1.c b/drivers/clk/spacemit/ccu-k1.c
+index cdde37a0523537..df65009a07bb1c 100644
+--- a/drivers/clk/spacemit/ccu-k1.c
++++ b/drivers/clk/spacemit/ccu-k1.c
+@@ -170,7 +170,8 @@ CCU_FACTOR_GATE_DEFINE(pll1_d4, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(3), 4,
+ CCU_FACTOR_GATE_DEFINE(pll1_d5, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(4), 5, 1);
+ CCU_FACTOR_GATE_DEFINE(pll1_d6, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(5), 6, 1);
+ CCU_FACTOR_GATE_DEFINE(pll1_d7, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(6), 7, 1);
+-CCU_FACTOR_GATE_DEFINE(pll1_d8, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(7), 8, 1);
++CCU_FACTOR_GATE_FLAGS_DEFINE(pll1_d8, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(7), 8, 1,
++		CLK_IS_CRITICAL);
+ CCU_FACTOR_GATE_DEFINE(pll1_d11_223p4, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(15), 11, 1);
+ CCU_FACTOR_GATE_DEFINE(pll1_d13_189, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(16), 13, 1);
+ CCU_FACTOR_GATE_DEFINE(pll1_d23_106p8, CCU_PARENT_HW(pll1), APBS_PLL1_SWCR2, BIT(20), 23, 1);
+diff --git a/drivers/clk/spacemit/ccu_mix.h b/drivers/clk/spacemit/ccu_mix.h
+index 51d19f5d6aacb7..54d40cd39b2752 100644
+--- a/drivers/clk/spacemit/ccu_mix.h
++++ b/drivers/clk/spacemit/ccu_mix.h
+@@ -101,17 +101,22 @@ static struct ccu_mix _name = {							\
+ 	}									\
+ }
+ 
+-#define CCU_FACTOR_GATE_DEFINE(_name, _parent, _reg_ctrl, _mask_gate, _div,	\
+-			       _mul)						\
++#define CCU_FACTOR_GATE_FLAGS_DEFINE(_name, _parent, _reg_ctrl, _mask_gate, _div,	\
++			       _mul, _flags)					\
+ static struct ccu_mix _name = {							\
+ 	.gate	= CCU_GATE_INIT(_mask_gate),					\
+ 	.factor	= CCU_FACTOR_INIT(_div, _mul),					\
+ 	.common = {								\
+ 		.reg_ctrl	= _reg_ctrl,					\
+-		CCU_MIX_INITHW(_name, _parent, spacemit_ccu_factor_gate_ops, 0)	\
++		CCU_MIX_INITHW(_name, _parent, spacemit_ccu_factor_gate_ops, _flags)	\
+ 	}									\
+ }
+ 
++#define CCU_FACTOR_GATE_DEFINE(_name, _parent, _reg_ctrl, _mask_gate, _div,	\
++			       _mul)						\
++	CCU_FACTOR_GATE_FLAGS_DEFINE(_name, _parent, _reg_ctrl, _mask_gate, _div,	\
++			       _mul, 0)
++
+ #define CCU_MUX_GATE_DEFINE(_name, _parents, _reg_ctrl, _shift, _width,		\
+ 			    _mask_gate, _flags)					\
+ static struct ccu_mix _name = {							\
+diff --git a/drivers/clk/spacemit/ccu_pll.c b/drivers/clk/spacemit/ccu_pll.c
+index 4427dcfbbb97f3..45f540073a656c 100644
+--- a/drivers/clk/spacemit/ccu_pll.c
++++ b/drivers/clk/spacemit/ccu_pll.c
+@@ -122,7 +122,7 @@ static unsigned long ccu_pll_recalc_rate(struct clk_hw *hw,
+ 
+ 	WARN_ON_ONCE(!entry);
+ 
+-	return entry ? entry->rate : -EINVAL;
++	return entry ? entry->rate : 0;
+ }
+ 
+ static long ccu_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+index 52e4369664c587..df345a620d8dd4 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+@@ -347,8 +347,7 @@ static SUNXI_CCU_GATE(dram_ohci_clk,	"dram-ohci",	"dram",
+ 
+ static const char * const de_parents[] = { "pll-video", "pll-periph0" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(de_clk, "de", de_parents,
+-				 0x104, 0, 4, 24, 2, BIT(31),
+-				 CLK_SET_RATE_PARENT);
++				 0x104, 0, 4, 24, 3, BIT(31), 0);
+ 
+ static const char * const tcon_parents[] = { "pll-video", "pll-periph0" };
+ static SUNXI_CCU_M_WITH_MUX_GATE(tcon_clk, "tcon", tcon_parents,
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index ebfb1d59401d05..485b1d5cfd18c6 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -42,8 +42,9 @@ struct ccu_common {
+ };
+ 
+ struct ccu_mux {
+-	struct ccu_internal	mux;
+-	struct ccu_common	common;
++	int			clkid;
++	u32			reg;
++	struct clk_mux		mux;
+ };
+ 
+ struct ccu_gate {
+@@ -75,6 +76,17 @@ struct ccu_pll {
+ 		.flags	= _flags,					\
+ 	}
+ 
++#define TH_CCU_MUX(_name, _parents, _shift, _width)			\
++	{								\
++		.mask		= GENMASK(_width - 1, 0),		\
++		.shift		= _shift,				\
++		.hw.init	= CLK_HW_INIT_PARENTS_DATA(		\
++					_name,				\
++					_parents,			\
++					&clk_mux_ops,			\
++					0),				\
++	}
++
+ #define CCU_GATE(_clkid, _struct, _name, _parent, _reg, _gate, _flags)	\
+ 	struct ccu_gate _struct = {					\
+ 		.enable	= _gate,					\
+@@ -94,13 +106,6 @@ static inline struct ccu_common *hw_to_ccu_common(struct clk_hw *hw)
+ 	return container_of(hw, struct ccu_common, hw);
+ }
+ 
+-static inline struct ccu_mux *hw_to_ccu_mux(struct clk_hw *hw)
+-{
+-	struct ccu_common *common = hw_to_ccu_common(hw);
+-
+-	return container_of(common, struct ccu_mux, common);
+-}
+-
+ static inline struct ccu_pll *hw_to_ccu_pll(struct clk_hw *hw)
+ {
+ 	struct ccu_common *common = hw_to_ccu_common(hw);
+@@ -415,32 +420,20 @@ static const struct clk_parent_data c910_i0_parents[] = {
+ };
+ 
+ static struct ccu_mux c910_i0_clk = {
+-	.mux	= TH_CCU_ARG(1, 1),
+-	.common	= {
+-		.clkid		= CLK_C910_I0,
+-		.cfg0		= 0x100,
+-		.hw.init	= CLK_HW_INIT_PARENTS_DATA("c910-i0",
+-					      c910_i0_parents,
+-					      &clk_mux_ops,
+-					      0),
+-	}
++	.clkid	= CLK_C910_I0,
++	.reg	= 0x100,
++	.mux	= TH_CCU_MUX("c910-i0", c910_i0_parents, 1, 1),
+ };
+ 
+ static const struct clk_parent_data c910_parents[] = {
+-	{ .hw = &c910_i0_clk.common.hw },
++	{ .hw = &c910_i0_clk.mux.hw },
+ 	{ .hw = &cpu_pll1_clk.common.hw }
+ };
+ 
+ static struct ccu_mux c910_clk = {
+-	.mux	= TH_CCU_ARG(0, 1),
+-	.common	= {
+-		.clkid		= CLK_C910,
+-		.cfg0		= 0x100,
+-		.hw.init	= CLK_HW_INIT_PARENTS_DATA("c910",
+-					      c910_parents,
+-					      &clk_mux_ops,
+-					      0),
+-	}
++	.clkid	= CLK_C910,
++	.reg	= 0x100,
++	.mux	= TH_CCU_MUX("c910", c910_parents, 0, 1),
+ };
+ 
+ static const struct clk_parent_data ahb2_cpusys_parents[] = {
+@@ -582,7 +575,14 @@ static const struct clk_parent_data peri2sys_apb_pclk_pd[] = {
+ 	{ .hw = &peri2sys_apb_pclk.common.hw }
+ };
+ 
+-static CLK_FIXED_FACTOR_FW_NAME(osc12m_clk, "osc_12m", "osc_24m", 2, 1, 0);
++static struct clk_fixed_factor osc12m_clk = {
++	.div		= 2,
++	.mult		= 1,
++	.hw.init	= CLK_HW_INIT_PARENTS_DATA("osc_12m",
++						   osc_24m_clk,
++						   &clk_fixed_factor_ops,
++						   0),
++};
+ 
+ static const char * const out_parents[] = { "osc_24m", "osc_12m" };
+ 
+@@ -917,15 +917,9 @@ static const struct clk_parent_data uart_sclk_parents[] = {
+ };
+ 
+ static struct ccu_mux uart_sclk = {
+-	.mux	= TH_CCU_ARG(0, 1),
+-	.common	= {
+-		.clkid          = CLK_UART_SCLK,
+-		.cfg0		= 0x210,
+-		.hw.init	= CLK_HW_INIT_PARENTS_DATA("uart-sclk",
+-					      uart_sclk_parents,
+-					      &clk_mux_ops,
+-					      0),
+-	}
++	.clkid	= CLK_UART_SCLK,
++	.reg	= 0x210,
++	.mux	= TH_CCU_MUX("uart-sclk", uart_sclk_parents, 0, 1),
+ };
+ 
+ static struct ccu_common *th1520_pll_clks[] = {
+@@ -962,10 +956,10 @@ static struct ccu_common *th1520_div_clks[] = {
+ 	&dpu1_clk.common,
+ };
+ 
+-static struct ccu_common *th1520_mux_clks[] = {
+-	&c910_i0_clk.common,
+-	&c910_clk.common,
+-	&uart_sclk.common,
++static struct ccu_mux *th1520_mux_clks[] = {
++	&c910_i0_clk,
++	&c910_clk,
++	&uart_sclk,
+ };
+ 
+ static struct ccu_common *th1520_gate_clks[] = {
+@@ -1067,7 +1061,7 @@ static const struct regmap_config th1520_clk_regmap_config = {
+ struct th1520_plat_data {
+ 	struct ccu_common **th1520_pll_clks;
+ 	struct ccu_common **th1520_div_clks;
+-	struct ccu_common **th1520_mux_clks;
++	struct ccu_mux	  **th1520_mux_clks;
+ 	struct ccu_common **th1520_gate_clks;
+ 
+ 	int nr_clks;
+@@ -1154,23 +1148,15 @@ static int th1520_clk_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	for (i = 0; i < plat_data->nr_mux_clks; i++) {
+-		struct ccu_mux *cm = hw_to_ccu_mux(&plat_data->th1520_mux_clks[i]->hw);
+-		const struct clk_init_data *init = cm->common.hw.init;
+-
+-		plat_data->th1520_mux_clks[i]->map = map;
+-		hw = devm_clk_hw_register_mux_parent_data_table(dev,
+-								init->name,
+-								init->parent_data,
+-								init->num_parents,
+-								0,
+-								base + cm->common.cfg0,
+-								cm->mux.shift,
+-								cm->mux.width,
+-								0, NULL, NULL);
+-		if (IS_ERR(hw))
+-			return PTR_ERR(hw);
++		struct ccu_mux *cm = plat_data->th1520_mux_clks[i];
++
++		cm->mux.reg = base + cm->reg;
++
++		ret = devm_clk_hw_register(dev, &cm->mux.hw);
++		if (ret)
++			return ret;
+ 
+-		priv->hws[cm->common.clkid] = hw;
++		priv->hws[cm->clkid] = &cm->mux.hw;
+ 	}
+ 
+ 	for (i = 0; i < plat_data->nr_gate_clks; i++) {
+diff --git a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+index bbf7714480e7b7..0295a13a811cf8 100644
+--- a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
++++ b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+@@ -669,7 +669,7 @@ static long clk_wzrd_ver_round_rate_all(struct clk_hw *hw, unsigned long rate,
+ 	u32 m, d, o, div, f;
+ 	int err;
+ 
+-	err = clk_wzrd_get_divisors(hw, rate, *prate);
++	err = clk_wzrd_get_divisors_ver(hw, rate, *prate);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/clk/xilinx/xlnx_vcu.c b/drivers/clk/xilinx/xlnx_vcu.c
+index 81501b48412ee6..88b3fd8250c202 100644
+--- a/drivers/clk/xilinx/xlnx_vcu.c
++++ b/drivers/clk/xilinx/xlnx_vcu.c
+@@ -587,8 +587,8 @@ static void xvcu_unregister_clock_provider(struct xvcu_device *xvcu)
+ 		xvcu_clk_hw_unregister_leaf(hws[CLK_XVCU_ENC_MCU]);
+ 	if (!IS_ERR_OR_NULL(hws[CLK_XVCU_ENC_CORE]))
+ 		xvcu_clk_hw_unregister_leaf(hws[CLK_XVCU_ENC_CORE]);
+-
+-	clk_hw_unregister_fixed_factor(xvcu->pll_post);
++	if (!IS_ERR_OR_NULL(xvcu->pll_post))
++		clk_hw_unregister_fixed_factor(xvcu->pll_post);
+ }
+ 
+ /**
+diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
+index d38526b8e06337..681d687b5a18e9 100644
+--- a/drivers/cpufreq/Makefile
++++ b/drivers/cpufreq/Makefile
+@@ -21,6 +21,7 @@ obj-$(CONFIG_CPUFREQ_VIRT)		+= virtual-cpufreq.o
+ 
+ # Traces
+ CFLAGS_amd-pstate-trace.o               := -I$(src)
++CFLAGS_powernv-cpufreq.o                := -I$(src)
+ amd_pstate-y				:= amd-pstate.o amd-pstate-trace.o
+ 
+ ##################################################################################
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index 5a3545bd0d8d20..006f4c554dd7e9 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -132,7 +132,7 @@ static int __init armada_8k_cpufreq_init(void)
+ 	int ret = 0, opps_index = 0, cpu, nb_cpus;
+ 	struct freq_table *freq_tables;
+ 	struct device_node *node;
+-	static struct cpumask cpus;
++	static struct cpumask cpus, shared_cpus;
+ 
+ 	node = of_find_matching_node_and_match(NULL, armada_8k_cpufreq_of_match,
+ 					       NULL);
+@@ -154,7 +154,6 @@ static int __init armada_8k_cpufreq_init(void)
+ 	 * divisions of it).
+ 	 */
+ 	for_each_cpu(cpu, &cpus) {
+-		struct cpumask shared_cpus;
+ 		struct device *cpu_dev;
+ 		struct clk *clk;
+ 
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index d7426e1d8bdd26..c1c6f11ac551bb 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1284,6 +1284,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 		goto err_free_real_cpus;
+ 	}
+ 
++	init_rwsem(&policy->rwsem);
++
+ 	freq_constraints_init(&policy->constraints);
+ 
+ 	policy->nb_min.notifier_call = cpufreq_notifier_min;
+@@ -1306,7 +1308,6 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 	}
+ 
+ 	INIT_LIST_HEAD(&policy->policy_list);
+-	init_rwsem(&policy->rwsem);
+ 	spin_lock_init(&policy->transition_lock);
+ 	init_waitqueue_head(&policy->transition_wait);
+ 	INIT_WORK(&policy->update, handle_update);
+@@ -2944,15 +2945,6 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	cpufreq_driver = driver_data;
+ 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
+ 
+-	/*
+-	 * Mark support for the scheduler's frequency invariance engine for
+-	 * drivers that implement target(), target_index() or fast_switch().
+-	 */
+-	if (!cpufreq_driver->setpolicy) {
+-		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+-		pr_debug("supports frequency invariance");
+-	}
+-
+ 	if (driver_data->setpolicy)
+ 		driver_data->flags |= CPUFREQ_CONST_LOOPS;
+ 
+@@ -2983,6 +2975,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	hp_online = ret;
+ 	ret = 0;
+ 
++	/*
++	 * Mark support for the scheduler's frequency invariance engine for
++	 * drivers that implement target(), target_index() or fast_switch().
++	 */
++	if (!cpufreq_driver->setpolicy) {
++		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++		pr_debug("supports frequency invariance");
++	}
++
+ 	pr_debug("driver %s up and running\n", driver_data->name);
+ 	goto out;
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 64587d31826725..60326ab5475f9d 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -3249,8 +3249,8 @@ static int intel_cpufreq_update_pstate(struct cpufreq_policy *policy,
+ 		int max_pstate = policy->strict_target ?
+ 					target_pstate : cpu->max_perf_ratio;
+ 
+-		intel_cpufreq_hwp_update(cpu, target_pstate, max_pstate, 0,
+-					 fast_switch);
++		intel_cpufreq_hwp_update(cpu, target_pstate, max_pstate,
++					 target_pstate, fast_switch);
+ 	} else if (target_pstate != old_pstate) {
+ 		intel_cpufreq_perf_ctl_update(cpu, target_pstate, fast_switch);
+ 	}
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index a8943e2a93be6a..7d9a5f656de89d 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -21,7 +21,6 @@
+ #include <linux/string_choices.h>
+ #include <linux/cpu.h>
+ #include <linux/hashtable.h>
+-#include <trace/events/power.h>
+ 
+ #include <asm/cputhreads.h>
+ #include <asm/firmware.h>
+@@ -30,6 +29,9 @@
+ #include <asm/opal.h>
+ #include <linux/timer.h>
+ 
++#define CREATE_TRACE_POINTS
++#include "powernv-trace.h"
++
+ #define POWERNV_MAX_PSTATES_ORDER  8
+ #define POWERNV_MAX_PSTATES	(1UL << (POWERNV_MAX_PSTATES_ORDER))
+ #define PMSR_PSAFE_ENABLE	(1UL << 30)
+diff --git a/drivers/cpufreq/powernv-trace.h b/drivers/cpufreq/powernv-trace.h
+new file mode 100644
+index 00000000000000..8cadb7c9427b36
+--- /dev/null
++++ b/drivers/cpufreq/powernv-trace.h
+@@ -0,0 +1,44 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#if !defined(_POWERNV_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
++#define _POWERNV_TRACE_H
++
++#include <linux/cpufreq.h>
++#include <linux/tracepoint.h>
++#include <linux/trace_events.h>
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM power
++
++TRACE_EVENT(powernv_throttle,
++
++	TP_PROTO(int chip_id, const char *reason, int pmax),
++
++	TP_ARGS(chip_id, reason, pmax),
++
++	TP_STRUCT__entry(
++		__field(int, chip_id)
++		__string(reason, reason)
++		__field(int, pmax)
++	),
++
++	TP_fast_assign(
++		__entry->chip_id = chip_id;
++		__assign_str(reason);
++		__entry->pmax = pmax;
++	),
++
++	TP_printk("Chip %d Pmax %d %s", __entry->chip_id,
++		  __entry->pmax, __get_str(reason))
++);
++
++#endif /* _POWERNV_TRACE_H */
++
++/* This part must be outside protection */
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE powernv-trace
++
++#include <trace/define_trace.h>
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index f9cf00d690e2fd..7cd3b13f3bdc65 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -278,8 +278,8 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
+ 	}
+ 
+ 	chan->timeout = areq->cryptlen;
+-	rctx->nr_sgs = nr_sgs;
+-	rctx->nr_sgd = nr_sgd;
++	rctx->nr_sgs = ns;
++	rctx->nr_sgd = nd;
+ 	return 0;
+ 
+ theend_sgs:
+diff --git a/drivers/crypto/ccp/ccp-debugfs.c b/drivers/crypto/ccp/ccp-debugfs.c
+index a1055554b47a24..dc26bc22c91d1d 100644
+--- a/drivers/crypto/ccp/ccp-debugfs.c
++++ b/drivers/crypto/ccp/ccp-debugfs.c
+@@ -319,5 +319,8 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
+ 
+ void ccp5_debugfs_destroy(void)
+ {
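++	/* Remove the directory under the lock and clear the stale pointer */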
++	mutex_lock(&ccp_debugfs_lock);
+ 	debugfs_remove_recursive(ccp_debugfs_dir);
++	ccp_debugfs_dir = NULL;
++	mutex_unlock(&ccp_debugfs_lock);
+ }
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 3451bada884e14..539c303beb3a28 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -434,7 +434,7 @@ static int rmp_mark_pages_firmware(unsigned long paddr, unsigned int npages, boo
+ 	return rc;
+ }
+ 
+-static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order)
++static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order, bool locked)
+ {
+ 	unsigned long npages = 1ul << order, paddr;
+ 	struct sev_device *sev;
+@@ -453,7 +453,7 @@ static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order)
+ 		return page;
+ 
+ 	paddr = __pa((unsigned long)page_address(page));
+-	if (rmp_mark_pages_firmware(paddr, npages, false))
++	if (rmp_mark_pages_firmware(paddr, npages, locked))
+ 		return NULL;
+ 
+ 	return page;
+@@ -463,7 +463,7 @@ void *snp_alloc_firmware_page(gfp_t gfp_mask)
+ {
+ 	struct page *page;
+ 
+-	page = __snp_alloc_firmware_pages(gfp_mask, 0);
++	page = __snp_alloc_firmware_pages(gfp_mask, 0, false);
+ 
+ 	return page ? page_address(page) : NULL;
+ }
+@@ -498,7 +498,7 @@ static void *sev_fw_alloc(unsigned long len)
+ {
+ 	struct page *page;
+ 
+-	page = __snp_alloc_firmware_pages(GFP_KERNEL, get_order(len));
++	page = __snp_alloc_firmware_pages(GFP_KERNEL, get_order(len), true);
+ 	if (!page)
+ 		return NULL;
+ 
+@@ -1276,9 +1276,11 @@ static int __sev_platform_init_handle_init_ex_path(struct sev_device *sev)
+ 
+ static int __sev_platform_init_locked(int *error)
+ {
+-	int rc, psp_ret = SEV_RET_NO_FW_CALL;
++	int rc, psp_ret, dfflush_error;
+ 	struct sev_device *sev;
+ 
++	psp_ret = dfflush_error = SEV_RET_NO_FW_CALL;
++
+ 	if (!psp_master || !psp_master->sev_data)
+ 		return -ENODEV;
+ 
+@@ -1320,10 +1322,10 @@ static int __sev_platform_init_locked(int *error)
+ 
+ 	/* Prepare for first SEV guest launch after INIT */
+ 	wbinvd_on_all_cpus();
+-	rc = __sev_do_cmd_locked(SEV_CMD_DF_FLUSH, NULL, error);
++	rc = __sev_do_cmd_locked(SEV_CMD_DF_FLUSH, NULL, &dfflush_error);
+ 	if (rc) {
+ 		dev_err(sev->dev, "SEV: DF_FLUSH failed %#x, rc %d\n",
+-			*error, rc);
++			dfflush_error, rc);
+ 		return rc;
+ 	}
+ 
+diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
+index e050f5ff5efb6d..c527cd75b6fe98 100644
+--- a/drivers/crypto/img-hash.c
++++ b/drivers/crypto/img-hash.c
+@@ -436,7 +436,7 @@ static int img_hash_write_via_dma_stop(struct img_hash_dev *hdev)
+ 	struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
+ 
+ 	if (ctx->flags & DRIVER_FLAGS_SG)
+-		dma_unmap_sg(hdev->dev, ctx->sg, ctx->dma_ct, DMA_TO_DEVICE);
++		dma_unmap_sg(hdev->dev, ctx->sg, 1, DMA_TO_DEVICE);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index d2b632193bebb9..5baf4bd2fcee52 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -249,7 +249,9 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv,
+ 	safexcel_complete(priv, ring);
+ 
+ 	if (sreq->nents) {
+-		dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
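++		/* dma_unmap_sg() takes the nents that were passed to
++		 * dma_map_sg(), not the count dma_map_sg() returned.
++		 */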
++		dma_unmap_sg(priv->dev, areq->src,
++			     sg_nents_for_len(areq->src, areq->nbytes),
++			     DMA_TO_DEVICE);
+ 		sreq->nents = 0;
+ 	}
+ 
+@@ -497,7 +499,9 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ 			 DMA_FROM_DEVICE);
+ unmap_sg:
+ 	if (req->nents) {
+-		dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
++		dma_unmap_sg(priv->dev, areq->src,
++			     sg_nents_for_len(areq->src, areq->nbytes),
++			     DMA_TO_DEVICE);
+ 		req->nents = 0;
+ 	}
+ cdesc_rollback:
+diff --git a/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
+index 95dc8979918d8b..8f9e21ced0fe1e 100644
+--- a/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
++++ b/drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
+@@ -68,6 +68,7 @@ struct ocs_hcu_ctx {
+  * @sg_data_total:  Total data in the SG list at any time.
+  * @sg_data_offset: Offset into the data of the current individual SG node.
+  * @sg_dma_nents:   Number of sg entries mapped in dma_list.
++ * @nents:          Number of entries in the scatterlist.
+  */
+ struct ocs_hcu_rctx {
+ 	struct ocs_hcu_dev	*hcu_dev;
+@@ -91,6 +92,7 @@ struct ocs_hcu_rctx {
+ 	unsigned int		sg_data_total;
+ 	unsigned int		sg_data_offset;
+ 	unsigned int		sg_dma_nents;
++	unsigned int		nents;
+ };
+ 
+ /**
+@@ -199,7 +201,7 @@ static void kmb_ocs_hcu_dma_cleanup(struct ahash_request *req,
+ 
+ 	/* Unmap req->src (if mapped). */
+ 	if (rctx->sg_dma_nents) {
+-		dma_unmap_sg(dev, req->src, rctx->sg_dma_nents, DMA_TO_DEVICE);
++		dma_unmap_sg(dev, req->src, rctx->nents, DMA_TO_DEVICE);
+ 		rctx->sg_dma_nents = 0;
+ 	}
+ 
+@@ -260,6 +262,10 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req)
+ 			rc = -ENOMEM;
+ 			goto cleanup;
+ 		}
++
++		/* Save the value of nents to pass to dma_unmap_sg. */
++		rctx->nents = nents;
++
+ 		/*
+ 		 * The value returned by dma_map_sg() can be < nents; so update
+ 		 * nents accordingly.
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 7c3c0f561c9562..8340b5e8a94714 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -191,7 +191,6 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 			  ICP_ACCEL_CAPABILITIES_SM4 |
+ 			  ICP_ACCEL_CAPABILITIES_AES_V2 |
+ 			  ICP_ACCEL_CAPABILITIES_ZUC |
+-			  ICP_ACCEL_CAPABILITIES_ZUC_256 |
+ 			  ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT |
+ 			  ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN;
+ 
+@@ -223,17 +222,11 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 
+ 	if (fusectl1 & ICP_ACCEL_GEN4_MASK_WCP_WAT_SLICE) {
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
+-		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT;
+ 	}
+ 
+-	if (fusectl1 & ICP_ACCEL_GEN4_MASK_EIA3_SLICE) {
++	if (fusectl1 & ICP_ACCEL_GEN4_MASK_EIA3_SLICE)
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
+-		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
+-	}
+-
+-	if (fusectl1 & ICP_ACCEL_GEN4_MASK_ZUC_256_SLICE)
+-		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
+ 
+ 	capabilities_asym = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
+ 			  ICP_ACCEL_CAPABILITIES_SM2 |
+diff --git a/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c
+index 359a6447ccb837..2207e5e576b271 100644
+--- a/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.c
+@@ -520,8 +520,8 @@ static void set_vc_csr_for_bank(void __iomem *csr, u32 bank_number)
+ 	 * driver must program the ringmodectl CSRs.
+ 	 */
+ 	value = ADF_CSR_RD(csr, ADF_GEN6_CSR_RINGMODECTL(bank_number));
+-	value |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_MASK, ADF_GEN6_RINGMODECTL_TC_DEFAULT);
+-	value |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_EN_MASK, ADF_GEN6_RINGMODECTL_TC_EN_OP1);
++	FIELD_MODIFY(ADF_GEN6_RINGMODECTL_TC_MASK, &value, ADF_GEN6_RINGMODECTL_TC_DEFAULT);
++	FIELD_MODIFY(ADF_GEN6_RINGMODECTL_TC_EN_MASK, &value, ADF_GEN6_RINGMODECTL_TC_EN_OP1);
+ 	ADF_CSR_WR(csr, ADF_GEN6_CSR_RINGMODECTL(bank_number), value);
+ }
+ 
+@@ -537,7 +537,7 @@ static int set_vc_config(struct adf_accel_dev *accel_dev)
+ 	 * Read PVC0CTL then write the masked values.
+ 	 */
+ 	pci_read_config_dword(pdev, ADF_GEN6_PVC0CTL_OFFSET, &value);
+-	value |= FIELD_PREP(ADF_GEN6_PVC0CTL_TCVCMAP_MASK, ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT);
++	FIELD_MODIFY(ADF_GEN6_PVC0CTL_TCVCMAP_MASK, &value, ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT);
+ 	err = pci_write_config_dword(pdev, ADF_GEN6_PVC0CTL_OFFSET, value);
+ 	if (err) {
+ 		dev_err(&GET_DEV(accel_dev), "pci write to PVC0CTL failed\n");
+@@ -546,8 +546,8 @@ static int set_vc_config(struct adf_accel_dev *accel_dev)
+ 
+ 	/* Read PVC1CTL then write masked values */
+ 	pci_read_config_dword(pdev, ADF_GEN6_PVC1CTL_OFFSET, &value);
+-	value |= FIELD_PREP(ADF_GEN6_PVC1CTL_TCVCMAP_MASK, ADF_GEN6_PVC1CTL_TCVCMAP_DEFAULT);
+-	value |= FIELD_PREP(ADF_GEN6_PVC1CTL_VCEN_MASK, ADF_GEN6_PVC1CTL_VCEN_ON);
++	FIELD_MODIFY(ADF_GEN6_PVC1CTL_TCVCMAP_MASK, &value, ADF_GEN6_PVC1CTL_TCVCMAP_DEFAULT);
++	FIELD_MODIFY(ADF_GEN6_PVC1CTL_VCEN_MASK, &value, ADF_GEN6_PVC1CTL_VCEN_ON);
+ 	err = pci_write_config_dword(pdev, ADF_GEN6_PVC1CTL_OFFSET, value);
+ 	if (err)
+ 		dev_err(&GET_DEV(accel_dev), "pci write to PVC1CTL failed\n");
+@@ -627,7 +627,15 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
+ 	}
+ 
+-	capabilities_asym = 0;
++	capabilities_asym = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
++			    ICP_ACCEL_CAPABILITIES_SM2 |
++			    ICP_ACCEL_CAPABILITIES_ECEDMONT;
++
++	if (fusectl1 & ICP_ACCEL_GEN6_MASK_PKE_SLICE) {
++		capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC;
++		capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_SM2;
++		capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_ECEDMONT;
++	}
+ 
+ 	capabilities_dc = ICP_ACCEL_CAPABILITIES_COMPRESSION |
+ 			  ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION |
+diff --git a/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h
+index 78e2e2c5816e54..8824958527c4bd 100644
+--- a/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h
++++ b/drivers/crypto/intel/qat/qat_6xxx/adf_6xxx_hw_data.h
+@@ -99,7 +99,7 @@
+ #define ADF_GEN6_PVC0CTL_OFFSET			0x204
+ #define ADF_GEN6_PVC0CTL_TCVCMAP_OFFSET		1
+ #define ADF_GEN6_PVC0CTL_TCVCMAP_MASK		GENMASK(7, 1)
+-#define ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT	0x7F
++#define ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT	0x3F
+ 
+ /* VC1 Resource Control Register */
+ #define ADF_GEN6_PVC1CTL_OFFSET			0x210
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+index 0406cb09c5bbb9..14d0fdd66a4b2f 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.c
+@@ -581,6 +581,28 @@ static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
+ 	ops->write_csr_int_srcsel_w_val(base, bank, state->iaintflagsrcsel0);
+ 	ops->write_csr_exp_int_en(base, bank, state->ringexpintenable);
+ 	ops->write_csr_int_col_ctl(base, bank, state->iaintcolctl);
++
++	/*
++	 * Verify whether any exceptions were raised during the bank save process.
++	 * If exceptions occurred, the status and exception registers cannot
++	 * be directly restored. Consequently, further restoration is not
++	 * feasible, and the current state of the ring should be maintained.
++	 */
++	val = state->ringexpstat;
++	if (val) {
++		pr_info("QAT: Bank %u state not fully restored due to exception in saved state (%#x)\n",
++			bank, val);
++		return 0;
++	}
++
++	/* Ensure that the restoration process completed without exceptions */
++	tmp_val = ops->read_csr_exp_stat(base, bank);
++	if (tmp_val) {
++		pr_err("QAT: Bank %u restored with exception: %#x\n",
++		       bank, tmp_val);
++		return -EFAULT;
++	}
++
+ 	ops->write_csr_ring_srv_arb_en(base, bank, state->ringsrvarben);
+ 
+ 	/* Check that all ring statuses match the saved state. */
+@@ -614,13 +636,6 @@ static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
+ 	if (ret)
+ 		return ret;
+ 
+-	tmp_val = ops->read_csr_exp_stat(base, bank);
+-	val = state->ringexpstat;
+-	if (tmp_val && !val) {
+-		pr_err("QAT: Bank was restored with exception: 0x%x\n", val);
+-		return -EINVAL;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_sriov.c b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+index c75d0b6cb0ada3..31d1ef0cb1f52e 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_sriov.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+@@ -155,7 +155,6 @@ static int adf_do_enable_sriov(struct adf_accel_dev *accel_dev)
+ 	if (!device_iommu_mapped(&GET_DEV(accel_dev))) {
+ 		dev_warn(&GET_DEV(accel_dev),
+ 			 "IOMMU should be enabled for SR-IOV to work correctly\n");
+-		return -EINVAL;
+ 	}
+ 
+ 	if (adf_dev_started(accel_dev)) {
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c b/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
+index e2dd568b87b519..621b5d3dfcef91 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
+@@ -31,8 +31,10 @@ static void *adf_ring_next(struct seq_file *sfile, void *v, loff_t *pos)
+ 	struct adf_etr_ring_data *ring = sfile->private;
+ 
+ 	if (*pos >= (ADF_SIZE_TO_RING_SIZE_IN_BYTES(ring->ring_size) /
+-		     ADF_MSG_SIZE_TO_BYTES(ring->msg_size)))
++		     ADF_MSG_SIZE_TO_BYTES(ring->msg_size))) {
++		(*pos)++;
+ 		return NULL;
++	}
+ 
+ 	return ring->base_addr +
+ 		(ADF_MSG_SIZE_TO_BYTES(ring->msg_size) * (*pos)++);
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_bl.c b/drivers/crypto/intel/qat/qat_common/qat_bl.c
+index 5e4dad4693caad..9b2338f58d97c4 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_bl.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_bl.c
+@@ -38,7 +38,7 @@ void qat_bl_free_bufl(struct adf_accel_dev *accel_dev,
+ 		for (i = 0; i < blout->num_mapped_bufs; i++) {
+ 			dma_unmap_single(dev, blout->buffers[i].addr,
+ 					 blout->buffers[i].len,
+-					 DMA_FROM_DEVICE);
++					 DMA_BIDIRECTIONAL);
+ 		}
+ 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
+ 
+@@ -162,7 +162,7 @@ static int __qat_bl_sgl_to_bufl(struct adf_accel_dev *accel_dev,
+ 			}
+ 			buffers[y].addr = dma_map_single(dev, sg_virt(sg) + left,
+ 							 sg->length - left,
+-							 DMA_FROM_DEVICE);
++							 DMA_BIDIRECTIONAL);
+ 			if (unlikely(dma_mapping_error(dev, buffers[y].addr)))
+ 				goto err_out;
+ 			buffers[y].len = sg->length;
+@@ -204,7 +204,7 @@ static int __qat_bl_sgl_to_bufl(struct adf_accel_dev *accel_dev,
+ 		if (!dma_mapping_error(dev, buflout->buffers[i].addr))
+ 			dma_unmap_single(dev, buflout->buffers[i].addr,
+ 					 buflout->buffers[i].len,
+-					 DMA_FROM_DEVICE);
++					 DMA_BIDIRECTIONAL);
+ 	}
+ 
+ 	if (!buf->sgl_dst_valid)
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_compression.c b/drivers/crypto/intel/qat/qat_common/qat_compression.c
+index c285b45b8679da..53a4db5507ec28 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_compression.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_compression.c
+@@ -196,7 +196,7 @@ static int qat_compression_alloc_dc_data(struct adf_accel_dev *accel_dev)
+ 	struct adf_dc_data *dc_data = NULL;
+ 	u8 *obuff = NULL;
+ 
+-	dc_data = devm_kzalloc(dev, sizeof(*dc_data), GFP_KERNEL);
++	dc_data = kzalloc_node(sizeof(*dc_data), GFP_KERNEL, dev_to_node(dev));
+ 	if (!dc_data)
+ 		goto err;
+ 
+@@ -204,7 +204,7 @@ static int qat_compression_alloc_dc_data(struct adf_accel_dev *accel_dev)
+ 	if (!obuff)
+ 		goto err;
+ 
+-	obuff_p = dma_map_single(dev, obuff, ovf_buff_sz, DMA_FROM_DEVICE);
++	obuff_p = dma_map_single(dev, obuff, ovf_buff_sz, DMA_BIDIRECTIONAL);
+ 	if (unlikely(dma_mapping_error(dev, obuff_p)))
+ 		goto err;
+ 
+@@ -232,9 +232,9 @@ static void qat_free_dc_data(struct adf_accel_dev *accel_dev)
+ 		return;
+ 
+ 	dma_unmap_single(dev, dc_data->ovf_buff_p, dc_data->ovf_buff_sz,
+-			 DMA_FROM_DEVICE);
++			 DMA_BIDIRECTIONAL);
+ 	kfree_sensitive(dc_data->ovf_buff);
+-	devm_kfree(dev, dc_data);
++	kfree(dc_data);
+ 	accel_dev->dc_data = NULL;
+ }
+ 
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index 48c5c8ea8c43ec..3fe0fd9226cf79 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -75,9 +75,12 @@ mv_cesa_skcipher_dma_cleanup(struct skcipher_request *req)
+ static inline void mv_cesa_skcipher_cleanup(struct skcipher_request *req)
+ {
+ 	struct mv_cesa_skcipher_req *creq = skcipher_request_ctx(req);
++	struct mv_cesa_engine *engine = creq->base.engine;
+ 
+ 	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ 		mv_cesa_skcipher_dma_cleanup(req);
++
++	atomic_sub(req->cryptlen, &engine->load);
+ }
+ 
+ static void mv_cesa_skcipher_std_step(struct skcipher_request *req)
+@@ -212,7 +215,6 @@ mv_cesa_skcipher_complete(struct crypto_async_request *req)
+ 	struct mv_cesa_engine *engine = creq->base.engine;
+ 	unsigned int ivsize;
+ 
+-	atomic_sub(skreq->cryptlen, &engine->load);
+ 	ivsize = crypto_skcipher_ivsize(crypto_skcipher_reqtfm(skreq));
+ 
+ 	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ) {
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index 6815eddc906812..e339ce7ad53310 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -110,9 +110,12 @@ static inline void mv_cesa_ahash_dma_cleanup(struct ahash_request *req)
+ static inline void mv_cesa_ahash_cleanup(struct ahash_request *req)
+ {
+ 	struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
++	struct mv_cesa_engine *engine = creq->base.engine;
+ 
+ 	if (mv_cesa_req_get_type(&creq->base) == CESA_DMA_REQ)
+ 		mv_cesa_ahash_dma_cleanup(req);
++
++	atomic_sub(req->nbytes, &engine->load);
+ }
+ 
+ static void mv_cesa_ahash_last_cleanup(struct ahash_request *req)
+@@ -395,8 +398,6 @@ static void mv_cesa_ahash_complete(struct crypto_async_request *req)
+ 			}
+ 		}
+ 	}
+-
+-	atomic_sub(ahashreq->nbytes, &engine->load);
+ }
+ 
+ static void mv_cesa_ahash_prepare(struct crypto_async_request *req,
+diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
+index 29b61828a84763..6b78b10da3e185 100644
+--- a/drivers/cxl/core/core.h
++++ b/drivers/cxl/core/core.h
+@@ -80,6 +80,7 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, u64 size);
+ int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
+ resource_size_t cxl_dpa_size(struct cxl_endpoint_decoder *cxled);
+ resource_size_t cxl_dpa_resource_start(struct cxl_endpoint_decoder *cxled);
++bool cxl_resource_contains_addr(const struct resource *res, const resource_size_t addr);
+ 
+ enum cxl_rcrb {
+ 	CXL_RCRB_DOWNSTREAM,
+diff --git a/drivers/cxl/core/edac.c b/drivers/cxl/core/edac.c
+index 623aaa4439c4b5..991fa3e705228c 100644
+--- a/drivers/cxl/core/edac.c
++++ b/drivers/cxl/core/edac.c
+@@ -1923,8 +1923,11 @@ static int cxl_ppr_set_nibble_mask(struct device *dev, void *drv_data,
+ static int cxl_do_ppr(struct device *dev, void *drv_data, u32 val)
+ {
+ 	struct cxl_ppr_context *cxl_ppr_ctx = drv_data;
++	struct cxl_memdev *cxlmd = cxl_ppr_ctx->cxlmd;
++	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ 
+-	if (!cxl_ppr_ctx->dpa || val != EDAC_DO_MEM_REPAIR)
++	if (val != EDAC_DO_MEM_REPAIR ||
++	    !cxl_resource_contains_addr(&cxlds->dpa_res, cxl_ppr_ctx->dpa))
+ 		return -EINVAL;
+ 
+ 	return cxl_mem_perform_ppr(cxl_ppr_ctx);
+diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
+index ab1007495f6b94..088caa6b6f7422 100644
+--- a/drivers/cxl/core/hdm.c
++++ b/drivers/cxl/core/hdm.c
+@@ -547,6 +547,13 @@ resource_size_t cxl_dpa_resource_start(struct cxl_endpoint_decoder *cxled)
+ 	return base;
+ }
+ 
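++/* Check whether @addr lies within @res by testing a 1-byte resource. */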
++bool cxl_resource_contains_addr(const struct resource *res, const resource_size_t addr)
++{
++	struct resource _addr = DEFINE_RES_MEM(addr, 1);
++
++	return resource_contains(res, &_addr);
++}
++
+ int cxl_dpa_free(struct cxl_endpoint_decoder *cxled)
+ {
+ 	struct cxl_port *port = cxled_to_port(cxled);
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 98657d3b9435c7..0d9f3d3282ec94 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -1382,15 +1382,11 @@ int devfreq_remove_governor(struct devfreq_governor *governor)
+ 		int ret;
+ 		struct device *dev = devfreq->dev.parent;
+ 
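++		/* A devfreq device may have no governor assigned yet */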
++		if (!devfreq->governor)
++			continue;
++
+ 		if (!strncmp(devfreq->governor->name, governor->name,
+ 			     DEVFREQ_NAME_LEN)) {
+-			/* we should have a devfreq governor! */
+-			if (!devfreq->governor) {
+-				dev_warn(dev, "%s: Governor %s NOT present\n",
+-					 __func__, governor->name);
+-				continue;
+-				/* Fall through */
+-			}
+ 			ret = devfreq->governor->event_handler(devfreq,
+ 						DEVFREQ_GOV_STOP, NULL);
+ 			if (ret) {
+@@ -1743,7 +1739,7 @@ static ssize_t trans_stat_show(struct device *dev,
+ 	for (i = 0; i < max_state; i++) {
+ 		if (len >= PAGE_SIZE - 1)
+ 			break;
+-		if (df->freq_table[2] == df->previous_freq)
++		if (df->freq_table[i] == df->previous_freq)
+ 			len += sysfs_emit_at(buf, len, "*");
+ 		else
+ 			len += sysfs_emit_at(buf, len, " ");
+diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
+index fee04fdb08220c..b46eb8a552d7be 100644
+--- a/drivers/dma-buf/Kconfig
++++ b/drivers/dma-buf/Kconfig
+@@ -36,7 +36,6 @@ config UDMABUF
+ 	depends on DMA_SHARED_BUFFER
+ 	depends on MEMFD_CREATE || COMPILE_TEST
+ 	depends on MMU
+-	select VMAP_PFN
+ 	help
+ 	  A driver to let userspace turn memfd regions into dma-bufs.
+ 	  Qemu can use this to create host dmabufs for guest framebuffers.
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index c9d0c68d2fcb0f..40399c26e6be62 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -109,29 +109,22 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
+ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
+ {
+ 	struct udmabuf *ubuf = buf->priv;
+-	unsigned long *pfns;
++	struct page **pages;
+ 	void *vaddr;
+ 	pgoff_t pg;
+ 
+ 	dma_resv_assert_held(buf->resv);
+ 
+-	/**
+-	 * HVO may free tail pages, so just use pfn to map each folio
+-	 * into vmalloc area.
+-	 */
+-	pfns = kvmalloc_array(ubuf->pagecount, sizeof(*pfns), GFP_KERNEL);
+-	if (!pfns)
++	pages = kvmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
++	if (!pages)
+ 		return -ENOMEM;
+ 
+-	for (pg = 0; pg < ubuf->pagecount; pg++) {
+-		unsigned long pfn = folio_pfn(ubuf->folios[pg]);
+-
+-		pfn += ubuf->offsets[pg] >> PAGE_SHIFT;
+-		pfns[pg] = pfn;
+-	}
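++	/* Select the page backing each folio at its recorded offset */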
++	for (pg = 0; pg < ubuf->pagecount; pg++)
++		pages[pg] = folio_page(ubuf->folios[pg],
++				       ubuf->offsets[pg] >> PAGE_SHIFT);
+ 
+-	vaddr = vmap_pfn(pfns, ubuf->pagecount, PAGE_KERNEL);
+-	kvfree(pfns);
++	vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
++	kvfree(pages);
+ 	if (!vaddr)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/dma/mmp_tdma.c b/drivers/dma/mmp_tdma.c
+index c8dc504510f1e3..b7fb843c67a6f2 100644
+--- a/drivers/dma/mmp_tdma.c
++++ b/drivers/dma/mmp_tdma.c
+@@ -641,7 +641,7 @@ static int mmp_tdma_probe(struct platform_device *pdev)
+ 	int chan_num = TDMA_CHANNEL_NUM;
+ 	struct gen_pool *pool = NULL;
+ 
+-	type = (enum mmp_tdma_type)device_get_match_data(&pdev->dev);
++	type = (kernel_ulong_t)device_get_match_data(&pdev->dev);
+ 
+ 	/* always have couple channels */
+ 	tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index fa6e4646fdc29d..1fdcb0f5c9e725 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -1061,8 +1061,16 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ 	 */
+ 	mv_chan->dummy_src_addr = dma_map_single(dma_dev->dev,
+ 		mv_chan->dummy_src, MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++	if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_src_addr))
++		return ERR_PTR(-ENOMEM);
++
+ 	mv_chan->dummy_dst_addr = dma_map_single(dma_dev->dev,
+ 		mv_chan->dummy_dst, MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++	if (dma_mapping_error(dma_dev->dev, mv_chan->dummy_dst_addr)) {
++		ret = -ENOMEM;
++		goto err_unmap_src;
++	}
++
+ 
+ 	/* allocate coherent memory for hardware descriptors
+ 	 * note: writecombine gives slightly better performance, but
+@@ -1071,8 +1079,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ 	mv_chan->dma_desc_pool_virt =
+ 	  dma_alloc_wc(&pdev->dev, MV_XOR_POOL_SIZE, &mv_chan->dma_desc_pool,
+ 		       GFP_KERNEL);
+-	if (!mv_chan->dma_desc_pool_virt)
+-		return ERR_PTR(-ENOMEM);
++	if (!mv_chan->dma_desc_pool_virt) {
++		ret = -ENOMEM;
++		goto err_unmap_dst;
++	}
+ 
+ 	/* discover transaction capabilities from the platform data */
+ 	dma_dev->cap_mask = cap_mask;
+@@ -1155,6 +1165,13 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
+ err_free_dma:
+ 	dma_free_coherent(&pdev->dev, MV_XOR_POOL_SIZE,
+ 			  mv_chan->dma_desc_pool_virt, mv_chan->dma_desc_pool);
++err_unmap_dst:
++	dma_unmap_single(dma_dev->dev, mv_chan->dummy_dst_addr,
++			 MV_XOR_MIN_BYTE_COUNT, DMA_TO_DEVICE);
++err_unmap_src:
++	dma_unmap_single(dma_dev->dev, mv_chan->dummy_src_addr,
++			 MV_XOR_MIN_BYTE_COUNT, DMA_FROM_DEVICE);
++
+ 	return ERR_PTR(ret);
+ }
+ 
+diff --git a/drivers/dma/nbpfaxi.c b/drivers/dma/nbpfaxi.c
+index 7a2488a0d6a326..765462303de098 100644
+--- a/drivers/dma/nbpfaxi.c
++++ b/drivers/dma/nbpfaxi.c
+@@ -711,6 +711,9 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ 		list_add_tail(&ldesc->node, &lhead);
+ 		ldesc->hwdesc_dma_addr = dma_map_single(dchan->device->dev,
+ 					hwdesc, sizeof(*hwdesc), DMA_TO_DEVICE);
++		if (dma_mapping_error(dchan->device->dev,
++				      ldesc->hwdesc_dma_addr))
++			goto unmap_error;
+ 
+ 		dev_dbg(dev, "%s(): mapped 0x%p to %pad\n", __func__,
+ 			hwdesc, &ldesc->hwdesc_dma_addr);
+@@ -737,6 +740,16 @@ static int nbpf_desc_page_alloc(struct nbpf_channel *chan)
+ 	spin_unlock_irq(&chan->lock);
+ 
+ 	return ARRAY_SIZE(dpage->desc);
++
++unmap_error:
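++	/* Unmap the descriptors that were mapped before the failure */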
++	while (i--) {
++		ldesc--; hwdesc--;
++
++		dma_unmap_single(dchan->device->dev, ldesc->hwdesc_dma_addr,
++				 sizeof(hwdesc), DMA_TO_DEVICE);
++	}
++
++	return -ENOMEM;
+ }
+ 
+ static void nbpf_desc_put(struct nbpf_desc *desc)
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index c7e5a34b254bf4..683fd9b85c5ce2 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -892,7 +892,7 @@ static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
+ 			freq = dom->opp[idx].indicative_freq * dom->mult_factor;
+ 
+ 		/* All OPPs above the sustained frequency are treated as turbo */
+-		data.turbo = freq > dom->sustained_freq_khz * 1000;
++		data.turbo = freq > dom->sustained_freq_khz * 1000UL;
+ 
+ 		data.level = dom->opp[idx].perf;
+ 		data.freq = freq;
+diff --git a/drivers/firmware/efi/libstub/Makefile.zboot b/drivers/firmware/efi/libstub/Makefile.zboot
+index 92e3c73502ba15..832deee36e48e9 100644
+--- a/drivers/firmware/efi/libstub/Makefile.zboot
++++ b/drivers/firmware/efi/libstub/Makefile.zboot
+@@ -36,7 +36,7 @@ aflags-zboot-header-$(EFI_ZBOOT_FORWARD_CFI) := \
+ 		-DPE_DLL_CHAR_EX=IMAGE_DLLCHARACTERISTICS_EX_FORWARD_CFI_COMPAT
+ 
+ AFLAGS_zboot-header.o += -DMACHINE_TYPE=IMAGE_FILE_MACHINE_$(EFI_ZBOOT_MACH_TYPE) \
+-			 -DZBOOT_EFI_PATH="\"$(realpath $(obj)/vmlinuz.efi.elf)\"" \
++			 -DZBOOT_EFI_PATH="\"$(abspath $(obj)/vmlinuz.efi.elf)\"" \
+ 			 -DZBOOT_SIZE_LEN=$(zboot-size-len-y) \
+ 			 -DCOMP_TYPE="\"$(comp-type-y)\"" \
+ 			 $(aflags-zboot-header-y)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index d8ac4b1051a81c..fe282b85573414 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -248,18 +248,34 @@ void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
+ 		kgd2kfd_interrupt(adev->kfd.dev, ih_ring_entry);
+ }
+ 
+-void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm)
++void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool suspend_proc)
+ {
+ 	if (adev->kfd.dev)
+-		kgd2kfd_suspend(adev->kfd.dev, run_pm);
++		kgd2kfd_suspend(adev->kfd.dev, suspend_proc);
+ }
+ 
+-int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm)
++int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool resume_proc)
+ {
+ 	int r = 0;
+ 
+ 	if (adev->kfd.dev)
+-		r = kgd2kfd_resume(adev->kfd.dev, run_pm);
++		r = kgd2kfd_resume(adev->kfd.dev, resume_proc);
++
++	return r;
++}
++
++void amdgpu_amdkfd_suspend_process(struct amdgpu_device *adev)
++{
++	if (adev->kfd.dev)
++		kgd2kfd_suspend_process(adev->kfd.dev);
++}
++
++int amdgpu_amdkfd_resume_process(struct amdgpu_device *adev)
++{
++	int r = 0;
++
++	if (adev->kfd.dev)
++		r = kgd2kfd_resume_process(adev->kfd.dev);
+ 
+ 	return r;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index b6ca41859b5367..b7c3ec48340721 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -154,8 +154,10 @@ struct amdkfd_process_info {
+ int amdgpu_amdkfd_init(void);
+ void amdgpu_amdkfd_fini(void);
+ 
+-void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm);
+-int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm);
++void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool suspend_proc);
++int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool resume_proc);
++void amdgpu_amdkfd_suspend_process(struct amdgpu_device *adev);
++int amdgpu_amdkfd_resume_process(struct amdgpu_device *adev);
+ void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
+ 			const void *ih_ring_entry);
+ void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev);
+@@ -411,8 +413,10 @@ struct kfd_dev *kgd2kfd_probe(struct amdgpu_device *adev, bool vf);
+ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 			 const struct kgd2kfd_shared_resources *gpu_resources);
+ void kgd2kfd_device_exit(struct kfd_dev *kfd);
+-void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm);
+-int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm);
++void kgd2kfd_suspend(struct kfd_dev *kfd, bool suspend_proc);
++int kgd2kfd_resume(struct kfd_dev *kfd, bool resume_proc);
++void kgd2kfd_suspend_process(struct kfd_dev *kfd);
++int kgd2kfd_resume_process(struct kfd_dev *kfd);
+ int kgd2kfd_pre_reset(struct kfd_dev *kfd,
+ 		      struct amdgpu_reset_context *reset_context);
+ int kgd2kfd_post_reset(struct kfd_dev *kfd);
+@@ -454,11 +458,20 @@ static inline void kgd2kfd_device_exit(struct kfd_dev *kfd)
+ {
+ }
+ 
+-static inline void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
++static inline void kgd2kfd_suspend(struct kfd_dev *kfd, bool suspend_proc)
+ {
+ }
+ 
+-static inline int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
++static inline int kgd2kfd_resume(struct kfd_dev *kfd, bool resume_proc)
++{
++	return 0;
++}
++
++static inline void kgd2kfd_suspend_process(struct kfd_dev *kfd)
++{
++}
++
++static inline int kgd2kfd_resume_process(struct kfd_dev *kfd)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
+index ffbaa8bc5eea9e..1105a09e55dc18 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
+@@ -320,7 +320,7 @@ static void set_barrier_auto_waitcnt(struct amdgpu_device *adev, bool enable_wai
+ 	if (!down_read_trylock(&adev->reset_domain->sem))
+ 		return;
+ 
+-	amdgpu_amdkfd_suspend(adev, false);
++	amdgpu_amdkfd_suspend(adev, true);
+ 
+ 	if (suspend_resume_compute_scheduler(adev, true))
+ 		goto out;
+@@ -333,7 +333,7 @@ static void set_barrier_auto_waitcnt(struct amdgpu_device *adev, bool enable_wai
+ out:
+ 	suspend_resume_compute_scheduler(adev, false);
+ 
+-	amdgpu_amdkfd_resume(adev, false);
++	amdgpu_amdkfd_resume(adev, true);
+ 
+ 	up_read(&adev->reset_domain->sem);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index aa32df7e2fb2f3..54ea8e8d781215 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3518,7 +3518,7 @@ static int amdgpu_device_ip_fini_early(struct amdgpu_device *adev)
+ 	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+ 	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ 
+-	amdgpu_amdkfd_suspend(adev, false);
++	amdgpu_amdkfd_suspend(adev, true);
+ 	amdgpu_userq_suspend(adev);
+ 
+ 	/* Workaround for ASICs need to disable SMC first */
+@@ -5055,6 +5055,8 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
+ 	adev->in_suspend = true;
+ 
+ 	if (amdgpu_sriov_vf(adev)) {
++		if (!adev->in_s0ix && !adev->in_runpm)
++			amdgpu_amdkfd_suspend_process(adev);
+ 		amdgpu_virt_fini_data_exchange(adev);
+ 		r = amdgpu_virt_request_full_gpu(adev, false);
+ 		if (r)
+@@ -5074,7 +5076,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
+ 	amdgpu_device_ip_suspend_phase1(adev);
+ 
+ 	if (!adev->in_s0ix) {
+-		amdgpu_amdkfd_suspend(adev, adev->in_runpm);
++		amdgpu_amdkfd_suspend(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
+ 		amdgpu_userq_suspend(adev);
+ 	}
+ 
+@@ -5140,7 +5142,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
+ 	}
+ 
+ 	if (!adev->in_s0ix) {
+-		r = amdgpu_amdkfd_resume(adev, adev->in_runpm);
++		r = amdgpu_amdkfd_resume(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
+ 		if (r)
+ 			goto exit;
+ 
+@@ -5159,6 +5161,9 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
+ 	if (amdgpu_sriov_vf(adev)) {
+ 		amdgpu_virt_init_data_exchange(adev);
+ 		amdgpu_virt_release_full_gpu(adev, true);
++
++		if (!adev->in_s0ix && !r && !adev->in_runpm)
++			r = amdgpu_amdkfd_resume_process(adev);
+ 	}
+ 
+ 	if (r)
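
The suspend/resume hunks above gate KFD process suspension on the execution environment: bare-metal suspend now also suspends KFD processes, while SR-IOV VFs handle them earlier (before the virt data exchange is torn down) and runtime PM skips them entirely. A minimal sketch of that gating, assuming the field names from the patch (the helper itself is hypothetical):

/* Hypothetical helper: should amdgpu_amdkfd_suspend() also suspend
 * KFD processes? SR-IOV and runtime-PM paths pass false because they
 * suspend (or deliberately skip) processes elsewhere. */
static bool amdgpu_should_suspend_kfd_processes(struct amdgpu_device *adev)
{
	return !amdgpu_sriov_vf(adev) && !adev->in_runpm;
}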
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index ddb9d3269357cf..3528a27c7c1ddd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -91,8 +91,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
+ 	struct amdgpu_job *job = to_amdgpu_job(s_job);
+ 	struct amdgpu_task_info *ti;
+ 	struct amdgpu_device *adev = ring->adev;
+-	int idx;
+-	int r;
++	bool set_error = false;
++	int idx, r;
+ 
+ 	if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
+ 		dev_info(adev->dev, "%s - device unplugged skipping recovery on scheduler:%s",
+@@ -136,10 +136,12 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
+ 	} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
+ 		bool is_guilty;
+ 
+-		dev_err(adev->dev, "Starting %s ring reset\n", s_job->sched->name);
+-		/* stop the scheduler, but don't mess with the
+-		 * bad job yet because if ring reset fails
+-		 * we'll fall back to full GPU reset.
++		dev_err(adev->dev, "Starting %s ring reset\n",
++			s_job->sched->name);
++
++		/*
++		 * Stop the scheduler to prevent anybody else from touching the
++		 * ring buffer.
+ 		 */
+ 		drm_sched_wqueue_stop(&ring->sched);
+ 
+@@ -152,26 +154,27 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
+ 		else
+ 			is_guilty = true;
+ 
+-		if (is_guilty)
++		if (is_guilty) {
+ 			dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
++			set_error = true;
++		}
+ 
+ 		r = amdgpu_ring_reset(ring, job->vmid);
+ 		if (!r) {
+-			if (amdgpu_ring_sched_ready(ring))
+-				drm_sched_stop(&ring->sched, s_job);
+-			if (is_guilty) {
++			if (is_guilty)
+ 				atomic_inc(&ring->adev->gpu_reset_counter);
+-				amdgpu_fence_driver_force_completion(ring);
+-			}
+-			if (amdgpu_ring_sched_ready(ring))
+-				drm_sched_start(&ring->sched, 0);
+-			dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
+-			drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE);
++			drm_sched_wqueue_start(&ring->sched);
++			dev_err(adev->dev, "Ring %s reset succeeded\n",
++				ring->sched.name);
++			drm_dev_wedged_event(adev_to_drm(adev),
++					     DRM_WEDGE_RECOVERY_NONE);
+ 			goto exit;
+ 		}
+-		dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
++		dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
+ 	}
+-	dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
++
++	if (!set_error)
++		dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+ 
+ 	if (amdgpu_device_should_recover_gpu(ring->adev)) {
+ 		struct amdgpu_reset_context reset_context;
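
The reworked timeout handler records whether dma_fence_set_error() was already called for the guilty job, so the fall-through full-reset path does not repeat it after the fence may already have been handled by the per-ring reset. A reduced sketch of the pattern:

#include <linux/dma-fence.h>

/* Sketch of the set-the-error-at-most-once pattern from the hunk above. */
static void mark_timed_out_once(struct dma_fence *finished, bool guilty)
{
	bool set_error = false;

	if (guilty) {
		dma_fence_set_error(finished, -ETIME);
		set_error = true;
	}

	/* ... the ring-reset attempt sits here in the real handler ... */

	if (!set_error)
		dma_fence_set_error(finished, -ETIME);
}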
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+index 9b54a1ece447fb..f7decf533bae84 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+@@ -597,8 +597,11 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
+ 		page_sched_stopped = true;
+ 	}
+ 
+-	if (sdma_instance->funcs->stop_kernel_queue)
++	if (sdma_instance->funcs->stop_kernel_queue) {
+ 		sdma_instance->funcs->stop_kernel_queue(gfx_ring);
++		if (adev->sdma.has_page_queue)
++			sdma_instance->funcs->stop_kernel_queue(page_ring);
++	}
+ 
+ 	/* Perform the SDMA reset for the specified instance */
+ 	ret = amdgpu_sdma_soft_reset(adev, instance_id);
+@@ -607,8 +610,11 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
+ 		goto exit;
+ 	}
+ 
+-	if (sdma_instance->funcs->start_kernel_queue)
++	if (sdma_instance->funcs->start_kernel_queue) {
+ 		sdma_instance->funcs->start_kernel_queue(gfx_ring);
++		if (adev->sdma.has_page_queue)
++			sdma_instance->funcs->start_kernel_queue(page_ring);
++	}
+ 
+ exit:
+ 	/* Restart the scheduler's work queue for the GFX and page rings
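
amdgpu_sdma_reset_engine() now quiesces and restarts the page-queue ring alongside the GFX ring whenever the instance has one. One way to express the symmetry (a hypothetical helper, not in the patch):

/* Hypothetical helper: apply a queue op to the GFX ring and, when the
 * hardware also owns a page queue, to the page ring as well. */
static void sdma_apply_to_kernel_queues(struct amdgpu_device *adev,
					struct amdgpu_ring *gfx_ring,
					struct amdgpu_ring *page_ring,
					void (*op)(struct amdgpu_ring *ring))
{
	if (!op)
		return;

	op(gfx_ring);
	if (adev->sdma.has_page_queue)
		op(page_ring);
}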
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
+index 295e7186e1565a..aac0de86f3e8c8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
+@@ -664,7 +664,7 @@ static void amdgpu_userq_restore_worker(struct work_struct *work)
+ 	struct amdgpu_fpriv *fpriv = uq_mgr_to_fpriv(uq_mgr);
+ 	int ret;
+ 
+-	flush_work(&fpriv->evf_mgr.suspend_work.work);
++	flush_delayed_work(&fpriv->evf_mgr.suspend_work);
+ 
+ 	mutex_lock(&uq_mgr->userq_mutex);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 75ea071744eb5e..961d5e0af052e2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -9540,7 +9540,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+ 
+ 	spin_lock_irqsave(&kiq->ring_lock, flags);
+ 
+-	if (amdgpu_ring_alloc(kiq_ring, 5 + 7 + 7 + kiq->pmf->map_queues_size)) {
++	if (amdgpu_ring_alloc(kiq_ring, 5 + 7 + 7)) {
+ 		spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 		return -ENOMEM;
+ 	}
+@@ -9560,12 +9560,9 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+ 			       0, 1, 0x20);
+ 	gfx_v10_0_ring_emit_reg_wait(kiq_ring,
+ 				     SOC15_REG_OFFSET(GC, 0, mmCP_VMID_RESET), 0, 0xffffffff);
+-	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r)
+ 		return r;
+ 
+@@ -9575,7 +9572,24 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+ 		return r;
+ 	}
+ 
+-	return amdgpu_ring_test_ring(ring);
++	spin_lock_irqsave(&kiq->ring_lock, flags);
++
++	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->map_queues_size)) {
++		spin_unlock_irqrestore(&kiq->ring_lock, flags);
++		return -ENOMEM;
++	}
++	kiq->pmf->kiq_map_queues(kiq_ring, ring);
++	amdgpu_ring_commit(kiq_ring);
++	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
++	if (r)
++		return r;
++
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
+@@ -9603,9 +9617,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
+ 	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES,
+ 				   0, 0);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r)
+ 		return r;
+ 
+@@ -9641,13 +9654,16 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
+ 	}
+ 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r)
+ 		return r;
+ 
+-	return amdgpu_ring_test_ring(ring);
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static void gfx_v10_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
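
Across these gfx ring-reset hunks, amdgpu_ring_test_ring(kiq_ring) moves inside the kiq->ring_lock critical section, and the unmap and map requests are split into separately locked submissions. The recurring shape, reduced to one step (a sketch using only the names from the hunks above):

	unsigned long flags;
	int r;

	spin_lock_irqsave(&kiq->ring_lock, flags);
	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->map_queues_size)) {
		spin_unlock_irqrestore(&kiq->ring_lock, flags);
		return -ENOMEM;
	}
	kiq->pmf->kiq_map_queues(kiq_ring, ring);
	amdgpu_ring_commit(kiq_ring);
	/* Test completion while still holding the lock so no other
	 * submitter can interleave with this KIQ transaction. */
	r = amdgpu_ring_test_ring(kiq_ring);
	spin_unlock_irqrestore(&kiq->ring_lock, flags);
	if (r)
		return r;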
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index ec9b84f92d4670..e632e97d63be02 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -6840,7 +6840,11 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+ 		return r;
+ 	}
+ 
+-	return amdgpu_ring_test_ring(ring);
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
+@@ -7000,7 +7004,11 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
+ 		return r;
+ 	}
+ 
+-	return amdgpu_ring_test_ring(ring);
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static void gfx_v11_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 1234c8d64e20d9..50f04c2c0b8c0c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -5335,7 +5335,11 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+ 		return r;
+ 	}
+ 
+-	return amdgpu_ring_test_ring(ring);
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
+@@ -5448,7 +5452,11 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
+ 		return r;
+ 	}
+ 
+-	return amdgpu_ring_test_ring(ring);
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static void gfx_v12_0_ring_begin_use(struct amdgpu_ring *ring)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index ad9be3656653bb..f768c407771ac6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -7280,13 +7280,18 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
+ 	}
+ 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r) {
+ 		DRM_ERROR("fail to remap queue\n");
+ 		return r;
+ 	}
+-	return amdgpu_ring_test_ring(ring);
++
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static void gfx_v9_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index c233edf605694c..b3c842ec17ee2a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -3612,14 +3612,18 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
+ 	}
+ 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
+ 	amdgpu_ring_commit(kiq_ring);
+-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+-
+ 	r = amdgpu_ring_test_ring(kiq_ring);
++	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+ 	if (r) {
+ 		dev_err(adev->dev, "fail to remap queue\n");
+ 		return r;
+ 	}
+-	return amdgpu_ring_test_ring(ring);
++
++	r = amdgpu_ring_test_ring(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ enum amdgpu_gfx_cp_ras_mem_id {
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+index 4cde8a8bcc837a..49620fbf6c7a25 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+@@ -766,9 +766,15 @@ static int jpeg_v2_0_process_interrupt(struct amdgpu_device *adev,
+ 
+ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	int r;
++
+ 	jpeg_v2_0_stop(ring->adev);
+ 	jpeg_v2_0_start(ring->adev);
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amd_ip_funcs jpeg_v2_0_ip_funcs = {
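
The same three-step sequence recurs in every JPEG (and, later in this patch, VCN and SDMA) ring_reset hook: stop or stall the block, restart it, test the ring, then force completion of the fences that the reset threw away so waiters are released. A hypothetical consolidation, to show the shape:

/* Hypothetical helper; each IP version would pass its own stop/start ops. */
static int ring_reset_and_settle(struct amdgpu_ring *ring,
				 void (*stop)(struct amdgpu_ring *ring),
				 void (*start)(struct amdgpu_ring *ring))
{
	int r;

	stop(ring);
	start(ring);

	r = amdgpu_ring_test_helper(ring);
	if (r)
		return r;

	/* The reset dropped in-flight jobs; signal their fences so
	 * anything waiting on them is not stuck forever. */
	amdgpu_fence_driver_force_completion(ring);
	return 0;
}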
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+index 8b39e114f3be14..98ae9c0e83f7ba 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+@@ -645,9 +645,15 @@ static int jpeg_v2_5_process_interrupt(struct amdgpu_device *adev,
+ 
+ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	int r;
++
+ 	jpeg_v2_5_stop_inst(ring->adev, ring->me);
+ 	jpeg_v2_5_start_inst(ring->adev, ring->me);
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amd_ip_funcs jpeg_v2_5_ip_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+index 2f8510c2986b9a..7fb59943036521 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+@@ -557,9 +557,15 @@ static int jpeg_v3_0_process_interrupt(struct amdgpu_device *adev,
+ 
+ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	int r;
++
+ 	jpeg_v3_0_stop(ring->adev);
+ 	jpeg_v3_0_start(ring->adev);
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amd_ip_funcs jpeg_v3_0_ip_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+index f17ec5414fd69d..a6612c942b939b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+@@ -722,12 +722,18 @@ static int jpeg_v4_0_process_interrupt(struct amdgpu_device *adev,
+ 
+ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	int r;
++
+ 	if (amdgpu_sriov_vf(ring->adev))
+ 		return -EINVAL;
+ 
+ 	jpeg_v4_0_stop(ring->adev);
+ 	jpeg_v4_0_start(ring->adev);
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amd_ip_funcs jpeg_v4_0_ip_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index 79e342d5ab28d8..90d773dbe337cd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -1145,12 +1145,18 @@ static void jpeg_v4_0_3_core_stall_reset(struct amdgpu_ring *ring)
+ 
+ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	int r;
++
+ 	if (amdgpu_sriov_vf(ring->adev))
+ 		return -EOPNOTSUPP;
+ 
+ 	jpeg_v4_0_3_core_stall_reset(ring);
+ 	jpeg_v4_0_3_start_jrbc(ring);
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amd_ip_funcs jpeg_v4_0_3_ip_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+index 3b6f65a256464a..7cad77a968f160 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+@@ -836,12 +836,18 @@ static void jpeg_v5_0_1_core_stall_reset(struct amdgpu_ring *ring)
+ 
+ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	int r;
++
+ 	if (amdgpu_sriov_vf(ring->adev))
+ 		return -EOPNOTSUPP;
+ 
+ 	jpeg_v5_0_1_core_stall_reset(ring);
+ 	jpeg_v5_0_1_init_jrbc(ring);
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amd_ip_funcs jpeg_v5_0_1_ip_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
+index a376f072700dc7..1c22bc11c1f85f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_9.c
+@@ -31,9 +31,6 @@
+ 
+ #define NPS_MODE_MASK 0x000000FFL
+ 
+-/* Core 0 Port 0 counter */
+-#define smnPCIEP_NAK_COUNTER 0x1A340218
+-
+ static void nbio_v7_9_remap_hdp_registers(struct amdgpu_device *adev)
+ {
+ 	WREG32_SOC15(NBIO, 0, regBIF_BX0_REMAP_HDP_MEM_FLUSH_CNTL,
+@@ -467,22 +464,6 @@ static void nbio_v7_9_init_registers(struct amdgpu_device *adev)
+ 	}
+ }
+ 
+-static u64 nbio_v7_9_get_pcie_replay_count(struct amdgpu_device *adev)
+-{
+-	u32 val, nak_r, nak_g;
+-
+-	if (adev->flags & AMD_IS_APU)
+-		return 0;
+-
+-	/* Get the number of NAKs received and generated */
+-	val = RREG32_PCIE(smnPCIEP_NAK_COUNTER);
+-	nak_r = val & 0xFFFF;
+-	nak_g = val >> 16;
+-
+-	/* Add the total number of NAKs, i.e the number of replays */
+-	return (nak_r + nak_g);
+-}
+-
+ #define MMIO_REG_HOLE_OFFSET 0x1A000
+ 
+ static void nbio_v7_9_set_reg_remap(struct amdgpu_device *adev)
+@@ -524,7 +505,6 @@ const struct amdgpu_nbio_funcs nbio_v7_9_funcs = {
+ 	.get_memory_partition_mode = nbio_v7_9_get_memory_partition_mode,
+ 	.is_nps_switch_requested = nbio_v7_9_is_nps_switch_requested,
+ 	.init_registers = nbio_v7_9_init_registers,
+-	.get_pcie_replay_count = nbio_v7_9_get_pcie_replay_count,
+ 	.set_reg_remap = nbio_v7_9_set_reg_remap,
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index bb82c652e4c05c..9f0ad119943177 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -1674,6 +1674,7 @@ static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
+ 
+ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+ {
++	bool is_guilty = ring->funcs->is_guilty(ring);
+ 	struct amdgpu_device *adev = ring->adev;
+ 	u32 id = ring->me;
+ 	int r;
+@@ -1681,11 +1682,16 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+ 	if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
+ 		return -EOPNOTSUPP;
+ 
+-	amdgpu_amdkfd_suspend(adev, false);
++	amdgpu_amdkfd_suspend(adev, true);
+ 	r = amdgpu_sdma_reset_engine(adev, id);
+-	amdgpu_amdkfd_resume(adev, false);
++	amdgpu_amdkfd_resume(adev, true);
++	if (r)
++		return r;
+ 
+-	return r;
++	if (is_guilty)
++		amdgpu_fence_driver_force_completion(ring);
++
++	return 0;
+ }
+ 
+ static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
+@@ -1729,8 +1735,8 @@ static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
+ static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+-	u32 inst_mask;
+-	int i;
++	u32 inst_mask, tmp_mask;
++	int i, r;
+ 
+ 	inst_mask = 1 << ring->me;
+ 	udelay(50);
+@@ -1747,7 +1753,24 @@ static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
+ 		return -ETIMEDOUT;
+ 	}
+ 
+-	return sdma_v4_4_2_inst_start(adev, inst_mask, true);
++	r = sdma_v4_4_2_inst_start(adev, inst_mask, true);
++	if (r)
++		return r;
++
++	tmp_mask = inst_mask;
++	for_each_inst(i, tmp_mask) {
++		ring = &adev->sdma.instance[i].ring;
++
++		amdgpu_fence_driver_force_completion(ring);
++
++		if (adev->sdma.has_page_queue) {
++			struct amdgpu_ring *page = &adev->sdma.instance[i].page;
++
++			amdgpu_fence_driver_force_completion(page);
++		}
++	}
++
++	return r;
+ }
+ 
+ static int sdma_v4_4_2_set_trap_irq_state(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 37f4b5b4a098ff..b43d6cb8a0d4ec 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -1616,7 +1616,10 @@ static int sdma_v5_0_restore_queue(struct amdgpu_ring *ring)
+ 
+ 	r = sdma_v5_0_gfx_resume_instance(adev, inst_id, true);
+ 	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+-	return r;
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static int sdma_v5_0_ring_preempt_ib(struct amdgpu_ring *ring)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index 0b40411b92a0b8..a88aa53e887c2a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -1532,7 +1532,10 @@ static int sdma_v5_2_restore_queue(struct amdgpu_ring *ring)
+ 	r = sdma_v5_2_gfx_resume_instance(adev, inst_id, true);
+ 
+ 	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+-	return r;
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static int sdma_v5_2_ring_preempt_ib(struct amdgpu_ring *ring)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+index a9bdf8d61d6ce7..041bca58add556 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+@@ -1572,7 +1572,11 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+ 	if (r)
+ 		return r;
+ 
+-	return sdma_v6_0_gfx_resume_instance(adev, i, true);
++	r = sdma_v6_0_gfx_resume_instance(adev, i, true);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static int sdma_v6_0_set_trap_irq_state(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+index 86903eccbd4e57..b4167f23c02dd1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+@@ -824,7 +824,11 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+ 	if (r)
+ 		return r;
+ 
+-	return sdma_v7_0_gfx_resume_instance(adev, i, true);
++	r = sdma_v7_0_gfx_resume_instance(adev, i, true);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index b5071f77f78d23..46c329a1b2f5f0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -1971,6 +1971,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
++	int r;
+ 
+ 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
+ 		return -EOPNOTSUPP;
+@@ -1978,7 +1979,11 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ 	vcn_v4_0_stop(vinst);
+ 	vcn_v4_0_start(vinst);
+ 
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static struct amdgpu_ring_funcs vcn_v4_0_unified_ring_vm_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index 5a33140f572351..faba11166efb6b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -1621,8 +1621,10 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ 	vcn_v4_0_3_hw_init_inst(vinst);
+ 	vcn_v4_0_3_start_dpg_mode(vinst, adev->vcn.inst[ring->me].indirect_sram);
+ 	r = amdgpu_ring_test_helper(ring);
+-
+-	return r;
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amdgpu_ring_funcs vcn_v4_0_3_unified_ring_vm_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+index 16ade84facc789..af29a8e141a4f4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+@@ -1469,6 +1469,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
++	int r;
+ 
+ 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
+ 		return -EOPNOTSUPP;
+@@ -1476,7 +1477,11 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ 	vcn_v4_0_5_stop(vinst);
+ 	vcn_v4_0_5_start(vinst);
+ 
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+index f8e3f0b882da56..216324f6da85f9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+@@ -1196,6 +1196,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
++	int r;
+ 
+ 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
+ 		return -EOPNOTSUPP;
+@@ -1203,7 +1204,11 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+ 	vcn_v5_0_0_stop(vinst);
+ 	vcn_v5_0_0_start(vinst);
+ 
+-	return amdgpu_ring_test_helper(ring);
++	r = amdgpu_ring_test_helper(ring);
++	if (r)
++		return r;
++	amdgpu_fence_driver_force_completion(ring);
++	return 0;
+ }
+ 
+ static const struct amdgpu_ring_funcs vcn_v5_0_0_unified_ring_vm_funcs = {
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index bf0854bd55551b..097bf675378273 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -971,7 +971,7 @@ int kgd2kfd_pre_reset(struct kfd_dev *kfd,
+ 		kfd_smi_event_update_gpu_reset(node, false, reset_context);
+ 	}
+ 
+-	kgd2kfd_suspend(kfd, false);
++	kgd2kfd_suspend(kfd, true);
+ 
+ 	for (i = 0; i < kfd->num_nodes; i++)
+ 		kfd_signal_reset_event(kfd->nodes[i]);
+@@ -1019,7 +1019,7 @@ bool kfd_is_locked(void)
+ 	return  (kfd_locked > 0);
+ }
+ 
+-void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
++void kgd2kfd_suspend(struct kfd_dev *kfd, bool suspend_proc)
+ {
+ 	struct kfd_node *node;
+ 	int i;
+@@ -1027,14 +1027,8 @@ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
+ 	if (!kfd->init_complete)
+ 		return;
+ 
+-	/* for runtime suspend, skip locking kfd */
+-	if (!run_pm) {
+-		mutex_lock(&kfd_processes_mutex);
+-		/* For first KFD device suspend all the KFD processes */
+-		if (++kfd_locked == 1)
+-			kfd_suspend_all_processes();
+-		mutex_unlock(&kfd_processes_mutex);
+-	}
++	if (suspend_proc)
++		kgd2kfd_suspend_process(kfd);
+ 
+ 	for (i = 0; i < kfd->num_nodes; i++) {
+ 		node = kfd->nodes[i];
+@@ -1042,7 +1036,7 @@ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
+ 	}
+ }
+ 
+-int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
++int kgd2kfd_resume(struct kfd_dev *kfd, bool resume_proc)
+ {
+ 	int ret, i;
+ 
+@@ -1055,14 +1049,36 @@ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
+ 			return ret;
+ 	}
+ 
+-	/* for runtime resume, skip unlocking kfd */
+-	if (!run_pm) {
+-		mutex_lock(&kfd_processes_mutex);
+-		if (--kfd_locked == 0)
+-			ret = kfd_resume_all_processes();
+-		WARN_ONCE(kfd_locked < 0, "KFD suspend / resume ref. error");
+-		mutex_unlock(&kfd_processes_mutex);
+-	}
++	if (resume_proc)
++		ret = kgd2kfd_resume_process(kfd);
++
++	return ret;
++}
++
++void kgd2kfd_suspend_process(struct kfd_dev *kfd)
++{
++	if (!kfd->init_complete)
++		return;
++
++	mutex_lock(&kfd_processes_mutex);
++	/* For first KFD device suspend all the KFD processes */
++	if (++kfd_locked == 1)
++		kfd_suspend_all_processes();
++	mutex_unlock(&kfd_processes_mutex);
++}
++
++int kgd2kfd_resume_process(struct kfd_dev *kfd)
++{
++	int ret = 0;
++
++	if (!kfd->init_complete)
++		return 0;
++
++	mutex_lock(&kfd_processes_mutex);
++	if (--kfd_locked == 0)
++		ret = kfd_resume_all_processes();
++	WARN_ONCE(kfd_locked < 0, "KFD suspend / resume ref. error");
++	mutex_unlock(&kfd_processes_mutex);
+ 
+ 	return ret;
+ }
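
kgd2kfd_suspend_process()/kgd2kfd_resume_process() factor the old inline run_pm logic into a refcounted gate: the first device to suspend quiesces all KFD processes, and the last device to resume restarts them. The pattern in isolation (generic names, a sketch only):

#include <linux/mutex.h>
#include <linux/bug.h>

static DEFINE_MUTEX(gate_lock);
static int gate_count;

/* First suspender performs the global quiesce. */
static void gate_suspend(void (*quiesce_all)(void))
{
	mutex_lock(&gate_lock);
	if (++gate_count == 1)
		quiesce_all();
	mutex_unlock(&gate_lock);
}

/* Last resumer performs the global resume; a negative count means the
 * suspend/resume calls were unbalanced. */
static int gate_resume(int (*resume_all)(void))
{
	int ret = 0;

	mutex_lock(&gate_lock);
	if (--gate_count == 0)
		ret = resume_all();
	WARN_ONCE(gate_count < 0, "suspend/resume ref. error");
	mutex_unlock(&gate_lock);
	return ret;
}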
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+index 79a566f3564a57..c305ea4ec17d21 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.c
+@@ -149,7 +149,7 @@ int phm_wait_on_indirect_register(struct pp_hwmgr *hwmgr,
+ 	}
+ 
+ 	cgs_write_register(hwmgr->device, indirect_port, index);
+-	return phm_wait_on_register(hwmgr, indirect_port + 1, mask, value);
++	return phm_wait_on_register(hwmgr, indirect_port + 1, value, mask);
+ }
+ 
+ int phm_wait_for_register_unequal(struct pp_hwmgr *hwmgr,
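
The smu_helper.c hunk fixes a transposed-argument bug: phm_wait_on_register() takes the expected value before the mask, but the indirect wrapper passed (mask, value), so the wait polled against the wrong bit pattern. With the argument order implied by the fix:

	/* prototype implied by the fix: expected value before the mask */
	int phm_wait_on_register(struct pp_hwmgr *hwmgr, uint32_t index,
				 uint32_t value, uint32_t mask);

	/* broken call polled the wrong bits: */
	phm_wait_on_register(hwmgr, indirect_port + 1, mask, value);
	/* fixed call: */
	phm_wait_on_register(hwmgr, indirect_port + 1, value, mask);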
+diff --git a/drivers/gpu/drm/display/drm_hdmi_state_helper.c b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+index d9d9948b29e9d5..45b154c8abb2cc 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_state_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+@@ -798,12 +798,12 @@ int drm_atomic_helper_connector_hdmi_check(struct drm_connector *connector,
+ 	if (!new_conn_state->crtc || !new_conn_state->best_encoder)
+ 		return 0;
+ 
+-	new_conn_state->hdmi.is_limited_range = hdmi_is_limited_range(connector, new_conn_state);
+-
+ 	ret = hdmi_compute_config(connector, new_conn_state, mode);
+ 	if (ret)
+ 		return ret;
+ 
++	new_conn_state->hdmi.is_limited_range = hdmi_is_limited_range(connector, new_conn_state);
++
+ 	ret = hdmi_generate_infoframes(connector, new_conn_state);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index d6f8b1030c68a4..6c04f41f9bacc3 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -383,6 +383,7 @@ static const struct dpu_perf_cfg sc8180x_perf_data = {
+ 	.min_core_ib = 2400000,
+ 	.min_llcc_ib = 800000,
+ 	.min_dram_ib = 800000,
++	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
+ 	.safe_lut_tbl = {0xfff0, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+index 3385fd3ef41a47..5d0dce10336ba3 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
++++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+@@ -29,7 +29,7 @@ static void panfrost_devfreq_update_utilization(struct panfrost_devfreq *pfdevfr
+ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ 				   u32 flags)
+ {
+-	struct panfrost_device *ptdev = dev_get_drvdata(dev);
++	struct panfrost_device *pfdev = dev_get_drvdata(dev);
+ 	struct dev_pm_opp *opp;
+ 	int err;
+ 
+@@ -40,7 +40,7 @@ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ 
+ 	err = dev_pm_opp_set_rate(dev, *freq);
+ 	if (!err)
+-		ptdev->pfdevfreq.current_frequency = *freq;
++		pfdev->pfdevfreq.current_frequency = *freq;
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
+index 7c00fd77758b15..a123bc740ba146 100644
+--- a/drivers/gpu/drm/panthor/panthor_gem.c
++++ b/drivers/gpu/drm/panthor/panthor_gem.c
+@@ -16,10 +16,15 @@
+ #include "panthor_mmu.h"
+ 
+ #ifdef CONFIG_DEBUG_FS
+-static void panthor_gem_debugfs_bo_add(struct panthor_device *ptdev,
+-				       struct panthor_gem_object *bo)
++static void panthor_gem_debugfs_bo_init(struct panthor_gem_object *bo)
+ {
+ 	INIT_LIST_HEAD(&bo->debugfs.node);
++}
++
++static void panthor_gem_debugfs_bo_add(struct panthor_gem_object *bo)
++{
++	struct panthor_device *ptdev = container_of(bo->base.base.dev,
++						    struct panthor_device, base);
+ 
+ 	bo->debugfs.creator.tgid = current->group_leader->pid;
+ 	get_task_comm(bo->debugfs.creator.process_name, current->group_leader);
+@@ -44,14 +49,13 @@ static void panthor_gem_debugfs_bo_rm(struct panthor_gem_object *bo)
+ 
+ static void panthor_gem_debugfs_set_usage_flags(struct panthor_gem_object *bo, u32 usage_flags)
+ {
+-	bo->debugfs.flags = usage_flags | PANTHOR_DEBUGFS_GEM_USAGE_FLAG_INITIALIZED;
++	bo->debugfs.flags = usage_flags;
++	panthor_gem_debugfs_bo_add(bo);
+ }
+ #else
+-static void panthor_gem_debugfs_bo_add(struct panthor_device *ptdev,
+-				       struct panthor_gem_object *bo)
+-{}
+ static void panthor_gem_debugfs_bo_rm(struct panthor_gem_object *bo) {}
+ static void panthor_gem_debugfs_set_usage_flags(struct panthor_gem_object *bo, u32 usage_flags) {}
++static void panthor_gem_debugfs_bo_init(struct panthor_gem_object *bo) {}
+ #endif
+ 
+ static void panthor_gem_free_object(struct drm_gem_object *obj)
+@@ -246,7 +250,7 @@ struct drm_gem_object *panthor_gem_create_object(struct drm_device *ddev, size_t
+ 	drm_gem_gpuva_set_lock(&obj->base.base, &obj->gpuva_list_lock);
+ 	mutex_init(&obj->label.lock);
+ 
+-	panthor_gem_debugfs_bo_add(ptdev, obj);
++	panthor_gem_debugfs_bo_init(obj);
+ 
+ 	return &obj->base.base;
+ }
+@@ -285,6 +289,8 @@ panthor_gem_create_with_handle(struct drm_file *file,
+ 		bo->base.base.resv = bo->exclusive_vm_root_gem->resv;
+ 	}
+ 
++	panthor_gem_debugfs_set_usage_flags(bo, 0);
++
+ 	/*
+ 	 * Allocate an id of idr table where the obj is registered
+ 	 * and handle has the id what user can see.
+@@ -296,12 +302,6 @@ panthor_gem_create_with_handle(struct drm_file *file,
+ 	/* drop reference from allocate - handle holds it now. */
+ 	drm_gem_object_put(&shmem->base);
+ 
+-	/*
+-	 * No explicit flags are needed in the call below, since the
+-	 * function internally sets the INITIALIZED bit for us.
+-	 */
+-	panthor_gem_debugfs_set_usage_flags(bo, 0);
+-
+ 	return ret;
+ }
+ 
+@@ -387,7 +387,7 @@ static void panthor_gem_debugfs_bo_print(struct panthor_gem_object *bo,
+ 	unsigned int refcount = kref_read(&bo->base.base.refcount);
+ 	char creator_info[32] = {};
+ 	size_t resident_size;
+-	u32 gem_usage_flags = bo->debugfs.flags & (u32)~PANTHOR_DEBUGFS_GEM_USAGE_FLAG_INITIALIZED;
++	u32 gem_usage_flags = bo->debugfs.flags;
+ 	u32 gem_state_flags = 0;
+ 
+ 	/* Skip BOs being destroyed. */
+@@ -436,8 +436,7 @@ void panthor_gem_debugfs_print_bos(struct panthor_device *ptdev,
+ 
+ 	scoped_guard(mutex, &ptdev->gems.lock) {
+ 		list_for_each_entry(bo, &ptdev->gems.node, debugfs.node) {
+-			if (bo->debugfs.flags & PANTHOR_DEBUGFS_GEM_USAGE_FLAG_INITIALIZED)
+-				panthor_gem_debugfs_bo_print(bo, m, &totals);
++			panthor_gem_debugfs_bo_print(bo, m, &totals);
+ 		}
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
+index 4dd732dcd59f0a..8fc7215e9b900e 100644
+--- a/drivers/gpu/drm/panthor/panthor_gem.h
++++ b/drivers/gpu/drm/panthor/panthor_gem.h
+@@ -35,9 +35,6 @@ enum panthor_debugfs_gem_usage_flags {
+ 
+ 	/** @PANTHOR_DEBUGFS_GEM_USAGE_FLAG_FW_MAPPED: BO is mapped on the FW VM. */
+ 	PANTHOR_DEBUGFS_GEM_USAGE_FLAG_FW_MAPPED = BIT(PANTHOR_DEBUGFS_GEM_USAGE_FW_MAPPED_BIT),
+-
+-	/** @PANTHOR_DEBUGFS_GEM_USAGE_FLAG_INITIALIZED: BO is ready for DebugFS display. */
+-	PANTHOR_DEBUGFS_GEM_USAGE_FLAG_INITIALIZED = BIT(31),
+ };
+ 
+ /**
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+index dcc1f07632c3a1..5829ee061c61bb 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_fb.c
+@@ -52,16 +52,9 @@ rockchip_fb_create(struct drm_device *dev, struct drm_file *file,
+ 	}
+ 
+ 	if (drm_is_afbc(mode_cmd->modifier[0])) {
+-		int ret, i;
+-
+ 		ret = drm_gem_fb_afbc_init(dev, mode_cmd, afbc_fb);
+ 		if (ret) {
+-			struct drm_gem_object **obj = afbc_fb->base.obj;
+-
+-			for (i = 0; i < info->num_planes; ++i)
+-				drm_gem_object_put(obj[i]);
+-
+-			kfree(afbc_fb);
++			drm_framebuffer_put(&afbc_fb->base);
+ 			return ERR_PTR(ret);
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index d0f5fea15e21fa..186f6452a7d359 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -146,25 +146,6 @@ static void vop2_unlock(struct vop2 *vop2)
+ 	mutex_unlock(&vop2->vop2_lock);
+ }
+ 
+-/*
+- * Note:
+- * The write mask function is documented but missing on rk3566/8, writes
+- * to these bits have no effect. For newer soc(rk3588 and following) the
+- * write mask is needed for register writes.
+- *
+- * GLB_CFG_DONE_EN has no write mask bit.
+- *
+- */
+-static void vop2_cfg_done(struct vop2_video_port *vp)
+-{
+-	struct vop2 *vop2 = vp->vop2;
+-	u32 val = RK3568_REG_CFG_DONE__GLB_CFG_DONE_EN;
+-
+-	val |= BIT(vp->id) | (BIT(vp->id) << 16);
+-
+-	regmap_set_bits(vop2->map, RK3568_REG_CFG_DONE, val);
+-}
+-
+ static void vop2_win_disable(struct vop2_win *win)
+ {
+ 	vop2_win_write(win, VOP2_WIN_ENABLE, 0);
+@@ -854,6 +835,11 @@ static void vop2_enable(struct vop2 *vop2)
+ 	if (vop2->version == VOP_VERSION_RK3588)
+ 		rk3588_vop2_power_domain_enable_all(vop2);
+ 
++	if (vop2->version <= VOP_VERSION_RK3588) {
++		vop2->old_layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++		vop2->old_port_sel = vop2_readl(vop2, RK3568_OVL_PORT_SEL);
++	}
++
+ 	vop2_writel(vop2, RK3568_REG_CFG_DONE, RK3568_REG_CFG_DONE__GLB_CFG_DONE_EN);
+ 
+ 	/*
+@@ -2422,6 +2408,10 @@ static int vop2_create_crtcs(struct vop2 *vop2)
+ 				break;
+ 			}
+ 		}
++
++		if (!vp->primary_plane)
++			return dev_err_probe(drm->dev, -ENOENT,
++					     "no primary plane for vp %d\n", i);
+ 	}
+ 
+ 	/* Register all unused window as overlay plane */
+@@ -2724,6 +2714,7 @@ static int vop2_bind(struct device *dev, struct device *master, void *data)
+ 		return dev_err_probe(drm->dev, vop2->irq, "cannot find irq for vop2\n");
+ 
+ 	mutex_init(&vop2->vop2_lock);
++	mutex_init(&vop2->ovl_lock);
+ 
+ 	ret = devm_request_irq(dev, vop2->irq, vop2_isr, IRQF_SHARED, dev_name(dev), vop2);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+index fc3ecb9fcd9576..fa5c56f16047e3 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+@@ -334,6 +334,19 @@ struct vop2 {
+ 	/* optional internal rgb encoder */
+ 	struct rockchip_rgb *rgb;
+ 
++	/*
++	 * Used to record layer selection configuration on rk356x/rk3588
++	 * as register RK3568_OVL_LAYER_SEL and RK3568_OVL_PORT_SEL are
++	 * shared for all the Video Ports.
++	 */
++	u32 old_layer_sel;
++	u32 old_port_sel;
++	/*
++	 * Ensure that the updates to these two registers (RK3568_OVL_LAYER_SEL/RK3568_OVL_PORT_SEL)
++	 * take effect in sequence.
++	 */
++	struct mutex ovl_lock;
++
+ 	/* must be put at the end of the struct */
+ 	struct vop2_win win[];
+ };
+@@ -727,6 +740,7 @@ enum dst_factor_mode {
+ #define RK3588_OVL_PORT_SEL__CLUSTER2			GENMASK(21, 20)
+ #define RK3568_OVL_PORT_SEL__CLUSTER1			GENMASK(19, 18)
+ #define RK3568_OVL_PORT_SEL__CLUSTER0			GENMASK(17, 16)
++#define RK3588_OVL_PORT_SET__PORT3_MUX			GENMASK(15, 12)
+ #define RK3568_OVL_PORT_SET__PORT2_MUX			GENMASK(11, 8)
+ #define RK3568_OVL_PORT_SET__PORT1_MUX			GENMASK(7, 4)
+ #define RK3568_OVL_PORT_SET__PORT0_MUX			GENMASK(3, 0)
+@@ -831,4 +845,23 @@ static inline struct vop2_win *to_vop2_win(struct drm_plane *p)
+ 	return container_of(p, struct vop2_win, base);
+ }
+ 
++/*
++ * Note:
++ * The write mask function is documented but missing on rk3566/8; writes
++ * to these bits have no effect. For newer SoCs (rk3588 and following) the
++ * write mask is needed for register writes.
++ *
++ * GLB_CFG_DONE_EN has no write mask bit.
++ *
++ */
++static inline void vop2_cfg_done(struct vop2_video_port *vp)
++{
++	struct vop2 *vop2 = vp->vop2;
++	u32 val = RK3568_REG_CFG_DONE__GLB_CFG_DONE_EN;
++
++	val |= BIT(vp->id) | (BIT(vp->id) << 16);
++
++	regmap_set_bits(vop2->map, RK3568_REG_CFG_DONE, val);
++}
++
+ #endif /* _ROCKCHIP_DRM_VOP2_H */
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+index 32c4ed6857395a..45c5e398781331 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+@@ -2052,12 +2052,55 @@ static void vop2_setup_alpha(struct vop2_video_port *vp)
+ 	}
+ }
+ 
++static u32 rk3568_vop2_read_port_mux(struct vop2 *vop2)
++{
++	return vop2_readl(vop2, RK3568_OVL_PORT_SEL);
++}
++
++static void rk3568_vop2_wait_for_port_mux_done(struct vop2 *vop2)
++{
++	u32 port_mux_sel;
++	int ret;
++
++	/*
++	 * Spin until the previous port_mux configuration is done.
++	 */
++	ret = readx_poll_timeout_atomic(rk3568_vop2_read_port_mux, vop2, port_mux_sel,
++					port_mux_sel == vop2->old_port_sel, 0, 50 * 1000);
++	if (ret)
++		DRM_DEV_ERROR(vop2->dev, "wait port_mux done timeout: 0x%x--0x%x\n",
++			      port_mux_sel, vop2->old_port_sel);
++}
++
++static u32 rk3568_vop2_read_layer_cfg(struct vop2 *vop2)
++{
++	return vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++}
++
++static void rk3568_vop2_wait_for_layer_cfg_done(struct vop2 *vop2, u32 cfg)
++{
++	u32 atv_layer_cfg;
++	int ret;
++
++	/*
++	 * Spin until the previous layer configuration is done.
++	 */
++	ret = readx_poll_timeout_atomic(rk3568_vop2_read_layer_cfg, vop2, atv_layer_cfg,
++					atv_layer_cfg == cfg, 0, 50 * 1000);
++	if (ret)
++		DRM_DEV_ERROR(vop2->dev, "wait layer cfg done timeout: 0x%x--0x%x\n",
++			      atv_layer_cfg, cfg);
++}
++
+ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ {
+ 	struct vop2 *vop2 = vp->vop2;
+ 	struct drm_plane *plane;
+ 	u32 layer_sel = 0;
+ 	u32 port_sel;
++	u32 old_layer_sel = 0;
++	u32 atv_layer_sel = 0;
++	u32 old_port_sel = 0;
+ 	u8 layer_id;
+ 	u8 old_layer_id;
+ 	u8 layer_sel_id;
+@@ -2069,19 +2112,18 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 	struct vop2_video_port *vp2 = &vop2->vps[2];
+ 	struct rockchip_crtc_state *vcstate = to_rockchip_crtc_state(vp->crtc.state);
+ 
++	mutex_lock(&vop2->ovl_lock);
+ 	ovl_ctrl = vop2_readl(vop2, RK3568_OVL_CTRL);
+ 	ovl_ctrl &= ~RK3568_OVL_CTRL__LAYERSEL_REGDONE_IMD;
+ 	ovl_ctrl &= ~RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL;
+-	ovl_ctrl |= FIELD_PREP(RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL, vp->id);
+ 
+ 	if (vcstate->yuv_overlay)
+ 		ovl_ctrl |= RK3568_OVL_CTRL__YUV_MODE(vp->id);
+ 	else
+ 		ovl_ctrl &= ~RK3568_OVL_CTRL__YUV_MODE(vp->id);
+ 
+-	vop2_writel(vop2, RK3568_OVL_CTRL, ovl_ctrl);
+-
+-	port_sel = vop2_readl(vop2, RK3568_OVL_PORT_SEL);
++	old_port_sel = vop2->old_port_sel;
++	port_sel = old_port_sel;
+ 	port_sel &= RK3568_OVL_PORT_SEL__SEL_PORT;
+ 
+ 	if (vp0->nlayers)
+@@ -2102,7 +2144,13 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 	else
+ 		port_sel |= FIELD_PREP(RK3568_OVL_PORT_SET__PORT2_MUX, 8);
+ 
+-	layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++	/* Fixed value for rk3588 */
++	if (vop2->version == VOP_VERSION_RK3588)
++		port_sel |= FIELD_PREP(RK3588_OVL_PORT_SET__PORT3_MUX, 7);
++
++	atv_layer_sel = vop2_readl(vop2, RK3568_OVL_LAYER_SEL);
++	old_layer_sel = vop2->old_layer_sel;
++	layer_sel = old_layer_sel;
+ 
+ 	ofs = 0;
+ 	for (i = 0; i < vp->id; i++)
+@@ -2186,8 +2234,37 @@ static void rk3568_vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ 			     old_win->data->layer_sel_id[vp->id]);
+ 	}
+ 
++	vop2->old_layer_sel = layer_sel;
++	vop2->old_port_sel = port_sel;
++	/*
++	 * RK3568_OVL_LAYER_SEL and RK3568_OVL_PORT_SEL are shared by all Video Ports,
++	 * and the configuration takes effect on one Video Port's vsync.
++	 * When migrating a layer or changing the zpos of layers, there are two rules
++	 * to be observed and followed:
++	 * 1. When a layer is migrated from one VP to another, the configuration of the layer
++	 *    can only take effect after the Port mux configuration is enabled.
++	 *
++	 * 2. When we change the zpos of layers, we must ensure that the change for the previous
++	 *    VP takes effect before we proceed to change the next VP. Otherwise, the new
++	 *    configuration might overwrite the previous one for the previous VP, or it could
++	 *    lead to the configuration of the previous VP taking effect along with the VSYNC
++	 *    of the new VP.
++	 */
++	if (layer_sel != old_layer_sel || port_sel != old_port_sel)
++		ovl_ctrl |= FIELD_PREP(RK3568_OVL_CTRL__LAYERSEL_REGDONE_SEL, vp->id);
++	vop2_writel(vop2, RK3568_OVL_CTRL, ovl_ctrl);
++
++	if (port_sel != old_port_sel) {
++		vop2_writel(vop2, RK3568_OVL_PORT_SEL, port_sel);
++		vop2_cfg_done(vp);
++		rk3568_vop2_wait_for_port_mux_done(vop2);
++	}
++
++	if (layer_sel != old_layer_sel && atv_layer_sel != old_layer_sel)
++		rk3568_vop2_wait_for_layer_cfg_done(vop2, vop2->old_layer_sel);
++
+ 	vop2_writel(vop2, RK3568_OVL_LAYER_SEL, layer_sel);
+-	vop2_writel(vop2, RK3568_OVL_PORT_SEL, port_sel);
++	mutex_unlock(&vop2->ovl_lock);
+ }
+ 
+ static void rk3568_vop2_setup_dly_for_windows(struct vop2_video_port *vp)
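
Both wait helpers above are instances of the readx_poll_timeout_atomic() idiom from <linux/iopoll.h>: re-read a register through an accessor until it matches the value last committed, spinning with a bounded timeout. A generic sketch of the same idiom:

#include <linux/io.h>
#include <linux/iopoll.h>

static u32 read_reg(void __iomem *reg)
{
	return readl(reg);
}

/* Spin (no inter-read delay) until *reg == expected, up to 50 ms. */
static int wait_for_reg_value(void __iomem *reg, u32 expected)
{
	u32 val;

	return readx_poll_timeout_atomic(read_reg, reg, val,
					 val == expected, 0, 50 * 1000);
}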
+diff --git a/drivers/gpu/drm/sitronix/Kconfig b/drivers/gpu/drm/sitronix/Kconfig
+index 741d1bb4b83f7f..6de7d92d9b74c7 100644
+--- a/drivers/gpu/drm/sitronix/Kconfig
++++ b/drivers/gpu/drm/sitronix/Kconfig
+@@ -11,10 +11,6 @@ config DRM_ST7571_I2C
+ 
+ 	  if M is selected the module will be called st7571-i2c.
+ 
+-config TINYDRM_ST7586
+-	tristate
+-	default n
+-
+ config DRM_ST7586
+ 	tristate "DRM support for Sitronix ST7586 display panels"
+ 	depends on DRM && SPI
+@@ -22,17 +18,12 @@ config DRM_ST7586
+ 	select DRM_KMS_HELPER
+ 	select DRM_GEM_DMA_HELPER
+ 	select DRM_MIPI_DBI
+-	default TINYDRM_ST7586
+ 	help
+ 	  DRM driver for the following Sitronix ST7586 panels:
+ 	  * LEGO MINDSTORMS EV3
+ 
+ 	  If M is selected the module will be called st7586.
+ 
+-config TINYDRM_ST7735R
+-	tristate
+-	default n
+-
+ config DRM_ST7735R
+ 	tristate "DRM support for Sitronix ST7715R/ST7735R display panels"
+ 	depends on DRM && SPI
+@@ -41,7 +32,6 @@ config DRM_ST7735R
+ 	select DRM_GEM_DMA_HELPER
+ 	select DRM_MIPI_DBI
+ 	select BACKLIGHT_CLASS_DEVICE
+-	default TINYDRM_ST7735R
+ 	help
+ 	  DRM driver for Sitronix ST7715R/ST7735R with one of the following
+ 	  LCDs:
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+index 7fb1c88bcc475f..69dfe69ce0f87d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+@@ -896,7 +896,7 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ 		.busy_domain = VMW_BO_DOMAIN_SYS,
+ 		.bo_type = ttm_bo_type_device,
+ 		.size = size,
+-		.pin = true,
++		.pin = false,
+ 		.keep_resv = true,
+ 	};
+ 
+diff --git a/drivers/gpu/drm/xe/xe_configfs.c b/drivers/gpu/drm/xe/xe_configfs.c
+index cb9f175c89a1c9..9a2b96b111ef54 100644
+--- a/drivers/gpu/drm/xe/xe_configfs.c
++++ b/drivers/gpu/drm/xe/xe_configfs.c
+@@ -133,7 +133,8 @@ static struct config_group *xe_config_make_device_group(struct config_group *gro
+ 
+ 	pdev = pci_get_domain_bus_and_slot(domain, bus, PCI_DEVFN(slot, function));
+ 	if (!pdev)
+-		return ERR_PTR(-EINVAL);
++		return ERR_PTR(-ENODEV);
++	pci_dev_put(pdev);
+ 
+ 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
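
pci_get_domain_bus_and_slot() returns a referenced struct pci_dev, and xe_configfs only needs it as an existence check, so the fix both drops the reference immediately and returns -ENODEV (the conventional "no such device" error) instead of -EINVAL. The idiom:

	struct pci_dev *pdev;

	pdev = pci_get_domain_bus_and_slot(domain, bus, PCI_DEVFN(slot, function));
	if (!pdev)
		return ERR_PTR(-ENODEV);
	pci_dev_put(pdev);	/* lookup was only an existence check */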
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index e9f3c1a53db229..7f839c3b9a140b 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -685,6 +685,7 @@ static void sriov_update_device_info(struct xe_device *xe)
+ 	/* disable features that are not available/applicable to VFs */
+ 	if (IS_SRIOV_VF(xe)) {
+ 		xe->info.probe_display = 0;
++		xe->info.has_heci_cscfi = 0;
+ 		xe->info.has_heci_gscfi = 0;
+ 		xe->info.skip_guc_pc = 1;
+ 		xe->info.skip_pcode = 1;
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+index 35489fa818259d..2ea81d81c0aeb4 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
+@@ -47,9 +47,16 @@ static int pf_alloc_metadata(struct xe_gt *gt)
+ 
+ static void pf_init_workers(struct xe_gt *gt)
+ {
++	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
+ 	INIT_WORK(&gt->sriov.pf.workers.restart, pf_worker_restart_func);
+ }
+ 
++static void pf_fini_workers(struct xe_gt *gt)
++{
++	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
++	disable_work_sync(&gt->sriov.pf.workers.restart);
++}
++
+ /**
+  * xe_gt_sriov_pf_init_early - Prepare SR-IOV PF data structures on PF.
+  * @gt: the &xe_gt to initialize
+@@ -79,6 +86,21 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
+ 	return 0;
+ }
+ 
++static void pf_fini_action(void *arg)
++{
++	struct xe_gt *gt = arg;
++
++	pf_fini_workers(gt);
++}
++
++static int pf_init_late(struct xe_gt *gt)
++{
++	struct xe_device *xe = gt_to_xe(gt);
++
++	xe_gt_assert(gt, IS_SRIOV_PF(xe));
++	return devm_add_action_or_reset(xe->drm.dev, pf_fini_action, gt);
++}
++
+ /**
+  * xe_gt_sriov_pf_init - Prepare SR-IOV PF data structures on PF.
+  * @gt: the &xe_gt to initialize
+@@ -95,7 +117,15 @@ int xe_gt_sriov_pf_init(struct xe_gt *gt)
+ 	if (err)
+ 		return err;
+ 
+-	return xe_gt_sriov_pf_migration_init(gt);
++	err = xe_gt_sriov_pf_migration_init(gt);
++	if (err)
++		return err;
++
++	err = pf_init_late(gt);
++	if (err)
++		return err;
++
++	return 0;
+ }
+ 
+ static bool pf_needs_enable_ggtt_guest_update(struct xe_device *xe)
+diff --git a/drivers/gpu/drm/xe/xe_vsec.c b/drivers/gpu/drm/xe/xe_vsec.c
+index b378848d3b7bc7..56930ad4296216 100644
+--- a/drivers/gpu/drm/xe/xe_vsec.c
++++ b/drivers/gpu/drm/xe/xe_vsec.c
+@@ -24,6 +24,7 @@
+ #define BMG_DEVICE_ID 0xE2F8
+ 
+ static struct intel_vsec_header bmg_telemetry = {
++	.rev = 1,
+ 	.length = 0x10,
+ 	.id = VSEC_ID_TELEMETRY,
+ 	.num_entries = 2,
+@@ -32,28 +33,19 @@ static struct intel_vsec_header bmg_telemetry = {
+ 	.offset = BMG_DISCOVERY_OFFSET,
+ };
+ 
+-static struct intel_vsec_header bmg_punit_crashlog = {
++static struct intel_vsec_header bmg_crashlog = {
++	.rev = 1,
+ 	.length = 0x10,
+ 	.id = VSEC_ID_CRASHLOG,
+-	.num_entries = 1,
+-	.entry_size = 4,
++	.num_entries = 2,
++	.entry_size = 6,
+ 	.tbir = 0,
+ 	.offset = BMG_DISCOVERY_OFFSET + 0x60,
+ };
+ 
+-static struct intel_vsec_header bmg_oobmsm_crashlog = {
+-	.length = 0x10,
+-	.id = VSEC_ID_CRASHLOG,
+-	.num_entries = 1,
+-	.entry_size = 4,
+-	.tbir = 0,
+-	.offset = BMG_DISCOVERY_OFFSET + 0x78,
+-};
+-
+ static struct intel_vsec_header *bmg_capabilities[] = {
+ 	&bmg_telemetry,
+-	&bmg_punit_crashlog,
+-	&bmg_oobmsm_crashlog,
++	&bmg_crashlog,
+ 	NULL
+ };
+ 
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 0639b1f43d884e..3bed3f0c90c2e4 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -890,7 +890,8 @@ static int apple_magic_backlight_init(struct hid_device *hdev)
+ 	backlight->brightness = report_enum->report_id_hash[APPLE_MAGIC_REPORT_ID_BRIGHTNESS];
+ 	backlight->power = report_enum->report_id_hash[APPLE_MAGIC_REPORT_ID_POWER];
+ 
+-	if (!backlight->brightness || !backlight->power)
++	if (!backlight->brightness || backlight->brightness->maxfield < 2 ||
++	    !backlight->power || backlight->power->maxfield < 2)
+ 		return -ENODEV;
+ 
+ 	backlight->cdev.name = ":white:" LED_FUNCTION_KBD_BACKLIGHT;
+@@ -933,10 +934,12 @@ static int apple_probe(struct hid_device *hdev,
+ 		return ret;
+ 	}
+ 
+-	timer_setup(&asc->battery_timer, apple_battery_timer_tick, 0);
+-	mod_timer(&asc->battery_timer,
+-		  jiffies + msecs_to_jiffies(APPLE_BATTERY_TIMEOUT_MS));
+-	apple_fetch_battery(hdev);
++	if (quirks & APPLE_RDESC_BATTERY) {
++		timer_setup(&asc->battery_timer, apple_battery_timer_tick, 0);
++		mod_timer(&asc->battery_timer,
++			  jiffies + msecs_to_jiffies(APPLE_BATTERY_TIMEOUT_MS));
++		apple_fetch_battery(hdev);
++	}
+ 
+ 	if (quirks & APPLE_BACKLIGHT_CTL)
+ 		apple_backlight_init(hdev);
+@@ -950,7 +953,9 @@ static int apple_probe(struct hid_device *hdev,
+ 	return 0;
+ 
+ out_err:
+-	timer_delete_sync(&asc->battery_timer);
++	if (quirks & APPLE_RDESC_BATTERY)
++		timer_delete_sync(&asc->battery_timer);
++
+ 	hid_hw_stop(hdev);
+ 	return ret;
+ }
+@@ -959,7 +964,8 @@ static void apple_remove(struct hid_device *hdev)
+ {
+ 	struct apple_sc *asc = hid_get_drvdata(hdev);
+ 
+-	timer_delete_sync(&asc->battery_timer);
++	if (asc->quirks & APPLE_RDESC_BATTERY)
++		timer_delete_sync(&asc->battery_timer);
+ 
+ 	hid_hw_stop(hdev);
+ }
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index b31b8a2fd540bd..b9748366c6d644 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -66,8 +66,12 @@ static s32 snto32(__u32 value, unsigned int n)
+ 
+ static u32 s32ton(__s32 value, unsigned int n)
+ {
+-	s32 a = value >> (n - 1);
++	s32 a;
+ 
++	if (!value || !n)
++		return 0;
++
++	a = value >> (n - 1);
+ 	if (a && a != -1)
+ 		return value < 0 ? 1 << (n - 1) : (1 << (n - 1)) - 1;
+ 	return value & ((1 << n) - 1);
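
The hid-core hunk guards s32ton() against n == 0, where value >> (n - 1) would shift by an out-of-range amount (undefined behaviour in C). A standalone userspace illustration of the clamped conversion, assuming the same semantics as the kernel function:

#include <stdint.h>
#include <stdio.h>

/* Convert a signed 32-bit value into an n-bit field, saturating on
 * overflow; n == 0 (or value == 0) now yields 0 with no bad shift. */
static uint32_t s32ton(int32_t value, unsigned int n)
{
	int32_t a;

	if (!value || !n)
		return 0;

	a = value >> (n - 1);
	if (a && a != -1)	/* value does not fit in n bits: saturate */
		return value < 0 ? 1 << (n - 1) : (1 << (n - 1)) - 1;
	return value & ((1 << n) - 1);
}

int main(void)
{
	printf("%u\n", s32ton(-5, 4));	/* 11: two's complement of -5 in 4 bits */
	printf("%u\n", s32ton(200, 4));	/* 7: saturated to the 4-bit maximum */
	printf("%u\n", s32ton(7, 0));	/* 0: previously an undefined shift */
	return 0;
}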
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index 36f034ac605dcb..226682762db365 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -791,17 +791,31 @@ static void magicmouse_enable_mt_work(struct work_struct *work)
+ 		hid_err(msc->hdev, "unable to request touch data (%d)\n", ret);
+ }
+ 
++static bool is_usb_magicmouse2(__u32 vendor, __u32 product)
++{
++	if (vendor != USB_VENDOR_ID_APPLE)
++		return false;
++	return product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
++	       product == USB_DEVICE_ID_APPLE_MAGICMOUSE2_USBC;
++}
++
++static bool is_usb_magictrackpad2(__u32 vendor, __u32 product)
++{
++	if (vendor != USB_VENDOR_ID_APPLE)
++		return false;
++	return product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++	       product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC;
++}
++
+ static int magicmouse_fetch_battery(struct hid_device *hdev)
+ {
+ #ifdef CONFIG_HID_BATTERY_STRENGTH
+ 	struct hid_report_enum *report_enum;
+ 	struct hid_report *report;
+ 
+-	if (!hdev->battery || hdev->vendor != USB_VENDOR_ID_APPLE ||
+-	    (hdev->product != USB_DEVICE_ID_APPLE_MAGICMOUSE2 &&
+-	     hdev->product != USB_DEVICE_ID_APPLE_MAGICMOUSE2_USBC &&
+-	     hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
+-	     hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC))
++	if (!hdev->battery ||
++	    (!is_usb_magicmouse2(hdev->vendor, hdev->product) &&
++	     !is_usb_magictrackpad2(hdev->vendor, hdev->product)))
+ 		return -1;
+ 
+ 	report_enum = &hdev->report_enum[hdev->battery_report_type];
+@@ -863,17 +877,17 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 		return ret;
+ 	}
+ 
+-	timer_setup(&msc->battery_timer, magicmouse_battery_timer_tick, 0);
+-	mod_timer(&msc->battery_timer,
+-		  jiffies + msecs_to_jiffies(USB_BATTERY_TIMEOUT_MS));
+-	magicmouse_fetch_battery(hdev);
+-
+-	if (id->vendor == USB_VENDOR_ID_APPLE &&
+-	    (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+-	     id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2_USBC ||
+-	     ((id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
+-	       id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
+-	      hdev->type != HID_TYPE_USBMOUSE)))
++	if (is_usb_magicmouse2(id->vendor, id->product) ||
++	    is_usb_magictrackpad2(id->vendor, id->product)) {
++		timer_setup(&msc->battery_timer, magicmouse_battery_timer_tick, 0);
++		mod_timer(&msc->battery_timer,
++			  jiffies + msecs_to_jiffies(USB_BATTERY_TIMEOUT_MS));
++		magicmouse_fetch_battery(hdev);
++	}
++
++	if (is_usb_magicmouse2(id->vendor, id->product) ||
++	    (is_usb_magictrackpad2(id->vendor, id->product) &&
++	     hdev->type != HID_TYPE_USBMOUSE))
+ 		return 0;
+ 
+ 	if (!msc->input) {
+@@ -936,7 +950,10 @@ static int magicmouse_probe(struct hid_device *hdev,
+ 
+ 	return 0;
+ err_stop_hw:
+-	timer_delete_sync(&msc->battery_timer);
++	if (is_usb_magicmouse2(id->vendor, id->product) ||
++	    is_usb_magictrackpad2(id->vendor, id->product))
++		timer_delete_sync(&msc->battery_timer);
++
+ 	hid_hw_stop(hdev);
+ 	return ret;
+ }
+@@ -947,7 +964,9 @@ static void magicmouse_remove(struct hid_device *hdev)
+ 
+ 	if (msc) {
+ 		cancel_delayed_work_sync(&msc->work);
+-		timer_delete_sync(&msc->battery_timer);
++		if (is_usb_magicmouse2(hdev->vendor, hdev->product) ||
++		    is_usb_magictrackpad2(hdev->vendor, hdev->product))
++			timer_delete_sync(&msc->battery_timer);
+ 	}
+ 
+ 	hid_hw_stop(hdev);
+@@ -964,11 +983,8 @@ static const __u8 *magicmouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	 *   0x05, 0x01,       // Usage Page (Generic Desktop)        0
+ 	 *   0x09, 0x02,       // Usage (Mouse)                       2
+ 	 */
+-	if (hdev->vendor == USB_VENDOR_ID_APPLE &&
+-	    (hdev->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+-	     hdev->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2_USBC ||
+-	     hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
+-	     hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
++	if ((is_usb_magicmouse2(hdev->vendor, hdev->product) ||
++	     is_usb_magictrackpad2(hdev->vendor, hdev->product)) &&
+ 	    *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) {
+ 		hid_info(hdev,
+ 			 "fixing up magicmouse battery report descriptor\n");
+diff --git a/drivers/i2c/muxes/i2c-mux-mule.c b/drivers/i2c/muxes/i2c-mux-mule.c
+index 284ff4afeeacab..d3b32b794172ad 100644
+--- a/drivers/i2c/muxes/i2c-mux-mule.c
++++ b/drivers/i2c/muxes/i2c-mux-mule.c
+@@ -47,7 +47,6 @@ static int mule_i2c_mux_probe(struct platform_device *pdev)
+ 	struct mule_i2c_reg_mux *priv;
+ 	struct i2c_client *client;
+ 	struct i2c_mux_core *muxc;
+-	struct device_node *dev;
+ 	unsigned int readback;
+ 	int ndev, ret;
+ 	bool old_fw;
+@@ -95,7 +94,7 @@ static int mule_i2c_mux_probe(struct platform_device *pdev)
+ 				     "Failed to register mux remove\n");
+ 
+ 	/* Create device adapters */
+-	for_each_child_of_node(mux_dev->of_node, dev) {
++	for_each_child_of_node_scoped(mux_dev->of_node, dev) {
+ 		u32 reg;
+ 
+ 		ret = of_property_read_u32(dev, "reg", &reg);
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 7e1a7cb94b4361..ece56335389582 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -104,6 +104,7 @@
+ #define   SVC_I3C_MDATACTRL_TXTRIG_FIFO_NOT_FULL GENMASK(5, 4)
+ #define   SVC_I3C_MDATACTRL_RXTRIG_FIFO_NOT_EMPTY 0
+ #define   SVC_I3C_MDATACTRL_RXCOUNT(x) FIELD_GET(GENMASK(28, 24), (x))
++#define   SVC_I3C_MDATACTRL_TXCOUNT(x) FIELD_GET(GENMASK(20, 16), (x))
+ #define   SVC_I3C_MDATACTRL_TXFULL BIT(30)
+ #define   SVC_I3C_MDATACTRL_RXEMPTY BIT(31)
+ 
+@@ -1304,14 +1305,19 @@ static int svc_i3c_master_xfer(struct svc_i3c_master *master,
+ 		 * FIFO start filling as soon as possible after EmitStartAddr.
+ 		 */
+ 		if (svc_has_quirk(master, SVC_I3C_QUIRK_FIFO_EMPTY) && !rnw && xfer_len) {
+-			u32 end = xfer_len > SVC_I3C_FIFO_SIZE ? 0 : SVC_I3C_MWDATAB_END;
+-			u32 len = min_t(u32, xfer_len, SVC_I3C_FIFO_SIZE);
+-
+-			writesb(master->regs + SVC_I3C_MWDATAB1, out, len - 1);
+-			/* Mark END bit if this is the last byte */
+-			writel(out[len - 1] | end, master->regs + SVC_I3C_MWDATAB);
+-			xfer_len -= len;
+-			out += len;
++			u32 space, end, len;
++
++			reg = readl(master->regs + SVC_I3C_MDATACTRL);
++			space = SVC_I3C_FIFO_SIZE - SVC_I3C_MDATACTRL_TXCOUNT(reg);
++			if (space) {
++				end = xfer_len > space ? 0 : SVC_I3C_MWDATAB_END;
++				len = min_t(u32, xfer_len, space);
++				writesb(master->regs + SVC_I3C_MWDATAB1, out, len - 1);
++				/* Mark END bit if this is the last byte */
++				writel(out[len - 1] | end, master->regs + SVC_I3C_MWDATAB);
++				xfer_len -= len;
++				out += len;
++			}
+ 		}
+ 
+ 		ret = readl_poll_timeout(master->regs + SVC_I3C_MSTATUS, reg,
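
The reworked SVC_I3C_QUIRK_FIFO_EMPTY path above no longer assumes the
TX FIFO is empty: it reads MDATACTRL, derives the free space from the
new TXCOUNT field, and queues at most that many bytes. A worked
example, assuming a 16-byte FIFO (SVC_I3C_FIFO_SIZE is defined
elsewhere in this driver): if TXCOUNT reads 4, space = 16 - 4 = 12; a
20-byte transfer then writes 12 bytes now with the END bit clear
(xfer_len > space), and the remaining 8 bytes go out through the normal
refill path once the controller drains the FIFO.
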
+diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c
+index e6ec7b7a40afa4..c3aa6d7fc66b6e 100644
+--- a/drivers/infiniband/core/counters.c
++++ b/drivers/infiniband/core/counters.c
+@@ -461,7 +461,7 @@ static struct ib_qp *rdma_counter_get_qp(struct ib_device *dev, u32 qp_num)
+ 		return NULL;
+ 
+ 	qp = container_of(res, struct ib_qp, res);
+-	if (qp->qp_type == IB_QPT_RAW_PACKET && !capable(CAP_NET_RAW))
++	if (qp->qp_type == IB_QPT_RAW_PACKET && !rdma_dev_has_raw_cap(dev))
+ 		goto err;
+ 
+ 	return qp;
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index d4263385850a7a..792824e0ab2cb8 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -145,6 +145,33 @@ bool rdma_dev_access_netns(const struct ib_device *dev, const struct net *net)
+ }
+ EXPORT_SYMBOL(rdma_dev_access_netns);
+ 
++/**
++ * rdma_dev_has_raw_cap() - Returns whether a specified rdma device has
++ *			    CAP_NET_RAW capability or not.
++ *
++ * @dev:	Pointer to rdma device whose capability to be checked
++ *
++ * Returns true if an rdma device's owning user namespace has CAP_NET_RAW
++ * capability, otherwise false. When the rdma subsystem is in legacy shared
++ * network namespace mode, the default net namespace is considered.
++ */
++bool rdma_dev_has_raw_cap(const struct ib_device *dev)
++{
++	const struct net *net;
++
++	/* Network namespace is the resource whose user namespace is
++	 * to be considered. When in shared mode, there is no reliable
++	 * network namespace resource, so consider the default net namespace.
++	 */
++	if (ib_devices_shared_netns)
++		net = &init_net;
++	else
++		net = read_pnet(&dev->coredev.rdma_net);
++
++	return ns_capable(net->user_ns, CAP_NET_RAW);
++}
++EXPORT_SYMBOL(rdma_dev_has_raw_cap);
++
+ /*
+  * xarray has this behavior where it won't iterate over NULL values stored in
+  * allocated arrays.  So we need our own iterator to see all values stored in
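
rdma_dev_has_raw_cap() moves the raw-packet permission check from the
calling task (capable(CAP_NET_RAW)) to the user namespace that owns the
device's net namespace, which is what the later hunks in this patch
switch over to. A one-line sketch of the converted call-site pattern,
mirroring the counters.c change above:

	if (qp->qp_type == IB_QPT_RAW_PACKET && !rdma_dev_has_raw_cap(dev))
		goto err;
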
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index a872643e8039fc..be6b2ef0ede4ee 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -255,7 +255,7 @@ EXPORT_SYMBOL(rdma_nl_put_driver_u64_hex);
+ 
+ bool rdma_nl_get_privileged_qkey(void)
+ {
+-	return privileged_qkey || capable(CAP_NET_RAW);
++	return privileged_qkey;
+ }
+ EXPORT_SYMBOL(rdma_nl_get_privileged_qkey);
+ 
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 90c177edf9b0d7..18918f4633614e 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -1019,3 +1019,32 @@ void uverbs_finalize_object(struct ib_uobject *uobj,
+ 		WARN_ON(true);
+ 	}
+ }
++
++/**
++ * rdma_uattrs_has_raw_cap() - Returns whether an rdma device linked to the
++ *			       uverbs attributes file has CAP_NET_RAW
++ *			       capability or not.
++ *
++ * @attrs:       Pointer to uverbs attributes
++ *
++ * Returns true if an rdma device's owning user namespace has CAP_NET_RAW
++ * capability, otherwise false.
++ */
++bool rdma_uattrs_has_raw_cap(const struct uverbs_attr_bundle *attrs)
++{
++	struct ib_uverbs_file *ufile = attrs->ufile;
++	struct ib_ucontext *ucontext;
++	bool has_cap = false;
++	int srcu_key;
++
++	srcu_key = srcu_read_lock(&ufile->device->disassociate_srcu);
++	ucontext = ib_uverbs_get_ucontext_file(ufile);
++	if (IS_ERR(ucontext))
++		goto out;
++	has_cap = rdma_dev_has_raw_cap(ucontext->device);
++
++out:
++	srcu_read_unlock(&ufile->device->disassociate_srcu, srcu_key);
++	return has_cap;
++}
++EXPORT_SYMBOL(rdma_uattrs_has_raw_cap);
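
rdma_uattrs_has_raw_cap() resolves the device through the ucontext
while holding disassociate_srcu, so the capability check cannot race
with device disassociation. The uverbs handlers converted below all
take the same shape:

	if (!rdma_uattrs_has_raw_cap(attrs))
		return -EPERM;
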
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index bc9fe3ceca4dbe..0807e9a0000850 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -1451,7 +1451,7 @@ static int create_qp(struct uverbs_attr_bundle *attrs,
+ 	}
+ 
+ 	if (attr.create_flags & IB_QP_CREATE_SOURCE_QPN) {
+-		if (!capable(CAP_NET_RAW)) {
++		if (!rdma_uattrs_has_raw_cap(attrs)) {
+ 			ret = -EPERM;
+ 			goto err_put;
+ 		}
+@@ -1877,7 +1877,8 @@ static int modify_qp(struct uverbs_attr_bundle *attrs,
+ 		attr->path_mig_state = cmd->base.path_mig_state;
+ 	if (cmd->base.attr_mask & IB_QP_QKEY) {
+ 		if (cmd->base.qkey & IB_QP_SET_QKEY &&
+-		    !rdma_nl_get_privileged_qkey()) {
++		    !(rdma_nl_get_privileged_qkey() ||
++		      rdma_uattrs_has_raw_cap(attrs))) {
+ 			ret = -EPERM;
+ 			goto release_qp;
+ 		}
+@@ -3225,7 +3226,7 @@ static int ib_uverbs_ex_create_flow(struct uverbs_attr_bundle *attrs)
+ 	if (cmd.comp_mask)
+ 		return -EINVAL;
+ 
+-	if (!capable(CAP_NET_RAW))
++	if (!rdma_uattrs_has_raw_cap(attrs))
+ 		return -EPERM;
+ 
+ 	if (cmd.flow_attr.flags >= IB_FLOW_ATTR_FLAGS_RESERVED)
+diff --git a/drivers/infiniband/core/uverbs_std_types_qp.c b/drivers/infiniband/core/uverbs_std_types_qp.c
+index 7b4773fa4bc0bd..be0730e8509ed9 100644
+--- a/drivers/infiniband/core/uverbs_std_types_qp.c
++++ b/drivers/infiniband/core/uverbs_std_types_qp.c
+@@ -133,7 +133,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)(
+ 		device = xrcd->device;
+ 		break;
+ 	case IB_UVERBS_QPT_RAW_PACKET:
+-		if (!capable(CAP_NET_RAW))
++		if (!rdma_uattrs_has_raw_cap(attrs))
+ 			return -EPERM;
+ 		fallthrough;
+ 	case IB_UVERBS_QPT_RC:
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index af36a8d2df2285..ec0ad40860668a 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -629,7 +629,8 @@ static struct erdma_mtt *erdma_create_cont_mtt(struct erdma_dev *dev,
+ static void erdma_destroy_mtt_buf_sg(struct erdma_dev *dev,
+ 				     struct erdma_mtt *mtt)
+ {
+-	dma_unmap_sg(&dev->pdev->dev, mtt->sglist, mtt->nsg, DMA_TO_DEVICE);
++	dma_unmap_sg(&dev->pdev->dev, mtt->sglist,
++		     DIV_ROUND_UP(mtt->size, PAGE_SIZE), DMA_TO_DEVICE);
+ 	vfree(mtt->sglist);
+ }
+ 
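
The erdma change above restores a DMA API rule: dma_unmap_sg() must be
given the same number of entries originally passed to dma_map_sg(), not
the (possibly smaller) mapped count dma_map_sg() returned -- here
recomputed as DIV_ROUND_UP(mtt->size, PAGE_SIZE). A minimal sketch of
the rule, with hypothetical names:

	/* nsg may be smaller than orig_nents after mapping */
	int nsg = dma_map_sg(dev, sgl, orig_nents, DMA_TO_DEVICE);
	...
	/* unmap with the original count, never with nsg */
	dma_unmap_sg(dev, sgl, orig_nents, DMA_TO_DEVICE);
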
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 1dcc9cbb4678b1..254fd4d6ea9fa9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -856,6 +856,7 @@ struct hns_roce_caps {
+ 	u16		default_ceq_arm_st;
+ 	u8		cong_cap;
+ 	enum hns_roce_cong_type default_cong_type;
++	u32             max_ack_req_msg_len;
+ };
+ 
+ enum hns_roce_device_state {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index ca0798224e565c..3d479c63b117a9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -249,15 +249,12 @@ int hns_roce_calc_hem_mhop(struct hns_roce_dev *hr_dev,
+ }
+ 
+ static struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev,
+-					       unsigned long hem_alloc_size,
+-					       gfp_t gfp_mask)
++					       unsigned long hem_alloc_size)
+ {
+ 	struct hns_roce_hem *hem;
+ 	int order;
+ 	void *buf;
+ 
+-	WARN_ON(gfp_mask & __GFP_HIGHMEM);
+-
+ 	order = get_order(hem_alloc_size);
+ 	if (PAGE_SIZE << order != hem_alloc_size) {
+ 		dev_err(hr_dev->dev, "invalid hem_alloc_size: %lu!\n",
+@@ -265,13 +262,12 @@ static struct hns_roce_hem *hns_roce_alloc_hem(struct hns_roce_dev *hr_dev,
+ 		return NULL;
+ 	}
+ 
+-	hem = kmalloc(sizeof(*hem),
+-		      gfp_mask & ~(__GFP_HIGHMEM | __GFP_NOWARN));
++	hem = kmalloc(sizeof(*hem), GFP_KERNEL);
+ 	if (!hem)
+ 		return NULL;
+ 
+ 	buf = dma_alloc_coherent(hr_dev->dev, hem_alloc_size,
+-				 &hem->dma, gfp_mask);
++				 &hem->dma, GFP_KERNEL);
+ 	if (!buf)
+ 		goto fail;
+ 
+@@ -378,7 +374,6 @@ static int alloc_mhop_hem(struct hns_roce_dev *hr_dev,
+ {
+ 	u32 bt_size = mhop->bt_chunk_size;
+ 	struct device *dev = hr_dev->dev;
+-	gfp_t flag;
+ 	u64 bt_ba;
+ 	u32 size;
+ 	int ret;
+@@ -417,8 +412,7 @@ static int alloc_mhop_hem(struct hns_roce_dev *hr_dev,
+ 	 * alloc bt space chunk for MTT/CQE.
+ 	 */
+ 	size = table->type < HEM_TYPE_MTT ? mhop->buf_chunk_size : bt_size;
+-	flag = GFP_KERNEL | __GFP_NOWARN;
+-	table->hem[index->buf] = hns_roce_alloc_hem(hr_dev, size, flag);
++	table->hem[index->buf] = hns_roce_alloc_hem(hr_dev, size);
+ 	if (!table->hem[index->buf]) {
+ 		ret = -ENOMEM;
+ 		goto err_alloc_hem;
+@@ -546,9 +540,7 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 		goto out;
+ 	}
+ 
+-	table->hem[i] = hns_roce_alloc_hem(hr_dev,
+-				       table->table_chunk_size,
+-				       GFP_KERNEL | __GFP_NOWARN);
++	table->hem[i] = hns_roce_alloc_hem(hr_dev, table->table_chunk_size);
+ 	if (!table->hem[i]) {
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index fa8747656f25fe..b30dce00f2405a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -2196,31 +2196,36 @@ static void apply_func_caps(struct hns_roce_dev *hr_dev)
+ 
+ static int hns_roce_query_caps(struct hns_roce_dev *hr_dev)
+ {
+-	struct hns_roce_cmq_desc desc[HNS_ROCE_QUERY_PF_CAPS_CMD_NUM];
++	struct hns_roce_cmq_desc desc[HNS_ROCE_QUERY_PF_CAPS_CMD_NUM] = {};
+ 	struct hns_roce_caps *caps = &hr_dev->caps;
+ 	struct hns_roce_query_pf_caps_a *resp_a;
+ 	struct hns_roce_query_pf_caps_b *resp_b;
+ 	struct hns_roce_query_pf_caps_c *resp_c;
+ 	struct hns_roce_query_pf_caps_d *resp_d;
+ 	struct hns_roce_query_pf_caps_e *resp_e;
++	struct hns_roce_query_pf_caps_f *resp_f;
+ 	enum hns_roce_opcode_type cmd;
+ 	int ctx_hop_num;
+ 	int pbl_hop_num;
++	int cmd_num;
+ 	int ret;
+ 	int i;
+ 
+ 	cmd = hr_dev->is_vf ? HNS_ROCE_OPC_QUERY_VF_CAPS_NUM :
+ 	      HNS_ROCE_OPC_QUERY_PF_CAPS_NUM;
++	cmd_num = hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 ?
++		  HNS_ROCE_QUERY_PF_CAPS_CMD_NUM_HIP08 :
++		  HNS_ROCE_QUERY_PF_CAPS_CMD_NUM;
+ 
+-	for (i = 0; i < HNS_ROCE_QUERY_PF_CAPS_CMD_NUM; i++) {
++	for (i = 0; i < cmd_num - 1; i++) {
+ 		hns_roce_cmq_setup_basic_desc(&desc[i], cmd, true);
+-		if (i < (HNS_ROCE_QUERY_PF_CAPS_CMD_NUM - 1))
+-			desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+-		else
+-			desc[i].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
++		desc[i].flag |= cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
+ 	}
+ 
+-	ret = hns_roce_cmq_send(hr_dev, desc, HNS_ROCE_QUERY_PF_CAPS_CMD_NUM);
++	hns_roce_cmq_setup_basic_desc(&desc[cmd_num - 1], cmd, true);
++	desc[cmd_num - 1].flag &= ~cpu_to_le16(HNS_ROCE_CMD_FLAG_NEXT);
++
++	ret = hns_roce_cmq_send(hr_dev, desc, cmd_num);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2229,6 +2234,7 @@ static int hns_roce_query_caps(struct hns_roce_dev *hr_dev)
+ 	resp_c = (struct hns_roce_query_pf_caps_c *)desc[2].data;
+ 	resp_d = (struct hns_roce_query_pf_caps_d *)desc[3].data;
+ 	resp_e = (struct hns_roce_query_pf_caps_e *)desc[4].data;
++	resp_f = (struct hns_roce_query_pf_caps_f *)desc[5].data;
+ 
+ 	caps->local_ca_ack_delay = resp_a->local_ca_ack_delay;
+ 	caps->max_sq_sg = le16_to_cpu(resp_a->max_sq_sg);
+@@ -2293,6 +2299,8 @@ static int hns_roce_query_caps(struct hns_roce_dev *hr_dev)
+ 	caps->reserved_srqs = hr_reg_read(resp_e, PF_CAPS_E_RSV_SRQS);
+ 	caps->reserved_lkey = hr_reg_read(resp_e, PF_CAPS_E_RSV_LKEYS);
+ 
++	caps->max_ack_req_msg_len = le32_to_cpu(resp_f->max_ack_req_msg_len);
++
+ 	caps->qpc_hop_num = ctx_hop_num;
+ 	caps->sccc_hop_num = ctx_hop_num;
+ 	caps->srqc_hop_num = ctx_hop_num;
+@@ -2986,14 +2994,22 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+ {
+ 	int ret;
+ 
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
++		ret = free_mr_init(hr_dev);
++		if (ret) {
++			dev_err(hr_dev->dev, "failed to init free mr!\n");
++			return ret;
++		}
++	}
++
+ 	/* The hns ROCEE requires the extdb info to be cleared before using */
+ 	ret = hns_roce_clear_extdb_list_info(hr_dev);
+ 	if (ret)
+-		return ret;
++		goto err_clear_extdb_failed;
+ 
+ 	ret = get_hem_table(hr_dev);
+ 	if (ret)
+-		return ret;
++		goto err_get_hem_table_failed;
+ 
+ 	if (hr_dev->is_vf)
+ 		return 0;
+@@ -3008,6 +3024,11 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+ 
+ err_llm_init_failed:
+ 	put_hem_table(hr_dev);
++err_get_hem_table_failed:
++	hns_roce_function_clear(hr_dev);
++err_clear_extdb_failed:
++	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08)
++		free_mr_exit(hr_dev);
+ 
+ 	return ret;
+ }
+@@ -4560,7 +4581,9 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	dma_addr_t trrl_ba;
+ 	dma_addr_t irrl_ba;
+ 	enum ib_mtu ib_mtu;
++	u8 ack_req_freq;
+ 	const u8 *smac;
++	int lp_msg_len;
+ 	u8 lp_pktn_ini;
+ 	u64 *mtts;
+ 	u8 *dmac;
+@@ -4643,7 +4666,8 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 		return -EINVAL;
+ #define MIN_LP_MSG_LEN 1024
+ 	/* mtu * (2 ^ lp_pktn_ini) should be in the range of 1024 to mtu */
+-	lp_pktn_ini = ilog2(max(mtu, MIN_LP_MSG_LEN) / mtu);
++	lp_msg_len = max(mtu, MIN_LP_MSG_LEN);
++	lp_pktn_ini = ilog2(lp_msg_len / mtu);
+ 
+ 	if (attr_mask & IB_QP_PATH_MTU) {
+ 		hr_reg_write(context, QPC_MTU, ib_mtu);
+@@ -4653,8 +4677,22 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	hr_reg_write(context, QPC_LP_PKTN_INI, lp_pktn_ini);
+ 	hr_reg_clear(qpc_mask, QPC_LP_PKTN_INI);
+ 
+-	/* ACK_REQ_FREQ should be larger than or equal to LP_PKTN_INI */
+-	hr_reg_write(context, QPC_ACK_REQ_FREQ, lp_pktn_ini);
++	/*
++	 * There are several constraints for ACK_REQ_FREQ:
++	 * 1. mtu * (2 ^ ACK_REQ_FREQ) should not be too large, otherwise
++	 *    it may cause some unexpected retries when sending a large
++	 *    payload.
++	 * 2. ACK_REQ_FREQ should be larger than or equal to LP_PKTN_INI.
++	 * 3. ACK_REQ_FREQ must be equal to LP_PKTN_INI when using LDCP
++	 *    or HC3 congestion control algorithm.
++	 */
++	if (hr_qp->cong_type == CONG_TYPE_LDCP ||
++	    hr_qp->cong_type == CONG_TYPE_HC3 ||
++	    hr_dev->caps.max_ack_req_msg_len < lp_msg_len)
++		ack_req_freq = lp_pktn_ini;
++	else
++		ack_req_freq = ilog2(hr_dev->caps.max_ack_req_msg_len / mtu);
++	hr_reg_write(context, QPC_ACK_REQ_FREQ, ack_req_freq);
+ 	hr_reg_clear(qpc_mask, QPC_ACK_REQ_FREQ);
+ 
+ 	hr_reg_clear(qpc_mask, QPC_RX_REQ_PSN_ERR);
+@@ -5349,11 +5387,10 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ {
+ 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ 	struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+-	struct hns_roce_v2_qp_context ctx[2];
+-	struct hns_roce_v2_qp_context *context = ctx;
+-	struct hns_roce_v2_qp_context *qpc_mask = ctx + 1;
++	struct hns_roce_v2_qp_context *context;
++	struct hns_roce_v2_qp_context *qpc_mask;
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+-	int ret;
++	int ret = -ENOMEM;
+ 
+ 	if (attr_mask & ~IB_QP_ATTR_STANDARD_BITS)
+ 		return -EOPNOTSUPP;
+@@ -5364,7 +5401,11 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ 	 * we should set all bits of the relevant fields in context mask to
+ 	 * 0 at the same time, else set them to 0x1.
+ 	 */
+-	memset(context, 0, hr_dev->caps.qpc_sz);
++	context = kvzalloc(sizeof(*context), GFP_KERNEL);
++	qpc_mask = kvzalloc(sizeof(*qpc_mask), GFP_KERNEL);
++	if (!context || !qpc_mask)
++		goto out;
++
+ 	memset(qpc_mask, 0xff, hr_dev->caps.qpc_sz);
+ 
+ 	ret = hns_roce_v2_set_abs_fields(ibqp, attr, attr_mask, cur_state,
+@@ -5406,6 +5447,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ 		clear_qp(hr_qp);
+ 
+ out:
++	kvfree(qpc_mask);
++	kvfree(context);
+ 	return ret;
+ }
+ 
+@@ -7044,21 +7087,11 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ 		goto error_failed_roce_init;
+ 	}
+ 
+-	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
+-		ret = free_mr_init(hr_dev);
+-		if (ret) {
+-			dev_err(hr_dev->dev, "failed to init free mr!\n");
+-			goto error_failed_free_mr_init;
+-		}
+-	}
+ 
+ 	handle->priv = hr_dev;
+ 
+ 	return 0;
+ 
+-error_failed_free_mr_init:
+-	hns_roce_exit(hr_dev);
+-
+ error_failed_roce_init:
+ 	kfree(hr_dev->priv);
+ 
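
The new ACK_REQ_FREQ computation above caps the unacknowledged payload
by the firmware-reported max_ack_req_msg_len instead of tying it to
LP_PKTN_INI. A worked example with assumed numbers: for mtu = 4096,
lp_msg_len = max(4096, 1024) = 4096 and lp_pktn_ini = ilog2(4096 /
4096) = 0; if the device reports max_ack_req_msg_len = 65536, then
ack_req_freq = ilog2(65536 / 4096) = 4, i.e. an ACK is requested every
2^4 = 16 MTUs rather than every packet. With the LDCP or HC3
congestion-control algorithms, or when max_ack_req_msg_len is smaller
than lp_msg_len, it falls back to lp_pktn_ini as before.
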
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index bc7466830eaf9d..1c2660305d27c8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -1168,7 +1168,8 @@ struct hns_roce_cfg_gmv_tb_b {
+ #define GMV_TB_B_SMAC_H GMV_TB_B_FIELD_LOC(47, 32)
+ #define GMV_TB_B_SGID_IDX GMV_TB_B_FIELD_LOC(71, 64)
+ 
+-#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM 5
++#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM_HIP08 5
++#define HNS_ROCE_QUERY_PF_CAPS_CMD_NUM 6
+ struct hns_roce_query_pf_caps_a {
+ 	u8 number_ports;
+ 	u8 local_ca_ack_delay;
+@@ -1280,6 +1281,11 @@ struct hns_roce_query_pf_caps_e {
+ 	__le16 aeq_period;
+ };
+ 
++struct hns_roce_query_pf_caps_f {
++	__le32 max_ack_req_msg_len;
++	__le32 rsv[5];
++};
++
+ #define PF_CAPS_E_FIELD_LOC(h, l) \
+ 	FIELD_LOC(struct hns_roce_query_pf_caps_e, h, l)
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index e7a497cc125cc3..11fa64044a8d85 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -947,10 +947,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ static void hns_roce_teardown_hca(struct hns_roce_dev *hr_dev)
+ {
+ 	hns_roce_cleanup_bitmap(hr_dev);
+-
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB ||
+-	    hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB)
+-		mutex_destroy(&hr_dev->pgdir_mutex);
++	mutex_destroy(&hr_dev->pgdir_mutex);
+ }
+ 
+ /**
+@@ -965,11 +962,11 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 
+ 	spin_lock_init(&hr_dev->sm_lock);
+ 
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB ||
+-	    hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB) {
+-		INIT_LIST_HEAD(&hr_dev->pgdir_list);
+-		mutex_init(&hr_dev->pgdir_mutex);
+-	}
++	INIT_LIST_HEAD(&hr_dev->qp_list);
++	spin_lock_init(&hr_dev->qp_list_lock);
++
++	INIT_LIST_HEAD(&hr_dev->pgdir_list);
++	mutex_init(&hr_dev->pgdir_mutex);
+ 
+ 	hns_roce_init_uar_table(hr_dev);
+ 
+@@ -1001,9 +998,7 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev)
+ 
+ err_uar_table_free:
+ 	ida_destroy(&hr_dev->uar_ida.ida);
+-	if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_CQ_RECORD_DB ||
+-	    hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_QP_RECORD_DB)
+-		mutex_destroy(&hr_dev->pgdir_mutex);
++	mutex_destroy(&hr_dev->pgdir_mutex);
+ 
+ 	return ret;
+ }
+@@ -1132,9 +1127,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
+ 		}
+ 	}
+ 
+-	INIT_LIST_HEAD(&hr_dev->qp_list);
+-	spin_lock_init(&hr_dev->qp_list_lock);
+-
+ 	ret = hns_roce_register_device(hr_dev);
+ 	if (ret)
+ 		goto error_failed_register_device;
+diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
+index 14fd7d6c54a243..a6bf4d539e6701 100644
+--- a/drivers/infiniband/hw/mana/qp.c
++++ b/drivers/infiniband/hw/mana/qp.c
+@@ -772,7 +772,7 @@ static int mana_ib_gd_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		req.ah_attr.dest_port = ROCE_V2_UDP_DPORT;
+ 		req.ah_attr.src_port = rdma_get_udp_sport(attr->ah_attr.grh.flow_label,
+ 							  ibqp->qp_num, attr->dest_qp_num);
+-		req.ah_attr.traffic_class = attr->ah_attr.grh.traffic_class;
++		req.ah_attr.traffic_class = attr->ah_attr.grh.traffic_class >> 2;
+ 		req.ah_attr.hop_limit = attr->ah_attr.grh.hop_limit;
+ 	}
+ 
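
For the mana change above: the GRH traffic_class field carries the full
8-bit IPv6 Traffic Class octet, i.e. the 6-bit DSCP in the high bits
and the 2-bit ECN field in the low bits. Shifting right by two extracts
the DSCP value the adapter expects: for example, traffic_class 0x68
(0110 1000b) >> 2 = 0x1a = DSCP 26 (AF31).
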
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 843dcd3122424d..c369fee3356216 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -159,7 +159,7 @@ int mlx5_ib_devx_create(struct mlx5_ib_dev *dev, bool is_user, u64 req_ucaps)
+ 	uctx = MLX5_ADDR_OF(create_uctx_in, in, uctx);
+ 	if (is_user &&
+ 	    (MLX5_CAP_GEN(dev->mdev, uctx_cap) & MLX5_UCTX_CAP_RAW_TX) &&
+-	    capable(CAP_NET_RAW))
++	    rdma_dev_has_raw_cap(&dev->ib_dev))
+ 		cap |= MLX5_UCTX_CAP_RAW_TX;
+ 	if (is_user &&
+ 	    (MLX5_CAP_GEN(dev->mdev, uctx_cap) &
+diff --git a/drivers/infiniband/hw/mlx5/dm.c b/drivers/infiniband/hw/mlx5/dm.c
+index b4c97fb62abfcc..9ded2b7c1e3199 100644
+--- a/drivers/infiniband/hw/mlx5/dm.c
++++ b/drivers/infiniband/hw/mlx5/dm.c
+@@ -282,7 +282,7 @@ static struct ib_dm *handle_alloc_dm_memic(struct ib_ucontext *ctx,
+ 	int err;
+ 	u64 address;
+ 
+-	if (!MLX5_CAP_DEV_MEM(dm_db->dev, memic))
++	if (!dm_db || !MLX5_CAP_DEV_MEM(dm_db->dev, memic))
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 
+ 	dm = kzalloc(sizeof(*dm), GFP_KERNEL);
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index 680627f1de3361..eabc37f2ac19b2 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -2458,7 +2458,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_CREATE_FLOW)(
+ 	struct mlx5_ib_dev *dev;
+ 	u32 flags;
+ 
+-	if (!capable(CAP_NET_RAW))
++	if (!rdma_uattrs_has_raw_cap(attrs))
+ 		return -EPERM;
+ 
+ 	fs_matcher = uverbs_attr_get_obj(attrs,
+@@ -2989,7 +2989,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_STEERING_ANCHOR_CREATE)(
+ 	u32 ft_id;
+ 	int err;
+ 
+-	if (!capable(CAP_NET_RAW))
++	if (!rdma_dev_has_raw_cap(&dev->ib_dev))
+ 		return -EPERM;
+ 
+ 	err = uverbs_get_const(&ib_uapi_ft_type, attrs,
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index 5be4426a288449..25601dea9e3010 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -32,13 +32,15 @@ static __be64 get_umr_disable_mr_mask(void)
+ 	return cpu_to_be64(result);
+ }
+ 
+-static __be64 get_umr_update_translation_mask(void)
++static __be64 get_umr_update_translation_mask(struct mlx5_ib_dev *dev)
+ {
+ 	u64 result;
+ 
+ 	result = MLX5_MKEY_MASK_LEN |
+ 		 MLX5_MKEY_MASK_PAGE_SIZE |
+ 		 MLX5_MKEY_MASK_START_ADDR;
++	if (MLX5_CAP_GEN_2(dev->mdev, umr_log_entity_size_5))
++		result |= MLX5_MKEY_MASK_PAGE_SIZE_5;
+ 
+ 	return cpu_to_be64(result);
+ }
+@@ -654,7 +656,7 @@ static void mlx5r_umr_final_update_xlt(struct mlx5_ib_dev *dev,
+ 		flags & MLX5_IB_UPD_XLT_ENABLE || flags & MLX5_IB_UPD_XLT_ADDR;
+ 
+ 	if (update_translation) {
+-		wqe->ctrl_seg.mkey_mask |= get_umr_update_translation_mask();
++		wqe->ctrl_seg.mkey_mask |= get_umr_update_translation_mask(dev);
+ 		if (!mr->ibmr.length)
+ 			MLX5_SET(mkc, &wqe->mkey_seg, length64, 1);
+ 	}
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index f2f5465f2a9008..7acafc5c0e09a8 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -2577,6 +2577,8 @@ static struct net_device *ipoib_add_port(const char *format,
+ 
+ 	ndev->rtnl_link_ops = ipoib_get_link_ops();
+ 
++	dev_net_set(ndev, rdma_dev_net(hca));
++
+ 	result = register_netdev(ndev);
+ 	if (result) {
+ 		pr_warn("%s: couldn't register ipoib port %d; error %d\n",
+diff --git a/drivers/interconnect/qcom/sc8180x.c b/drivers/interconnect/qcom/sc8180x.c
+index a741badaa966e0..4dd1d2f2e82162 100644
+--- a/drivers/interconnect/qcom/sc8180x.c
++++ b/drivers/interconnect/qcom/sc8180x.c
+@@ -1492,34 +1492,40 @@ static struct qcom_icc_bcm bcm_sh3 = {
+ 
+ static struct qcom_icc_bcm bcm_sn0 = {
+ 	.name = "SN0",
++	.num_nodes = 1,
+ 	.nodes = { &slv_qns_gemnoc_sf }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn1 = {
+ 	.name = "SN1",
++	.num_nodes = 1,
+ 	.nodes = { &slv_qxs_imem }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn2 = {
+ 	.name = "SN2",
+ 	.keepalive = true,
++	.num_nodes = 1,
+ 	.nodes = { &slv_qns_gemnoc_gc }
+ };
+ 
+ static struct qcom_icc_bcm bcm_co2 = {
+ 	.name = "CO2",
++	.num_nodes = 1,
+ 	.nodes = { &mas_qnm_npu }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn3 = {
+ 	.name = "SN3",
+ 	.keepalive = true,
++	.num_nodes = 2,
+ 	.nodes = { &slv_srvc_aggre1_noc,
+ 		  &slv_qns_cnoc }
+ };
+ 
+ static struct qcom_icc_bcm bcm_sn4 = {
+ 	.name = "SN4",
++	.num_nodes = 1,
+ 	.nodes = { &slv_qxs_pimem }
+ };
+ 
+diff --git a/drivers/interconnect/qcom/sc8280xp.c b/drivers/interconnect/qcom/sc8280xp.c
+index 0270f6c64481a9..c646cdf8a19bf6 100644
+--- a/drivers/interconnect/qcom/sc8280xp.c
++++ b/drivers/interconnect/qcom/sc8280xp.c
+@@ -48,6 +48,7 @@ static struct qcom_icc_node qnm_a1noc_cfg = {
+ 	.id = SC8280XP_MASTER_A1NOC_CFG,
+ 	.channels = 1,
+ 	.buswidth = 4,
++	.num_links = 1,
+ 	.links = { SC8280XP_SLAVE_SERVICE_A1NOC },
+ };
+ 
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 3117d99cf83d0d..aea061f26de352 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -634,8 +634,8 @@ static inline void pdev_disable_cap_pasid(struct pci_dev *pdev)
+ 
+ static void pdev_enable_caps(struct pci_dev *pdev)
+ {
+-	pdev_enable_cap_ats(pdev);
+ 	pdev_enable_cap_pasid(pdev);
++	pdev_enable_cap_ats(pdev);
+ 	pdev_enable_cap_pri(pdev);
+ }
+ 
+@@ -2526,8 +2526,21 @@ static inline u64 dma_max_address(enum protection_domain_mode pgtable)
+ 	if (pgtable == PD_MODE_V1)
+ 		return ~0ULL;
+ 
+-	/* V2 with 4/5 level page table */
+-	return ((1ULL << PM_LEVEL_SHIFT(amd_iommu_gpt_level)) - 1);
++	/*
++	 * V2 with 4/5 level page table. Note that "2.2.6.5 AMD64 4-Kbyte Page
++	 * Translation" shows that the V2 table sign extends the top of the
++	 * address space creating a reserved region in the middle of the
++	 * translation, just like the CPU does. Further, Vasant says the docs are
++	 * incomplete and this only applies to non-zero PASIDs. If the AMDv2
++	 * page table is assigned to the 0 PASID then there is no sign extension
++	 * check.
++	 *
++	 * Since the IOMMU must have a fixed geometry, and the core code does
++	 * not understand sign extended addressing, we have to chop off the high
++	 * bit to get consistent behavior with attachments of the domain to any
++	 * PASID.
++	 */
++	return ((1ULL << (PM_LEVEL_SHIFT(amd_iommu_gpt_level) - 1)) - 1);
+ }
+ 
+ static bool amd_iommu_hd_support(struct amd_iommu *iommu)
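
For the dma_max_address() change above: assuming PM_LEVEL_SHIFT()
evaluates to 48 for a 4-level table and 57 for a 5-level table, the
reported aperture limit drops from 2^48 - 1 to 2^47 - 1 (and from
2^57 - 1 to 2^56 - 1). The bit excluded is exactly the one the V2
table's sign-extension check keys on, so the domain geometry stays
valid no matter which PASID the domain is later attached to.
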
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 62874b18f6459a..53d88646476e9f 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -355,7 +355,8 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+ 	priv->set_prr_addr = NULL;
+ 
+ 	if (of_device_is_compatible(np, "qcom,smmu-500") &&
+-			of_device_is_compatible(np, "qcom,adreno-smmu")) {
++	    !of_device_is_compatible(np, "qcom,sm8250-smmu-500") &&
++	    of_device_is_compatible(np, "qcom,adreno-smmu")) {
+ 		priv->set_prr_bit = qcom_adreno_smmu_set_prr_bit;
+ 		priv->set_prr_addr = qcom_adreno_smmu_set_prr_addr;
+ 	}
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index 47692cbfaabdb3..c8b79de84d3fb9 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -422,22 +422,6 @@ static void cache_tag_flush_devtlb_psi(struct dmar_domain *domain, struct cache_
+ 					     domain->qi_batch);
+ }
+ 
+-static void cache_tag_flush_devtlb_all(struct dmar_domain *domain, struct cache_tag *tag)
+-{
+-	struct intel_iommu *iommu = tag->iommu;
+-	struct device_domain_info *info;
+-	u16 sid;
+-
+-	info = dev_iommu_priv_get(tag->dev);
+-	sid = PCI_DEVID(info->bus, info->devfn);
+-
+-	qi_batch_add_dev_iotlb(iommu, sid, info->pfsid, info->ats_qdep, 0,
+-			       MAX_AGAW_PFN_WIDTH, domain->qi_batch);
+-	if (info->dtlb_extra_inval)
+-		qi_batch_add_dev_iotlb(iommu, sid, info->pfsid, info->ats_qdep, 0,
+-				       MAX_AGAW_PFN_WIDTH, domain->qi_batch);
+-}
+-
+ /*
+  * Invalidates a range of IOVA from @start (inclusive) to @end (inclusive)
+  * when the memory mappings in the target domain have been modified.
+@@ -508,7 +492,7 @@ void cache_tag_flush_all(struct dmar_domain *domain)
+ 			break;
+ 		case CACHE_TAG_DEVTLB:
+ 		case CACHE_TAG_NESTING_DEVTLB:
+-			cache_tag_flush_devtlb_all(domain, tag);
++			cache_tag_flush_devtlb_psi(domain, tag, 0, MAX_AGAW_PFN_WIDTH);
+ 			break;
+ 		}
+ 
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 148b944143b81e..c0be0b64e4c773 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1391,7 +1391,6 @@ void domain_detach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu)
+ 	if (--info->refcnt == 0) {
+ 		ida_free(&iommu->domain_ida, info->did);
+ 		xa_erase(&domain->iommu_array, iommu->seq_id);
+-		domain->nid = NUMA_NO_NODE;
+ 		kfree(info);
+ 	}
+ }
+@@ -4000,8 +3999,8 @@ static int blocking_domain_set_dev_pasid(struct iommu_domain *domain,
+ {
+ 	struct device_domain_info *info = dev_iommu_priv_get(dev);
+ 
+-	iopf_for_domain_remove(old, dev);
+ 	intel_pasid_tear_down_entry(info->iommu, dev, pasid, false);
++	iopf_for_domain_remove(old, dev);
+ 	domain_remove_dev_pasid(old, dev, pasid);
+ 
+ 	return 0;
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index c3928ef7934490..5f47e9a9c127a6 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -539,6 +539,7 @@ config IMX_MU_MSI
+ 	tristate "i.MX MU used as MSI controller"
+ 	depends on OF && HAS_IOMEM
+ 	depends on ARCH_MXC || COMPILE_TEST
++	depends on ARM || ARM64
+ 	default m if ARCH_MXC
+ 	select IRQ_DOMAIN
+ 	select IRQ_DOMAIN_HIERARCHY
+diff --git a/drivers/leds/flash/Kconfig b/drivers/leds/flash/Kconfig
+index 55ca663ca506ad..5e08102a678416 100644
+--- a/drivers/leds/flash/Kconfig
++++ b/drivers/leds/flash/Kconfig
+@@ -136,6 +136,7 @@ config LEDS_TPS6131X
+ 	tristate "LED support for TI TPS6131x flash LED driver"
+ 	depends on I2C && OF
+ 	depends on GPIOLIB
++	depends on V4L2_FLASH_LED_CLASS || !V4L2_FLASH_LED_CLASS
+ 	select REGMAP_I2C
+ 	help
+ 	  This option enables support for Texas Instruments TPS61310/TPS61311
+diff --git a/drivers/leds/leds-lp8860.c b/drivers/leds/leds-lp8860.c
+index 52b97c9f2a0356..0962c00c215a11 100644
+--- a/drivers/leds/leds-lp8860.c
++++ b/drivers/leds/leds-lp8860.c
+@@ -307,7 +307,9 @@ static int lp8860_probe(struct i2c_client *client)
+ 	led->client = client;
+ 	led->led_dev.brightness_set_blocking = lp8860_brightness_set;
+ 
+-	devm_mutex_init(&client->dev, &led->lock);
++	ret = devm_mutex_init(&client->dev, &led->lock);
++	if (ret)
++		return dev_err_probe(&client->dev, ret, "Failed to initialize lock\n");
+ 
+ 	led->regmap = devm_regmap_init_i2c(client, &lp8860_regmap_config);
+ 	if (IS_ERR(led->regmap)) {
+diff --git a/drivers/leds/leds-pca955x.c b/drivers/leds/leds-pca955x.c
+index 42fe056b1c7469..70d109246088de 100644
+--- a/drivers/leds/leds-pca955x.c
++++ b/drivers/leds/leds-pca955x.c
+@@ -587,7 +587,7 @@ static int pca955x_probe(struct i2c_client *client)
+ 	struct pca955x_platform_data *pdata;
+ 	bool keep_psc0 = false;
+ 	bool set_default_label = false;
+-	char default_label[8];
++	char default_label[4];
+ 	int bit, err, reg;
+ 
+ 	chip = i2c_get_match_data(client);
+@@ -693,7 +693,7 @@ static int pca955x_probe(struct i2c_client *client)
+ 			}
+ 
+ 			if (set_default_label) {
+-				snprintf(default_label, sizeof(default_label), "%u", i);
++				snprintf(default_label, sizeof(default_label), "%hhu", i);
+ 				init_data.default_label = default_label;
+ 			} else {
+ 				init_data.default_label = NULL;
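
The pca955x buffer sizing works out as follows: the LED index printed
here fits in a u8, so the longest default label is "255" plus the NUL
terminator -- exactly 4 bytes. "%hhu" also tells the compiler the value
cannot exceed three digits, which (presumably) is what keeps
-Wformat-truncation quiet about the shrunken buffer:

	char default_label[4];
	snprintf(default_label, sizeof(default_label), "%hhu", 255u);
	/* writes '2' '5' '5' '\0' -- no truncation possible */
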
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index c711db6f8f5ca3..cf17fd46e25567 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -215,16 +215,19 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 	}
+ 
+ 	if (test_bit(DROP_WRITES, &fc->flags) &&
+-	    (fc->corrupt_bio_rw == WRITE || fc->random_write_corrupt)) {
++	    ((fc->corrupt_bio_byte && fc->corrupt_bio_rw == WRITE) ||
++	     fc->random_write_corrupt)) {
+ 		ti->error = "drop_writes is incompatible with random_write_corrupt or corrupt_bio_byte with the WRITE flag set";
+ 		return -EINVAL;
+ 
+ 	} else if (test_bit(ERROR_WRITES, &fc->flags) &&
+-		   (fc->corrupt_bio_rw == WRITE || fc->random_write_corrupt)) {
++		   ((fc->corrupt_bio_byte && fc->corrupt_bio_rw == WRITE) ||
++		    fc->random_write_corrupt)) {
+ 		ti->error = "error_writes is incompatible with random_write_corrupt or corrupt_bio_byte with the WRITE flag set";
+ 		return -EINVAL;
+ 	} else if (test_bit(ERROR_READS, &fc->flags) &&
+-		   (fc->corrupt_bio_rw == READ || fc->random_read_corrupt)) {
++		   ((fc->corrupt_bio_byte && fc->corrupt_bio_rw == READ) ||
++		    fc->random_read_corrupt)) {
+ 		ti->error = "error_reads is incompatible with random_read_corrupt or corrupt_bio_byte with the READ flag set";
+ 		return -EINVAL;
+ 	}
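
Context for the dm-flakey change above: fc->corrupt_bio_rw is 0 when no
corrupt_bio_byte feature was given, and READ is defined as 0, so a
table using error_reads alone previously tripped the
corrupt_bio_rw == READ test and was rejected as "incompatible".
Guarding each test with fc->corrupt_bio_byte restores, for instance,
this table (device and numbers illustrative):

	dmsetup create flaky --table \
		"0 1000 flakey /dev/sdX 0 1 1 1 error_reads"
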
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 0f03b21e66e454..10670c62b09e50 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9418,6 +9418,12 @@ static bool rdev_is_spare(struct md_rdev *rdev)
+ 
+ static bool rdev_addable(struct md_rdev *rdev)
+ {
++	struct mddev *mddev;
++
++	mddev = READ_ONCE(rdev->mddev);
++	if (!mddev)
++		return false;
++
+ 	/* rdev is already used, don't add it again. */
+ 	if (test_bit(Candidate, &rdev->flags) || rdev->raid_disk >= 0 ||
+ 	    test_bit(Faulty, &rdev->flags))
+@@ -9428,7 +9434,7 @@ static bool rdev_addable(struct md_rdev *rdev)
+ 		return true;
+ 
+ 	/* Allow to add if array is read-write. */
+-	if (md_is_rdwr(rdev->mddev))
++	if (md_is_rdwr(mddev))
+ 		return true;
+ 
+ 	/*
+@@ -9456,17 +9462,11 @@ static bool md_spares_need_change(struct mddev *mddev)
+ 	return false;
+ }
+ 
+-static int remove_and_add_spares(struct mddev *mddev,
+-				 struct md_rdev *this)
++static int remove_spares(struct mddev *mddev, struct md_rdev *this)
+ {
+ 	struct md_rdev *rdev;
+-	int spares = 0;
+ 	int removed = 0;
+ 
+-	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+-		/* Mustn't remove devices when resync thread is running */
+-		return 0;
+-
+ 	rdev_for_each(rdev, mddev) {
+ 		if ((this == NULL || rdev == this) && rdev_removeable(rdev) &&
+ 		    !mddev->pers->hot_remove_disk(mddev, rdev)) {
+@@ -9480,6 +9480,21 @@ static int remove_and_add_spares(struct mddev *mddev,
+ 	if (removed && mddev->kobj.sd)
+ 		sysfs_notify_dirent_safe(mddev->sysfs_degraded);
+ 
++	return removed;
++}
++
++static int remove_and_add_spares(struct mddev *mddev,
++				 struct md_rdev *this)
++{
++	struct md_rdev *rdev;
++	int spares = 0;
++	int removed = 0;
++
++	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
++		/* Mustn't remove devices when resync thread is running */
++		return 0;
++
++	removed = remove_spares(mddev, this);
+ 	if (this && removed)
+ 		goto no_add;
+ 
+@@ -9522,6 +9537,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
+ 
+ 	/* Check if resync is in progress. */
+ 	if (mddev->recovery_cp < MaxSector) {
++		remove_spares(mddev, NULL);
+ 		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+ 		return true;
+@@ -9758,8 +9774,8 @@ void md_check_recovery(struct mddev *mddev)
+ 			 * remove disk.
+ 			 */
+ 			rdev_for_each_safe(rdev, tmp, mddev) {
+-				if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
+-						rdev->raid_disk < 0)
++				if (rdev->raid_disk < 0 &&
++				    test_and_clear_bit(ClusterRemove, &rdev->flags))
+ 					md_kick_rdev_from_array(rdev);
+ 			}
+ 		}
+@@ -10065,8 +10081,11 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 
+ 	/* Check for change of roles in the active devices */
+ 	rdev_for_each_safe(rdev2, tmp, mddev) {
+-		if (test_bit(Faulty, &rdev2->flags))
++		if (test_bit(Faulty, &rdev2->flags)) {
++			if (test_bit(ClusterRemove, &rdev2->flags))
++				set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 			continue;
++		}
+ 
+ 		/* Check if the roles changed */
+ 		role = le16_to_cpu(sb->dev_roles[rdev2->desc_nr]);
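
The rdev_addable() hunk above guards against a concurrent
md_kick_rdev_from_array() clearing rdev->mddev: the pointer is
snapshotted once with READ_ONCE() and checked for NULL before use,
rather than being dereferenced twice:

	struct mddev *mddev = READ_ONCE(rdev->mddev);
	if (!mddev)
		return false;	/* rdev already detached from the array */
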
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index c9bd2005bfd0ae..d2b237652d7e9f 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2446,15 +2446,12 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
+ 	 * that are active
+ 	 */
+ 	for (i = 0; i < conf->copies; i++) {
+-		int d;
+-
+ 		tbio = r10_bio->devs[i].repl_bio;
+ 		if (!tbio || !tbio->bi_end_io)
+ 			continue;
+ 		if (r10_bio->devs[i].bio->bi_end_io != end_sync_write
+ 		    && r10_bio->devs[i].bio != fbio)
+ 			bio_copy_data(tbio, fbio);
+-		d = r10_bio->devs[i].devnum;
+ 		atomic_inc(&r10_bio->remaining);
+ 		submit_bio_noacct(tbio);
+ 	}
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 5c17bc58181e2e..8681dd1930334c 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -598,6 +598,27 @@ static void _bswap16(u16 *a)
+ 	*a = ((*a & 0x00FF) << 8) | ((*a & 0xFF00) >> 8);
+ }
+ 
++static dma_addr_t mxc_jpeg_get_plane_dma_addr(struct vb2_buffer *buf, unsigned int plane_no)
++{
++	if (plane_no >= buf->num_planes)
++		return 0;
++	return vb2_dma_contig_plane_dma_addr(buf, plane_no) + buf->planes[plane_no].data_offset;
++}
++
++static void *mxc_jpeg_get_plane_vaddr(struct vb2_buffer *buf, unsigned int plane_no)
++{
++	if (plane_no >= buf->num_planes)
++		return NULL;
++	return vb2_plane_vaddr(buf, plane_no) + buf->planes[plane_no].data_offset;
++}
++
++static unsigned long mxc_jpeg_get_plane_payload(struct vb2_buffer *buf, unsigned int plane_no)
++{
++	if (plane_no >= buf->num_planes)
++		return 0;
++	return vb2_get_plane_payload(buf, plane_no) - buf->planes[plane_no].data_offset;
++}
++
+ static void print_mxc_buf(struct mxc_jpeg_dev *jpeg, struct vb2_buffer *buf,
+ 			  unsigned long len)
+ {
+@@ -610,11 +631,11 @@ static void print_mxc_buf(struct mxc_jpeg_dev *jpeg, struct vb2_buffer *buf,
+ 		return;
+ 
+ 	for (plane_no = 0; plane_no < buf->num_planes; plane_no++) {
+-		payload = vb2_get_plane_payload(buf, plane_no);
++		payload = mxc_jpeg_get_plane_payload(buf, plane_no);
+ 		if (len == 0)
+ 			len = payload;
+-		dma_addr = vb2_dma_contig_plane_dma_addr(buf, plane_no);
+-		vaddr = vb2_plane_vaddr(buf, plane_no);
++		dma_addr = mxc_jpeg_get_plane_dma_addr(buf, plane_no);
++		vaddr = mxc_jpeg_get_plane_vaddr(buf, plane_no);
+ 		v4l2_dbg(3, debug, &jpeg->v4l2_dev,
+ 			 "plane %d (vaddr=%p dma_addr=%x payload=%ld):",
+ 			  plane_no, vaddr, dma_addr, payload);
+@@ -712,16 +733,15 @@ static void mxc_jpeg_addrs(struct mxc_jpeg_desc *desc,
+ 	struct mxc_jpeg_q_data *q_data;
+ 
+ 	q_data = mxc_jpeg_get_q_data(ctx, raw_buf->type);
+-	desc->buf_base0 = vb2_dma_contig_plane_dma_addr(raw_buf, 0);
++	desc->buf_base0 = mxc_jpeg_get_plane_dma_addr(raw_buf, 0);
+ 	desc->buf_base1 = 0;
+ 	if (img_fmt == STM_CTRL_IMAGE_FORMAT(MXC_JPEG_YUV420)) {
+ 		if (raw_buf->num_planes == 2)
+-			desc->buf_base1 = vb2_dma_contig_plane_dma_addr(raw_buf, 1);
++			desc->buf_base1 = mxc_jpeg_get_plane_dma_addr(raw_buf, 1);
+ 		else
+ 			desc->buf_base1 = desc->buf_base0 + q_data->sizeimage[0];
+ 	}
+-	desc->stm_bufbase = vb2_dma_contig_plane_dma_addr(jpeg_buf, 0) +
+-		offset;
++	desc->stm_bufbase = mxc_jpeg_get_plane_dma_addr(jpeg_buf, 0) + offset;
+ }
+ 
+ static bool mxc_jpeg_is_extended_sequential(const struct mxc_jpeg_fmt *fmt)
+@@ -1029,8 +1049,8 @@ static irqreturn_t mxc_jpeg_dec_irq(int irq, void *priv)
+ 			vb2_set_plane_payload(&dst_buf->vb2_buf, 1, payload);
+ 		}
+ 		dev_dbg(dev, "Decoding finished, payload size: %ld + %ld\n",
+-			vb2_get_plane_payload(&dst_buf->vb2_buf, 0),
+-			vb2_get_plane_payload(&dst_buf->vb2_buf, 1));
++			mxc_jpeg_get_plane_payload(&dst_buf->vb2_buf, 0),
++			mxc_jpeg_get_plane_payload(&dst_buf->vb2_buf, 1));
+ 	}
+ 
+ 	/* short preview of the results */
+@@ -1889,8 +1909,8 @@ static int mxc_jpeg_parse(struct mxc_jpeg_ctx *ctx, struct vb2_buffer *vb)
+ 	struct mxc_jpeg_sof *psof = NULL;
+ 	struct mxc_jpeg_sos *psos = NULL;
+ 	struct mxc_jpeg_src_buf *jpeg_src_buf = vb2_to_mxc_buf(vb);
+-	u8 *src_addr = (u8 *)vb2_plane_vaddr(vb, 0);
+-	u32 size = vb2_get_plane_payload(vb, 0);
++	u8 *src_addr = (u8 *)mxc_jpeg_get_plane_vaddr(vb, 0);
++	u32 size = mxc_jpeg_get_plane_payload(vb, 0);
+ 	int ret;
+ 
+ 	memset(&header, 0, sizeof(header));
+@@ -2027,6 +2047,11 @@ static int mxc_jpeg_buf_prepare(struct vb2_buffer *vb)
+ 				i, vb2_plane_size(vb, i), sizeimage);
+ 			return -EINVAL;
+ 		}
++		if (!IS_ALIGNED(mxc_jpeg_get_plane_dma_addr(vb, i), MXC_JPEG_ADDR_ALIGNMENT)) {
++			dev_err(dev, "planes[%d] address is not %d aligned\n",
++				i, MXC_JPEG_ADDR_ALIGNMENT);
++			return -EINVAL;
++		}
+ 	}
+ 	if (V4L2_TYPE_IS_CAPTURE(vb->vb2_queue->type)) {
+ 		vb2_set_plane_payload(vb, 0, 0);
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
+index fdde45f7e16329..44e46face6d1d8 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
+@@ -30,6 +30,7 @@
+ #define MXC_JPEG_MAX_PLANES		2
+ #define MXC_JPEG_PATTERN_WIDTH		128
+ #define MXC_JPEG_PATTERN_HEIGHT		64
++#define MXC_JPEG_ADDR_ALIGNMENT		16
+ 
+ enum mxc_jpeg_enc_state {
+ 	MXC_JPEG_ENCODING	= 0, /* jpeg encode phase */
+diff --git a/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c b/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c
+index 6412a00be8eab8..0e358759e35faa 100644
+--- a/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c
++++ b/drivers/media/platform/ti/j721e-csi2rx/j721e-csi2rx.c
+@@ -619,6 +619,7 @@ static void ti_csi2rx_dma_callback(void *param)
+ 
+ 		if (ti_csi2rx_start_dma(csi, buf)) {
+ 			dev_err(csi->dev, "Failed to queue the next buffer for DMA\n");
++			list_del(&buf->list);
+ 			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+ 		} else {
+ 			list_move_tail(&buf->list, &dma->submitted);
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+index 90d25329661ed7..b45809a82f9a66 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+@@ -968,12 +968,12 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ 
+ 			p_h264_sps->flags &=
+ 				~V4L2_H264_SPS_FLAG_QPPRIME_Y_ZERO_TRANSFORM_BYPASS;
+-
+-			if (p_h264_sps->chroma_format_idc < 3)
+-				p_h264_sps->flags &=
+-					~V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE;
+ 		}
+ 
++		if (p_h264_sps->chroma_format_idc < 3)
++			p_h264_sps->flags &=
++				~V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE;
++
+ 		if (p_h264_sps->flags & V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY)
+ 			p_h264_sps->flags &=
+ 				~V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD;
+diff --git a/drivers/mfd/tps65219.c b/drivers/mfd/tps65219.c
+index fd390600fbf07e..297511025dd464 100644
+--- a/drivers/mfd/tps65219.c
++++ b/drivers/mfd/tps65219.c
+@@ -190,7 +190,7 @@ static const struct resource tps65219_regulator_resources[] = {
+ 
+ static const struct mfd_cell tps65214_cells[] = {
+ 	MFD_CELL_RES("tps65214-regulator", tps65214_regulator_resources),
+-	MFD_CELL_NAME("tps65215-gpio"),
++	MFD_CELL_NAME("tps65214-gpio"),
+ };
+ 
+ static const struct mfd_cell tps65215_cells[] = {
+diff --git a/drivers/misc/mei/platform-vsc.c b/drivers/misc/mei/platform-vsc.c
+index 435760b1e86f7a..b2b5a20ae3fa48 100644
+--- a/drivers/misc/mei/platform-vsc.c
++++ b/drivers/misc/mei/platform-vsc.c
+@@ -256,6 +256,9 @@ static int mei_vsc_hw_reset(struct mei_device *mei_dev, bool intr_enable)
+ 
+ 	vsc_tp_reset(hw->tp);
+ 
++	if (!intr_enable)
++		return 0;
++
+ 	return vsc_tp_init(hw->tp, mei_dev->dev);
+ }
+ 
+@@ -377,6 +380,8 @@ static int mei_vsc_probe(struct platform_device *pdev)
+ err_cancel:
+ 	mei_cancel_work(mei_dev);
+ 
++	vsc_tp_register_event_cb(tp, NULL, NULL);
++
+ 	mei_disable_interrupts(mei_dev);
+ 
+ 	return ret;
+@@ -385,11 +390,14 @@ static int mei_vsc_probe(struct platform_device *pdev)
+ static void mei_vsc_remove(struct platform_device *pdev)
+ {
+ 	struct mei_device *mei_dev = platform_get_drvdata(pdev);
++	struct mei_vsc_hw *hw = mei_dev_to_vsc_hw(mei_dev);
+ 
+ 	pm_runtime_disable(mei_dev->dev);
+ 
+ 	mei_stop(mei_dev);
+ 
++	vsc_tp_register_event_cb(hw->tp, NULL, NULL);
++
+ 	mei_disable_interrupts(mei_dev);
+ 
+ 	mei_deregister(mei_dev);
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index 267d0de5fade83..0de5acc33b74d1 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -18,6 +18,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/spi/spi.h>
+ #include <linux/types.h>
++#include <linux/workqueue.h>
+ 
+ #include "vsc-tp.h"
+ 
+@@ -76,12 +77,12 @@ struct vsc_tp {
+ 
+ 	atomic_t assert_cnt;
+ 	wait_queue_head_t xfer_wait;
++	struct work_struct event_work;
+ 
+ 	vsc_tp_event_cb_t event_notify;
+ 	void *event_notify_context;
+-
+-	/* used to protect command download */
+-	struct mutex mutex;
++	struct mutex event_notify_mutex;	/* protects event_notify + context */
++	struct mutex mutex;			/* protects command download */
+ };
+ 
+ /* GPIO resources */
+@@ -106,17 +107,19 @@ static irqreturn_t vsc_tp_isr(int irq, void *data)
+ 
+ 	wake_up(&tp->xfer_wait);
+ 
+-	return IRQ_WAKE_THREAD;
++	schedule_work(&tp->event_work);
++
++	return IRQ_HANDLED;
+ }
+ 
+-static irqreturn_t vsc_tp_thread_isr(int irq, void *data)
++static void vsc_tp_event_work(struct work_struct *work)
+ {
+-	struct vsc_tp *tp = data;
++	struct vsc_tp *tp = container_of(work, struct vsc_tp, event_work);
++
++	guard(mutex)(&tp->event_notify_mutex);
+ 
+ 	if (tp->event_notify)
+ 		tp->event_notify(tp->event_notify_context);
+-
+-	return IRQ_HANDLED;
+ }
+ 
+ /* wakeup firmware and wait for response */
+@@ -399,6 +402,8 @@ EXPORT_SYMBOL_NS_GPL(vsc_tp_need_read, "VSC_TP");
+ int vsc_tp_register_event_cb(struct vsc_tp *tp, vsc_tp_event_cb_t event_cb,
+ 			    void *context)
+ {
++	guard(mutex)(&tp->event_notify_mutex);
++
+ 	tp->event_notify = event_cb;
+ 	tp->event_notify_context = context;
+ 
+@@ -406,37 +411,6 @@ int vsc_tp_register_event_cb(struct vsc_tp *tp, vsc_tp_event_cb_t event_cb,
+ }
+ EXPORT_SYMBOL_NS_GPL(vsc_tp_register_event_cb, "VSC_TP");
+ 
+-/**
+- * vsc_tp_request_irq - request irq for vsc_tp device
+- * @tp: vsc_tp device handle
+- */
+-int vsc_tp_request_irq(struct vsc_tp *tp)
+-{
+-	struct spi_device *spi = tp->spi;
+-	struct device *dev = &spi->dev;
+-	int ret;
+-
+-	irq_set_status_flags(spi->irq, IRQ_DISABLE_UNLAZY);
+-	ret = request_threaded_irq(spi->irq, vsc_tp_isr, vsc_tp_thread_isr,
+-				   IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+-				   dev_name(dev), tp);
+-	if (ret)
+-		return ret;
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_NS_GPL(vsc_tp_request_irq, "VSC_TP");
+-
+-/**
+- * vsc_tp_free_irq - free irq for vsc_tp device
+- * @tp: vsc_tp device handle
+- */
+-void vsc_tp_free_irq(struct vsc_tp *tp)
+-{
+-	free_irq(tp->spi->irq, tp);
+-}
+-EXPORT_SYMBOL_NS_GPL(vsc_tp_free_irq, "VSC_TP");
+-
+ /**
+  * vsc_tp_intr_synchronize - synchronize vsc_tp interrupt
+  * @tp: vsc_tp device handle
+@@ -523,13 +497,15 @@ static int vsc_tp_probe(struct spi_device *spi)
+ 	tp->spi = spi;
+ 
+ 	irq_set_status_flags(spi->irq, IRQ_DISABLE_UNLAZY);
+-	ret = request_threaded_irq(spi->irq, vsc_tp_isr, vsc_tp_thread_isr,
++	ret = request_threaded_irq(spi->irq, NULL, vsc_tp_isr,
+ 				   IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				   dev_name(dev), tp);
+ 	if (ret)
+ 		return ret;
+ 
+ 	mutex_init(&tp->mutex);
++	mutex_init(&tp->event_notify_mutex);
++	INIT_WORK(&tp->event_work, vsc_tp_event_work);
+ 
+ 	/* only one child acpi device */
+ 	ret = acpi_dev_for_each_child(ACPI_COMPANION(dev),
+@@ -552,10 +528,12 @@ static int vsc_tp_probe(struct spi_device *spi)
+ 	return 0;
+ 
+ err_destroy_lock:
+-	mutex_destroy(&tp->mutex);
+-
+ 	free_irq(spi->irq, tp);
+ 
++	cancel_work_sync(&tp->event_work);
++	mutex_destroy(&tp->event_notify_mutex);
++	mutex_destroy(&tp->mutex);
++
+ 	return ret;
+ }
+ 
+@@ -565,9 +543,11 @@ static void vsc_tp_remove(struct spi_device *spi)
+ 
+ 	platform_device_unregister(tp->pdev);
+ 
+-	mutex_destroy(&tp->mutex);
+-
+ 	free_irq(spi->irq, tp);
++
++	cancel_work_sync(&tp->event_work);
++	mutex_destroy(&tp->event_notify_mutex);
++	mutex_destroy(&tp->mutex);
+ }
+ 
+ static void vsc_tp_shutdown(struct spi_device *spi)
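
The vsc-tp rework above drops the second-level thread function: the
remaining handler wakes any waiting transfer and defers the event
callback to a work item, and both the work function and
vsc_tp_register_event_cb() take the new event_notify_mutex, so the
callback can be torn down safely (see the mei_vsc hunks registering a
NULL callback on remove). The generic shape of the pattern, with
hypothetical names:

	static irqreturn_t isr(int irq, void *data)
	{
		struct dev_ctx *ctx = data;	/* hypothetical */

		schedule_work(&ctx->event_work);
		return IRQ_HANDLED;
	}

	static void event_work_fn(struct work_struct *work)
	{
		struct dev_ctx *ctx =
			container_of(work, struct dev_ctx, event_work);

		guard(mutex)(&ctx->notify_mutex);
		if (ctx->notify)
			ctx->notify(ctx->notify_ctx);
	}
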
+diff --git a/drivers/misc/mei/vsc-tp.h b/drivers/misc/mei/vsc-tp.h
+index 14ca195cbddccf..f9513ddc3e4093 100644
+--- a/drivers/misc/mei/vsc-tp.h
++++ b/drivers/misc/mei/vsc-tp.h
+@@ -37,9 +37,6 @@ int vsc_tp_xfer(struct vsc_tp *tp, u8 cmd, const void *obuf, size_t olen,
+ int vsc_tp_register_event_cb(struct vsc_tp *tp, vsc_tp_event_cb_t event_cb,
+ 			     void *context);
+ 
+-int vsc_tp_request_irq(struct vsc_tp *tp);
+-void vsc_tp_free_irq(struct vsc_tp *tp);
+-
+ void vsc_tp_intr_enable(struct vsc_tp *tp);
+ void vsc_tp_intr_disable(struct vsc_tp *tp);
+ void vsc_tp_intr_synchronize(struct vsc_tp *tp);
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index e5069882457ef6..c69644be4176ab 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -28,7 +28,8 @@ static ssize_t sram_read(struct file *filp, struct kobject *kobj,
+ {
+ 	struct sram_partition *part;
+ 
+-	part = container_of(attr, struct sram_partition, battr);
++	/* Cast away the const as the attribute is part of a larger structure */
++	part = (struct sram_partition *)container_of(attr, struct sram_partition, battr);
+ 
+ 	mutex_lock(&part->lock);
+ 	memcpy_fromio(buf, part->base + pos, count);
+@@ -43,7 +44,8 @@ static ssize_t sram_write(struct file *filp, struct kobject *kobj,
+ {
+ 	struct sram_partition *part;
+ 
+-	part = container_of(attr, struct sram_partition, battr);
++	/* Cast away the const as the attribute is part of a larger structure */
++	part = (struct sram_partition *)container_of(attr, struct sram_partition, battr);
+ 
+ 	mutex_lock(&part->lock);
+ 	memcpy_toio(part->base + pos, buf, count);
+@@ -164,8 +166,8 @@ static void sram_free_partitions(struct sram_dev *sram)
+ static int sram_reserve_cmp(void *priv, const struct list_head *a,
+ 					const struct list_head *b)
+ {
+-	struct sram_reserve *ra = list_entry(a, struct sram_reserve, list);
+-	struct sram_reserve *rb = list_entry(b, struct sram_reserve, list);
++	const struct sram_reserve *ra = list_entry(a, struct sram_reserve, list);
++	const struct sram_reserve *rb = list_entry(b, struct sram_reserve, list);
+ 
+ 	return ra->start - rb->start;
+ }
+diff --git a/drivers/mtd/ftl.c b/drivers/mtd/ftl.c
+index 8c22064ead3870..f2bd1984609ccc 100644
+--- a/drivers/mtd/ftl.c
++++ b/drivers/mtd/ftl.c
+@@ -344,7 +344,7 @@ static int erase_xfer(partition_t *part,
+             return -ENOMEM;
+ 
+     erase->addr = xfer->Offset;
+-    erase->len = 1 << part->header.EraseUnitSize;
++    erase->len = 1ULL << part->header.EraseUnitSize;
+ 
+     ret = mtd_erase(part->mbd.mtd, erase);
+     if (!ret) {
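
The ftl change above avoids undefined behavior: "1 <<
part->header.EraseUnitSize" is evaluated as a 32-bit signed int, so the
shift overflows for EraseUnitSize >= 31 even though erase->len is
64-bit; "1ULL <<" performs the whole shift in 64 bits. For example,
with EraseUnitSize = 31, 1 << 31 is already signed overflow, while
1ULL << 31 yields 0x80000000 as intended.
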
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index dedcca87defc7a..84ab4a83cbd686 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -373,7 +373,7 @@ static int atmel_nand_dma_transfer(struct atmel_nand_controller *nc,
+ 	dma_cookie_t cookie;
+ 
+ 	buf_dma = dma_map_single(nc->dev, buf, len, dir);
+-	if (dma_mapping_error(nc->dev, dev_dma)) {
++	if (dma_mapping_error(nc->dev, buf_dma)) {
+ 		dev_err(nc->dev,
+ 			"Failed to prepare a buffer for DMA access\n");
+ 		goto err;
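
The atmel nand-controller fix above corrects a copy-paste error:
dma_mapping_error() was testing a different variable (dev_dma) than the
handle the mapping call actually produced. The rule is to test exactly
the handle just returned before using it -- the rockchip hunks further
below add the same checks where they were missing entirely:

	dma_addr_t buf_dma = dma_map_single(dev, buf, len, dir);
	if (dma_mapping_error(dev, buf_dma))	/* test buf_dma itself */
		return -ENOMEM;
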
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index 3c7dee1be21df1..0b402823b619cf 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -143,6 +143,7 @@ struct atmel_pmecc_caps {
+ 	int nstrengths;
+ 	int el_offset;
+ 	bool correct_erased_chunks;
++	bool clk_ctrl;
+ };
+ 
+ struct atmel_pmecc {
+@@ -843,6 +844,10 @@ static struct atmel_pmecc *atmel_pmecc_create(struct platform_device *pdev,
+ 	if (IS_ERR(pmecc->regs.errloc))
+ 		return ERR_CAST(pmecc->regs.errloc);
+ 
++	/* pmecc data setup time */
++	if (caps->clk_ctrl)
++		writel(PMECC_CLK_133MHZ, pmecc->regs.base + ATMEL_PMECC_CLK);
++
+ 	/* Disable all interrupts before registering the PMECC handler. */
+ 	writel(0xffffffff, pmecc->regs.base + ATMEL_PMECC_IDR);
+ 	atmel_pmecc_reset(pmecc);
+@@ -896,6 +901,7 @@ static struct atmel_pmecc_caps at91sam9g45_caps = {
+ 	.strengths = atmel_pmecc_strengths,
+ 	.nstrengths = 5,
+ 	.el_offset = 0x8c,
++	.clk_ctrl = true,
+ };
+ 
+ static struct atmel_pmecc_caps sama5d4_caps = {
+diff --git a/drivers/mtd/nand/raw/rockchip-nand-controller.c b/drivers/mtd/nand/raw/rockchip-nand-controller.c
+index 63e7b9e39a5ab0..c5d7cd8a6cab41 100644
+--- a/drivers/mtd/nand/raw/rockchip-nand-controller.c
++++ b/drivers/mtd/nand/raw/rockchip-nand-controller.c
+@@ -656,9 +656,16 @@ static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf,
+ 
+ 	dma_data = dma_map_single(nfc->dev, (void *)nfc->page_buf,
+ 				  mtd->writesize, DMA_TO_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_data))
++		return -ENOMEM;
++
+ 	dma_oob = dma_map_single(nfc->dev, nfc->oob_buf,
+ 				 ecc->steps * oob_step,
+ 				 DMA_TO_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_oob)) {
++		dma_unmap_single(nfc->dev, dma_data, mtd->writesize, DMA_TO_DEVICE);
++		return -ENOMEM;
++	}
+ 
+ 	reinit_completion(&nfc->done);
+ 	writel(INT_DMA, nfc->regs + nfc->cfg->int_en_off);
+@@ -772,9 +779,17 @@ static int rk_nfc_read_page_hwecc(struct nand_chip *chip, u8 *buf, int oob_on,
+ 	dma_data = dma_map_single(nfc->dev, nfc->page_buf,
+ 				  mtd->writesize,
+ 				  DMA_FROM_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_data))
++		return -ENOMEM;
++
+ 	dma_oob = dma_map_single(nfc->dev, nfc->oob_buf,
+ 				 ecc->steps * oob_step,
+ 				 DMA_FROM_DEVICE);
++	if (dma_mapping_error(nfc->dev, dma_oob)) {
++		dma_unmap_single(nfc->dev, dma_data, mtd->writesize,
++				 DMA_FROM_DEVICE);
++		return -ENOMEM;
++	}
+ 
+ 	/*
+ 	 * The first blocks (4, 8 or 16 depending on the device)
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index bf08dbf5e7421f..b9f156c0f8bcf9 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -17,6 +17,7 @@
+ 
+ #define SPINOR_OP_CLSR		0x30	/* Clear status register 1 */
+ #define SPINOR_OP_CLPEF		0x82	/* Clear program/erase failure flags */
++#define SPINOR_OP_CYPRESS_EX4B	0xB8	/* Exit 4-byte address mode */
+ #define SPINOR_OP_CYPRESS_DIE_ERASE		0x61	/* Chip (die) erase */
+ #define SPINOR_OP_RD_ANY_REG			0x65	/* Read any register */
+ #define SPINOR_OP_WR_ANY_REG			0x71	/* Write any register */
+@@ -58,6 +59,13 @@
+ 		   SPI_MEM_OP_DUMMY(ndummy, 0),				\
+ 		   SPI_MEM_OP_DATA_IN(1, buf, 0))
+ 
++#define CYPRESS_NOR_EN4B_EX4B_OP(enable)				\
++	SPI_MEM_OP(SPI_MEM_OP_CMD(enable ? SPINOR_OP_EN4B :		\
++					   SPINOR_OP_CYPRESS_EX4B, 0),	\
++		   SPI_MEM_OP_NO_ADDR,					\
++		   SPI_MEM_OP_NO_DUMMY,					\
++		   SPI_MEM_OP_NO_DATA)
++
+ #define SPANSION_OP(opcode)						\
+ 	SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 0),				\
+ 		   SPI_MEM_OP_NO_ADDR,					\
+@@ -356,6 +364,20 @@ static int cypress_nor_quad_enable_volatile(struct spi_nor *nor)
+ 	return 0;
+ }
+ 
++static int cypress_nor_set_4byte_addr_mode(struct spi_nor *nor, bool enable)
++{
++	int ret;
++	struct spi_mem_op op = CYPRESS_NOR_EN4B_EX4B_OP(enable);
++
++	spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
++
++	ret = spi_mem_exec_op(nor->spimem, &op);
++	if (ret)
++		dev_dbg(nor->dev, "error %d setting 4-byte mode\n", ret);
++
++	return ret;
++}
++
+ /**
+  * cypress_nor_determine_addr_mode_by_sr1() - Determine current address mode
+  *                                            (3 or 4-byte) by querying status
+@@ -526,6 +548,9 @@ s25fs256t_post_bfpt_fixup(struct spi_nor *nor,
+ 	struct spi_mem_op op;
+ 	int ret;
+ 
++	/* Assign 4-byte address mode method that is not determined in BFPT */
++	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
++
+ 	ret = cypress_nor_set_addr_mode_nbytes(nor);
+ 	if (ret)
+ 		return ret;
+@@ -591,6 +616,9 @@ s25hx_t_post_bfpt_fixup(struct spi_nor *nor,
+ {
+ 	int ret;
+ 
++	/* Assign 4-byte address mode method that is not determined in BFPT */
++	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
++
+ 	ret = cypress_nor_set_addr_mode_nbytes(nor);
+ 	if (ret)
+ 		return ret;
+@@ -718,6 +746,9 @@ static int s28hx_t_post_bfpt_fixup(struct spi_nor *nor,
+ 				   const struct sfdp_parameter_header *bfpt_header,
+ 				   const struct sfdp_bfpt *bfpt)
+ {
++	/* Assign 4-byte address mode method that is not determined in BFPT */
++	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
++
+ 	return cypress_nor_set_addr_mode_nbytes(nor);
+ }
+ 
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index 09510663988c7e..dc748797416efe 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -982,6 +982,7 @@ static int kvaser_pciefd_setup_can_ctrls(struct kvaser_pciefd *pcie)
+ 		can->completed_tx_bytes = 0;
+ 		can->bec.txerr = 0;
+ 		can->bec.rxerr = 0;
++		can->can.dev->dev_port = i;
+ 
+ 		init_completion(&can->start_comp);
+ 		init_completion(&can->flush_comp);
+diff --git a/drivers/net/can/sja1000/Kconfig b/drivers/net/can/sja1000/Kconfig
+index 2f516cc6d22c40..e061e35769bfba 100644
+--- a/drivers/net/can/sja1000/Kconfig
++++ b/drivers/net/can/sja1000/Kconfig
+@@ -105,7 +105,7 @@ config CAN_SJA1000_PLATFORM
+ 
+ config CAN_TSCAN1
+ 	tristate "TS-CAN1 PC104 boards"
+-	depends on ISA
++	depends on (ISA && PC104) || (COMPILE_TEST && HAS_IOPORT)
+ 	help
+ 	  This driver is for Technologic Systems' TSCAN-1 PC104 boards.
+ 	  https://www.embeddedts.com/products/TS-CAN1
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index daf42080f94282..e863a9b0e30377 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -852,6 +852,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	netdev->ethtool_ops = &kvaser_usb_ethtool_ops;
+ 	SET_NETDEV_DEV(netdev, &dev->intf->dev);
+ 	netdev->dev_id = channel;
++	netdev->dev_port = channel;
+ 
+ 	dev->nets[channel] = priv;
+ 
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+index 4d85b29a17b787..ebefc274b50a5f 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c
+@@ -49,7 +49,7 @@ struct __packed pcan_ufd_fw_info {
+ 	__le32	ser_no;		/* S/N */
+ 	__le32	flags;		/* special functions */
+ 
+-	/* extended data when type == PCAN_USBFD_TYPE_EXT */
++	/* extended data when type >= PCAN_USBFD_TYPE_EXT */
+ 	u8	cmd_out_ep;	/* ep for cmd */
+ 	u8	cmd_in_ep;	/* ep for replies */
+ 	u8	data_out_ep[2];	/* ep for CANx TX */
+@@ -982,10 +982,11 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
+ 			dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO;
+ 		}
+ 
+-		/* if vendor rsp is of type 2, then it contains EP numbers to
+-		 * use for cmds pipes. If not, then default EP should be used.
++		/* if vendor rsp type is greater than or equal to 2, then it
++		 * contains EP numbers to use for cmds pipes. If not, then
++		 * default EP should be used.
+ 		 */
+-		if (fw_info->type != cpu_to_le16(PCAN_USBFD_TYPE_EXT)) {
++		if (le16_to_cpu(fw_info->type) < PCAN_USBFD_TYPE_EXT) {
+ 			fw_info->cmd_out_ep = PCAN_USBPRO_EP_CMDOUT;
+ 			fw_info->cmd_in_ep = PCAN_USBPRO_EP_CMDIN;
+ 		}
+@@ -1018,11 +1019,11 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
+ 	dev->can_channel_id =
+ 		le32_to_cpu(pdev->usb_if->fw_info.dev_id[dev->ctrl_idx]);
+ 
+-	/* if vendor rsp is of type 2, then it contains EP numbers to
+-	 * use for data pipes. If not, then statically defined EP are used
+-	 * (see peak_usb_create_dev()).
++	/* if vendor rsp type is greater than or equal to 2, then it contains EP
++	 * numbers to use for data pipes. If not, then statically defined EP are
++	 * used (see peak_usb_create_dev()).
+ 	 */
+-	if (fw_info->type == cpu_to_le16(PCAN_USBFD_TYPE_EXT)) {
++	if (le16_to_cpu(fw_info->type) >= PCAN_USBFD_TYPE_EXT) {
+ 		dev->ep_msg_in = fw_info->data_in_ep;
+ 		dev->ep_msg_out = fw_info->data_out_ep[dev->ctrl_idx];
+ 	}
+diff --git a/drivers/net/dsa/microchip/ksz8.c b/drivers/net/dsa/microchip/ksz8.c
+index be433b4e2b1ca8..8f55be89f8bf65 100644
+--- a/drivers/net/dsa/microchip/ksz8.c
++++ b/drivers/net/dsa/microchip/ksz8.c
+@@ -371,6 +371,9 @@ static void ksz8863_r_mib_pkt(struct ksz_device *dev, int port, u16 addr,
+ 	addr -= dev->info->reg_mib_cnt;
+ 	ctrl_addr = addr ? KSZ8863_MIB_PACKET_DROPPED_TX_0 :
+ 			   KSZ8863_MIB_PACKET_DROPPED_RX_0;
++	if (ksz_is_8895_family(dev) &&
++	    ctrl_addr == KSZ8863_MIB_PACKET_DROPPED_RX_0)
++		ctrl_addr = KSZ8895_MIB_PACKET_DROPPED_RX_0;
+ 	ctrl_addr += port;
+ 	ctrl_addr |= IND_ACC_TABLE(TABLE_MIB | TABLE_READ);
+ 
+diff --git a/drivers/net/dsa/microchip/ksz8_reg.h b/drivers/net/dsa/microchip/ksz8_reg.h
+index 329688603a582b..da80e659c64809 100644
+--- a/drivers/net/dsa/microchip/ksz8_reg.h
++++ b/drivers/net/dsa/microchip/ksz8_reg.h
+@@ -784,7 +784,9 @@
+ #define KSZ8795_MIB_TOTAL_TX_1		0x105
+ 
+ #define KSZ8863_MIB_PACKET_DROPPED_TX_0 0x100
+-#define KSZ8863_MIB_PACKET_DROPPED_RX_0 0x105
++#define KSZ8863_MIB_PACKET_DROPPED_RX_0 0x103
++
++#define KSZ8895_MIB_PACKET_DROPPED_RX_0 0x105
+ 
+ #define MIB_PACKET_DROPPED		0x0000FFFF
+ 
+diff --git a/drivers/net/ethernet/airoha/airoha_npu.c b/drivers/net/ethernet/airoha/airoha_npu.c
+index 1e58a4aeb9a0c2..12fc3c68b9d002 100644
+--- a/drivers/net/ethernet/airoha/airoha_npu.c
++++ b/drivers/net/ethernet/airoha/airoha_npu.c
+@@ -586,6 +586,8 @@ static struct platform_driver airoha_npu_driver = {
+ };
+ module_platform_driver(airoha_npu_driver);
+ 
++MODULE_FIRMWARE(NPU_EN7581_FIRMWARE_DATA);
++MODULE_FIRMWARE(NPU_EN7581_FIRMWARE_RV32);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Lorenzo Bianconi <lorenzo@kernel.org>");
+ MODULE_DESCRIPTION("Airoha Network Processor Unit driver");
+diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c
+index 0e217acfc5ef74..7832fe8fc2021d 100644
+--- a/drivers/net/ethernet/airoha/airoha_ppe.c
++++ b/drivers/net/ethernet/airoha/airoha_ppe.c
+@@ -498,9 +498,11 @@ static void airoha_ppe_foe_flow_stats_update(struct airoha_ppe *ppe,
+ 		FIELD_PREP(AIROHA_FOE_IB2_NBQ, nbq);
+ }
+ 
+-struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe,
+-						  u32 hash)
++static struct airoha_foe_entry *
++airoha_ppe_foe_get_entry_locked(struct airoha_ppe *ppe, u32 hash)
+ {
++	lockdep_assert_held(&ppe_lock);
++
+ 	if (hash < PPE_SRAM_NUM_ENTRIES) {
+ 		u32 *hwe = ppe->foe + hash * sizeof(struct airoha_foe_entry);
+ 		struct airoha_eth *eth = ppe->eth;
+@@ -527,6 +529,18 @@ struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe,
+ 	return ppe->foe + hash * sizeof(struct airoha_foe_entry);
+ }
+ 
++struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe,
++						  u32 hash)
++{
++	struct airoha_foe_entry *hwe;
++
++	spin_lock_bh(&ppe_lock);
++	hwe = airoha_ppe_foe_get_entry_locked(ppe, hash);
++	spin_unlock_bh(&ppe_lock);
++
++	return hwe;
++}
++
+ static bool airoha_ppe_foe_compare_entry(struct airoha_flow_table_entry *e,
+ 					 struct airoha_foe_entry *hwe)
+ {
+@@ -641,7 +655,7 @@ airoha_ppe_foe_commit_subflow_entry(struct airoha_ppe *ppe,
+ 	struct airoha_flow_table_entry *f;
+ 	int type;
+ 
+-	hwe_p = airoha_ppe_foe_get_entry(ppe, hash);
++	hwe_p = airoha_ppe_foe_get_entry_locked(ppe, hash);
+ 	if (!hwe_p)
+ 		return -EINVAL;
+ 
+@@ -693,7 +707,7 @@ static void airoha_ppe_foe_insert_entry(struct airoha_ppe *ppe,
+ 
+ 	spin_lock_bh(&ppe_lock);
+ 
+-	hwe = airoha_ppe_foe_get_entry(ppe, hash);
++	hwe = airoha_ppe_foe_get_entry_locked(ppe, hash);
+ 	if (!hwe)
+ 		goto unlock;
+ 
+@@ -808,7 +822,7 @@ airoha_ppe_foe_flow_l2_entry_update(struct airoha_ppe *ppe,
+ 		u32 ib1, state;
+ 		int idle;
+ 
+-		hwe = airoha_ppe_foe_get_entry(ppe, iter->hash);
++		hwe = airoha_ppe_foe_get_entry_locked(ppe, iter->hash);
+ 		if (!hwe)
+ 			continue;
+ 
+@@ -845,7 +859,7 @@ static void airoha_ppe_foe_flow_entry_update(struct airoha_ppe *ppe,
+ 	if (e->hash == 0xffff)
+ 		goto unlock;
+ 
+-	hwe_p = airoha_ppe_foe_get_entry(ppe, e->hash);
++	hwe_p = airoha_ppe_foe_get_entry_locked(ppe, e->hash);
+ 	if (!hwe_p)
+ 		goto unlock;
+ 
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index d730af4a50c745..bb5d2fa157365f 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -3856,8 +3856,8 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ 	status = be_mcc_notify_wait(adapter);
+ 
+ err:
+-	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ 	spin_unlock_bh(&adapter->mcc_lock);
++	dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ 	return status;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_xsk.c b/drivers/net/ethernet/intel/igb/igb_xsk.c
+index 5cf67ba292694f..30ce5fbb5b776d 100644
+--- a/drivers/net/ethernet/intel/igb/igb_xsk.c
++++ b/drivers/net/ethernet/intel/igb/igb_xsk.c
+@@ -482,7 +482,7 @@ bool igb_xmit_zc(struct igb_ring *tx_ring, struct xsk_buff_pool *xsk_pool)
+ 	if (!nb_pkts)
+ 		return true;
+ 
+-	while (nb_pkts-- > 0) {
++	for (; i < nb_pkts; i++) {
+ 		dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr);
+ 		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len);
+ 
+@@ -512,7 +512,6 @@ bool igb_xmit_zc(struct igb_ring *tx_ring, struct xsk_buff_pool *xsk_pool)
+ 
+ 		total_bytes += descs[i].len;
+ 
+-		i++;
+ 		tx_ring->next_to_use++;
+ 		tx_buffer_info->next_to_watch = tx_desc;
+ 		if (tx_ring->next_to_use == tx_ring->count)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 5b0d03b3efe828..48bcd6813aff45 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -728,6 +728,7 @@ struct mlx5e_rq {
+ 	struct xsk_buff_pool  *xsk_pool;
+ 
+ 	struct work_struct     recover_work;
++	struct work_struct     rx_timeout_work;
+ 
+ 	/* control */
+ 	struct mlx5_wq_ctrl    wq_ctrl;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 8e25f4ef5cccee..5ae787656a7ca0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -331,6 +331,9 @@ static int port_set_buffer(struct mlx5e_priv *priv,
+ 	if (err)
+ 		goto out;
+ 
++	/* RO bits should be set to 0 on write */
++	MLX5_SET(pbmc_reg, in, port_buffer_size, 0);
++
+ 	err = mlx5e_port_set_pbmc(mdev, in);
+ out:
+ 	kfree(in);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+index e75759533ae0c4..16c44d628eda65 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+@@ -170,16 +170,23 @@ static int mlx5e_rx_reporter_err_rq_cqe_recover(void *ctx)
+ static int mlx5e_rx_reporter_timeout_recover(void *ctx)
+ {
+ 	struct mlx5_eq_comp *eq;
++	struct mlx5e_priv *priv;
+ 	struct mlx5e_rq *rq;
+ 	int err;
+ 
+ 	rq = ctx;
++	priv = rq->priv;
++
++	mutex_lock(&priv->state_lock);
++
+ 	eq = rq->cq.mcq.eq;
+ 
+ 	err = mlx5e_health_channel_eq_recover(rq->netdev, eq, rq->cq.ch_stats);
+ 	if (err && rq->icosq)
+ 		clear_bit(MLX5E_SQ_STATE_ENABLED, &rq->icosq->state);
+ 
++	mutex_unlock(&priv->state_lock);
++
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+index 727fa7c185238c..6056106edcc647 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
+@@ -327,6 +327,10 @@ void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev,
+ 	if (unlikely(!sa_entry)) {
+ 		rcu_read_unlock();
+ 		atomic64_inc(&ipsec->sw_stats.ipsec_rx_drop_sadb_miss);
++		/* Clear secpath to prevent invalid dereference
++		 * in downstream XFRM policy checks.
++		 */
++		secpath_reset(skb);
+ 		return;
+ 	}
+ 	xfrm_state_hold(sa_entry->x);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index ea822c69d137b2..16d818943487b2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -707,6 +707,27 @@ static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work)
+ 	mlx5e_reporter_rq_cqe_err(rq);
+ }
+ 
++static void mlx5e_rq_timeout_work(struct work_struct *timeout_work)
++{
++	struct mlx5e_rq *rq = container_of(timeout_work,
++					   struct mlx5e_rq,
++					   rx_timeout_work);
++
++	/* Acquire netdev instance lock to synchronize with channel close and
++	 * reopen flows. Either successfully obtain the lock, or detect that
++	 * channels are closing for another reason, making this work no longer
++	 * necessary.
++	 */
++	while (!netdev_trylock(rq->netdev)) {
++		if (!test_bit(MLX5E_STATE_CHANNELS_ACTIVE, &rq->priv->state))
++			return;
++		msleep(20);
++	}
++
++	mlx5e_reporter_rx_timeout(rq);
++	netdev_unlock(rq->netdev);
++}
++
+ static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
+ {
+ 	rq->wqe_overflow.page = alloc_page(GFP_KERNEL);
+@@ -830,6 +851,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
+ 
+ 	rqp->wq.db_numa_node = node;
+ 	INIT_WORK(&rq->recover_work, mlx5e_rq_err_cqe_work);
++	INIT_WORK(&rq->rx_timeout_work, mlx5e_rq_timeout_work);
+ 
+ 	if (params->xdp_prog)
+ 		bpf_prog_inc(params->xdp_prog);
+@@ -1204,7 +1226,8 @@ int mlx5e_wait_for_min_rx_wqes(struct mlx5e_rq *rq, int wait_time)
+ 	netdev_warn(rq->netdev, "Failed to get min RX wqes on Channel[%d] RQN[0x%x] wq cur_sz(%d) min_rx_wqes(%d)\n",
+ 		    rq->ix, rq->rqn, mlx5e_rqwq_get_cur_sz(rq), min_wqes);
+ 
+-	mlx5e_reporter_rx_timeout(rq);
++	queue_work(rq->priv->wq, &rq->rx_timeout_work);
++
+ 	return -ETIMEDOUT;
+ }
+ 
+@@ -1375,6 +1398,7 @@ void mlx5e_close_rq(struct mlx5e_rq *rq)
+ 	if (rq->dim)
+ 		cancel_work_sync(&rq->dim->work);
+ 	cancel_work_sync(&rq->recover_work);
++	cancel_work_sync(&rq->rx_timeout_work);
+ 	mlx5e_destroy_rq(rq);
+ 	mlx5e_free_rx_descs(rq);
+ 	mlx5e_free_rq(rq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 7462514c7f3d16..da3e340c99b72c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1567,6 +1567,7 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
+ 		unsigned int hdrlen = mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+ 
+ 		skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt - hdrlen, lro_num_seg);
++		skb_shinfo(skb)->gso_segs = lro_num_seg;
+ 		/* Subtract one since we already counted this as one
+ 		 * "regular" packet in mlx5e_complete_rx_cqe()
+ 		 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+index 7c5516b0a84494..8115071c34a4ae 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/dm.c
+@@ -30,7 +30,7 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
+ 
+ 	dm = kzalloc(sizeof(*dm), GFP_KERNEL);
+ 	if (!dm)
+-		return ERR_PTR(-ENOMEM);
++		return NULL;
+ 
+ 	spin_lock_init(&dm->lock);
+ 
+@@ -96,7 +96,7 @@ struct mlx5_dm *mlx5_dm_create(struct mlx5_core_dev *dev)
+ err_steering:
+ 	kfree(dm);
+ 
+-	return ERR_PTR(-ENOMEM);
++	return NULL;
+ }
+ 
+ void mlx5_dm_cleanup(struct mlx5_core_dev *dev)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 9c1504d29d34c3..e7bcd0f0a70979 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1102,9 +1102,6 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+ 	}
+ 
+ 	dev->dm = mlx5_dm_create(dev);
+-	if (IS_ERR(dev->dm))
+-		mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm));
+-
+ 	dev->tracer = mlx5_fw_tracer_create(dev);
+ 	dev->hv_vhca = mlx5_hv_vhca_create(dev);
+ 	dev->rsc_dump = mlx5_rsc_dump_create(dev);
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+index aa812c63d5afac..553bd8b8bb0562 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+@@ -33,7 +33,7 @@ int __fbnic_open(struct fbnic_net *fbn)
+ 		dev_warn(fbd->dev,
+ 			 "Error %d sending host ownership message to the firmware\n",
+ 			 err);
+-		goto free_resources;
++		goto err_reset_queues;
+ 	}
+ 
+ 	err = fbnic_time_start(fbn);
+@@ -57,6 +57,8 @@ int __fbnic_open(struct fbnic_net *fbn)
+ 	fbnic_time_stop(fbn);
+ release_ownership:
+ 	fbnic_fw_xmit_ownership_msg(fbn->fbd, false);
++err_reset_queues:
++	fbnic_reset_netif_queues(fbn);
+ free_resources:
+ 	fbnic_free_resources(fbn);
+ free_napi_vectors:
+@@ -420,15 +422,17 @@ static void fbnic_get_stats64(struct net_device *dev,
+ 	tx_packets = stats->packets;
+ 	tx_dropped = stats->dropped;
+ 
+-	stats64->tx_bytes = tx_bytes;
+-	stats64->tx_packets = tx_packets;
+-	stats64->tx_dropped = tx_dropped;
+-
+ 	/* Record drops from Tx HW Datapath */
++	spin_lock(&fbd->hw_stats_lock);
+ 	tx_dropped += fbd->hw_stats.tmi.drop.frames.value +
+ 		      fbd->hw_stats.tti.cm_drop.frames.value +
+ 		      fbd->hw_stats.tti.frame_drop.frames.value +
+ 		      fbd->hw_stats.tti.tbi_drop.frames.value;
++	spin_unlock(&fbd->hw_stats_lock);
++
++	stats64->tx_bytes = tx_bytes;
++	stats64->tx_packets = tx_packets;
++	stats64->tx_dropped = tx_dropped;
+ 
+ 	for (i = 0; i < fbn->num_tx_queues; i++) {
+ 		struct fbnic_ring *txr = fbn->tx[i];
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+index ac11389a764cef..f9543d03485fe1 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+@@ -661,8 +661,8 @@ static void fbnic_page_pool_init(struct fbnic_ring *ring, unsigned int idx,
+ {
+ 	struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
+ 
+-	page_pool_fragment_page(page, PAGECNT_BIAS_MAX);
+-	rx_buf->pagecnt_bias = PAGECNT_BIAS_MAX;
++	page_pool_fragment_page(page, FBNIC_PAGECNT_BIAS_MAX);
++	rx_buf->pagecnt_bias = FBNIC_PAGECNT_BIAS_MAX;
+ 	rx_buf->page = page;
+ }
+ 
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+index f46616af41eac4..37b4dadbfc6c8b 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+@@ -91,10 +91,8 @@ struct fbnic_queue_stats {
+ 	struct u64_stats_sync syncp;
+ };
+ 
+-/* Pagecnt bias is long max to reserve the last bit to catch overflow
+- * cases where if we overcharge the bias it will flip over to be negative.
+- */
+-#define PAGECNT_BIAS_MAX	LONG_MAX
++#define FBNIC_PAGECNT_BIAS_MAX	PAGE_SIZE
++
+ struct fbnic_rx_buf {
+ 	struct page *page;
+ 	long pagecnt_bias;
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index faad1cb880f8ac..2dd14d97cc9893 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -1912,8 +1912,10 @@ static void mana_destroy_txq(struct mana_port_context *apc)
+ 		napi = &apc->tx_qp[i].tx_cq.napi;
+ 		if (apc->tx_qp[i].txq.napi_initialized) {
+ 			napi_synchronize(napi);
+-			napi_disable(napi);
+-			netif_napi_del(napi);
++			netdev_lock_ops_to_full(napi->dev);
++			napi_disable_locked(napi);
++			netif_napi_del_locked(napi);
++			netdev_unlock_full_to_ops(napi->dev);
+ 			apc->tx_qp[i].txq.napi_initialized = false;
+ 		}
+ 		mana_destroy_wq_obj(apc, GDMA_SQ, apc->tx_qp[i].tx_object);
+@@ -2065,8 +2067,11 @@ static int mana_create_txq(struct mana_port_context *apc,
+ 
+ 		mana_create_txq_debugfs(apc, i);
+ 
+-		netif_napi_add_tx(net, &cq->napi, mana_poll);
+-		napi_enable(&cq->napi);
++		set_bit(NAPI_STATE_NO_BUSY_POLL, &cq->napi.state);
++		netdev_lock_ops_to_full(net);
++		netif_napi_add_locked(net, &cq->napi, mana_poll);
++		napi_enable_locked(&cq->napi);
++		netdev_unlock_full_to_ops(net);
+ 		txq->napi_initialized = true;
+ 
+ 		mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT);
+@@ -2102,9 +2107,10 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
+ 	if (napi_initialized) {
+ 		napi_synchronize(napi);
+ 
+-		napi_disable(napi);
+-
+-		netif_napi_del(napi);
++		netdev_lock_ops_to_full(napi->dev);
++		napi_disable_locked(napi);
++		netif_napi_del_locked(napi);
++		netdev_unlock_full_to_ops(napi->dev);
+ 	}
+ 	xdp_rxq_info_unreg(&rxq->xdp_rxq);
+ 
+@@ -2355,14 +2361,18 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
+ 
+ 	gc->cq_table[cq->gdma_id] = cq->gdma_cq;
+ 
+-	netif_napi_add_weight(ndev, &cq->napi, mana_poll, 1);
++	netdev_lock_ops_to_full(ndev);
++	netif_napi_add_weight_locked(ndev, &cq->napi, mana_poll, 1);
++	netdev_unlock_full_to_ops(ndev);
+ 
+ 	WARN_ON(xdp_rxq_info_reg(&rxq->xdp_rxq, ndev, rxq_idx,
+ 				 cq->napi.napi_id));
+ 	WARN_ON(xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
+ 					   rxq->page_pool));
+ 
+-	napi_enable(&cq->napi);
++	netdev_lock_ops_to_full(ndev);
++	napi_enable_locked(&cq->napi);
++	netdev_unlock_full_to_ops(ndev);
+ 
+ 	mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT);
+ out:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index b948df1bff9a84..e0fb06af1f940d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2596,7 +2596,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+ 
+ 	budget = min(budget, stmmac_tx_avail(priv, queue));
+ 
+-	while (budget-- > 0) {
++	for (; budget > 0; budget--) {
+ 		struct stmmac_metadata_request meta_req;
+ 		struct xsk_tx_metadata *meta = NULL;
+ 		dma_addr_t dma_addr;
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
+index 12f25cec6255e9..57e5f1c88f5098 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_common.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
+@@ -706,9 +706,9 @@ static int emac_rx_packet(struct prueth_emac *emac, u32 flow_id, u32 *xdp_state)
+ 	struct page_pool *pool;
+ 	struct sk_buff *skb;
+ 	struct xdp_buff xdp;
++	int headroom, ret;
+ 	u32 *psdata;
+ 	void *pa;
+-	int ret;
+ 
+ 	*xdp_state = 0;
+ 	pool = rx_chn->pg_pool;
+@@ -757,22 +757,23 @@ static int emac_rx_packet(struct prueth_emac *emac, u32 flow_id, u32 *xdp_state)
+ 		xdp_prepare_buff(&xdp, pa, PRUETH_HEADROOM, pkt_len, false);
+ 
+ 		*xdp_state = emac_run_xdp(emac, &xdp, page, &pkt_len);
+-		if (*xdp_state == ICSSG_XDP_PASS)
+-			skb = xdp_build_skb_from_buff(&xdp);
+-		else
++		if (*xdp_state != ICSSG_XDP_PASS)
+ 			goto requeue;
++		headroom = xdp.data - xdp.data_hard_start;
++		pkt_len = xdp.data_end - xdp.data;
+ 	} else {
+-		/* prepare skb and send to n/w stack */
+-		skb = napi_build_skb(pa, PAGE_SIZE);
++		headroom = PRUETH_HEADROOM;
+ 	}
+ 
++	/* prepare skb and send to n/w stack */
++	skb = napi_build_skb(pa, PAGE_SIZE);
+ 	if (!skb) {
+ 		ndev->stats.rx_dropped++;
+ 		page_pool_recycle_direct(pool, page);
+ 		goto requeue;
+ 	}
+ 
+-	skb_reserve(skb, PRUETH_HEADROOM);
++	skb_reserve(skb, headroom);
+ 	skb_put(skb, pkt_len);
+ 	skb->dev = ndev;
+ 
+diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
+index 6782c2cbf542fa..01d219d3760c82 100644
+--- a/drivers/net/ipa/Kconfig
++++ b/drivers/net/ipa/Kconfig
+@@ -5,7 +5,7 @@ config QCOM_IPA
+ 	depends on INTERCONNECT
+ 	depends on QCOM_RPROC_COMMON || (QCOM_RPROC_COMMON=n && COMPILE_TEST)
+ 	depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n
+-	select QCOM_MDT_LOADER if ARCH_QCOM
++	select QCOM_MDT_LOADER
+ 	select QCOM_SCM
+ 	select QCOM_QMI_HELPERS
+ 	help
+diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
+index a59bd215494c9b..a53e9e6f6cdf50 100644
+--- a/drivers/net/ipa/ipa_sysfs.c
++++ b/drivers/net/ipa/ipa_sysfs.c
+@@ -37,8 +37,12 @@ static const char *ipa_version_string(struct ipa *ipa)
+ 		return "4.11";
+ 	case IPA_VERSION_5_0:
+ 		return "5.0";
++	case IPA_VERSION_5_1:
++		return "5.1";
++	case IPA_VERSION_5_5:
++		return "5.5";
+ 	default:
+-		return "0.0";	/* Won't happen (checked at probe time) */
++		return "0.0";	/* Should not happen */
+ 	}
+ }
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 7edbe76b5455a8..4c75d1fea55271 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3868,7 +3868,7 @@ static void macsec_setup(struct net_device *dev)
+ 	ether_setup(dev);
+ 	dev->min_mtu = 0;
+ 	dev->max_mtu = ETH_MAX_MTU;
+-	dev->priv_flags |= IFF_NO_QUEUE;
++	dev->priv_flags |= IFF_NO_QUEUE | IFF_UNICAST_FLT;
+ 	dev->netdev_ops = &macsec_netdev_ops;
+ 	dev->needs_free_netdev = true;
+ 	dev->priv_destructor = macsec_free_netdev;
+diff --git a/drivers/net/mdio/mdio-bcm-unimac.c b/drivers/net/mdio/mdio-bcm-unimac.c
+index b6e30bdf532574..7baab230008a42 100644
+--- a/drivers/net/mdio/mdio-bcm-unimac.c
++++ b/drivers/net/mdio/mdio-bcm-unimac.c
+@@ -209,10 +209,9 @@ static int unimac_mdio_clk_set(struct unimac_mdio_priv *priv)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (!priv->clk)
++	rate = clk_get_rate(priv->clk);
++	if (!rate)
+ 		rate = 250000000;
+-	else
+-		rate = clk_get_rate(priv->clk);
+ 
+ 	div = (rate / (2 * priv->clk_freq)) - 1;
+ 	if (div & ~MDIO_CLK_DIV_MASK) {
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index 176935a8645ff1..a35b1fd4337b94 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -86,10 +86,10 @@ static DEFINE_SPINLOCK(target_list_lock);
+ static DEFINE_MUTEX(target_cleanup_list_lock);
+ 
+ /*
+- * Console driver for extended netconsoles.  Registered on the first use to
+- * avoid unnecessarily enabling ext message formatting.
++ * Console driver for netconsoles.  Register only consoles that have
++ * an associated target of the same type.
+  */
+-static struct console netconsole_ext;
++static struct console netconsole_ext, netconsole;
+ 
+ struct netconsole_target_stats  {
+ 	u64_stats_t xmit_drop_count;
+@@ -97,6 +97,11 @@ struct netconsole_target_stats  {
+ 	struct u64_stats_sync syncp;
+ };
+ 
++enum console_type {
++	CONS_BASIC = BIT(0),
++	CONS_EXTENDED = BIT(1),
++};
++
+ /* Features enabled in sysdata. Contrary to userdata, this data is populated by
+  * the kernel. The fields are designed as bitwise flags, allowing multiple
+  * features to be set in sysdata_fields.
+@@ -491,6 +496,12 @@ static ssize_t enabled_store(struct config_item *item,
+ 		if (nt->extended && !console_is_registered(&netconsole_ext))
+ 			register_console(&netconsole_ext);
+ 
++		/* User might be enabling the basic format target for the very
++		 * first time, make sure the console is registered.
++		 */
++		if (!nt->extended && !console_is_registered(&netconsole))
++			register_console(&netconsole);
++
+ 		/*
+ 		 * Skip netpoll_parse_options() -- all the attributes are
+ 		 * already configured via configfs. Just print them out.
+@@ -1690,8 +1701,8 @@ static int __init init_netconsole(void)
+ {
+ 	int err;
+ 	struct netconsole_target *nt, *tmp;
++	u32 console_type_needed = 0;
+ 	unsigned int count = 0;
+-	bool extended = false;
+ 	unsigned long flags;
+ 	char *target_config;
+ 	char *input = config;
+@@ -1707,9 +1718,10 @@ static int __init init_netconsole(void)
+ 			}
+ 			/* Dump existing printks when we register */
+ 			if (nt->extended) {
+-				extended = true;
++				console_type_needed |= CONS_EXTENDED;
+ 				netconsole_ext.flags |= CON_PRINTBUFFER;
+ 			} else {
++				console_type_needed |= CONS_BASIC;
+ 				netconsole.flags |= CON_PRINTBUFFER;
+ 			}
+ 
+@@ -1728,9 +1740,10 @@ static int __init init_netconsole(void)
+ 	if (err)
+ 		goto undonotifier;
+ 
+-	if (extended)
++	if (console_type_needed & CONS_EXTENDED)
+ 		register_console(&netconsole_ext);
+-	register_console(&netconsole);
++	if (console_type_needed & CONS_BASIC)
++		register_console(&netconsole);
+ 	pr_info("network logging started\n");
+ 
+ 	return err;
+@@ -1760,7 +1773,8 @@ static void __exit cleanup_netconsole(void)
+ 
+ 	if (console_is_registered(&netconsole_ext))
+ 		unregister_console(&netconsole_ext);
+-	unregister_console(&netconsole);
++	if (console_is_registered(&netconsole))
++		unregister_console(&netconsole);
+ 	dynamic_netconsole_exit();
+ 	unregister_netdevice_notifier(&netconsole_netdev_notifier);
+ 
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index 6b800081eed52f..275706de5847cd 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -900,6 +900,7 @@ static int vsc85xx_eth1_conf(struct phy_device *phydev, enum ts_blk blk,
+ 				     get_unaligned_be32(ptp_multicast));
+ 	} else {
+ 		val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST;
++		val |= ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST;
+ 		vsc85xx_ts_write_csr(phydev, blk,
+ 				     MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(0), val);
+ 		vsc85xx_ts_write_csr(phydev, blk,
+diff --git a/drivers/net/phy/mscc/mscc_ptp.h b/drivers/net/phy/mscc/mscc_ptp.h
+index da3465360e9018..ae9ad925bfa8c0 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.h
++++ b/drivers/net/phy/mscc/mscc_ptp.h
+@@ -98,6 +98,7 @@
+ #define MSCC_ANA_ETH1_FLOW_ADDR_MATCH2(x) (MSCC_ANA_ETH1_FLOW_ENA(x) + 3)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_MASK_MASK	GENMASK(22, 20)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_MULTICAST	0x400000
++#define ANA_ETH1_FLOW_ADDR_MATCH2_ANY_UNICAST	0x200000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_FULL_ADDR	0x100000
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST_MASK	GENMASK(17, 16)
+ #define ANA_ETH1_FLOW_ADDR_MATCH2_SRC_DEST	0x020000
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 5feaa70b5f47e6..90737cb718928a 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -159,19 +159,17 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 	int len;
+ 	unsigned char *data;
+ 	__u32 seq_recv;
+-
+-
+ 	struct rtable *rt;
+ 	struct net_device *tdev;
+ 	struct iphdr  *iph;
+ 	int    max_headroom;
+ 
+ 	if (sk_pppox(po)->sk_state & PPPOX_DEAD)
+-		goto tx_error;
++		goto tx_drop;
+ 
+ 	rt = pptp_route_output(po, &fl4);
+ 	if (IS_ERR(rt))
+-		goto tx_error;
++		goto tx_drop;
+ 
+ 	tdev = rt->dst.dev;
+ 
+@@ -179,16 +177,20 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 
+ 	if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
+ 		struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
+-		if (!new_skb) {
+-			ip_rt_put(rt);
++
++		if (!new_skb)
+ 			goto tx_error;
+-		}
++
+ 		if (skb->sk)
+ 			skb_set_owner_w(new_skb, skb->sk);
+ 		consume_skb(skb);
+ 		skb = new_skb;
+ 	}
+ 
++	/* Ensure we can safely access protocol field and LCP code */
++	if (!pskb_may_pull(skb, 3))
++		goto tx_error;
++
+ 	data = skb->data;
+ 	islcp = ((data[0] << 8) + data[1]) == PPP_LCP && 1 <= data[2] && data[2] <= 7;
+ 
+@@ -262,6 +264,8 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 	return 1;
+ 
+ tx_error:
++	ip_rt_put(rt);
++tx_drop:
+ 	kfree_skb(skb);
+ 	return 1;
+ }
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 8bc56186b2a3e9..17f07eb0ee52a6 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -933,7 +933,7 @@ static bool team_port_find(const struct team *team,
+  * Enable/disable port by adding to enabled port hashlist and setting
+  * port->index (Might be racy so reader could see incorrect ifindex when
+  * processing a flying packet, but that is not a problem). Write guarded
+- * by team->lock.
++ * by RTNL.
+  */
+ static void team_port_enable(struct team *team,
+ 			     struct team_port *port)
+@@ -1660,8 +1660,6 @@ static int team_init(struct net_device *dev)
+ 		goto err_options_register;
+ 	netif_carrier_off(dev);
+ 
+-	lockdep_register_key(&team->team_lock_key);
+-	__mutex_init(&team->lock, "team->team_lock_key", &team->team_lock_key);
+ 	netdev_lockdep_set_classes(dev);
+ 
+ 	return 0;
+@@ -1682,7 +1680,8 @@ static void team_uninit(struct net_device *dev)
+ 	struct team_port *port;
+ 	struct team_port *tmp;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry_safe(port, tmp, &team->port_list, list)
+ 		team_port_del(team, port->dev);
+ 
+@@ -1691,9 +1690,7 @@ static void team_uninit(struct net_device *dev)
+ 	team_mcast_rejoin_fini(team);
+ 	team_notify_peers_fini(team);
+ 	team_queue_override_fini(team);
+-	mutex_unlock(&team->lock);
+ 	netdev_change_features(dev);
+-	lockdep_unregister_key(&team->team_lock_key);
+ }
+ 
+ static void team_destructor(struct net_device *dev)
+@@ -1778,7 +1775,8 @@ static void team_change_rx_flags(struct net_device *dev, int change)
+ 	struct team_port *port;
+ 	int inc;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		if (change & IFF_PROMISC) {
+ 			inc = dev->flags & IFF_PROMISC ? 1 : -1;
+@@ -1789,7 +1787,6 @@ static void team_change_rx_flags(struct net_device *dev, int change)
+ 			dev_set_allmulti(port->dev, inc);
+ 		}
+ 	}
+-	mutex_unlock(&team->lock);
+ }
+ 
+ static void team_set_rx_mode(struct net_device *dev)
+@@ -1811,14 +1808,14 @@ static int team_set_mac_address(struct net_device *dev, void *p)
+ 	struct team *team = netdev_priv(dev);
+ 	struct team_port *port;
+ 
++	ASSERT_RTNL();
++
+ 	if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data))
+ 		return -EADDRNOTAVAIL;
+ 	dev_addr_set(dev, addr->sa_data);
+-	mutex_lock(&team->lock);
+ 	list_for_each_entry(port, &team->port_list, list)
+ 		if (team->ops.port_change_dev_addr)
+ 			team->ops.port_change_dev_addr(team, port);
+-	mutex_unlock(&team->lock);
+ 	return 0;
+ }
+ 
+@@ -1828,11 +1825,8 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
+ 	struct team_port *port;
+ 	int err;
+ 
+-	/*
+-	 * Alhough this is reader, it's guarded by team lock. It's not possible
+-	 * to traverse list in reverse under rcu_read_lock
+-	 */
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	team->port_mtu_change_allowed = true;
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		err = dev_set_mtu(port->dev, new_mtu);
+@@ -1843,7 +1837,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
+ 		}
+ 	}
+ 	team->port_mtu_change_allowed = false;
+-	mutex_unlock(&team->lock);
+ 
+ 	WRITE_ONCE(dev->mtu, new_mtu);
+ 
+@@ -1853,7 +1846,6 @@ static int team_change_mtu(struct net_device *dev, int new_mtu)
+ 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
+ 		dev_set_mtu(port->dev, dev->mtu);
+ 	team->port_mtu_change_allowed = false;
+-	mutex_unlock(&team->lock);
+ 
+ 	return err;
+ }
+@@ -1903,24 +1895,19 @@ static int team_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
+ 	struct team_port *port;
+ 	int err;
+ 
+-	/*
+-	 * Alhough this is reader, it's guarded by team lock. It's not possible
+-	 * to traverse list in reverse under rcu_read_lock
+-	 */
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		err = vlan_vid_add(port->dev, proto, vid);
+ 		if (err)
+ 			goto unwind;
+ 	}
+-	mutex_unlock(&team->lock);
+ 
+ 	return 0;
+ 
+ unwind:
+ 	list_for_each_entry_continue_reverse(port, &team->port_list, list)
+ 		vlan_vid_del(port->dev, proto, vid);
+-	mutex_unlock(&team->lock);
+ 
+ 	return err;
+ }
+@@ -1930,10 +1917,10 @@ static int team_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ 	struct team *team = netdev_priv(dev);
+ 	struct team_port *port;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list)
+ 		vlan_vid_del(port->dev, proto, vid);
+-	mutex_unlock(&team->lock);
+ 
+ 	return 0;
+ }
+@@ -1955,9 +1942,9 @@ static void team_netpoll_cleanup(struct net_device *dev)
+ {
+ 	struct team *team = netdev_priv(dev);
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	__team_netpoll_cleanup(team);
+-	mutex_unlock(&team->lock);
+ }
+ 
+ static int team_netpoll_setup(struct net_device *dev)
+@@ -1966,7 +1953,8 @@ static int team_netpoll_setup(struct net_device *dev)
+ 	struct team_port *port;
+ 	int err = 0;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	list_for_each_entry(port, &team->port_list, list) {
+ 		err = __team_port_enable_netpoll(port);
+ 		if (err) {
+@@ -1974,7 +1962,6 @@ static int team_netpoll_setup(struct net_device *dev)
+ 			break;
+ 		}
+ 	}
+-	mutex_unlock(&team->lock);
+ 	return err;
+ }
+ #endif
+@@ -1985,9 +1972,9 @@ static int team_add_slave(struct net_device *dev, struct net_device *port_dev,
+ 	struct team *team = netdev_priv(dev);
+ 	int err;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	err = team_port_add(team, port_dev, extack);
+-	mutex_unlock(&team->lock);
+ 
+ 	if (!err)
+ 		netdev_change_features(dev);
+@@ -2000,18 +1987,13 @@ static int team_del_slave(struct net_device *dev, struct net_device *port_dev)
+ 	struct team *team = netdev_priv(dev);
+ 	int err;
+ 
+-	mutex_lock(&team->lock);
++	ASSERT_RTNL();
++
+ 	err = team_port_del(team, port_dev);
+-	mutex_unlock(&team->lock);
+ 
+ 	if (err)
+ 		return err;
+ 
+-	if (netif_is_team_master(port_dev)) {
+-		lockdep_unregister_key(&team->team_lock_key);
+-		lockdep_register_key(&team->team_lock_key);
+-		lockdep_set_class(&team->lock, &team->team_lock_key);
+-	}
+ 	netdev_change_features(dev);
+ 
+ 	return err;
+@@ -2304,9 +2286,10 @@ int team_nl_noop_doit(struct sk_buff *skb, struct genl_info *info)
+ static struct team *team_nl_team_get(struct genl_info *info)
+ {
+ 	struct net *net = genl_info_net(info);
+-	int ifindex;
+ 	struct net_device *dev;
+-	struct team *team;
++	int ifindex;
++
++	ASSERT_RTNL();
+ 
+ 	if (!info->attrs[TEAM_ATTR_TEAM_IFINDEX])
+ 		return NULL;
+@@ -2318,14 +2301,11 @@ static struct team *team_nl_team_get(struct genl_info *info)
+ 		return NULL;
+ 	}
+ 
+-	team = netdev_priv(dev);
+-	mutex_lock(&team->lock);
+-	return team;
++	return netdev_priv(dev);
+ }
+ 
+ static void team_nl_team_put(struct team *team)
+ {
+-	mutex_unlock(&team->lock);
+ 	dev_put(team->dev);
+ }
+ 
+@@ -2515,9 +2495,13 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 	int err;
+ 	LIST_HEAD(sel_opt_inst_list);
+ 
++	rtnl_lock();
++
+ 	team = team_nl_team_get(info);
+-	if (!team)
+-		return -EINVAL;
++	if (!team) {
++		err = -EINVAL;
++		goto rtnl_unlock;
++	}
+ 
+ 	list_for_each_entry(opt_inst, &team->option_inst_list, list)
+ 		list_add_tail(&opt_inst->tmp_list, &sel_opt_inst_list);
+@@ -2527,6 +2511,9 @@ int team_nl_options_get_doit(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	team_nl_team_put(team);
+ 
++rtnl_unlock:
++	rtnl_unlock();
++
+ 	return err;
+ }
+ 
+@@ -2805,15 +2792,22 @@ int team_nl_port_list_get_doit(struct sk_buff *skb,
+ 	struct team *team;
+ 	int err;
+ 
++	rtnl_lock();
++
+ 	team = team_nl_team_get(info);
+-	if (!team)
+-		return -EINVAL;
++	if (!team) {
++		err = -EINVAL;
++		goto rtnl_unlock;
++	}
+ 
+ 	err = team_nl_send_port_list_get(team, info->snd_portid, info->snd_seq,
+ 					 NLM_F_ACK, team_nl_send_unicast, NULL);
+ 
+ 	team_nl_team_put(team);
+ 
++rtnl_unlock:
++	rtnl_unlock();
++
+ 	return err;
+ }
+ 
+@@ -2961,11 +2955,9 @@ static void __team_port_change_port_removed(struct team_port *port)
+ 
+ static void team_port_change_check(struct team_port *port, bool linkup)
+ {
+-	struct team *team = port->team;
++	ASSERT_RTNL();
+ 
+-	mutex_lock(&team->lock);
+ 	__team_port_change_check(port, linkup);
+-	mutex_unlock(&team->lock);
+ }
+ 
+ 
+diff --git a/drivers/net/team/team_mode_activebackup.c b/drivers/net/team/team_mode_activebackup.c
+index e0f599e2a51dd6..1c3336c7a1b26e 100644
+--- a/drivers/net/team/team_mode_activebackup.c
++++ b/drivers/net/team/team_mode_activebackup.c
+@@ -67,8 +67,7 @@ static void ab_active_port_get(struct team *team, struct team_gsetter_ctx *ctx)
+ {
+ 	struct team_port *active_port;
+ 
+-	active_port = rcu_dereference_protected(ab_priv(team)->active_port,
+-						lockdep_is_held(&team->lock));
++	active_port = rtnl_dereference(ab_priv(team)->active_port);
+ 	if (active_port)
+ 		ctx->data.u32_val = active_port->dev->ifindex;
+ 	else
+diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
+index 00f8989c29c0ff..b14538bde2f824 100644
+--- a/drivers/net/team/team_mode_loadbalance.c
++++ b/drivers/net/team/team_mode_loadbalance.c
+@@ -301,8 +301,7 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx)
+ 	if (lb_priv->ex->orig_fprog) {
+ 		/* Clear old filter data */
+ 		__fprog_destroy(lb_priv->ex->orig_fprog);
+-		orig_fp = rcu_dereference_protected(lb_priv->fp,
+-						lockdep_is_held(&team->lock));
++		orig_fp = rtnl_dereference(lb_priv->fp);
+ 	}
+ 
+ 	rcu_assign_pointer(lb_priv->fp, fp);
+@@ -324,8 +323,7 @@ static void lb_bpf_func_free(struct team *team)
+ 		return;
+ 
+ 	__fprog_destroy(lb_priv->ex->orig_fprog);
+-	fp = rcu_dereference_protected(lb_priv->fp,
+-				       lockdep_is_held(&team->lock));
++	fp = rtnl_dereference(lb_priv->fp);
+ 	bpf_prog_destroy(fp);
+ }
+ 
+@@ -335,8 +333,7 @@ static void lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx)
+ 	lb_select_tx_port_func_t *func;
+ 	char *name;
+ 
+-	func = rcu_dereference_protected(lb_priv->select_tx_port_func,
+-					 lockdep_is_held(&team->lock));
++	func = rtnl_dereference(lb_priv->select_tx_port_func);
+ 	name = lb_select_tx_port_get_name(func);
+ 	BUG_ON(!name);
+ 	ctx->data.str_val = name;
+@@ -478,7 +475,7 @@ static void lb_stats_refresh(struct work_struct *work)
+ 	team = lb_priv_ex->team;
+ 	lb_priv = get_lb_priv(team);
+ 
+-	if (!mutex_trylock(&team->lock)) {
++	if (!rtnl_trylock()) {
+ 		schedule_delayed_work(&lb_priv_ex->stats.refresh_dw, 0);
+ 		return;
+ 	}
+@@ -515,7 +512,7 @@ static void lb_stats_refresh(struct work_struct *work)
+ 	schedule_delayed_work(&lb_priv_ex->stats.refresh_dw,
+ 			      (lb_priv_ex->stats.refresh_interval * HZ) / 10);
+ 
+-	mutex_unlock(&team->lock);
++	rtnl_unlock();
+ }
+ 
+ static void lb_stats_refresh_interval_get(struct team *team,
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index c04e715a4c2ade..d1045249941368 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1113,6 +1113,9 @@ static void __handle_link_change(struct usbnet *dev)
+ 	if (!test_bit(EVENT_DEV_OPEN, &dev->flags))
+ 		return;
+ 
++	if (test_and_clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags))
++		netif_carrier_on(dev->net);
++
+ 	if (!netif_carrier_ok(dev->net)) {
+ 		/* kill URBs for reading packets to save bus bandwidth */
+ 		unlink_urbs(dev, &dev->rxq);
+@@ -2009,10 +2012,12 @@ EXPORT_SYMBOL(usbnet_manage_power);
+ void usbnet_link_change(struct usbnet *dev, bool link, bool need_reset)
+ {
+ 	/* update link after link is reseted */
+-	if (link && !need_reset)
+-		netif_carrier_on(dev->net);
+-	else
++	if (link && !need_reset) {
++		set_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
++	} else {
++		clear_bit(EVENT_LINK_CARRIER_ON, &dev->flags);
+ 		netif_carrier_off(dev->net);
++	}
+ 
+ 	if (need_reset && link)
+ 		usbnet_defer_kevent(dev, EVENT_LINK_RESET);
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 9a4beea6ee0c29..3ccd649913b507 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1302,6 +1302,8 @@ static void vrf_ip6_input_dst(struct sk_buff *skb, struct net_device *vrf_dev,
+ 	struct net *net = dev_net(vrf_dev);
+ 	struct rt6_info *rt6;
+ 
++	skb_dst_drop(skb);
++
+ 	rt6 = vrf_ip6_route_lookup(net, vrf_dev, &fl6, ifindex, skb,
+ 				   RT6_LOOKUP_F_HAS_SADDR | RT6_LOOKUP_F_IFACE);
+ 	if (unlikely(!rt6))
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index 8cb1505a5a0c3f..cab11a35f9115d 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -1346,6 +1346,10 @@ EXPORT_SYMBOL(ath11k_hal_srng_init);
+ void ath11k_hal_srng_deinit(struct ath11k_base *ab)
+ {
+ 	struct ath11k_hal *hal = &ab->hal;
++	int i;
++
++	for (i = 0; i < HAL_SRNG_RING_ID_MAX; i++)
++		ab->hal.srng_list[i].initialized = 0;
+ 
+ 	ath11k_hal_unregister_srng_key(ab);
+ 	ath11k_hal_free_cont_rdp(ab);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 13301ca317a532..977f370fd6de42 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -8740,9 +8740,9 @@ ath11k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ 				    arvif->vdev_id, ret);
+ 			return ret;
+ 		}
+-		ieee80211_iterate_stations_atomic(ar->hw,
+-						  ath11k_mac_disable_peer_fixed_rate,
+-						  arvif);
++		ieee80211_iterate_stations_mtx(ar->hw,
++					       ath11k_mac_disable_peer_fixed_rate,
++					       arvif);
+ 	} else if (ath11k_mac_bitrate_mask_get_single_nss(ar, arvif, band, mask,
+ 							  &single_nss)) {
+ 		rate = WMI_FIXED_RATE_NONE;
+@@ -8809,9 +8809,9 @@ ath11k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ 		}
+ 
+ 		mutex_lock(&ar->conf_mutex);
+-		ieee80211_iterate_stations_atomic(ar->hw,
+-						  ath11k_mac_disable_peer_fixed_rate,
+-						  arvif);
++		ieee80211_iterate_stations_mtx(ar->hw,
++					       ath11k_mac_disable_peer_fixed_rate,
++					       arvif);
+ 
+ 		arvif->bitrate_mask = *mask;
+ 		ieee80211_iterate_stations_atomic(ar->hw,
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index 89ae80934b3048..cd58ab9c232239 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -1409,6 +1409,7 @@ void ath12k_core_halt(struct ath12k *ar)
+ 	ath12k_mac_peer_cleanup_all(ar);
+ 	cancel_delayed_work_sync(&ar->scan.timeout);
+ 	cancel_work_sync(&ar->regd_update_work);
++	cancel_work_sync(&ar->regd_channel_update_work);
+ 	cancel_work_sync(&ab->rfkill_work);
+ 	cancel_work_sync(&ab->update_11d_work);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 7bcd9c70309fdb..4bd286296da794 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -345,6 +345,10 @@ struct ath12k_link_vif {
+ 	bool is_sta_assoc_link;
+ 
+ 	struct ath12k_reg_tpc_power_info reg_tpc_info;
++
++	bool group_key_valid;
++	struct wmi_vdev_install_key_arg group_key;
++	bool pairwise_key_done;
+ };
+ 
+ struct ath12k_vif {
+@@ -719,7 +723,7 @@ struct ath12k {
+ 
+ 	/* protects the radio specific data like debug stats, ppdu_stats_info stats,
+ 	 * vdev_stop_status info, scan data, ath12k_sta info, ath12k_link_vif info,
+-	 * channel context data, survey info, test mode data.
++	 * channel context data, survey info, test mode data, regd_channel_update_queue.
+ 	 */
+ 	spinlock_t data_lock;
+ 
+@@ -778,6 +782,8 @@ struct ath12k {
+ 	struct completion bss_survey_done;
+ 
+ 	struct work_struct regd_update_work;
++	struct work_struct regd_channel_update_work;
++	struct list_head regd_channel_update_queue;
+ 
+ 	struct wiphy_work wmi_mgmt_tx_work;
+ 	struct sk_buff_head wmi_mgmt_tx_queue;
+diff --git a/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.h b/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.h
+index c2a02cf8a38b73..db9532c39cbf13 100644
+--- a/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.h
++++ b/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.h
+@@ -470,7 +470,7 @@ struct ath12k_htt_tx_pdev_rate_stats_tlv {
+ 			   [ATH12K_HTT_TX_PDEV_STATS_NUM_EXTRA_MCS_COUNTERS];
+ 	__le32 tx_mcs_ext_2[ATH12K_HTT_TX_PDEV_STATS_NUM_EXTRA2_MCS_COUNTERS];
+ 	__le32 tx_bw_320mhz;
+-};
++} __packed;
+ 
+ #define ATH12K_HTT_RX_PDEV_STATS_NUM_LEGACY_CCK_STATS		4
+ #define ATH12K_HTT_RX_PDEV_STATS_NUM_LEGACY_OFDM_STATS		8
+@@ -550,7 +550,7 @@ struct ath12k_htt_rx_pdev_rate_stats_tlv {
+ 	__le32 rx_ulofdma_non_data_nusers[ATH12K_HTT_RX_PDEV_MAX_OFDMA_NUM_USER];
+ 	__le32 rx_ulofdma_data_nusers[ATH12K_HTT_RX_PDEV_MAX_OFDMA_NUM_USER];
+ 	__le32 rx_mcs_ext[ATH12K_HTT_RX_PDEV_STATS_NUM_EXTRA_MCS_COUNTERS];
+-};
++} __packed;
+ 
+ #define ATH12K_HTT_RX_PDEV_STATS_NUM_BW_EXT_COUNTERS		4
+ #define ATH12K_HTT_RX_PDEV_STATS_NUM_MCS_COUNTERS_EXT		14
+@@ -580,7 +580,7 @@ struct ath12k_htt_rx_pdev_rate_ext_stats_tlv {
+ 	__le32 rx_gi_ext_2[ATH12K_HTT_RX_PDEV_STATS_NUM_GI_COUNTERS]
+ 		[ATH12K_HTT_RX_PDEV_STATS_NUM_EXTRA2_MCS_COUNTERS];
+ 	__le32 rx_su_punctured_mode[ATH12K_HTT_RX_PDEV_STATS_NUM_PUNCTURED_MODE_COUNTERS];
+-};
++} __packed;
+ 
+ #define ATH12K_HTT_TX_PDEV_STATS_SCHED_PER_TXQ_MAC_ID	GENMASK(7, 0)
+ #define ATH12K_HTT_TX_PDEV_STATS_SCHED_PER_TXQ_ID	GENMASK(15, 8)
+diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
+index a353333f83b68e..2f0718edabd206 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.h
++++ b/drivers/net/wireless/ath/ath12k/dp.h
+@@ -469,6 +469,7 @@ enum htt_h2t_msg_type {
+ };
+ 
+ #define HTT_VER_REQ_INFO_MSG_ID		GENMASK(7, 0)
++#define HTT_OPTION_TCL_METADATA_VER_V1	1
+ #define HTT_OPTION_TCL_METADATA_VER_V2	2
+ #define HTT_OPTION_TAG			GENMASK(7, 0)
+ #define HTT_OPTION_LEN			GENMASK(15, 8)
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 28cadc4167f787..91f4e3aff74c38 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -3761,7 +3761,6 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 	ath12k_hal_srng_access_begin(ab, srng);
+ 
+ 	while (likely(*budget)) {
+-		*budget -= 1;
+ 		mon_dst_desc = ath12k_hal_srng_dst_peek(ab, srng);
+ 		if (unlikely(!mon_dst_desc))
+ 			break;
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index b6816b6c2c0400..7470731eb8300c 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -13,10 +13,9 @@
+ #include "mac.h"
+ 
+ static enum hal_tcl_encap_type
+-ath12k_dp_tx_get_encap_type(struct ath12k_link_vif *arvif, struct sk_buff *skb)
++ath12k_dp_tx_get_encap_type(struct ath12k_base *ab, struct sk_buff *skb)
+ {
+ 	struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+-	struct ath12k_base *ab = arvif->ar->ab;
+ 
+ 	if (test_bit(ATH12K_FLAG_RAW_MODE, &ab->dev_flags))
+ 		return HAL_TCL_ENCAP_TYPE_RAW;
+@@ -305,7 +304,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 			u32_encode_bits(mcbc_gsn, HTT_TCL_META_DATA_GLOBAL_SEQ_NUM);
+ 	}
+ 
+-	ti.encap_type = ath12k_dp_tx_get_encap_type(arvif, skb);
++	ti.encap_type = ath12k_dp_tx_get_encap_type(ab, skb);
+ 	ti.addr_search_flags = arvif->hal_addr_search_flags;
+ 	ti.search_type = arvif->search_type;
+ 	ti.type = HAL_TCL_DESC_TYPE_BUFFER;
+@@ -1183,6 +1182,7 @@ int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab)
+ 	struct sk_buff *skb;
+ 	struct htt_ver_req_cmd *cmd;
+ 	int len = sizeof(*cmd);
++	u32 metadata_version;
+ 	int ret;
+ 
+ 	init_completion(&dp->htt_tgt_version_received);
+@@ -1195,12 +1195,14 @@ int ath12k_dp_tx_htt_h2t_ver_req_msg(struct ath12k_base *ab)
+ 	cmd = (struct htt_ver_req_cmd *)skb->data;
+ 	cmd->ver_reg_info = le32_encode_bits(HTT_H2T_MSG_TYPE_VERSION_REQ,
+ 					     HTT_OPTION_TAG);
++	metadata_version = ath12k_ftm_mode ? HTT_OPTION_TCL_METADATA_VER_V1 :
++			   HTT_OPTION_TCL_METADATA_VER_V2;
+ 
+ 	cmd->tcl_metadata_version = le32_encode_bits(HTT_TAG_TCL_METADATA_VERSION,
+ 						     HTT_OPTION_TAG) |
+ 				    le32_encode_bits(HTT_TCL_METADATA_VER_SZ,
+ 						     HTT_OPTION_LEN) |
+-				    le32_encode_bits(HTT_OPTION_TCL_METADATA_VER_V2,
++				    le32_encode_bits(metadata_version,
+ 						     HTT_OPTION_VALUE);
+ 
+ 	ret = ath12k_htc_send(&ab->htc, dp->eid, skb);
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 59ec422992d304..23469d0cc9b34e 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -693,6 +693,9 @@ static void ath12k_get_arvif_iter(void *data, u8 *mac,
+ 		if (WARN_ON(!arvif))
+ 			continue;
+ 
++		if (!arvif->is_created)
++			continue;
++
+ 		if (arvif->vdev_id == arvif_iter->vdev_id &&
+ 		    arvif->ar == arvif_iter->ar) {
+ 			arvif_iter->arvif = arvif;
+@@ -1755,7 +1758,7 @@ static void ath12k_mac_handle_beacon_iter(void *data, u8 *mac,
+ 	struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ 	struct ath12k_link_vif *arvif = &ahvif->deflink;
+ 
+-	if (vif->type != NL80211_IFTYPE_STATION)
++	if (vif->type != NL80211_IFTYPE_STATION || !arvif->is_created)
+ 		return;
+ 
+ 	if (!ether_addr_equal(mgmt->bssid, vif->bss_conf.bssid))
+@@ -1778,16 +1781,16 @@ static void ath12k_mac_handle_beacon_miss_iter(void *data, u8 *mac,
+ 	u32 *vdev_id = data;
+ 	struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ 	struct ath12k_link_vif *arvif = &ahvif->deflink;
+-	struct ath12k *ar = arvif->ar;
+-	struct ieee80211_hw *hw = ath12k_ar_to_hw(ar);
++	struct ieee80211_hw *hw;
+ 
+-	if (arvif->vdev_id != *vdev_id)
++	if (!arvif->is_created || arvif->vdev_id != *vdev_id)
+ 		return;
+ 
+ 	if (!arvif->is_up)
+ 		return;
+ 
+ 	ieee80211_beacon_loss(vif);
++	hw = ath12k_ar_to_hw(arvif->ar);
+ 
+ 	/* Firmware doesn't report beacon loss events repeatedly. If AP probe
+ 	 * (done by mac80211) succeeds but beacons do not resume then it
+@@ -3232,6 +3235,7 @@ static void ath12k_bss_assoc(struct ath12k *ar,
+ 
+ 	rcu_read_unlock();
+ 
++	peer_arg->is_assoc = true;
+ 	ret = ath12k_wmi_send_peer_assoc_cmd(ar, peer_arg);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to run peer assoc for %pM vdev %i: %d\n",
+@@ -4719,14 +4723,13 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		.key_len = key->keylen,
+ 		.key_data = key->key,
+ 		.key_flags = flags,
++		.ieee80211_key_cipher = key->cipher,
+ 		.macaddr = macaddr,
+ 	};
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+ 
+ 	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+ 
+-	reinit_completion(&ar->install_key_done);
+-
+ 	if (test_bit(ATH12K_FLAG_HW_CRYPTO_DISABLED, &ar->ab->dev_flags))
+ 		return 0;
+ 
+@@ -4735,7 +4738,7 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		/* arg.key_cipher = WMI_CIPHER_NONE; */
+ 		arg.key_len = 0;
+ 		arg.key_data = NULL;
+-		goto install;
++		goto check_order;
+ 	}
+ 
+ 	switch (key->cipher) {
+@@ -4763,19 +4766,82 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV |
+ 			      IEEE80211_KEY_FLAG_RESERVE_TAILROOM;
+ 
++check_order:
++	if (ahvif->vdev_type == WMI_VDEV_TYPE_STA &&
++	    arg.key_flags == WMI_KEY_GROUP) {
++		if (cmd == SET_KEY) {
++			if (arvif->pairwise_key_done) {
++				ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
++					   "vdev %u pairwise key done, go install group key\n",
++					   arg.vdev_id);
++				goto install;
++			} else {
++				/* WCN7850 firmware requires pairwise key to be installed
++				 * before group key. In case group key comes first, cache
++				 * it and return. Will revisit it once pairwise key gets
++				 * installed.
++				 */
++				arvif->group_key = arg;
++				arvif->group_key_valid = true;
++				ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
++					   "vdev %u group key before pairwise key, cache and skip\n",
++					   arg.vdev_id);
++
++				ret = 0;
++				goto out;
++			}
++		} else {
++			arvif->group_key_valid = false;
++		}
++	}
++
+ install:
+-	ret = ath12k_wmi_vdev_install_key(arvif->ar, &arg);
++	reinit_completion(&ar->install_key_done);
+ 
++	ret = ath12k_wmi_vdev_install_key(arvif->ar, &arg);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (!wait_for_completion_timeout(&ar->install_key_done, 1 * HZ))
+ 		return -ETIMEDOUT;
+ 
+-	if (ether_addr_equal(macaddr, arvif->bssid))
+-		ahvif->key_cipher = key->cipher;
++	if (ether_addr_equal(arg.macaddr, arvif->bssid))
++		ahvif->key_cipher = arg.ieee80211_key_cipher;
++
++	if (ar->install_key_status) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (ahvif->vdev_type == WMI_VDEV_TYPE_STA &&
++	    arg.key_flags == WMI_KEY_PAIRWISE) {
++		if (cmd == SET_KEY) {
++			arvif->pairwise_key_done = true;
++			if (arvif->group_key_valid) {
++				/* Install cached GTK */
++				arvif->group_key_valid = false;
++				arg = arvif->group_key;
++				ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
++					   "vdev %u pairwise key done, group key ready, go install\n",
++					   arg.vdev_id);
++				goto install;
++			}
++		} else {
++			arvif->pairwise_key_done = false;
++		}
++	}
++
++out:
++	if (ret) {
++		/* On failure, userspace may not issue DISABLE_KEY but
++		 * instead trigger re-connection directly, so manually
++		 * reset the state here.
++		 */
++		arvif->group_key_valid = false;
++		arvif->pairwise_key_done = false;
++	}
+ 
+-	return ar->install_key_status ? -EINVAL : 0;
++	return ret;
+ }
+ 
+ static int ath12k_clear_peer_keys(struct ath12k_link_vif *arvif,
+@@ -5162,6 +5228,8 @@ static int ath12k_mac_station_assoc(struct ath12k *ar,
+ 			    "invalid peer NSS %d\n", peer_arg->peer_nss);
+ 		return -EINVAL;
+ 	}
++
++	peer_arg->is_assoc = true;
+ 	ret = ath12k_wmi_send_peer_assoc_cmd(ar, peer_arg);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+@@ -5408,6 +5476,7 @@ static void ath12k_sta_rc_update_wk(struct wiphy *wiphy, struct wiphy_work *wk)
+ 			ath12k_peer_assoc_prepare(ar, arvif, arsta,
+ 						  peer_arg, true);
+ 
++			peer_arg->is_assoc = false;
+ 			err = ath12k_wmi_send_peer_assoc_cmd(ar, peer_arg);
+ 			if (err)
+ 				ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
+@@ -8160,14 +8229,9 @@ static int ath12k_mac_start(struct ath12k *ar)
+ 
+ static void ath12k_drain_tx(struct ath12k_hw *ah)
+ {
+-	struct ath12k *ar = ah->radio;
++	struct ath12k *ar;
+ 	int i;
+ 
+-	if (ath12k_ftm_mode) {
+-		ath12k_err(ar->ab, "fail to start mac operations in ftm mode\n");
+-		return;
+-	}
+-
+ 	lockdep_assert_wiphy(ah->hw->wiphy);
+ 
+ 	for_each_ar(ah, ar, i)
+@@ -8180,6 +8244,9 @@ static int ath12k_mac_op_start(struct ieee80211_hw *hw)
+ 	struct ath12k *ar;
+ 	int ret, i;
+ 
++	if (ath12k_ftm_mode)
++		return -EPERM;
++
+ 	lockdep_assert_wiphy(hw->wiphy);
+ 
+ 	ath12k_drain_tx(ah);
+@@ -8286,6 +8353,7 @@ static void ath12k_mac_stop(struct ath12k *ar)
+ {
+ 	struct ath12k_hw *ah = ar->ah;
+ 	struct htt_ppdu_stats_info *ppdu_stats, *tmp;
++	struct ath12k_wmi_scan_chan_list_arg *arg;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&ah->hw_mutex);
+@@ -8300,6 +8368,7 @@ static void ath12k_mac_stop(struct ath12k *ar)
+ 
+ 	cancel_delayed_work_sync(&ar->scan.timeout);
+ 	wiphy_work_cancel(ath12k_ar_to_hw(ar)->wiphy, &ar->scan.vdev_clean_wk);
++	cancel_work_sync(&ar->regd_channel_update_work);
+ 	cancel_work_sync(&ar->regd_update_work);
+ 	cancel_work_sync(&ar->ab->rfkill_work);
+ 	cancel_work_sync(&ar->ab->update_11d_work);
+@@ -8307,10 +8376,18 @@ static void ath12k_mac_stop(struct ath12k *ar)
+ 	complete(&ar->completed_11d_scan);
+ 
+ 	spin_lock_bh(&ar->data_lock);
++
+ 	list_for_each_entry_safe(ppdu_stats, tmp, &ar->ppdu_stats_info, list) {
+ 		list_del(&ppdu_stats->list);
+ 		kfree(ppdu_stats);
+ 	}
++
++	while ((arg = list_first_entry_or_null(&ar->regd_channel_update_queue,
++					       struct ath12k_wmi_scan_chan_list_arg,
++					       list))) {
++		list_del(&arg->list);
++		kfree(arg);
++	}
+ 	spin_unlock_bh(&ar->data_lock);
+ 
+ 	rcu_assign_pointer(ar->ab->pdevs_active[ar->pdev_idx], NULL);
+@@ -9818,7 +9895,7 @@ ath12k_mac_change_chanctx_cnt_iter(void *data, u8 *mac,
+ 		if (WARN_ON(!arvif))
+ 			continue;
+ 
+-		if (arvif->ar != arg->ar)
++		if (!arvif->is_created || arvif->ar != arg->ar)
+ 			continue;
+ 
+ 		link_conf = wiphy_dereference(ahvif->ah->hw->wiphy,
+@@ -9853,7 +9930,7 @@ ath12k_mac_change_chanctx_fill_iter(void *data, u8 *mac,
+ 		if (WARN_ON(!arvif))
+ 			continue;
+ 
+-		if (arvif->ar != arg->ar)
++		if (!arvif->is_created || arvif->ar != arg->ar)
+ 			continue;
+ 
+ 		link_conf = wiphy_dereference(ahvif->ah->hw->wiphy,
+@@ -12204,6 +12281,7 @@ static void ath12k_mac_hw_unregister(struct ath12k_hw *ah)
+ 	int i;
+ 
+ 	for_each_ar(ah, ar, i) {
++		cancel_work_sync(&ar->regd_channel_update_work);
+ 		cancel_work_sync(&ar->regd_update_work);
+ 		ath12k_debugfs_unregister(ar);
+ 		ath12k_fw_stats_reset(ar);
+@@ -12564,6 +12642,8 @@ static void ath12k_mac_setup(struct ath12k *ar)
+ 
+ 	INIT_DELAYED_WORK(&ar->scan.timeout, ath12k_scan_timeout_work);
+ 	wiphy_work_init(&ar->scan.vdev_clean_wk, ath12k_scan_vdev_clean_work);
++	INIT_WORK(&ar->regd_channel_update_work, ath12k_regd_update_chan_list_work);
++	INIT_LIST_HEAD(&ar->regd_channel_update_queue);
+ 	INIT_WORK(&ar->regd_update_work, ath12k_regd_update_work);
+ 
+ 	wiphy_work_init(&ar->wmi_mgmt_tx_work, ath12k_mgmt_over_wmi_tx_work);
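
Context for the ath12k key-install hunks above: WCN7850 firmware requires the pairwise key to be installed before the group key, so a group key that arrives first is cached and replayed once the pairwise install succeeds. A minimal sketch of the cache-and-replay idea (types and function names are illustrative, not the driver's):

	#include <linux/errno.h>
	#include <linux/types.h>

	struct key_arg {
		bool is_group;		/* plus key material in the real driver */
	};

	struct key_state {
		bool pairwise_done;
		bool group_cached;
		struct key_arg group;	/* deferred group key */
	};

	/* stand-in for the firmware install call; hypothetical */
	static int hw_install_key(const struct key_arg *arg)
	{
		return 0;
	}

	static int install_key_ordered(struct key_state *ks, struct key_arg *arg)
	{
		if (arg->is_group && !ks->pairwise_done) {
			ks->group = *arg;	/* firmware wants the PTK first */
			ks->group_cached = true;
			return 0;		/* report success; install later */
		}

		if (hw_install_key(arg))
			return -EIO;

		if (!arg->is_group) {
			ks->pairwise_done = true;
			if (ks->group_cached) {
				ks->group_cached = false;
				return hw_install_key(&ks->group); /* replay GTK */
			}
		}
		return 0;
	}
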
+diff --git a/drivers/net/wireless/ath/ath12k/p2p.c b/drivers/net/wireless/ath/ath12k/p2p.c
+index 84cccf7d91e72b..59589748f1a8c2 100644
+--- a/drivers/net/wireless/ath/ath12k/p2p.c
++++ b/drivers/net/wireless/ath/ath12k/p2p.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #include <net/mac80211.h>
+@@ -124,7 +125,7 @@ static void ath12k_p2p_noa_update_vdev_iter(void *data, u8 *mac,
+ 
+ 	WARN_ON(!rcu_read_lock_any_held());
+ 	arvif = &ahvif->deflink;
+-	if (arvif->ar != arg->ar || arvif->vdev_id != arg->vdev_id)
++	if (!arvif->is_created || arvif->ar != arg->ar || arvif->vdev_id != arg->vdev_id)
+ 		return;
+ 
+ 	ath12k_p2p_noa_update(arvif, arg->noa);
+diff --git a/drivers/net/wireless/ath/ath12k/reg.c b/drivers/net/wireless/ath/ath12k/reg.c
+index 2598b39d5d7ee9..743552abf1498b 100644
+--- a/drivers/net/wireless/ath/ath12k/reg.c
++++ b/drivers/net/wireless/ath/ath12k/reg.c
+@@ -137,32 +137,7 @@ int ath12k_reg_update_chan_list(struct ath12k *ar, bool wait)
+ 	struct ath12k_wmi_channel_arg *ch;
+ 	enum nl80211_band band;
+ 	int num_channels = 0;
+-	int i, ret, left;
+-
+-	if (wait && ar->state_11d == ATH12K_11D_RUNNING) {
+-		left = wait_for_completion_timeout(&ar->completed_11d_scan,
+-						   ATH12K_SCAN_TIMEOUT_HZ);
+-		if (!left) {
+-			ath12k_dbg(ar->ab, ATH12K_DBG_REG,
+-				   "failed to receive 11d scan complete: timed out\n");
+-			ar->state_11d = ATH12K_11D_IDLE;
+-		}
+-		ath12k_dbg(ar->ab, ATH12K_DBG_REG,
+-			   "reg 11d scan wait left time %d\n", left);
+-	}
+-
+-	if (wait &&
+-	    (ar->scan.state == ATH12K_SCAN_STARTING ||
+-	    ar->scan.state == ATH12K_SCAN_RUNNING)) {
+-		left = wait_for_completion_timeout(&ar->scan.completed,
+-						   ATH12K_SCAN_TIMEOUT_HZ);
+-		if (!left)
+-			ath12k_dbg(ar->ab, ATH12K_DBG_REG,
+-				   "failed to receive hw scan complete: timed out\n");
+-
+-		ath12k_dbg(ar->ab, ATH12K_DBG_REG,
+-			   "reg hw scan wait left time %d\n", left);
+-	}
++	int i, ret = 0;
+ 
+ 	if (ar->ah->state == ATH12K_HW_STATE_RESTARTING)
+ 		return 0;
+@@ -244,6 +219,16 @@ int ath12k_reg_update_chan_list(struct ath12k *ar, bool wait)
+ 		}
+ 	}
+ 
++	if (wait) {
++		spin_lock_bh(&ar->data_lock);
++		list_add_tail(&arg->list, &ar->regd_channel_update_queue);
++		spin_unlock_bh(&ar->data_lock);
++
++		queue_work(ar->ab->workqueue, &ar->regd_channel_update_work);
++
++		return 0;
++	}
++
+ 	ret = ath12k_wmi_send_scan_chan_list_cmd(ar, arg);
+ 	kfree(arg);
+ 
+@@ -413,6 +398,29 @@ ath12k_map_fw_dfs_region(enum ath12k_dfs_region dfs_region)
+ 	}
+ }
+ 
++static u32 ath12k_get_bw_reg_flags(u16 max_bw)
++{
++	switch (max_bw) {
++	case 20:
++		return NL80211_RRF_NO_HT40 |
++			NL80211_RRF_NO_80MHZ |
++			NL80211_RRF_NO_160MHZ |
++			NL80211_RRF_NO_320MHZ;
++	case 40:
++		return NL80211_RRF_NO_80MHZ |
++			NL80211_RRF_NO_160MHZ |
++			NL80211_RRF_NO_320MHZ;
++	case 80:
++		return NL80211_RRF_NO_160MHZ |
++			NL80211_RRF_NO_320MHZ;
++	case 160:
++		return NL80211_RRF_NO_320MHZ;
++	case 320:
++	default:
++		return 0;
++	}
++}
++
+ static u32 ath12k_map_fw_reg_flags(u16 reg_flags)
+ {
+ 	u32 flags = 0;
+@@ -691,7 +699,7 @@ ath12k_reg_build_regd(struct ath12k_base *ab,
+ 			reg_rule = reg_info->reg_rules_2g_ptr + i;
+ 			max_bw = min_t(u16, reg_rule->max_bw,
+ 				       reg_info->max_bw_2g);
+-			flags = 0;
++			flags = ath12k_get_bw_reg_flags(reg_info->max_bw_2g);
+ 			ath12k_reg_update_freq_range(&ab->reg_freq_2ghz, reg_rule);
+ 		} else if (reg_info->num_5g_reg_rules &&
+ 			   (j < reg_info->num_5g_reg_rules)) {
+@@ -705,13 +713,15 @@ ath12k_reg_build_regd(struct ath12k_base *ab,
+ 			 * BW correction if required and applies flags as
+ 			 * per other BW rule flags we pass from here
+ 			 */
+-			flags = NL80211_RRF_AUTO_BW;
++			flags = NL80211_RRF_AUTO_BW |
++				ath12k_get_bw_reg_flags(reg_info->max_bw_5g);
+ 			ath12k_reg_update_freq_range(&ab->reg_freq_5ghz, reg_rule);
+ 		} else if (reg_info->is_ext_reg_event && reg_6ghz_number &&
+ 			   (k < reg_6ghz_number)) {
+ 			reg_rule = reg_rule_6ghz + k++;
+ 			max_bw = min_t(u16, reg_rule->max_bw, max_bw_6ghz);
+-			flags = NL80211_RRF_AUTO_BW;
++			flags = NL80211_RRF_AUTO_BW |
++				ath12k_get_bw_reg_flags(max_bw_6ghz);
+ 			if (reg_rule->psd_flag)
+ 				flags |= NL80211_RRF_PSD;
+ 			ath12k_reg_update_freq_range(&ab->reg_freq_6ghz, reg_rule);
+@@ -764,6 +774,54 @@ ath12k_reg_build_regd(struct ath12k_base *ab,
+ 	return new_regd;
+ }
+ 
++void ath12k_regd_update_chan_list_work(struct work_struct *work)
++{
++	struct ath12k *ar = container_of(work, struct ath12k,
++					 regd_channel_update_work);
++	struct ath12k_wmi_scan_chan_list_arg *arg;
++	struct list_head local_update_list;
++	int left;
++
++	INIT_LIST_HEAD(&local_update_list);
++
++	spin_lock_bh(&ar->data_lock);
++	list_splice_tail_init(&ar->regd_channel_update_queue, &local_update_list);
++	spin_unlock_bh(&ar->data_lock);
++
++	while ((arg = list_first_entry_or_null(&local_update_list,
++					       struct ath12k_wmi_scan_chan_list_arg,
++					       list))) {
++		if (ar->state_11d != ATH12K_11D_IDLE) {
++			left = wait_for_completion_timeout(&ar->completed_11d_scan,
++							   ATH12K_SCAN_TIMEOUT_HZ);
++			if (!left) {
++				ath12k_dbg(ar->ab, ATH12K_DBG_REG,
++					   "failed to receive 11d scan complete: timed out\n");
++				ar->state_11d = ATH12K_11D_IDLE;
++			}
++
++			ath12k_dbg(ar->ab, ATH12K_DBG_REG,
++				   "reg 11d scan wait left time %d\n", left);
++		}
++
++		if ((ar->scan.state == ATH12K_SCAN_STARTING ||
++		     ar->scan.state == ATH12K_SCAN_RUNNING)) {
++			left = wait_for_completion_timeout(&ar->scan.completed,
++							   ATH12K_SCAN_TIMEOUT_HZ);
++			if (!left)
++				ath12k_dbg(ar->ab, ATH12K_DBG_REG,
++					   "failed to receive hw scan complete: timed out\n");
++
++			ath12k_dbg(ar->ab, ATH12K_DBG_REG,
++				   "reg hw scan wait left time %d\n", left);
++		}
++
++		ath12k_wmi_send_scan_chan_list_cmd(ar, arg);
++		list_del(&arg->list);
++		kfree(arg);
++	}
++}
++
+ void ath12k_regd_update_work(struct work_struct *work)
+ {
+ 	struct ath12k *ar = container_of(work, struct ath12k,
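
The new ath12k_regd_update_chan_list_work() above uses the common splice-under-lock idiom: the shared queue is moved onto a local list while holding the lock, and the potentially long 11d/scan waits then run lock-free. Roughly, assuming a radio context with a lock, a queue, and a work item:

	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	struct chan_update {			/* illustrative queue entry */
		struct list_head list;
	};

	struct radio {
		spinlock_t data_lock;
		struct list_head queue;		/* producers append here */
		struct work_struct work;
	};

	static void chan_update_worker(struct work_struct *work)
	{
		struct radio *r = container_of(work, struct radio, work);
		struct chan_update *u;
		LIST_HEAD(local);

		/* Move the whole queue while holding the lock, then process
		 * it lock-free, so producers never wait out the scan delays. */
		spin_lock_bh(&r->data_lock);
		list_splice_tail_init(&r->queue, &local);
		spin_unlock_bh(&r->data_lock);

		while ((u = list_first_entry_or_null(&local, struct chan_update,
						     list))) {
			/* ...wait for 11d/hw scan completion, send WMI cmd... */
			list_del(&u->list);
			kfree(u);
		}
	}
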
+diff --git a/drivers/net/wireless/ath/ath12k/reg.h b/drivers/net/wireless/ath/ath12k/reg.h
+index 8af8e9ba462e90..0aeba06182c50c 100644
+--- a/drivers/net/wireless/ath/ath12k/reg.h
++++ b/drivers/net/wireless/ath/ath12k/reg.h
+@@ -113,6 +113,7 @@ int ath12k_reg_handle_chan_list(struct ath12k_base *ab,
+ 				struct ath12k_reg_info *reg_info,
+ 				enum wmi_vdev_type vdev_type,
+ 				enum ieee80211_ap_reg_power power_type);
++void ath12k_regd_update_chan_list_work(struct work_struct *work);
+ enum wmi_reg_6g_ap_type
+ ath12k_reg_ap_pwr_convert(enum ieee80211_ap_reg_power power_type);
+ enum ath12k_reg_status ath12k_reg_validate_reg_info(struct ath12k_base *ab,
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 465f877fc0fb4b..745d017c5aa88c 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -2152,7 +2152,7 @@ static void ath12k_wmi_copy_peer_flags(struct wmi_peer_assoc_complete_cmd *cmd,
+ 		cmd->peer_flags |= cpu_to_le32(WMI_PEER_AUTH);
+ 	if (arg->need_ptk_4_way) {
+ 		cmd->peer_flags |= cpu_to_le32(WMI_PEER_NEED_PTK_4_WAY);
+-		if (!hw_crypto_disabled)
++		if (!hw_crypto_disabled && arg->is_assoc)
+ 			cmd->peer_flags &= cpu_to_le32(~WMI_PEER_AUTH);
+ 	}
+ 	if (arg->need_gtk_2_way)
+@@ -7491,7 +7491,7 @@ static int ath12k_wmi_tlv_services_parser(struct ath12k_base *ab,
+ 					  void *data)
+ {
+ 	const struct wmi_service_available_event *ev;
+-	u32 *wmi_ext2_service_bitmap;
++	__le32 *wmi_ext2_service_bitmap;
+ 	int i, j;
+ 	u16 expected_len;
+ 
+@@ -7523,12 +7523,12 @@ static int ath12k_wmi_tlv_services_parser(struct ath12k_base *ab,
+ 			   ev->wmi_service_segment_bitmap[3]);
+ 		break;
+ 	case WMI_TAG_ARRAY_UINT32:
+-		wmi_ext2_service_bitmap = (u32 *)ptr;
++		wmi_ext2_service_bitmap = (__le32 *)ptr;
+ 		for (i = 0, j = WMI_MAX_EXT_SERVICE;
+ 		     i < WMI_SERVICE_SEGMENT_BM_SIZE32 && j < WMI_MAX_EXT2_SERVICE;
+ 		     i++) {
+ 			do {
+-				if (wmi_ext2_service_bitmap[i] &
++				if (__le32_to_cpu(wmi_ext2_service_bitmap[i]) &
+ 				    BIT(j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32))
+ 					set_bit(j, ab->wmi_ab.svc_map);
+ 			} while (++j % WMI_AVAIL_SERVICE_BITS_IN_SIZE32);
+@@ -7536,8 +7536,10 @@ static int ath12k_wmi_tlv_services_parser(struct ath12k_base *ab,
+ 
+ 		ath12k_dbg(ab, ATH12K_DBG_WMI,
+ 			   "wmi_ext2_service_bitmap 0x%04x 0x%04x 0x%04x 0x%04x",
+-			   wmi_ext2_service_bitmap[0], wmi_ext2_service_bitmap[1],
+-			   wmi_ext2_service_bitmap[2], wmi_ext2_service_bitmap[3]);
++			   __le32_to_cpu(wmi_ext2_service_bitmap[0]),
++			   __le32_to_cpu(wmi_ext2_service_bitmap[1]),
++			   __le32_to_cpu(wmi_ext2_service_bitmap[2]),
++			   __le32_to_cpu(wmi_ext2_service_bitmap[3]));
+ 		break;
+ 	}
+ 	return 0;
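
The wmi.c hunks are an endianness fix: the firmware service bitmap is little-endian on the wire, so each word must pass through __le32_to_cpu() before bit tests; on a big-endian host, testing BIT(j) against the raw word picks the wrong bit. In miniature:

	#include <linux/bits.h>
	#include <linux/types.h>
	#include <asm/byteorder.h>

	/* Test one bit in a firmware-provided little-endian bitmap. */
	static bool svc_bit_set(const __le32 *bm, unsigned int bit)
	{
		/* Convert each word to CPU order before testing: on a
		 * big-endian host the raw word has its bytes swapped. */
		return __le32_to_cpu(bm[bit / 32]) & BIT(bit % 32);
	}
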
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index c640ffa180c88b..8627154f1680fa 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3760,6 +3760,7 @@ struct wmi_vdev_install_key_arg {
+ 	u32 key_idx;
+ 	u32 key_flags;
+ 	u32 key_cipher;
++	u32 ieee80211_key_cipher;
+ 	u32 key_len;
+ 	u32 key_txmic_len;
+ 	u32 key_rxmic_len;
+@@ -3948,6 +3949,7 @@ struct wmi_stop_scan_cmd {
+ } __packed;
+ 
+ struct ath12k_wmi_scan_chan_list_arg {
++	struct list_head list;
+ 	u32 pdev_id;
+ 	u16 nallchans;
+ 	struct ath12k_wmi_channel_arg channel[];
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index b94c3619526cfa..70e8ddd3851f84 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -1544,10 +1544,6 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ 		return -EAGAIN;
+ 	}
+ 
+-	/* If scan req comes for p2p0, send it over primary I/F */
+-	if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
+-		vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
+-
+ 	brcmf_dbg(SCAN, "START ESCAN\n");
+ 
+ 	cfg->scan_request = request;
+@@ -1563,6 +1559,10 @@ brcmf_cfg80211_scan(struct wiphy *wiphy, struct cfg80211_scan_request *request)
+ 	if (err)
+ 		goto scan_out;
+ 
++	/* If scan req comes for p2p0, send it over primary I/F */
++	if (vif == cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif)
++		vif = cfg->p2p.bss_idx[P2PAPI_BSSCFG_PRIMARY].vif;
++
+ 	err = brcmf_do_escan(vif->ifp, request);
+ 	if (err)
+ 		goto scan_out;
+@@ -5527,8 +5527,7 @@ brcmf_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	struct brcmf_fil_action_frame_le *action_frame;
+ 	struct brcmf_fil_af_params_le *af_params;
+ 	bool ack;
+-	s32 chan_nr;
+-	u32 freq;
++	__le32 hw_ch;
+ 
+ 	brcmf_dbg(TRACE, "Enter\n");
+ 
+@@ -5589,25 +5588,34 @@ brcmf_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 		/* Add the channel. Use the one specified as parameter if any or
+ 		 * the current one (got from the firmware) otherwise
+ 		 */
+-		if (chan)
+-			freq = chan->center_freq;
+-		else
+-			brcmf_fil_cmd_int_get(vif->ifp, BRCMF_C_GET_CHANNEL,
+-					      &freq);
+-		chan_nr = ieee80211_frequency_to_channel(freq);
+-		af_params->channel = cpu_to_le32(chan_nr);
++		if (chan) {
++			hw_ch = cpu_to_le32(chan->hw_value);
++		} else {
++			err = brcmf_fil_cmd_data_get(vif->ifp,
++						     BRCMF_C_GET_CHANNEL,
++						     &hw_ch, sizeof(hw_ch));
++			if (err) {
++				bphy_err(drvr,
++					 "unable to get current hw channel\n");
++				goto free;
++			}
++		}
++		af_params->channel = hw_ch;
++
+ 		af_params->dwell_time = cpu_to_le32(params->wait);
+ 		memcpy(action_frame->data, &buf[DOT11_MGMT_HDR_LEN],
+ 		       le16_to_cpu(action_frame->len));
+ 
+-		brcmf_dbg(TRACE, "Action frame, cookie=%lld, len=%d, freq=%d\n",
+-			  *cookie, le16_to_cpu(action_frame->len), freq);
++		brcmf_dbg(TRACE, "Action frame, cookie=%lld, len=%d, channel=%d\n",
++			  *cookie, le16_to_cpu(action_frame->len),
++			  le32_to_cpu(af_params->channel));
+ 
+ 		ack = brcmf_p2p_send_action_frame(cfg, cfg_to_ndev(cfg),
+ 						  af_params);
+ 
+ 		cfg80211_mgmt_tx_status(wdev, *cookie, buf, len, ack,
+ 					GFP_KERNEL);
++free:
+ 		kfree(af_params);
+ 	} else {
+ 		brcmf_dbg(TRACE, "Unhandled, fc=%04x!!\n", mgmt->frame_control);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c
+index c9537fb597ce85..4f0ea4347840b5 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/core.c
+@@ -112,8 +112,7 @@ int brcmf_cyw_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	struct brcmf_cfg80211_vif *vif;
+ 	s32 err = 0;
+ 	bool ack = false;
+-	s32 chan_nr;
+-	u32 freq;
++	__le16 hw_ch;
+ 	struct brcmf_mf_params_le *mf_params;
+ 	u32 mf_params_len;
+ 	s32 ready;
+@@ -143,13 +142,18 @@ int brcmf_cyw_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	mf_params->len = cpu_to_le16(len - DOT11_MGMT_HDR_LEN);
+ 	mf_params->frame_control = mgmt->frame_control;
+ 
+-	if (chan)
+-		freq = chan->center_freq;
+-	else
+-		brcmf_fil_cmd_int_get(vif->ifp, BRCMF_C_GET_CHANNEL,
+-				      &freq);
+-	chan_nr = ieee80211_frequency_to_channel(freq);
+-	mf_params->channel = cpu_to_le16(chan_nr);
++	if (chan) {
++		hw_ch = cpu_to_le16(chan->hw_value);
++	} else {
++		err = brcmf_fil_cmd_data_get(vif->ifp, BRCMF_C_GET_CHANNEL,
++					     &hw_ch, sizeof(hw_ch));
++		if (err) {
++			bphy_err(drvr, "unable to get current hw channel\n");
++			goto free;
++		}
++	}
++	mf_params->channel = hw_ch;
++
+ 	memcpy(&mf_params->da[0], &mgmt->da[0], ETH_ALEN);
+ 	memcpy(&mf_params->bssid[0], &mgmt->bssid[0], ETH_ALEN);
+ 	mf_params->packet_id = cpu_to_le32(*cookie);
+@@ -159,7 +163,8 @@ int brcmf_cyw_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	brcmf_dbg(TRACE, "Auth frame, cookie=%d, fc=%04x, len=%d, channel=%d\n",
+ 		  le32_to_cpu(mf_params->packet_id),
+ 		  le16_to_cpu(mf_params->frame_control),
+-		  le16_to_cpu(mf_params->len), chan_nr);
++		  le16_to_cpu(mf_params->len),
++		  le16_to_cpu(mf_params->channel));
+ 
+ 	vif->mgmt_tx_id = le32_to_cpu(mf_params->packet_id);
+ 	set_bit(BRCMF_MGMT_TX_SEND_FRAME, &vif->mgmt_tx_status);
+@@ -185,6 +190,7 @@ int brcmf_cyw_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ tx_status:
+ 	cfg80211_mgmt_tx_status(wdev, *cookie, buf, len, ack,
+ 				GFP_KERNEL);
++free:
+ 	kfree(mf_params);
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/fwil_types.h
+index 08c69142495ab1..669564382e3212 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/fwil_types.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cyw/fwil_types.h
+@@ -80,7 +80,7 @@ struct brcmf_mf_params_le {
+ 	u8 da[ETH_ALEN];
+ 	u8 bssid[ETH_ALEN];
+ 	__le32 packet_id;
+-	u8 data[] __counted_by(len);
++	u8 data[] __counted_by_le(len);
+ };
+ 
+ #endif /* CYW_FWIL_TYPES_H_ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+index 66211426aa3a23..2b4dbebc71c2e8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+@@ -1049,9 +1049,11 @@ static void iwl_bg_restart(struct work_struct *data)
+  *
+  *****************************************************************************/
+ 
+-static void iwl_setup_deferred_work(struct iwl_priv *priv)
++static int iwl_setup_deferred_work(struct iwl_priv *priv)
+ {
+ 	priv->workqueue = alloc_ordered_workqueue(DRV_NAME, 0);
++	if (!priv->workqueue)
++		return -ENOMEM;
+ 
+ 	INIT_WORK(&priv->restart, iwl_bg_restart);
+ 	INIT_WORK(&priv->beacon_update, iwl_bg_beacon_update);
+@@ -1068,6 +1070,8 @@ static void iwl_setup_deferred_work(struct iwl_priv *priv)
+ 	timer_setup(&priv->statistics_periodic, iwl_bg_statistics_periodic, 0);
+ 
+ 	timer_setup(&priv->ucode_trace, iwl_bg_ucode_trace, 0);
++
++	return 0;
+ }
+ 
+ void iwl_cancel_deferred_work(struct iwl_priv *priv)
+@@ -1463,7 +1467,10 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ 	/********************
+ 	 * 6. Setup services
+ 	 ********************/
+-	iwl_setup_deferred_work(priv);
++	err = iwl_setup_deferred_work(priv);
++	if (err)
++		goto out_uninit_drv;
++
+ 	iwl_setup_rx_handlers(priv);
+ 
+ 	iwl_power_initialize(priv);
+@@ -1502,6 +1509,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
+ 	iwl_cancel_deferred_work(priv);
+ 	destroy_workqueue(priv->workqueue);
+ 	priv->workqueue = NULL;
++out_uninit_drv:
+ 	iwl_uninit_drv(priv);
+ out_free_eeprom_blob:
+ 	kfree(priv->eeprom_blob);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/rx.c b/drivers/net/wireless/intel/iwlwifi/mld/rx.c
+index ce0093d5c638a5..185c1a0cb47f0a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/rx.c
+@@ -1039,6 +1039,15 @@ static void iwl_mld_rx_eht(struct iwl_mld *mld, struct sk_buff *skb,
+ 			rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
+ 	}
+ 
++	/* update aggregation data for monitor sake on default queue */
++	if (!queue && (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD) &&
++	    (phy_info & IWL_RX_MPDU_PHY_AMPDU) && phy_data->first_subframe) {
++		rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT_KNOWN;
++		if (phy_data->data0 &
++		    cpu_to_le32(IWL_RX_PHY_DATA0_EHT_DELIM_EOF))
++			rx_status->flag |= RX_FLAG_AMPDU_EOF_BIT;
++	}
++
+ 	if (phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD)
+ 		iwl_mld_decode_eht_phy_data(mld, phy_data, rx_status, eht, usig);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index a2dc5c3b0596db..1c05a3d8e4245f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -61,8 +61,10 @@ static int __init iwl_mvm_init(void)
+ 	}
+ 
+ 	ret = iwl_opmode_register("iwlmvm", &iwl_mvm_ops);
+-	if (ret)
++	if (ret) {
+ 		pr_err("Unable to register MVM op_mode: %d\n", ret);
++		iwl_mvm_rate_control_unregister();
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index bab9ef37a1ab80..8bcb1d0dd61887 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -1227,6 +1227,10 @@ static int rxq_refill(struct ieee80211_hw *hw, int index, int limit)
+ 
+ 		addr = dma_map_single(&priv->pdev->dev, skb->data,
+ 				      MWL8K_RX_MAXSZ, DMA_FROM_DEVICE);
++		if (dma_mapping_error(&priv->pdev->dev, addr)) {
++			kfree_skb(skb);
++			break;
++		}
+ 
+ 		rxq->rxd_count++;
+ 		rx = rxq->tail++;
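
The mwl8k hunk adds the mandatory mapping check: dma_map_single() can fail (for example under swiotlb exhaustion or an IOMMU fault), and the handle must be validated with dma_mapping_error() before the device ever sees it. The general shape, with hypothetical names:

	#include <linux/dma-mapping.h>
	#include <linux/errno.h>
	#include <linux/skbuff.h>

	/* Map an rx buffer, validating the handle before the NIC uses it. */
	static int map_rx_buf(struct device *dev, struct sk_buff *skb,
			      size_t len, dma_addr_t *out)
	{
		dma_addr_t addr = dma_map_single(dev, skb->data, len,
						 DMA_FROM_DEVICE);

		if (dma_mapping_error(dev, addr)) {
			kfree_skb(skb);	/* never hand a bad address to hw */
			return -ENOMEM;
		}

		*out = addr;
		return 0;
	}
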
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 8ac6fbb736ab87..300c863f0e3e20 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -2916,7 +2916,7 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	for (i = 0; i < sreq->n_ssids; i++) {
+ 		if (!sreq->ssids[i].ssid_len)
+ 			continue;
+-		if (i > MT7925_RNR_SCAN_MAX_BSSIDS)
++		if (i >= MT7925_RNR_SCAN_MAX_BSSIDS)
+ 			break;
+ 
+ 		ssid->ssids[n_ssids].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
+@@ -2933,7 +2933,7 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 		mt76_connac_mcu_build_rnr_scan_param(mdev, sreq);
+ 
+ 		for (j = 0; j < mdev->rnr.bssid_num; j++) {
+-			if (j > MT7925_RNR_SCAN_MAX_BSSIDS)
++			if (j >= MT7925_RNR_SCAN_MAX_BSSIDS)
+ 				break;
+ 
+ 			tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_BSSID,
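
Both mt7925 loops shared the same off-by-one: 'i > MAX' still admits i == MAX, one element past the end of an array sized MAX, so '>=' is the correct guard. Illustrated:

	#include <linux/types.h>

	/* For an array of 'cap' entries, valid indices are 0 .. cap - 1. */
	static void fill_ids(int *ids, size_t cap, size_t n)
	{
		size_t i;

		for (i = 0; i < n; i++) {
			if (i >= cap)	/* 'i > cap' would still write ids[cap] */
				break;
			ids[i] = (int)i;
		}
	}
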
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 07dd75ce94a5f2..f41b2c98bc4518 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -1061,7 +1061,7 @@ mt7996_mac_sta_add(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ 	struct mt7996_dev *dev = container_of(mdev, struct mt7996_dev, mt76);
+ 	struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
+ 	struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+-	unsigned long links = sta->mlo ? sta->valid_links : BIT(0);
++	unsigned long links = sta->valid_links ? sta->valid_links : BIT(0);
+ 	int err;
+ 
+ 	mutex_lock(&mdev->mutex);
+@@ -1155,7 +1155,7 @@ mt7996_mac_sta_remove(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ {
+ 	struct mt76_dev *mdev = mphy->dev;
+ 	struct mt7996_dev *dev = container_of(mdev, struct mt7996_dev, mt76);
+-	unsigned long links = sta->mlo ? sta->valid_links : BIT(0);
++	unsigned long links = sta->valid_links ? sta->valid_links : BIT(0);
+ 
+ 	mutex_lock(&mdev->mutex);
+ 
+@@ -1216,10 +1216,17 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ 
+ 	if (vif) {
+ 		struct mt7996_vif *mvif = (void *)vif->drv_priv;
+-		struct mt76_vif_link *mlink;
++		struct mt76_vif_link *mlink = &mvif->deflink.mt76;
+ 
+-		mlink = rcu_dereference(mvif->mt76.link[link_id]);
+-		if (mlink && mlink->wcid)
++		if (link_id < IEEE80211_LINK_UNSPECIFIED)
++			mlink = rcu_dereference(mvif->mt76.link[link_id]);
++
++		if (!mlink) {
++			ieee80211_free_txskb(hw, skb);
++			goto unlock;
++		}
++
++		if (mlink->wcid)
+ 			wcid = mlink->wcid;
+ 
+ 		if (mvif->mt76.roc_phy &&
+@@ -1228,7 +1235,7 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ 			if (mphy->roc_link)
+ 				wcid = mphy->roc_link->wcid;
+ 		} else {
+-			mphy = mt76_vif_link_phy(&mvif->deflink.mt76);
++			mphy = mt76_vif_link_phy(mlink);
+ 		}
+ 	}
+ 
+@@ -1237,7 +1244,7 @@ static void mt7996_tx(struct ieee80211_hw *hw,
+ 		goto unlock;
+ 	}
+ 
+-	if (control->sta) {
++	if (control->sta && link_id < IEEE80211_LINK_UNSPECIFIED) {
+ 		struct mt7996_sta *msta = (void *)control->sta->drv_priv;
+ 		struct mt7996_sta_link *msta_link;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 994526c65bfc32..dd4b7b8c34ea1f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -2326,8 +2326,7 @@ mt7996_mcu_sta_mld_setup_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
+ 
+ 	if (nlinks > 1) {
+ 		link_id = __ffs(links & ~BIT(msta->deflink_id));
+-		msta_link = mt76_dereference(msta->link[msta->deflink_id],
+-					     &dev->mt76);
++		msta_link = mt76_dereference(msta->link[link_id], &dev->mt76);
+ 		if (!msta_link)
+ 			return;
+ 	}
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.c b/drivers/net/wireless/purelifi/plfxlc/mac.c
+index 82d1bf7edba20d..a7f5d287e369bd 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.c
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.c
+@@ -99,11 +99,6 @@ int plfxlc_mac_init_hw(struct ieee80211_hw *hw)
+ 	return r;
+ }
+ 
+-void plfxlc_mac_release(struct plfxlc_mac *mac)
+-{
+-	plfxlc_chip_release(&mac->chip);
+-}
+-
+ int plfxlc_op_start(struct ieee80211_hw *hw)
+ {
+ 	plfxlc_hw_mac(hw)->chip.usb.initialized = 1;
+@@ -755,3 +750,9 @@ struct ieee80211_hw *plfxlc_mac_alloc_hw(struct usb_interface *intf)
+ 	SET_IEEE80211_DEV(hw, &intf->dev);
+ 	return hw;
+ }
++
++void plfxlc_mac_release_hw(struct ieee80211_hw *hw)
++{
++	plfxlc_chip_release(&plfxlc_hw_mac(hw)->chip);
++	ieee80211_free_hw(hw);
++}
+diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.h b/drivers/net/wireless/purelifi/plfxlc/mac.h
+index 9384acddcf26a3..56da502999c1aa 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/mac.h
++++ b/drivers/net/wireless/purelifi/plfxlc/mac.h
+@@ -168,7 +168,7 @@ static inline u8 *plfxlc_mac_get_perm_addr(struct plfxlc_mac *mac)
+ }
+ 
+ struct ieee80211_hw *plfxlc_mac_alloc_hw(struct usb_interface *intf);
+-void plfxlc_mac_release(struct plfxlc_mac *mac);
++void plfxlc_mac_release_hw(struct ieee80211_hw *hw);
+ 
+ int plfxlc_mac_preinit_hw(struct ieee80211_hw *hw, const u8 *hw_address);
+ int plfxlc_mac_init_hw(struct ieee80211_hw *hw);
+diff --git a/drivers/net/wireless/purelifi/plfxlc/usb.c b/drivers/net/wireless/purelifi/plfxlc/usb.c
+index d8b0b79dea1ac4..711902a809dba4 100644
+--- a/drivers/net/wireless/purelifi/plfxlc/usb.c
++++ b/drivers/net/wireless/purelifi/plfxlc/usb.c
+@@ -604,7 +604,7 @@ static int probe(struct usb_interface *intf,
+ 	r = plfxlc_upload_mac_and_serial(intf, hw_address, serial_number);
+ 	if (r) {
+ 		dev_err(&intf->dev, "MAC and Serial upload failed (%d)\n", r);
+-		goto error;
++		goto error_free_hw;
+ 	}
+ 
+ 	chip->unit_type = STA;
+@@ -613,13 +613,13 @@ static int probe(struct usb_interface *intf,
+ 	r = plfxlc_mac_preinit_hw(hw, hw_address);
+ 	if (r) {
+ 		dev_err(&intf->dev, "Init mac failed (%d)\n", r);
+-		goto error;
++		goto error_free_hw;
+ 	}
+ 
+ 	r = ieee80211_register_hw(hw);
+ 	if (r) {
+ 		dev_err(&intf->dev, "Register device failed (%d)\n", r);
+-		goto error;
++		goto error_free_hw;
+ 	}
+ 
+ 	if ((le16_to_cpu(interface_to_usbdev(intf)->descriptor.idVendor) ==
+@@ -632,7 +632,7 @@ static int probe(struct usb_interface *intf,
+ 	}
+ 	if (r != 0) {
+ 		dev_err(&intf->dev, "FPGA download failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	tx->mac_fifo_full = 0;
+@@ -642,21 +642,21 @@ static int probe(struct usb_interface *intf,
+ 	r = plfxlc_usb_init_hw(usb);
+ 	if (r < 0) {
+ 		dev_err(&intf->dev, "usb_init_hw failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	msleep(PLF_MSLEEP_TIME);
+ 	r = plfxlc_chip_switch_radio(chip, PLFXLC_RADIO_ON);
+ 	if (r < 0) {
+ 		dev_dbg(&intf->dev, "chip_switch_radio_on failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	msleep(PLF_MSLEEP_TIME);
+ 	r = plfxlc_chip_set_rate(chip, 8);
+ 	if (r < 0) {
+ 		dev_dbg(&intf->dev, "chip_set_rate failed (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	msleep(PLF_MSLEEP_TIME);
+@@ -664,7 +664,7 @@ static int probe(struct usb_interface *intf,
+ 			    hw_address, ETH_ALEN, USB_REQ_MAC_WR);
+ 	if (r < 0) {
+ 		dev_dbg(&intf->dev, "MAC_WR failure (%d)\n", r);
+-		goto error;
++		goto error_unreg_hw;
+ 	}
+ 
+ 	plfxlc_chip_enable_rxtx(chip);
+@@ -691,12 +691,12 @@ static int probe(struct usb_interface *intf,
+ 	plfxlc_mac_init_hw(hw);
+ 	usb->initialized = true;
+ 	return 0;
++
++error_unreg_hw:
++	ieee80211_unregister_hw(hw);
++error_free_hw:
++	plfxlc_mac_release_hw(hw);
+ error:
+-	if (hw) {
+-		plfxlc_mac_release(plfxlc_hw_mac(hw));
+-		ieee80211_unregister_hw(hw);
+-		ieee80211_free_hw(hw);
+-	}
+ 	dev_err(&intf->dev, "pureLifi:Device error");
+ 	return r;
+ }
+@@ -730,8 +730,7 @@ static void disconnect(struct usb_interface *intf)
+ 	 */
+ 	usb_reset_device(interface_to_usbdev(intf));
+ 
+-	plfxlc_mac_release(mac);
+-	ieee80211_free_hw(hw);
++	plfxlc_mac_release_hw(hw);
+ }
+ 
+ static void plfxlc_usb_resume(struct plfxlc_usb *usb)
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+index 220ac5bdf279a1..8a57d6c72335ef 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+@@ -1041,10 +1041,11 @@ static void rtl8187_stop(struct ieee80211_hw *dev, bool suspend)
+ 	rtl818x_iowrite8(priv, &priv->map->CONFIG4, reg | RTL818X_CONFIG4_VCOOFF);
+ 	rtl818x_iowrite8(priv, &priv->map->EEPROM_CMD, RTL818X_EEPROM_CMD_NORMAL);
+ 
++	usb_kill_anchored_urbs(&priv->anchored);
++
+ 	while ((skb = skb_dequeue(&priv->b_tx_status.queue)))
+ 		dev_kfree_skb_any(skb);
+ 
+-	usb_kill_anchored_urbs(&priv->anchored);
+ 	mutex_unlock(&priv->conf_mutex);
+ 
+ 	if (!priv->is_rtl8187b)
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index 569856ca677f62..c6f69d87c38d41 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -6617,7 +6617,7 @@ static int rtl8xxxu_submit_rx_urb(struct rtl8xxxu_priv *priv,
+ 		skb_size = fops->rx_agg_buf_size;
+ 		skb_size += (rx_desc_sz + sizeof(struct rtl8723au_phy_stats));
+ 	} else {
+-		skb_size = IEEE80211_MAX_FRAME_LEN;
++		skb_size = IEEE80211_MAX_FRAME_LEN + rx_desc_sz;
+ 	}
+ 
+ 	skb = __netdev_alloc_skb(NULL, skb_size, GFP_KERNEL);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index c4de5d114eda1d..8be6e70d92d121 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -349,7 +349,7 @@ int rtw_sta_add(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 	struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+ 	int i;
+ 
+-	if (vif->type == NL80211_IFTYPE_STATION) {
++	if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+ 		si->mac_id = rtwvif->mac_id;
+ 	} else {
+ 		si->mac_id = rtw_acquire_macid(rtwdev);
+@@ -386,7 +386,7 @@ void rtw_sta_remove(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ 
+ 	cancel_work_sync(&si->rc_work);
+ 
+-	if (vif->type != NL80211_IFTYPE_STATION)
++	if (vif->type != NL80211_IFTYPE_STATION || sta->tdls)
+ 		rtw_release_macid(rtwdev, si->mac_id);
+ 	if (fw_exist)
+ 		rtw_fw_media_status_report(rtwdev, si->mac_id, false);
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 49447668cbf3d6..c886dd2a73b412 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -2158,6 +2158,11 @@ static void rtw89_core_cancel_6ghz_probe_tx(struct rtw89_dev *rtwdev,
+ 	if (rx_status->band != NL80211_BAND_6GHZ)
+ 		return;
+ 
++	if (unlikely(!(rtwdev->chip->support_bands & BIT(NL80211_BAND_6GHZ)))) {
++		rtw89_debug(rtwdev, RTW89_DBG_UNEXP, "invalid rx on unsupported 6 GHz\n");
++		return;
++	}
++
+ 	ssid_ie = cfg80211_find_ie(WLAN_EID_SSID, ies, skb->len);
+ 
+ 	list_for_each_entry(info, &pkt_list[NL80211_BAND_6GHZ], list) {
+@@ -5239,7 +5244,8 @@ int rtw89_core_mlsr_switch(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ 	if (unlikely(!ieee80211_vif_is_mld(vif)))
+ 		return -EOPNOTSUPP;
+ 
+-	if (unlikely(!(usable_links & BIT(link_id)))) {
++	if (unlikely(link_id >= IEEE80211_MLD_MAX_NUM_LINKS ||
++		     !(usable_links & BIT(link_id)))) {
+ 		rtw89_warn(rtwdev, "%s: link id %u is not usable\n", __func__,
+ 			   link_id);
+ 		return -ENOLINK;
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index 76a2e26d4a10b4..e45e5dd5ca0a41 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -119,10 +119,12 @@ static u64 get_eht_mcs_ra_mask(u8 *max_nss, u8 start_mcs, u8 n_nss)
+ 	return mask;
+ }
+ 
+-static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta)
++static u64 get_eht_ra_mask(struct rtw89_vif_link *rtwvif_link,
++			   struct ieee80211_link_sta *link_sta)
+ {
+-	struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap;
++	struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ 	struct ieee80211_eht_mcs_nss_supp_20mhz_only *mcs_nss_20mhz;
++	struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap;
+ 	struct ieee80211_eht_mcs_nss_supp_bw *mcs_nss;
+ 	u8 *he_phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
+ 
+@@ -136,8 +138,8 @@ static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta)
+ 		/* MCS 9, 11, 13 */
+ 		return get_eht_mcs_ra_mask(mcs_nss->rx_tx_max_nss, 9, 3);
+ 	case IEEE80211_STA_RX_BW_20:
+-		if (!(he_phy_cap[0] &
+-		      IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_MASK_ALL)) {
++		if (vif->type == NL80211_IFTYPE_AP &&
++		    !(he_phy_cap[0] & IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_MASK_ALL)) {
+ 			mcs_nss_20mhz = &eht_cap->eht_mcs_nss_supp.only_20mhz;
+ 			/* MCS 7, 9, 11, 13 */
+ 			return get_eht_mcs_ra_mask(mcs_nss_20mhz->rx_tx_max_nss, 7, 4);
+@@ -332,7 +334,7 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ 	/* Set the ra mask from sta's capability */
+ 	if (link_sta->eht_cap.has_eht) {
+ 		mode |= RTW89_RA_MODE_EHT;
+-		ra_mask |= get_eht_ra_mask(link_sta);
++		ra_mask |= get_eht_ra_mask(rtwvif_link, link_sta);
+ 
+ 		if (rtwdev->hal.no_mcs_12_13)
+ 			high_rate_masks = rtw89_ra_mask_eht_mcs0_11;
+diff --git a/drivers/net/wireless/realtek/rtw89/sar.c b/drivers/net/wireless/realtek/rtw89/sar.c
+index 517b66022f18b6..7f568ffb3766f4 100644
+--- a/drivers/net/wireless/realtek/rtw89/sar.c
++++ b/drivers/net/wireless/realtek/rtw89/sar.c
+@@ -199,7 +199,8 @@ struct rtw89_sar_handler rtw89_sar_handlers[RTW89_SAR_SOURCE_NR] = {
+ 		typeof(_dev) _d = (_dev);				\
+ 		BUILD_BUG_ON(!rtw89_sar_handlers[_s].descr_sar_source);	\
+ 		BUILD_BUG_ON(!rtw89_sar_handlers[_s].query_sar_config);	\
+-		lockdep_assert_wiphy(_d->hw->wiphy);			\
++		if (test_bit(RTW89_FLAG_PROBE_DONE, _d->flags))		\
++			lockdep_assert_wiphy(_d->hw->wiphy);		\
+ 		_d->sar._cfg_name = *(_cfg_data);			\
+ 		_d->sar.src = _s;					\
+ 	} while (0)
+@@ -499,8 +500,6 @@ static void rtw89_set_sar_from_acpi(struct rtw89_dev *rtwdev)
+ 	struct rtw89_sar_cfg_acpi *cfg;
+ 	int ret;
+ 
+-	lockdep_assert_wiphy(rtwdev->hw->wiphy);
+-
+ 	cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
+ 	if (!cfg)
+ 		return;
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 175c5b6d4dd587..491df044635f1d 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1962,24 +1962,24 @@ static int __init nvmet_init(void)
+ 	if (!nvmet_wq)
+ 		goto out_free_buffered_work_queue;
+ 
+-	error = nvmet_init_discovery();
++	error = nvmet_init_debugfs();
+ 	if (error)
+ 		goto out_free_nvmet_work_queue;
+ 
+-	error = nvmet_init_debugfs();
++	error = nvmet_init_discovery();
+ 	if (error)
+-		goto out_exit_discovery;
++		goto out_exit_debugfs;
+ 
+ 	error = nvmet_init_configfs();
+ 	if (error)
+-		goto out_exit_debugfs;
++		goto out_exit_discovery;
+ 
+ 	return 0;
+ 
+-out_exit_debugfs:
+-	nvmet_exit_debugfs();
+ out_exit_discovery:
+ 	nvmet_exit_discovery();
++out_exit_debugfs:
++	nvmet_exit_debugfs();
+ out_free_nvmet_work_queue:
+ 	destroy_workqueue(nvmet_wq);
+ out_free_buffered_work_queue:
+@@ -1994,8 +1994,8 @@ static int __init nvmet_init(void)
+ static void __exit nvmet_exit(void)
+ {
+ 	nvmet_exit_configfs();
+-	nvmet_exit_debugfs();
+ 	nvmet_exit_discovery();
++	nvmet_exit_debugfs();
+ 	ida_destroy(&cntlid_ida);
+ 	destroy_workqueue(nvmet_wq);
+ 	destroy_workqueue(buffered_io_wq);
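
The nvmet reorder restores the invariant that teardown, in the error ladder and in module exit alike, runs in strict reverse order of initialization. The shape of the pattern, with hypothetical init/exit pairs standing in for discovery, debugfs, and configfs:

	#include <linux/init.h>
	#include <linux/module.h>

	/* Stand-ins for the real setup steps; hypothetical. */
	static int  init_a(void) { return 0; }
	static void exit_a(void) { }
	static int  init_b(void) { return 0; }
	static void exit_b(void) { }
	static int  init_c(void) { return 0; }
	static void exit_c(void) { }

	static int __init mod_init(void)
	{
		int err;

		err = init_a();
		if (err)
			return err;

		err = init_b();
		if (err)
			goto out_exit_a;

		err = init_c();
		if (err)
			goto out_exit_b;

		return 0;

	out_exit_b:		/* each label undoes exactly what succeeded */
		exit_b();
	out_exit_a:
		exit_a();
		return err;
	}

	static void __exit mod_exit(void)
	{
		exit_c();	/* strict reverse of init order, matching */
		exit_b();	/* the error ladder above */
		exit_a();
	}
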
+diff --git a/drivers/nvme/target/pci-epf.c b/drivers/nvme/target/pci-epf.c
+index a4295a5b8d280f..6f1651183e3277 100644
+--- a/drivers/nvme/target/pci-epf.c
++++ b/drivers/nvme/target/pci-epf.c
+@@ -1242,8 +1242,11 @@ static void nvmet_pci_epf_queue_response(struct nvmet_req *req)
+ 
+ 	iod->status = le16_to_cpu(req->cqe->status) >> 1;
+ 
+-	/* If we have no data to transfer, directly complete the command. */
+-	if (!iod->data_len || iod->dma_dir != DMA_TO_DEVICE) {
++	/*
++	 * If the command failed or we have no data to transfer, complete the
++	 * command immediately.
++	 */
++	if (iod->status || !iod->data_len || iod->dma_dir != DMA_TO_DEVICE) {
+ 		nvmet_pci_epf_complete_iod(iod);
+ 		return;
+ 	}
+@@ -1604,8 +1607,13 @@ static void nvmet_pci_epf_exec_iod_work(struct work_struct *work)
+ 		goto complete;
+ 	}
+ 
++	/*
++	 * If nvmet_req_init() fails (e.g., unsupported opcode), it calls
++	 * __nvmet_req_complete() internally, which in turn calls
++	 * nvmet_pci_epf_queue_response() and completes the command directly.
++	 */
+ 	if (!nvmet_req_init(req, &iod->sq->nvme_sq, &nvmet_pci_epf_fabrics_ops))
+-		goto complete;
++		return;
+ 
+ 	iod->data_len = nvmet_req_transfer_len(req);
+ 	if (iod->data_len) {
+@@ -1643,10 +1651,11 @@ static void nvmet_pci_epf_exec_iod_work(struct work_struct *work)
+ 
+ 	wait_for_completion(&iod->done);
+ 
+-	if (iod->status == NVME_SC_SUCCESS) {
+-		WARN_ON_ONCE(!iod->data_len || iod->dma_dir != DMA_TO_DEVICE);
+-		nvmet_pci_epf_transfer_iod_data(iod);
+-	}
++	if (iod->status != NVME_SC_SUCCESS)
++		return;
++
++	WARN_ON_ONCE(!iod->data_len || iod->dma_dir != DMA_TO_DEVICE);
++	nvmet_pci_epf_transfer_iod_data(iod);
+ 
+ complete:
+ 	nvmet_pci_epf_complete_iod(iod);
+diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+index 93171a39287949..108d30637920e9 100644
+--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
++++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+@@ -458,6 +458,7 @@ static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg)
+ 
+ 	if (reg & PCIE_RDLH_LINK_UP_CHGED) {
+ 		if (rockchip_pcie_link_up(pci)) {
++			msleep(PCIE_RESET_CONFIG_WAIT_MS);
+ 			dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
+ 			/* Rescan the bus to enumerate endpoint devices */
+ 			pci_lock_rescan_remove();
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index c789e3f856550b..9b12f2f0204222 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1564,6 +1564,7 @@ static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
+ 	writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR);
+ 
+ 	if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
++		msleep(PCIE_RESET_CONFIG_WAIT_MS);
+ 		dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
+ 		/* Rescan the bus to enumerate endpoint devices */
+ 		pci_lock_rescan_remove();
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index b9e7a8710cf047..648b6fcb93b0b7 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -439,7 +439,7 @@ static irqreturn_t rockchip_pcie_subsys_irq_handler(int irq, void *arg)
+ 			dev_dbg(dev, "malformed TLP received from the link\n");
+ 
+ 		if (sub_reg & PCIE_CORE_INT_UCR)
+-			dev_dbg(dev, "malformed TLP received from the link\n");
++			dev_dbg(dev, "Unexpected Completion received from the link\n");
+ 
+ 		if (sub_reg & PCIE_CORE_INT_FCE)
+ 			dev_dbg(dev, "an error was observed in the flow control advertisements from the other side\n");
+diff --git a/drivers/pci/controller/plda/pcie-starfive.c b/drivers/pci/controller/plda/pcie-starfive.c
+index e73c1b7bc8efc4..3caf53c6c08238 100644
+--- a/drivers/pci/controller/plda/pcie-starfive.c
++++ b/drivers/pci/controller/plda/pcie-starfive.c
+@@ -368,7 +368,7 @@ static int starfive_pcie_host_init(struct plda_pcie_rp *plda)
+ 	 * of 100ms following exit from a conventional reset before
+ 	 * sending a configuration request to the device.
+ 	 */
+-	msleep(PCIE_RESET_CONFIG_DEVICE_WAIT_MS);
++	msleep(PCIE_RESET_CONFIG_WAIT_MS);
+ 
+ 	if (starfive_pcie_host_wait_for_link(pcie))
+ 		dev_info(dev, "port link down\n");
+diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+index e4da3fdb000723..577055be303399 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -510,7 +510,7 @@ static int epf_ntb_db_bar_init(struct epf_ntb *ntb)
+ 	struct device *dev = &ntb->epf->dev;
+ 	int ret;
+ 	struct pci_epf_bar *epf_bar;
+-	void __iomem *mw_addr;
++	void *mw_addr;
+ 	enum pci_barno barno;
+ 	size_t size = sizeof(u32) * ntb->db_count;
+ 
+@@ -680,7 +680,7 @@ static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
+ 		barno = pci_epc_get_next_free_bar(epc_features, barno);
+ 		if (barno < 0) {
+ 			dev_err(dev, "Fail to get NTB function BAR\n");
+-			return barno;
++			return -ENOENT;
+ 		}
+ 		ntb->epf_ntb_bar[bar] = barno;
+ 	}
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index 573a41869c153f..4f85e7fe29ec23 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -3,12 +3,15 @@
+  * PCI Hotplug Driver for PowerPC PowerNV platform.
+  *
+  * Copyright Gavin Shan, IBM Corporation 2016.
++ * Copyright (C) 2025 Raptor Engineering, LLC
++ * Copyright (C) 2025 Raptor Computing Systems, LLC
+  */
+ 
+ #include <linux/bitfield.h>
+ #include <linux/libfdt.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
++#include <linux/delay.h>
+ #include <linux/pci_hotplug.h>
+ #include <linux/of_fdt.h>
+ 
+@@ -36,8 +39,10 @@ static void pnv_php_register(struct device_node *dn);
+ static void pnv_php_unregister_one(struct device_node *dn);
+ static void pnv_php_unregister(struct device_node *dn);
+ 
++static void pnv_php_enable_irq(struct pnv_php_slot *php_slot);
++
+ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+-				bool disable_device)
++				bool disable_device, bool disable_msi)
+ {
+ 	struct pci_dev *pdev = php_slot->pdev;
+ 	u16 ctrl;
+@@ -53,19 +58,15 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
+ 		php_slot->irq = 0;
+ 	}
+ 
+-	if (php_slot->wq) {
+-		destroy_workqueue(php_slot->wq);
+-		php_slot->wq = NULL;
+-	}
+-
+-	if (disable_device) {
++	if (disable_device || disable_msi) {
+ 		if (pdev->msix_enabled)
+ 			pci_disable_msix(pdev);
+ 		else if (pdev->msi_enabled)
+ 			pci_disable_msi(pdev);
++	}
+ 
++	if (disable_device)
+ 		pci_disable_device(pdev);
+-	}
+ }
+ 
+ static void pnv_php_free_slot(struct kref *kref)
+@@ -74,7 +75,8 @@ static void pnv_php_free_slot(struct kref *kref)
+ 					struct pnv_php_slot, kref);
+ 
+ 	WARN_ON(!list_empty(&php_slot->children));
+-	pnv_php_disable_irq(php_slot, false);
++	pnv_php_disable_irq(php_slot, false, false);
++	destroy_workqueue(php_slot->wq);
+ 	kfree(php_slot->name);
+ 	kfree(php_slot);
+ }
+@@ -391,6 +393,20 @@ static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state)
+ 	return 0;
+ }
+ 
++static int pcie_check_link_active(struct pci_dev *pdev)
++{
++	u16 lnk_status;
++	int ret;
++
++	ret = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
++	if (ret == PCIBIOS_DEVICE_NOT_FOUND || PCI_POSSIBLE_ERROR(lnk_status))
++		return -ENODEV;
++
++	ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
++
++	return ret;
++}
++
+ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ {
+ 	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+@@ -403,6 +419,19 @@ static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
+ 	 */
+ 	ret = pnv_pci_get_presence_state(php_slot->id, &presence);
+ 	if (ret >= 0) {
++		if (pci_pcie_type(php_slot->pdev) == PCI_EXP_TYPE_DOWNSTREAM &&
++			presence == OPAL_PCI_SLOT_EMPTY) {
++			/*
++			 * Similar to pciehp_hpc, check whether the Link Active
++			 * bit is set to account for broken downstream bridges
++			 * that don't properly assert Presence Detect State, as
++			 * was observed on the Microsemi Switchtec PM8533 PFX
++			 * [11f8:8533].
++			 */
++			if (pcie_check_link_active(php_slot->pdev) > 0)
++				presence = OPAL_PCI_SLOT_PRESENT;
++		}
++
+ 		*state = presence;
+ 		ret = 0;
+ 	} else {
+@@ -442,6 +471,61 @@ static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state)
+ 	return 0;
+ }
+ 
++static int pnv_php_activate_slot(struct pnv_php_slot *php_slot,
++				 struct hotplug_slot *slot)
++{
++	int ret, i;
++
++	/*
++	 * Issue initial slot activation command to firmware
++	 *
++	 * Firmware will power slot on, attempt to train the link, and
++	 * discover any downstream devices. If this process fails, firmware
++	 * will return an error code and an invalid device tree. Failure
++	 * can be caused for multiple reasons, including a faulty
++	 * downstream device, poor connection to the downstream device, or
++	 * a previously latched PHB fence.  On failure, issue fundamental
++	 * reset up to three times before aborting.
++	 */
++	ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++	if (ret) {
++		SLOT_WARN(
++			php_slot,
++			"PCI slot activation failed with error code %d, possible frozen PHB",
++			ret);
++		SLOT_WARN(
++			php_slot,
++			"Attempting complete PHB reset before retrying slot activation\n");
++		for (i = 0; i < 3; i++) {
++			/*
++			 * Slot activation failed, PHB may be fenced from a
++			 * prior device failure.
++			 *
++			 * Use the OPAL fundamental reset call to both try a
++			 * device reset and clear any potentially active PHB
++			 * fence / freeze.
++			 */
++			SLOT_WARN(php_slot, "Try %d...\n", i + 1);
++			pci_set_pcie_reset_state(php_slot->pdev,
++						 pcie_warm_reset);
++			msleep(250);
++			pci_set_pcie_reset_state(php_slot->pdev,
++						 pcie_deassert_reset);
++
++			ret = pnv_php_set_slot_power_state(
++				slot, OPAL_PCI_SLOT_POWER_ON);
++			if (!ret)
++				break;
++		}
++
++		if (i >= 3)
++			SLOT_WARN(php_slot,
++				  "Failed to bring slot online, aborting!\n");
++	}
++
++	return ret;
++}
++
+ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ {
+ 	struct hotplug_slot *slot = &php_slot->slot;
+@@ -504,7 +588,7 @@ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ 		goto scan;
+ 
+ 	/* Power is off, turn it on and then scan the slot */
+-	ret = pnv_php_set_slot_power_state(slot, OPAL_PCI_SLOT_POWER_ON);
++	ret = pnv_php_activate_slot(php_slot, slot);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -561,8 +645,58 @@ static int pnv_php_reset_slot(struct hotplug_slot *slot, bool probe)
+ static int pnv_php_enable_slot(struct hotplug_slot *slot)
+ {
+ 	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
++	u32 prop32;
++	int ret;
++
++	ret = pnv_php_enable(php_slot, true);
++	if (ret)
++		return ret;
++
++	/* (Re-)enable interrupt if the slot supports surprise hotplug */
++	ret = of_property_read_u32(php_slot->dn, "ibm,slot-surprise-pluggable",
++				   &prop32);
++	if (!ret && prop32)
++		pnv_php_enable_irq(php_slot);
++
++	return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all slots on the provided bus, as well as
++ * all downstream slots in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_irqs(struct pci_bus *bus)
++{
++	struct pci_bus *child_bus;
++	struct pci_slot *slot;
++
++	/* First go down child buses */
++	list_for_each_entry(child_bus, &bus->children, node)
++		pnv_php_disable_all_irqs(child_bus);
++
++	/* Disable IRQs for all pnv_php slots on this bus */
++	list_for_each_entry(slot, &bus->slots, list) {
++		struct pnv_php_slot *php_slot = to_pnv_php_slot(slot->hotplug);
+ 
+-	return pnv_php_enable(php_slot, true);
++		pnv_php_disable_irq(php_slot, false, true);
++	}
++
++	return 0;
++}
++
++/*
++ * Disable any hotplug interrupts for all downstream slots on the provided
++ * bus in preparation for a hot unplug.
++ */
++static int pnv_php_disable_all_downstream_irqs(struct pci_bus *bus)
++{
++	struct pci_bus *child_bus;
++
++	/* Go down child buses, recursively deactivating their IRQs */
++	list_for_each_entry(child_bus, &bus->children, node)
++		pnv_php_disable_all_irqs(child_bus);
++
++	return 0;
+ }
+ 
+ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+@@ -579,6 +713,13 @@ static int pnv_php_disable_slot(struct hotplug_slot *slot)
+ 	    php_slot->state != PNV_PHP_STATE_REGISTERED)
+ 		return 0;
+ 
++	/*
++	 * Free all IRQ resources from all child slots before remove.
++	 * Note that we do not disable the root slot IRQ here as that
++	 * would also deactivate the slot hot (re)plug interrupt!
++	 */
++	pnv_php_disable_all_downstream_irqs(php_slot->bus);
++
+ 	/* Remove all devices behind the slot */
+ 	pci_lock_rescan_remove();
+ 	pci_hp_remove_devices(php_slot->bus);
+@@ -647,6 +788,15 @@ static struct pnv_php_slot *pnv_php_alloc_slot(struct device_node *dn)
+ 		return NULL;
+ 	}
+ 
++	/* Allocate workqueue for this slot's interrupt handling */
++	php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
++	if (!php_slot->wq) {
++		SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
++		kfree(php_slot->name);
++		kfree(php_slot);
++		return NULL;
++	}
++
+ 	if (dn->child && PCI_DN(dn->child))
+ 		php_slot->slot_no = PCI_SLOT(PCI_DN(dn->child)->devfn);
+ 	else
+@@ -745,16 +895,63 @@ static int pnv_php_enable_msix(struct pnv_php_slot *php_slot)
+ 	return entry.vector;
+ }
+ 
++static void
++pnv_php_detect_clear_surprise_removal_freeze(struct pnv_php_slot *php_slot)
++{
++	struct pci_dev *pdev = php_slot->pdev;
++	struct eeh_dev *edev;
++	struct eeh_pe *pe;
++	int i, rc;
++
++	/*
++	 * When a device is surprise removed from a downstream bridge slot,
++	 * the upstream bridge port can still end up frozen due to related EEH
++	 * events, which will in turn block the MSI interrupts for slot hotplug
++	 * detection.
++	 *
++	 * Detect and thaw any frozen upstream PE after slot deactivation.
++	 */
++	edev = pci_dev_to_eeh_dev(pdev);
++	pe = edev ? edev->pe : NULL;
++	rc = eeh_pe_get_state(pe);
++	if ((rc == -ENODEV) || (rc == -ENOENT)) {
++		SLOT_WARN(
++			php_slot,
++			"Upstream bridge PE state unknown, hotplug detect may fail\n");
++	} else {
++		if (pe->state & EEH_PE_ISOLATED) {
++			SLOT_WARN(
++				php_slot,
++				"Upstream bridge PE %02x frozen, thawing...\n",
++				pe->addr);
++			for (i = 0; i < 3; i++)
++				if (!eeh_unfreeze_pe(pe))
++					break;
++			if (i >= 3)
++				SLOT_WARN(
++					php_slot,
++					"Unable to thaw PE %02x, hotplug detect will fail!\n",
++					pe->addr);
++			else
++				SLOT_WARN(php_slot,
++					  "PE %02x thawed successfully\n",
++					  pe->addr);
++		}
++	}
++}
++
+ static void pnv_php_event_handler(struct work_struct *work)
+ {
+ 	struct pnv_php_event *event =
+ 		container_of(work, struct pnv_php_event, work);
+ 	struct pnv_php_slot *php_slot = event->php_slot;
+ 
+-	if (event->added)
++	if (event->added) {
+ 		pnv_php_enable_slot(&php_slot->slot);
+-	else
++	} else {
+ 		pnv_php_disable_slot(&php_slot->slot);
++		pnv_php_detect_clear_surprise_removal_freeze(php_slot);
++	}
+ 
+ 	kfree(event);
+ }
+@@ -843,14 +1040,6 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ 	u16 sts, ctrl;
+ 	int ret;
+ 
+-	/* Allocate workqueue */
+-	php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+-	if (!php_slot->wq) {
+-		SLOT_WARN(php_slot, "Cannot alloc workqueue\n");
+-		pnv_php_disable_irq(php_slot, true);
+-		return;
+-	}
+-
+ 	/* Check PDC (Presence Detection Change) is broken or not */
+ 	ret = of_property_read_u32(php_slot->dn, "ibm,slot-broken-pdc",
+ 				   &broken_pdc);
+@@ -869,7 +1058,7 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ 	ret = request_irq(irq, pnv_php_interrupt, IRQF_SHARED,
+ 			  php_slot->name, php_slot);
+ 	if (ret) {
+-		pnv_php_disable_irq(php_slot, true);
++		pnv_php_disable_irq(php_slot, true, true);
+ 		SLOT_WARN(php_slot, "Error %d enabling IRQ %d\n", ret, irq);
+ 		return;
+ 	}
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index 67db34fd10ee71..01e6aea1b0c7a1 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -1628,7 +1628,7 @@ static int pci_bus_num_vf(struct device *dev)
+  */
+ static int pci_dma_configure(struct device *dev)
+ {
+-	struct pci_driver *driver = to_pci_driver(dev->driver);
++	const struct device_driver *drv = READ_ONCE(dev->driver);
+ 	struct device *bridge;
+ 	int ret = 0;
+ 
+@@ -1645,8 +1645,8 @@ static int pci_dma_configure(struct device *dev)
+ 
+ 	pci_put_host_bridge_device(bridge);
+ 
+-	/* @driver may not be valid when we're called from the IOMMU layer */
+-	if (!ret && dev->driver && !driver->driver_managed_dma) {
++	/* @drv may not be valid when we're called from the IOMMU layer */
++	if (!ret && drv && !to_pci_driver(drv)->driver_managed_dma) {
+ 		ret = iommu_device_use_default_domain(dev);
+ 		if (ret)
+ 			arch_teardown_dma_ops(dev);
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 12215ee72afb68..98d6fccb383e55 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -61,7 +61,7 @@ struct pcie_tlp_log;
+  *    completes before sending a Configuration Request to the device
+  *    immediately below that Port."
+  */
+-#define PCIE_RESET_CONFIG_DEVICE_WAIT_MS	100
++#define PCIE_RESET_CONFIG_WAIT_MS	100
+ 
+ /* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */
+ #define PCIE_MSG_TYPE_R_RC	0
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d7f4ee634263c2..db6e142b082daa 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -105,13 +105,13 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
+ 	    !pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
+ 		return ret;
+ 
+-	pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
+ 	pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+ 	if (!(lnksta & PCI_EXP_LNKSTA_DLLLA) && pcie_lbms_seen(dev, lnksta)) {
+-		u16 oldlnkctl2 = lnkctl2;
++		u16 oldlnkctl2;
+ 
+ 		pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
+ 
++		pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &oldlnkctl2);
+ 		ret = pcie_set_target_speed(dev, PCIE_SPEED_2_5GT, false);
+ 		if (ret) {
+ 			pci_info(dev, "retraining failed\n");
+@@ -123,6 +123,8 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
+ 		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
+ 	}
+ 
++	pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
++
+ 	if ((lnksta & PCI_EXP_LNKSTA_DLLLA) &&
+ 	    (lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT &&
+ 	    pci_match_id(ids, dev)) {
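The quirk reordering above matters because LNKCTL2 is written while forcing the link to 2.5GT/s: the final speed comparison must use a value read back after pcie_set_target_speed(), not the stale pre-retrain copy. In miniature, with read_reg()/retrain() as hypothetical stand-ins:

#include <stdio.h>

static unsigned int hw_reg;

static unsigned int read_reg(void) { return hw_reg; }
static void retrain(void) { hw_reg = 0x1; /* speed field now forced */ }

int main(void)
{
	unsigned int cached = read_reg();	/* pre-retrain snapshot */

	retrain();

	printf("stale: %#x\n", cached);		/* misses the update */
	printf("fresh: %#x\n", read_reg());	/* what the check must use */
	return 0;
}
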
+diff --git a/drivers/perf/arm-ni.c b/drivers/perf/arm-ni.c
+index de7b6cce4d68a8..9396d243415f48 100644
+--- a/drivers/perf/arm-ni.c
++++ b/drivers/perf/arm-ni.c
+@@ -544,6 +544,8 @@ static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_s
+ 		return err;
+ 
+ 	cd->cpu = cpumask_local_spread(0, dev_to_node(ni->dev));
++	irq_set_affinity(cd->irq, cpumask_of(cd->cpu));
++
+ 	cd->pmu = (struct pmu) {
+ 		.module = THIS_MODULE,
+ 		.parent = ni->dev,
+diff --git a/drivers/phy/phy-snps-eusb2.c b/drivers/phy/phy-snps-eusb2.c
+index 751b6d8ba2be29..e78d222eec5f16 100644
+--- a/drivers/phy/phy-snps-eusb2.c
++++ b/drivers/phy/phy-snps-eusb2.c
+@@ -437,6 +437,9 @@ static int qcom_snps_eusb2_hsphy_init(struct phy *p)
+ 	snps_eusb2_hsphy_write_mask(phy->base, QCOM_USB_PHY_HS_PHY_CTRL2,
+ 				    USB2_SUSPEND_N_SEL, 0);
+ 
++	snps_eusb2_hsphy_write_mask(phy->base, QCOM_USB_PHY_CFG0,
++				    CMN_CTRL_OVERRIDE_EN, 0);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+index 6bd1b3c75c779d..d7493c2294ef23 100644
+--- a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
++++ b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+@@ -37,32 +37,13 @@
+ #define EUSB2_TUNE_EUSB_EQU		0x5A
+ #define EUSB2_TUNE_EUSB_HS_COMP_CUR	0x5B
+ 
+-enum eusb2_reg_layout {
+-	TUNE_EUSB_HS_COMP_CUR,
+-	TUNE_EUSB_EQU,
+-	TUNE_EUSB_SLEW,
+-	TUNE_USB2_HS_COMP_CUR,
+-	TUNE_USB2_PREEM,
+-	TUNE_USB2_EQU,
+-	TUNE_USB2_SLEW,
+-	TUNE_SQUELCH_U,
+-	TUNE_HSDISC,
+-	TUNE_RES_FSDIF,
+-	TUNE_IUSB2,
+-	TUNE_USB2_CROSSOVER,
+-	NUM_TUNE_FIELDS,
+-
+-	FORCE_VAL_5 = NUM_TUNE_FIELDS,
+-	FORCE_EN_5,
+-
+-	EN_CTL1,
+-
+-	RPTR_STATUS,
+-	LAYOUT_SIZE,
++struct eusb2_repeater_init_tbl_reg {
++	unsigned int reg;
++	unsigned int value;
+ };
+ 
+ struct eusb2_repeater_cfg {
+-	const u32 *init_tbl;
++	const struct eusb2_repeater_init_tbl_reg *init_tbl;
+ 	int init_tbl_num;
+ 	const char * const *vreg_list;
+ 	int num_vregs;
+@@ -82,16 +63,16 @@ static const char * const pm8550b_vreg_l[] = {
+ 	"vdd18", "vdd3",
+ };
+ 
+-static const u32 pm8550b_init_tbl[NUM_TUNE_FIELDS] = {
+-	[TUNE_IUSB2] = 0x8,
+-	[TUNE_SQUELCH_U] = 0x3,
+-	[TUNE_USB2_PREEM] = 0x5,
++static const struct eusb2_repeater_init_tbl_reg pm8550b_init_tbl[] = {
++	{ EUSB2_TUNE_IUSB2, 0x8 },
++	{ EUSB2_TUNE_SQUELCH_U, 0x3 },
++	{ EUSB2_TUNE_USB2_PREEM, 0x5 },
+ };
+ 
+-static const u32 smb2360_init_tbl[NUM_TUNE_FIELDS] = {
+-	[TUNE_IUSB2] = 0x5,
+-	[TUNE_SQUELCH_U] = 0x3,
+-	[TUNE_USB2_PREEM] = 0x2,
++static const struct eusb2_repeater_init_tbl_reg smb2360_init_tbl[] = {
++	{ EUSB2_TUNE_IUSB2, 0x5 },
++	{ EUSB2_TUNE_SQUELCH_U, 0x3 },
++	{ EUSB2_TUNE_USB2_PREEM, 0x2 },
+ };
+ 
+ static const struct eusb2_repeater_cfg pm8550b_eusb2_cfg = {
+@@ -129,17 +110,10 @@ static int eusb2_repeater_init(struct phy *phy)
+ 	struct eusb2_repeater *rptr = phy_get_drvdata(phy);
+ 	struct device_node *np = rptr->dev->of_node;
+ 	struct regmap *regmap = rptr->regmap;
+-	const u32 *init_tbl = rptr->cfg->init_tbl;
+-	u8 tune_usb2_preem = init_tbl[TUNE_USB2_PREEM];
+-	u8 tune_hsdisc = init_tbl[TUNE_HSDISC];
+-	u8 tune_iusb2 = init_tbl[TUNE_IUSB2];
+ 	u32 base = rptr->base;
+-	u32 val;
++	u32 poll_val;
+ 	int ret;
+-
+-	of_property_read_u8(np, "qcom,tune-usb2-amplitude", &tune_iusb2);
+-	of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &tune_hsdisc);
+-	of_property_read_u8(np, "qcom,tune-usb2-preem", &tune_usb2_preem);
++	u8 val;
+ 
+ 	ret = regulator_bulk_enable(rptr->cfg->num_vregs, rptr->vregs);
+ 	if (ret)
+@@ -147,21 +121,24 @@ static int eusb2_repeater_init(struct phy *phy)
+ 
+ 	regmap_write(regmap, base + EUSB2_EN_CTL1, EUSB2_RPTR_EN);
+ 
+-	regmap_write(regmap, base + EUSB2_TUNE_EUSB_HS_COMP_CUR, init_tbl[TUNE_EUSB_HS_COMP_CUR]);
+-	regmap_write(regmap, base + EUSB2_TUNE_EUSB_EQU, init_tbl[TUNE_EUSB_EQU]);
+-	regmap_write(regmap, base + EUSB2_TUNE_EUSB_SLEW, init_tbl[TUNE_EUSB_SLEW]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_HS_COMP_CUR, init_tbl[TUNE_USB2_HS_COMP_CUR]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_EQU, init_tbl[TUNE_USB2_EQU]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_SLEW, init_tbl[TUNE_USB2_SLEW]);
+-	regmap_write(regmap, base + EUSB2_TUNE_SQUELCH_U, init_tbl[TUNE_SQUELCH_U]);
+-	regmap_write(regmap, base + EUSB2_TUNE_RES_FSDIF, init_tbl[TUNE_RES_FSDIF]);
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_CROSSOVER, init_tbl[TUNE_USB2_CROSSOVER]);
+-
+-	regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, tune_usb2_preem);
+-	regmap_write(regmap, base + EUSB2_TUNE_HSDISC, tune_hsdisc);
+-	regmap_write(regmap, base + EUSB2_TUNE_IUSB2, tune_iusb2);
+-
+-	ret = regmap_read_poll_timeout(regmap, base + EUSB2_RPTR_STATUS, val, val & RPTR_OK, 10, 5);
++	/* Write registers from init table */
++	for (int i = 0; i < rptr->cfg->init_tbl_num; i++)
++		regmap_write(regmap, base + rptr->cfg->init_tbl[i].reg,
++			     rptr->cfg->init_tbl[i].value);
++
++	/* Override registers from devicetree values */
++	if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
++		regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, val);
++
++	if (!of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &val))
++		regmap_write(regmap, base + EUSB2_TUNE_HSDISC, val);
++
++	if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
++		regmap_write(regmap, base + EUSB2_TUNE_IUSB2, val);
++
++	/* Wait for status OK */
++	ret = regmap_read_poll_timeout(regmap, base + EUSB2_RPTR_STATUS, poll_val,
++				       poll_val & RPTR_OK, 10, 5);
+ 	if (ret)
+ 		dev_err(rptr->dev, "initialization timed-out\n");
+ 
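The repeater rework above swaps a position-indexed u32 array for a table of explicit {reg, value} pairs, so only registers actually named in the table are written, and devicetree overrides are applied only when of_property_read_u8() finds the property (it returns 0 on success). A userspace model of the shape, with reg_write() and lookup_u8() as hypothetical stand-ins for regmap_write() and of_property_read_u8(), and made-up register offsets:

#include <stdio.h>
#include <stdbool.h>

struct init_reg { unsigned int reg, value; };

static const struct init_reg init_tbl[] = {
	{ 0x50, 0x8 },	/* hypothetical tuning register */
	{ 0x54, 0x3 },	/* hypothetical squelch register */
};

static void reg_write(unsigned int reg, unsigned int val)
{
	printf("write %#x = %#x\n", reg, val);
}

/* Fills *val and returns true only when the property exists. */
static bool lookup_u8(const char *prop, unsigned char *val)
{
	(void)prop; (void)val;
	return false;	/* pretend no override is present */
}

int main(void)
{
	unsigned char val;

	for (size_t i = 0; i < sizeof(init_tbl) / sizeof(init_tbl[0]); i++)
		reg_write(init_tbl[i].reg, init_tbl[i].value);

	if (lookup_u8("tune-amplitude", &val))	/* optional override */
		reg_write(0x50, val);
	return 0;
}
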
+diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c
+index c372a2a24be4bb..9dc2da8056b722 100644
+--- a/drivers/pinctrl/berlin/berlin.c
++++ b/drivers/pinctrl/berlin/berlin.c
+@@ -204,6 +204,7 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 	const struct berlin_desc_group *desc_group;
+ 	const struct berlin_desc_function *desc_function;
+ 	int i, max_functions = 0;
++	struct pinfunction *new_functions;
+ 
+ 	pctrl->nfunctions = 0;
+ 
+@@ -229,12 +230,15 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	pctrl->functions = krealloc(pctrl->functions,
++	new_functions = krealloc(pctrl->functions,
+ 				    pctrl->nfunctions * sizeof(*pctrl->functions),
+ 				    GFP_KERNEL);
+-	if (!pctrl->functions)
++	if (!new_functions) {
++		kfree(pctrl->functions);
+ 		return -ENOMEM;
++	}
+ 
++	pctrl->functions = new_functions;
+ 	/* map functions to their groups */
+ 	for (i = 0; i < pctrl->desc->ngroups; i++) {
+ 		desc_group = pctrl->desc->groups + i;
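The berlin fix above is the standard krealloc() leak pattern: assigning the result straight back to the only owning pointer loses the original buffer when the allocation fails. The same rule holds for plain realloc():

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t n = 4;
	int *funcs = malloc(n * sizeof(*funcs));
	int *tmp;

	if (!funcs)
		return 1;

	/* Wrong: funcs = realloc(funcs, ...) leaks the old block on failure. */
	tmp = realloc(funcs, 2 * n * sizeof(*funcs));
	if (!tmp) {
		free(funcs);	/* old block is still valid; release it */
		return 1;
	}
	funcs = tmp;

	printf("grown to %zu entries\n", 2 * n);
	free(funcs);
	return 0;
}

The sunxi hunk further down applies the identical fix to its pinctrl_map array.
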
+diff --git a/drivers/pinctrl/cirrus/pinctrl-madera-core.c b/drivers/pinctrl/cirrus/pinctrl-madera-core.c
+index 73ec5b9beb49f2..d19ef13224cca7 100644
+--- a/drivers/pinctrl/cirrus/pinctrl-madera-core.c
++++ b/drivers/pinctrl/cirrus/pinctrl-madera-core.c
+@@ -1061,8 +1061,9 @@ static int madera_pin_probe(struct platform_device *pdev)
+ 
+ 	/* if the configuration is provided through pdata, apply it */
+ 	if (pdata->gpio_configs) {
+-		ret = pinctrl_register_mappings(pdata->gpio_configs,
+-						pdata->n_gpio_configs);
++		ret = devm_pinctrl_register_mappings(priv->dev,
++						     pdata->gpio_configs,
++						     pdata->n_gpio_configs);
+ 		if (ret)
+ 			return dev_err_probe(priv->dev, ret,
+ 						"Failed to register pdata mappings\n");
+@@ -1081,17 +1082,8 @@ static int madera_pin_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static void madera_pin_remove(struct platform_device *pdev)
+-{
+-	struct madera_pin_private *priv = platform_get_drvdata(pdev);
+-
+-	if (priv->madera->pdata.gpio_configs)
+-		pinctrl_unregister_mappings(priv->madera->pdata.gpio_configs);
+-}
+-
+ static struct platform_driver madera_pin_driver = {
+ 	.probe = madera_pin_probe,
+-	.remove = madera_pin_remove,
+ 	.driver = {
+ 		.name = "madera-pinctrl",
+ 	},
+diff --git a/drivers/pinctrl/pinctrl-k230.c b/drivers/pinctrl/pinctrl-k230.c
+index a9b4627b46b012..d716f23d837f7a 100644
+--- a/drivers/pinctrl/pinctrl-k230.c
++++ b/drivers/pinctrl/pinctrl-k230.c
+@@ -477,6 +477,10 @@ static int k230_pinctrl_parse_groups(struct device_node *np,
+ 	grp->name = np->name;
+ 
+ 	list = of_get_property(np, "pinmux", &size);
++	if (!list) {
++		dev_err(dev, "failed to get pinmux property\n");
++		return -EINVAL;
++	}
+ 	size /= sizeof(*list);
+ 
+ 	grp->num_pins = size;
+@@ -586,6 +590,7 @@ static int k230_pinctrl_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct k230_pinctrl *info;
+ 	struct pinctrl_desc *pctl;
++	int ret;
+ 
+ 	info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
+ 	if (!info)
+@@ -611,19 +616,21 @@ static int k230_pinctrl_probe(struct platform_device *pdev)
+ 		return dev_err_probe(dev, PTR_ERR(info->regmap_base),
+ 				     "failed to init regmap\n");
+ 
++	ret = k230_pinctrl_parse_dt(pdev, info);
++	if (ret)
++		return ret;
++
+ 	info->pctl_dev = devm_pinctrl_register(dev, pctl, info);
+ 	if (IS_ERR(info->pctl_dev))
+ 		return dev_err_probe(dev, PTR_ERR(info->pctl_dev),
+ 				     "devm_pinctrl_register failed\n");
+ 
+-	k230_pinctrl_parse_dt(pdev, info);
+-
+ 	return 0;
+ }
+ 
+ static const struct of_device_id k230_dt_ids[] = {
+ 	{ .compatible = "canaan,k230-pinctrl", },
+-	{ /* sintenel */ }
++	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, k230_dt_ids);
+ 
+diff --git a/drivers/pinctrl/pinmux.c b/drivers/pinctrl/pinmux.c
+index 0743190da59e81..2c31e7f2a27a86 100644
+--- a/drivers/pinctrl/pinmux.c
++++ b/drivers/pinctrl/pinmux.c
+@@ -236,18 +236,7 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ 			if (desc->mux_usecount)
+ 				return NULL;
+ 		}
+-	}
+-
+-	/*
+-	 * If there is no kind of request function for the pin we just assume
+-	 * we got it by default and proceed.
+-	 */
+-	if (gpio_range && ops->gpio_disable_free)
+-		ops->gpio_disable_free(pctldev, gpio_range, pin);
+-	else if (ops->free)
+-		ops->free(pctldev, pin);
+ 
+-	scoped_guard(mutex, &desc->mux_lock) {
+ 		if (gpio_range) {
+ 			owner = desc->gpio_owner;
+ 			desc->gpio_owner = NULL;
+@@ -258,6 +247,15 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ 		}
+ 	}
+ 
++	/*
++	 * If there is no kind of request function for the pin we just assume
++	 * we got it by default and proceed.
++	 */
++	if (gpio_range && ops->gpio_disable_free)
++		ops->gpio_disable_free(pctldev, gpio_range, pin);
++	else if (ops->free)
++		ops->free(pctldev, pin);
++
+ 	module_put(pctldev->owner);
+ 
+ 	return owner;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index bf8612d72daacd..d63859a2a64ec7 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -408,6 +408,7 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	const char *function, *pin_prop;
+ 	const char *group;
+ 	int ret, npins, nmaps, configlen = 0, i = 0;
++	struct pinctrl_map *new_map;
+ 
+ 	*map = NULL;
+ 	*num_maps = 0;
+@@ -482,9 +483,13 @@ static int sunxi_pctrl_dt_node_to_map(struct pinctrl_dev *pctldev,
+	 * We now have the number of maps we need, so we can resize our
+ 	 * map array
+ 	 */
+-	*map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
+-	if (!*map)
+-		return -ENOMEM;
++	new_map = krealloc(*map, i * sizeof(struct pinctrl_map), GFP_KERNEL);
++	if (!new_map) {
++		ret = -ENOMEM;
++		goto err_free_map;
++	}
++
++	*map = new_map;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
+index 7233b654bbad15..d046e87521738e 100644
+--- a/drivers/platform/x86/intel/pmt/class.c
++++ b/drivers/platform/x86/intel/pmt/class.c
+@@ -97,7 +97,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
+ 	if (count > entry->size - off)
+ 		count = entry->size - off;
+ 
+-	count = pmt_telem_read_mmio(entry->ep->pcidev, entry->cb, entry->header.guid, buf,
++	count = pmt_telem_read_mmio(entry->pcidev, entry->cb, entry->header.guid, buf,
+ 				    entry->base, off, count);
+ 
+ 	return count;
+@@ -252,6 +252,7 @@ static int intel_pmt_populate_entry(struct intel_pmt_entry *entry,
+ 		return -EINVAL;
+ 	}
+ 
++	entry->pcidev = pci_dev;
+ 	entry->guid = header->guid;
+ 	entry->size = header->size;
+ 	entry->cb = ivdev->priv_data;
+diff --git a/drivers/platform/x86/intel/pmt/class.h b/drivers/platform/x86/intel/pmt/class.h
+index b2006d57779d66..f6ce80c4e05111 100644
+--- a/drivers/platform/x86/intel/pmt/class.h
++++ b/drivers/platform/x86/intel/pmt/class.h
+@@ -39,6 +39,7 @@ struct intel_pmt_header {
+ 
+ struct intel_pmt_entry {
+ 	struct telem_endpoint	*ep;
++	struct pci_dev		*pcidev;
+ 	struct intel_pmt_header	header;
+ 	struct bin_attribute	pmt_bin_attr;
+ 	struct kobject		*kobj;
+diff --git a/drivers/platform/x86/oxpec.c b/drivers/platform/x86/oxpec.c
+index 06759036945d43..9839e8cb82ce4c 100644
+--- a/drivers/platform/x86/oxpec.c
++++ b/drivers/platform/x86/oxpec.c
+@@ -58,7 +58,8 @@ enum oxp_board {
+ 	oxp_mini_amd_a07,
+ 	oxp_mini_amd_pro,
+ 	oxp_x1,
+-	oxp_g1,
++	oxp_g1_i,
++	oxp_g1_a,
+ };
+ 
+ static enum oxp_board board;
+@@ -247,14 +248,14 @@ static const struct dmi_system_id dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER G1 A"),
+ 		},
+-		.driver_data = (void *)oxp_g1,
++		.driver_data = (void *)oxp_g1_a,
+ 	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER G1 i"),
+ 		},
+-		.driver_data = (void *)oxp_g1,
++		.driver_data = (void *)oxp_g1_i,
+ 	},
+ 	{
+ 		.matches = {
+@@ -352,7 +353,8 @@ static umode_t tt_toggle_is_visible(struct kobject *kobj,
+ 	case oxp_mini_amd_a07:
+ 	case oxp_mini_amd_pro:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
++	case oxp_g1_a:
+ 		return attr->mode;
+ 	default:
+ 		break;
+@@ -381,12 +383,13 @@ static ssize_t tt_toggle_store(struct device *dev,
+ 	case aok_zoe_a1:
+ 	case oxp_fly:
+ 	case oxp_mini_amd_pro:
++	case oxp_g1_a:
+ 		reg = OXP_TURBO_SWITCH_REG;
+ 		mask = OXP_TURBO_TAKE_VAL;
+ 		break;
+ 	case oxp_2:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
+ 		reg = OXP_2_TURBO_SWITCH_REG;
+ 		mask = OXP_TURBO_TAKE_VAL;
+ 		break;
+@@ -426,12 +429,13 @@ static ssize_t tt_toggle_show(struct device *dev,
+ 	case aok_zoe_a1:
+ 	case oxp_fly:
+ 	case oxp_mini_amd_pro:
++	case oxp_g1_a:
+ 		reg = OXP_TURBO_SWITCH_REG;
+ 		mask = OXP_TURBO_TAKE_VAL;
+ 		break;
+ 	case oxp_2:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
+ 		reg = OXP_2_TURBO_SWITCH_REG;
+ 		mask = OXP_TURBO_TAKE_VAL;
+ 		break;
+@@ -520,7 +524,8 @@ static bool oxp_psy_ext_supported(void)
+ {
+ 	switch (board) {
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
++	case oxp_g1_a:
+ 	case oxp_fly:
+ 		return true;
+ 	default:
+@@ -659,7 +664,8 @@ static int oxp_pwm_enable(void)
+ 	case oxp_mini_amd_a07:
+ 	case oxp_mini_amd_pro:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
++	case oxp_g1_a:
+ 		return write_to_ec(OXP_SENSOR_PWM_ENABLE_REG, PWM_MODE_MANUAL);
+ 	default:
+ 		return -EINVAL;
+@@ -686,7 +692,8 @@ static int oxp_pwm_disable(void)
+ 	case oxp_mini_amd_a07:
+ 	case oxp_mini_amd_pro:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
++	case oxp_g1_a:
+ 		return write_to_ec(OXP_SENSOR_PWM_ENABLE_REG, PWM_MODE_AUTO);
+ 	default:
+ 		return -EINVAL;
+@@ -713,7 +720,8 @@ static int oxp_pwm_read(long *val)
+ 	case oxp_mini_amd_a07:
+ 	case oxp_mini_amd_pro:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
++	case oxp_g1_a:
+ 		return read_from_ec(OXP_SENSOR_PWM_ENABLE_REG, 1, val);
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -742,7 +750,7 @@ static int oxp_pwm_fan_speed(long *val)
+ 		return read_from_ec(ORANGEPI_SENSOR_FAN_REG, 2, val);
+ 	case oxp_2:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
+ 		return read_from_ec(OXP_2_SENSOR_FAN_REG, 2, val);
+ 	case aok_zoe_a1:
+ 	case aya_neo_2:
+@@ -757,6 +765,7 @@ static int oxp_pwm_fan_speed(long *val)
+ 	case oxp_mini_amd:
+ 	case oxp_mini_amd_a07:
+ 	case oxp_mini_amd_pro:
++	case oxp_g1_a:
+ 		return read_from_ec(OXP_SENSOR_FAN_REG, 2, val);
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -776,7 +785,7 @@ static int oxp_pwm_input_write(long val)
+ 		return write_to_ec(ORANGEPI_SENSOR_PWM_REG, val);
+ 	case oxp_2:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
+ 		/* scale to range [0-184] */
+ 		val = (val * 184) / 255;
+ 		return write_to_ec(OXP_SENSOR_PWM_REG, val);
+@@ -796,6 +805,7 @@ static int oxp_pwm_input_write(long val)
+ 	case aok_zoe_a1:
+ 	case oxp_fly:
+ 	case oxp_mini_amd_pro:
++	case oxp_g1_a:
+ 		return write_to_ec(OXP_SENSOR_PWM_REG, val);
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -816,7 +826,7 @@ static int oxp_pwm_input_read(long *val)
+ 		break;
+ 	case oxp_2:
+ 	case oxp_x1:
+-	case oxp_g1:
++	case oxp_g1_i:
+ 		ret = read_from_ec(OXP_SENSOR_PWM_REG, 1, val);
+ 		if (ret)
+ 			return ret;
+@@ -842,6 +852,7 @@ static int oxp_pwm_input_read(long *val)
+ 	case aok_zoe_a1:
+ 	case oxp_fly:
+ 	case oxp_mini_amd_pro:
++	case oxp_g1_a:
+ 	default:
+ 		ret = read_from_ec(OXP_SENSOR_PWM_REG, 1, val);
+ 		if (ret)
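Beyond splitting oxp_g1 into per-variant IDs (the ONEXPLAYER G1 Intel and AMD boards drive different EC registers), the oxp_g1_i paths above scale the generic 0-255 PWM value into the EC's 0-184 range with integer arithmetic. A quick check of the endpoints and midpoint:

#include <stdio.h>

int main(void)
{
	/* 0-255 -> 0-184 scaling used for the oxp_2/oxp_x1/oxp_g1_i EC */
	long samples[] = { 0, 127, 255 };

	for (int i = 0; i < 3; i++)
		printf("%3ld -> %3ld\n", samples[i], (samples[i] * 184) / 255);
	/* prints 0 -> 0, 127 -> 91, 255 -> 184 */
	return 0;
}
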
+diff --git a/drivers/power/reset/Kconfig b/drivers/power/reset/Kconfig
+index e71f0af4e378c1..95f140ee7077d9 100644
+--- a/drivers/power/reset/Kconfig
++++ b/drivers/power/reset/Kconfig
+@@ -218,6 +218,7 @@ config POWER_RESET_ST
+ 
+ config POWER_RESET_TORADEX_EC
+ 	tristate "Toradex Embedded Controller power-off and reset driver"
++	depends on ARCH_MXC || COMPILE_TEST
+ 	depends on I2C
+ 	select REGMAP_I2C
+ 	help
+diff --git a/drivers/power/sequencing/pwrseq-qcom-wcn.c b/drivers/power/sequencing/pwrseq-qcom-wcn.c
+index e8f5030f2639a6..7d8d6b3407495c 100644
+--- a/drivers/power/sequencing/pwrseq-qcom-wcn.c
++++ b/drivers/power/sequencing/pwrseq-qcom-wcn.c
+@@ -155,7 +155,7 @@ static const struct pwrseq_unit_data pwrseq_qcom_wcn_bt_unit_data = {
+ };
+ 
+ static const struct pwrseq_unit_data pwrseq_qcom_wcn6855_bt_unit_data = {
+-	.name = "wlan-enable",
++	.name = "bluetooth-enable",
+ 	.deps = pwrseq_qcom_wcn6855_unit_deps,
+ 	.enable = pwrseq_qcom_wcn_bt_enable,
+ 	.disable = pwrseq_qcom_wcn_bt_disable,
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index 13300dc60baf9b..d0c3008db53490 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -689,9 +689,8 @@ static void cpcap_usb_detect(struct work_struct *work)
+ 		struct power_supply *battery;
+ 
+ 		battery = power_supply_get_by_name("battery");
+-		if (IS_ERR_OR_NULL(battery)) {
+-			dev_err(ddata->dev, "battery power_supply not available %li\n",
+-					PTR_ERR(battery));
++		if (!battery) {
++			dev_err(ddata->dev, "battery power_supply not available\n");
+ 			return;
+ 		}
+ 
+diff --git a/drivers/power/supply/max14577_charger.c b/drivers/power/supply/max14577_charger.c
+index 1cef2f860b5fcc..63077d38ea30a7 100644
+--- a/drivers/power/supply/max14577_charger.c
++++ b/drivers/power/supply/max14577_charger.c
+@@ -501,7 +501,7 @@ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ static struct max14577_charger_platform_data *max14577_charger_dt_init(
+ 		struct platform_device *pdev)
+ {
+-	return NULL;
++	return ERR_PTR(-ENODATA);
+ }
+ #endif /* CONFIG_OF */
+ 
+@@ -572,7 +572,7 @@ static int max14577_charger_probe(struct platform_device *pdev)
+ 	chg->max14577 = max14577;
+ 
+ 	chg->pdata = max14577_charger_dt_init(pdev);
+-	if (IS_ERR_OR_NULL(chg->pdata))
++	if (IS_ERR(chg->pdata))
+ 		return PTR_ERR(chg->pdata);
+ 
+ 	ret = max14577_charger_reg_init(chg);
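Both hunks above are about matching an API's error convention to its caller's check: power_supply_get_by_name() returns NULL on failure (so IS_ERR_OR_NULL() plus PTR_ERR() printed garbage), while the !CONFIG_OF stub of max14577_charger_dt_init() now returns ERR_PTR(-ENODATA) so the probe path's IS_ERR()/PTR_ERR() pair is meaningful. A userspace model of the two conventions, with ERR_PTR/IS_ERR simplified for illustration:

#include <stdio.h>
#include <stdint.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Convention 1: lookup that reports failure with NULL. */
static void *get_by_name(const char *name) { (void)name; return NULL; }

/* Convention 2: init that reports failure as an encoded errno. */
static void *dt_init(void) { return ERR_PTR(-61); /* -ENODATA */ }

int main(void)
{
	void *batt = get_by_name("battery");
	void *pdata = dt_init();

	if (!batt)		/* NULL check, not IS_ERR() */
		printf("battery not available\n");
	if (IS_ERR(pdata))	/* ERR_PTR check, not !pdata */
		printf("pdata error %ld\n", PTR_ERR(pdata));
	return 0;
}
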
+diff --git a/drivers/power/supply/max1720x_battery.c b/drivers/power/supply/max1720x_battery.c
+index ea3912fd1de8bf..68b5314ecf3a23 100644
+--- a/drivers/power/supply/max1720x_battery.c
++++ b/drivers/power/supply/max1720x_battery.c
+@@ -288,9 +288,10 @@ static int max172xx_voltage_to_ps(unsigned int reg)
+ 	return reg * 1250;	/* in uV */
+ }
+ 
+-static int max172xx_capacity_to_ps(unsigned int reg)
++static int max172xx_capacity_to_ps(unsigned int reg,
++				   struct max1720x_device_info *info)
+ {
+-	return reg * 500;	/* in uAh */
++	return reg * (500000 / info->rsense);	/* in uAh */
+ }
+ 
+ /*
+@@ -394,11 +395,11 @@ static int max1720x_battery_get_property(struct power_supply *psy,
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN:
+ 		ret = regmap_read(info->regmap, MAX172XX_DESIGN_CAP, &reg_val);
+-		val->intval = max172xx_capacity_to_ps(reg_val);
++		val->intval = max172xx_capacity_to_ps(reg_val, info);
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_AVG:
+ 		ret = regmap_read(info->regmap, MAX172XX_REPCAP, &reg_val);
+-		val->intval = max172xx_capacity_to_ps(reg_val);
++		val->intval = max172xx_capacity_to_ps(reg_val, info);
+ 		break;
+ 	case POWER_SUPPLY_PROP_TIME_TO_EMPTY_AVG:
+ 		ret = regmap_read(info->regmap, MAX172XX_TTE, &reg_val);
+@@ -422,7 +423,7 @@ static int max1720x_battery_get_property(struct power_supply *psy,
+ 		break;
+ 	case POWER_SUPPLY_PROP_CHARGE_FULL:
+ 		ret = regmap_read(info->regmap, MAX172XX_FULL_CAP, &reg_val);
+-		val->intval = max172xx_capacity_to_ps(reg_val);
++		val->intval = max172xx_capacity_to_ps(reg_val, info);
+ 		break;
+ 	case POWER_SUPPLY_PROP_MODEL_NAME:
+ 		ret = regmap_read(info->regmap, MAX172XX_DEV_NAME, &reg_val);
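The max1720x change above folds the sense resistor into the capacity conversion. The fuel gauge's capacity LSB is 5 uVh divided by Rsense, so the old hard-coded reg * 500 (uAh) was only correct for a 10 mOhm shunt; 500000 / info->rsense generalizes it. A worked check, assuming hypothetically that info->rsense is stored in units where a 10 mOhm shunt reads as 1000:

#include <stdio.h>

int main(void)
{
	unsigned int reg = 4000;	/* raw DESIGN_CAP counts */
	unsigned int rsense = 1000;	/* 10 mOhm, per the assumption above */

	/* 5 uVh / 0.010 Ohm = 500 uAh per count, so 4000 counts = 2 Ah */
	printf("%u uAh\n", reg * (500000 / rsense));
	return 0;
}
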
+diff --git a/drivers/power/supply/qcom_pmi8998_charger.c b/drivers/power/supply/qcom_pmi8998_charger.c
+index c2f8f2e2439831..cd3cb473c70dd1 100644
+--- a/drivers/power/supply/qcom_pmi8998_charger.c
++++ b/drivers/power/supply/qcom_pmi8998_charger.c
+@@ -1016,7 +1016,9 @@ static int smb2_probe(struct platform_device *pdev)
+ 	if (rc < 0)
+ 		return rc;
+ 
+-	rc = dev_pm_set_wake_irq(chip->dev, chip->cable_irq);
++	devm_device_init_wakeup(chip->dev);
++
++	rc = devm_pm_set_wake_irq(chip->dev, chip->cable_irq);
+ 	if (rc < 0)
+ 		return dev_err_probe(chip->dev, rc, "Couldn't set wake irq\n");
+ 
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index 6b6f51b215501b..99390ec1481f83 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -96,6 +96,8 @@ static u64 get_pd_power_uw(struct dtpm *dtpm)
+ 	int i;
+ 
+ 	pd = em_cpu_get(dtpm_cpu->cpu);
++	if (!pd)
++		return 0;
+ 
+ 	pd_mask = em_span_cpus(pd);
+ 
+diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
+index 6a02245ea35fec..9463232af8d2e6 100644
+--- a/drivers/pps/pps.c
++++ b/drivers/pps/pps.c
+@@ -41,6 +41,9 @@ static __poll_t pps_cdev_poll(struct file *file, poll_table *wait)
+ 
+ 	poll_wait(file, &pps->queue, wait);
+ 
++	if (pps->last_fetched_ev == pps->last_ev)
++		return 0;
++
+ 	return EPOLLIN | EPOLLRDNORM;
+ }
+ 
+@@ -186,9 +189,11 @@ static long pps_cdev_ioctl(struct file *file,
+ 		if (err)
+ 			return err;
+ 
+-		/* Return the fetched timestamp */
++		/* Return the fetched timestamp and save last fetched event */
+ 		spin_lock_irq(&pps->lock);
+ 
++		pps->last_fetched_ev = pps->last_ev;
++
+ 		fdata.info.assert_sequence = pps->assert_sequence;
+ 		fdata.info.clear_sequence = pps->clear_sequence;
+ 		fdata.info.assert_tu = pps->assert_tu;
+@@ -272,9 +277,11 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ 		if (err)
+ 			return err;
+ 
+-		/* Return the fetched timestamp */
++		/* Return the fetched timestamp and save last fetched event */
+ 		spin_lock_irq(&pps->lock);
+ 
++		pps->last_fetched_ev = pps->last_ev;
++
+ 		compat.info.assert_sequence = pps->assert_sequence;
+ 		compat.info.clear_sequence = pps->clear_sequence;
+ 		compat.info.current_mode = pps->current_mode;
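The pps hunks close a busy-poll hole: pps_cdev_poll() used to report readable unconditionally, so a poll()+fetch loop would spin on the same event forever. Now poll() compares the newest event against the last one fetched, and both ioctl paths record what they handed out under the same spinlock that publishes the timestamp. A userspace model of the counter handshake:

#include <stdio.h>

struct src { unsigned int last_ev, last_fetched_ev; };

static int ready(const struct src *s)
{
	return s->last_fetched_ev != s->last_ev;
}

static void fetch(struct src *s)
{
	s->last_fetched_ev = s->last_ev;	/* record what was handed out */
}

int main(void)
{
	struct src s = { 0, 0 };

	printf("ready? %d\n", ready(&s));	/* 0: nothing new */
	s.last_ev++;				/* an event arrives */
	printf("ready? %d\n", ready(&s));	/* 1 */
	fetch(&s);
	printf("ready? %d\n", ready(&s));	/* 0: no busy loop */
	return 0;
}
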
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index 83962a114dc9fd..48a0d3a69ed080 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -214,7 +214,7 @@ config QCOM_Q6V5_MSS
+ 	  handled by QCOM_Q6V5_PAS driver.
+ 
+ config QCOM_Q6V5_PAS
+-	tristate "Qualcomm Hexagon v5 Peripheral Authentication Service support"
++	tristate "Qualcomm Peripheral Authentication Service support"
+ 	depends on OF && ARCH_QCOM
+ 	depends on QCOM_SMEM
+ 	depends on RPMSG_QCOM_SMD || RPMSG_QCOM_SMD=n
+@@ -229,11 +229,10 @@ config QCOM_Q6V5_PAS
+ 	select QCOM_RPROC_COMMON
+ 	select QCOM_SCM
+ 	help
+-	  Say y here to support the TrustZone based Peripheral Image Loader
+-	  for the Qualcomm Hexagon v5 based remote processors. This is commonly
+-	  used to control subsystems such as ADSP (Audio DSP),
+-	  CDSP (Compute DSP), MPSS (Modem Peripheral SubSystem), and
+-	  SLPI (Sensor Low Power Island).
++	  Say y here to support the TrustZone based Peripheral Image Loader for
++	  the Qualcomm remote processors. This is commonly used to control
++	  subsystems such as ADSP (Audio DSP), CDSP (Compute DSP), MPSS (Modem
++	  Peripheral SubSystem), and SLPI (Sensor Low Power Island).
+ 
+ config QCOM_Q6V5_WCSS
+ 	tristate "Qualcomm Hexagon based WCSS Peripheral Image Loader"
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index b306f223127c45..02e29171cbbee2 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Qualcomm ADSP/SLPI Peripheral Image Loader for MSM8974 and MSM8996
++ * Qualcomm Peripheral Authentication Service remoteproc driver
+  *
+  * Copyright (C) 2016 Linaro Ltd
+  * Copyright (C) 2014 Sony Mobile Communications AB
+@@ -31,11 +31,11 @@
+ #include "qcom_q6v5.h"
+ #include "remoteproc_internal.h"
+ 
+-#define ADSP_DECRYPT_SHUTDOWN_DELAY_MS	100
++#define QCOM_PAS_DECRYPT_SHUTDOWN_DELAY_MS	100
+ 
+ #define MAX_ASSIGN_COUNT 3
+ 
+-struct adsp_data {
++struct qcom_pas_data {
+ 	int crash_reason_smem;
+ 	const char *firmware_name;
+ 	const char *dtb_firmware_name;
+@@ -60,7 +60,7 @@ struct adsp_data {
+ 	int region_assign_vmid;
+ };
+ 
+-struct qcom_adsp {
++struct qcom_pas {
+ 	struct device *dev;
+ 	struct rproc *rproc;
+ 
+@@ -119,36 +119,37 @@ struct qcom_adsp {
+ 	struct qcom_scm_pas_metadata dtb_pas_metadata;
+ };
+ 
+-static void adsp_segment_dump(struct rproc *rproc, struct rproc_dump_segment *segment,
+-		       void *dest, size_t offset, size_t size)
++static void qcom_pas_segment_dump(struct rproc *rproc,
++				  struct rproc_dump_segment *segment,
++				  void *dest, size_t offset, size_t size)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int total_offset;
+ 
+-	total_offset = segment->da + segment->offset + offset - adsp->mem_phys;
+-	if (total_offset < 0 || total_offset + size > adsp->mem_size) {
+-		dev_err(adsp->dev,
++	total_offset = segment->da + segment->offset + offset - pas->mem_phys;
++	if (total_offset < 0 || total_offset + size > pas->mem_size) {
++		dev_err(pas->dev,
+ 			"invalid copy request for segment %pad with offset %zu and size %zu\n",
+ 			&segment->da, offset, size);
+ 		memset(dest, 0xff, size);
+ 		return;
+ 	}
+ 
+-	memcpy_fromio(dest, adsp->mem_region + total_offset, size);
++	memcpy_fromio(dest, pas->mem_region + total_offset, size);
+ }
+ 
+-static void adsp_minidump(struct rproc *rproc)
++static void qcom_pas_minidump(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 
+ 	if (rproc->dump_conf == RPROC_COREDUMP_DISABLED)
+ 		return;
+ 
+-	qcom_minidump(rproc, adsp->minidump_id, adsp_segment_dump);
++	qcom_minidump(rproc, pas->minidump_id, qcom_pas_segment_dump);
+ }
+ 
+-static int adsp_pds_enable(struct qcom_adsp *adsp, struct device **pds,
+-			   size_t pd_count)
++static int qcom_pas_pds_enable(struct qcom_pas *pas, struct device **pds,
++			       size_t pd_count)
+ {
+ 	int ret;
+ 	int i;
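The renamed segment-dump helper keeps its two-sided bounds check: it rejects both a negative computed offset and a copy that would run past mem_size, poisoning the destination with 0xff instead of reading out of range. In miniature:

#include <stdio.h>
#include <string.h>

#define MEM_SIZE 64

static char region[MEM_SIZE];

static void dump(char *dest, long offset, size_t size)
{
	if (offset < 0 || (size_t)offset + size > MEM_SIZE) {
		memset(dest, 0xff, size);	/* poison, don't copy */
		return;
	}
	memcpy(dest, region + offset, size);
}

int main(void)
{
	char buf[16];

	dump(buf, 60, sizeof(buf));	/* 60 + 16 > 64: rejected */
	printf("first byte: %#hhx\n", (unsigned char)buf[0]);	/* 0xff */
	return 0;
}
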
+@@ -174,8 +175,8 @@ static int adsp_pds_enable(struct qcom_adsp *adsp, struct device **pds,
+ 	return ret;
+ };
+ 
+-static void adsp_pds_disable(struct qcom_adsp *adsp, struct device **pds,
+-			     size_t pd_count)
++static void qcom_pas_pds_disable(struct qcom_pas *pas, struct device **pds,
++				 size_t pd_count)
+ {
+ 	int i;
+ 
+@@ -185,65 +186,65 @@ static void adsp_pds_disable(struct qcom_adsp *adsp, struct device **pds,
+ 	}
+ }
+ 
+-static int adsp_shutdown_poll_decrypt(struct qcom_adsp *adsp)
++static int qcom_pas_shutdown_poll_decrypt(struct qcom_pas *pas)
+ {
+ 	unsigned int retry_num = 50;
+ 	int ret;
+ 
+ 	do {
+-		msleep(ADSP_DECRYPT_SHUTDOWN_DELAY_MS);
+-		ret = qcom_scm_pas_shutdown(adsp->pas_id);
++		msleep(QCOM_PAS_DECRYPT_SHUTDOWN_DELAY_MS);
++		ret = qcom_scm_pas_shutdown(pas->pas_id);
+ 	} while (ret == -EINVAL && --retry_num);
+ 
+ 	return ret;
+ }
+ 
+-static int adsp_unprepare(struct rproc *rproc)
++static int qcom_pas_unprepare(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 
+ 	/*
+-	 * adsp_load() did pass pas_metadata to the SCM driver for storing
++	 * qcom_pas_load() did pass pas_metadata to the SCM driver for storing
+ 	 * metadata context. It might have been released already if
+ 	 * auth_and_reset() was successful, but in other cases clean it up
+ 	 * here.
+ 	 */
+-	qcom_scm_pas_metadata_release(&adsp->pas_metadata);
+-	if (adsp->dtb_pas_id)
+-		qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->pas_metadata);
++	if (pas->dtb_pas_id)
++		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_load(struct rproc *rproc, const struct firmware *fw)
++static int qcom_pas_load(struct rproc *rproc, const struct firmware *fw)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int ret;
+ 
+-	/* Store firmware handle to be used in adsp_start() */
+-	adsp->firmware = fw;
++	/* Store firmware handle to be used in qcom_pas_start() */
++	pas->firmware = fw;
+ 
+-	if (adsp->lite_pas_id)
+-		ret = qcom_scm_pas_shutdown(adsp->lite_pas_id);
++	if (pas->lite_pas_id)
++		ret = qcom_scm_pas_shutdown(pas->lite_pas_id);
+ 
+-	if (adsp->dtb_pas_id) {
+-		ret = request_firmware(&adsp->dtb_firmware, adsp->dtb_firmware_name, adsp->dev);
++	if (pas->dtb_pas_id) {
++		ret = request_firmware(&pas->dtb_firmware, pas->dtb_firmware_name, pas->dev);
+ 		if (ret) {
+-			dev_err(adsp->dev, "request_firmware failed for %s: %d\n",
+-				adsp->dtb_firmware_name, ret);
++			dev_err(pas->dev, "request_firmware failed for %s: %d\n",
++				pas->dtb_firmware_name, ret);
+ 			return ret;
+ 		}
+ 
+-		ret = qcom_mdt_pas_init(adsp->dev, adsp->dtb_firmware, adsp->dtb_firmware_name,
+-					adsp->dtb_pas_id, adsp->dtb_mem_phys,
+-					&adsp->dtb_pas_metadata);
++		ret = qcom_mdt_pas_init(pas->dev, pas->dtb_firmware, pas->dtb_firmware_name,
++					pas->dtb_pas_id, pas->dtb_mem_phys,
++					&pas->dtb_pas_metadata);
+ 		if (ret)
+ 			goto release_dtb_firmware;
+ 
+-		ret = qcom_mdt_load_no_init(adsp->dev, adsp->dtb_firmware, adsp->dtb_firmware_name,
+-					    adsp->dtb_pas_id, adsp->dtb_mem_region,
+-					    adsp->dtb_mem_phys, adsp->dtb_mem_size,
+-					    &adsp->dtb_mem_reloc);
++		ret = qcom_mdt_load_no_init(pas->dev, pas->dtb_firmware, pas->dtb_firmware_name,
++					    pas->dtb_pas_id, pas->dtb_mem_region,
++					    pas->dtb_mem_phys, pas->dtb_mem_size,
++					    &pas->dtb_mem_reloc);
+ 		if (ret)
+ 			goto release_dtb_metadata;
+ 	}
+@@ -251,248 +252,246 @@ static int adsp_load(struct rproc *rproc, const struct firmware *fw)
+ 	return 0;
+ 
+ release_dtb_metadata:
+-	qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ 
+ release_dtb_firmware:
+-	release_firmware(adsp->dtb_firmware);
++	release_firmware(pas->dtb_firmware);
+ 
+ 	return ret;
+ }
+ 
+-static int adsp_start(struct rproc *rproc)
++static int qcom_pas_start(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int ret;
+ 
+-	ret = qcom_q6v5_prepare(&adsp->q6v5);
++	ret = qcom_q6v5_prepare(&pas->q6v5);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = adsp_pds_enable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	ret = qcom_pas_pds_enable(pas, pas->proxy_pds, pas->proxy_pd_count);
+ 	if (ret < 0)
+ 		goto disable_irqs;
+ 
+-	ret = clk_prepare_enable(adsp->xo);
++	ret = clk_prepare_enable(pas->xo);
+ 	if (ret)
+ 		goto disable_proxy_pds;
+ 
+-	ret = clk_prepare_enable(adsp->aggre2_clk);
++	ret = clk_prepare_enable(pas->aggre2_clk);
+ 	if (ret)
+ 		goto disable_xo_clk;
+ 
+-	if (adsp->cx_supply) {
+-		ret = regulator_enable(adsp->cx_supply);
++	if (pas->cx_supply) {
++		ret = regulator_enable(pas->cx_supply);
+ 		if (ret)
+ 			goto disable_aggre2_clk;
+ 	}
+ 
+-	if (adsp->px_supply) {
+-		ret = regulator_enable(adsp->px_supply);
++	if (pas->px_supply) {
++		ret = regulator_enable(pas->px_supply);
+ 		if (ret)
+ 			goto disable_cx_supply;
+ 	}
+ 
+-	if (adsp->dtb_pas_id) {
+-		ret = qcom_scm_pas_auth_and_reset(adsp->dtb_pas_id);
++	if (pas->dtb_pas_id) {
++		ret = qcom_scm_pas_auth_and_reset(pas->dtb_pas_id);
+ 		if (ret) {
+-			dev_err(adsp->dev,
++			dev_err(pas->dev,
+ 				"failed to authenticate dtb image and release reset\n");
+ 			goto disable_px_supply;
+ 		}
+ 	}
+ 
+-	ret = qcom_mdt_pas_init(adsp->dev, adsp->firmware, rproc->firmware, adsp->pas_id,
+-				adsp->mem_phys, &adsp->pas_metadata);
++	ret = qcom_mdt_pas_init(pas->dev, pas->firmware, rproc->firmware, pas->pas_id,
++				pas->mem_phys, &pas->pas_metadata);
+ 	if (ret)
+ 		goto disable_px_supply;
+ 
+-	ret = qcom_mdt_load_no_init(adsp->dev, adsp->firmware, rproc->firmware, adsp->pas_id,
+-				    adsp->mem_region, adsp->mem_phys, adsp->mem_size,
+-				    &adsp->mem_reloc);
++	ret = qcom_mdt_load_no_init(pas->dev, pas->firmware, rproc->firmware, pas->pas_id,
++				    pas->mem_region, pas->mem_phys, pas->mem_size,
++				    &pas->mem_reloc);
+ 	if (ret)
+ 		goto release_pas_metadata;
+ 
+-	qcom_pil_info_store(adsp->info_name, adsp->mem_phys, adsp->mem_size);
++	qcom_pil_info_store(pas->info_name, pas->mem_phys, pas->mem_size);
+ 
+-	ret = qcom_scm_pas_auth_and_reset(adsp->pas_id);
++	ret = qcom_scm_pas_auth_and_reset(pas->pas_id);
+ 	if (ret) {
+-		dev_err(adsp->dev,
++		dev_err(pas->dev,
+ 			"failed to authenticate image and release reset\n");
+ 		goto release_pas_metadata;
+ 	}
+ 
+-	ret = qcom_q6v5_wait_for_start(&adsp->q6v5, msecs_to_jiffies(5000));
++	ret = qcom_q6v5_wait_for_start(&pas->q6v5, msecs_to_jiffies(5000));
+ 	if (ret == -ETIMEDOUT) {
+-		dev_err(adsp->dev, "start timed out\n");
+-		qcom_scm_pas_shutdown(adsp->pas_id);
++		dev_err(pas->dev, "start timed out\n");
++		qcom_scm_pas_shutdown(pas->pas_id);
+ 		goto release_pas_metadata;
+ 	}
+ 
+-	qcom_scm_pas_metadata_release(&adsp->pas_metadata);
+-	if (adsp->dtb_pas_id)
+-		qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->pas_metadata);
++	if (pas->dtb_pas_id)
++		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ 
+-	/* Remove pointer to the loaded firmware, only valid in adsp_load() & adsp_start() */
+-	adsp->firmware = NULL;
++	/* firmware carried a reference from qcom_pas_load() to qcom_pas_start(); drop it now */
++	pas->firmware = NULL;
+ 
+ 	return 0;
+ 
+ release_pas_metadata:
+-	qcom_scm_pas_metadata_release(&adsp->pas_metadata);
+-	if (adsp->dtb_pas_id)
+-		qcom_scm_pas_metadata_release(&adsp->dtb_pas_metadata);
++	qcom_scm_pas_metadata_release(&pas->pas_metadata);
++	if (pas->dtb_pas_id)
++		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+ disable_px_supply:
+-	if (adsp->px_supply)
+-		regulator_disable(adsp->px_supply);
++	if (pas->px_supply)
++		regulator_disable(pas->px_supply);
+ disable_cx_supply:
+-	if (adsp->cx_supply)
+-		regulator_disable(adsp->cx_supply);
++	if (pas->cx_supply)
++		regulator_disable(pas->cx_supply);
+ disable_aggre2_clk:
+-	clk_disable_unprepare(adsp->aggre2_clk);
++	clk_disable_unprepare(pas->aggre2_clk);
+ disable_xo_clk:
+-	clk_disable_unprepare(adsp->xo);
++	clk_disable_unprepare(pas->xo);
+ disable_proxy_pds:
+-	adsp_pds_disable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	qcom_pas_pds_disable(pas, pas->proxy_pds, pas->proxy_pd_count);
+ disable_irqs:
+-	qcom_q6v5_unprepare(&adsp->q6v5);
++	qcom_q6v5_unprepare(&pas->q6v5);
+ 
+-	/* Remove pointer to the loaded firmware, only valid in adsp_load() & adsp_start() */
+-	adsp->firmware = NULL;
++	/* firmware carried a reference from qcom_pas_load() to qcom_pas_start(); drop it now */
++	pas->firmware = NULL;
+ 
+ 	return ret;
+ }
+ 
+ static void qcom_pas_handover(struct qcom_q6v5 *q6v5)
+ {
+-	struct qcom_adsp *adsp = container_of(q6v5, struct qcom_adsp, q6v5);
+-
+-	if (adsp->px_supply)
+-		regulator_disable(adsp->px_supply);
+-	if (adsp->cx_supply)
+-		regulator_disable(adsp->cx_supply);
+-	clk_disable_unprepare(adsp->aggre2_clk);
+-	clk_disable_unprepare(adsp->xo);
+-	adsp_pds_disable(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	struct qcom_pas *pas = container_of(q6v5, struct qcom_pas, q6v5);
++
++	if (pas->px_supply)
++		regulator_disable(pas->px_supply);
++	if (pas->cx_supply)
++		regulator_disable(pas->cx_supply);
++	clk_disable_unprepare(pas->aggre2_clk);
++	clk_disable_unprepare(pas->xo);
++	qcom_pas_pds_disable(pas, pas->proxy_pds, pas->proxy_pd_count);
+ }
+ 
+-static int adsp_stop(struct rproc *rproc)
++static int qcom_pas_stop(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int handover;
+ 	int ret;
+ 
+-	ret = qcom_q6v5_request_stop(&adsp->q6v5, adsp->sysmon);
++	ret = qcom_q6v5_request_stop(&pas->q6v5, pas->sysmon);
+ 	if (ret == -ETIMEDOUT)
+-		dev_err(adsp->dev, "timed out on wait\n");
++		dev_err(pas->dev, "timed out on wait\n");
+ 
+-	ret = qcom_scm_pas_shutdown(adsp->pas_id);
+-	if (ret && adsp->decrypt_shutdown)
+-		ret = adsp_shutdown_poll_decrypt(adsp);
++	ret = qcom_scm_pas_shutdown(pas->pas_id);
++	if (ret && pas->decrypt_shutdown)
++		ret = qcom_pas_shutdown_poll_decrypt(pas);
+ 
+ 	if (ret)
+-		dev_err(adsp->dev, "failed to shutdown: %d\n", ret);
++		dev_err(pas->dev, "failed to shutdown: %d\n", ret);
+ 
+-	if (adsp->dtb_pas_id) {
+-		ret = qcom_scm_pas_shutdown(adsp->dtb_pas_id);
++	if (pas->dtb_pas_id) {
++		ret = qcom_scm_pas_shutdown(pas->dtb_pas_id);
+ 		if (ret)
+-			dev_err(adsp->dev, "failed to shutdown dtb: %d\n", ret);
++			dev_err(pas->dev, "failed to shutdown dtb: %d\n", ret);
+ 	}
+ 
+-	handover = qcom_q6v5_unprepare(&adsp->q6v5);
++	handover = qcom_q6v5_unprepare(&pas->q6v5);
+ 	if (handover)
+-		qcom_pas_handover(&adsp->q6v5);
++		qcom_pas_handover(&pas->q6v5);
+ 
+-	if (adsp->smem_host_id)
+-		ret = qcom_smem_bust_hwspin_lock_by_host(adsp->smem_host_id);
++	if (pas->smem_host_id)
++		ret = qcom_smem_bust_hwspin_lock_by_host(pas->smem_host_id);
+ 
+ 	return ret;
+ }
+ 
+-static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
++static void *qcom_pas_da_to_va(struct rproc *rproc, u64 da, size_t len, bool *is_iomem)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 	int offset;
+ 
+-	offset = da - adsp->mem_reloc;
+-	if (offset < 0 || offset + len > adsp->mem_size)
++	offset = da - pas->mem_reloc;
++	if (offset < 0 || offset + len > pas->mem_size)
+ 		return NULL;
+ 
+ 	if (is_iomem)
+ 		*is_iomem = true;
+ 
+-	return adsp->mem_region + offset;
++	return pas->mem_region + offset;
+ }
+ 
+-static unsigned long adsp_panic(struct rproc *rproc)
++static unsigned long qcom_pas_panic(struct rproc *rproc)
+ {
+-	struct qcom_adsp *adsp = rproc->priv;
++	struct qcom_pas *pas = rproc->priv;
+ 
+-	return qcom_q6v5_panic(&adsp->q6v5);
++	return qcom_q6v5_panic(&pas->q6v5);
+ }
+ 
+-static const struct rproc_ops adsp_ops = {
+-	.unprepare = adsp_unprepare,
+-	.start = adsp_start,
+-	.stop = adsp_stop,
+-	.da_to_va = adsp_da_to_va,
++static const struct rproc_ops qcom_pas_ops = {
++	.unprepare = qcom_pas_unprepare,
++	.start = qcom_pas_start,
++	.stop = qcom_pas_stop,
++	.da_to_va = qcom_pas_da_to_va,
+ 	.parse_fw = qcom_register_dump_segments,
+-	.load = adsp_load,
+-	.panic = adsp_panic,
++	.load = qcom_pas_load,
++	.panic = qcom_pas_panic,
+ };
+ 
+-static const struct rproc_ops adsp_minidump_ops = {
+-	.unprepare = adsp_unprepare,
+-	.start = adsp_start,
+-	.stop = adsp_stop,
+-	.da_to_va = adsp_da_to_va,
++static const struct rproc_ops qcom_pas_minidump_ops = {
++	.unprepare = qcom_pas_unprepare,
++	.start = qcom_pas_start,
++	.stop = qcom_pas_stop,
++	.da_to_va = qcom_pas_da_to_va,
+ 	.parse_fw = qcom_register_dump_segments,
+-	.load = adsp_load,
+-	.panic = adsp_panic,
+-	.coredump = adsp_minidump,
++	.load = qcom_pas_load,
++	.panic = qcom_pas_panic,
++	.coredump = qcom_pas_minidump,
+ };
+ 
+-static int adsp_init_clock(struct qcom_adsp *adsp)
++static int qcom_pas_init_clock(struct qcom_pas *pas)
+ {
+-	adsp->xo = devm_clk_get(adsp->dev, "xo");
+-	if (IS_ERR(adsp->xo))
+-		return dev_err_probe(adsp->dev, PTR_ERR(adsp->xo),
++	pas->xo = devm_clk_get(pas->dev, "xo");
++	if (IS_ERR(pas->xo))
++		return dev_err_probe(pas->dev, PTR_ERR(pas->xo),
+ 				     "failed to get xo clock");
+ 
+-
+-	adsp->aggre2_clk = devm_clk_get_optional(adsp->dev, "aggre2");
+-	if (IS_ERR(adsp->aggre2_clk))
+-		return dev_err_probe(adsp->dev, PTR_ERR(adsp->aggre2_clk),
++	pas->aggre2_clk = devm_clk_get_optional(pas->dev, "aggre2");
++	if (IS_ERR(pas->aggre2_clk))
++		return dev_err_probe(pas->dev, PTR_ERR(pas->aggre2_clk),
+ 				     "failed to get aggre2 clock");
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_init_regulator(struct qcom_adsp *adsp)
++static int qcom_pas_init_regulator(struct qcom_pas *pas)
+ {
+-	adsp->cx_supply = devm_regulator_get_optional(adsp->dev, "cx");
+-	if (IS_ERR(adsp->cx_supply)) {
+-		if (PTR_ERR(adsp->cx_supply) == -ENODEV)
+-			adsp->cx_supply = NULL;
++	pas->cx_supply = devm_regulator_get_optional(pas->dev, "cx");
++	if (IS_ERR(pas->cx_supply)) {
++		if (PTR_ERR(pas->cx_supply) == -ENODEV)
++			pas->cx_supply = NULL;
+ 		else
+-			return PTR_ERR(adsp->cx_supply);
++			return PTR_ERR(pas->cx_supply);
+ 	}
+ 
+-	if (adsp->cx_supply)
+-		regulator_set_load(adsp->cx_supply, 100000);
++	if (pas->cx_supply)
++		regulator_set_load(pas->cx_supply, 100000);
+ 
+-	adsp->px_supply = devm_regulator_get_optional(adsp->dev, "px");
+-	if (IS_ERR(adsp->px_supply)) {
+-		if (PTR_ERR(adsp->px_supply) == -ENODEV)
+-			adsp->px_supply = NULL;
++	pas->px_supply = devm_regulator_get_optional(pas->dev, "px");
++	if (IS_ERR(pas->px_supply)) {
++		if (PTR_ERR(pas->px_supply) == -ENODEV)
++			pas->px_supply = NULL;
+ 		else
+-			return PTR_ERR(adsp->px_supply);
++			return PTR_ERR(pas->px_supply);
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_pds_attach(struct device *dev, struct device **devs,
+-			   char **pd_names)
++static int qcom_pas_pds_attach(struct device *dev, struct device **devs, char **pd_names)
+ {
+ 	size_t num_pds = 0;
+ 	int ret;
+@@ -528,10 +527,9 @@ static int adsp_pds_attach(struct device *dev, struct device **devs,
+ 	return ret;
+ };
+ 
+-static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds,
+-			    size_t pd_count)
++static void qcom_pas_pds_detach(struct qcom_pas *pas, struct device **pds, size_t pd_count)
+ {
+-	struct device *dev = adsp->dev;
++	struct device *dev = pas->dev;
+ 	int i;
+ 
+ 	/* Handle single power domain */
+@@ -544,62 +542,62 @@ static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds,
+ 		dev_pm_domain_detach(pds[i], false);
+ }
+ 
+-static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
++static int qcom_pas_alloc_memory_region(struct qcom_pas *pas)
+ {
+ 	struct reserved_mem *rmem;
+ 	struct device_node *node;
+ 
+-	node = of_parse_phandle(adsp->dev->of_node, "memory-region", 0);
++	node = of_parse_phandle(pas->dev->of_node, "memory-region", 0);
+ 	if (!node) {
+-		dev_err(adsp->dev, "no memory-region specified\n");
++		dev_err(pas->dev, "no memory-region specified\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	rmem = of_reserved_mem_lookup(node);
+ 	of_node_put(node);
+ 	if (!rmem) {
+-		dev_err(adsp->dev, "unable to resolve memory-region\n");
++		dev_err(pas->dev, "unable to resolve memory-region\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	adsp->mem_phys = adsp->mem_reloc = rmem->base;
+-	adsp->mem_size = rmem->size;
+-	adsp->mem_region = devm_ioremap_wc(adsp->dev, adsp->mem_phys, adsp->mem_size);
+-	if (!adsp->mem_region) {
+-		dev_err(adsp->dev, "unable to map memory region: %pa+%zx\n",
+-			&rmem->base, adsp->mem_size);
++	pas->mem_phys = pas->mem_reloc = rmem->base;
++	pas->mem_size = rmem->size;
++	pas->mem_region = devm_ioremap_wc(pas->dev, pas->mem_phys, pas->mem_size);
++	if (!pas->mem_region) {
++		dev_err(pas->dev, "unable to map memory region: %pa+%zx\n",
++			&rmem->base, pas->mem_size);
+ 		return -EBUSY;
+ 	}
+ 
+-	if (!adsp->dtb_pas_id)
++	if (!pas->dtb_pas_id)
+ 		return 0;
+ 
+-	node = of_parse_phandle(adsp->dev->of_node, "memory-region", 1);
++	node = of_parse_phandle(pas->dev->of_node, "memory-region", 1);
+ 	if (!node) {
+-		dev_err(adsp->dev, "no dtb memory-region specified\n");
++		dev_err(pas->dev, "no dtb memory-region specified\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	rmem = of_reserved_mem_lookup(node);
+ 	of_node_put(node);
+ 	if (!rmem) {
+-		dev_err(adsp->dev, "unable to resolve dtb memory-region\n");
++		dev_err(pas->dev, "unable to resolve dtb memory-region\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	adsp->dtb_mem_phys = adsp->dtb_mem_reloc = rmem->base;
+-	adsp->dtb_mem_size = rmem->size;
+-	adsp->dtb_mem_region = devm_ioremap_wc(adsp->dev, adsp->dtb_mem_phys, adsp->dtb_mem_size);
+-	if (!adsp->dtb_mem_region) {
+-		dev_err(adsp->dev, "unable to map dtb memory region: %pa+%zx\n",
+-			&rmem->base, adsp->dtb_mem_size);
++	pas->dtb_mem_phys = pas->dtb_mem_reloc = rmem->base;
++	pas->dtb_mem_size = rmem->size;
++	pas->dtb_mem_region = devm_ioremap_wc(pas->dev, pas->dtb_mem_phys, pas->dtb_mem_size);
++	if (!pas->dtb_mem_region) {
++		dev_err(pas->dev, "unable to map dtb memory region: %pa+%zx\n",
++			&rmem->base, pas->dtb_mem_size);
+ 		return -EBUSY;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int adsp_assign_memory_region(struct qcom_adsp *adsp)
++static int qcom_pas_assign_memory_region(struct qcom_pas *pas)
+ {
+ 	struct qcom_scm_vmperm perm[MAX_ASSIGN_COUNT];
+ 	struct device_node *node;
+@@ -607,45 +605,45 @@ static int adsp_assign_memory_region(struct qcom_adsp *adsp)
+ 	int offset;
+ 	int ret;
+ 
+-	if (!adsp->region_assign_idx)
++	if (!pas->region_assign_idx)
+ 		return 0;
+ 
+-	for (offset = 0; offset < adsp->region_assign_count; ++offset) {
++	for (offset = 0; offset < pas->region_assign_count; ++offset) {
+ 		struct reserved_mem *rmem = NULL;
+ 
+-		node = of_parse_phandle(adsp->dev->of_node, "memory-region",
+-					adsp->region_assign_idx + offset);
++		node = of_parse_phandle(pas->dev->of_node, "memory-region",
++					pas->region_assign_idx + offset);
+ 		if (node)
+ 			rmem = of_reserved_mem_lookup(node);
+ 		of_node_put(node);
+ 		if (!rmem) {
+-			dev_err(adsp->dev, "unable to resolve shareable memory-region index %d\n",
++			dev_err(pas->dev, "unable to resolve shareable memory-region index %d\n",
+ 				offset);
+ 			return -EINVAL;
+ 		}
+ 
+-		if (adsp->region_assign_shared)  {
++		if (pas->region_assign_shared)  {
+ 			perm[0].vmid = QCOM_SCM_VMID_HLOS;
+ 			perm[0].perm = QCOM_SCM_PERM_RW;
+-			perm[1].vmid = adsp->region_assign_vmid;
++			perm[1].vmid = pas->region_assign_vmid;
+ 			perm[1].perm = QCOM_SCM_PERM_RW;
+ 			perm_size = 2;
+ 		} else {
+-			perm[0].vmid = adsp->region_assign_vmid;
++			perm[0].vmid = pas->region_assign_vmid;
+ 			perm[0].perm = QCOM_SCM_PERM_RW;
+ 			perm_size = 1;
+ 		}
+ 
+-		adsp->region_assign_phys[offset] = rmem->base;
+-		adsp->region_assign_size[offset] = rmem->size;
+-		adsp->region_assign_owners[offset] = BIT(QCOM_SCM_VMID_HLOS);
++		pas->region_assign_phys[offset] = rmem->base;
++		pas->region_assign_size[offset] = rmem->size;
++		pas->region_assign_owners[offset] = BIT(QCOM_SCM_VMID_HLOS);
+ 
+-		ret = qcom_scm_assign_mem(adsp->region_assign_phys[offset],
+-					  adsp->region_assign_size[offset],
+-					  &adsp->region_assign_owners[offset],
++		ret = qcom_scm_assign_mem(pas->region_assign_phys[offset],
++					  pas->region_assign_size[offset],
++					  &pas->region_assign_owners[offset],
+ 					  perm, perm_size);
+ 		if (ret < 0) {
+-			dev_err(adsp->dev, "assign memory %d failed\n", offset);
++			dev_err(pas->dev, "assign memory %d failed\n", offset);
+ 			return ret;
+ 		}
+ 	}
+@@ -653,35 +651,35 @@ static int adsp_assign_memory_region(struct qcom_adsp *adsp)
+ 	return 0;
+ }
+ 
+-static void adsp_unassign_memory_region(struct qcom_adsp *adsp)
++static void qcom_pas_unassign_memory_region(struct qcom_pas *pas)
+ {
+ 	struct qcom_scm_vmperm perm;
+ 	int offset;
+ 	int ret;
+ 
+-	if (!adsp->region_assign_idx || adsp->region_assign_shared)
++	if (!pas->region_assign_idx || pas->region_assign_shared)
+ 		return;
+ 
+-	for (offset = 0; offset < adsp->region_assign_count; ++offset) {
++	for (offset = 0; offset < pas->region_assign_count; ++offset) {
+ 		perm.vmid = QCOM_SCM_VMID_HLOS;
+ 		perm.perm = QCOM_SCM_PERM_RW;
+ 
+-		ret = qcom_scm_assign_mem(adsp->region_assign_phys[offset],
+-					  adsp->region_assign_size[offset],
+-					  &adsp->region_assign_owners[offset],
++		ret = qcom_scm_assign_mem(pas->region_assign_phys[offset],
++					  pas->region_assign_size[offset],
++					  &pas->region_assign_owners[offset],
+ 					  &perm, 1);
+ 		if (ret < 0)
+-			dev_err(adsp->dev, "unassign memory %d failed\n", offset);
++			dev_err(pas->dev, "unassign memory %d failed\n", offset);
+ 	}
+ }
+ 
+-static int adsp_probe(struct platform_device *pdev)
++static int qcom_pas_probe(struct platform_device *pdev)
+ {
+-	const struct adsp_data *desc;
+-	struct qcom_adsp *adsp;
++	const struct qcom_pas_data *desc;
++	struct qcom_pas *pas;
+ 	struct rproc *rproc;
+ 	const char *fw_name, *dtb_fw_name = NULL;
+-	const struct rproc_ops *ops = &adsp_ops;
++	const struct rproc_ops *ops = &qcom_pas_ops;
+ 	int ret;
+ 
+ 	desc = of_device_get_match_data(&pdev->dev);
+@@ -706,9 +704,9 @@ static int adsp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (desc->minidump_id)
+-		ops = &adsp_minidump_ops;
++		ops = &qcom_pas_minidump_ops;
+ 
+-	rproc = devm_rproc_alloc(&pdev->dev, desc->sysmon_name, ops, fw_name, sizeof(*adsp));
++	rproc = devm_rproc_alloc(&pdev->dev, desc->sysmon_name, ops, fw_name, sizeof(*pas));
+ 
+ 	if (!rproc) {
+ 		dev_err(&pdev->dev, "unable to allocate remoteproc\n");
+@@ -718,68 +716,65 @@ static int adsp_probe(struct platform_device *pdev)
+ 	rproc->auto_boot = desc->auto_boot;
+ 	rproc_coredump_set_elf_info(rproc, ELFCLASS32, EM_NONE);
+ 
+-	adsp = rproc->priv;
+-	adsp->dev = &pdev->dev;
+-	adsp->rproc = rproc;
+-	adsp->minidump_id = desc->minidump_id;
+-	adsp->pas_id = desc->pas_id;
+-	adsp->lite_pas_id = desc->lite_pas_id;
+-	adsp->info_name = desc->sysmon_name;
+-	adsp->smem_host_id = desc->smem_host_id;
+-	adsp->decrypt_shutdown = desc->decrypt_shutdown;
+-	adsp->region_assign_idx = desc->region_assign_idx;
+-	adsp->region_assign_count = min_t(int, MAX_ASSIGN_COUNT, desc->region_assign_count);
+-	adsp->region_assign_vmid = desc->region_assign_vmid;
+-	adsp->region_assign_shared = desc->region_assign_shared;
++	pas = rproc->priv;
++	pas->dev = &pdev->dev;
++	pas->rproc = rproc;
++	pas->minidump_id = desc->minidump_id;
++	pas->pas_id = desc->pas_id;
++	pas->lite_pas_id = desc->lite_pas_id;
++	pas->info_name = desc->sysmon_name;
++	pas->smem_host_id = desc->smem_host_id;
++	pas->decrypt_shutdown = desc->decrypt_shutdown;
++	pas->region_assign_idx = desc->region_assign_idx;
++	pas->region_assign_count = min_t(int, MAX_ASSIGN_COUNT, desc->region_assign_count);
++	pas->region_assign_vmid = desc->region_assign_vmid;
++	pas->region_assign_shared = desc->region_assign_shared;
+ 	if (dtb_fw_name) {
+-		adsp->dtb_firmware_name = dtb_fw_name;
+-		adsp->dtb_pas_id = desc->dtb_pas_id;
++		pas->dtb_firmware_name = dtb_fw_name;
++		pas->dtb_pas_id = desc->dtb_pas_id;
+ 	}
+-	platform_set_drvdata(pdev, adsp);
++	platform_set_drvdata(pdev, pas);
+ 
+-	ret = device_init_wakeup(adsp->dev, true);
++	ret = device_init_wakeup(pas->dev, true);
+ 	if (ret)
+ 		goto free_rproc;
+ 
+-	ret = adsp_alloc_memory_region(adsp);
++	ret = qcom_pas_alloc_memory_region(pas);
+ 	if (ret)
+ 		goto free_rproc;
+ 
+-	ret = adsp_assign_memory_region(adsp);
++	ret = qcom_pas_assign_memory_region(pas);
+ 	if (ret)
+ 		goto free_rproc;
+ 
+-	ret = adsp_init_clock(adsp);
++	ret = qcom_pas_init_clock(pas);
+ 	if (ret)
+ 		goto unassign_mem;
+ 
+-	ret = adsp_init_regulator(adsp);
++	ret = qcom_pas_init_regulator(pas);
+ 	if (ret)
+ 		goto unassign_mem;
+ 
+-	ret = adsp_pds_attach(&pdev->dev, adsp->proxy_pds,
+-			      desc->proxy_pd_names);
++	ret = qcom_pas_pds_attach(&pdev->dev, pas->proxy_pds, desc->proxy_pd_names);
+ 	if (ret < 0)
+ 		goto unassign_mem;
+-	adsp->proxy_pd_count = ret;
++	pas->proxy_pd_count = ret;
+ 
+-	ret = qcom_q6v5_init(&adsp->q6v5, pdev, rproc, desc->crash_reason_smem, desc->load_state,
+-			     qcom_pas_handover);
++	ret = qcom_q6v5_init(&pas->q6v5, pdev, rproc, desc->crash_reason_smem,
++			     desc->load_state, qcom_pas_handover);
+ 	if (ret)
+ 		goto detach_proxy_pds;
+ 
+-	qcom_add_glink_subdev(rproc, &adsp->glink_subdev, desc->ssr_name);
+-	qcom_add_smd_subdev(rproc, &adsp->smd_subdev);
+-	qcom_add_pdm_subdev(rproc, &adsp->pdm_subdev);
+-	adsp->sysmon = qcom_add_sysmon_subdev(rproc,
+-					      desc->sysmon_name,
+-					      desc->ssctl_id);
+-	if (IS_ERR(adsp->sysmon)) {
+-		ret = PTR_ERR(adsp->sysmon);
++	qcom_add_glink_subdev(rproc, &pas->glink_subdev, desc->ssr_name);
++	qcom_add_smd_subdev(rproc, &pas->smd_subdev);
++	qcom_add_pdm_subdev(rproc, &pas->pdm_subdev);
++	pas->sysmon = qcom_add_sysmon_subdev(rproc, desc->sysmon_name, desc->ssctl_id);
++	if (IS_ERR(pas->sysmon)) {
++		ret = PTR_ERR(pas->sysmon);
+ 		goto deinit_remove_pdm_smd_glink;
+ 	}
+ 
+-	qcom_add_ssr_subdev(rproc, &adsp->ssr_subdev, desc->ssr_name);
++	qcom_add_ssr_subdev(rproc, &pas->ssr_subdev, desc->ssr_name);
+ 	ret = rproc_add(rproc);
+ 	if (ret)
+ 		goto remove_ssr_sysmon;
+@@ -787,41 +782,41 @@ static int adsp_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ remove_ssr_sysmon:
+-	qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
+-	qcom_remove_sysmon_subdev(adsp->sysmon);
++	qcom_remove_ssr_subdev(rproc, &pas->ssr_subdev);
++	qcom_remove_sysmon_subdev(pas->sysmon);
+ deinit_remove_pdm_smd_glink:
+-	qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
+-	qcom_remove_smd_subdev(rproc, &adsp->smd_subdev);
+-	qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
+-	qcom_q6v5_deinit(&adsp->q6v5);
++	qcom_remove_pdm_subdev(rproc, &pas->pdm_subdev);
++	qcom_remove_smd_subdev(rproc, &pas->smd_subdev);
++	qcom_remove_glink_subdev(rproc, &pas->glink_subdev);
++	qcom_q6v5_deinit(&pas->q6v5);
+ detach_proxy_pds:
+-	adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++	qcom_pas_pds_detach(pas, pas->proxy_pds, pas->proxy_pd_count);
+ unassign_mem:
+-	adsp_unassign_memory_region(adsp);
++	qcom_pas_unassign_memory_region(pas);
+ free_rproc:
+-	device_init_wakeup(adsp->dev, false);
++	device_init_wakeup(pas->dev, false);
+ 
+ 	return ret;
+ }
+ 
+-static void adsp_remove(struct platform_device *pdev)
++static void qcom_pas_remove(struct platform_device *pdev)
+ {
+-	struct qcom_adsp *adsp = platform_get_drvdata(pdev);
+-
+-	rproc_del(adsp->rproc);
+-
+-	qcom_q6v5_deinit(&adsp->q6v5);
+-	adsp_unassign_memory_region(adsp);
+-	qcom_remove_glink_subdev(adsp->rproc, &adsp->glink_subdev);
+-	qcom_remove_sysmon_subdev(adsp->sysmon);
+-	qcom_remove_smd_subdev(adsp->rproc, &adsp->smd_subdev);
+-	qcom_remove_pdm_subdev(adsp->rproc, &adsp->pdm_subdev);
+-	qcom_remove_ssr_subdev(adsp->rproc, &adsp->ssr_subdev);
+-	adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
+-	device_init_wakeup(adsp->dev, false);
++	struct qcom_pas *pas = platform_get_drvdata(pdev);
++
++	rproc_del(pas->rproc);
++
++	qcom_q6v5_deinit(&pas->q6v5);
++	qcom_pas_unassign_memory_region(pas);
++	qcom_remove_glink_subdev(pas->rproc, &pas->glink_subdev);
++	qcom_remove_sysmon_subdev(pas->sysmon);
++	qcom_remove_smd_subdev(pas->rproc, &pas->smd_subdev);
++	qcom_remove_pdm_subdev(pas->rproc, &pas->pdm_subdev);
++	qcom_remove_ssr_subdev(pas->rproc, &pas->ssr_subdev);
++	qcom_pas_pds_detach(pas, pas->proxy_pds, pas->proxy_pd_count);
++	device_init_wakeup(pas->dev, false);
+ }
+ 
+-static const struct adsp_data adsp_resource_init = {
++static const struct qcom_pas_data adsp_resource_init = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -831,7 +826,7 @@ static const struct adsp_data adsp_resource_init = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sa8775p_adsp_resource = {
++static const struct qcom_pas_data sa8775p_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mbn",
+ 	.pas_id = 1,
+@@ -848,7 +843,7 @@ static const struct adsp_data sa8775p_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sdm845_adsp_resource_init = {
++static const struct qcom_pas_data sdm845_adsp_resource_init = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -859,7 +854,7 @@ static const struct adsp_data sdm845_adsp_resource_init = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm6350_adsp_resource = {
++static const struct qcom_pas_data sm6350_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -875,7 +870,7 @@ static const struct adsp_data sm6350_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm6375_mpss_resource = {
++static const struct qcom_pas_data sm6375_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -890,7 +885,7 @@ static const struct adsp_data sm6375_mpss_resource = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sm8150_adsp_resource = {
++static const struct qcom_pas_data sm8150_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -905,7 +900,7 @@ static const struct adsp_data sm8150_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm8250_adsp_resource = {
++static const struct qcom_pas_data sm8250_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -922,7 +917,7 @@ static const struct adsp_data sm8250_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data sm8350_adsp_resource = {
++static const struct qcom_pas_data sm8350_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -938,7 +933,7 @@ static const struct adsp_data sm8350_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data msm8996_adsp_resource = {
++static const struct qcom_pas_data msm8996_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.pas_id = 1,
+@@ -952,7 +947,7 @@ static const struct adsp_data msm8996_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data cdsp_resource_init = {
++static const struct qcom_pas_data cdsp_resource_init = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -962,7 +957,7 @@ static const struct adsp_data cdsp_resource_init = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sa8775p_cdsp0_resource = {
++static const struct qcom_pas_data sa8775p_cdsp0_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp0.mbn",
+ 	.pas_id = 18,
+@@ -980,7 +975,7 @@ static const struct adsp_data sa8775p_cdsp0_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sa8775p_cdsp1_resource = {
++static const struct qcom_pas_data sa8775p_cdsp1_resource = {
+ 	.crash_reason_smem = 633,
+ 	.firmware_name = "cdsp1.mbn",
+ 	.pas_id = 30,
+@@ -998,7 +993,7 @@ static const struct adsp_data sa8775p_cdsp1_resource = {
+ 	.ssctl_id = 0x20,
+ };
+ 
+-static const struct adsp_data sdm845_cdsp_resource_init = {
++static const struct qcom_pas_data sdm845_cdsp_resource_init = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1009,7 +1004,7 @@ static const struct adsp_data sdm845_cdsp_resource_init = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm6350_cdsp_resource = {
++static const struct qcom_pas_data sm6350_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1025,7 +1020,7 @@ static const struct adsp_data sm6350_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm8150_cdsp_resource = {
++static const struct qcom_pas_data sm8150_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1040,7 +1035,7 @@ static const struct adsp_data sm8150_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm8250_cdsp_resource = {
++static const struct qcom_pas_data sm8250_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1055,7 +1050,7 @@ static const struct adsp_data sm8250_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sc8280xp_nsp0_resource = {
++static const struct qcom_pas_data sc8280xp_nsp0_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1069,7 +1064,7 @@ static const struct adsp_data sc8280xp_nsp0_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sc8280xp_nsp1_resource = {
++static const struct qcom_pas_data sc8280xp_nsp1_resource = {
+ 	.crash_reason_smem = 633,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 30,
+@@ -1083,7 +1078,7 @@ static const struct adsp_data sc8280xp_nsp1_resource = {
+ 	.ssctl_id = 0x20,
+ };
+ 
+-static const struct adsp_data x1e80100_adsp_resource = {
++static const struct qcom_pas_data x1e80100_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.dtb_firmware_name = "adsp_dtb.mdt",
+@@ -1103,7 +1098,7 @@ static const struct adsp_data x1e80100_adsp_resource = {
+ 	.ssctl_id = 0x14,
+ };
+ 
+-static const struct adsp_data x1e80100_cdsp_resource = {
++static const struct qcom_pas_data x1e80100_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.dtb_firmware_name = "cdsp_dtb.mdt",
+@@ -1123,7 +1118,7 @@ static const struct adsp_data x1e80100_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sm8350_cdsp_resource = {
++static const struct qcom_pas_data sm8350_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.pas_id = 18,
+@@ -1140,7 +1135,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
+ 	.ssctl_id = 0x17,
+ };
+ 
+-static const struct adsp_data sa8775p_gpdsp0_resource = {
++static const struct qcom_pas_data sa8775p_gpdsp0_resource = {
+ 	.crash_reason_smem = 640,
+ 	.firmware_name = "gpdsp0.mbn",
+ 	.pas_id = 39,
+@@ -1157,7 +1152,7 @@ static const struct adsp_data sa8775p_gpdsp0_resource = {
+ 	.ssctl_id = 0x21,
+ };
+ 
+-static const struct adsp_data sa8775p_gpdsp1_resource = {
++static const struct qcom_pas_data sa8775p_gpdsp1_resource = {
+ 	.crash_reason_smem = 641,
+ 	.firmware_name = "gpdsp1.mbn",
+ 	.pas_id = 40,
+@@ -1174,7 +1169,7 @@ static const struct adsp_data sa8775p_gpdsp1_resource = {
+ 	.ssctl_id = 0x22,
+ };
+ 
+-static const struct adsp_data mpss_resource_init = {
++static const struct qcom_pas_data mpss_resource_init = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1191,7 +1186,7 @@ static const struct adsp_data mpss_resource_init = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sc8180x_mpss_resource = {
++static const struct qcom_pas_data sc8180x_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1206,7 +1201,7 @@ static const struct adsp_data sc8180x_mpss_resource = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data msm8996_slpi_resource_init = {
++static const struct qcom_pas_data msm8996_slpi_resource_init = {
+ 	.crash_reason_smem = 424,
+ 	.firmware_name = "slpi.mdt",
+ 	.pas_id = 12,
+@@ -1220,7 +1215,7 @@ static const struct adsp_data msm8996_slpi_resource_init = {
+ 	.ssctl_id = 0x16,
+ };
+ 
+-static const struct adsp_data sdm845_slpi_resource_init = {
++static const struct qcom_pas_data sdm845_slpi_resource_init = {
+ 	.crash_reason_smem = 424,
+ 	.firmware_name = "slpi.mdt",
+ 	.pas_id = 12,
+@@ -1236,7 +1231,7 @@ static const struct adsp_data sdm845_slpi_resource_init = {
+ 	.ssctl_id = 0x16,
+ };
+ 
+-static const struct adsp_data wcss_resource_init = {
++static const struct qcom_pas_data wcss_resource_init = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "wcnss.mdt",
+ 	.pas_id = 6,
+@@ -1246,7 +1241,7 @@ static const struct adsp_data wcss_resource_init = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sdx55_mpss_resource = {
++static const struct qcom_pas_data sdx55_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1261,7 +1256,7 @@ static const struct adsp_data sdx55_mpss_resource = {
+ 	.ssctl_id = 0x22,
+ };
+ 
+-static const struct adsp_data sm8450_mpss_resource = {
++static const struct qcom_pas_data sm8450_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.pas_id = 4,
+@@ -1279,7 +1274,7 @@ static const struct adsp_data sm8450_mpss_resource = {
+ 	.ssctl_id = 0x12,
+ };
+ 
+-static const struct adsp_data sm8550_adsp_resource = {
++static const struct qcom_pas_data sm8550_adsp_resource = {
+ 	.crash_reason_smem = 423,
+ 	.firmware_name = "adsp.mdt",
+ 	.dtb_firmware_name = "adsp_dtb.mdt",
+@@ -1299,7 +1294,7 @@ static const struct adsp_data sm8550_adsp_resource = {
+ 	.smem_host_id = 2,
+ };
+ 
+-static const struct adsp_data sm8550_cdsp_resource = {
++static const struct qcom_pas_data sm8550_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.dtb_firmware_name = "cdsp_dtb.mdt",
+@@ -1320,7 +1315,7 @@ static const struct adsp_data sm8550_cdsp_resource = {
+ 	.smem_host_id = 5,
+ };
+ 
+-static const struct adsp_data sm8550_mpss_resource = {
++static const struct qcom_pas_data sm8550_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.dtb_firmware_name = "modem_dtb.mdt",
+@@ -1344,7 +1339,7 @@ static const struct adsp_data sm8550_mpss_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+ };
+ 
+-static const struct adsp_data sc7280_wpss_resource = {
++static const struct qcom_pas_data sc7280_wpss_resource = {
+ 	.crash_reason_smem = 626,
+ 	.firmware_name = "wpss.mdt",
+ 	.pas_id = 6,
+@@ -1361,7 +1356,7 @@ static const struct adsp_data sc7280_wpss_resource = {
+ 	.ssctl_id = 0x19,
+ };
+ 
+-static const struct adsp_data sm8650_cdsp_resource = {
++static const struct qcom_pas_data sm8650_cdsp_resource = {
+ 	.crash_reason_smem = 601,
+ 	.firmware_name = "cdsp.mdt",
+ 	.dtb_firmware_name = "cdsp_dtb.mdt",
+@@ -1386,7 +1381,7 @@ static const struct adsp_data sm8650_cdsp_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_CDSP,
+ };
+ 
+-static const struct adsp_data sm8650_mpss_resource = {
++static const struct qcom_pas_data sm8650_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.dtb_firmware_name = "modem_dtb.mdt",
+@@ -1410,7 +1405,7 @@ static const struct adsp_data sm8650_mpss_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+ };
+ 
+-static const struct adsp_data sm8750_mpss_resource = {
++static const struct qcom_pas_data sm8750_mpss_resource = {
+ 	.crash_reason_smem = 421,
+ 	.firmware_name = "modem.mdt",
+ 	.dtb_firmware_name = "modem_dtb.mdt",
+@@ -1434,7 +1429,7 @@ static const struct adsp_data sm8750_mpss_resource = {
+ 	.region_assign_vmid = QCOM_SCM_VMID_MSS_MSA,
+ };
+ 
+-static const struct of_device_id adsp_of_match[] = {
++static const struct of_device_id qcom_pas_of_match[] = {
+ 	{ .compatible = "qcom,msm8226-adsp-pil", .data = &msm8996_adsp_resource},
+ 	{ .compatible = "qcom,msm8953-adsp-pil", .data = &msm8996_adsp_resource},
+ 	{ .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
+@@ -1504,17 +1499,17 @@ static const struct of_device_id adsp_of_match[] = {
+ 	{ .compatible = "qcom,x1e80100-cdsp-pas", .data = &x1e80100_cdsp_resource},
+ 	{ },
+ };
+-MODULE_DEVICE_TABLE(of, adsp_of_match);
++MODULE_DEVICE_TABLE(of, qcom_pas_of_match);
+ 
+-static struct platform_driver adsp_driver = {
+-	.probe = adsp_probe,
+-	.remove = adsp_remove,
++static struct platform_driver qcom_pas_driver = {
++	.probe = qcom_pas_probe,
++	.remove = qcom_pas_remove,
+ 	.driver = {
+ 		.name = "qcom_q6v5_pas",
+-		.of_match_table = adsp_of_match,
++		.of_match_table = qcom_pas_of_match,
+ 	},
+ };
+ 
+-module_platform_driver(adsp_driver);
+-MODULE_DESCRIPTION("Qualcomm Hexagon v5 Peripheral Authentication Service driver");
++module_platform_driver(qcom_pas_driver);
++MODULE_DESCRIPTION("Qualcomm Peripheral Authentication Service remoteproc driver");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
+index 1af89782e116cc..79be88b40ab0af 100644
+--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
++++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
+@@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
+ 
+ 	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
+ 
++	r5_rproc->recovery_disabled = true;
++	r5_rproc->has_iommu = false;
+ 	r5_rproc->auto_boot = false;
+ 	r5_core = r5_rproc->priv;
+ 	r5_core->dev = cdev;
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index 5efbe69bf5ca8c..c8a666de9cbe91 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1466,7 +1466,7 @@ static long ds3231_clk_sqw_round_rate(struct clk_hw *hw, unsigned long rate,
+ 			return ds3231_clk_sqw_rates[i];
+ 	}
+ 
+-	return 0;
++	return ds3231_clk_sqw_rates[ARRAY_SIZE(ds3231_clk_sqw_rates) - 1];
+ }
+ 
+ static int ds3231_clk_sqw_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-hym8563.c b/drivers/rtc/rtc-hym8563.c
+index 63f11ea3589d64..759dc2ad6e3b2a 100644
+--- a/drivers/rtc/rtc-hym8563.c
++++ b/drivers/rtc/rtc-hym8563.c
+@@ -294,7 +294,7 @@ static long hym8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int hym8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-nct3018y.c b/drivers/rtc/rtc-nct3018y.c
+index 76c5f464b2daeb..cea05fca0bccdd 100644
+--- a/drivers/rtc/rtc-nct3018y.c
++++ b/drivers/rtc/rtc-nct3018y.c
+@@ -376,7 +376,7 @@ static long nct3018y_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int nct3018y_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index 4fa5c4ecdd5a34..b26c9bfad5d929 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -410,7 +410,7 @@ static long pcf85063_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int pcf85063_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-pcf8563.c b/drivers/rtc/rtc-pcf8563.c
+index b2611697fa5e3a..a2a2067b28a127 100644
+--- a/drivers/rtc/rtc-pcf8563.c
++++ b/drivers/rtc/rtc-pcf8563.c
+@@ -339,7 +339,7 @@ static long pcf8563_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int pcf8563_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/rtc/rtc-rv3028.c b/drivers/rtc/rtc-rv3028.c
+index 868d1b1eb0f42e..278841c2e47edf 100644
+--- a/drivers/rtc/rtc-rv3028.c
++++ b/drivers/rtc/rtc-rv3028.c
+@@ -740,7 +740,7 @@ static long rv3028_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+ 		if (clkout_rates[i] <= rate)
+ 			return clkout_rates[i];
+ 
+-	return 0;
++	return clkout_rates[0];
+ }
+ 
+ static int rv3028_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 88b625ba197802..4b7ffa840563c4 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -180,7 +180,7 @@ struct ap_card {
+ 	atomic64_t total_request_count;	/* # requests ever for this AP device.*/
+ };
+ 
+-#define TAPQ_CARD_HWINFO_MASK 0xFEFF0000FFFF0F0FUL
++#define TAPQ_CARD_HWINFO_MASK 0xFFFF0000FFFF0F0FUL
+ #define ASSOC_IDX_INVALID 0x10000
+ 
+ #define to_ap_card(x) container_of((x), struct ap_card, ap_dev.device)
+diff --git a/drivers/scsi/elx/efct/efct_lio.c b/drivers/scsi/elx/efct/efct_lio.c
+index 9ac69356b13e08..bd3d489e56ae93 100644
+--- a/drivers/scsi/elx/efct/efct_lio.c
++++ b/drivers/scsi/elx/efct/efct_lio.c
+@@ -382,7 +382,7 @@ efct_lio_sg_unmap(struct efct_io *io)
+ 		return;
+ 
+ 	dma_unmap_sg(&io->efct->pci->dev, cmd->t_data_sg,
+-		     ocp->seg_map_cnt, cmd->data_direction);
++		     cmd->t_data_nents, cmd->data_direction);
+ 	ocp->seg_map_cnt = 0;
+ }
+ 
+diff --git a/drivers/scsi/ibmvscsi_tgt/libsrp.c b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+index 8a0e28aec928e4..0ecad398ed3db0 100644
+--- a/drivers/scsi/ibmvscsi_tgt/libsrp.c
++++ b/drivers/scsi/ibmvscsi_tgt/libsrp.c
+@@ -184,7 +184,8 @@ static int srp_direct_data(struct ibmvscsis_cmd *cmd, struct srp_direct_buf *md,
+ 	err = rdma_io(cmd, sg, nsg, md, 1, dir, len);
+ 
+ 	if (dma_map)
+-		dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++		dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++			     DMA_BIDIRECTIONAL);
+ 
+ 	return err;
+ }
+@@ -256,7 +257,8 @@ static int srp_indirect_data(struct ibmvscsis_cmd *cmd, struct srp_cmd *srp_cmd,
+ 	err = rdma_io(cmd, sg, nsg, md, nmd, dir, len);
+ 
+ 	if (dma_map)
+-		dma_unmap_sg(iue->target->dev, sg, nsg, DMA_BIDIRECTIONAL);
++		dma_unmap_sg(iue->target->dev, sg, cmd->se_cmd.t_data_nents,
++			     DMA_BIDIRECTIONAL);
+ 
+ free_mem:
+ 	if (token && dma_map) {
+diff --git a/drivers/scsi/isci/request.c b/drivers/scsi/isci/request.c
+index 355a0bc0828e74..bb89a2e33eb407 100644
+--- a/drivers/scsi/isci/request.c
++++ b/drivers/scsi/isci/request.c
+@@ -2904,7 +2904,7 @@ static void isci_request_io_request_complete(struct isci_host *ihost,
+ 					 task->total_xfer_len, task->data_dir);
+ 		else  /* unmap the sgl dma addresses */
+ 			dma_unmap_sg(&ihost->pdev->dev, task->scatter,
+-				     request->num_sg_entries, task->data_dir);
++				     task->num_scatter, task->data_dir);
+ 		break;
+ 	case SAS_PROTOCOL_SMP: {
+ 		struct scatterlist *sg = &task->smp_task.smp_req;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 508861e88d9fe2..0f900ddb3047c7 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -10790,8 +10790,7 @@ _mpt3sas_fw_work(struct MPT3SAS_ADAPTER *ioc, struct fw_event_work *fw_event)
+ 		break;
+ 	case MPI2_EVENT_PCIE_TOPOLOGY_CHANGE_LIST:
+ 		_scsih_pcie_topology_change_event(ioc, fw_event);
+-		ioc->current_event = NULL;
+-		return;
++		break;
+ 	}
+ out:
+ 	fw_event_work_put(fw_event);
+diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
+index 6c46654b9cd928..15b3d9d55a4b69 100644
+--- a/drivers/scsi/mvsas/mv_sas.c
++++ b/drivers/scsi/mvsas/mv_sas.c
+@@ -818,7 +818,7 @@ static int mvs_task_prep(struct sas_task *task, struct mvs_info *mvi, int is_tmf
+ 	dev_printk(KERN_ERR, mvi->dev, "mvsas prep failed[%d]!\n", rc);
+ 	if (!sas_protocol_ata(task->task_proto))
+ 		if (n_elem)
+-			dma_unmap_sg(mvi->dev, task->scatter, n_elem,
++			dma_unmap_sg(mvi->dev, task->scatter, task->num_scatter,
+ 				     task->data_dir);
+ prep_out:
+ 	return rc;
+@@ -864,7 +864,7 @@ static void mvs_slot_task_free(struct mvs_info *mvi, struct sas_task *task,
+ 	if (!sas_protocol_ata(task->task_proto))
+ 		if (slot->n_elem)
+ 			dma_unmap_sg(mvi->dev, task->scatter,
+-				     slot->n_elem, task->data_dir);
++				     task->num_scatter, task->data_dir);
+ 
+ 	switch (task->task_proto) {
+ 	case SAS_PROTOCOL_SMP:
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index 518a252eb6aa05..c2527dd289d9eb 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -242,9 +242,11 @@ EXPORT_SYMBOL(scsi_change_queue_depth);
+  * 		specific SCSI device to determine if and when there is a
+  * 		need to adjust the queue depth on the device.
+  *
+- * Returns:	0 - No change needed, >0 - Adjust queue depth to this new depth,
+- * 		-1 - Drop back to untagged operation using host->cmd_per_lun
+- * 			as the untagged command depth
++ * Returns:
++ * * 0 - No change needed
++ * * >0 - Adjust queue depth to this new depth,
++ * * -1 - Drop back to untagged operation using host->cmd_per_lun as the
++ *   untagged command depth
+  *
+  * Lock Status:	None held on entry
+  *
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c75a806496d674..743b4c792ceb36 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2143,6 +2143,8 @@ static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
+ 		return 0;
+ 
+ 	iscsi_remove_conn(iscsi_dev_to_conn(dev));
++	iscsi_put_conn(iscsi_dev_to_conn(dev));
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index eeaa6af294b812..282000c761f8e0 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -4173,7 +4173,9 @@ static void sd_shutdown(struct device *dev)
+ 	if ((system_state != SYSTEM_RESTART &&
+ 	     sdkp->device->manage_system_start_stop) ||
+ 	    (system_state == SYSTEM_POWER_OFF &&
+-	     sdkp->device->manage_shutdown)) {
++	     sdkp->device->manage_shutdown) ||
++	    (system_state == SYSTEM_RUNNING &&
++	     sdkp->device->manage_runtime_start_stop)) {
+ 		sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ 		sd_start_stop_device(sdkp, 0);
+ 	}
+diff --git a/drivers/soc/qcom/pmic_glink.c b/drivers/soc/qcom/pmic_glink.c
+index 0a6d325b195cd1..c0a4be5df9267d 100644
+--- a/drivers/soc/qcom/pmic_glink.c
++++ b/drivers/soc/qcom/pmic_glink.c
+@@ -167,7 +167,10 @@ static int pmic_glink_rpmsg_callback(struct rpmsg_device *rpdev, void *data,
+ 	return 0;
+ }
+ 
+-static void pmic_glink_aux_release(struct device *dev) {}
++static void pmic_glink_aux_release(struct device *dev)
++{
++	of_node_put(dev->of_node);
++}
+ 
+ static int pmic_glink_add_aux_device(struct pmic_glink *pg,
+ 				     struct auxiliary_device *aux,
+@@ -181,8 +184,10 @@ static int pmic_glink_add_aux_device(struct pmic_glink *pg,
+ 	aux->dev.release = pmic_glink_aux_release;
+ 	device_set_of_node_from_dev(&aux->dev, parent);
+ 	ret = auxiliary_device_init(aux);
+-	if (ret)
++	if (ret) {
++		of_node_put(aux->dev.of_node);
+ 		return ret;
++	}
+ 
+ 	ret = auxiliary_device_add(aux);
+ 	if (ret)
+diff --git a/drivers/soc/qcom/qmi_encdec.c b/drivers/soc/qcom/qmi_encdec.c
+index bb09eff85cff3b..7660a960fb45ea 100644
+--- a/drivers/soc/qcom/qmi_encdec.c
++++ b/drivers/soc/qcom/qmi_encdec.c
+@@ -304,6 +304,8 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ 	const void *buf_src;
+ 	int encode_tlv = 0;
+ 	int rc;
++	u8 val8;
++	u16 val16;
+ 
+ 	if (!ei_array)
+ 		return 0;
+@@ -338,7 +340,6 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ 			break;
+ 
+ 		case QMI_DATA_LEN:
+-			memcpy(&data_len_value, buf_src, temp_ei->elem_size);
+ 			data_len_sz = temp_ei->elem_size == sizeof(u8) ?
+ 					sizeof(u8) : sizeof(u16);
+ 			/* Check to avoid out of range buffer access */
+@@ -348,8 +349,17 @@ static int qmi_encode(const struct qmi_elem_info *ei_array, void *out_buf,
+ 				       __func__);
+ 				return -ETOOSMALL;
+ 			}
+-			rc = qmi_encode_basic_elem(buf_dst, &data_len_value,
+-						   1, data_len_sz);
++			if (data_len_sz == sizeof(u8)) {
++				val8 = *(u8 *)buf_src;
++				data_len_value = (u32)val8;
++				rc = qmi_encode_basic_elem(buf_dst, &val8,
++							   1, data_len_sz);
++			} else {
++				val16 = *(u16 *)buf_src;
++				data_len_value = (u32)le16_to_cpu(val16);
++				rc = qmi_encode_basic_elem(buf_dst, &val16,
++							   1, data_len_sz);
++			}
+ 			UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ 						encoded_bytes, tlv_len,
+ 						encode_tlv, rc);
+@@ -523,14 +533,23 @@ static int qmi_decode_string_elem(const struct qmi_elem_info *ei_array,
+ 	u32 string_len = 0;
+ 	u32 string_len_sz = 0;
+ 	const struct qmi_elem_info *temp_ei = ei_array;
++	u8 val8;
++	u16 val16;
+ 
+ 	if (dec_level == 1) {
+ 		string_len = tlv_len;
+ 	} else {
+ 		string_len_sz = temp_ei->elem_len <= U8_MAX ?
+ 				sizeof(u8) : sizeof(u16);
+-		rc = qmi_decode_basic_elem(&string_len, buf_src,
+-					   1, string_len_sz);
++		if (string_len_sz == sizeof(u8)) {
++			rc = qmi_decode_basic_elem(&val8, buf_src,
++						   1, string_len_sz);
++			string_len = (u32)val8;
++		} else {
++			rc = qmi_decode_basic_elem(&val16, buf_src,
++						   1, string_len_sz);
++			string_len = (u32)val16;
++		}
+ 		decoded_bytes += rc;
+ 	}
+ 
+@@ -604,6 +623,9 @@ static int qmi_decode(const struct qmi_elem_info *ei_array, void *out_c_struct,
+ 	u32 decoded_bytes = 0;
+ 	const void *buf_src = in_buf;
+ 	int rc;
++	u8 val8;
++	u16 val16;
++	u32 val32;
+ 
+ 	while (decoded_bytes < in_buf_len) {
+ 		if (dec_level >= 2 && temp_ei->data_type == QMI_EOTI)
+@@ -642,9 +664,17 @@ static int qmi_decode(const struct qmi_elem_info *ei_array, void *out_c_struct,
+ 		if (temp_ei->data_type == QMI_DATA_LEN) {
+ 			data_len_sz = temp_ei->elem_size == sizeof(u8) ?
+ 					sizeof(u8) : sizeof(u16);
+-			rc = qmi_decode_basic_elem(&data_len_value, buf_src,
+-						   1, data_len_sz);
+-			memcpy(buf_dst, &data_len_value, sizeof(u32));
++			if (data_len_sz == sizeof(u8)) {
++				rc = qmi_decode_basic_elem(&val8, buf_src,
++							   1, data_len_sz);
++				data_len_value = (u32)val8;
++			} else {
++				rc = qmi_decode_basic_elem(&val16, buf_src,
++							   1, data_len_sz);
++				data_len_value = (u32)val16;
++			}
++			val32 = cpu_to_le32(data_len_value);
++			memcpy(buf_dst, &val32, sizeof(u32));
+ 			temp_ei = temp_ei + 1;
+ 			buf_dst = out_c_struct + temp_ei->offset;
+ 			tlv_len -= data_len_sz;
+@@ -746,9 +776,9 @@ void *qmi_encode_message(int type, unsigned int msg_id, size_t *len,
+ 
+ 	hdr = msg;
+ 	hdr->type = type;
+-	hdr->txn_id = txn_id;
+-	hdr->msg_id = msg_id;
+-	hdr->msg_len = msglen;
++	hdr->txn_id = cpu_to_le16(txn_id);
++	hdr->msg_id = cpu_to_le16(msg_id);
++	hdr->msg_len = cpu_to_le16(msglen);
+ 
+ 	*len = sizeof(*hdr) + msglen;
+ 
+diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
+index bc6d6379d8b1bb..6500f863aae5ca 100644
+--- a/drivers/soc/qcom/qmi_interface.c
++++ b/drivers/soc/qcom/qmi_interface.c
+@@ -400,7 +400,7 @@ static void qmi_invoke_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
+ 
+ 	for (handler = qmi->handlers; handler->fn; handler++) {
+ 		if (handler->type == hdr->type &&
+-		    handler->msg_id == hdr->msg_id)
++		    handler->msg_id == le16_to_cpu(hdr->msg_id))
+ 			break;
+ 	}
+ 
+@@ -488,7 +488,7 @@ static void qmi_handle_message(struct qmi_handle *qmi,
+ 	/* If this is a response, find the matching transaction handle */
+ 	if (hdr->type == QMI_RESPONSE) {
+ 		mutex_lock(&qmi->txn_lock);
+-		txn = idr_find(&qmi->txns, hdr->txn_id);
++		txn = idr_find(&qmi->txns, le16_to_cpu(hdr->txn_id));
+ 
+ 		/* Ignore unexpected responses */
+ 		if (!txn) {
+@@ -514,7 +514,7 @@ static void qmi_handle_message(struct qmi_handle *qmi,
+ 	} else {
+ 		/* Create a txn based on the txn_id of the incoming message */
+ 		memset(&tmp_txn, 0, sizeof(tmp_txn));
+-		tmp_txn.id = hdr->txn_id;
++		tmp_txn.id = le16_to_cpu(hdr->txn_id);
+ 
+ 		qmi_invoke_handler(qmi, sq, &tmp_txn, buf, len);
+ 	}
+diff --git a/drivers/soc/tegra/cbb/tegra234-cbb.c b/drivers/soc/tegra/cbb/tegra234-cbb.c
+index c74629af9bb5d0..1da31ead2b5ebc 100644
+--- a/drivers/soc/tegra/cbb/tegra234-cbb.c
++++ b/drivers/soc/tegra/cbb/tegra234-cbb.c
+@@ -185,6 +185,8 @@ static void tegra234_cbb_error_clear(struct tegra_cbb *cbb)
+ {
+ 	struct tegra234_cbb *priv = to_tegra234_cbb(cbb);
+ 
++	writel(0, priv->mon + FABRIC_MN_MASTER_ERR_FORCE_0);
++
+ 	writel(0x3f, priv->mon + FABRIC_MN_MASTER_ERR_STATUS_0);
+ 	dsb(sy);
+ }
+diff --git a/drivers/soundwire/debugfs.c b/drivers/soundwire/debugfs.c
+index 3099ea074f10e2..230a51489486e1 100644
+--- a/drivers/soundwire/debugfs.c
++++ b/drivers/soundwire/debugfs.c
+@@ -291,6 +291,9 @@ static int cmd_go(void *data, u64 value)
+ 
+ 	finish_t = ktime_get();
+ 
++	dev_dbg(&slave->dev, "command completed, num_byte %zu status %d, time %lld ms\n",
++		num_bytes, ret, div_u64(finish_t - start_t, NSEC_PER_MSEC));
++
+ out:
+ 	if (fw)
+ 		release_firmware(fw);
+@@ -298,9 +301,6 @@ static int cmd_go(void *data, u64 value)
+ 	pm_runtime_mark_last_busy(&slave->dev);
+ 	pm_runtime_put(&slave->dev);
+ 
+-	dev_dbg(&slave->dev, "command completed, num_byte %zu status %d, time %lld ms\n",
+-		num_bytes, ret, div_u64(finish_t - start_t, NSEC_PER_MSEC));
+-
+ 	return ret;
+ }
+ DEFINE_DEBUGFS_ATTRIBUTE(cmd_go_fops, NULL,
+diff --git a/drivers/soundwire/mipi_disco.c b/drivers/soundwire/mipi_disco.c
+index 65afb28ef8fab1..c69b78cd0b6209 100644
+--- a/drivers/soundwire/mipi_disco.c
++++ b/drivers/soundwire/mipi_disco.c
+@@ -451,10 +451,10 @@ int sdw_slave_read_prop(struct sdw_slave *slave)
+ 			"mipi-sdw-highPHY-capable");
+ 
+ 	prop->paging_support = mipi_device_property_read_bool(dev,
+-			"mipi-sdw-paging-support");
++			"mipi-sdw-paging-supported");
+ 
+ 	prop->bank_delay_support = mipi_device_property_read_bool(dev,
+-			"mipi-sdw-bank-delay-support");
++			"mipi-sdw-bank-delay-supported");
+ 
+ 	device_property_read_u32(dev,
+ 			"mipi-sdw-port15-read-behavior", &prop->p15_behave);
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index a4bea742b5d9a5..38c9dbd3560652 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1510,7 +1510,7 @@ static int _sdw_prepare_stream(struct sdw_stream_runtime *stream,
+ 		if (ret < 0) {
+ 			dev_err(bus->dev, "Prepare port(s) failed ret = %d\n",
+ 				ret);
+-			return ret;
++			goto restore_params;
+ 		}
+ 	}
+ 
+diff --git a/drivers/spi/spi-cs42l43.c b/drivers/spi/spi-cs42l43.c
+index b28a840b3b04b5..14307dd800b744 100644
+--- a/drivers/spi/spi-cs42l43.c
++++ b/drivers/spi/spi-cs42l43.c
+@@ -295,7 +295,7 @@ static struct spi_board_info *cs42l43_create_bridge_amp(struct cs42l43_spi *priv
+ 	struct spi_board_info *info;
+ 
+ 	if (spkid >= 0) {
+-		props = devm_kmalloc(priv->dev, sizeof(*props), GFP_KERNEL);
++		props = devm_kcalloc(priv->dev, 2, sizeof(*props), GFP_KERNEL);
+ 		if (!props)
+ 			return NULL;
+ 
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index e63c77e418231c..f3d5765054132c 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -1273,7 +1273,9 @@ static int nxp_fspi_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return dev_err_probe(dev, ret, "Failed to request irq\n");
+ 
+-	devm_mutex_init(dev, &f->lock);
++	ret = devm_mutex_init(dev, &f->lock);
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to initialize lock\n");
+ 
+ 	ctlr->bus_num = -1;
+ 	ctlr->num_chipselect = NXP_FSPI_MAX_CHIPSELECT;
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index da3517d7102dce..dc22b98bdbcc34 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -2069,9 +2069,15 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	struct reset_control *rst;
+ 	struct device_node *np = pdev->dev.of_node;
++	const struct stm32_spi_cfg *cfg;
+ 	bool device_mode;
+ 	int ret;
+-	const struct stm32_spi_cfg *cfg = of_device_get_match_data(&pdev->dev);
++
++	cfg = of_device_get_match_data(&pdev->dev);
++	if (!cfg) {
++		dev_err(&pdev->dev, "Failed to get match data for platform\n");
++		return -ENODEV;
++	}
+ 
+ 	device_mode = of_property_read_bool(np, "spi-slave");
+ 	if (!cfg->has_device_mode && device_mode) {
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index da9c64152a606d..39bced40006501 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -692,6 +692,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
+ 	return info;
+ 
+ release_framebuf:
++	fb_deferred_io_cleanup(info);
+ 	framebuffer_release(info);
+ 
+ alloc_fail:
+diff --git a/drivers/staging/gpib/cb7210/cb7210.c b/drivers/staging/gpib/cb7210/cb7210.c
+index 298ed306189df5..3e2397898a9ba2 100644
+--- a/drivers/staging/gpib/cb7210/cb7210.c
++++ b/drivers/staging/gpib/cb7210/cb7210.c
+@@ -1184,8 +1184,7 @@ struct local_info {
+ static int cb_gpib_probe(struct pcmcia_device *link)
+ {
+ 	struct local_info *info;
+-
+-//	int ret, i;
++	int ret;
+ 
+ 	/* Allocate space for private device-specific data */
+ 	info = kzalloc(sizeof(*info), GFP_KERNEL);
+@@ -1211,8 +1210,16 @@ static int cb_gpib_probe(struct pcmcia_device *link)
+ 
+ 	/* Register with Card Services */
+ 	curr_dev = link;
+-	return cb_gpib_config(link);
+-} /* gpib_attach */
++	ret = cb_gpib_config(link);
++	if (ret)
++		goto free_info;
++
++	return 0;
++
++free_info:
++	kfree(info);
++	return ret;
++}
+ 
+ /*
+  *   This deletes a driver "instance".  The device is de-registered
+diff --git a/drivers/staging/gpib/common/gpib_os.c b/drivers/staging/gpib/common/gpib_os.c
+index a193d64db0337e..4cb2683caf9966 100644
+--- a/drivers/staging/gpib/common/gpib_os.c
++++ b/drivers/staging/gpib/common/gpib_os.c
+@@ -831,7 +831,7 @@ static int board_type_ioctl(struct gpib_file_private *file_priv,
+ 	retval = copy_from_user(&cmd, (void __user *)arg,
+ 				sizeof(struct gpib_board_type_ioctl));
+ 	if (retval)
+-		return retval;
++		return -EFAULT;
+ 
+ 	for (list_ptr = registered_drivers.next; list_ptr != &registered_drivers;
+ 	     list_ptr = list_ptr->next) {
+@@ -1774,7 +1774,7 @@ static int query_board_rsv_ioctl(struct gpib_board *board, unsigned long arg)
+ 
+ static int board_info_ioctl(const struct gpib_board *board, unsigned long arg)
+ {
+-	struct gpib_board_info_ioctl info;
++	struct gpib_board_info_ioctl info = { };
+ 	int retval;
+ 
+ 	info.pad = board->pad;
+diff --git a/drivers/staging/greybus/gbphy.c b/drivers/staging/greybus/gbphy.c
+index 6adcad28663305..60cf09a302a7e3 100644
+--- a/drivers/staging/greybus/gbphy.c
++++ b/drivers/staging/greybus/gbphy.c
+@@ -102,8 +102,8 @@ static int gbphy_dev_uevent(const struct device *dev, struct kobj_uevent_env *en
+ }
+ 
+ static const struct gbphy_device_id *
+-gbphy_dev_match_id(struct gbphy_device *gbphy_dev,
+-		   struct gbphy_driver *gbphy_drv)
++gbphy_dev_match_id(const struct gbphy_device *gbphy_dev,
++		   const struct gbphy_driver *gbphy_drv)
+ {
+ 	const struct gbphy_device_id *id = gbphy_drv->id_table;
+ 
+@@ -119,7 +119,7 @@ gbphy_dev_match_id(struct gbphy_device *gbphy_dev,
+ 
+ static int gbphy_dev_match(struct device *dev, const struct device_driver *drv)
+ {
+-	struct gbphy_driver *gbphy_drv = to_gbphy_driver(drv);
++	const struct gbphy_driver *gbphy_drv = to_gbphy_driver(drv);
+ 	struct gbphy_device *gbphy_dev = to_gbphy_dev(dev);
+ 	const struct gbphy_device_id *id;
+ 
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index 5f59519ac8e28a..964cc3bcc0ac00 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -1272,14 +1272,15 @@ static int gmin_get_config_var(struct device *maindev,
+ 	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		status = efi.get_variable(var16, &GMIN_CFG_VAR_EFI_GUID, NULL,
+ 					  (unsigned long *)out_len, out);
+-	if (status == EFI_SUCCESS)
++	if (status == EFI_SUCCESS) {
+ 		dev_info(maindev, "found EFI entry for '%s'\n", var8);
+-	else if (is_gmin)
++		return 0;
++	}
++	if (is_gmin)
+ 		dev_info(maindev, "Failed to find EFI gmin variable %s\n", var8);
+ 	else
+ 		dev_info(maindev, "Failed to find EFI variable %s\n", var8);
+-
+-	return ret;
++	return -ENOENT;
+ }
+ 
+ int gmin_get_var_int(struct device *dev, bool is_gmin, const char *var, int def)
+diff --git a/drivers/staging/nvec/nvec_power.c b/drivers/staging/nvec/nvec_power.c
+index e0e67a3eb7222b..2faab9fdedef70 100644
+--- a/drivers/staging/nvec/nvec_power.c
++++ b/drivers/staging/nvec/nvec_power.c
+@@ -194,7 +194,7 @@ static int nvec_power_bat_notifier(struct notifier_block *nb,
+ 		break;
+ 	case MANUFACTURER:
+ 		memcpy(power->bat_manu, &res->plc, res->length - 2);
+-		power->bat_model[res->length - 2] = '\0';
++		power->bat_manu[res->length - 2] = '\0';
+ 		break;
+ 	case MODEL:
+ 		memcpy(power->bat_model, &res->plc, res->length - 2);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 50adfb8b335bf4..f07878c50f142e 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -4340,7 +4340,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ 	hba->uic_async_done = NULL;
+ 	if (reenable_intr)
+ 		ufshcd_enable_intr(hba, UIC_COMMAND_COMPL);
+-	if (ret) {
++	if (ret && !hba->pm_op_in_progress) {
+ 		ufshcd_set_link_broken(hba);
+ 		ufshcd_schedule_eh_work(hba);
+ 	}
+@@ -4348,6 +4348,14 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 	mutex_unlock(&hba->uic_cmd_mutex);
+ 
++	/*
++	 * If the h8 exit fails during the runtime resume process, it becomes
++	 * stuck and cannot be recovered through the error handler.  To fix
++	 * this, use link recovery instead of the error handler.
++	 */
++	if (ret && hba->pm_op_in_progress)
++		ret = ufshcd_link_recovery(hba);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index 341408410ed934..41118bba91978d 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -681,6 +681,10 @@ int __init early_xdbc_setup_hardware(void)
+ 
+ 		xdbc.table_base = NULL;
+ 		xdbc.out_buf = NULL;
++
++		early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
++		xdbc.xhci_base = NULL;
++		xdbc.xhci_length = 0;
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 8dbc132a505e39..a893a29ebfac5e 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2489,6 +2489,11 @@ int composite_os_desc_req_prepare(struct usb_composite_dev *cdev,
+ 	if (!cdev->os_desc_req->buf) {
+ 		ret = -ENOMEM;
+ 		usb_ep_free_request(ep0, cdev->os_desc_req);
++		/*
++		 * Set os_desc_req to NULL so that composite_dev_cleanup()
++		 * will not try to free it again.
++		 */
++		cdev->os_desc_req = NULL;
+ 		goto end;
+ 	}
+ 	cdev->os_desc_req->context = cdev;
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 97a62b9264150f..8e1d1e8840503e 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -1278,18 +1278,19 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 
+ 	if (!hidg->workqueue) {
+ 		status = -ENOMEM;
+-		goto fail;
++		goto fail_free_descs;
+ 	}
+ 
+ 	/* create char device */
+ 	cdev_init(&hidg->cdev, &f_hidg_fops);
+ 	status = cdev_device_add(&hidg->cdev, &hidg->dev);
+ 	if (status)
+-		goto fail_free_descs;
++		goto fail_free_all;
+ 
+ 	return 0;
+-fail_free_descs:
++fail_free_all:
+ 	destroy_workqueue(hidg->workqueue);
++fail_free_descs:
+ 	usb_free_all_descriptors(f);
+ fail:
+ 	ERROR(f->config->cdev, "hidg_bind FAILED\n");
+diff --git a/drivers/usb/gadget/function/uvc_configfs.c b/drivers/usb/gadget/function/uvc_configfs.c
+index f131943254a4c4..a4a2d3dcb0d666 100644
+--- a/drivers/usb/gadget/function/uvc_configfs.c
++++ b/drivers/usb/gadget/function/uvc_configfs.c
+@@ -2916,8 +2916,15 @@ static struct config_group *uvcg_framebased_make(struct config_group *group,
+ 		'H',  '2',  '6',  '4', 0x00, 0x00, 0x10, 0x00,
+ 		0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71
+ 	};
++	struct uvcg_color_matching *color_match;
++	struct config_item *streaming;
+ 	struct uvcg_framebased *h;
+ 
++	streaming = group->cg_item.ci_parent;
++	color_match = uvcg_format_get_default_color_match(streaming);
++	if (!color_match)
++		return ERR_PTR(-EINVAL);
++
+ 	h = kzalloc(sizeof(*h), GFP_KERNEL);
+ 	if (!h)
+ 		return ERR_PTR(-ENOMEM);
+@@ -2936,6 +2943,9 @@ static struct config_group *uvcg_framebased_make(struct config_group *group,
+ 
+ 	INIT_LIST_HEAD(&h->fmt.frames);
+ 	h->fmt.type = UVCG_FRAMEBASED;
++
++	h->fmt.color_matching = color_match;
++	color_match->refcnt++;
+ 	config_group_init_type_name(&h->fmt.group, name,
+ 				    &uvcg_framebased_type);
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c79d5ed48a08b7..5eb51797de326a 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -152,7 +152,7 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+ 	int			ret;
+ 	int			irq;
+ 	struct xhci_plat_priv	*priv = NULL;
+-	bool			of_match;
++	const struct of_device_id *of_match;
+ 
+ 	if (usb_disabled())
+ 		return -ENODEV;
+diff --git a/drivers/usb/misc/apple-mfi-fastcharge.c b/drivers/usb/misc/apple-mfi-fastcharge.c
+index ac8695195c13c8..8e852f4b8262e6 100644
+--- a/drivers/usb/misc/apple-mfi-fastcharge.c
++++ b/drivers/usb/misc/apple-mfi-fastcharge.c
+@@ -44,6 +44,7 @@ MODULE_DEVICE_TABLE(usb, mfi_fc_id_table);
+ struct mfi_device {
+ 	struct usb_device *udev;
+ 	struct power_supply *battery;
++	struct power_supply_desc battery_desc;
+ 	int charge_type;
+ };
+ 
+@@ -178,6 +179,7 @@ static int mfi_fc_probe(struct usb_device *udev)
+ {
+ 	struct power_supply_config battery_cfg = {};
+ 	struct mfi_device *mfi = NULL;
++	char *battery_name;
+ 	int err;
+ 
+ 	if (!mfi_fc_match(udev))
+@@ -187,23 +189,38 @@ static int mfi_fc_probe(struct usb_device *udev)
+ 	if (!mfi)
+ 		return -ENOMEM;
+ 
++	battery_name = kasprintf(GFP_KERNEL, "apple_mfi_fastcharge_%d-%d",
++				 udev->bus->busnum, udev->devnum);
++	if (!battery_name) {
++		err = -ENOMEM;
++		goto err_free_mfi;
++	}
++
++	mfi->battery_desc = apple_mfi_fc_desc;
++	mfi->battery_desc.name = battery_name;
++
+ 	battery_cfg.drv_data = mfi;
+ 
+ 	mfi->charge_type = POWER_SUPPLY_CHARGE_TYPE_TRICKLE;
+ 	mfi->battery = power_supply_register(&udev->dev,
+-						&apple_mfi_fc_desc,
++						&mfi->battery_desc,
+ 						&battery_cfg);
+ 	if (IS_ERR(mfi->battery)) {
+ 		dev_err(&udev->dev, "Can't register battery\n");
+ 		err = PTR_ERR(mfi->battery);
+-		kfree(mfi);
+-		return err;
++		goto err_free_name;
+ 	}
+ 
+ 	mfi->udev = usb_get_dev(udev);
+ 	dev_set_drvdata(&udev->dev, mfi);
+ 
+ 	return 0;
++
++err_free_name:
++	kfree(battery_name);
++err_free_mfi:
++	kfree(mfi);
++	return err;
+ }
+ 
+ static void mfi_fc_disconnect(struct usb_device *udev)
+@@ -213,6 +230,7 @@ static void mfi_fc_disconnect(struct usb_device *udev)
+ 	mfi = dev_get_drvdata(&udev->dev);
+ 	if (mfi->battery)
+ 		power_supply_unregister(mfi->battery);
++	kfree(mfi->battery_desc.name);
+ 	dev_set_drvdata(&udev->dev, NULL);
+ 	usb_put_dev(mfi->udev);
+ 	kfree(mfi);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 147ca50c94beec..e5cd3309342364 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2346,6 +2346,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff),			/* Foxconn T99W651 RNDIS */
+ 	  .driver_info = RSVD(5) | RSVD(6) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe15f, 0xff),                     /* Foxconn T99W709 */
++	  .driver_info = RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff),                     /* Foxconn T99W640 MBIM */
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(0x1508, 0x1001),						/* Fibocom NL668 (IOT version) */
+diff --git a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
+index d33e3f2dd1d80f..47e8dd5b255b2b 100644
+--- a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
++++ b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
+@@ -133,17 +133,30 @@ static int yoga_c630_ucsi_probe(struct auxiliary_device *adev,
+ 
+ 	ret = yoga_c630_ec_register_notify(ec, &uec->nb);
+ 	if (ret)
+-		return ret;
++		goto err_destroy;
++
++	ret = ucsi_register(uec->ucsi);
++	if (ret)
++		goto err_unregister;
++
++	return 0;
+ 
+-	return ucsi_register(uec->ucsi);
++err_unregister:
++	yoga_c630_ec_unregister_notify(uec->ec, &uec->nb);
++
++err_destroy:
++	ucsi_destroy(uec->ucsi);
++
++	return ret;
+ }
+ 
+ static void yoga_c630_ucsi_remove(struct auxiliary_device *adev)
+ {
+ 	struct yoga_c630_ucsi *uec = auxiliary_get_drvdata(adev);
+ 
+-	yoga_c630_ec_unregister_notify(uec->ec, &uec->nb);
+ 	ucsi_unregister(uec->ucsi);
++	yoga_c630_ec_unregister_notify(uec->ec, &uec->nb);
++	ucsi_destroy(uec->ucsi);
+ }
+ 
+ static const struct auxiliary_device_id yoga_c630_ucsi_id_table[] = {
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 61424342c09641..c7a20278bc3ca5 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -908,6 +908,9 @@ void mlx5_vdpa_destroy_mr_resources(struct mlx5_vdpa_dev *mvdev)
+ {
+ 	struct mlx5_vdpa_mr_resources *mres = &mvdev->mres;
+ 
++	if (!mres->wq_gc)
++		return;
++
+ 	atomic_set(&mres->shutdown, 1);
+ 
+ 	flush_delayed_work(&mres->gc_dwork_ent);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index cccc49a08a1abf..0ed2fc28e1cefe 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -2491,7 +2491,7 @@ static void mlx5_vdpa_set_vq_num(struct vdpa_device *vdev, u16 idx, u32 num)
+         }
+ 
+ 	mvq = &ndev->vqs[idx];
+-	ndev->needs_teardown = num != mvq->num_ent;
++	ndev->needs_teardown |= num != mvq->num_ent;
+ 	mvq->num_ent = num;
+ }
+ 
+@@ -3432,15 +3432,17 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
+ 
+ 	ndev = to_mlx5_vdpa_ndev(mvdev);
+ 
++	/* Functions called here should be able to work with
++	 * uninitialized resources.
++	 */
+ 	free_fixed_resources(ndev);
+ 	mlx5_vdpa_clean_mrs(mvdev);
+ 	mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
+-	mlx5_cmd_cleanup_async_ctx(&mvdev->async_ctx);
+-
+ 	if (!is_zero_ether_addr(ndev->config.mac)) {
+ 		pfmdev = pci_get_drvdata(pci_physfn(mvdev->mdev->pdev));
+ 		mlx5_mpfs_del_mac(pfmdev, ndev->config.mac);
+ 	}
++	mlx5_cmd_cleanup_async_ctx(&mvdev->async_ctx);
+ 	mlx5_vdpa_free_resources(&ndev->mvdev);
+ 	free_irqs(ndev);
+ 	kfree(ndev->event_cbs);
+@@ -3888,6 +3890,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 	mvdev->actual_features =
+ 			(device_features & BIT_ULL(VIRTIO_F_VERSION_1));
+ 
++	mlx5_cmd_init_async_ctx(mdev, &mvdev->async_ctx);
++
+ 	ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
+ 	ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
+ 	if (!ndev->vqs || !ndev->event_cbs) {
+@@ -3960,8 +3964,6 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 		ndev->rqt_size = 1;
+ 	}
+ 
+-	mlx5_cmd_init_async_ctx(mdev, &mvdev->async_ctx);
+-
+ 	ndev->mvdev.mlx_features = device_features;
+ 	mvdev->vdev.dma_dev = &mdev->pdev->dev;
+ 	err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 6a9a3735131036..04620bb77203d0 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -2216,6 +2216,7 @@ static void vduse_exit(void)
+ 	cdev_del(&vduse_ctrl_cdev);
+ 	unregister_chrdev_region(vduse_major, VDUSE_DEV_MAX);
+ 	class_unregister(&vduse_class);
++	idr_destroy(&vduse_idr);
+ }
+ module_exit(vduse_exit);
+ 
+diff --git a/drivers/vfio/device_cdev.c b/drivers/vfio/device_cdev.c
+index 281a8dc3ed4974..480cac3a0c274f 100644
+--- a/drivers/vfio/device_cdev.c
++++ b/drivers/vfio/device_cdev.c
+@@ -60,22 +60,50 @@ static void vfio_df_get_kvm_safe(struct vfio_device_file *df)
+ 	spin_unlock(&df->kvm_ref_lock);
+ }
+ 
++static int vfio_df_check_token(struct vfio_device *device,
++			       const struct vfio_device_bind_iommufd *bind)
++{
++	uuid_t uuid;
++
++	if (!device->ops->match_token_uuid) {
++		if (bind->flags & VFIO_DEVICE_BIND_FLAG_TOKEN)
++			return -EINVAL;
++		return 0;
++	}
++
++	if (!(bind->flags & VFIO_DEVICE_BIND_FLAG_TOKEN))
++		return device->ops->match_token_uuid(device, NULL);
++
++	if (copy_from_user(&uuid, u64_to_user_ptr(bind->token_uuid_ptr),
++			   sizeof(uuid)))
++		return -EFAULT;
++	return device->ops->match_token_uuid(device, &uuid);
++}
++
+ long vfio_df_ioctl_bind_iommufd(struct vfio_device_file *df,
+ 				struct vfio_device_bind_iommufd __user *arg)
+ {
++	const u32 VALID_FLAGS = VFIO_DEVICE_BIND_FLAG_TOKEN;
+ 	struct vfio_device *device = df->device;
+ 	struct vfio_device_bind_iommufd bind;
+ 	unsigned long minsz;
++	u32 user_size;
+ 	int ret;
+ 
+ 	static_assert(__same_type(arg->out_devid, df->devid));
+ 
+ 	minsz = offsetofend(struct vfio_device_bind_iommufd, out_devid);
+ 
+-	if (copy_from_user(&bind, arg, minsz))
+-		return -EFAULT;
++	ret = get_user(user_size, &arg->argsz);
++	if (ret)
++		return ret;
++	if (user_size < minsz)
++		return -EINVAL;
++	ret = copy_struct_from_user(&bind, minsz, arg, user_size);
++	if (ret)
++		return ret;
+ 
+-	if (bind.argsz < minsz || bind.flags || bind.iommufd < 0)
++	if (bind.iommufd < 0 || bind.flags & ~VALID_FLAGS)
+ 		return -EINVAL;
+ 
+ 	/* BIND_IOMMUFD only allowed for cdev fds */
+@@ -93,6 +121,10 @@ long vfio_df_ioctl_bind_iommufd(struct vfio_device_file *df,
+ 		goto out_unlock;
+ 	}
+ 
++	ret = vfio_df_check_token(device, &bind);
++	if (ret)
++		goto out_unlock;
++
+ 	df->iommufd = iommufd_ctx_from_fd(bind.iommufd);
+ 	if (IS_ERR(df->iommufd)) {
+ 		ret = PTR_ERR(df->iommufd);
+diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
+index c321d442f0da09..c376a6279de0e6 100644
+--- a/drivers/vfio/group.c
++++ b/drivers/vfio/group.c
+@@ -192,11 +192,10 @@ static int vfio_df_group_open(struct vfio_device_file *df)
+ 		 * implies they expected translation to exist
+ 		 */
+ 		if (!capable(CAP_SYS_RAWIO) ||
+-		    vfio_iommufd_device_has_compat_ioas(device, df->iommufd))
++		    vfio_iommufd_device_has_compat_ioas(device, df->iommufd)) {
+ 			ret = -EPERM;
+-		else
+-			ret = 0;
+-		goto out_put_kvm;
++			goto out_put_kvm;
++		}
+ 	}
+ 
+ 	ret = vfio_df_open(df);
+diff --git a/drivers/vfio/iommufd.c b/drivers/vfio/iommufd.c
+index c8c3a2d53f86e1..a38d262c602809 100644
+--- a/drivers/vfio/iommufd.c
++++ b/drivers/vfio/iommufd.c
+@@ -25,6 +25,10 @@ int vfio_df_iommufd_bind(struct vfio_device_file *df)
+ 
+ 	lockdep_assert_held(&vdev->dev_set->lock);
+ 
++	/* Returns 0 to permit device opening under noiommu mode */
++	if (vfio_device_is_noiommu(vdev))
++		return 0;
++
+ 	return vdev->ops->bind_iommufd(vdev, ictx, &df->devid);
+ }
+ 
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index 2149f49aeec7f8..397f5e44513639 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -1583,6 +1583,7 @@ static const struct vfio_device_ops hisi_acc_vfio_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
+index 93f894fe60d221..7ec47e736a8e5a 100644
+--- a/drivers/vfio/pci/mlx5/main.c
++++ b/drivers/vfio/pci/mlx5/main.c
+@@ -1372,6 +1372,7 @@ static const struct vfio_device_ops mlx5vf_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
+index e5ac39c4cc6b6f..d95761dcdd58c4 100644
+--- a/drivers/vfio/pci/nvgrace-gpu/main.c
++++ b/drivers/vfio/pci/nvgrace-gpu/main.c
+@@ -696,6 +696,7 @@ static const struct vfio_device_ops nvgrace_gpu_pci_ops = {
+ 	.mmap		= nvgrace_gpu_mmap,
+ 	.request	= vfio_pci_core_request,
+ 	.match		= vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd	= vfio_iommufd_physical_bind,
+ 	.unbind_iommufd	= vfio_iommufd_physical_unbind,
+ 	.attach_ioas	= vfio_iommufd_physical_attach_ioas,
+@@ -715,6 +716,7 @@ static const struct vfio_device_ops nvgrace_gpu_pci_core_ops = {
+ 	.mmap		= vfio_pci_core_mmap,
+ 	.request	= vfio_pci_core_request,
+ 	.match		= vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd	= vfio_iommufd_physical_bind,
+ 	.unbind_iommufd	= vfio_iommufd_physical_unbind,
+ 	.attach_ioas	= vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/pds/vfio_dev.c b/drivers/vfio/pci/pds/vfio_dev.c
+index 76a80ae7087b51..f3ccb0008f6752 100644
+--- a/drivers/vfio/pci/pds/vfio_dev.c
++++ b/drivers/vfio/pci/pds/vfio_dev.c
+@@ -201,9 +201,11 @@ static const struct vfio_device_ops pds_vfio_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
++	.detach_ioas = vfio_iommufd_physical_detach_ioas,
+ };
+ 
+ const struct vfio_device_ops *pds_vfio_ops_info(void)
+diff --git a/drivers/vfio/pci/qat/main.c b/drivers/vfio/pci/qat/main.c
+index 845ed15b67718c..5cce6b0b8d2f3e 100644
+--- a/drivers/vfio/pci/qat/main.c
++++ b/drivers/vfio/pci/qat/main.c
+@@ -614,6 +614,7 @@ static const struct vfio_device_ops qat_vf_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 5ba39f7623bb76..ac10f14417f2f3 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -138,6 +138,7 @@ static const struct vfio_device_ops vfio_pci_ops = {
+ 	.mmap		= vfio_pci_core_mmap,
+ 	.request	= vfio_pci_core_request,
+ 	.match		= vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd	= vfio_iommufd_physical_bind,
+ 	.unbind_iommufd	= vfio_iommufd_physical_unbind,
+ 	.attach_ioas	= vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 6328c3a05bcdd4..fad410cf91bc22 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -1821,9 +1821,13 @@ void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count)
+ }
+ EXPORT_SYMBOL_GPL(vfio_pci_core_request);
+ 
+-static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+-				      bool vf_token, uuid_t *uuid)
++int vfio_pci_core_match_token_uuid(struct vfio_device *core_vdev,
++				   const uuid_t *uuid)
++
+ {
++	struct vfio_pci_core_device *vdev =
++		container_of(core_vdev, struct vfio_pci_core_device, vdev);
++
+ 	/*
+ 	 * There's always some degree of trust or collaboration between SR-IOV
+ 	 * PF and VFs, even if just that the PF hosts the SR-IOV capability and
+@@ -1854,7 +1858,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 		bool match;
+ 
+ 		if (!pf_vdev) {
+-			if (!vf_token)
++			if (!uuid)
+ 				return 0; /* PF is not vfio-pci, no VF token */
+ 
+ 			pci_info_ratelimited(vdev->pdev,
+@@ -1862,7 +1866,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (!vf_token) {
++		if (!uuid) {
+ 			pci_info_ratelimited(vdev->pdev,
+ 				"VF token required to access device\n");
+ 			return -EACCES;
+@@ -1880,7 +1884,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 	} else if (vdev->vf_token) {
+ 		mutex_lock(&vdev->vf_token->lock);
+ 		if (vdev->vf_token->users) {
+-			if (!vf_token) {
++			if (!uuid) {
+ 				mutex_unlock(&vdev->vf_token->lock);
+ 				pci_info_ratelimited(vdev->pdev,
+ 					"VF token required to access device\n");
+@@ -1893,12 +1897,12 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 					"Incorrect VF token provided for device\n");
+ 				return -EACCES;
+ 			}
+-		} else if (vf_token) {
++		} else if (uuid) {
+ 			uuid_copy(&vdev->vf_token->uuid, uuid);
+ 		}
+ 
+ 		mutex_unlock(&vdev->vf_token->lock);
+-	} else if (vf_token) {
++	} else if (uuid) {
+ 		pci_info_ratelimited(vdev->pdev,
+ 			"VF token incorrectly provided, not a PF or VF\n");
+ 		return -EINVAL;
+@@ -1906,6 +1910,7 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(vfio_pci_core_match_token_uuid);
+ 
+ #define VF_TOKEN_ARG "vf_token="
+ 
+@@ -1952,7 +1957,8 @@ int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf)
+ 		}
+ 	}
+ 
+-	ret = vfio_pci_validate_vf_token(vdev, vf_token, &uuid);
++	ret = core_vdev->ops->match_token_uuid(core_vdev,
++					       vf_token ? &uuid : NULL);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -2149,7 +2155,7 @@ int vfio_pci_core_register_device(struct vfio_pci_core_device *vdev)
+ 		return -EBUSY;
+ 	}
+ 
+-	if (pci_is_root_bus(pdev->bus)) {
++	if (pci_is_root_bus(pdev->bus) || pdev->is_virtfn) {
+ 		ret = vfio_assign_device_set(&vdev->vdev, vdev);
+ 	} else if (!pci_probe_reset_slot(pdev->slot)) {
+ 		ret = vfio_assign_device_set(&vdev->vdev, pdev->slot);
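
For orientation, the userspace flow is unchanged by this refactor: the VF token still arrives in the match string, which vfio_pci_core_match() parses and now forwards through the new ->match_token_uuid() op instead of the old static helper. A minimal sketch of that flow; the device address and UUID are placeholders, not values from this patch:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Request a VF from its VFIO group with the PF's token appended.
     * "vf_token=<uuid>" is what vfio_pci_core_match() strips off and
     * hands to ->match_token_uuid() as a parsed uuid. */
    int get_vf_fd(int group_fd)
    {
            char buf[80];

            snprintf(buf, sizeof(buf), "0000:01:10.0 vf_token=%s",
                     "2ab74924-c335-45f4-9b16-8569e5b08258");
            return ioctl(group_fd, VFIO_GROUP_GET_DEVICE_FD, buf);
    }
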
+diff --git a/drivers/vfio/pci/virtio/main.c b/drivers/vfio/pci/virtio/main.c
+index 515fe1b9f94d80..8084f3e36a9f70 100644
+--- a/drivers/vfio/pci/virtio/main.c
++++ b/drivers/vfio/pci/virtio/main.c
+@@ -94,6 +94,7 @@ static const struct vfio_device_ops virtiovf_vfio_pci_lm_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+@@ -114,6 +115,7 @@ static const struct vfio_device_ops virtiovf_vfio_pci_tran_lm_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+@@ -134,6 +136,7 @@ static const struct vfio_device_ops virtiovf_vfio_pci_ops = {
+ 	.mmap = vfio_pci_core_mmap,
+ 	.request = vfio_pci_core_request,
+ 	.match = vfio_pci_core_match,
++	.match_token_uuid = vfio_pci_core_match_token_uuid,
+ 	.bind_iommufd = vfio_iommufd_physical_bind,
+ 	.unbind_iommufd = vfio_iommufd_physical_unbind,
+ 	.attach_ioas = vfio_iommufd_physical_attach_ioas,
+diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
+index 1fd261efc582d0..5046cae052224e 100644
+--- a/drivers/vfio/vfio_main.c
++++ b/drivers/vfio/vfio_main.c
+@@ -583,7 +583,8 @@ void vfio_df_close(struct vfio_device_file *df)
+ 
+ 	lockdep_assert_held(&device->dev_set->lock);
+ 
+-	vfio_assert_device_open(device);
++	if (!vfio_assert_device_open(device))
++		return;
+ 	if (device->open_count == 1)
+ 		vfio_df_device_last_close(df);
+ 	device->open_count--;
+diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
+index 020d4fbb947ca0..bc0f385744974d 100644
+--- a/drivers/vhost/Kconfig
++++ b/drivers/vhost/Kconfig
+@@ -95,4 +95,22 @@ config VHOST_CROSS_ENDIAN_LEGACY
+ 
+ 	  If unsure, say "N".
+ 
++config VHOST_ENABLE_FORK_OWNER_CONTROL
++	bool "Enable VHOST_ENABLE_FORK_OWNER_CONTROL"
++	default y
++	help
++	  This option enables two IOCTLs: VHOST_SET_FORK_FROM_OWNER and
++	  VHOST_GET_FORK_FROM_OWNER. These allow userspace applications
++	  to set and query the vhost worker mode for vhost devices.
++
++	  It also exposes the module parameter 'fork_from_owner_default',
++	  which lets users configure the default mode for vhost workers.
++
++	  By default, `VHOST_ENABLE_FORK_OWNER_CONTROL` is set to `y`, so
++	  users can change the worker thread mode as needed.
++	  If this config is disabled (n), the related IOCTLs and parameters
++	  will be unavailable.
++
++	  If unsure, say "Y".
++
+ endif
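
A hedged userspace sketch of the two new ioctls; the names come from this series, and this assumes they are exported via <linux/vhost.h> on a kernel built with CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Switch a freshly opened vhost device to legacy kthread workers.
     * Per the handler further below, this must happen before
     * VHOST_SET_OWNER, otherwise -EBUSY is returned. */
    int use_kthread_workers(int vhost_fd)
    {
            uint8_t mode = VHOST_FORK_OWNER_KTHREAD;

            if (ioctl(vhost_fd, VHOST_SET_FORK_FROM_OWNER, &mode))
                    return -1;
            if (ioctl(vhost_fd, VHOST_GET_FORK_FROM_OWNER, &mode))
                    return -1;
            return mode == VHOST_FORK_OWNER_KTHREAD ? 0 : -1;
    }
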
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index c12a0d4e6386f3..63b0829391eb97 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -71,7 +71,7 @@ static int vhost_scsi_set_inline_sg_cnt(const char *buf,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (ret > VHOST_SCSI_PREALLOC_SGLS) {
++	if (cnt > VHOST_SCSI_PREALLOC_SGLS) {
+ 		pr_err("Max inline_sg_cnt is %u\n", VHOST_SCSI_PREALLOC_SGLS);
+ 		return -EINVAL;
+ 	}
+@@ -1226,10 +1226,8 @@ vhost_scsi_get_req(struct vhost_virtqueue *vq, struct vhost_scsi_ctx *vc,
+ 			/* validated at handler entry */
+ 			vs_tpg = vhost_vq_get_backend(vq);
+ 			tpg = READ_ONCE(vs_tpg[*vc->target]);
+-			if (unlikely(!tpg)) {
+-				vq_err(vq, "Target 0x%x does not exist\n", *vc->target);
++			if (unlikely(!tpg))
+ 				goto out;
+-			}
+ 		}
+ 
+ 		if (tpgp)
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 3a5ebb973dba39..84c9bdf9aeddaf 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -22,6 +22,7 @@
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+ #include <linux/kthread.h>
++#include <linux/cgroup.h>
+ #include <linux/module.h>
+ #include <linux/sort.h>
+ #include <linux/sched/mm.h>
+@@ -41,6 +42,13 @@ static int max_iotlb_entries = 2048;
+ module_param(max_iotlb_entries, int, 0444);
+ MODULE_PARM_DESC(max_iotlb_entries,
+ 	"Maximum number of iotlb entries. (default: 2048)");
++static bool fork_from_owner_default = VHOST_FORK_OWNER_TASK;
++
++#ifdef CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL
++module_param(fork_from_owner_default, bool, 0444);
++MODULE_PARM_DESC(fork_from_owner_default,
++		 "Set task mode as the default (default: Y)");
++#endif
+ 
+ enum {
+ 	VHOST_MEMORY_F_LOG = 0x1,
+@@ -242,7 +250,7 @@ static void vhost_worker_queue(struct vhost_worker *worker,
+ 		 * test_and_set_bit() implies a memory barrier.
+ 		 */
+ 		llist_add(&work->node, &worker->work_list);
+-		vhost_task_wake(worker->vtsk);
++		worker->ops->wakeup(worker);
+ 	}
+ }
+ 
+@@ -388,6 +396,44 @@ static void vhost_vq_reset(struct vhost_dev *dev,
+ 	__vhost_vq_meta_reset(vq);
+ }
+ 
++static int vhost_run_work_kthread_list(void *data)
++{
++	struct vhost_worker *worker = data;
++	struct vhost_work *work, *work_next;
++	struct vhost_dev *dev = worker->dev;
++	struct llist_node *node;
++
++	kthread_use_mm(dev->mm);
++
++	for (;;) {
++		/* mb paired w/ kthread_stop */
++		set_current_state(TASK_INTERRUPTIBLE);
++
++		if (kthread_should_stop()) {
++			__set_current_state(TASK_RUNNING);
++			break;
++		}
++		node = llist_del_all(&worker->work_list);
++		if (!node)
++			schedule();
++
++		node = llist_reverse_order(node);
++		/* make sure flag is seen after deletion */
++		smp_wmb();
++		llist_for_each_entry_safe(work, work_next, node, node) {
++			clear_bit(VHOST_WORK_QUEUED, &work->flags);
++			__set_current_state(TASK_RUNNING);
++			kcov_remote_start_common(worker->kcov_handle);
++			work->fn(work);
++			kcov_remote_stop();
++			cond_resched();
++		}
++	}
++	kthread_unuse_mm(dev->mm);
++
++	return 0;
++}
++
+ static bool vhost_run_work_list(void *data)
+ {
+ 	struct vhost_worker *worker = data;
+@@ -552,6 +598,7 @@ void vhost_dev_init(struct vhost_dev *dev,
+ 	dev->byte_weight = byte_weight;
+ 	dev->use_worker = use_worker;
+ 	dev->msg_handler = msg_handler;
++	dev->fork_owner = fork_from_owner_default;
+ 	init_waitqueue_head(&dev->wait);
+ 	INIT_LIST_HEAD(&dev->read_list);
+ 	INIT_LIST_HEAD(&dev->pending_list);
+@@ -581,6 +628,46 @@ long vhost_dev_check_owner(struct vhost_dev *dev)
+ }
+ EXPORT_SYMBOL_GPL(vhost_dev_check_owner);
+ 
++struct vhost_attach_cgroups_struct {
++	struct vhost_work work;
++	struct task_struct *owner;
++	int ret;
++};
++
++static void vhost_attach_cgroups_work(struct vhost_work *work)
++{
++	struct vhost_attach_cgroups_struct *s;
++
++	s = container_of(work, struct vhost_attach_cgroups_struct, work);
++	s->ret = cgroup_attach_task_all(s->owner, current);
++}
++
++static int vhost_attach_task_to_cgroups(struct vhost_worker *worker)
++{
++	struct vhost_attach_cgroups_struct attach;
++	int saved_cnt;
++
++	attach.owner = current;
++
++	vhost_work_init(&attach.work, vhost_attach_cgroups_work);
++	vhost_worker_queue(worker, &attach.work);
++
++	mutex_lock(&worker->mutex);
++
++	/*
++	 * Bypass the attachment_cnt check in __vhost_worker_flush by
++	 * temporarily raising it to INT_MAX, so the flush is not skipped.
++	 */
++	saved_cnt = worker->attachment_cnt;
++	worker->attachment_cnt = INT_MAX;
++	__vhost_worker_flush(worker);
++	worker->attachment_cnt = saved_cnt;
++
++	mutex_unlock(&worker->mutex);
++
++	return attach.ret;
++}
++
+ /* Caller should have device mutex */
+ bool vhost_dev_has_owner(struct vhost_dev *dev)
+ {
+@@ -626,7 +713,7 @@ static void vhost_worker_destroy(struct vhost_dev *dev,
+ 
+ 	WARN_ON(!llist_empty(&worker->work_list));
+ 	xa_erase(&dev->worker_xa, worker->id);
+-	vhost_task_stop(worker->vtsk);
++	worker->ops->stop(worker);
+ 	kfree(worker);
+ }
+ 
+@@ -649,42 +736,115 @@ static void vhost_workers_free(struct vhost_dev *dev)
+ 	xa_destroy(&dev->worker_xa);
+ }
+ 
++static void vhost_task_wakeup(struct vhost_worker *worker)
++{
++	return vhost_task_wake(worker->vtsk);
++}
++
++static void vhost_kthread_wakeup(struct vhost_worker *worker)
++{
++	wake_up_process(worker->kthread_task);
++}
++
++static void vhost_task_do_stop(struct vhost_worker *worker)
++{
++	return vhost_task_stop(worker->vtsk);
++}
++
++static void vhost_kthread_do_stop(struct vhost_worker *worker)
++{
++	kthread_stop(worker->kthread_task);
++}
++
++static int vhost_task_worker_create(struct vhost_worker *worker,
++				    struct vhost_dev *dev, const char *name)
++{
++	struct vhost_task *vtsk;
++	u32 id;
++	int ret;
++
++	vtsk = vhost_task_create(vhost_run_work_list, vhost_worker_killed,
++				 worker, name);
++	if (IS_ERR(vtsk))
++		return PTR_ERR(vtsk);
++
++	worker->vtsk = vtsk;
++	vhost_task_start(vtsk);
++	ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
++	if (ret < 0) {
++		vhost_task_do_stop(worker);
++		return ret;
++	}
++	worker->id = id;
++	return 0;
++}
++
++static int vhost_kthread_worker_create(struct vhost_worker *worker,
++				       struct vhost_dev *dev, const char *name)
++{
++	struct task_struct *task;
++	u32 id;
++	int ret;
++
++	task = kthread_create(vhost_run_work_kthread_list, worker, "%s", name);
++	if (IS_ERR(task))
++		return PTR_ERR(task);
++
++	worker->kthread_task = task;
++	wake_up_process(task);
++	ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
++	if (ret < 0)
++		goto stop_worker;
++
++	ret = vhost_attach_task_to_cgroups(worker);
++	if (ret)
++		goto stop_worker;
++
++	worker->id = id;
++	return 0;
++
++stop_worker:
++	vhost_kthread_do_stop(worker);
++	return ret;
++}
++
++static const struct vhost_worker_ops kthread_ops = {
++	.create = vhost_kthread_worker_create,
++	.stop = vhost_kthread_do_stop,
++	.wakeup = vhost_kthread_wakeup,
++};
++
++static const struct vhost_worker_ops vhost_task_ops = {
++	.create = vhost_task_worker_create,
++	.stop = vhost_task_do_stop,
++	.wakeup = vhost_task_wakeup,
++};
++
+ static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
+ {
+ 	struct vhost_worker *worker;
+-	struct vhost_task *vtsk;
+ 	char name[TASK_COMM_LEN];
+ 	int ret;
+-	u32 id;
++	const struct vhost_worker_ops *ops = dev->fork_owner ? &vhost_task_ops :
++							       &kthread_ops;
+ 
+ 	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
+ 	if (!worker)
+ 		return NULL;
+ 
+ 	worker->dev = dev;
++	worker->ops = ops;
+ 	snprintf(name, sizeof(name), "vhost-%d", current->pid);
+ 
+-	vtsk = vhost_task_create(vhost_run_work_list, vhost_worker_killed,
+-				 worker, name);
+-	if (IS_ERR(vtsk))
+-		goto free_worker;
+-
+ 	mutex_init(&worker->mutex);
+ 	init_llist_head(&worker->work_list);
+ 	worker->kcov_handle = kcov_common_handle();
+-	worker->vtsk = vtsk;
+-
+-	vhost_task_start(vtsk);
+-
+-	ret = xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL);
++	ret = ops->create(worker, dev, name);
+ 	if (ret < 0)
+-		goto stop_worker;
+-	worker->id = id;
++		goto free_worker;
+ 
+ 	return worker;
+ 
+-stop_worker:
+-	vhost_task_stop(vtsk);
+ free_worker:
+ 	kfree(worker);
+ 	return NULL;
+@@ -865,6 +1025,14 @@ long vhost_worker_ioctl(struct vhost_dev *dev, unsigned int ioctl,
+ 	switch (ioctl) {
+ 	/* dev worker ioctls */
+ 	case VHOST_NEW_WORKER:
++		/*
++		 * vhost_tasks will account for worker threads under the parent's
++		 * NPROC value but kthreads do not. To avoid userspace overflowing
++		 * the system with worker threads, fork_owner must be true.
++		 */
++		if (!dev->fork_owner)
++			return -EFAULT;
++
+ 		ret = vhost_new_worker(dev, &state);
+ 		if (!ret && copy_to_user(argp, &state, sizeof(state)))
+ 			ret = -EFAULT;
+@@ -982,6 +1150,7 @@ void vhost_dev_reset_owner(struct vhost_dev *dev, struct vhost_iotlb *umem)
+ 
+ 	vhost_dev_cleanup(dev);
+ 
++	dev->fork_owner = fork_from_owner_default;
+ 	dev->umem = umem;
+ 	/* We don't need VQ locks below since vhost_dev_cleanup makes sure
+ 	 * VQs aren't running.
+@@ -2135,6 +2304,45 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
+ 		goto done;
+ 	}
+ 
++#ifdef CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL
++	if (ioctl == VHOST_SET_FORK_FROM_OWNER) {
++		/* Only allow modification before owner is set */
++		if (vhost_dev_has_owner(d)) {
++			r = -EBUSY;
++			goto done;
++		}
++		u8 fork_owner_val;
++
++		if (get_user(fork_owner_val, (u8 __user *)argp)) {
++			r = -EFAULT;
++			goto done;
++		}
++		if (fork_owner_val != VHOST_FORK_OWNER_TASK &&
++		    fork_owner_val != VHOST_FORK_OWNER_KTHREAD) {
++			r = -EINVAL;
++			goto done;
++		}
++		d->fork_owner = !!fork_owner_val;
++		r = 0;
++		goto done;
++	}
++	if (ioctl == VHOST_GET_FORK_FROM_OWNER) {
++		u8 fork_owner_val = d->fork_owner;
++
++		if (fork_owner_val != VHOST_FORK_OWNER_TASK &&
++		    fork_owner_val != VHOST_FORK_OWNER_KTHREAD) {
++			r = -EINVAL;
++			goto done;
++		}
++		if (put_user(fork_owner_val, (u8 __user *)argp)) {
++			r = -EFAULT;
++			goto done;
++		}
++		r = 0;
++		goto done;
++	}
++#endif
++
+ 	/* You must be the owner to do anything else */
+ 	r = vhost_dev_check_owner(d);
+ 	if (r)
+diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
+index bb75a292d50cd3..ab704d84fb3446 100644
+--- a/drivers/vhost/vhost.h
++++ b/drivers/vhost/vhost.h
+@@ -26,7 +26,18 @@ struct vhost_work {
+ 	unsigned long		flags;
+ };
+ 
++struct vhost_worker;
++struct vhost_dev;
++
++struct vhost_worker_ops {
++	int (*create)(struct vhost_worker *worker, struct vhost_dev *dev,
++		      const char *name);
++	void (*stop)(struct vhost_worker *worker);
++	void (*wakeup)(struct vhost_worker *worker);
++};
++
+ struct vhost_worker {
++	struct task_struct *kthread_task;
+ 	struct vhost_task	*vtsk;
+ 	struct vhost_dev	*dev;
+ 	/* Used to serialize device wide flushing with worker swapping. */
+@@ -36,6 +47,7 @@ struct vhost_worker {
+ 	u32			id;
+ 	int			attachment_cnt;
+ 	bool			killed;
++	const struct vhost_worker_ops *ops;
+ };
+ 
+ /* Poll a file (eventfd or socket) */
+@@ -176,6 +188,16 @@ struct vhost_dev {
+ 	int byte_weight;
+ 	struct xarray worker_xa;
+ 	bool use_worker;
++	/*
++	 * If fork_owner is true we use vhost_tasks to create
++	 * the worker so all settings/limits like cgroups, NPROC,
++	 * scheduler, etc. are inherited from the owner. If false,
++	 * we use kthreads and only attach to the same cgroups
++	 * as the owner for compat with older kernels.
++	 * The default value is set by the fork_from_owner_default
++	 * module parameter (true unless overridden).
++	 */
++	bool fork_owner;
+ 	int (*msg_handler)(struct vhost_dev *dev, u32 asid,
+ 			   struct vhost_iotlb_msg *msg);
+ };
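
The kthread/vhost_task split above is a plain ops-table dispatch: the table is chosen once at worker creation, and every caller goes through it instead of branching on the mode. A self-contained illustration of the same shape (names are illustrative, not kernel API):

    #include <stdio.h>

    struct worker;

    struct worker_ops {
            void (*wakeup)(struct worker *w);
            void (*stop)(struct worker *w);
    };

    struct worker {
            const struct worker_ops *ops;   /* fixed at creation time */
    };

    static void task_wakeup(struct worker *w)   { puts("task wake"); }
    static void task_stop(struct worker *w)     { puts("task stop"); }
    static const struct worker_ops task_ops = { task_wakeup, task_stop };

    static void kthr_wakeup(struct worker *w)   { puts("kthread wake"); }
    static void kthr_stop(struct worker *w)     { puts("kthread stop"); }
    static const struct worker_ops kthr_ops = { kthr_wakeup, kthr_stop };

    int main(void)
    {
            int fork_owner = 0;     /* e.g. from fork_from_owner_default */
            struct worker w = { .ops = fork_owner ? &task_ops : &kthr_ops };

            w.ops->wakeup(&w);      /* callers never test the mode again */
            w.ops->stop(&w);
            return 0;
    }
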
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2df48037688d1d..2b2d36c021ba55 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -952,13 +952,13 @@ static const char *fbcon_startup(void)
+ 	int rows, cols;
+ 
+ 	/*
+-	 *  If num_registered_fb is zero, this is a call for the dummy part.
++	 *  If fbcon_num_registered_fb is zero, this is a call for the dummy part.
+ 	 *  The frame buffer devices weren't initialized yet.
+ 	 */
+ 	if (!fbcon_num_registered_fb || info_idx == -1)
+ 		return display_desc;
+ 	/*
+-	 * Instead of blindly using registered_fb[0], we use info_idx, set by
++	 * Instead of blindly using fbcon_registered_fb[0], we use info_idx, set by
+ 	 * fbcon_fb_registered();
+ 	 */
+ 	info = fbcon_registered_fb[info_idx];
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index f30da32cdaed4d..a077bf346bdf4b 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -996,8 +996,13 @@ static int imxfb_probe(struct platform_device *pdev)
+ 	info->fix.smem_start = fbi->map_dma;
+ 
+ 	INIT_LIST_HEAD(&info->modelist);
+-	for (i = 0; i < fbi->num_modes; i++)
+-		fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++	for (i = 0; i < fbi->num_modes; i++) {
++		ret = fb_add_videomode(&fbi->mode[i].mode, &info->modelist);
++		if (ret) {
++			dev_err(&pdev->dev, "Failed to add videomode\n");
++			goto failed_cmap;
++		}
++	}
+ 
+ 	/*
+ 	 * This makes sure that our colour bitfield
+diff --git a/drivers/watchdog/ziirave_wdt.c b/drivers/watchdog/ziirave_wdt.c
+index fcc1ba02e75b66..5c6e3fa001d885 100644
+--- a/drivers/watchdog/ziirave_wdt.c
++++ b/drivers/watchdog/ziirave_wdt.c
+@@ -302,6 +302,9 @@ static int ziirave_firm_verify(struct watchdog_device *wdd,
+ 		const u16 len = be16_to_cpu(rec->len);
+ 		const u32 addr = be32_to_cpu(rec->addr);
+ 
++		if (len > sizeof(data))
++			return -EINVAL;
++
+ 		if (ziirave_firm_addr_readonly(addr))
+ 			continue;
+ 
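
The added check is the usual pre-copy bounds guard for records parsed out of an untrusted firmware image. Standalone, with illustrative field names, the shape is:

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    struct fw_rec { uint16_t len; uint8_t payload[256]; };

    /* Reject a record whose declared length exceeds the destination
     * buffer before any bytes are copied. */
    static int copy_rec(uint8_t *data, size_t data_sz, const struct fw_rec *rec)
    {
            if (rec->len > data_sz)
                    return -EINVAL;
            memcpy(data, rec->payload, rec->len);
            return 0;
    }
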
+diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h
+index 9c286b2a190016..ac8ce3179ba2e9 100644
+--- a/drivers/xen/gntdev-common.h
++++ b/drivers/xen/gntdev-common.h
+@@ -26,6 +26,10 @@ struct gntdev_priv {
+ 	/* lock protects maps and freeable_maps. */
+ 	struct mutex lock;
+ 
++	/* Free instances of struct gntdev_copy_batch. */
++	struct gntdev_copy_batch *batch;
++	struct mutex batch_lock;
++
+ #ifdef CONFIG_XEN_GRANT_DMA_ALLOC
+ 	/* Device for which DMA memory is allocated. */
+ 	struct device *dma_dev;
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index 5453d86324f66f..82855105ab857f 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -357,8 +357,11 @@ struct gntdev_dmabuf_export_args {
+ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ {
+ 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+-	struct gntdev_dmabuf *gntdev_dmabuf;
+-	int ret;
++	struct gntdev_dmabuf *gntdev_dmabuf __free(kfree) = NULL;
++	CLASS(get_unused_fd, ret)(O_CLOEXEC);
++
++	if (ret < 0)
++		return ret;
+ 
+ 	gntdev_dmabuf = kzalloc(sizeof(*gntdev_dmabuf), GFP_KERNEL);
+ 	if (!gntdev_dmabuf)
+@@ -383,32 +386,21 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ 	exp_info.priv = gntdev_dmabuf;
+ 
+ 	gntdev_dmabuf->dmabuf = dma_buf_export(&exp_info);
+-	if (IS_ERR(gntdev_dmabuf->dmabuf)) {
+-		ret = PTR_ERR(gntdev_dmabuf->dmabuf);
+-		gntdev_dmabuf->dmabuf = NULL;
+-		goto fail;
+-	}
+-
+-	ret = dma_buf_fd(gntdev_dmabuf->dmabuf, O_CLOEXEC);
+-	if (ret < 0)
+-		goto fail;
++	if (IS_ERR(gntdev_dmabuf->dmabuf))
++		return PTR_ERR(gntdev_dmabuf->dmabuf);
+ 
+ 	gntdev_dmabuf->fd = ret;
+ 	args->fd = ret;
+ 
+ 	pr_debug("Exporting DMA buffer with fd %d\n", ret);
+ 
++	get_file(gntdev_dmabuf->priv->filp);
+ 	mutex_lock(&args->dmabuf_priv->lock);
+ 	list_add(&gntdev_dmabuf->next, &args->dmabuf_priv->exp_list);
+ 	mutex_unlock(&args->dmabuf_priv->lock);
+-	get_file(gntdev_dmabuf->priv->filp);
+-	return 0;
+ 
+-fail:
+-	if (gntdev_dmabuf->dmabuf)
+-		dma_buf_put(gntdev_dmabuf->dmabuf);
+-	kfree(gntdev_dmabuf);
+-	return ret;
++	fd_install(take_fd(ret), no_free_ptr(gntdev_dmabuf)->dmabuf->file);
++	return 0;
+ }
+ 
+ static struct gntdev_grant_map *
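
The rewrite above leans on the scope-based guards from <linux/cleanup.h> and <linux/file.h>. A hedged sketch of the fd half of the pattern outside this driver (do_setup() and file are stand-ins, not real symbols):

    /* CLASS(get_unused_fd, ...) reserves an fd slot that is returned
     * to the table automatically on every early exit; only take_fd()
     * disarms the guard so fd_install() can publish the file. */
    CLASS(get_unused_fd, fd)(O_CLOEXEC);

    if (fd < 0)
            return fd;
    if (do_setup() < 0)
            return -EIO;            /* fd slot released by the guard */

    fd_install(take_fd(fd), file);  /* success path keeps the fd */
    return 0;
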
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 61faea1f066305..1f21607656182a 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -56,6 +56,18 @@ MODULE_AUTHOR("Derek G. Murray <Derek.Murray@cl.cam.ac.uk>, "
+ 	      "Gerd Hoffmann <kraxel@redhat.com>");
+ MODULE_DESCRIPTION("User-space granted page access driver");
+ 
++#define GNTDEV_COPY_BATCH 16
++
++struct gntdev_copy_batch {
++	struct gnttab_copy ops[GNTDEV_COPY_BATCH];
++	struct page *pages[GNTDEV_COPY_BATCH];
++	s16 __user *status[GNTDEV_COPY_BATCH];
++	unsigned int nr_ops;
++	unsigned int nr_pages;
++	bool writeable;
++	struct gntdev_copy_batch *next;
++};
++
+ static unsigned int limit = 64*1024;
+ module_param(limit, uint, 0644);
+ MODULE_PARM_DESC(limit,
+@@ -584,6 +596,8 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ 	INIT_LIST_HEAD(&priv->maps);
+ 	mutex_init(&priv->lock);
+ 
++	mutex_init(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ 	priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ 	if (IS_ERR(priv->dmabuf_priv)) {
+@@ -608,6 +622,7 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ {
+ 	struct gntdev_priv *priv = flip->private_data;
+ 	struct gntdev_grant_map *map;
++	struct gntdev_copy_batch *batch;
+ 
+ 	pr_debug("priv %p\n", priv);
+ 
+@@ -620,6 +635,14 @@ static int gntdev_release(struct inode *inode, struct file *flip)
+ 	}
+ 	mutex_unlock(&priv->lock);
+ 
++	mutex_lock(&priv->batch_lock);
++	while (priv->batch) {
++		batch = priv->batch;
++		priv->batch = batch->next;
++		kfree(batch);
++	}
++	mutex_unlock(&priv->batch_lock);
++
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+ 	gntdev_dmabuf_fini(priv->dmabuf_priv);
+ #endif
+@@ -785,17 +808,6 @@ static long gntdev_ioctl_notify(struct gntdev_priv *priv, void __user *u)
+ 	return rc;
+ }
+ 
+-#define GNTDEV_COPY_BATCH 16
+-
+-struct gntdev_copy_batch {
+-	struct gnttab_copy ops[GNTDEV_COPY_BATCH];
+-	struct page *pages[GNTDEV_COPY_BATCH];
+-	s16 __user *status[GNTDEV_COPY_BATCH];
+-	unsigned int nr_ops;
+-	unsigned int nr_pages;
+-	bool writeable;
+-};
+-
+ static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
+ 				unsigned long *gfn)
+ {
+@@ -953,36 +965,53 @@ static int gntdev_grant_copy_seg(struct gntdev_copy_batch *batch,
+ static long gntdev_ioctl_grant_copy(struct gntdev_priv *priv, void __user *u)
+ {
+ 	struct ioctl_gntdev_grant_copy copy;
+-	struct gntdev_copy_batch batch;
++	struct gntdev_copy_batch *batch;
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+ 	if (copy_from_user(&copy, u, sizeof(copy)))
+ 		return -EFAULT;
+ 
+-	batch.nr_ops = 0;
+-	batch.nr_pages = 0;
++	mutex_lock(&priv->batch_lock);
++	if (!priv->batch) {
++		batch = kmalloc(sizeof(*batch), GFP_KERNEL);
++	} else {
++		batch = priv->batch;
++		priv->batch = batch->next;
++	}
++	mutex_unlock(&priv->batch_lock);
++	if (!batch)
++		return -ENOMEM;
++
++	batch->nr_ops = 0;
++	batch->nr_pages = 0;
+ 
+ 	for (i = 0; i < copy.count; i++) {
+ 		struct gntdev_grant_copy_segment seg;
+ 
+ 		if (copy_from_user(&seg, &copy.segments[i], sizeof(seg))) {
+ 			ret = -EFAULT;
++			gntdev_put_pages(batch);
+ 			goto out;
+ 		}
+ 
+-		ret = gntdev_grant_copy_seg(&batch, &seg, &copy.segments[i].status);
+-		if (ret < 0)
++		ret = gntdev_grant_copy_seg(batch, &seg, &copy.segments[i].status);
++		if (ret < 0) {
++			gntdev_put_pages(batch);
+ 			goto out;
++		}
+ 
+ 		cond_resched();
+ 	}
+-	if (batch.nr_ops)
+-		ret = gntdev_copy(&batch);
+-	return ret;
++	if (batch->nr_ops)
++		ret = gntdev_copy(batch);
++
++ out:
++	mutex_lock(&priv->batch_lock);
++	batch->next = priv->batch;
++	priv->batch = batch;
++	mutex_unlock(&priv->batch_lock);
+ 
+-  out:
+-	gntdev_put_pages(&batch);
+ 	return ret;
+ }
+ 
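
The batch used to live on the ioctl handler's kernel stack; caching it per file keeps the large struct off the stack and reuses allocations across calls, at the price of the new batch_lock. The cache itself is just a mutex-protected singly linked free list; a userspace analogue for illustration:

    #include <pthread.h>
    #include <stdlib.h>

    struct batch { struct batch *next; /* ... per-call scratch ... */ };

    static struct batch *cache;
    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

    static struct batch *batch_get(void)
    {
            struct batch *b;

            pthread_mutex_lock(&cache_lock);
            b = cache;
            if (b)
                    cache = b->next;
            pthread_mutex_unlock(&cache_lock);
            return b ? b : malloc(sizeof(*b));  /* miss: allocate fresh */
    }

    static void batch_put(struct batch *b)
    {
            pthread_mutex_lock(&cache_lock);
            b->next = cache;                    /* push back for reuse */
            cache = b;
            pthread_mutex_unlock(&cache_lock);
    }
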
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index a2e7979372ccd8..648531fe09002c 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -4585,16 +4585,13 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 
+ /*
+  * A helper function to walk down the tree starting at min_key, and looking
+- * for nodes or leaves that are have a minimum transaction id.
++ * for leaves that have a minimum transaction id.
+  * This is used by the btree defrag code, and tree logging
+  *
+  * This does not cow, but it does stuff the starting key it finds back
+  * into min_key, so you can call btrfs_search_slot with cow=1 on the
+  * key and get a writable path.
+  *
+- * This honors path->lowest_level to prevent descent past a given level
+- * of the tree.
+- *
+  * min_trans indicates the oldest transaction that you are interested
+  * in walking through.  Any nodes or leaves older than min_trans are
+  * skipped over (without reading them).
+@@ -4615,6 +4612,7 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ 	int keep_locks = path->keep_locks;
+ 
+ 	ASSERT(!path->nowait);
++	ASSERT(path->lowest_level == 0);
+ 	path->keep_locks = 1;
+ again:
+ 	cur = btrfs_read_lock_root_node(root);
+@@ -4636,8 +4634,8 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ 			goto out;
+ 		}
+ 
+-		/* at the lowest level, we're done, setup the path and exit */
+-		if (level == path->lowest_level) {
++		/* At level 0 we're done, setup the path and exit. */
++		if (level == 0) {
+ 			if (slot >= nritems)
+ 				goto find_next_key;
+ 			ret = 0;
+@@ -4678,12 +4676,6 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ 				goto out;
+ 			}
+ 		}
+-		if (level == path->lowest_level) {
+-			ret = 0;
+-			/* Save our key for returning back. */
+-			btrfs_node_key_to_cpu(cur, min_key, slot);
+-			goto out;
+-		}
+ 		cur = btrfs_read_node_slot(cur, slot);
+ 		if (IS_ERR(cur)) {
+ 			ret = PTR_ERR(cur);
+@@ -4699,7 +4691,7 @@ int btrfs_search_forward(struct btrfs_root *root, struct btrfs_key *min_key,
+ out:
+ 	path->keep_locks = keep_locks;
+ 	if (ret == 0)
+-		btrfs_unlock_up_safe(path, path->lowest_level + 1);
++		btrfs_unlock_up_safe(path, 1);
+ 	return ret;
+ }
+ 
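
With the dead lowest_level branches gone, the helper's contract is explicit: callers always get a leaf-level position. A hypothetical caller, for orientation only (root, min_key and min_trans are assumed to be set up elsewhere):

    struct btrfs_path *path = btrfs_alloc_path();
    int ret;

    if (!path)
            return -ENOMEM;
    /* path->lowest_level is 0 after allocation, satisfying the new
     * ASSERT; the search always descends to level 0. */
    ret = btrfs_search_forward(root, &min_key, path, min_trans);
    if (ret == 0) {
            /* min_key now holds the first leaf key with a new enough
             * transid; re-search with cow=1 for a writable path. */
    }
    btrfs_free_path(path);
    return ret;
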
+diff --git a/fs/ceph/crypto.c b/fs/ceph/crypto.c
+index 3b3c4d8d401ece..9c70622458800a 100644
+--- a/fs/ceph/crypto.c
++++ b/fs/ceph/crypto.c
+@@ -215,35 +215,31 @@ static struct inode *parse_longname(const struct inode *parent,
+ 	struct ceph_client *cl = ceph_inode_to_client(parent);
+ 	struct inode *dir = NULL;
+ 	struct ceph_vino vino = { .snap = CEPH_NOSNAP };
+-	char *inode_number;
+-	char *name_end;
+-	int orig_len = *name_len;
++	char *name_end, *inode_number;
+ 	int ret = -EIO;
+-
++	/* NUL-terminate */
++	char *str __free(kfree) = kmemdup_nul(name, *name_len, GFP_KERNEL);
++	if (!str)
++		return ERR_PTR(-ENOMEM);
+ 	/* Skip initial '_' */
+-	name++;
+-	name_end = strrchr(name, '_');
++	str++;
++	name_end = strrchr(str, '_');
+ 	if (!name_end) {
+-		doutc(cl, "failed to parse long snapshot name: %s\n", name);
++		doutc(cl, "failed to parse long snapshot name: %s\n", str);
+ 		return ERR_PTR(-EIO);
+ 	}
+-	*name_len = (name_end - name);
++	*name_len = (name_end - str);
+ 	if (*name_len <= 0) {
+ 		pr_err_client(cl, "failed to parse long snapshot name\n");
+ 		return ERR_PTR(-EIO);
+ 	}
+ 
+ 	/* Get the inode number */
+-	inode_number = kmemdup_nul(name_end + 1,
+-				   orig_len - *name_len - 2,
+-				   GFP_KERNEL);
+-	if (!inode_number)
+-		return ERR_PTR(-ENOMEM);
++	inode_number = name_end + 1;
+ 	ret = kstrtou64(inode_number, 10, &vino.ino);
+ 	if (ret) {
+-		doutc(cl, "failed to parse inode number: %s\n", name);
+-		dir = ERR_PTR(ret);
+-		goto out;
++		doutc(cl, "failed to parse inode number: %s\n", str);
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	/* And finally the inode */
+@@ -254,9 +250,6 @@ static struct inode *parse_longname(const struct inode *parent,
+ 		if (IS_ERR(dir))
+ 			doutc(cl, "can't find inode %s (%s)\n", inode_number, name);
+ 	}
+-
+-out:
+-	kfree(inode_number);
+ 	return dir;
+ }
+ 
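
For readers new to the cleanup attribute used above: __free(kfree) ties the buffer's lifetime to the variable's scope, so every return path frees it implicitly. A minimal sketch; note it advances a separate cursor, keeping the pointer that will eventually reach kfree() identical to the one the allocator returned:

    #include <linux/cleanup.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static int demo(const char *name, size_t len)
    {
            char *str __free(kfree) = kmemdup_nul(name, len, GFP_KERNEL);
            char *p;

            if (!str)
                    return -ENOMEM;
            p = str + 1;    /* skip the leading '_' without moving str */
            /* ... parse via p; str is kfree()d on return ... */
            return 0;
    }
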
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 0fbf5dfedb24e2..b22d6f819f782d 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -218,6 +218,7 @@ struct eventpoll {
+ 	/* used to optimize loop detection check */
+ 	u64 gen;
+ 	struct hlist_head refs;
++	u8 loop_check_depth;
+ 
+ 	/*
+ 	 * usage count, used together with epitem->dying to
+@@ -2140,23 +2141,24 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+ }
+ 
+ /**
+- * ep_loop_check_proc - verify that adding an epoll file inside another
+- *                      epoll structure does not violate the constraints, in
+- *                      terms of closed loops, or too deep chains (which can
+- *                      result in excessive stack usage).
++ * ep_loop_check_proc - verify that adding an epoll file @ep inside another
++ *                      epoll file does not create closed loops, and
++ *                      determine the depth of the subtree starting at @ep
+  *
+  * @ep: the &struct eventpoll to be currently checked.
+  * @depth: Current depth of the path being checked.
+  *
+- * Return: %zero if adding the epoll @file inside current epoll
+- *          structure @ep does not violate the constraints, or %-1 otherwise.
++ * Return: depth of the subtree, or INT_MAX if we found a loop or went too deep.
+  */
+ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+ {
+-	int error = 0;
++	int result = 0;
+ 	struct rb_node *rbp;
+ 	struct epitem *epi;
+ 
++	if (ep->gen == loop_check_gen)
++		return ep->loop_check_depth;
++
+ 	mutex_lock_nested(&ep->mtx, depth + 1);
+ 	ep->gen = loop_check_gen;
+ 	for (rbp = rb_first_cached(&ep->rbr); rbp; rbp = rb_next(rbp)) {
+@@ -2164,13 +2166,11 @@ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+ 		if (unlikely(is_file_epoll(epi->ffd.file))) {
+ 			struct eventpoll *ep_tovisit;
+ 			ep_tovisit = epi->ffd.file->private_data;
+-			if (ep_tovisit->gen == loop_check_gen)
+-				continue;
+ 			if (ep_tovisit == inserting_into || depth > EP_MAX_NESTS)
+-				error = -1;
++				result = INT_MAX;
+ 			else
+-				error = ep_loop_check_proc(ep_tovisit, depth + 1);
+-			if (error != 0)
++				result = max(result, ep_loop_check_proc(ep_tovisit, depth + 1) + 1);
++			if (result > EP_MAX_NESTS)
+ 				break;
+ 		} else {
+ 			/*
+@@ -2184,9 +2184,25 @@ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+ 			list_file(epi->ffd.file);
+ 		}
+ 	}
++	ep->loop_check_depth = result;
+ 	mutex_unlock(&ep->mtx);
+ 
+-	return error;
++	return result;
++}
++
++/* ep_get_upwards_depth_proc - determine depth of @ep when traversed upwards */
++static int ep_get_upwards_depth_proc(struct eventpoll *ep, int depth)
++{
++	int result = 0;
++	struct epitem *epi;
++
++	if (ep->gen == loop_check_gen)
++		return ep->loop_check_depth;
++	hlist_for_each_entry_rcu(epi, &ep->refs, fllink)
++		result = max(result, ep_get_upwards_depth_proc(epi->ep, depth + 1) + 1);
++	ep->gen = loop_check_gen;
++	ep->loop_check_depth = result;
++	return result;
+ }
+ 
+ /**
+@@ -2202,8 +2218,22 @@ static int ep_loop_check_proc(struct eventpoll *ep, int depth)
+  */
+ static int ep_loop_check(struct eventpoll *ep, struct eventpoll *to)
+ {
++	int depth, upwards_depth;
++
+ 	inserting_into = ep;
+-	return ep_loop_check_proc(to, 0);
++	/*
++	 * Check how deep down we can get from @to, and whether it is possible
++	 * to loop up to @ep.
++	 */
++	depth = ep_loop_check_proc(to, 0);
++	if (depth > EP_MAX_NESTS)
++		return -1;
++	/* Check how far up we can go from @ep. */
++	rcu_read_lock();
++	upwards_depth = ep_get_upwards_depth_proc(ep, 0);
++	rcu_read_unlock();
++
++	return (depth+1+upwards_depth > EP_MAX_NESTS) ? -1 : 0;
+ }
+ 
+ static void clear_tfile_check_list(void)
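
The combined check is plain arithmetic: the deepest chain below the insertion point, plus the new edge, plus the chain above @ep must fit within EP_MAX_NESTS (4 in mainline). Restated standalone with illustrative values:

    #include <assert.h>

    /* depth_below: subtree depth under @to; depth_above: chain over @ep. */
    static int nesting_ok(int depth_below, int depth_above, int max_nests)
    {
            return depth_below + 1 + depth_above <= max_nests;
    }

    int main(void)
    {
            assert(nesting_ok(3, 0, 4));    /* 3 + 1 + 0 = 4: allowed  */
            assert(!nesting_ok(3, 1, 4));   /* 3 + 1 + 1 = 5: rejected */
            return 0;
    }
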
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 841a5b18e3dfdb..7ac5126aa4f1ea 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -623,9 +623,8 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	if (pos > valid_size)
+ 		pos = valid_size;
+ 
+-	if (iocb_is_dsync(iocb) && iocb->ki_pos > pos) {
+-		ssize_t err = vfs_fsync_range(file, pos, iocb->ki_pos - 1,
+-				iocb->ki_flags & IOCB_SYNC);
++	if (iocb->ki_pos > pos) {
++		ssize_t err = generic_write_sync(iocb, iocb->ki_pos - pos);
+ 		if (err < 0)
+ 			return err;
+ 	}
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index a1bbcdf4082471..1545846e0e3e3f 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -612,6 +612,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ 	} else
+ 		ret = ext4_block_write_begin(handle, folio, from, to,
+ 					     ext4_get_block);
++	clear_buffer_new(folio_buffers(folio));
+ 
+ 	if (!ret && ext4_should_journal_data(inode)) {
+ 		ret = ext4_walk_page_buffers(handle, inode,
+@@ -891,6 +892,7 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
+ 		return ret;
+ 	}
+ 
++	clear_buffer_new(folio_buffers(folio));
+ 	folio_mark_dirty(folio);
+ 	folio_mark_uptodate(folio);
+ 	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index be9a4cba35fd52..ee4129b5ecce33 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1171,7 +1171,7 @@ int ext4_block_write_begin(handle_t *handle, struct folio *folio,
+ 			}
+ 			continue;
+ 		}
+-		if (buffer_new(bh))
++		if (WARN_ON_ONCE(buffer_new(bh)))
+ 			clear_buffer_new(bh);
+ 		if (!buffer_mapped(bh)) {
+ 			WARN_ON(bh->b_size != blocksize);
+@@ -1395,6 +1395,7 @@ static int write_end_fn(handle_t *handle, struct inode *inode,
+ 	ret = ext4_dirty_journalled_data(handle, bh);
+ 	clear_buffer_meta(bh);
+ 	clear_buffer_prio(bh);
++	clear_buffer_new(bh);
+ 	return ret;
+ }
+ 
+@@ -6139,7 +6140,7 @@ int ext4_meta_trans_blocks(struct inode *inode, int lblocks, int pextents)
+ 	int ret;
+ 
+ 	/*
+-	 * How many index and lead blocks need to touch to map @lblocks
++	 * How many index and leaf blocks need to touch to map @lblocks
+ 	 * logical blocks to @pextents physical extents?
+ 	 */
+ 	idxblocks = ext4_index_trans_blocks(inode, lblocks, pextents);
+@@ -6148,7 +6149,7 @@ int ext4_meta_trans_blocks(struct inode *inode, int lblocks, int pextents)
+ 	 * Now let's see how many group bitmaps and group descriptors need
+ 	 * to account
+ 	 */
+-	groups = idxblocks;
++	groups = idxblocks + pextents;
+ 	gdpblocks = groups;
+ 	if (groups > ngroups)
+ 		groups = ngroups;
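
The credit change is easiest to see with numbers: each physical extent can land in a different block group, so the bitmap/descriptor estimate must cover pextents additional groups, not just the index blocks. A worked sketch of the corrected count:

    /* Mapping through 2 index blocks into 3 physical extents:
     * groups = 2 + 3 = 5 candidate block groups (previously just 2),
     * still capped at the number of groups in the filesystem. */
    static int meta_groups(int idxblocks, int pextents, int ngroups)
    {
            int groups = idxblocks + pextents;      /* was: idxblocks */

            return groups > ngroups ? ngroups : groups;
    }
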
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index 179e54f3a3b6a2..3d8b0f6d2dea50 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -236,10 +236,12 @@ static void dump_completed_IO(struct inode *inode, struct list_head *head)
+ 
+ static bool ext4_io_end_defer_completion(ext4_io_end_t *io_end)
+ {
+-	if (io_end->flag & EXT4_IO_END_UNWRITTEN)
++	if (io_end->flag & EXT4_IO_END_UNWRITTEN &&
++	    !list_empty(&io_end->list_vec))
+ 		return true;
+ 	if (test_opt(io_end->inode->i_sb, DATA_ERR_ABORT) &&
+-	    io_end->flag & EXT4_IO_END_FAILED)
++	    io_end->flag & EXT4_IO_END_FAILED &&
++	    !ext4_emergency_state(io_end->inode->i_sb))
+ 		return true;
+ 	return false;
+ }
+@@ -256,6 +258,7 @@ static void ext4_add_complete_io(ext4_io_end_t *io_end)
+ 	WARN_ON(!(io_end->flag & EXT4_IO_END_DEFER_COMPLETION));
+ 	WARN_ON(io_end->flag & EXT4_IO_END_UNWRITTEN &&
+ 		!io_end->handle && sbi->s_journal);
++	WARN_ON(!io_end->bio);
+ 
+ 	spin_lock_irqsave(&ei->i_completed_io_lock, flags);
+ 	wq = sbi->rsv_conversion_wq;
+@@ -318,12 +321,9 @@ ext4_io_end_t *ext4_init_io_end(struct inode *inode, gfp_t flags)
+ void ext4_put_io_end_defer(ext4_io_end_t *io_end)
+ {
+ 	if (refcount_dec_and_test(&io_end->count)) {
+-		if (io_end->flag & EXT4_IO_END_FAILED ||
+-		    (io_end->flag & EXT4_IO_END_UNWRITTEN &&
+-		     !list_empty(&io_end->list_vec))) {
+-			ext4_add_complete_io(io_end);
+-			return;
+-		}
++		if (ext4_io_end_defer_completion(io_end))
++			return ext4_add_complete_io(io_end);
++
+ 		ext4_release_io_end(io_end);
+ 	}
+ }
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index b3c1df93a163c6..8cbb8038bc72f3 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -23,20 +23,18 @@
+ static struct kmem_cache *cic_entry_slab;
+ static struct kmem_cache *dic_entry_slab;
+ 
+-static void *page_array_alloc(struct inode *inode, int nr)
++static void *page_array_alloc(struct f2fs_sb_info *sbi, int nr)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	unsigned int size = sizeof(struct page *) * nr;
+ 
+ 	if (likely(size <= sbi->page_array_slab_size))
+ 		return f2fs_kmem_cache_alloc(sbi->page_array_slab,
+-					GFP_F2FS_ZERO, false, F2FS_I_SB(inode));
++					GFP_F2FS_ZERO, false, sbi);
+ 	return f2fs_kzalloc(sbi, size, GFP_NOFS);
+ }
+ 
+-static void page_array_free(struct inode *inode, void *pages, int nr)
++static void page_array_free(struct f2fs_sb_info *sbi, void *pages, int nr)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	unsigned int size = sizeof(struct page *) * nr;
+ 
+ 	if (!pages)
+@@ -149,13 +147,13 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc)
+ 	if (cc->rpages)
+ 		return 0;
+ 
+-	cc->rpages = page_array_alloc(cc->inode, cc->cluster_size);
++	cc->rpages = page_array_alloc(F2FS_I_SB(cc->inode), cc->cluster_size);
+ 	return cc->rpages ? 0 : -ENOMEM;
+ }
+ 
+ void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse)
+ {
+-	page_array_free(cc->inode, cc->rpages, cc->cluster_size);
++	page_array_free(F2FS_I_SB(cc->inode), cc->rpages, cc->cluster_size);
+ 	cc->rpages = NULL;
+ 	cc->nr_rpages = 0;
+ 	cc->nr_cpages = 0;
+@@ -216,13 +214,13 @@ static int lzo_decompress_pages(struct decompress_io_ctx *dic)
+ 	ret = lzo1x_decompress_safe(dic->cbuf->cdata, dic->clen,
+ 						dic->rbuf, &dic->rlen);
+ 	if (ret != LZO_E_OK) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"lzo decompress failed, ret:%d", ret);
+ 		return -EIO;
+ 	}
+ 
+ 	if (dic->rlen != PAGE_SIZE << dic->log_cluster_size) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"lzo invalid rlen:%zu, expected:%lu",
+ 				dic->rlen, PAGE_SIZE << dic->log_cluster_size);
+ 		return -EIO;
+@@ -296,13 +294,13 @@ static int lz4_decompress_pages(struct decompress_io_ctx *dic)
+ 	ret = LZ4_decompress_safe(dic->cbuf->cdata, dic->rbuf,
+ 						dic->clen, dic->rlen);
+ 	if (ret < 0) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"lz4 decompress failed, ret:%d", ret);
+ 		return -EIO;
+ 	}
+ 
+ 	if (ret != PAGE_SIZE << dic->log_cluster_size) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"lz4 invalid ret:%d, expected:%lu",
+ 				ret, PAGE_SIZE << dic->log_cluster_size);
+ 		return -EIO;
+@@ -424,13 +422,13 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
+ 
+ 	workspace_size = zstd_dstream_workspace_bound(max_window_size);
+ 
+-	workspace = f2fs_vmalloc(F2FS_I_SB(dic->inode), workspace_size);
++	workspace = f2fs_vmalloc(dic->sbi, workspace_size);
+ 	if (!workspace)
+ 		return -ENOMEM;
+ 
+ 	stream = zstd_init_dstream(max_window_size, workspace, workspace_size);
+ 	if (!stream) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"%s zstd_init_dstream failed", __func__);
+ 		vfree(workspace);
+ 		return -EIO;
+@@ -466,14 +464,14 @@ static int zstd_decompress_pages(struct decompress_io_ctx *dic)
+ 
+ 	ret = zstd_decompress_stream(stream, &outbuf, &inbuf);
+ 	if (zstd_is_error(ret)) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"%s zstd_decompress_stream failed, ret: %d",
+ 				__func__, zstd_get_error_code(ret));
+ 		return -EIO;
+ 	}
+ 
+ 	if (dic->rlen != outbuf.pos) {
+-		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
++		f2fs_err_ratelimited(dic->sbi,
+ 				"%s ZSTD invalid rlen:%zu, expected:%lu",
+ 				__func__, dic->rlen,
+ 				PAGE_SIZE << dic->log_cluster_size);
+@@ -622,6 +620,7 @@ static void *f2fs_vmap(struct page **pages, unsigned int count)
+ 
+ static int f2fs_compress_pages(struct compress_ctx *cc)
+ {
++	struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(cc->inode);
+ 	const struct f2fs_compress_ops *cops =
+ 				f2fs_cops[fi->i_compress_algorithm];
+@@ -642,7 +641,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
+ 	cc->nr_cpages = DIV_ROUND_UP(max_len, PAGE_SIZE);
+ 	cc->valid_nr_cpages = cc->nr_cpages;
+ 
+-	cc->cpages = page_array_alloc(cc->inode, cc->nr_cpages);
++	cc->cpages = page_array_alloc(sbi, cc->nr_cpages);
+ 	if (!cc->cpages) {
+ 		ret = -ENOMEM;
+ 		goto destroy_compress_ctx;
+@@ -716,7 +715,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
+ 		if (cc->cpages[i])
+ 			f2fs_compress_free_page(cc->cpages[i]);
+ 	}
+-	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
++	page_array_free(sbi, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+ destroy_compress_ctx:
+ 	if (cops->destroy_compress_ctx)
+@@ -734,7 +733,7 @@ static void f2fs_release_decomp_mem(struct decompress_io_ctx *dic,
+ 
+ void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode);
++	struct f2fs_sb_info *sbi = dic->sbi;
+ 	struct f2fs_inode_info *fi = F2FS_I(dic->inode);
+ 	const struct f2fs_compress_ops *cops =
+ 			f2fs_cops[fi->i_compress_algorithm];
+@@ -807,7 +806,7 @@ void f2fs_end_read_compressed_page(struct page *page, bool failed,
+ {
+ 	struct decompress_io_ctx *dic =
+ 			(struct decompress_io_ctx *)page_private(page);
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode);
++	struct f2fs_sb_info *sbi = dic->sbi;
+ 
+ 	dec_page_count(sbi, F2FS_RD_DATA);
+ 
+@@ -1340,7 +1339,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 	cic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
+ 	cic->inode = inode;
+ 	atomic_set(&cic->pending_pages, cc->valid_nr_cpages);
+-	cic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
++	cic->rpages = page_array_alloc(sbi, cc->cluster_size);
+ 	if (!cic->rpages)
+ 		goto out_put_cic;
+ 
+@@ -1442,13 +1441,13 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 	spin_unlock(&fi->i_size_lock);
+ 
+ 	f2fs_put_rpages(cc);
+-	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
++	page_array_free(sbi, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+ 	f2fs_destroy_compress_ctx(cc, false);
+ 	return 0;
+ 
+ out_destroy_crypt:
+-	page_array_free(cc->inode, cic->rpages, cc->cluster_size);
++	page_array_free(sbi, cic->rpages, cc->cluster_size);
+ 
+ 	for (--i; i >= 0; i--) {
+ 		if (!cc->cpages[i])
+@@ -1469,7 +1468,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 		f2fs_compress_free_page(cc->cpages[i]);
+ 		cc->cpages[i] = NULL;
+ 	}
+-	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
++	page_array_free(sbi, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+ 	return -EAGAIN;
+ }
+@@ -1499,7 +1498,7 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+ 		end_page_writeback(cic->rpages[i]);
+ 	}
+ 
+-	page_array_free(cic->inode, cic->rpages, cic->nr_rpages);
++	page_array_free(sbi, cic->rpages, cic->nr_rpages);
+ 	kmem_cache_free(cic_entry_slab, cic);
+ }
+ 
+@@ -1633,14 +1632,13 @@ static inline bool allow_memalloc_for_decomp(struct f2fs_sb_info *sbi,
+ static int f2fs_prepare_decomp_mem(struct decompress_io_ctx *dic,
+ 		bool pre_alloc)
+ {
+-	const struct f2fs_compress_ops *cops =
+-		f2fs_cops[F2FS_I(dic->inode)->i_compress_algorithm];
++	const struct f2fs_compress_ops *cops = f2fs_cops[dic->compress_algorithm];
+ 	int i;
+ 
+-	if (!allow_memalloc_for_decomp(F2FS_I_SB(dic->inode), pre_alloc))
++	if (!allow_memalloc_for_decomp(dic->sbi, pre_alloc))
+ 		return 0;
+ 
+-	dic->tpages = page_array_alloc(dic->inode, dic->cluster_size);
++	dic->tpages = page_array_alloc(dic->sbi, dic->cluster_size);
+ 	if (!dic->tpages)
+ 		return -ENOMEM;
+ 
+@@ -1670,10 +1668,9 @@ static int f2fs_prepare_decomp_mem(struct decompress_io_ctx *dic,
+ static void f2fs_release_decomp_mem(struct decompress_io_ctx *dic,
+ 		bool bypass_destroy_callback, bool pre_alloc)
+ {
+-	const struct f2fs_compress_ops *cops =
+-		f2fs_cops[F2FS_I(dic->inode)->i_compress_algorithm];
++	const struct f2fs_compress_ops *cops = f2fs_cops[dic->compress_algorithm];
+ 
+-	if (!allow_memalloc_for_decomp(F2FS_I_SB(dic->inode), pre_alloc))
++	if (!allow_memalloc_for_decomp(dic->sbi, pre_alloc))
+ 		return;
+ 
+ 	if (!bypass_destroy_callback && cops->destroy_decompress_ctx)
+@@ -1700,7 +1697,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
+ 	if (!dic)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	dic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
++	dic->rpages = page_array_alloc(sbi, cc->cluster_size);
+ 	if (!dic->rpages) {
+ 		kmem_cache_free(dic_entry_slab, dic);
+ 		return ERR_PTR(-ENOMEM);
+@@ -1708,6 +1705,8 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
+ 
+ 	dic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
+ 	dic->inode = cc->inode;
++	dic->sbi = sbi;
++	dic->compress_algorithm = F2FS_I(cc->inode)->i_compress_algorithm;
+ 	atomic_set(&dic->remaining_pages, cc->nr_cpages);
+ 	dic->cluster_idx = cc->cluster_idx;
+ 	dic->cluster_size = cc->cluster_size;
+@@ -1721,7 +1720,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
+ 		dic->rpages[i] = cc->rpages[i];
+ 	dic->nr_rpages = cc->cluster_size;
+ 
+-	dic->cpages = page_array_alloc(dic->inode, dic->nr_cpages);
++	dic->cpages = page_array_alloc(sbi, dic->nr_cpages);
+ 	if (!dic->cpages) {
+ 		ret = -ENOMEM;
+ 		goto out_free;
+@@ -1751,6 +1750,8 @@ static void f2fs_free_dic(struct decompress_io_ctx *dic,
+ 		bool bypass_destroy_callback)
+ {
+ 	int i;
++	/* use sbi in dic to avoid UAF of dic->inode */
++	struct f2fs_sb_info *sbi = dic->sbi;
+ 
+ 	f2fs_release_decomp_mem(dic, bypass_destroy_callback, true);
+ 
+@@ -1762,7 +1763,7 @@ static void f2fs_free_dic(struct decompress_io_ctx *dic,
+ 				continue;
+ 			f2fs_compress_free_page(dic->tpages[i]);
+ 		}
+-		page_array_free(dic->inode, dic->tpages, dic->cluster_size);
++		page_array_free(sbi, dic->tpages, dic->cluster_size);
+ 	}
+ 
+ 	if (dic->cpages) {
+@@ -1771,10 +1772,10 @@ static void f2fs_free_dic(struct decompress_io_ctx *dic,
+ 				continue;
+ 			f2fs_compress_free_page(dic->cpages[i]);
+ 		}
+-		page_array_free(dic->inode, dic->cpages, dic->nr_cpages);
++		page_array_free(sbi, dic->cpages, dic->nr_cpages);
+ 	}
+ 
+-	page_array_free(dic->inode, dic->rpages, dic->nr_rpages);
++	page_array_free(sbi, dic->rpages, dic->nr_rpages);
+ 	kmem_cache_free(dic_entry_slab, dic);
+ }
+ 
+@@ -1793,8 +1794,7 @@ static void f2fs_put_dic(struct decompress_io_ctx *dic, bool in_task)
+ 			f2fs_free_dic(dic, false);
+ 		} else {
+ 			INIT_WORK(&dic->free_work, f2fs_late_free_dic);
+-			queue_work(F2FS_I_SB(dic->inode)->post_read_wq,
+-					&dic->free_work);
++			queue_work(dic->sbi->post_read_wq, &dic->free_work);
+ 		}
+ 	}
+ }
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 31e89284262592..53b64f4ff2d742 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -282,7 +282,7 @@ static void f2fs_read_end_io(struct bio *bio)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_P_SB(bio_first_page_all(bio));
+ 	struct bio_post_read_ctx *ctx;
+-	bool intask = in_task();
++	bool intask = in_task() && !irqs_disabled();
+ 
+ 	iostat_update_and_unbind_ctx(bio);
+ 	ctx = bio->bi_private;
+@@ -1572,8 +1572,11 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
+ 	end = pgofs + maxblocks;
+ 
+ next_dnode:
+-	if (map->m_may_create)
++	if (map->m_may_create) {
++		if (f2fs_lfs_mode(sbi))
++			f2fs_balance_fs(sbi, true);
+ 		f2fs_map_lock(sbi, flag);
++	}
+ 
+ 	/* When reading holes, we need its node page */
+ 	set_new_dnode(&dn, inode, NULL, NULL, 0);
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index 16c2dfb4f59530..3417e7e550b210 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -21,7 +21,7 @@
+ #include "gc.h"
+ 
+ static LIST_HEAD(f2fs_stat_list);
+-static DEFINE_RAW_SPINLOCK(f2fs_stat_lock);
++static DEFINE_SPINLOCK(f2fs_stat_lock);
+ #ifdef CONFIG_DEBUG_FS
+ static struct dentry *f2fs_debugfs_root;
+ #endif
+@@ -439,9 +439,8 @@ static int stat_show(struct seq_file *s, void *v)
+ {
+ 	struct f2fs_stat_info *si;
+ 	int i = 0, j = 0;
+-	unsigned long flags;
+ 
+-	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
++	spin_lock(&f2fs_stat_lock);
+ 	list_for_each_entry(si, &f2fs_stat_list, stat_list) {
+ 		struct f2fs_sb_info *sbi = si->sbi;
+ 
+@@ -753,7 +752,7 @@ static int stat_show(struct seq_file *s, void *v)
+ 		seq_printf(s, "  - paged : %llu KB\n",
+ 				si->page_mem >> 10);
+ 	}
+-	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
++	spin_unlock(&f2fs_stat_lock);
+ 	return 0;
+ }
+ 
+@@ -765,7 +764,6 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ 	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
+ 	struct f2fs_stat_info *si;
+ 	struct f2fs_dev_stats *dev_stats;
+-	unsigned long flags;
+ 	int i;
+ 
+ 	si = f2fs_kzalloc(sbi, sizeof(struct f2fs_stat_info), GFP_KERNEL);
+@@ -817,9 +815,9 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ 
+ 	atomic_set(&sbi->max_aw_cnt, 0);
+ 
+-	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
++	spin_lock(&f2fs_stat_lock);
+ 	list_add_tail(&si->stat_list, &f2fs_stat_list);
+-	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
++	spin_unlock(&f2fs_stat_lock);
+ 
+ 	return 0;
+ }
+@@ -827,11 +825,10 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
+ {
+ 	struct f2fs_stat_info *si = F2FS_STAT(sbi);
+-	unsigned long flags;
+ 
+-	raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
++	spin_lock(&f2fs_stat_lock);
+ 	list_del(&si->stat_list);
+-	raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
++	spin_unlock(&f2fs_stat_lock);
+ 
+ 	kfree(si->dev_stats);
+ 	kfree(si);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index cfe925a3d5555f..4ce19a310f38a5 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -414,7 +414,7 @@ void f2fs_init_read_extent_tree(struct inode *inode, struct folio *ifolio)
+ 	struct f2fs_extent *i_ext = &F2FS_INODE(&ifolio->page)->i_ext;
+ 	struct extent_tree *et;
+ 	struct extent_node *en;
+-	struct extent_info ei;
++	struct extent_info ei = {0};
+ 
+ 	if (!__may_extent_tree(inode, EX_READ)) {
+ 		/* drop largest read extent */
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 9333a22b9a01ed..e084b96f110902 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1286,7 +1286,7 @@ struct f2fs_bio_info {
+ struct f2fs_dev_info {
+ 	struct file *bdev_file;
+ 	struct block_device *bdev;
+-	char path[MAX_PATH_LEN];
++	char path[MAX_PATH_LEN + 1];
+ 	unsigned int total_segments;
+ 	block_t start_blk;
+ 	block_t end_blk;
+@@ -1536,6 +1536,7 @@ struct compress_io_ctx {
+ struct decompress_io_ctx {
+ 	u32 magic;			/* magic number to indicate page is compressed */
+ 	struct inode *inode;		/* inode the context belong to */
++	struct f2fs_sb_info *sbi;	/* f2fs_sb_info pointer */
+ 	pgoff_t cluster_idx;		/* cluster index number */
+ 	unsigned int cluster_size;	/* page count in cluster */
+ 	unsigned int log_cluster_size;	/* log of cluster size */
+@@ -1576,6 +1577,7 @@ struct decompress_io_ctx {
+ 
+ 	bool failed;			/* IO error occurred before decompression? */
+ 	bool need_verity;		/* need fs-verity verification after decompression? */
++	unsigned char compress_algorithm;	/* backup algorithm type */
+ 	void *private;			/* payload buffer for specified decompression algorithm */
+ 	void *private2;			/* extra payload buffer */
+ 	struct work_struct verity_work;	/* work to verify the decompressed pages */
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 3cb5242f4ddfe9..d915b54392b854 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1891,6 +1891,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control)
+ 	/* Let's run FG_GC, if we don't have enough space. */
+ 	if (has_not_enough_free_secs(sbi, 0, 0)) {
+ 		gc_type = FG_GC;
++		gc_control->one_time = false;
+ 
+ 		/*
+ 		 * For example, if there are many prefree_segments below given
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 083d52a42bfb25..fc774de1c75264 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -933,6 +933,19 @@ void f2fs_evict_inode(struct inode *inode)
+ 		f2fs_update_inode_page(inode);
+ 		if (dquot_initialize_needed(inode))
+ 			set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++
++		/*
++		 * If both f2fs_truncate() and f2fs_update_inode_page() failed
++	 * due to a fuzzed/corrupted inode, call f2fs_inode_synced() to
++		 * avoid triggering later f2fs_bug_on().
++		 */
++		if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++			f2fs_warn(sbi,
++				"f2fs_evict_inode: inode is dirty, ino:%lu",
++				inode->i_ino);
++			f2fs_inode_synced(inode);
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++		}
+ 	}
+ 	if (freeze_protected)
+ 		sb_end_intwrite(inode->i_sb);
+@@ -949,8 +962,12 @@ void f2fs_evict_inode(struct inode *inode)
+ 	if (likely(!f2fs_cp_error(sbi) &&
+ 				!is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ 		f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE));
+-	else
+-		f2fs_inode_synced(inode);
++
++	/*
++	 * In any case, the inode must be removed from the sbi->inode_list[DIRTY_META]
++	 * list to avoid UAF in f2fs_sync_inode_meta() during checkpoint.
++	 */
++	f2fs_inode_synced(inode);
+ 
+ 	/* for the case f2fs_new_inode() was failed, .i_ino is zero, skip it */
+ 	if (inode->i_ino)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index db619fd2f51a52..a8ac5309bd9056 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -674,8 +674,7 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ 	unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
+ 	unsigned int data_blocks = 0;
+ 
+-	if (f2fs_lfs_mode(sbi) &&
+-		unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++	if (f2fs_lfs_mode(sbi)) {
+ 		total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
+ 		data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
+ 		data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
+@@ -684,7 +683,7 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ 	if (lower_p)
+ 		*lower_p = node_secs + dent_secs + data_secs;
+ 	if (upper_p)
+-		*upper_p = node_secs + dent_secs +
++		*upper_p = node_secs + dent_secs + data_secs +
+ 			(node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
+ 			(data_blocks ? 1 : 0);
+ 	if (curseg_p)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index bbf1dad6843f06..4cbf3a133474c3 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3451,6 +3451,7 @@ static int __f2fs_commit_super(struct f2fs_sb_info *sbi, struct folio *folio,
+ 		f2fs_bug_on(sbi, 1);
+ 
+ 	ret = submit_bio_wait(bio);
++	bio_put(bio);
+ 	folio_end_writeback(folio);
+ 
+ 	return ret;
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 75134d69a0bdcd..5da0254e205789 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -628,6 +628,27 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ 		return count;
+ 	}
+ 
++	if (!strcmp(a->attr.name, "gc_no_zoned_gc_percent")) {
++		if (t > 100)
++			return -EINVAL;
++		*ui = (unsigned int)t;
++		return count;
++	}
++
++	if (!strcmp(a->attr.name, "gc_boost_zoned_gc_percent")) {
++		if (t > 100)
++			return -EINVAL;
++		*ui = (unsigned int)t;
++		return count;
++	}
++
++	if (!strcmp(a->attr.name, "gc_valid_thresh_ratio")) {
++		if (t > 100)
++			return -EINVAL;
++		*ui = (unsigned int)t;
++		return count;
++	}
++
+ #ifdef CONFIG_F2FS_IOSTAT
+ 	if (!strcmp(a->attr.name, "iostat_enable")) {
+ 		sbi->iostat_enable = !!t;
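
The three new percent clamps are textually identical; a table-driven shape would fold them into one check. A hedged alternative, noted only as a style option rather than what the patch does:

    #include <errno.h>
    #include <string.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char * const percent_attrs[] = {
            "gc_no_zoned_gc_percent",
            "gc_boost_zoned_gc_percent",
            "gc_valid_thresh_ratio",
    };

    /* Clamp any attribute in the table to 0..100 in one place. */
    static int store_percent(const char *name, unsigned long t, unsigned int *ui)
    {
            size_t i;

            for (i = 0; i < ARRAY_SIZE(percent_attrs); i++) {
                    if (strcmp(name, percent_attrs[i]))
                            continue;
                    if (t > 100)
                            return -EINVAL;
                    *ui = (unsigned int)t;
                    return 0;
            }
            return -ENOENT;         /* not a percent attribute */
    }
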
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index ba25b884169e50..ea96113edbe31b 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -802,7 +802,8 @@ __acquires(&gl->gl_lockref.lock)
+ 			 * We skip telling dlm to do the locking, so we won't get a
+ 			 * reply that would otherwise clear GLF_LOCK. So we clear it here.
+ 			 */
+-			clear_bit(GLF_LOCK, &gl->gl_flags);
++			if (!test_bit(GLF_CANCELING, &gl->gl_flags))
++				clear_bit(GLF_LOCK, &gl->gl_flags);
+ 			clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
+ 			gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
+ 			return;
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index d5a1e63fa257e0..24864a66074b2a 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -232,32 +232,23 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
+ 	 */
+ 	ret = gfs2_glock_nq(&sdp->sd_live_gh);
+ 
++	gfs2_glock_put(live_gl); /* drop extra reference we acquired */
++	clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
++
+ 	/*
+ 	 * If we actually got the "live" lock in EX mode, there are no other
+-	 * nodes available to replay our journal. So we try to replay it
+-	 * ourselves. We hold the "live" glock to prevent other mounters
+-	 * during recovery, then just dequeue it and reacquire it in our
+-	 * normal SH mode. Just in case the problem that caused us to
+-	 * withdraw prevents us from recovering our journal (e.g. io errors
+-	 * and such) we still check if the journal is clean before proceeding
+-	 * but we may wait forever until another mounter does the recovery.
++	 * nodes available to replay our journal.
+ 	 */
+ 	if (ret == 0) {
+-		fs_warn(sdp, "No other mounters found. Trying to recover our "
+-			"own journal jid %d.\n", sdp->sd_lockstruct.ls_jid);
+-		if (gfs2_recover_journal(sdp->sd_jdesc, 1))
+-			fs_warn(sdp, "Unable to recover our journal jid %d.\n",
+-				sdp->sd_lockstruct.ls_jid);
+-		gfs2_glock_dq_wait(&sdp->sd_live_gh);
+-		gfs2_holder_reinit(LM_ST_SHARED,
+-				   LM_FLAG_NOEXP | GL_EXACT | GL_NOPID,
+-				   &sdp->sd_live_gh);
+-		gfs2_glock_nq(&sdp->sd_live_gh);
++		fs_warn(sdp, "No other mounters found.\n");
++		/*
++		 * We are about to release the lockspace.  By keeping live_gl
++		 * locked here, we ensure that the next mounter coming along
++		 * will be a "first" mounter which will perform recovery.
++		 */
++		goto skip_recovery;
+ 	}
+ 
+-	gfs2_glock_put(live_gl); /* drop extra reference we acquired */
+-	clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
+-
+ 	/*
+ 	 * At this point our journal is evicted, so we need to get a new inode
+ 	 * for it. Once done, we need to call gfs2_find_jhead which
+diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
+index a81ce7a740b918..451115360f73a0 100644
+--- a/fs/hfs/inode.c
++++ b/fs/hfs/inode.c
+@@ -692,6 +692,7 @@ static const struct file_operations hfs_file_operations = {
+ 	.write_iter	= generic_file_write_iter,
+ 	.mmap		= generic_file_mmap,
+ 	.splice_read	= filemap_splice_read,
++	.splice_write	= iter_file_splice_write,
+ 	.fsync		= hfs_file_fsync,
+ 	.open		= hfs_file_open,
+ 	.release	= hfs_file_release,
+diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
+index a6d61685ae79bb..b1699b3c246ae4 100644
+--- a/fs/hfsplus/extents.c
++++ b/fs/hfsplus/extents.c
+@@ -342,9 +342,6 @@ static int hfsplus_free_extents(struct super_block *sb,
+ 	int i;
+ 	int err = 0;
+ 
+-	/* Mapping the allocation file may lock the extent tree */
+-	WARN_ON(mutex_is_locked(&HFSPLUS_SB(sb)->ext_tree->tree_lock));
+-
+ 	hfsplus_dump_extent(extent);
+ 	for (i = 0; i < 8; extent++, i++) {
+ 		count = be32_to_cpu(extent->block_count);
+diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
+index f331e957421783..c85b5802ec0f95 100644
+--- a/fs/hfsplus/inode.c
++++ b/fs/hfsplus/inode.c
+@@ -368,6 +368,7 @@ static const struct file_operations hfsplus_file_operations = {
+ 	.write_iter	= generic_file_write_iter,
+ 	.mmap		= generic_file_mmap,
+ 	.splice_read	= filemap_splice_read,
++	.splice_write	= iter_file_splice_write,
+ 	.fsync		= hfsplus_file_fsync,
+ 	.open		= hfsplus_file_open,
+ 	.release	= hfsplus_file_release,
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 35e063c9f3a42e..5a877261c3fe48 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1809,8 +1809,10 @@ dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
+ 			return -EIO;
+ 		dp = (struct dmap *) mp->data;
+ 
+-		if (dp->tree.budmin < 0)
++		if (dp->tree.budmin < 0) {
++			release_metapage(mp);
+ 			return -EIO;
++		}
+ 
+ 		/* try to allocate the blocks.
+ 		 */
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index d0e0b435a84316..d812179239362b 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1828,9 +1828,7 @@ static void block_revalidate(struct dentry *dentry)
+ 
+ static void unblock_revalidate(struct dentry *dentry)
+ {
+-	/* store_release ensures wait_var_event() sees the update */
+-	smp_store_release(&dentry->d_fsdata, NULL);
+-	wake_up_var(&dentry->d_fsdata);
++	store_release_wake_up(&dentry->d_fsdata, NULL);
+ }
+ 
+ /*
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index e9c233b6fd2095..a10dd5f9d0786e 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -66,14 +66,21 @@ nfs_fh_to_dentry(struct super_block *sb, struct fid *fid,
+ {
+ 	struct nfs_fattr *fattr = NULL;
+ 	struct nfs_fh *server_fh = nfs_exp_embedfh(fid->raw);
+-	size_t fh_size = offsetof(struct nfs_fh, data) + server_fh->size;
++	size_t fh_size = offsetof(struct nfs_fh, data);
+ 	const struct nfs_rpc_ops *rpc_ops;
+ 	struct dentry *dentry;
+ 	struct inode *inode;
+-	int len = EMBED_FH_OFF + XDR_QUADLEN(fh_size);
++	int len = EMBED_FH_OFF;
+ 	u32 *p = fid->raw;
+ 	int ret;
+ 
++	/* Initial check of bounds */
++	if (fh_len < len + XDR_QUADLEN(fh_size) ||
++	    fh_len > XDR_QUADLEN(NFS_MAXFHSIZE))
++		return NULL;
++	/* Calculate embedded filehandle size */
++	fh_size += server_fh->size;
++	len += XDR_QUADLEN(fh_size);
+ 	/* NULL translates to ESTALE */
+ 	if (fh_len < len || fh_type != len)
+ 		return NULL;
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 4bea008dbebd7c..8dc921d835388e 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -762,14 +762,14 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ {
+ 	struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg);
+ 	struct nfs4_ff_layout_mirror *mirror;
+-	struct nfs4_pnfs_ds *ds;
++	struct nfs4_pnfs_ds *ds = ERR_PTR(-EAGAIN);
+ 	u32 idx;
+ 
+ 	/* mirrors are initially sorted by efficiency */
+ 	for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) {
+ 		mirror = FF_LAYOUT_COMP(lseg, idx);
+ 		ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+-		if (!ds)
++		if (IS_ERR(ds))
+ 			continue;
+ 
+ 		if (check_device &&
+@@ -777,10 +777,10 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ 			continue;
+ 
+ 		*best_idx = idx;
+-		return ds;
++		break;
+ 	}
+ 
+-	return NULL;
++	return ds;
+ }
+ 
+ static struct nfs4_pnfs_ds *
+@@ -942,7 +942,7 @@ ff_layout_pg_init_write(struct nfs_pageio_descriptor *pgio,
+ 	for (i = 0; i < pgio->pg_mirror_count; i++) {
+ 		mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+ 		ds = nfs4_ff_layout_prepare_ds(pgio->pg_lseg, mirror, true);
+-		if (!ds) {
++		if (IS_ERR(ds)) {
+ 			if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ 				goto out_mds;
+ 			pnfs_generic_pg_cleanup(pgio);
+@@ -1867,6 +1867,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ 	u32 idx = hdr->pgio_mirror_idx;
+ 	int vers;
+ 	struct nfs_fh *fh;
++	bool ds_fatal_error = false;
+ 
+ 	dprintk("--> %s ino %lu pgbase %u req %zu@%llu\n",
+ 		__func__, hdr->inode->i_ino,
+@@ -1874,8 +1875,10 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ 
+ 	mirror = FF_LAYOUT_COMP(lseg, idx);
+ 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, false);
+-	if (!ds)
++	if (IS_ERR(ds)) {
++		ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ 		goto out_failed;
++	}
+ 
+ 	ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ 						   hdr->inode);
+@@ -1923,7 +1926,7 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr)
+ 	return PNFS_ATTEMPTED;
+ 
+ out_failed:
+-	if (ff_layout_avoid_mds_available_ds(lseg))
++	if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ 		return PNFS_TRY_AGAIN;
+ 	trace_pnfs_mds_fallback_read_pagelist(hdr->inode,
+ 			hdr->args.offset, hdr->args.count,
+@@ -1945,11 +1948,14 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ 	int vers;
+ 	struct nfs_fh *fh;
+ 	u32 idx = hdr->pgio_mirror_idx;
++	bool ds_fatal_error = false;
+ 
+ 	mirror = FF_LAYOUT_COMP(lseg, idx);
+ 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+-	if (!ds)
++	if (IS_ERR(ds)) {
++		ds_fatal_error = nfs_error_is_fatal(PTR_ERR(ds));
+ 		goto out_failed;
++	}
+ 
+ 	ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+ 						   hdr->inode);
+@@ -2000,7 +2006,7 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync)
+ 	return PNFS_ATTEMPTED;
+ 
+ out_failed:
+-	if (ff_layout_avoid_mds_available_ds(lseg))
++	if (ff_layout_avoid_mds_available_ds(lseg) && !ds_fatal_error)
+ 		return PNFS_TRY_AGAIN;
+ 	trace_pnfs_mds_fallback_write_pagelist(hdr->inode,
+ 			hdr->args.offset, hdr->args.count,
+@@ -2043,7 +2049,7 @@ static int ff_layout_initiate_commit(struct nfs_commit_data *data, int how)
+ 	idx = calc_ds_index_from_commit(lseg, data->ds_commit_index);
+ 	mirror = FF_LAYOUT_COMP(lseg, idx);
+ 	ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
+-	if (!ds)
++	if (IS_ERR(ds))
+ 		goto out_err;
+ 
+ 	ds_clnt = nfs4_ff_find_or_create_ds_client(mirror, ds->ds_clp,
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 656d5c50bbce1c..30365ec782bb1b 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -370,11 +370,11 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 			  struct nfs4_ff_layout_mirror *mirror,
+ 			  bool fail_return)
+ {
+-	struct nfs4_pnfs_ds *ds = NULL;
++	struct nfs4_pnfs_ds *ds;
+ 	struct inode *ino = lseg->pls_layout->plh_inode;
+ 	struct nfs_server *s = NFS_SERVER(ino);
+ 	unsigned int max_payload;
+-	int status;
++	int status = -EAGAIN;
+ 
+ 	if (!ff_layout_init_mirror_ds(lseg->pls_layout, mirror))
+ 		goto noconnect;
+@@ -418,7 +418,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg,
+ 	ff_layout_send_layouterror(lseg);
+ 	if (fail_return || !ff_layout_has_available_ds(lseg))
+ 		pnfs_error_mark_layout_for_return(ino, lseg);
+-	ds = NULL;
++	ds = ERR_PTR(status);
+ out:
+ 	return ds;
+ }
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 69c2c10ee658c9..d8f768254f1665 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -671,9 +671,12 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+ 
+ static inline gfp_t nfs_io_gfp_mask(void)
+ {
+-	if (current->flags & PF_WQ_WORKER)
+-		return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
+-	return GFP_KERNEL;
++	gfp_t ret = current_gfp_context(GFP_KERNEL);
++
++	/* For workers __GFP_NORETRY only with __GFP_IO or __GFP_FS */
++	if ((current->flags & PF_WQ_WORKER) && ret == GFP_KERNEL)
++		ret |= __GFP_NORETRY | __GFP_NOWARN;
++	return ret;
+ }
+ 
+ /*
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 341740fa293d8f..811892cdb5a3a3 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10867,7 +10867,7 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+ 
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+-	ssize_t error, error2, error3, error4;
++	ssize_t error, error2, error3, error4 = 0;
+ 	size_t left = size;
+ 
+ 	error = generic_listxattr(dentry, list, left);
+@@ -10895,9 +10895,11 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ 		left -= error3;
+ 	}
+ 
+-	error4 = security_inode_listsecurity(d_inode(dentry), list, left);
+-	if (error4 < 0)
+-		return error4;
++	if (!nfs_server_capable(d_inode(dentry), NFS_CAP_SECURITY_LABEL)) {
++		error4 = security_inode_listsecurity(d_inode(dentry), list, left);
++		if (error4 < 0)
++			return error4;
++	}
+ 
+ 	error += error2 + error3 + error4;
+ 	if (size && error > size)
+diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c
+index 05c7c16e37ab4c..dd715cdb6c0431 100644
+--- a/fs/nfs_common/nfslocalio.c
++++ b/fs/nfs_common/nfslocalio.c
+@@ -177,7 +177,7 @@ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ 			/* nfs_close_local_fh() is doing the
+ 			 * close and we must wait. until it unlinks
+ 			 */
+-			wait_var_event_spinlock(nfl,
++			wait_var_event_spinlock(nfs_uuid,
+ 						list_first_entry_or_null(
+ 							&nfs_uuid->files,
+ 							struct nfs_file_localio,
+@@ -198,8 +198,7 @@ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ 		/* Now we can allow racing nfs_close_local_fh() to
+ 		 * skip the locking.
+ 		 */
+-		RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
+-		wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++		store_release_wake_up(&nfl->nfs_uuid, RCU_INITIALIZER(NULL));
+ 	}
+ 
+ 	/* Remove client from nn->local_clients */
+@@ -243,15 +242,20 @@ void nfs_localio_invalidate_clients(struct list_head *nn_local_clients,
+ }
+ EXPORT_SYMBOL_GPL(nfs_localio_invalidate_clients);
+ 
+-static void nfs_uuid_add_file(nfs_uuid_t *nfs_uuid, struct nfs_file_localio *nfl)
++static int nfs_uuid_add_file(nfs_uuid_t *nfs_uuid, struct nfs_file_localio *nfl)
+ {
++	int ret = 0;
++
+ 	/* Add nfl to nfs_uuid->files if it isn't already */
+ 	spin_lock(&nfs_uuid->lock);
+-	if (list_empty(&nfl->list)) {
++	if (rcu_access_pointer(nfs_uuid->net) == NULL) {
++		ret = -ENXIO;
++	} else if (list_empty(&nfl->list)) {
+ 		rcu_assign_pointer(nfl->nfs_uuid, nfs_uuid);
+ 		list_add_tail(&nfl->list, &nfs_uuid->files);
+ 	}
+ 	spin_unlock(&nfs_uuid->lock);
++	return ret;
+ }
+ 
+ /*
+@@ -285,11 +289,13 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ 	}
+ 	rcu_read_unlock();
+ 	/* We have an implied reference to net thanks to nfsd_net_try_get */
+-	localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt,
+-					     cred, nfs_fh, pnf, fmode);
++	localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt, cred,
++					     nfs_fh, pnf, fmode);
++	if (!IS_ERR(localio) && nfs_uuid_add_file(uuid, nfl) < 0) {
++		/* Delete the cached file when racing with nfs_uuid_put() */
++		nfs_to_nfsd_file_put_local(pnf);
++	}
+ 	nfs_to_nfsd_net_put(net);
+-	if (!IS_ERR(localio))
+-		nfs_uuid_add_file(uuid, nfl);
+ 
+ 	return localio;
+ }
+@@ -314,7 +320,7 @@ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ 		rcu_read_unlock();
+ 		return;
+ 	}
+-	if (list_empty(&nfs_uuid->files)) {
++	if (list_empty(&nfl->list)) {
+ 		/* nfs_uuid_put() has started closing files, wait for it
+ 		 * to finished
+ 		 */
+@@ -338,7 +344,7 @@ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ 	 */
+ 	spin_lock(&nfs_uuid->lock);
+ 	list_del_init(&nfl->list);
+-	wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++	wake_up_var_locked(nfs_uuid, &nfs_uuid->lock);
+ 	spin_unlock(&nfs_uuid->lock);
+ }
+ EXPORT_SYMBOL_GPL(nfs_close_local_fh);
+diff --git a/fs/nfsd/localio.c b/fs/nfsd/localio.c
+index 80d9ff6608a7b9..519bbdedcb1170 100644
+--- a/fs/nfsd/localio.c
++++ b/fs/nfsd/localio.c
+@@ -103,10 +103,11 @@ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 			if (nfsd_file_get(new) == NULL)
+ 				goto again;
+ 			/*
+-			 * Drop the ref we were going to install and the
+-			 * one we were going to return.
++			 * Drop the ref we were going to install (both file and
++			 * net) and the one we were going to return (only file).
+ 			 */
+ 			nfsd_file_put(localio);
++			nfsd_net_put(net);
+ 			nfsd_file_put(localio);
+ 			localio = new;
+ 		}
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index cd689df2ca5d73..b8fbc02fb3e44c 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -470,7 +470,15 @@ static int __nfsd_setattr(struct dentry *dentry, struct iattr *iap)
+ 	if (!iap->ia_valid)
+ 		return 0;
+ 
+-	iap->ia_valid |= ATTR_CTIME;
++	/*
++	 * If ATTR_DELEG is set, then this is an update from a client that
++	 * holds a delegation. If this is an update for only the atime, the
++	 * ctime should not be changed. If the update contains the mtime
++	 * too, then ATTR_CTIME should already be set.
++	 */
++	if (!(iap->ia_valid & ATTR_DELEG))
++		iap->ia_valid |= ATTR_CTIME;
++
+ 	return notify_change(&nop_mnt_idmap, dentry, iap, NULL);
+ }
+ 
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index 3083643b864b83..bfe884d624e7b2 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -454,7 +454,13 @@ static int fanotify_encode_fh(struct fanotify_fh *fh, struct inode *inode,
+ 	dwords = fh_len >> 2;
+ 	type = exportfs_encode_fid(inode, buf, &dwords);
+ 	err = -EINVAL;
+-	if (type <= 0 || type == FILEID_INVALID || fh_len != dwords << 2)
++	/*
++	 * Unlike file_handle, type and len of struct fanotify_fh are u8.
++	 * Traditionally, filesystems return handle_type < 0xff, but there
++	 * is no enforcement for that in vfs.
++	 */
++	BUILD_BUG_ON(MAX_HANDLE_SZ > 0xff || FILEID_INVALID > 0xff);
++	if (type <= 0 || type >= FILEID_INVALID || fh_len != dwords << 2)
+ 		goto out_err;
+ 
+ 	fh->type = type;
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 1e99a35691cdfc..4dc8d7eb0901c9 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -310,7 +310,10 @@ static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 		}
+ 
+ 		if (ni->i_valid < to) {
+-			inode_lock(inode);
++			if (!inode_trylock(inode)) {
++				err = -EAGAIN;
++				goto out;
++			}
+ 			err = ntfs_extend_initialized_size(file, ni,
+ 							   ni->i_valid, to);
+ 			inode_unlock(inode);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 756e1306fe6c49..7afbb4418eb240 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -3003,8 +3003,7 @@ int ni_add_name(struct ntfs_inode *dir_ni, struct ntfs_inode *ni,
+  * ni_rename - Remove one name and insert new name.
+  */
+ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+-	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de,
+-	      bool *is_bad)
++	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de)
+ {
+ 	int err;
+ 	struct NTFS_DE *de2 = NULL;
+@@ -3027,8 +3026,8 @@ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+ 	err = ni_add_name(new_dir_ni, ni, new_de);
+ 	if (!err) {
+ 		err = ni_remove_name(dir_ni, ni, de, &de2, &undo);
+-		if (err && ni_remove_name(new_dir_ni, ni, new_de, &de2, &undo))
+-			*is_bad = true;
++		WARN_ON(err && ni_remove_name(new_dir_ni, ni, new_de, &de2,
++			&undo));
+ 	}
+ 
+ 	/*
+diff --git a/fs/ntfs3/namei.c b/fs/ntfs3/namei.c
+index b807744fc6a97d..0db7ca3b64ead5 100644
+--- a/fs/ntfs3/namei.c
++++ b/fs/ntfs3/namei.c
+@@ -244,7 +244,7 @@ static int ntfs_rename(struct mnt_idmap *idmap, struct inode *dir,
+ 	struct ntfs_inode *ni = ntfs_i(inode);
+ 	struct inode *new_inode = d_inode(new_dentry);
+ 	struct NTFS_DE *de, *new_de;
+-	bool is_same, is_bad;
++	bool is_same;
+ 	/*
+ 	 * de		- memory of PATH_MAX bytes:
+ 	 * [0-1024)	- original name (dentry->d_name)
+@@ -313,12 +313,8 @@ static int ntfs_rename(struct mnt_idmap *idmap, struct inode *dir,
+ 	if (dir_ni != new_dir_ni)
+ 		ni_lock_dir2(new_dir_ni);
+ 
+-	is_bad = false;
+-	err = ni_rename(dir_ni, new_dir_ni, ni, de, new_de, &is_bad);
+-	if (is_bad) {
+-		/* Restore after failed rename failed too. */
+-		_ntfs_bad_inode(inode);
+-	} else if (!err) {
++	err = ni_rename(dir_ni, new_dir_ni, ni, de, new_de);
++	if (!err) {
+ 		simple_rename_timestamp(dir, dentry, new_dir, new_dentry);
+ 		mark_inode_dirty(inode);
+ 		mark_inode_dirty(dir);
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index 36b8052660d52c..f54635df18fa5f 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -577,8 +577,7 @@ int ni_add_name(struct ntfs_inode *dir_ni, struct ntfs_inode *ni,
+ 		struct NTFS_DE *de);
+ 
+ int ni_rename(struct ntfs_inode *dir_ni, struct ntfs_inode *new_dir_ni,
+-	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de,
+-	      bool *is_bad);
++	      struct ntfs_inode *ni, struct NTFS_DE *de, struct NTFS_DE *new_de);
+ 
+ bool ni_is_dirty(struct inode *inode);
+ 
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index f7095c91660c34..e8e3badbc2ec06 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -769,8 +769,8 @@ static void do_k_string(void *k_mask, int index)
+ 
+ 	if (*mask & s_kmod_keyword_mask_map[index].mask_val) {
+ 		if ((strlen(kernel_debug_string) +
+-		     strlen(s_kmod_keyword_mask_map[index].keyword))
+-			< ORANGEFS_MAX_DEBUG_STRING_LEN - 1) {
++		     strlen(s_kmod_keyword_mask_map[index].keyword) + 1)
++			< ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ 				strcat(kernel_debug_string,
+ 				       s_kmod_keyword_mask_map[index].keyword);
+ 				strcat(kernel_debug_string, ",");
+@@ -797,7 +797,7 @@ static void do_c_string(void *c_mask, int index)
+ 	    (mask->mask2 & cdm_array[index].mask2)) {
+ 		if ((strlen(client_debug_string) +
+ 		     strlen(cdm_array[index].keyword) + 1)
+-			< ORANGEFS_MAX_DEBUG_STRING_LEN - 2) {
++			< ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ 				strcat(client_debug_string,
+ 				       cdm_array[index].keyword);
+ 				strcat(client_debug_string, ",");
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index a3e22803cddf2d..e0e50914ab25f2 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -569,6 +569,8 @@ static void pde_set_flags(struct proc_dir_entry *pde)
+ 	if (pde->proc_ops->proc_compat_ioctl)
+ 		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
+ #endif
++	if (pde->proc_ops->proc_lseek)
++		pde->flags |= PROC_ENTRY_proc_lseek;
+ }
+ 
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index 3604b616311c27..129490151be147 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -473,7 +473,7 @@ static int proc_reg_open(struct inode *inode, struct file *file)
+ 	typeof_member(struct proc_ops, proc_open) open;
+ 	struct pde_opener *pdeo;
+ 
+-	if (!pde->proc_ops->proc_lseek)
++	if (!pde_has_proc_lseek(pde))
+ 		file->f_mode &= ~FMODE_LSEEK;
+ 
+ 	if (pde_is_permanent(pde)) {
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index 96122e91c64597..3d48ffe72583a1 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -99,6 +99,11 @@ static inline bool pde_has_proc_compat_ioctl(const struct proc_dir_entry *pde)
+ #endif
+ }
+ 
++static inline bool pde_has_proc_lseek(const struct proc_dir_entry *pde)
++{
++	return pde->flags & PROC_ENTRY_proc_lseek;
++}
++
+ extern struct kmem_cache *proc_dir_entry_cache;
+ void pde_free(struct proc_dir_entry *pde);
+ 
+diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c
+index 3fdf75737d43cc..d1acde8443266d 100644
+--- a/fs/smb/client/cifs_debug.c
++++ b/fs/smb/client/cifs_debug.c
+@@ -432,10 +432,8 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 			server->smbd_conn->receive_credit_target);
+ 		seq_printf(m, "\nPending send_pending: %x ",
+ 			atomic_read(&server->smbd_conn->send_pending));
+-		seq_printf(m, "\nReceive buffers count_receive_queue: %x "
+-			"count_empty_packet_queue: %x",
+-			server->smbd_conn->count_receive_queue,
+-			server->smbd_conn->count_empty_packet_queue);
++		seq_printf(m, "\nReceive buffers count_receive_queue: %x ",
++			server->smbd_conn->count_receive_queue);
+ 		seq_printf(m, "\nMR responder_resources: %x "
+ 			"max_frmr_depth: %x mr_type: %x",
+ 			server->smbd_conn->responder_resources,
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index 35892df7335c75..6be850d2a34677 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -343,7 +343,7 @@ static struct ntlmssp2_name *find_next_av(struct cifs_ses *ses,
+ 	len = AV_LEN(av);
+ 	if (AV_TYPE(av) == NTLMSSP_AV_EOL)
+ 		return NULL;
+-	if (!len || (u8 *)av + sizeof(*av) + len > end)
++	if ((u8 *)av + sizeof(*av) + len > end)
+ 		return NULL;
+ 	return av;
+ }
+@@ -363,7 +363,7 @@ static int find_av_name(struct cifs_ses *ses, u16 type, char **name, u16 maxlen)
+ 
+ 	av_for_each_entry(ses, av) {
+ 		len = AV_LEN(av);
+-		if (AV_TYPE(av) != type)
++		if (AV_TYPE(av) != type || !len)
+ 			continue;
+ 		if (!IS_ALIGNED(len, sizeof(__le16))) {
+ 			cifs_dbg(VFS | ONCE, "%s: bad length(%u) for type %u\n",
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 0a5266ecfd1571..697badd0445a0d 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -723,7 +723,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 	else
+ 		seq_puts(s, ",nativesocket");
+ 	seq_show_option(s, "symlink",
+-			cifs_symlink_type_str(get_cifs_symlink_type(cifs_sb)));
++			cifs_symlink_type_str(cifs_symlink_type(cifs_sb)));
+ 
+ 	seq_printf(s, ",rsize=%u", cifs_sb->ctx->rsize);
+ 	seq_printf(s, ",wsize=%u", cifs_sb->ctx->wsize);
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 205f547ca49ed1..5eec8957f2a996 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -3362,18 +3362,15 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ 		struct net *net = cifs_net_ns(server);
+ 		struct sock *sk;
+ 
+-		rc = __sock_create(net, sfamily, SOCK_STREAM,
+-				   IPPROTO_TCP, &server->ssocket, 1);
++		rc = sock_create_kern(net, sfamily, SOCK_STREAM,
++				      IPPROTO_TCP, &server->ssocket);
+ 		if (rc < 0) {
+ 			cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ 			return rc;
+ 		}
+ 
+ 		sk = server->ssocket->sk;
+-		__netns_tracker_free(net, &sk->ns_tracker, false);
+-		sk->sk_net_refcnt = 1;
+-		get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
+-		sock_inuse_add(net, 1);
++		sk_net_refcnt_upgrade(sk);
+ 
+ 		/* BB other socket options to set KEEPALIVE, NODELAY? */
+ 		cifs_dbg(FYI, "Socket created\n");
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 59ccc2229ab300..3d0bb068f825fc 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1674,6 +1674,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 				pr_warn_once("conflicting posix mount options specified\n");
+ 			ctx->linux_ext = 1;
+ 			ctx->no_linux_ext = 0;
++			ctx->nonativesocket = 1; /* POSIX mounts use NFS style reparse points */
+ 		}
+ 		break;
+ 	case Opt_nocase:
+@@ -1851,24 +1852,6 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 	return -EINVAL;
+ }
+ 
+-enum cifs_symlink_type get_cifs_symlink_type(struct cifs_sb_info *cifs_sb)
+-{
+-	if (cifs_sb->ctx->symlink_type == CIFS_SYMLINK_TYPE_DEFAULT) {
+-		if (cifs_sb->ctx->mfsymlinks)
+-			return CIFS_SYMLINK_TYPE_MFSYMLINKS;
+-		else if (cifs_sb->ctx->sfu_emul)
+-			return CIFS_SYMLINK_TYPE_SFU;
+-		else if (cifs_sb->ctx->linux_ext && !cifs_sb->ctx->no_linux_ext)
+-			return CIFS_SYMLINK_TYPE_UNIX;
+-		else if (cifs_sb->ctx->reparse_type != CIFS_REPARSE_TYPE_NONE)
+-			return CIFS_SYMLINK_TYPE_NATIVE;
+-		else
+-			return CIFS_SYMLINK_TYPE_NONE;
+-	} else {
+-		return cifs_sb->ctx->symlink_type;
+-	}
+-}
+-
+ int smb3_init_fs_context(struct fs_context *fc)
+ {
+ 	struct smb3_fs_context *ctx;
+diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
+index 9e83302ce4b801..b0fec6b9a23b4f 100644
+--- a/fs/smb/client/fs_context.h
++++ b/fs/smb/client/fs_context.h
+@@ -341,7 +341,23 @@ struct smb3_fs_context {
+ 
+ extern const struct fs_parameter_spec smb3_fs_parameters[];
+ 
+-extern enum cifs_symlink_type get_cifs_symlink_type(struct cifs_sb_info *cifs_sb);
++static inline enum cifs_symlink_type cifs_symlink_type(struct cifs_sb_info *cifs_sb)
++{
++	bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
++
++	if (cifs_sb->ctx->symlink_type != CIFS_SYMLINK_TYPE_DEFAULT)
++		return cifs_sb->ctx->symlink_type;
++
++	if (cifs_sb->ctx->mfsymlinks)
++		return CIFS_SYMLINK_TYPE_MFSYMLINKS;
++	else if (cifs_sb->ctx->sfu_emul)
++		return CIFS_SYMLINK_TYPE_SFU;
++	else if (cifs_sb->ctx->linux_ext && !cifs_sb->ctx->no_linux_ext)
++		return posix ? CIFS_SYMLINK_TYPE_NATIVE : CIFS_SYMLINK_TYPE_UNIX;
++	else if (cifs_sb->ctx->reparse_type != CIFS_REPARSE_TYPE_NONE)
++		return CIFS_SYMLINK_TYPE_NATIVE;
++	return CIFS_SYMLINK_TYPE_NONE;
++}
+ 
+ extern int smb3_init_fs_context(struct fs_context *fc);
+ extern void smb3_cleanup_fs_context_contents(struct smb3_fs_context *ctx);
+diff --git a/fs/smb/client/link.c b/fs/smb/client/link.c
+index 769752ad2c5ce8..e2075f1aebc96c 100644
+--- a/fs/smb/client/link.c
++++ b/fs/smb/client/link.c
+@@ -606,14 +606,7 @@ cifs_symlink(struct mnt_idmap *idmap, struct inode *inode,
+ 
+ 	/* BB what if DFS and this volume is on different share? BB */
+ 	rc = -EOPNOTSUPP;
+-	switch (get_cifs_symlink_type(cifs_sb)) {
+-	case CIFS_SYMLINK_TYPE_DEFAULT:
+-		/* should not happen, get_cifs_symlink_type() resolves the default */
+-		break;
+-
+-	case CIFS_SYMLINK_TYPE_NONE:
+-		break;
+-
++	switch (cifs_symlink_type(cifs_sb)) {
+ 	case CIFS_SYMLINK_TYPE_UNIX:
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ 		if (pTcon->unix_ext) {
+@@ -653,6 +646,8 @@ cifs_symlink(struct mnt_idmap *idmap, struct inode *inode,
+ 			goto symlink_exit;
+ 		}
+ 		break;
++	default:
++		break;
+ 	}
+ 
+ 	if (rc == 0) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 5fa29a97ac154b..4f6c320b41c971 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -38,7 +38,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ 				struct dentry *dentry, struct cifs_tcon *tcon,
+ 				const char *full_path, const char *symname)
+ {
+-	switch (get_cifs_symlink_type(CIFS_SB(inode->i_sb))) {
++	switch (cifs_symlink_type(CIFS_SB(inode->i_sb))) {
+ 	case CIFS_SYMLINK_TYPE_NATIVE:
+ 		return create_native_symlink(xid, inode, dentry, tcon, full_path, symname);
+ 	case CIFS_SYMLINK_TYPE_NFS:
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index 754e94a0e07f50..c661a8e6c18b85 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -13,8 +13,6 @@
+ #include "cifsproto.h"
+ #include "smb2proto.h"
+ 
+-static struct smbd_response *get_empty_queue_buffer(
+-		struct smbd_connection *info);
+ static struct smbd_response *get_receive_buffer(
+ 		struct smbd_connection *info);
+ static void put_receive_buffer(
+@@ -23,8 +21,6 @@ static void put_receive_buffer(
+ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf);
+ static void destroy_receive_buffers(struct smbd_connection *info);
+ 
+-static void put_empty_packet(
+-		struct smbd_connection *info, struct smbd_response *response);
+ static void enqueue_reassembly(
+ 		struct smbd_connection *info,
+ 		struct smbd_response *response, int data_length);
+@@ -391,7 +387,6 @@ static bool process_negotiation_response(
+ static void smbd_post_send_credits(struct work_struct *work)
+ {
+ 	int ret = 0;
+-	int use_receive_queue = 1;
+ 	int rc;
+ 	struct smbd_response *response;
+ 	struct smbd_connection *info =
+@@ -407,18 +402,9 @@ static void smbd_post_send_credits(struct work_struct *work)
+ 	if (info->receive_credit_target >
+ 		atomic_read(&info->receive_credits)) {
+ 		while (true) {
+-			if (use_receive_queue)
+-				response = get_receive_buffer(info);
+-			else
+-				response = get_empty_queue_buffer(info);
+-			if (!response) {
+-				/* now switch to empty packet queue */
+-				if (use_receive_queue) {
+-					use_receive_queue = 0;
+-					continue;
+-				} else
+-					break;
+-			}
++			response = get_receive_buffer(info);
++			if (!response)
++				break;
+ 
+ 			response->type = SMBD_TRANSFER_DATA;
+ 			response->first_segment = false;
+@@ -466,7 +452,6 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
+ 		log_rdma_recv(INFO, "wc->status=%d opcode=%d\n",
+ 			wc->status, wc->opcode);
+-		smbd_disconnect_rdma_connection(info);
+ 		goto error;
+ 	}
+ 
+@@ -483,18 +468,15 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		info->full_packet_received = true;
+ 		info->negotiate_done =
+ 			process_negotiation_response(response, wc->byte_len);
++		put_receive_buffer(info, response);
+ 		complete(&info->negotiate_completion);
+-		break;
++		return;
+ 
+ 	/* SMBD data transfer packet */
+ 	case SMBD_TRANSFER_DATA:
+ 		data_transfer = smbd_response_payload(response);
+ 		data_length = le32_to_cpu(data_transfer->data_length);
+ 
+-		/*
+-		 * If this is a packet with data playload place the data in
+-		 * reassembly queue and wake up the reading thread
+-		 */
+ 		if (data_length) {
+ 			if (info->full_packet_received)
+ 				response->first_segment = true;
+@@ -503,16 +485,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 				info->full_packet_received = false;
+ 			else
+ 				info->full_packet_received = true;
+-
+-			enqueue_reassembly(
+-				info,
+-				response,
+-				data_length);
+-		} else
+-			put_empty_packet(info, response);
+-
+-		if (data_length)
+-			wake_up_interruptible(&info->wait_reassembly_queue);
++		}
+ 
+ 		atomic_dec(&info->receive_credits);
+ 		info->receive_credit_target =
+@@ -540,15 +513,27 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			info->keep_alive_requested = KEEP_ALIVE_PENDING;
+ 		}
+ 
+-		return;
++		/*
++	 * If this is a packet with a data payload, place the data in the
++	 * reassembly queue and wake up the reading thread
++		 */
++		if (data_length) {
++			enqueue_reassembly(info, response, data_length);
++			wake_up_interruptible(&info->wait_reassembly_queue);
++		} else
++			put_receive_buffer(info, response);
+ 
+-	default:
+-		log_rdma_recv(ERR,
+-			"unexpected response type=%d\n", response->type);
++		return;
+ 	}
+ 
++	/*
++	 * This is an internal error!
++	 */
++	log_rdma_recv(ERR, "unexpected response type=%d\n", response->type);
++	WARN_ON_ONCE(response->type != SMBD_TRANSFER_DATA);
+ error:
+ 	put_receive_buffer(info, response);
++	smbd_disconnect_rdma_connection(info);
+ }
+ 
+ static struct rdma_cm_id *smbd_create_id(
+@@ -1069,6 +1054,7 @@ static int smbd_post_recv(
+ 	if (rc) {
+ 		ib_dma_unmap_single(sc->ib.dev, response->sge.addr,
+ 				    response->sge.length, DMA_FROM_DEVICE);
++		response->sge.length = 0;
+ 		smbd_disconnect_rdma_connection(info);
+ 		log_rdma_recv(ERR, "ib_post_recv failed rc=%d\n", rc);
+ 	}
+@@ -1113,17 +1099,6 @@ static int smbd_negotiate(struct smbd_connection *info)
+ 	return rc;
+ }
+ 
+-static void put_empty_packet(
+-		struct smbd_connection *info, struct smbd_response *response)
+-{
+-	spin_lock(&info->empty_packet_queue_lock);
+-	list_add_tail(&response->list, &info->empty_packet_queue);
+-	info->count_empty_packet_queue++;
+-	spin_unlock(&info->empty_packet_queue_lock);
+-
+-	queue_work(info->workqueue, &info->post_send_credits_work);
+-}
+-
+ /*
+  * Implement Connection.FragmentReassemblyBuffer defined in [MS-SMBD] 3.1.1.1
+  * This is a queue for reassembling upper layer payload and present to upper
+@@ -1172,25 +1147,6 @@ static struct smbd_response *_get_first_reassembly(struct smbd_connection *info)
+ 	return ret;
+ }
+ 
+-static struct smbd_response *get_empty_queue_buffer(
+-		struct smbd_connection *info)
+-{
+-	struct smbd_response *ret = NULL;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&info->empty_packet_queue_lock, flags);
+-	if (!list_empty(&info->empty_packet_queue)) {
+-		ret = list_first_entry(
+-			&info->empty_packet_queue,
+-			struct smbd_response, list);
+-		list_del(&ret->list);
+-		info->count_empty_packet_queue--;
+-	}
+-	spin_unlock_irqrestore(&info->empty_packet_queue_lock, flags);
+-
+-	return ret;
+-}
+-
+ /*
+  * Get a receive buffer
+  * For each remote send, we need to post a receive. The receive buffers are
+@@ -1228,8 +1184,13 @@ static void put_receive_buffer(
+ 	struct smbdirect_socket *sc = &info->socket;
+ 	unsigned long flags;
+ 
+-	ib_dma_unmap_single(sc->ib.dev, response->sge.addr,
+-		response->sge.length, DMA_FROM_DEVICE);
++	if (likely(response->sge.length != 0)) {
++		ib_dma_unmap_single(sc->ib.dev,
++				    response->sge.addr,
++				    response->sge.length,
++				    DMA_FROM_DEVICE);
++		response->sge.length = 0;
++	}
+ 
+ 	spin_lock_irqsave(&info->receive_queue_lock, flags);
+ 	list_add_tail(&response->list, &info->receive_queue);
+@@ -1255,10 +1216,6 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 	spin_lock_init(&info->receive_queue_lock);
+ 	info->count_receive_queue = 0;
+ 
+-	INIT_LIST_HEAD(&info->empty_packet_queue);
+-	spin_lock_init(&info->empty_packet_queue_lock);
+-	info->count_empty_packet_queue = 0;
+-
+ 	init_waitqueue_head(&info->wait_receive_queues);
+ 
+ 	for (i = 0; i < num_buf; i++) {
+@@ -1267,6 +1224,7 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 			goto allocate_failed;
+ 
+ 		response->info = info;
++		response->sge.length = 0;
+ 		list_add_tail(&response->list, &info->receive_queue);
+ 		info->count_receive_queue++;
+ 	}
+@@ -1292,9 +1250,6 @@ static void destroy_receive_buffers(struct smbd_connection *info)
+ 
+ 	while ((response = get_receive_buffer(info)))
+ 		mempool_free(response, info->response_mempool);
+-
+-	while ((response = get_empty_queue_buffer(info)))
+-		mempool_free(response, info->response_mempool);
+ }
+ 
+ /* Implement idle connection timer [MS-SMBD] 3.1.6.2 */
+@@ -1381,8 +1336,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 
+ 	log_rdma_event(INFO, "free receive buffers\n");
+ 	wait_event(info->wait_receive_queues,
+-		info->count_receive_queue + info->count_empty_packet_queue
+-			== sp->recv_credit_max);
++		info->count_receive_queue == sp->recv_credit_max);
+ 	destroy_receive_buffers(info);
+ 
+ 	/*
+@@ -1680,8 +1634,10 @@ static struct smbd_connection *_smbd_get_connection(
+ 		goto rdma_connect_failed;
+ 	}
+ 
+-	wait_event_interruptible(
+-		info->conn_wait, sc->status != SMBDIRECT_SOCKET_CONNECTING);
++	wait_event_interruptible_timeout(
++		info->conn_wait,
++		sc->status != SMBDIRECT_SOCKET_CONNECTING,
++		msecs_to_jiffies(RDMA_RESOLVE_TIMEOUT));
+ 
+ 	if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {
+ 		log_rdma_event(ERR, "rdma_connect failed port=%d\n", port);
+diff --git a/fs/smb/client/smbdirect.h b/fs/smb/client/smbdirect.h
+index 75b3f491c3ad65..ea04ce8a9763a6 100644
+--- a/fs/smb/client/smbdirect.h
++++ b/fs/smb/client/smbdirect.h
+@@ -110,10 +110,6 @@ struct smbd_connection {
+ 	int count_receive_queue;
+ 	spinlock_t receive_queue_lock;
+ 
+-	struct list_head empty_packet_queue;
+-	int count_empty_packet_queue;
+-	spinlock_t empty_packet_queue_lock;
+-
+ 	wait_queue_head_t wait_receive_queues;
+ 
+ 	/* Reassembly queue */
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index dd3e0e3f7bf046..31dd1caac1e8a8 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -46,6 +46,7 @@ struct ksmbd_conn {
+ 	struct mutex			srv_mutex;
+ 	int				status;
+ 	unsigned int			cli_cap;
++	__be32				inet_addr;
+ 	char				*request_buf;
+ 	struct ksmbd_transport		*transport;
+ 	struct nls_table		*local_nls;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 63d17cea2e95f9..a760785aa1ec7e 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1621,11 +1621,24 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ 
+ 	rsp->SecurityBufferLength = cpu_to_le16(out_len);
+ 
+-	if ((conn->sign || server_conf.enforced_signing) ||
++	/*
++	 * If the session state is SMB2_SESSION_VALID, we can assume
++	 * that this is a reauthentication and the user/password
++	 * has already been verified, so return here.
++	 */
++	if (sess->state == SMB2_SESSION_VALID) {
++		if (conn->binding)
++			goto binding_session;
++		return 0;
++	}
++
++	if ((rsp->SessionFlags != SMB2_SESSION_FLAG_IS_GUEST_LE &&
++	    (conn->sign || server_conf.enforced_signing)) ||
+ 	    (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED))
+ 		sess->sign = true;
+ 
+-	if (smb3_encryption_negotiated(conn)) {
++	if (smb3_encryption_negotiated(conn) &&
++	    !(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+ 		retval = conn->ops->generate_encryptionkey(conn, sess);
+ 		if (retval) {
+ 			ksmbd_debug(SMB,
+@@ -1638,6 +1651,7 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ 		sess->sign = false;
+ 	}
+ 
++binding_session:
+ 	if (conn->dialect >= SMB30_PROT_ID) {
+ 		chann = lookup_chann_list(sess, conn);
+ 		if (!chann) {
+@@ -1833,8 +1847,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 				ksmbd_conn_set_good(conn);
+ 				sess->state = SMB2_SESSION_VALID;
+ 			}
+-			kfree(sess->Preauth_HashValue);
+-			sess->Preauth_HashValue = NULL;
+ 		} else if (conn->preferred_auth_mech == KSMBD_AUTH_NTLMSSP) {
+ 			if (negblob->MessageType == NtLmNegotiate) {
+ 				rc = ntlm_negotiate(work, negblob, negblob_len, rsp);
+@@ -1861,8 +1873,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ 						kfree(preauth_sess);
+ 					}
+ 				}
+-				kfree(sess->Preauth_HashValue);
+-				sess->Preauth_HashValue = NULL;
+ 			} else {
+ 				pr_info_ratelimited("Unknown NTLMSSP message type : 0x%x\n",
+ 						le32_to_cpu(negblob->MessageType));
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 425c756bcfb862..b23203a1c2865a 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -515,7 +515,7 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn, const char *longname,
+ 
+ 	p = strrchr(longname, '.');
+ 	if (p == longname) { /*name starts with a dot*/
+-		strscpy(extension, "___", strlen("___"));
++		strscpy(extension, "___", sizeof(extension));
+ 	} else {
+ 		if (p) {
+ 			p++;
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index c6cbe0d56e3212..8d366db5f60547 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -129,9 +129,6 @@ struct smb_direct_transport {
+ 	spinlock_t		recvmsg_queue_lock;
+ 	struct list_head	recvmsg_queue;
+ 
+-	spinlock_t		empty_recvmsg_queue_lock;
+-	struct list_head	empty_recvmsg_queue;
+-
+ 	int			send_credit_target;
+ 	atomic_t		send_credits;
+ 	spinlock_t		lock_new_recv_credits;
+@@ -268,40 +265,19 @@ smb_direct_recvmsg *get_free_recvmsg(struct smb_direct_transport *t)
+ static void put_recvmsg(struct smb_direct_transport *t,
+ 			struct smb_direct_recvmsg *recvmsg)
+ {
+-	ib_dma_unmap_single(t->cm_id->device, recvmsg->sge.addr,
+-			    recvmsg->sge.length, DMA_FROM_DEVICE);
++	if (likely(recvmsg->sge.length != 0)) {
++		ib_dma_unmap_single(t->cm_id->device,
++				    recvmsg->sge.addr,
++				    recvmsg->sge.length,
++				    DMA_FROM_DEVICE);
++		recvmsg->sge.length = 0;
++	}
+ 
+ 	spin_lock(&t->recvmsg_queue_lock);
+ 	list_add(&recvmsg->list, &t->recvmsg_queue);
+ 	spin_unlock(&t->recvmsg_queue_lock);
+ }
+ 
+-static struct
+-smb_direct_recvmsg *get_empty_recvmsg(struct smb_direct_transport *t)
+-{
+-	struct smb_direct_recvmsg *recvmsg = NULL;
+-
+-	spin_lock(&t->empty_recvmsg_queue_lock);
+-	if (!list_empty(&t->empty_recvmsg_queue)) {
+-		recvmsg = list_first_entry(&t->empty_recvmsg_queue,
+-					   struct smb_direct_recvmsg, list);
+-		list_del(&recvmsg->list);
+-	}
+-	spin_unlock(&t->empty_recvmsg_queue_lock);
+-	return recvmsg;
+-}
+-
+-static void put_empty_recvmsg(struct smb_direct_transport *t,
+-			      struct smb_direct_recvmsg *recvmsg)
+-{
+-	ib_dma_unmap_single(t->cm_id->device, recvmsg->sge.addr,
+-			    recvmsg->sge.length, DMA_FROM_DEVICE);
+-
+-	spin_lock(&t->empty_recvmsg_queue_lock);
+-	list_add_tail(&recvmsg->list, &t->empty_recvmsg_queue);
+-	spin_unlock(&t->empty_recvmsg_queue_lock);
+-}
+-
+ static void enqueue_reassembly(struct smb_direct_transport *t,
+ 			       struct smb_direct_recvmsg *recvmsg,
+ 			       int data_length)
+@@ -386,9 +362,6 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ 	spin_lock_init(&t->recvmsg_queue_lock);
+ 	INIT_LIST_HEAD(&t->recvmsg_queue);
+ 
+-	spin_lock_init(&t->empty_recvmsg_queue_lock);
+-	INIT_LIST_HEAD(&t->empty_recvmsg_queue);
+-
+ 	init_waitqueue_head(&t->wait_send_pending);
+ 	atomic_set(&t->send_pending, 0);
+ 
+@@ -548,13 +521,13 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	t = recvmsg->transport;
+ 
+ 	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
++		put_recvmsg(t, recvmsg);
+ 		if (wc->status != IB_WC_WR_FLUSH_ERR) {
+ 			pr_err("Recv error. status='%s (%d)' opcode=%d\n",
+ 			       ib_wc_status_msg(wc->status), wc->status,
+ 			       wc->opcode);
+ 			smb_direct_disconnect_rdma_connection(t);
+ 		}
+-		put_empty_recvmsg(t, recvmsg);
+ 		return;
+ 	}
+ 
+@@ -568,7 +541,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	switch (recvmsg->type) {
+ 	case SMB_DIRECT_MSG_NEGOTIATE_REQ:
+ 		if (wc->byte_len < sizeof(struct smb_direct_negotiate_req)) {
+-			put_empty_recvmsg(t, recvmsg);
++			put_recvmsg(t, recvmsg);
++			smb_direct_disconnect_rdma_connection(t);
+ 			return;
+ 		}
+ 		t->negotiation_requested = true;
+@@ -576,7 +550,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		t->status = SMB_DIRECT_CS_CONNECTED;
+ 		enqueue_reassembly(t, recvmsg, 0);
+ 		wake_up_interruptible(&t->wait_status);
+-		break;
++		return;
+ 	case SMB_DIRECT_MSG_DATA_TRANSFER: {
+ 		struct smb_direct_data_transfer *data_transfer =
+ 			(struct smb_direct_data_transfer *)recvmsg->packet;
+@@ -585,7 +559,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 
+ 		if (wc->byte_len <
+ 		    offsetof(struct smb_direct_data_transfer, padding)) {
+-			put_empty_recvmsg(t, recvmsg);
++			put_recvmsg(t, recvmsg);
++			smb_direct_disconnect_rdma_connection(t);
+ 			return;
+ 		}
+ 
+@@ -593,7 +568,8 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		if (data_length) {
+ 			if (wc->byte_len < sizeof(struct smb_direct_data_transfer) +
+ 			    (u64)data_length) {
+-				put_empty_recvmsg(t, recvmsg);
++				put_recvmsg(t, recvmsg);
++				smb_direct_disconnect_rdma_connection(t);
+ 				return;
+ 			}
+ 
+@@ -605,16 +581,11 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			else
+ 				t->full_packet_received = true;
+ 
+-			enqueue_reassembly(t, recvmsg, (int)data_length);
+-			wake_up_interruptible(&t->wait_reassembly_queue);
+-
+ 			spin_lock(&t->receive_credit_lock);
+ 			receive_credits = --(t->recv_credits);
+ 			avail_recvmsg_count = t->count_avail_recvmsg;
+ 			spin_unlock(&t->receive_credit_lock);
+ 		} else {
+-			put_empty_recvmsg(t, recvmsg);
+-
+ 			spin_lock(&t->receive_credit_lock);
+ 			receive_credits = --(t->recv_credits);
+ 			avail_recvmsg_count = ++(t->count_avail_recvmsg);
+@@ -636,11 +607,23 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count))
+ 			mod_delayed_work(smb_direct_wq,
+ 					 &t->post_recv_credits_work, 0);
+-		break;
++
++		if (data_length) {
++			enqueue_reassembly(t, recvmsg, (int)data_length);
++			wake_up_interruptible(&t->wait_reassembly_queue);
++		} else
++			put_recvmsg(t, recvmsg);
++
++		return;
+ 	}
+-	default:
+-		break;
+ 	}
++
++	/*
++	 * This is an internal error!
++	 */
++	WARN_ON_ONCE(recvmsg->type != SMB_DIRECT_MSG_DATA_TRANSFER);
++	put_recvmsg(t, recvmsg);
++	smb_direct_disconnect_rdma_connection(t);
+ }
+ 
+ static int smb_direct_post_recv(struct smb_direct_transport *t,
+@@ -670,6 +653,7 @@ static int smb_direct_post_recv(struct smb_direct_transport *t,
+ 		ib_dma_unmap_single(t->cm_id->device,
+ 				    recvmsg->sge.addr, recvmsg->sge.length,
+ 				    DMA_FROM_DEVICE);
++		recvmsg->sge.length = 0;
+ 		smb_direct_disconnect_rdma_connection(t);
+ 		return ret;
+ 	}
+@@ -811,7 +795,6 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
+ 	struct smb_direct_recvmsg *recvmsg;
+ 	int receive_credits, credits = 0;
+ 	int ret;
+-	int use_free = 1;
+ 
+ 	spin_lock(&t->receive_credit_lock);
+ 	receive_credits = t->recv_credits;
+@@ -819,18 +802,9 @@ static void smb_direct_post_recv_credits(struct work_struct *work)
+ 
+ 	if (receive_credits < t->recv_credit_target) {
+ 		while (true) {
+-			if (use_free)
+-				recvmsg = get_free_recvmsg(t);
+-			else
+-				recvmsg = get_empty_recvmsg(t);
+-			if (!recvmsg) {
+-				if (use_free) {
+-					use_free = 0;
+-					continue;
+-				} else {
+-					break;
+-				}
+-			}
++			recvmsg = get_free_recvmsg(t);
++			if (!recvmsg)
++				break;
+ 
+ 			recvmsg->type = SMB_DIRECT_MSG_DATA_TRANSFER;
+ 			recvmsg->first_segment = false;
+@@ -1806,8 +1780,6 @@ static void smb_direct_destroy_pools(struct smb_direct_transport *t)
+ 
+ 	while ((recvmsg = get_free_recvmsg(t)))
+ 		mempool_free(recvmsg, t->recvmsg_mempool);
+-	while ((recvmsg = get_empty_recvmsg(t)))
+-		mempool_free(recvmsg, t->recvmsg_mempool);
+ 
+ 	mempool_destroy(t->recvmsg_mempool);
+ 	t->recvmsg_mempool = NULL;
+@@ -1863,6 +1835,7 @@ static int smb_direct_create_pools(struct smb_direct_transport *t)
+ 		if (!recvmsg)
+ 			goto err;
+ 		recvmsg->transport = t;
++		recvmsg->sge.length = 0;
+ 		list_add(&recvmsg->list, &t->recvmsg_queue);
+ 	}
+ 	t->count_avail_recvmsg = t->recv_credit_max;
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index 4e9f98db9ff409..d72588f33b9cd1 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -87,6 +87,7 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk)
+ 		return NULL;
+ 	}
+ 
++	conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr;
+ 	conn->transport = KSMBD_TRANS(t);
+ 	KSMBD_TRANS(t)->conn = conn;
+ 	KSMBD_TRANS(t)->ops = &ksmbd_tcp_transport_ops;
+@@ -230,6 +231,8 @@ static int ksmbd_kthread_fn(void *p)
+ {
+ 	struct socket *client_sk = NULL;
+ 	struct interface *iface = (struct interface *)p;
++	struct inet_sock *csk_inet;
++	struct ksmbd_conn *conn;
+ 	int ret;
+ 
+ 	while (!kthread_should_stop()) {
+@@ -248,6 +251,20 @@ static int ksmbd_kthread_fn(void *p)
+ 			continue;
+ 		}
+ 
++		/*
++		 * Limits repeated connections from clients with the same IP.
++		 */
++		csk_inet = inet_sk(client_sk->sk);
++		down_read(&conn_list_lock);
++		list_for_each_entry(conn, &conn_list, conns_list)
++			if (csk_inet->inet_daddr == conn->inet_addr) {
++				ret = -EAGAIN;
++				break;
++			}
++		up_read(&conn_list_lock);
++		if (ret == -EAGAIN)
++			continue;
++
+ 		if (server_conf.max_connections &&
+ 		    atomic_inc_return(&active_num_conn) >= server_conf.max_connections) {
+ 			pr_info_ratelimited("Limit the maximum number of connections(%u)\n",
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index d3437f6644e334..58ee06814bd27c 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -548,7 +548,8 @@ int ksmbd_vfs_getattr(const struct path *path, struct kstat *stat)
+ {
+ 	int err;
+ 
+-	err = vfs_getattr(path, stat, STATX_BTIME, AT_STATX_SYNC_AS_STAT);
++	err = vfs_getattr(path, stat, STATX_BASIC_STATS | STATX_BTIME,
++			AT_STATX_SYNC_AS_STAT);
+ 	if (err)
+ 		pr_err("getattr failed, err %d\n", err);
+ 	return err;
+diff --git a/fs/squashfs/block.c b/fs/squashfs/block.c
+index 3061043e915c87..e7a4649fc85cf4 100644
+--- a/fs/squashfs/block.c
++++ b/fs/squashfs/block.c
+@@ -80,23 +80,22 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
+ 		struct address_space *cache_mapping, u64 index, int length,
+ 		u64 read_start, u64 read_end, int page_count)
+ {
+-	struct page *head_to_cache = NULL, *tail_to_cache = NULL;
++	struct folio *head_to_cache = NULL, *tail_to_cache = NULL;
+ 	struct block_device *bdev = fullbio->bi_bdev;
+ 	int start_idx = 0, end_idx = 0;
+-	struct bvec_iter_all iter_all;
++	struct folio_iter fi;
+ 	struct bio *bio = NULL;
+-	struct bio_vec *bv;
+ 	int idx = 0;
+ 	int err = 0;
+ #ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
+-	struct page **cache_pages = kmalloc_array(page_count,
+-			sizeof(void *), GFP_KERNEL | __GFP_ZERO);
++	struct folio **cache_folios = kmalloc_array(page_count,
++			sizeof(*cache_folios), GFP_KERNEL | __GFP_ZERO);
+ #endif
+ 
+-	bio_for_each_segment_all(bv, fullbio, iter_all) {
+-		struct page *page = bv->bv_page;
++	bio_for_each_folio_all(fi, fullbio) {
++		struct folio *folio = fi.folio;
+ 
+-		if (page->mapping == cache_mapping) {
++		if (folio->mapping == cache_mapping) {
+ 			idx++;
+ 			continue;
+ 		}
+@@ -111,13 +110,13 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
+ 		 * adjacent blocks.
+ 		 */
+ 		if (idx == 0 && index != read_start)
+-			head_to_cache = page;
++			head_to_cache = folio;
+ 		else if (idx == page_count - 1 && index + length != read_end)
+-			tail_to_cache = page;
++			tail_to_cache = folio;
+ #ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
+ 		/* Cache all pages in the BIO for repeated reads */
+-		else if (cache_pages)
+-			cache_pages[idx] = page;
++		else if (cache_folios)
++			cache_folios[idx] = folio;
+ #endif
+ 
+ 		if (!bio || idx != end_idx) {
+@@ -150,45 +149,45 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
+ 		return err;
+ 
+ 	if (head_to_cache) {
+-		int ret = add_to_page_cache_lru(head_to_cache, cache_mapping,
++		int ret = filemap_add_folio(cache_mapping, head_to_cache,
+ 						read_start >> PAGE_SHIFT,
+ 						GFP_NOIO);
+ 
+ 		if (!ret) {
+-			SetPageUptodate(head_to_cache);
+-			unlock_page(head_to_cache);
++			folio_mark_uptodate(head_to_cache);
++			folio_unlock(head_to_cache);
+ 		}
+ 
+ 	}
+ 
+ 	if (tail_to_cache) {
+-		int ret = add_to_page_cache_lru(tail_to_cache, cache_mapping,
++		int ret = filemap_add_folio(cache_mapping, tail_to_cache,
+ 						(read_end >> PAGE_SHIFT) - 1,
+ 						GFP_NOIO);
+ 
+ 		if (!ret) {
+-			SetPageUptodate(tail_to_cache);
+-			unlock_page(tail_to_cache);
++			folio_mark_uptodate(tail_to_cache);
++			folio_unlock(tail_to_cache);
+ 		}
+ 	}
+ 
+ #ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
+-	if (!cache_pages)
++	if (!cache_folios)
+ 		goto out;
+ 
+ 	for (idx = 0; idx < page_count; idx++) {
+-		if (!cache_pages[idx])
++		if (!cache_folios[idx])
+ 			continue;
+-		int ret = add_to_page_cache_lru(cache_pages[idx], cache_mapping,
++		int ret = filemap_add_folio(cache_mapping, cache_folios[idx],
+ 						(read_start >> PAGE_SHIFT) + idx,
+ 						GFP_NOIO);
+ 
+ 		if (!ret) {
+-			SetPageUptodate(cache_pages[idx]);
+-			unlock_page(cache_pages[idx]);
++			folio_mark_uptodate(cache_folios[idx]);
++			folio_unlock(cache_folios[idx]);
+ 		}
+ 	}
+-	kfree(cache_pages);
++	kfree(cache_folios);
+ out:
+ #endif
+ 	return 0;
+diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
+index 0f85c543f80bf4..f052afa6e7b0c3 100644
+--- a/include/crypto/internal/hash.h
++++ b/include/crypto/internal/hash.h
+@@ -91,6 +91,12 @@ static inline bool crypto_hash_alg_needs_key(struct hash_alg_common *alg)
+ 		!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
+ }
+ 
++static inline bool crypto_hash_no_export_core(struct crypto_ahash *tfm)
++{
++	return crypto_hash_alg_common(tfm)->base.cra_flags &
++	       CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
++}
++
+ int crypto_grab_ahash(struct crypto_ahash_spawn *spawn,
+ 		      struct crypto_instance *inst,
+ 		      const char *name, u32 type, u32 mask);
+diff --git a/include/linux/audit.h b/include/linux/audit.h
+index 0050ef288ab3ce..a394614ccd0b81 100644
+--- a/include/linux/audit.h
++++ b/include/linux/audit.h
+@@ -417,7 +417,7 @@ extern int __audit_log_bprm_fcaps(struct linux_binprm *bprm,
+ extern void __audit_log_capset(const struct cred *new, const struct cred *old);
+ extern void __audit_mmap_fd(int fd, int flags);
+ extern void __audit_openat2_how(struct open_how *how);
+-extern void __audit_log_kern_module(char *name);
++extern void __audit_log_kern_module(const char *name);
+ extern void __audit_fanotify(u32 response, struct fanotify_response_info_audit_rule *friar);
+ extern void __audit_tk_injoffset(struct timespec64 offset);
+ extern void __audit_ntp_log(const struct audit_ntp_data *ad);
+@@ -519,7 +519,7 @@ static inline void audit_openat2_how(struct open_how *how)
+ 		__audit_openat2_how(how);
+ }
+ 
+-static inline void audit_log_kern_module(char *name)
++static inline void audit_log_kern_module(const char *name)
+ {
+ 	if (!audit_dummy_context())
+ 		__audit_log_kern_module(name);
+@@ -677,9 +677,8 @@ static inline void audit_mmap_fd(int fd, int flags)
+ static inline void audit_openat2_how(struct open_how *how)
+ { }
+ 
+-static inline void audit_log_kern_module(char *name)
+-{
+-}
++static inline void audit_log_kern_module(const char *name)
++{ }
+ 
+ static inline void audit_fanotify(u32 response, struct fanotify_response_info_audit_rule *friar)
+ { }
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index 70c8b94e797ac9..501873758ce642 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -77,9 +77,6 @@ to_cgroup_bpf_attach_type(enum bpf_attach_type attach_type)
+ extern struct static_key_false cgroup_bpf_enabled_key[MAX_CGROUP_BPF_ATTACH_TYPE];
+ #define cgroup_bpf_enabled(atype) static_branch_unlikely(&cgroup_bpf_enabled_key[atype])
+ 
+-#define for_each_cgroup_storage_type(stype) \
+-	for (stype = 0; stype < MAX_BPF_CGROUP_STORAGE_TYPE; stype++)
+-
+ struct bpf_cgroup_storage_map;
+ 
+ struct bpf_storage_buffer {
+@@ -511,8 +508,6 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
+ #define BPF_CGROUP_RUN_PROG_SETSOCKOPT(sock, level, optname, optval, optlen, \
+ 				       kernel_optval) ({ 0; })
+ 
+-#define for_each_cgroup_storage_type(stype) for (; false; )
+-
+ #endif /* CONFIG_CGROUP_BPF */
+ 
+ #endif /* _BPF_CGROUP_H */
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 5b25d278409bb2..bcae876a2a6038 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -208,6 +208,20 @@ enum btf_field_type {
+ 	BPF_RES_SPIN_LOCK = (1 << 12),
+ };
+ 
++enum bpf_cgroup_storage_type {
++	BPF_CGROUP_STORAGE_SHARED,
++	BPF_CGROUP_STORAGE_PERCPU,
++	__BPF_CGROUP_STORAGE_MAX
++#define MAX_BPF_CGROUP_STORAGE_TYPE __BPF_CGROUP_STORAGE_MAX
++};
++
++#ifdef CONFIG_CGROUP_BPF
++# define for_each_cgroup_storage_type(stype) \
++	for (stype = 0; stype < MAX_BPF_CGROUP_STORAGE_TYPE; stype++)
++#else
++# define for_each_cgroup_storage_type(stype) for (; false; )
++#endif /* CONFIG_CGROUP_BPF */
++
+ typedef void (*btf_dtor_kfunc_t)(void *);
+ 
+ struct btf_field_kptr {
+@@ -260,6 +274,19 @@ struct bpf_list_node_kern {
+ 	void *owner;
+ } __attribute__((aligned(8)));
+ 
++/* 'Ownership' of program-containing map is claimed by the first program
++ * that is going to use this map or by the first program whose FD is
++ * stored in the map to make sure that all callers and callees have the
++ * same prog type, JITed flag and xdp_has_frags flag.
++ */
++struct bpf_map_owner {
++	enum bpf_prog_type type;
++	bool jited;
++	bool xdp_has_frags;
++	u64 storage_cookie[MAX_BPF_CGROUP_STORAGE_TYPE];
++	const struct btf_type *attach_func_proto;
++};
++
+ struct bpf_map {
+ 	const struct bpf_map_ops *ops;
+ 	struct bpf_map *inner_map_meta;
+@@ -292,24 +319,15 @@ struct bpf_map {
+ 		struct rcu_head rcu;
+ 	};
+ 	atomic64_t writecnt;
+-	/* 'Ownership' of program-containing map is claimed by the first program
+-	 * that is going to use this map or by the first program which FD is
+-	 * stored in the map to make sure that all callers and callees have the
+-	 * same prog type, JITed flag and xdp_has_frags flag.
+-	 */
+-	struct {
+-		const struct btf_type *attach_func_proto;
+-		spinlock_t lock;
+-		enum bpf_prog_type type;
+-		bool jited;
+-		bool xdp_has_frags;
+-	} owner;
++	spinlock_t owner_lock;
++	struct bpf_map_owner *owner;
+ 	bool bypass_spec_v1;
+ 	bool frozen; /* write-once; write-protected by freeze_mutex */
+ 	bool free_after_mult_rcu_gp;
+ 	bool free_after_rcu_gp;
+ 	atomic64_t sleepable_refcnt;
+ 	s64 __percpu *elem_count;
++	u64 cookie; /* write-once */
+ };
+ 
+ static inline const char *btf_field_type_name(enum btf_field_type type)
+@@ -1082,14 +1100,6 @@ struct bpf_prog_offload {
+ 	u32			jited_len;
+ };
+ 
+-enum bpf_cgroup_storage_type {
+-	BPF_CGROUP_STORAGE_SHARED,
+-	BPF_CGROUP_STORAGE_PERCPU,
+-	__BPF_CGROUP_STORAGE_MAX
+-};
+-
+-#define MAX_BPF_CGROUP_STORAGE_TYPE __BPF_CGROUP_STORAGE_MAX
+-
+ /* The longest tracepoint has 12 args.
+  * See include/trace/bpf_probe.h
+  */
+@@ -2071,6 +2081,16 @@ static inline bool bpf_map_flags_access_ok(u32 access_flags)
+ 	       (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG);
+ }
+ 
++static inline struct bpf_map_owner *bpf_map_owner_alloc(struct bpf_map *map)
++{
++	return kzalloc(sizeof(*map->owner), GFP_ATOMIC);
++}
++
++static inline void bpf_map_owner_free(struct bpf_map *map)
++{
++	kfree(map->owner);
++}
++
+ struct bpf_event_entry {
+ 	struct perf_event *event;
+ 	struct file *perf_file;
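
The owner metadata moves from an embedded struct into a lazily allocated
bpf_map_owner guarded by the new owner_lock, with per-storage-type cookies
feeding the compatibility check. A condensed sketch of the claim-or-compare
pattern these helpers support (the full version is in the kernel/bpf/core.c
hunk further down; this fragment is illustrative only):

	static bool claim_or_check(struct bpf_map *map, enum bpf_prog_type prog_type)
	{
		bool ok = false;

		spin_lock(&map->owner_lock);
		if (!map->owner) {
			map->owner = bpf_map_owner_alloc(map); /* kzalloc, GFP_ATOMIC */
			if (map->owner) {
				map->owner->type = prog_type;  /* first program claims */
				ok = true;
			}
		} else {
			ok = map->owner->type == prog_type;    /* later ones must match */
		}
		spin_unlock(&map->owner_lock);
		return ok;
	}
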
+diff --git a/include/linux/crypto.h b/include/linux/crypto.h
+index b50f1954d1bbad..a2137e19be7d86 100644
+--- a/include/linux/crypto.h
++++ b/include/linux/crypto.h
+@@ -136,6 +136,9 @@
+ /* Set if the algorithm supports virtual addresses. */
+ #define CRYPTO_ALG_REQ_VIRT		0x00040000
+ 
++/* Set if the algorithm cannot have a fallback (e.g., phmac). */
++#define CRYPTO_ALG_NO_FALLBACK		0x00080000
++
+ /* The high bits 0xff000000 are reserved for type-specific flags. */
+ 
+ /*
+diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
+index e4ce1cae03bf77..b3b53f8c1b28ef 100644
+--- a/include/linux/fortify-string.h
++++ b/include/linux/fortify-string.h
+@@ -596,7 +596,7 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ 	if (p_size != SIZE_MAX && p_size < size)
+ 		fortify_panic(func, FORTIFY_WRITE, p_size, size, true);
+ 	else if (q_size != SIZE_MAX && q_size < size)
+-		fortify_panic(func, FORTIFY_READ, p_size, size, true);
++		fortify_panic(func, FORTIFY_READ, q_size, size, true);
+ 
+ 	/*
+ 	 * Warn when writing beyond destination field size.
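
The one-character change fixes the diagnostic, not the detection: a read
overflow was already caught, but the panic reported the destination's size
(p_size) instead of the overflowed source's (q_size). A contrived sketch of
the case it affects:

	char dst[16];
	char src[8];

	/* Read overflow: 12 > sizeof(src). The report now cites q_size (8)
	 * rather than the unrelated destination size (16). */
	memcpy(dst, src, 12);
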
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index a19e4bd32e4d31..7773eb870039c4 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -200,7 +200,7 @@ void logfc(struct fc_log *log, const char *prefix, char level, const char *fmt,
+  */
+ #define infof(fc, fmt, ...) __logfc(fc, 'i', fmt, ## __VA_ARGS__)
+ #define info_plog(p, fmt, ...) __plog(p, 'i', fmt, ## __VA_ARGS__)
+-#define infofc(p, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
++#define infofc(fc, fmt, ...) __plog((&(fc)->log), 'i', fmt, ## __VA_ARGS__)
+ 
+ /**
+  * warnf - Store supplementary warning message
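
The infofc() change is a macro-hygiene repair: the parameter was declared as
p while the body expanded fc, so the macro only compiled when the caller's
fs_context variable happened to be named fc. Call sites are unchanged; for
example (message and variable are illustrative):

	infofc(fc, "illustrative message: %u options parsed", opt_count);
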
+diff --git a/include/linux/i3c/device.h b/include/linux/i3c/device.h
+index b674f64d0822e9..7f136de4b73ef8 100644
+--- a/include/linux/i3c/device.h
++++ b/include/linux/i3c/device.h
+@@ -245,7 +245,7 @@ void i3c_driver_unregister(struct i3c_driver *drv);
+  *
+  * Return: 0 if both registrations succeeds, a negative error code otherwise.
+  */
+-static inline int i3c_i2c_driver_register(struct i3c_driver *i3cdrv,
++static __always_inline int i3c_i2c_driver_register(struct i3c_driver *i3cdrv,
+ 					  struct i2c_driver *i2cdrv)
+ {
+ 	int ret;
+@@ -270,7 +270,7 @@ static inline int i3c_i2c_driver_register(struct i3c_driver *i3cdrv,
+  * Note that when CONFIG_I3C is not enabled, this function only unregisters the
+  * @i2cdrv.
+  */
+-static inline void i3c_i2c_driver_unregister(struct i3c_driver *i3cdrv,
++static __always_inline void i3c_i2c_driver_unregister(struct i3c_driver *i3cdrv,
+ 					     struct i2c_driver *i2cdrv)
+ {
+ 	if (IS_ENABLED(CONFIG_I3C))
+diff --git a/include/linux/if_team.h b/include/linux/if_team.h
+index cdc684e04a2fb6..ce97d891cf720f 100644
+--- a/include/linux/if_team.h
++++ b/include/linux/if_team.h
+@@ -191,8 +191,6 @@ struct team {
+ 
+ 	const struct header_ops *header_ops_cache;
+ 
+-	struct mutex lock; /* used for overall locking, e.g. port lists write */
+-
+ 	/*
+ 	 * List of enabled ports and their count
+ 	 */
+@@ -223,7 +221,6 @@ struct team {
+ 		atomic_t count_pending;
+ 		struct delayed_work dw;
+ 	} mcast_rejoin;
+-	struct lock_class_key team_lock_key;
+ 	long mode_priv[TEAM_MODE_PRIV_LONGS];
+ };
+ 
+diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h
+index b25377b6ea98dd..5210e8371238f1 100644
+--- a/include/linux/ioprio.h
++++ b/include/linux/ioprio.h
+@@ -60,7 +60,8 @@ static inline int __get_task_ioprio(struct task_struct *p)
+ 	int prio;
+ 
+ 	if (!ioc)
+-		return IOPRIO_DEFAULT;
++		return IOPRIO_PRIO_VALUE(task_nice_ioclass(p),
++					 task_nice_ioprio(p));
+ 
+ 	if (p != current)
+ 		lockdep_assert_held(&p->alloc_lock);
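
The fallback no longer returns a flat IOPRIO_DEFAULT: without an io_context,
the effective priority is now derived from the task's scheduling class and
nice value, matching what an explicit ioprio_set() would have produced. As a
rough sketch of the mapping (values follow the task_nice_ioprio() helper):

	/* A CFS task at nice 0 maps to best-effort, level (0 + 20) / 5 = 4: */
	int prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4);
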
+diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
+index 6822cfa5f4ad31..9d2467f982ad46 100644
+--- a/include/linux/mlx5/device.h
++++ b/include/linux/mlx5/device.h
+@@ -280,6 +280,7 @@ enum {
+ 	MLX5_MKEY_MASK_SMALL_FENCE	= 1ull << 23,
+ 	MLX5_MKEY_MASK_RELAXED_ORDERING_WRITE	= 1ull << 25,
+ 	MLX5_MKEY_MASK_FREE			= 1ull << 29,
++	MLX5_MKEY_MASK_PAGE_SIZE_5		= 1ull << 42,
+ 	MLX5_MKEY_MASK_RELAXED_ORDERING_READ	= 1ull << 47,
+ };
+ 
+diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
+index 5da384bd0a2648..ae9f8967257493 100644
+--- a/include/linux/mmap_lock.h
++++ b/include/linux/mmap_lock.h
+@@ -12,6 +12,7 @@ extern int rcuwait_wake_up(struct rcuwait *w);
+ #include <linux/tracepoint-defs.h>
+ #include <linux/types.h>
+ #include <linux/cleanup.h>
++#include <linux/sched/mm.h>
+ 
+ #define MMAP_LOCK_INITIALIZER(name) \
+ 	.mmap_lock = __RWSEM_INITIALIZER((name).mmap_lock),
+@@ -154,6 +155,10 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
+  * reused and attached to a different mm before we lock it.
+  * Returns the vma on success, NULL on failure to lock and EAGAIN if vma got
+  * detached.
++ *
++ * WARNING! The vma passed to this function cannot be used if the function
++ * fails to lock it because in certain cases the RCU lock is dropped and then
++ * reacquired. Once the RCU lock is dropped the vma can be concurrently freed.
+  */
+ static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
+ 						    struct vm_area_struct *vma)
+@@ -183,6 +188,31 @@ static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
+ 	}
+ 
+ 	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
++
++	/*
++	 * If vma got attached to another mm from under us, that mm is not
++	 * stable and can be freed in the narrow window after vma->vm_refcnt
++	 * is dropped and before rcuwait_wake_up(mm) is called. Grab it before
++	 * releasing vma->vm_refcnt.
++	 */
++	if (unlikely(vma->vm_mm != mm)) {
++		/* Use a copy of vm_mm in case vma is freed after we drop vm_refcnt */
++		struct mm_struct *other_mm = vma->vm_mm;
++
++		/*
++		 * __mmdrop() is a heavy operation and we don't need RCU
++		 * protection here. Release RCU lock during these operations.
++		 * We reinstate the RCU read lock as the caller expects it to
++		 * be held when this function returns even on error.
++		 */
++		rcu_read_unlock();
++		mmgrab(other_mm);
++		vma_refcount_put(vma);
++		mmdrop(other_mm);
++		rcu_read_lock();
++		return NULL;
++	}
++
+ 	/*
+ 	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
+ 	 * False unlocked result is impossible because we modify and check
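
The new warning is a real contract change for callers: on failure the helper
may have dropped and reacquired the RCU read lock, so the vma pointer must
not be touched afterwards. A hedged sketch of the expected calling pattern
(the lookup and the fallback label are illustrative; mm and addr are assumed):

	struct vm_area_struct *vma;

	rcu_read_lock();
	vma = vma_lookup(mm, addr);		/* illustrative lookup */
	if (vma)
		vma = vma_start_read(mm, vma);
	if (IS_ERR_OR_NULL(vma)) {
		/* vma may already be freed here; do not dereference it. */
		rcu_read_unlock();
		goto fallback_to_mmap_lock;	/* illustrative */
	}
	/* ... operate on vma under the per-vma lock ... */
	vma_end_read(vma);
	rcu_read_unlock();
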
+diff --git a/include/linux/moduleparam.h b/include/linux/moduleparam.h
+index bfb85fd13e1fae..110e9d09de2436 100644
+--- a/include/linux/moduleparam.h
++++ b/include/linux/moduleparam.h
+@@ -282,10 +282,9 @@ struct kparam_array
+ #define __moduleparam_const const
+ #endif
+ 
+-/* This is the fundamental function for registering boot/module
+-   parameters. */
++/* This is the fundamental function for registering boot/module parameters. */
+ #define __module_param_call(prefix, name, ops, arg, perm, level, flags)	\
+-	/* Default value instead of permissions? */			\
++	static_assert(sizeof(""prefix) - 1 <= MAX_PARAM_PREFIX_LEN);	\
+ 	static const char __param_str_##name[] = prefix #name;		\
+ 	static struct kernel_param __moduleparam_const __param_##name	\
+ 	__used __section("__param")					\
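
The deleted comment was a stale note; in its place the static_assert turns an
over-long parameter prefix into a compile-time error instead of a silently
truncated sysfs name. Spelled out for a hypothetical module:

	/* KBUILD_MODNAME "mymodule" yields prefix "mymodule.", which must fit: */
	static_assert(sizeof("mymodule.") - 1 <= MAX_PARAM_PREFIX_LEN);
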
+diff --git a/include/linux/padata.h b/include/linux/padata.h
+index 0146daf3443066..765f2778e264a5 100644
+--- a/include/linux/padata.h
++++ b/include/linux/padata.h
+@@ -90,8 +90,6 @@ struct padata_cpumask {
+  * @processed: Number of already processed objects.
+  * @cpu: Next CPU to be processed.
+  * @cpumask: The cpumasks in use for parallel and serial workers.
+- * @reorder_work: work struct for reordering.
+- * @lock: Reorder lock.
+  */
+ struct parallel_data {
+ 	struct padata_shell		*ps;
+@@ -102,8 +100,6 @@ struct parallel_data {
+ 	unsigned int			processed;
+ 	int				cpu;
+ 	struct padata_cpumask		cpumask;
+-	struct work_struct		reorder_work;
+-	spinlock_t                      ____cacheline_aligned lock;
+ };
+ 
+ /**
+diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
+index c7abce28ed2995..aab0aebb529e02 100644
+--- a/include/linux/pps_kernel.h
++++ b/include/linux/pps_kernel.h
+@@ -52,6 +52,7 @@ struct pps_device {
+ 	int current_mode;			/* PPS mode at event time */
+ 
+ 	unsigned int last_ev;			/* last PPS event id */
++	unsigned int last_fetched_ev;		/* last fetched PPS event id */
+ 	wait_queue_head_t queue;		/* PPS event queue */
+ 
+ 	unsigned int id;			/* PPS source unique ID */
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index ea62201c74c402..703d0c76cc9a0a 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -27,6 +27,7 @@ enum {
+ 
+ 	PROC_ENTRY_proc_read_iter	= 1U << 1,
+ 	PROC_ENTRY_proc_compat_ioctl	= 1U << 2,
++	PROC_ENTRY_proc_lseek		= 1U << 3,
+ };
+ 
+ struct proc_ops {
+diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
+index f1fd3a8044e0ec..dd10c22299ab82 100644
+--- a/include/linux/psi_types.h
++++ b/include/linux/psi_types.h
+@@ -84,11 +84,9 @@ enum psi_aggregators {
+ struct psi_group_cpu {
+ 	/* 1st cacheline updated by the scheduler */
+ 
+-	/* Aggregator needs to know of concurrent changes */
+-	seqcount_t seq ____cacheline_aligned_in_smp;
+-
+ 	/* States of the tasks belonging to this group */
+-	unsigned int tasks[NR_PSI_TASK_COUNTS];
++	unsigned int tasks[NR_PSI_TASK_COUNTS]
++			____cacheline_aligned_in_smp;
+ 
+ 	/* Aggregate pressure state derived from the tasks */
+ 	u32 state_mask;
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index cd7f0ae266150c..bc90c3c7b5fd4c 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -152,9 +152,7 @@ ring_buffer_consume(struct trace_buffer *buffer, int cpu, u64 *ts,
+ 		    unsigned long *lost_events);
+ 
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags);
+-void ring_buffer_read_prepare_sync(void);
+-void ring_buffer_read_start(struct ring_buffer_iter *iter);
++ring_buffer_read_start(struct trace_buffer *buffer, int cpu, gfp_t flags);
+ void ring_buffer_read_finish(struct ring_buffer_iter *iter);
+ 
+ struct ring_buffer_event *
+diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
+index 85c5a6392e0277..1fab7e9043a3ca 100644
+--- a/include/linux/sched/task_stack.h
++++ b/include/linux/sched/task_stack.h
+@@ -53,7 +53,7 @@ static inline void setup_thread_stack(struct task_struct *p, struct task_struct
+  * When the stack grows up, this is the highest address.
+  * Beyond that position, we corrupt data on the next page.
+  */
+-static inline unsigned long *end_of_stack(struct task_struct *p)
++static inline unsigned long *end_of_stack(const struct task_struct *p)
+ {
+ #ifdef CONFIG_STACK_GROWSUP
+ 	return (unsigned long *)((unsigned long)task_thread_info(p) + THREAD_SIZE) - 1;
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 5520524c93bff9..37f5c6099b1fbb 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3033,6 +3033,29 @@ static inline void skb_reset_transport_header(struct sk_buff *skb)
+ 	skb->transport_header = offset;
+ }
+ 
++/**
++ * skb_reset_transport_header_careful - conditionally reset transport header
++ * @skb: buffer to alter
++ *
++ * Hardened version of skb_reset_transport_header().
++ *
++ * Returns: true if the operation was a success.
++ */
++static inline bool __must_check
++skb_reset_transport_header_careful(struct sk_buff *skb)
++{
++	long offset = skb->data - skb->head;
++
++	if (unlikely(offset != (typeof(skb->transport_header))offset))
++		return false;
++
++	if (unlikely(offset == (typeof(skb->transport_header))~0U))
++		return false;
++
++	skb->transport_header = offset;
++	return true;
++}
++
+ static inline void skb_set_transport_header(struct sk_buff *skb,
+ 					    const int offset)
+ {
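
skb->transport_header is a 16-bit offset, so the _careful variant refuses
values that would truncate or collide with the "unset" sentinel and lets the
caller bail out. A minimal usage sketch (the receive function is hypothetical):

	static int example_rcv(struct sk_buff *skb)
	{
		if (!skb_reset_transport_header_careful(skb)) {
			kfree_skb(skb);	/* offset would not fit in the u16 field */
			return -EINVAL;
		}
		/* ... continue normal transport-layer processing ... */
		return 0;
	}
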
+diff --git a/include/linux/soc/qcom/qmi.h b/include/linux/soc/qcom/qmi.h
+index 469e02d2aa0d81..291cdc7ef49ce1 100644
+--- a/include/linux/soc/qcom/qmi.h
++++ b/include/linux/soc/qcom/qmi.h
+@@ -24,9 +24,9 @@ struct socket;
+  */
+ struct qmi_header {
+ 	u8 type;
+-	u16 txn_id;
+-	u16 msg_id;
+-	u16 msg_len;
++	__le16 txn_id;
++	__le16 msg_id;
++	__le16 msg_len;
+ } __packed;
+ 
+ #define QMI_REQUEST	0
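
With the fields typed __le16, sparse now flags any direct arithmetic on the
wire values; readers and writers must go through the byte-order helpers. A
short sketch, assuming buf points at a received, length-checked message:

	const struct qmi_header *hdr = buf;	/* buf is assumed valid */
	u16 msg_id  = le16_to_cpu(hdr->msg_id);
	u16 msg_len = le16_to_cpu(hdr->msg_len);
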
+diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
+index 0b9f1e598e3a6b..4bc6bb01a0eb8b 100644
+--- a/include/linux/usb/usbnet.h
++++ b/include/linux/usb/usbnet.h
+@@ -76,6 +76,7 @@ struct usbnet {
+ #		define EVENT_LINK_CHANGE	11
+ #		define EVENT_SET_RX_MODE	12
+ #		define EVENT_NO_IP_ALIGN	13
++#		define EVENT_LINK_CARRIER_ON	14
+ /* This one is special, as it indicates that the device is going away
+  * there are cyclic dependencies between tasklet, timer and bh
+  * that must be broken
+diff --git a/include/linux/vfio.h b/include/linux/vfio.h
+index 707b00772ce1ff..eb563f538dee51 100644
+--- a/include/linux/vfio.h
++++ b/include/linux/vfio.h
+@@ -105,6 +105,9 @@ struct vfio_device {
+  * @match: Optional device name match callback (return: 0 for no-match, >0 for
+  *         match, -errno for abort (ex. match with insufficient or incorrect
+  *         additional args)
++ * @match_token_uuid: Optional device token match/validation. Return 0
++ *         if the uuid is valid for the device, -errno otherwise. uuid is NULL
++ *         if none was provided.
+  * @dma_unmap: Called when userspace unmaps IOVA from the container
+  *             this device is attached to.
+  * @device_feature: Optional, fill in the VFIO_DEVICE_FEATURE ioctl
+@@ -132,6 +135,7 @@ struct vfio_device_ops {
+ 	int	(*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
+ 	void	(*request)(struct vfio_device *vdev, unsigned int count);
+ 	int	(*match)(struct vfio_device *vdev, char *buf);
++	int	(*match_token_uuid)(struct vfio_device *vdev, const uuid_t *uuid);
+ 	void	(*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
+ 	int	(*device_feature)(struct vfio_device *device, u32 flags,
+ 				  void __user *arg, size_t argsz);
+diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
+index fbb472dd99b361..f541044e42a2ad 100644
+--- a/include/linux/vfio_pci_core.h
++++ b/include/linux/vfio_pci_core.h
+@@ -122,6 +122,8 @@ ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *bu
+ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma);
+ void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count);
+ int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf);
++int vfio_pci_core_match_token_uuid(struct vfio_device *core_vdev,
++				   const uuid_t *uuid);
+ int vfio_pci_core_enable(struct vfio_pci_core_device *vdev);
+ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev);
+ void vfio_pci_core_finish_enable(struct vfio_pci_core_device *vdev);
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index c79901f2dc2a04..5796ca9fe5da38 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -2634,6 +2634,7 @@ struct hci_ev_le_conn_complete {
+ #define LE_EXT_ADV_DIRECT_IND		0x0004
+ #define LE_EXT_ADV_SCAN_RSP		0x0008
+ #define LE_EXT_ADV_LEGACY_PDU		0x0010
++#define LE_EXT_ADV_DATA_STATUS_MASK	0x0060
+ #define LE_EXT_ADV_EVT_TYPE_MASK	0x007f
+ 
+ #define ADDR_LE_DEV_PUBLIC		0x00
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index f79f59e67114b3..c371dadc6fa3e9 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -29,6 +29,7 @@
+ #include <linux/idr.h>
+ #include <linux/leds.h>
+ #include <linux/rculist.h>
++#include <linux/spinlock.h>
+ #include <linux/srcu.h>
+ 
+ #include <net/bluetooth/hci.h>
+@@ -94,6 +95,7 @@ struct discovery_state {
+ 	u16			uuid_count;
+ 	u8			(*uuids)[16];
+ 	unsigned long		name_resolve_timeout;
++	spinlock_t		lock;
+ };
+ 
+ #define SUSPEND_NOTIFIER_TIMEOUT	msecs_to_jiffies(2000) /* 2 seconds */
+@@ -889,6 +891,7 @@ static inline void iso_recv(struct hci_conn *hcon, struct sk_buff *skb,
+ 
+ static inline void discovery_init(struct hci_dev *hdev)
+ {
++	spin_lock_init(&hdev->discovery.lock);
+ 	hdev->discovery.state = DISCOVERY_STOPPED;
+ 	INIT_LIST_HEAD(&hdev->discovery.all);
+ 	INIT_LIST_HEAD(&hdev->discovery.unknown);
+@@ -903,8 +906,11 @@ static inline void hci_discovery_filter_clear(struct hci_dev *hdev)
+ 	hdev->discovery.report_invalid_rssi = true;
+ 	hdev->discovery.rssi = HCI_RSSI_INVALID;
+ 	hdev->discovery.uuid_count = 0;
++
++	spin_lock(&hdev->discovery.lock);
+ 	kfree(hdev->discovery.uuids);
+ 	hdev->discovery.uuids = NULL;
++	spin_unlock(&hdev->discovery.lock);
+ }
+ 
+ bool hci_discovery_active(struct hci_dev *hdev);
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 78c78cdce0e9a7..32dafbab4cd0d6 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -456,7 +456,7 @@ INDIRECT_CALLABLE_DECLARE(int ip_output(struct net *, struct sock *,
+ /* Output packet to network from transport.  */
+ static inline int dst_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	return INDIRECT_CALL_INET(skb_dst(skb)->output,
++	return INDIRECT_CALL_INET(READ_ONCE(skb_dst(skb)->output),
+ 				  ip6_output, ip_output,
+ 				  net, sk, skb);
+ }
+@@ -466,7 +466,7 @@ INDIRECT_CALLABLE_DECLARE(int ip_local_deliver(struct sk_buff *));
+ /* Input packet from network to transport.  */
+ static inline int dst_input(struct sk_buff *skb)
+ {
+-	return INDIRECT_CALL_INET(skb_dst(skb)->input,
++	return INDIRECT_CALL_INET(READ_ONCE(skb_dst(skb)->input),
+ 				  ip6_input, ip_local_deliver, skb);
+ }
+ 
+@@ -561,6 +561,26 @@ static inline void skb_dst_update_pmtu_no_confirm(struct sk_buff *skb, u32 mtu)
+ 		dst->ops->update_pmtu(dst, NULL, skb, mtu, false);
+ }
+ 
++static inline struct net_device *dst_dev(const struct dst_entry *dst)
++{
++	return READ_ONCE(dst->dev);
++}
++
++static inline struct net_device *skb_dst_dev(const struct sk_buff *skb)
++{
++	return dst_dev(skb_dst(skb));
++}
++
++static inline struct net *skb_dst_dev_net(const struct sk_buff *skb)
++{
++	return dev_net(skb_dst_dev(skb));
++}
++
++static inline struct net *skb_dst_dev_net_rcu(const struct sk_buff *skb)
++{
++	return dev_net_rcu(skb_dst_dev(skb));
++}
++
+ struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie);
+ void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
+ 			       struct sk_buff *skb, u32 mtu, bool confirm_neigh);
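
These accessors exist so that lockless readers pick up dst->dev with
READ_ONCE(); open-coded dereferences should be converted rather than mixed
with the helpers. An illustrative conversion (not taken from the patch; skb
is assumed to carry a dst):

	/* Before: struct net_device *dev = skb_dst(skb)->dev; */
	struct net_device *dev = skb_dst_dev(skb);	/* READ_ONCE() inside */
	struct net *net = skb_dst_dev_net_rcu(skb);	/* RCU-aware dev_net() */
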
+diff --git a/include/net/lwtunnel.h b/include/net/lwtunnel.h
+index c306ebe379a0b3..26232f603e33c9 100644
+--- a/include/net/lwtunnel.h
++++ b/include/net/lwtunnel.h
+@@ -138,12 +138,12 @@ int bpf_lwt_push_ip_encap(struct sk_buff *skb, void *hdr, u32 len,
+ static inline void lwtunnel_set_redirect(struct dst_entry *dst)
+ {
+ 	if (lwtunnel_output_redirect(dst->lwtstate)) {
+-		dst->lwtstate->orig_output = dst->output;
+-		dst->output = lwtunnel_output;
++		dst->lwtstate->orig_output = READ_ONCE(dst->output);
++		WRITE_ONCE(dst->output, lwtunnel_output);
+ 	}
+ 	if (lwtunnel_input_redirect(dst->lwtstate)) {
+-		dst->lwtstate->orig_input = dst->input;
+-		dst->input = lwtunnel_input;
++		dst->lwtstate->orig_input = READ_ONCE(dst->input);
++		WRITE_ONCE(dst->input, lwtunnel_input);
+ 	}
+ }
+ #else
+diff --git a/include/net/route.h b/include/net/route.h
+index 8e39aa822cf986..3d3d6048ffca2b 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -153,7 +153,7 @@ static inline void inet_sk_init_flowi4(const struct inet_sock *inet,
+ 			   ip_sock_rt_tos(sk), ip_sock_rt_scope(sk),
+ 			   sk->sk_protocol, inet_sk_flowi_flags(sk), daddr,
+ 			   inet->inet_saddr, inet->inet_dport,
+-			   inet->inet_sport, sk->sk_uid);
++			   inet->inet_sport, sk_uid(sk));
+ 	security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
+ }
+ 
+@@ -331,7 +331,7 @@ static inline void ip_route_connect_init(struct flowi4 *fl4, __be32 dst,
+ 
+ 	flowi4_init_output(fl4, oif, READ_ONCE(sk->sk_mark), ip_sock_rt_tos(sk),
+ 			   ip_sock_rt_scope(sk), protocol, flow_flags, dst,
+-			   src, dport, sport, sk->sk_uid);
++			   src, dport, sport, sk_uid(sk));
+ }
+ 
+ static inline struct rtable *ip_route_connect(struct flowi4 *fl4, __be32 dst,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 4c37015b7cf71e..e3ab203456858a 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2076,6 +2076,7 @@ static inline void sock_orphan(struct sock *sk)
+ 	sock_set_flag(sk, SOCK_DEAD);
+ 	sk_set_socket(sk, NULL);
+ 	sk->sk_wq  = NULL;
++	/* Note: sk_uid is unchanged. */
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+@@ -2086,18 +2087,25 @@ static inline void sock_graft(struct sock *sk, struct socket *parent)
+ 	rcu_assign_pointer(sk->sk_wq, &parent->wq);
+ 	parent->sk = sk;
+ 	sk_set_socket(sk, parent);
+-	sk->sk_uid = SOCK_INODE(parent)->i_uid;
++	WRITE_ONCE(sk->sk_uid, SOCK_INODE(parent)->i_uid);
+ 	security_sock_graft(sk, parent);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+ kuid_t sock_i_uid(struct sock *sk);
++
++static inline kuid_t sk_uid(const struct sock *sk)
++{
++	/* Paired with WRITE_ONCE() in sockfs_setattr() */
++	return READ_ONCE(sk->sk_uid);
++}
++
+ unsigned long __sock_i_ino(struct sock *sk);
+ unsigned long sock_i_ino(struct sock *sk);
+ 
+ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
+ {
+-	return sk ? sk->sk_uid : make_kuid(net->user_ns, 0);
++	return sk ? sk_uid(sk) : make_kuid(net->user_ns, 0);
+ }
+ 
+ static inline u32 net_tx_rndhash(void)
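
sk_uid() pairs with the WRITE_ONCE() writers in sock_graft() above and in
sockfs_setattr(), so lockless readers of the field should be converted to it.
Illustrative before/after (sk is assumed):

	/* Before: kuid_t uid = sk->sk_uid;  (races with sockfs_setattr) */
	kuid_t uid = sk_uid(sk);
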
+diff --git a/include/net/tc_act/tc_ctinfo.h b/include/net/tc_act/tc_ctinfo.h
+index f071c1d70a25e1..a04bcac7adf4b6 100644
+--- a/include/net/tc_act/tc_ctinfo.h
++++ b/include/net/tc_act/tc_ctinfo.h
+@@ -18,9 +18,9 @@ struct tcf_ctinfo_params {
+ struct tcf_ctinfo {
+ 	struct tc_action common;
+ 	struct tcf_ctinfo_params __rcu *params;
+-	u64 stats_dscp_set;
+-	u64 stats_dscp_error;
+-	u64 stats_cpmark_set;
++	atomic64_t stats_dscp_set;
++	atomic64_t stats_dscp_error;
++	atomic64_t stats_cpmark_set;
+ };
+ 
+ enum {
+diff --git a/include/net/udp.h b/include/net/udp.h
+index a772510b2aa58a..7a4524243b1921 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -587,6 +587,16 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ {
+ 	netdev_features_t features = NETIF_F_SG;
+ 	struct sk_buff *segs;
++	int drop_count;
++
++	/*
++	 * Segmentation in the UDP receive path is only for UDP GRO; drop UDP
++	 * fragmentation offload (UFO) packets.
++	 */
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP) {
++		drop_count = 1;
++		goto drop;
++	}
+ 
+ 	/* Avoid csum recalculation by skb_segment unless userspace explicitly
+ 	 * asks for the final checksum values
+@@ -610,16 +620,18 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
+ 	 */
+ 	segs = __skb_gso_segment(skb, features, false);
+ 	if (IS_ERR_OR_NULL(segs)) {
+-		int segs_nr = skb_shinfo(skb)->gso_segs;
+-
+-		atomic_add(segs_nr, &sk->sk_drops);
+-		SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, segs_nr);
+-		kfree_skb(skb);
+-		return NULL;
++		drop_count = skb_shinfo(skb)->gso_segs;
++		goto drop;
+ 	}
+ 
+ 	consume_skb(skb);
+ 	return segs;
++
++drop:
++	atomic_add(drop_count, &sk->sk_drops);
++	SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, drop_count);
++	kfree_skb(skb);
++	return NULL;
+ }
+ 
+ static inline void udp_post_segment_fix_csum(struct sk_buff *skb)
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index af43a8d2a74aec..6353da1c022805 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -4794,11 +4794,17 @@ struct ib_ucontext *ib_uverbs_get_ucontext_file(struct ib_uverbs_file *ufile);
+ 
+ #if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
+ int uverbs_destroy_def_handler(struct uverbs_attr_bundle *attrs);
++bool rdma_uattrs_has_raw_cap(const struct uverbs_attr_bundle *attrs);
+ #else
+ static inline int uverbs_destroy_def_handler(struct uverbs_attr_bundle *attrs)
+ {
+ 	return 0;
+ }
++static inline bool
++rdma_uattrs_has_raw_cap(const struct uverbs_attr_bundle *attrs)
++{
++	return false;
++}
+ #endif
+ 
+ struct net_device *rdma_alloc_netdev(struct ib_device *device, u32 port_num,
+@@ -4855,6 +4861,12 @@ static inline int ibdev_to_node(struct ib_device *ibdev)
+ bool rdma_dev_access_netns(const struct ib_device *device,
+ 			   const struct net *net);
+ 
++bool rdma_dev_has_raw_cap(const struct ib_device *dev);
++static inline struct net *rdma_dev_net(struct ib_device *device)
++{
++	return read_pnet(&device->coredev.rdma_net);
++}
++
+ #define IB_ROCE_UDP_ENCAP_VALID_PORT_MIN (0xC000)
+ #define IB_ROCE_UDP_ENCAP_VALID_PORT_MAX (0xFFFF)
+ #define IB_GRH_FLOWLABEL_MASK (0x000FFFFF)
+diff --git a/include/sound/tas2781-tlv.h b/include/sound/tas2781-tlv.h
+index d87263e43fdb61..ef9b9f19d21205 100644
+--- a/include/sound/tas2781-tlv.h
++++ b/include/sound/tas2781-tlv.h
+@@ -15,7 +15,7 @@
+ #ifndef __TAS2781_TLV_H__
+ #define __TAS2781_TLV_H__
+ 
+-static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 100, 0);
++static const __maybe_unused DECLARE_TLV_DB_SCALE(dvc_tlv, -10000, 50, 0);
+ static const __maybe_unused DECLARE_TLV_DB_SCALE(amp_vol_tlv, 1100, 50, 0);
+ 
+ #endif
+diff --git a/include/trace/events/power.h b/include/trace/events/power.h
+index 6c631eec23e32b..913181cebfe9ab 100644
+--- a/include/trace/events/power.h
++++ b/include/trace/events/power.h
+@@ -99,28 +99,6 @@ DEFINE_EVENT(psci_domain_idle, psci_domain_idle_exit,
+ 	TP_ARGS(cpu_id, state, s2idle)
+ );
+ 
+-TRACE_EVENT(powernv_throttle,
+-
+-	TP_PROTO(int chip_id, const char *reason, int pmax),
+-
+-	TP_ARGS(chip_id, reason, pmax),
+-
+-	TP_STRUCT__entry(
+-		__field(int, chip_id)
+-		__string(reason, reason)
+-		__field(int, pmax)
+-	),
+-
+-	TP_fast_assign(
+-		__entry->chip_id = chip_id;
+-		__assign_str(reason);
+-		__entry->pmax = pmax;
+-	),
+-
+-	TP_printk("Chip %d Pmax %d %s", __entry->chip_id,
+-		  __entry->pmax, __get_str(reason))
+-);
+-
+ TRACE_EVENT(pstate_sample,
+ 
+ 	TP_PROTO(u32 core_busy,
+diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
+index ad9a70afea6c27..3a76c4f2882b66 100644
+--- a/include/uapi/drm/panthor_drm.h
++++ b/include/uapi/drm/panthor_drm.h
+@@ -296,6 +296,9 @@ struct drm_panthor_gpu_info {
+ 	/** @as_present: Bitmask encoding the number of address-space exposed by the MMU. */
+ 	__u32 as_present;
+ 
++	/** @pad0: MBZ. */
++	__u32 pad0;
++
+ 	/** @shader_present: Bitmask encoding the shader cores exposed by the GPU. */
+ 	__u64 shader_present;
+ 
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index 6a702ba7817c38..5f1524f466a7cf 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -925,9 +925,9 @@ struct drm_xe_gem_mmap_offset {
+  *  - %DRM_XE_VM_CREATE_FLAG_LR_MODE - An LR, or Long Running VM accepts
+  *    exec submissions to its exec_queues that don't have an upper time
+  *    limit on the job execution time. But exec submissions to these
+- *    don't allow any of the flags DRM_XE_SYNC_FLAG_SYNCOBJ,
+- *    DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ, DRM_XE_SYNC_FLAG_DMA_BUF,
+- *    used as out-syncobjs, that is, together with DRM_XE_SYNC_FLAG_SIGNAL.
++ *    don't allow any of the sync types DRM_XE_SYNC_TYPE_SYNCOBJ,
++ *    DRM_XE_SYNC_TYPE_TIMELINE_SYNCOBJ, used as out-syncobjs, that is,
++ *    together with sync flag DRM_XE_SYNC_FLAG_SIGNAL.
+  *    LR VMs can be created in recoverable page-fault mode using
+  *    DRM_XE_VM_CREATE_FLAG_FAULT_MODE, if the device supports it.
+  *    If that flag is omitted, the UMD can not rely on the slightly
+@@ -1394,7 +1394,7 @@ struct drm_xe_sync {
+ 
+ 	/**
+ 	 * @timeline_value: Input for the timeline sync object. Needs to be
+-	 * different than 0 when used with %DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ.
++	 * different than 0 when used with %DRM_XE_SYNC_TYPE_TIMELINE_SYNCOBJ.
+ 	 */
+ 	__u64 timeline_value;
+ 
+diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
+index 5764f315137f99..75100bf009baf5 100644
+--- a/include/uapi/linux/vfio.h
++++ b/include/uapi/linux/vfio.h
+@@ -905,10 +905,12 @@ struct vfio_device_feature {
+  * VFIO_DEVICE_BIND_IOMMUFD - _IOR(VFIO_TYPE, VFIO_BASE + 18,
+  *				   struct vfio_device_bind_iommufd)
+  * @argsz:	 User filled size of this data.
+- * @flags:	 Must be 0.
++ * @flags:	 Must be 0 or a combination of VFIO_DEVICE_BIND_* flags.
+  * @iommufd:	 iommufd to bind.
+  * @out_devid:	 The device id generated by this bind. devid is a handle for
+  *		 this device/iommufd bond and can be used in IOMMUFD commands.
++ * @token_uuid_ptr: Valid if VFIO_DEVICE_BIND_FLAG_TOKEN is set. Points to a
++ *                  16-byte UUID in the same format as
++ *                  VFIO_DEVICE_FEATURE_PCI_VF_TOKEN.
+  *
+  * Bind a vfio_device to the specified iommufd.
+  *
+@@ -917,13 +919,21 @@ struct vfio_device_feature {
+  *
+  * Unbind is automatically conducted when device fd is closed.
+  *
++ * A token is sometimes required to open the device. Unless this is known to be
++ * needed, VFIO_DEVICE_BIND_FLAG_TOKEN should not be set and token_uuid_ptr is
++ * ignored. The only case today is a PF/VF relationship where the VF bind must
++ * be provided the same token as VFIO_DEVICE_FEATURE_PCI_VF_TOKEN provided to
++ * the PF.
++ *
+  * Return: 0 on success, -errno on failure.
+  */
+ struct vfio_device_bind_iommufd {
+ 	__u32		argsz;
+ 	__u32		flags;
++#define VFIO_DEVICE_BIND_FLAG_TOKEN (1 << 0)
+ 	__s32		iommufd;
+ 	__u32		out_devid;
++	__aligned_u64	token_uuid_ptr;
+ };
+ 
+ #define VFIO_DEVICE_BIND_IOMMUFD	_IO(VFIO_TYPE, VFIO_BASE + 18)
+diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
+index d4b3e2ae1314d1..e72f2655459e45 100644
+--- a/include/uapi/linux/vhost.h
++++ b/include/uapi/linux/vhost.h
+@@ -235,4 +235,33 @@
+  */
+ #define VHOST_VDPA_GET_VRING_SIZE	_IOWR(VHOST_VIRTIO, 0x82,	\
+ 					      struct vhost_vring_state)
++
++/* fork_owner values for vhost */
++#define VHOST_FORK_OWNER_KTHREAD 0
++#define VHOST_FORK_OWNER_TASK 1
++
++/**
++ * VHOST_SET_FORK_FROM_OWNER - Set the fork_owner flag for the vhost device.
++ * This ioctl must be called before VHOST_SET_OWNER.
++ * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y
++ *
++ * @param fork_owner: An 8-bit value that determines the vhost thread mode
++ *
++ * When fork_owner is set to VHOST_FORK_OWNER_TASK (default value):
++ *   - Vhost will create vhost worker as tasks forked from the owner,
++ *     inheriting all of the owner's attributes.
++ *
++ * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD:
++ *   - Vhost will create vhost workers as kernel threads.
++ */
++#define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x83, __u8)
++
++/**
++ * VHOST_GET_FORK_OWNER - Get the current fork_owner flag for the vhost device.
++ * Only available when CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y
++ *
++ * @return: An 8-bit value indicating the current thread mode.
++ */
++#define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x84, __u8)
++
+ #endif
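
From userspace the sequencing requirement reads as follows; a hedged sketch
(file-descriptor handling and error paths are illustrative, and the ioctls
only exist with CONFIG_VHOST_ENABLE_FORK_OWNER_CONTROL=y):

	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	static int vhost_use_kthreads(int vhost_fd)
	{
		__u8 mode = VHOST_FORK_OWNER_KTHREAD;

		/* Must happen before VHOST_SET_OWNER. */
		if (ioctl(vhost_fd, VHOST_SET_FORK_FROM_OWNER, &mode) < 0)
			return -1;
		return ioctl(vhost_fd, VHOST_SET_OWNER);
	}
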
+diff --git a/init/Kconfig b/init/Kconfig
+index 666783eb50abd7..2e15b4a8478e81 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1794,7 +1794,7 @@ config IO_URING
+ 
+ config GCOV_PROFILE_URING
+ 	bool "Enable GCOV profiling on the io_uring subsystem"
+-	depends on GCOV_KERNEL
++	depends on IO_URING && GCOV_KERNEL
+ 	help
+ 	  Enable GCOV profiling on the io_uring subsystem, to facilitate
+ 	  code coverage testing.
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 0211cb307d3028..2a24d01c5fb0e2 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -200,7 +200,7 @@ struct audit_context {
+ 			int			argc;
+ 		} execve;
+ 		struct {
+-			char			*name;
++			const char		*name;
+ 		} module;
+ 		struct {
+ 			struct audit_ntp_data	ntp_data;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 78fd876a5473fb..eb98cd6fe91fb5 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -2864,7 +2864,7 @@ void __audit_openat2_how(struct open_how *how)
+ 	context->type = AUDIT_OPENAT2;
+ }
+ 
+-void __audit_log_kern_module(char *name)
++void __audit_log_kern_module(const char *name)
+ {
+ 	struct audit_context *context = audit_context();
+ 
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index f4885514f007b3..deb88fade2499e 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -2440,22 +2440,22 @@ static bool cg_sockopt_is_valid_access(int off, int size,
+ 	}
+ 
+ 	switch (off) {
+-	case offsetof(struct bpf_sockopt, sk):
++	case bpf_ctx_range_ptr(struct bpf_sockopt, sk):
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		info->reg_type = PTR_TO_SOCKET;
+ 		break;
+-	case offsetof(struct bpf_sockopt, optval):
++	case bpf_ctx_range_ptr(struct bpf_sockopt, optval):
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		info->reg_type = PTR_TO_PACKET;
+ 		break;
+-	case offsetof(struct bpf_sockopt, optval_end):
++	case bpf_ctx_range_ptr(struct bpf_sockopt, optval_end):
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		info->reg_type = PTR_TO_PACKET_END;
+ 		break;
+-	case offsetof(struct bpf_sockopt, retval):
++	case bpf_ctx_range(struct bpf_sockopt, retval):
+ 		if (size != size_default)
+ 			return false;
+ 		return prog->expected_attach_type == BPF_CGROUP_GETSOCKOPT;
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index c20babbf998f4e..d966e971893ab3 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -778,7 +778,10 @@ bool is_bpf_text_address(unsigned long addr)
+ 
+ struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
+ {
+-	struct bpf_ksym *ksym = bpf_ksym_find(addr);
++	struct bpf_ksym *ksym;
++
++	WARN_ON_ONCE(!rcu_read_lock_held());
++	ksym = bpf_ksym_find(addr);
+ 
+ 	return ksym && ksym->prog ?
+ 	       container_of(ksym, struct bpf_prog_aux, ksym)->prog :
+@@ -2362,28 +2365,44 @@ static bool __bpf_prog_map_compatible(struct bpf_map *map,
+ 				      const struct bpf_prog *fp)
+ {
+ 	enum bpf_prog_type prog_type = resolve_prog_type(fp);
+-	bool ret;
+ 	struct bpf_prog_aux *aux = fp->aux;
++	enum bpf_cgroup_storage_type i;
++	bool ret = false;
++	u64 cookie;
+ 
+ 	if (fp->kprobe_override)
+-		return false;
++		return ret;
+ 
+-	spin_lock(&map->owner.lock);
+-	if (!map->owner.type) {
+-		/* There's no owner yet where we could check for
+-		 * compatibility.
+-		 */
+-		map->owner.type  = prog_type;
+-		map->owner.jited = fp->jited;
+-		map->owner.xdp_has_frags = aux->xdp_has_frags;
+-		map->owner.attach_func_proto = aux->attach_func_proto;
++	spin_lock(&map->owner_lock);
++	/* There's no owner yet where we could check for compatibility. */
++	if (!map->owner) {
++		map->owner = bpf_map_owner_alloc(map);
++		if (!map->owner)
++			goto err;
++		map->owner->type  = prog_type;
++		map->owner->jited = fp->jited;
++		map->owner->xdp_has_frags = aux->xdp_has_frags;
++		map->owner->attach_func_proto = aux->attach_func_proto;
++		for_each_cgroup_storage_type(i) {
++			map->owner->storage_cookie[i] =
++				aux->cgroup_storage[i] ?
++				aux->cgroup_storage[i]->cookie : 0;
++		}
+ 		ret = true;
+ 	} else {
+-		ret = map->owner.type  == prog_type &&
+-		      map->owner.jited == fp->jited &&
+-		      map->owner.xdp_has_frags == aux->xdp_has_frags;
++		ret = map->owner->type  == prog_type &&
++		      map->owner->jited == fp->jited &&
++		      map->owner->xdp_has_frags == aux->xdp_has_frags;
++		for_each_cgroup_storage_type(i) {
++			if (!ret)
++				break;
++			cookie = aux->cgroup_storage[i] ?
++				 aux->cgroup_storage[i]->cookie : 0;
++			ret = map->owner->storage_cookie[i] == cookie ||
++			      !cookie;
++		}
+ 		if (ret &&
+-		    map->owner.attach_func_proto != aux->attach_func_proto) {
++		    map->owner->attach_func_proto != aux->attach_func_proto) {
+ 			switch (prog_type) {
+ 			case BPF_PROG_TYPE_TRACING:
+ 			case BPF_PROG_TYPE_LSM:
+@@ -2396,8 +2415,8 @@ static bool __bpf_prog_map_compatible(struct bpf_map *map,
+ 			}
+ 		}
+ 	}
+-	spin_unlock(&map->owner.lock);
+-
++err:
++	spin_unlock(&map->owner_lock);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index ad6df48b540caf..fdf8737542ac45 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -2943,9 +2943,16 @@ static bool bpf_stack_walker(void *cookie, u64 ip, u64 sp, u64 bp)
+ 	struct bpf_throw_ctx *ctx = cookie;
+ 	struct bpf_prog *prog;
+ 
+-	if (!is_bpf_text_address(ip))
+-		return !ctx->cnt;
++	/*
++	 * The RCU read lock is held to safely traverse the latch tree, but we
++	 * don't need its protection when accessing the prog, since it has an
++	 * active stack frame on the current stack trace, and won't disappear.
++	 */
++	rcu_read_lock();
+ 	prog = bpf_prog_ksym_find(ip);
++	rcu_read_unlock();
++	if (!prog)
++		return !ctx->cnt;
+ 	ctx->cnt++;
+ 	if (bpf_is_subprog(prog))
+ 		return true;
+diff --git a/kernel/bpf/preload/Kconfig b/kernel/bpf/preload/Kconfig
+index c9d45c9d6918d1..f9b11d01c3b50d 100644
+--- a/kernel/bpf/preload/Kconfig
++++ b/kernel/bpf/preload/Kconfig
+@@ -10,7 +10,6 @@ menuconfig BPF_PRELOAD
+ 	# The dependency on !COMPILE_TEST prevents it from being enabled
+ 	# in allmodconfig or allyesconfig configurations
+ 	depends on !COMPILE_TEST
+-	select USERMODE_DRIVER
+ 	help
+ 	  This builds kernel module with several embedded BPF programs that are
+ 	  pinned into BPF FS mount point as human readable files that are
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index dd5304c6ac3cc1..88511a9bc114a0 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -37,6 +37,7 @@
+ #include <linux/trace_events.h>
+ #include <linux/tracepoint.h>
+ #include <linux/overflow.h>
++#include <linux/cookie.h>
+ 
+ #include <net/netfilter/nf_bpf_link.h>
+ #include <net/netkit.h>
+@@ -53,6 +54,7 @@
+ #define BPF_OBJ_FLAG_MASK   (BPF_F_RDONLY | BPF_F_WRONLY)
+ 
+ DEFINE_PER_CPU(int, bpf_prog_active);
++DEFINE_COOKIE(bpf_map_cookie);
+ static DEFINE_IDR(prog_idr);
+ static DEFINE_SPINLOCK(prog_idr_lock);
+ static DEFINE_IDR(map_idr);
+@@ -885,6 +887,7 @@ static void bpf_map_free_deferred(struct work_struct *work)
+ 
+ 	security_bpf_map_free(map);
+ 	bpf_map_release_memcg(map);
++	bpf_map_owner_free(map);
+ 	bpf_map_free(map);
+ }
+ 
+@@ -979,12 +982,12 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
+ 	struct bpf_map *map = filp->private_data;
+ 	u32 type = 0, jited = 0;
+ 
+-	if (map_type_contains_progs(map)) {
+-		spin_lock(&map->owner.lock);
+-		type  = map->owner.type;
+-		jited = map->owner.jited;
+-		spin_unlock(&map->owner.lock);
++	spin_lock(&map->owner_lock);
++	if (map->owner) {
++		type  = map->owner->type;
++		jited = map->owner->jited;
+ 	}
++	spin_unlock(&map->owner_lock);
+ 
+ 	seq_printf(m,
+ 		   "map_type:\t%u\n"
+@@ -1487,10 +1490,14 @@ static int map_create(union bpf_attr *attr, bool kernel)
+ 	if (err < 0)
+ 		goto free_map;
+ 
++	preempt_disable();
++	map->cookie = gen_cookie_next(&bpf_map_cookie);
++	preempt_enable();
++
+ 	atomic64_set(&map->refcnt, 1);
+ 	atomic64_set(&map->usercnt, 1);
+ 	mutex_init(&map->freeze_mutex);
+-	spin_lock_init(&map->owner.lock);
++	spin_lock_init(&map->owner_lock);
+ 
+ 	if (attr->btf_key_type_id || attr->btf_value_type_id ||
+ 	    /* Even the map's value is a kernel's struct,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 169845710c7e16..97e07eb31fec2f 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -23671,6 +23671,7 @@ static bool can_jump(struct bpf_insn *insn)
+ 	case BPF_JSLT:
+ 	case BPF_JSLE:
+ 	case BPF_JCOND:
++	case BPF_JSET:
+ 		return true;
+ 	}
+ 
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index fa24c032ed6fe0..2a4a387f867abc 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -32,6 +32,9 @@ static u16 cgroup_no_v1_mask;
+ /* disable named v1 mounts */
+ static bool cgroup_no_v1_named;
+ 
++/* Show unavailable controllers in /proc/cgroups */
++static bool proc_show_all;
++
+ /*
+  * pidlist destructions need to be flushed on cgroup destruction.  Use a
+  * separate workqueue as flush domain.
+@@ -683,10 +686,11 @@ int proc_cgroupstats_show(struct seq_file *m, void *v)
+ 	 */
+ 
+ 	for_each_subsys(ss, i) {
+-		if (cgroup1_subsys_absent(ss))
+-			continue;
+ 		cgrp_v1_visible |= ss->root != &cgrp_dfl_root;
+ 
++		if (!proc_show_all && cgroup1_subsys_absent(ss))
++			continue;
++
+ 		seq_printf(m, "%s\t%d\t%d\t%d\n",
+ 			   ss->legacy_name, ss->root->hierarchy_id,
+ 			   atomic_read(&ss->root->nr_cgrps),
+@@ -1359,3 +1363,9 @@ static int __init cgroup_no_v1(char *str)
+ 	return 1;
+ }
+ __setup("cgroup_no_v1=", cgroup_no_v1);
++
++static int __init cgroup_v1_proc(char *str)
++{
++	return (kstrtobool(str, &proc_show_all) == 0);
++}
++__setup("cgroup_v1_proc=", cgroup_v1_proc);
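
The parameter is parsed with kstrtobool(), so restoring the old /proc/cgroups
listing of unavailable v1 controllers is a matter of booting with:

	cgroup_v1_proc=1
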
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 22fdf0c187cd97..8060c2857bb2b3 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6842,10 +6842,20 @@ static vm_fault_t perf_mmap_pfn_mkwrite(struct vm_fault *vmf)
+ 	return vmf->pgoff == 0 ? 0 : VM_FAULT_SIGBUS;
+ }
+ 
++static int perf_mmap_may_split(struct vm_area_struct *vma, unsigned long addr)
++{
++	/*
++	 * Forbid splitting perf mappings to prevent refcount leaks due to
++	 * the resulting non-matching offsets and sizes. See open()/close().
++	 */
++	return -EINVAL;
++}
++
+ static const struct vm_operations_struct perf_mmap_vmops = {
+ 	.open		= perf_mmap_open,
+ 	.close		= perf_mmap_close, /* non mergeable */
+ 	.pfn_mkwrite	= perf_mmap_pfn_mkwrite,
++	.may_split	= perf_mmap_may_split,
+ };
+ 
+ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
+@@ -7051,8 +7061,6 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 			ret = 0;
+ 			goto unlock;
+ 		}
+-
+-		atomic_set(&rb->aux_mmap_count, 1);
+ 	}
+ 
+ 	user_lock_limit = sysctl_perf_event_mlock >> (PAGE_SHIFT - 10);
+@@ -7115,15 +7123,16 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		perf_event_update_time(event);
+ 		perf_event_init_userpage(event);
+ 		perf_event_update_userpage(event);
++		ret = 0;
+ 	} else {
+ 		ret = rb_alloc_aux(rb, event, vma->vm_pgoff, nr_pages,
+ 				   event->attr.aux_watermark, flags);
+-		if (!ret)
++		if (!ret) {
++			atomic_set(&rb->aux_mmap_count, 1);
+ 			rb->aux_mmap_locked = extra;
++		}
+ 	}
+ 
+-	ret = 0;
+-
+ unlock:
+ 	if (!ret) {
+ 		atomic_long_add(user_extra, &user->locked_vm);
+@@ -7131,6 +7140,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 
+ 		atomic_inc(&event->mmap_count);
+ 	} else if (rb) {
++		/* AUX allocation failed */
+ 		atomic_dec(&rb->mmap_count);
+ 	}
+ aux_unlock:
+@@ -7138,6 +7148,9 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 		mutex_unlock(aux_mutex);
+ 	mutex_unlock(&event->mmap_mutex);
+ 
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+ 	 * vma.
+@@ -7145,13 +7158,20 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+ 	vma->vm_ops = &perf_mmap_vmops;
+ 
+-	if (!ret)
+-		ret = map_range(rb, vma);
+-
+ 	mapped = get_mapped(event, event_mapped);
+ 	if (mapped)
+ 		mapped(event, vma->vm_mm);
+ 
++	/*
++	 * Try to map it into the page table. On failure, invoke
++	 * perf_mmap_close() to undo the above, as the callsite expects
++	 * full cleanup in this case and therefore does not invoke
++	 * vmops::close().
++	 */
++	ret = map_range(rb, vma);
++	if (ret)
++		perf_mmap_close(vma);
++
+ 	return ret;
+ }
+ 
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 4c965ba77f9f8c..84ee7b590861a0 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -581,8 +581,8 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+ 
+ out:
+ 	/* Revert back reference counter if instruction update failed. */
+-	if (ret < 0 && is_register && ref_ctr_updated)
+-		update_ref_ctr(uprobe, mm, -1);
++	if (ret < 0 && ref_ctr_updated)
++		update_ref_ctr(uprobe, mm, is_register ? -1 : 1);
+ 
+ 	/* try collapse pmd for compound page */
+ 	if (ret > 0)
+diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
+index c2871180edccc1..49ab81faaed95f 100644
+--- a/kernel/kcsan/kcsan_test.c
++++ b/kernel/kcsan/kcsan_test.c
+@@ -533,7 +533,7 @@ static void test_barrier_nothreads(struct kunit *test)
+ 	struct kcsan_scoped_access *reorder_access = NULL;
+ #endif
+ 	arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED;
+-	atomic_t dummy;
++	atomic_t dummy = ATOMIC_INIT(0);
+ 
+ 	KCSAN_TEST_REQUIRES(test, reorder_access != NULL);
+ 	KCSAN_TEST_REQUIRES(test, IS_ENABLED(CONFIG_SMP));
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index 3a9a9f240dbc96..5543695952982b 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -1080,7 +1080,7 @@ int kernel_kexec(void)
+ 		console_suspend_all();
+ 		error = dpm_suspend_start(PMSG_FREEZE);
+ 		if (error)
+-			goto Resume_console;
++			goto Resume_devices;
+ 		/*
+ 		 * dpm_suspend_end() must be called after dpm_suspend_start()
+ 		 * to complete the transition, like in the hibernation flows
+@@ -1135,7 +1135,6 @@ int kernel_kexec(void)
+ 		dpm_resume_start(PMSG_RESTORE);
+  Resume_devices:
+ 		dpm_resume_end(PMSG_RESTORE);
+- Resume_console:
+ 		pm_restore_gfp_mask();
+ 		console_resume_all();
+ 		thaw_processes();
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index c2c08007029d1b..43df45c39f59cb 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -3373,7 +3373,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 
+ 	module_allocated = true;
+ 
+-	audit_log_kern_module(mod->name);
++	audit_log_kern_module(info->name);
+ 
+ 	/* Reserve our place in the list. */
+ 	err = add_unformed_module(mod);
+@@ -3537,8 +3537,10 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ 	 * failures once the proper module was allocated and
+ 	 * before that.
+ 	 */
+-	if (!module_allocated)
++	if (!module_allocated) {
++		audit_log_kern_module(info->name ? info->name : "?");
+ 		mod_stat_bump_becoming(info, flags);
++	}
+ 	free_copy(info, flags);
+ 	return err;
+ }
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 7eee94166357a0..25cd3406477ab8 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -261,20 +261,17 @@ EXPORT_SYMBOL(padata_do_parallel);
+  *   be parallel processed by another cpu and is not yet present in
+  *   the cpu's reorder queue.
+  */
+-static struct padata_priv *padata_find_next(struct parallel_data *pd,
+-					    bool remove_object)
++static struct padata_priv *padata_find_next(struct parallel_data *pd, int cpu,
++					    unsigned int processed)
+ {
+ 	struct padata_priv *padata;
+ 	struct padata_list *reorder;
+-	int cpu = pd->cpu;
+ 
+ 	reorder = per_cpu_ptr(pd->reorder_list, cpu);
+ 
+ 	spin_lock(&reorder->lock);
+-	if (list_empty(&reorder->list)) {
+-		spin_unlock(&reorder->lock);
+-		return NULL;
+-	}
++	if (list_empty(&reorder->list))
++		goto notfound;
+ 
+ 	padata = list_entry(reorder->list.next, struct padata_priv, list);
+ 
+@@ -282,97 +279,52 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
+ 	 * Checks the rare case where two or more parallel jobs have hashed to
+ 	 * the same CPU and one of the later ones finishes first.
+ 	 */
+-	if (padata->seq_nr != pd->processed) {
+-		spin_unlock(&reorder->lock);
+-		return NULL;
+-	}
+-
+-	if (remove_object) {
+-		list_del_init(&padata->list);
+-		++pd->processed;
+-		pd->cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu);
+-	}
++	if (padata->seq_nr != processed)
++		goto notfound;
+ 
++	list_del_init(&padata->list);
+ 	spin_unlock(&reorder->lock);
+ 	return padata;
++
++notfound:
++	pd->processed = processed;
++	pd->cpu = cpu;
++	spin_unlock(&reorder->lock);
++	return NULL;
+ }
+ 
+-static void padata_reorder(struct parallel_data *pd)
++static void padata_reorder(struct padata_priv *padata)
+ {
++	struct parallel_data *pd = padata->pd;
+ 	struct padata_instance *pinst = pd->ps->pinst;
+-	int cb_cpu;
+-	struct padata_priv *padata;
+-	struct padata_serial_queue *squeue;
+-	struct padata_list *reorder;
++	unsigned int processed;
++	int cpu;
+ 
+-	/*
+-	 * We need to ensure that only one cpu can work on dequeueing of
+-	 * the reorder queue the time. Calculating in which percpu reorder
+-	 * queue the next object will arrive takes some time. A spinlock
+-	 * would be highly contended. Also it is not clear in which order
+-	 * the objects arrive to the reorder queues. So a cpu could wait to
+-	 * get the lock just to notice that there is nothing to do at the
+-	 * moment. Therefore we use a trylock and let the holder of the lock
+-	 * care for all the objects enqueued during the holdtime of the lock.
+-	 */
+-	if (!spin_trylock_bh(&pd->lock))
+-		return;
++	processed = pd->processed;
++	cpu = pd->cpu;
+ 
+-	while (1) {
+-		padata = padata_find_next(pd, true);
++	do {
++		struct padata_serial_queue *squeue;
++		int cb_cpu;
+ 
+-		/*
+-		 * If the next object that needs serialization is parallel
+-		 * processed by another cpu and is still on it's way to the
+-		 * cpu's reorder queue, nothing to do for now.
+-		 */
+-		if (!padata)
+-			break;
++		cpu = cpumask_next_wrap(cpu, pd->cpumask.pcpu);
++		processed++;
+ 
+ 		cb_cpu = padata->cb_cpu;
+ 		squeue = per_cpu_ptr(pd->squeue, cb_cpu);
+ 
+ 		spin_lock(&squeue->serial.lock);
+ 		list_add_tail(&padata->list, &squeue->serial.list);
+-		spin_unlock(&squeue->serial.lock);
+-
+ 		queue_work_on(cb_cpu, pinst->serial_wq, &squeue->work);
+-	}
+ 
+-	spin_unlock_bh(&pd->lock);
+-
+-	/*
+-	 * The next object that needs serialization might have arrived to
+-	 * the reorder queues in the meantime.
+-	 *
+-	 * Ensure reorder queue is read after pd->lock is dropped so we see
+-	 * new objects from another task in padata_do_serial.  Pairs with
+-	 * smp_mb in padata_do_serial.
+-	 */
+-	smp_mb();
+-
+-	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
+-	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
+ 		/*
+-		 * Other context(eg. the padata_serial_worker) can finish the request.
+-		 * To avoid UAF issue, add pd ref here, and put pd ref after reorder_work finish.
++		 * If the next object that needs serialization is parallel
++		 * processed by another cpu and is still on its way to the
++		 * cpu's reorder queue, end the loop.
+ 		 */
+-		padata_get_pd(pd);
+-		if (!queue_work(pinst->serial_wq, &pd->reorder_work))
+-			padata_put_pd(pd);
+-	}
+-}
+-
+-static void invoke_padata_reorder(struct work_struct *work)
+-{
+-	struct parallel_data *pd;
+-
+-	local_bh_disable();
+-	pd = container_of(work, struct parallel_data, reorder_work);
+-	padata_reorder(pd);
+-	local_bh_enable();
+-	/* Pairs with putting the reorder_work in the serial_wq */
+-	padata_put_pd(pd);
++		padata = padata_find_next(pd, cpu, processed);
++		spin_unlock(&squeue->serial.lock);
++	} while (padata);
+ }
+ 
+ static void padata_serial_worker(struct work_struct *serial_work)
+@@ -423,6 +375,7 @@ void padata_do_serial(struct padata_priv *padata)
+ 	struct padata_list *reorder = per_cpu_ptr(pd->reorder_list, hashed_cpu);
+ 	struct padata_priv *cur;
+ 	struct list_head *pos;
++	bool gotit = true;
+ 
+ 	spin_lock(&reorder->lock);
+ 	/* Sort in ascending order of sequence number. */
+@@ -432,17 +385,14 @@ void padata_do_serial(struct padata_priv *padata)
+ 		if ((signed int)(cur->seq_nr - padata->seq_nr) < 0)
+ 			break;
+ 	}
+-	list_add(&padata->list, pos);
++	if (padata->seq_nr != pd->processed) {
++		gotit = false;
++		list_add(&padata->list, pos);
++	}
+ 	spin_unlock(&reorder->lock);
+ 
+-	/*
+-	 * Ensure the addition to the reorder list is ordered correctly
+-	 * with the trylock of pd->lock in padata_reorder.  Pairs with smp_mb
+-	 * in padata_reorder.
+-	 */
+-	smp_mb();
+-
+-	padata_reorder(pd);
++	if (gotit)
++		padata_reorder(padata);
+ }
+ EXPORT_SYMBOL(padata_do_serial);
+ 
+@@ -632,9 +582,7 @@ static struct parallel_data *padata_alloc_pd(struct padata_shell *ps)
+ 	padata_init_squeues(pd);
+ 	pd->seq_nr = -1;
+ 	refcount_set(&pd->refcnt, 1);
+-	spin_lock_init(&pd->lock);
+ 	pd->cpu = cpumask_first(pd->cpumask.pcpu);
+-	INIT_WORK(&pd->reorder_work, invoke_padata_reorder);
+ 
+ 	return pd;
+ 
+@@ -1144,12 +1092,6 @@ void padata_free_shell(struct padata_shell *ps)
+ 	if (!ps)
+ 		return;
+ 
+-	/*
+-	 * Wait for all _do_serial calls to finish to avoid touching
+-	 * freed pd's and ps's.
+-	 */
+-	synchronize_rcu();
+-
+ 	mutex_lock(&ps->pinst->lock);
+ 	list_del(&ps->list);
+ 	pd = rcu_dereference_protected(ps->pd, 1);
+diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
+index f11a7c2af778cd..ab7fcdc94cc08f 100644
+--- a/kernel/rcu/refscale.c
++++ b/kernel/rcu/refscale.c
+@@ -85,7 +85,7 @@ torture_param(int, holdoff, IS_BUILTIN(CONFIG_RCU_REF_SCALE_TEST) ? 10 : 0,
+ // Number of typesafe_lookup structures, that is, the degree of concurrency.
+ torture_param(long, lookup_instances, 0, "Number of typesafe_lookup structures.");
+ // Number of loops per experiment, all readers execute operations concurrently.
+-torture_param(long, loops, 10000, "Number of loops per experiment.");
++torture_param(int, loops, 10000, "Number of loops per experiment.");
+ // Number of readers, with -1 defaulting to about 75% of the CPUs.
+ torture_param(int, nreaders, -1, "Number of readers, -1 for 75% of CPUs.");
+ // Number of runs.
+@@ -1140,7 +1140,7 @@ static void
+ ref_scale_print_module_parms(const struct ref_scale_ops *cur_ops, const char *tag)
+ {
+ 	pr_alert("%s" SCALE_FLAG
+-		 "--- %s:  verbose=%d verbose_batched=%d shutdown=%d holdoff=%d lookup_instances=%ld loops=%ld nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
++		 "--- %s:  verbose=%d verbose_batched=%d shutdown=%d holdoff=%d lookup_instances=%ld loops=%d nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
+ 		 verbose, verbose_batched, shutdown, holdoff, lookup_instances, loops, nreaders, nruns, readdelay);
+ }
+ 
+@@ -1238,12 +1238,16 @@ ref_scale_init(void)
+ 	// Reader tasks (default to ~75% of online CPUs).
+ 	if (nreaders < 0)
+ 		nreaders = (num_online_cpus() >> 1) + (num_online_cpus() >> 2);
+-	if (WARN_ONCE(loops <= 0, "%s: loops = %ld, adjusted to 1\n", __func__, loops))
++	if (WARN_ONCE(loops <= 0, "%s: loops = %d, adjusted to 1\n", __func__, loops))
+ 		loops = 1;
+ 	if (WARN_ONCE(nreaders <= 0, "%s: nreaders = %d, adjusted to 1\n", __func__, nreaders))
+ 		nreaders = 1;
+ 	if (WARN_ONCE(nruns <= 0, "%s: nruns = %d, adjusted to 1\n", __func__, nruns))
+ 		nruns = 1;
++	if (WARN_ONCE(loops > INT_MAX / nreaders,
++		      "%s: nreaders * loops will overflow, adjusted loops to %d",
++		      __func__, INT_MAX / nreaders))
++		loops = INT_MAX / nreaders;
+ 	reader_tasks = kcalloc(nreaders, sizeof(reader_tasks[0]),
+ 			       GFP_KERNEL);
+ 	if (!reader_tasks) {
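[Editorial note: the WARN_ONCE added above guards the later nreaders * loops product. A standalone sketch of the same clamp, with arbitrary example values:]

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int nreaders = 64;
        int loops = INT_MAX / 2;        /* would overflow when multiplied */

        if (loops > INT_MAX / nreaders) {       /* divide, don't multiply */
            loops = INT_MAX / nreaders;
            printf("clamped loops to %d\n", loops);
        }
        printf("total ops: %lld\n", (long long)nreaders * loops);
        return 0;
    }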
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index b473ff056f493c..711043e4eb5434 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -276,7 +276,7 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
+ 	 * callback storms, no need to wake up too early.
+ 	 */
+ 	if (waketype == RCU_NOCB_WAKE_LAZY &&
+-	    rdp->nocb_defer_wakeup == RCU_NOCB_WAKE_NOT) {
++	    rdp_gp->nocb_defer_wakeup == RCU_NOCB_WAKE_NOT) {
+ 		mod_timer(&rdp_gp->nocb_timer, jiffies + rcu_get_jiffies_lazy_flush());
+ 		WRITE_ONCE(rdp_gp->nocb_defer_wakeup, waketype);
+ 	} else if (waketype == RCU_NOCB_WAKE_BYPASS) {
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 89019a14082642..65f3b2cc891da6 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2976,7 +2976,14 @@ void dl_clear_root_domain(struct root_domain *rd)
+ 	int i;
+ 
+ 	guard(raw_spinlock_irqsave)(&rd->dl_bw.lock);
++
++	/*
++	 * Reset total_bw to zero and extra_bw to max_bw so that next
++	 * Reset total_bw to zero and extra_bw to max_bw so that the next
++	 * loop will add the dl-server contributions back properly.
+ 	rd->dl_bw.total_bw = 0;
++	for_each_cpu(i, rd->span)
++		cpu_rq(i)->dl.extra_bw = cpu_rq(i)->dl.max_bw;
+ 
+ 	/*
+ 	 * dl_servers are not tasks. Since dl_add_task_root_domain ignores
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index ad04a5c3162a26..3f9f0a39e85832 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -172,17 +172,35 @@ struct psi_group psi_system = {
+ 	.pcpu = &system_group_pcpu,
+ };
+ 
++static DEFINE_PER_CPU(seqcount_t, psi_seq) = SEQCNT_ZERO(psi_seq);
++
++static inline void psi_write_begin(int cpu)
++{
++	write_seqcount_begin(per_cpu_ptr(&psi_seq, cpu));
++}
++
++static inline void psi_write_end(int cpu)
++{
++	write_seqcount_end(per_cpu_ptr(&psi_seq, cpu));
++}
++
++static inline u32 psi_read_begin(int cpu)
++{
++	return read_seqcount_begin(per_cpu_ptr(&psi_seq, cpu));
++}
++
++static inline bool psi_read_retry(int cpu, u32 seq)
++{
++	return read_seqcount_retry(per_cpu_ptr(&psi_seq, cpu), seq);
++}
++
+ static void psi_avgs_work(struct work_struct *work);
+ 
+ static void poll_timer_fn(struct timer_list *t);
+ 
+ static void group_init(struct psi_group *group)
+ {
+-	int cpu;
+-
+ 	group->enabled = true;
+-	for_each_possible_cpu(cpu)
+-		seqcount_init(&per_cpu_ptr(group->pcpu, cpu)->seq);
+ 	group->avg_last_update = sched_clock();
+ 	group->avg_next_update = group->avg_last_update + psi_period;
+ 	mutex_init(&group->avgs_lock);
+@@ -262,14 +280,14 @@ static void get_recent_times(struct psi_group *group, int cpu,
+ 
+ 	/* Snapshot a coherent view of the CPU state */
+ 	do {
+-		seq = read_seqcount_begin(&groupc->seq);
++		seq = psi_read_begin(cpu);
+ 		now = cpu_clock(cpu);
+ 		memcpy(times, groupc->times, sizeof(groupc->times));
+ 		state_mask = groupc->state_mask;
+ 		state_start = groupc->state_start;
+ 		if (cpu == current_cpu)
+ 			memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
+-	} while (read_seqcount_retry(&groupc->seq, seq));
++	} while (psi_read_retry(cpu, seq));
+ 
+ 	/* Calculate state time deltas against the previous snapshot */
+ 	for (s = 0; s < NR_PSI_STATES; s++) {
+@@ -768,30 +786,20 @@ static void record_times(struct psi_group_cpu *groupc, u64 now)
+ 		groupc->times[PSI_NONIDLE] += delta;
+ }
+ 
++#define for_each_group(iter, group) \
++	for (typeof(group) iter = group; iter; iter = iter->parent)
++
+ static void psi_group_change(struct psi_group *group, int cpu,
+ 			     unsigned int clear, unsigned int set,
+-			     bool wake_clock)
++			     u64 now, bool wake_clock)
+ {
+ 	struct psi_group_cpu *groupc;
+ 	unsigned int t, m;
+ 	u32 state_mask;
+-	u64 now;
+ 
+ 	lockdep_assert_rq_held(cpu_rq(cpu));
+ 	groupc = per_cpu_ptr(group->pcpu, cpu);
+ 
+-	/*
+-	 * First we update the task counts according to the state
+-	 * change requested through the @clear and @set bits.
+-	 *
+-	 * Then if the cgroup PSI stats accounting enabled, we
+-	 * assess the aggregate resource states this CPU's tasks
+-	 * have been in since the last change, and account any
+-	 * SOME and FULL time these may have resulted in.
+-	 */
+-	write_seqcount_begin(&groupc->seq);
+-	now = cpu_clock(cpu);
+-
+ 	/*
+ 	 * Start with TSK_ONCPU, which doesn't have a corresponding
+ 	 * task count - it's just a boolean flag directly encoded in
+@@ -843,7 +851,6 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 
+ 		groupc->state_mask = state_mask;
+ 
+-		write_seqcount_end(&groupc->seq);
+ 		return;
+ 	}
+ 
+@@ -864,8 +871,6 @@ static void psi_group_change(struct psi_group *group, int cpu,
+ 
+ 	groupc->state_mask = state_mask;
+ 
+-	write_seqcount_end(&groupc->seq);
+-
+ 	if (state_mask & group->rtpoll_states)
+ 		psi_schedule_rtpoll_work(group, 1, false);
+ 
+@@ -900,24 +905,29 @@ static void psi_flags_change(struct task_struct *task, int clear, int set)
+ void psi_task_change(struct task_struct *task, int clear, int set)
+ {
+ 	int cpu = task_cpu(task);
+-	struct psi_group *group;
++	u64 now;
+ 
+ 	if (!task->pid)
+ 		return;
+ 
+ 	psi_flags_change(task, clear, set);
+ 
+-	group = task_psi_group(task);
+-	do {
+-		psi_group_change(group, cpu, clear, set, true);
+-	} while ((group = group->parent));
++	psi_write_begin(cpu);
++	now = cpu_clock(cpu);
++	for_each_group(group, task_psi_group(task))
++		psi_group_change(group, cpu, clear, set, now, true);
++	psi_write_end(cpu);
+ }
+ 
+ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		     bool sleep)
+ {
+-	struct psi_group *group, *common = NULL;
++	struct psi_group *common = NULL;
+ 	int cpu = task_cpu(prev);
++	u64 now;
++
++	psi_write_begin(cpu);
++	now = cpu_clock(cpu);
+ 
+ 	if (next->pid) {
+ 		psi_flags_change(next, 0, TSK_ONCPU);
+@@ -926,16 +936,15 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		 * ancestors with @prev, those will already have @prev's
+ 		 * TSK_ONCPU bit set, and we can stop the iteration there.
+ 		 */
+-		group = task_psi_group(next);
+-		do {
+-			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
+-			    PSI_ONCPU) {
++		for_each_group(group, task_psi_group(next)) {
++			struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
++
++			if (groupc->state_mask & PSI_ONCPU) {
+ 				common = group;
+ 				break;
+ 			}
+-
+-			psi_group_change(group, cpu, 0, TSK_ONCPU, true);
+-		} while ((group = group->parent));
++			psi_group_change(group, cpu, 0, TSK_ONCPU, now, true);
++		}
+ 	}
+ 
+ 	if (prev->pid) {
+@@ -968,12 +977,11 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 
+ 		psi_flags_change(prev, clear, set);
+ 
+-		group = task_psi_group(prev);
+-		do {
++		for_each_group(group, task_psi_group(prev)) {
+ 			if (group == common)
+ 				break;
+-			psi_group_change(group, cpu, clear, set, wake_clock);
+-		} while ((group = group->parent));
++			psi_group_change(group, cpu, clear, set, now, wake_clock);
++		}
+ 
+ 		/*
+ 		 * TSK_ONCPU is handled up to the common ancestor. If there are
+@@ -983,20 +991,21 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+ 		 */
+ 		if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
+ 			clear &= ~TSK_ONCPU;
+-			for (; group; group = group->parent)
+-				psi_group_change(group, cpu, clear, set, wake_clock);
++			for_each_group(group, common)
++				psi_group_change(group, cpu, clear, set, now, wake_clock);
+ 		}
+ 	}
++	psi_write_end(cpu);
+ }
+ 
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_struct *prev)
+ {
+ 	int cpu = task_cpu(curr);
+-	struct psi_group *group;
+ 	struct psi_group_cpu *groupc;
+ 	s64 delta;
+ 	u64 irq;
++	u64 now;
+ 
+ 	if (static_branch_likely(&psi_disabled) || !irqtime_enabled())
+ 		return;
+@@ -1005,8 +1014,7 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 		return;
+ 
+ 	lockdep_assert_rq_held(rq);
+-	group = task_psi_group(curr);
+-	if (prev && task_psi_group(prev) == group)
++	if (prev && task_psi_group(prev) == task_psi_group(curr))
+ 		return;
+ 
+ 	irq = irq_time_read(cpu);
+@@ -1015,25 +1023,22 @@ void psi_account_irqtime(struct rq *rq, struct task_struct *curr, struct task_st
+ 		return;
+ 	rq->psi_irq_time = irq;
+ 
+-	do {
+-		u64 now;
++	psi_write_begin(cpu);
++	now = cpu_clock(cpu);
+ 
++	for_each_group(group, task_psi_group(curr)) {
+ 		if (!group->enabled)
+ 			continue;
+ 
+ 		groupc = per_cpu_ptr(group->pcpu, cpu);
+ 
+-		write_seqcount_begin(&groupc->seq);
+-		now = cpu_clock(cpu);
+-
+ 		record_times(groupc, now);
+ 		groupc->times[PSI_IRQ_FULL] += delta;
+ 
+-		write_seqcount_end(&groupc->seq);
+-
+ 		if (group->rtpoll_states & (1 << PSI_IRQ_FULL))
+ 			psi_schedule_rtpoll_work(group, 1, false);
+-	} while ((group = group->parent));
++	}
++	psi_write_end(cpu);
+ }
+ #endif
+ 
+@@ -1221,12 +1226,14 @@ void psi_cgroup_restart(struct psi_group *group)
+ 		return;
+ 
+ 	for_each_possible_cpu(cpu) {
+-		struct rq *rq = cpu_rq(cpu);
+-		struct rq_flags rf;
++		u64 now;
+ 
+-		rq_lock_irq(rq, &rf);
+-		psi_group_change(group, cpu, 0, 0, true);
+-		rq_unlock_irq(rq, &rf);
++		guard(rq_lock_irq)(cpu_rq(cpu));
++
++		psi_write_begin(cpu);
++		now = cpu_clock(cpu);
++		psi_group_change(group, cpu, 0, 0, now, true);
++		psi_write_end(cpu);
+ 	}
+ }
+ #endif /* CONFIG_CGROUPS */
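[Editorial note: a kernel-style fragment (not a buildable module) of the pattern the psi.c changes adopt: one shared per-CPU seqcount guards all groups on a CPU, so a writer takes it once per state change rather than once per group. Field and function names are illustrative.]

    #include <linux/seqlock.h>
    #include <linux/percpu.h>
    #include <linux/types.h>

    static DEFINE_PER_CPU(seqcount_t, demo_seq) = SEQCNT_ZERO(demo_seq);
    static DEFINE_PER_CPU(u64, demo_stat);

    static void demo_write(int cpu, u64 val)
    {
        write_seqcount_begin(per_cpu_ptr(&demo_seq, cpu));
        *per_cpu_ptr(&demo_stat, cpu) = val;    /* update under the seqcount */
        write_seqcount_end(per_cpu_ptr(&demo_seq, cpu));
    }

    static u64 demo_read(int cpu)
    {
        unsigned int seq;
        u64 val;

        do {                                    /* retry if a writer raced us */
            seq = read_seqcount_begin(per_cpu_ptr(&demo_seq, cpu));
            val = *per_cpu_ptr(&demo_stat, cpu);
        } while (read_seqcount_retry(per_cpu_ptr(&demo_seq, cpu), seq));
        return val;
    }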
+diff --git a/kernel/trace/power-traces.c b/kernel/trace/power-traces.c
+index 21bb161c231698..f2fe33573e544a 100644
+--- a/kernel/trace/power-traces.c
++++ b/kernel/trace/power-traces.c
+@@ -17,5 +17,4 @@
+ EXPORT_TRACEPOINT_SYMBOL_GPL(suspend_resume);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(cpu_idle);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(cpu_frequency);
+-EXPORT_TRACEPOINT_SYMBOL_GPL(powernv_throttle);
+ 
+diff --git a/kernel/trace/preemptirq_delay_test.c b/kernel/trace/preemptirq_delay_test.c
+index 314ffc143039c5..acb0c971a4082a 100644
+--- a/kernel/trace/preemptirq_delay_test.c
++++ b/kernel/trace/preemptirq_delay_test.c
+@@ -117,12 +117,15 @@ static int preemptirq_delay_run(void *data)
+ {
+ 	int i;
+ 	int s = MIN(burst_size, NR_TEST_FUNCS);
+-	struct cpumask cpu_mask;
++	cpumask_var_t cpu_mask;
++
++	if (!alloc_cpumask_var(&cpu_mask, GFP_KERNEL))
++		return -ENOMEM;
+ 
+ 	if (cpu_affinity > -1) {
+-		cpumask_clear(&cpu_mask);
+-		cpumask_set_cpu(cpu_affinity, &cpu_mask);
+-		if (set_cpus_allowed_ptr(current, &cpu_mask))
++		cpumask_clear(cpu_mask);
++		cpumask_set_cpu(cpu_affinity, cpu_mask);
++		if (set_cpus_allowed_ptr(current, cpu_mask))
+ 			pr_err("cpu_affinity:%d, failed\n", cpu_affinity);
+ 	}
+ 
+@@ -139,6 +142,8 @@ static int preemptirq_delay_run(void *data)
+ 
+ 	__set_current_state(TASK_RUNNING);
+ 
++	free_cpumask_var(cpu_mask);
++
+ 	return 0;
+ }
+ 
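[Editorial note: sketch of the off-stack cpumask pattern adopted above. With large NR_CPUS a struct cpumask can be too big for a kernel stack, so cpumask_var_t is allocated instead (a plain array when CPUMASK_OFFSTACK is off). Kernel-style fragment; the function name is made up.]

    #include <linux/cpumask.h>
    #include <linux/sched.h>
    #include <linux/slab.h>

    static int demo_pin_to_cpu(int cpu)
    {
        cpumask_var_t mask;
        int ret;

        if (!alloc_cpumask_var(&mask, GFP_KERNEL))   /* heap, not stack */
            return -ENOMEM;

        cpumask_clear(mask);
        cpumask_set_cpu(cpu, mask);
        ret = set_cpus_allowed_ptr(current, mask);

        free_cpumask_var(mask);                      /* always pair the free */
        return ret;
    }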
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 00fc38d70e868a..24bb5287c415b6 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -5846,24 +5846,20 @@ ring_buffer_consume(struct trace_buffer *buffer, int cpu, u64 *ts,
+ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+ 
+ /**
+- * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer
++ * ring_buffer_read_start - start a non-consuming read of the buffer
+  * @buffer: The ring buffer to read from
+  * @cpu: The cpu buffer to iterate over
+  * @flags: gfp flags to use for memory allocation
+  *
+- * This performs the initial preparations necessary to iterate
+- * through the buffer.  Memory is allocated, buffer resizing
+- * is disabled, and the iterator pointer is returned to the caller.
+- *
+- * After a sequence of ring_buffer_read_prepare calls, the user is
+- * expected to make at least one call to ring_buffer_read_prepare_sync.
+- * Afterwards, ring_buffer_read_start is invoked to get things going
+- * for real.
++ * This creates an iterator to allow non-consuming iteration through
++ * the buffer. If the buffer is disabled for writing, it will produce
++ * the same information each time, but if the buffer is still writing
++ * then the first hit of a write will cause the iteration to stop.
+  *
+- * This overall must be paired with ring_buffer_read_finish.
++ * Must be paired with ring_buffer_read_finish.
+  */
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags)
++ring_buffer_read_start(struct trace_buffer *buffer, int cpu, gfp_t flags)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct ring_buffer_iter *iter;
+@@ -5889,51 +5885,12 @@ ring_buffer_read_prepare(struct trace_buffer *buffer, int cpu, gfp_t flags)
+ 
+ 	atomic_inc(&cpu_buffer->resize_disabled);
+ 
+-	return iter;
+-}
+-EXPORT_SYMBOL_GPL(ring_buffer_read_prepare);
+-
+-/**
+- * ring_buffer_read_prepare_sync - Synchronize a set of prepare calls
+- *
+- * All previously invoked ring_buffer_read_prepare calls to prepare
+- * iterators will be synchronized.  Afterwards, read_buffer_read_start
+- * calls on those iterators are allowed.
+- */
+-void
+-ring_buffer_read_prepare_sync(void)
+-{
+-	synchronize_rcu();
+-}
+-EXPORT_SYMBOL_GPL(ring_buffer_read_prepare_sync);
+-
+-/**
+- * ring_buffer_read_start - start a non consuming read of the buffer
+- * @iter: The iterator returned by ring_buffer_read_prepare
+- *
+- * This finalizes the startup of an iteration through the buffer.
+- * The iterator comes from a call to ring_buffer_read_prepare and
+- * an intervening ring_buffer_read_prepare_sync must have been
+- * performed.
+- *
+- * Must be paired with ring_buffer_read_finish.
+- */
+-void
+-ring_buffer_read_start(struct ring_buffer_iter *iter)
+-{
+-	struct ring_buffer_per_cpu *cpu_buffer;
+-	unsigned long flags;
+-
+-	if (!iter)
+-		return;
+-
+-	cpu_buffer = iter->cpu_buffer;
+-
+-	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++	guard(raw_spinlock_irqsave)(&cpu_buffer->reader_lock);
+ 	arch_spin_lock(&cpu_buffer->lock);
+ 	rb_iter_reset(iter);
+ 	arch_spin_unlock(&cpu_buffer->lock);
+-	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++
++	return iter;
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_read_start);
+ 
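[Editorial note: with this change the prepare/prepare_sync/start triple collapses into a single call. A sketch of the resulting iterator lifecycle, mirroring the trace.c callers updated later in this patch; the walking step is elided:]

    #include <linux/ring_buffer.h>
    #include <linux/gfp.h>

    static void demo_dump_cpu(struct trace_buffer *buffer, int cpu)
    {
        struct ring_buffer_iter *iter;

        iter = ring_buffer_read_start(buffer, cpu, GFP_KERNEL);
        if (!iter)
            return;

        /* ... walk events, e.g. with ring_buffer_iter_peek() ... */

        ring_buffer_read_finish(iter);      /* required pairing */
    }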
+diff --git a/kernel/trace/rv/monitors/scpd/Kconfig b/kernel/trace/rv/monitors/scpd/Kconfig
+index b9114fbf680f99..682d0416188b39 100644
+--- a/kernel/trace/rv/monitors/scpd/Kconfig
++++ b/kernel/trace/rv/monitors/scpd/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_SCPD
+ 	depends on RV
+-	depends on PREEMPT_TRACER
++	depends on TRACE_PREEMPT_TOGGLE
+ 	depends on RV_MON_SCHED
+ 	default y
+ 	select DA_MON_EVENTS_IMPLICIT
+diff --git a/kernel/trace/rv/monitors/sncid/Kconfig b/kernel/trace/rv/monitors/sncid/Kconfig
+index 76bcfef4fd1032..3a5639feaaaf69 100644
+--- a/kernel/trace/rv/monitors/sncid/Kconfig
++++ b/kernel/trace/rv/monitors/sncid/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_SNCID
+ 	depends on RV
+-	depends on IRQSOFF_TRACER
++	depends on TRACE_IRQFLAGS
+ 	depends on RV_MON_SCHED
+ 	default y
+ 	select DA_MON_EVENTS_IMPLICIT
+diff --git a/kernel/trace/rv/monitors/snep/Kconfig b/kernel/trace/rv/monitors/snep/Kconfig
+index 77527f97123250..7dd54f434ff758 100644
+--- a/kernel/trace/rv/monitors/snep/Kconfig
++++ b/kernel/trace/rv/monitors/snep/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_SNEP
+ 	depends on RV
+-	depends on PREEMPT_TRACER
++	depends on TRACE_PREEMPT_TOGGLE
+ 	depends on RV_MON_SCHED
+ 	default y
+ 	select DA_MON_EVENTS_IMPLICIT
+diff --git a/kernel/trace/rv/monitors/wip/Kconfig b/kernel/trace/rv/monitors/wip/Kconfig
+index e464b9294865b5..87a26195792b44 100644
+--- a/kernel/trace/rv/monitors/wip/Kconfig
++++ b/kernel/trace/rv/monitors/wip/Kconfig
+@@ -2,7 +2,7 @@
+ #
+ config RV_MON_WIP
+ 	depends on RV
+-	depends on PREEMPT_TRACER
++	depends on TRACE_PREEMPT_TOGGLE
+ 	select DA_MON_EVENTS_IMPLICIT
+ 	bool "wip monitor"
+ 	help
+diff --git a/kernel/trace/rv/rv_trace.h b/kernel/trace/rv/rv_trace.h
+index 422b75f58891eb..01fa84824bcbe6 100644
+--- a/kernel/trace/rv/rv_trace.h
++++ b/kernel/trace/rv/rv_trace.h
+@@ -16,24 +16,24 @@ DECLARE_EVENT_CLASS(event_da_monitor,
+ 	TP_ARGS(state, event, next_state, final_state),
+ 
+ 	TP_STRUCT__entry(
+-		__array(	char,	state,		MAX_DA_NAME_LEN	)
+-		__array(	char,	event,		MAX_DA_NAME_LEN	)
+-		__array(	char,	next_state,	MAX_DA_NAME_LEN	)
+-		__field(	bool,	final_state			)
++		__string(	state,		state		)
++		__string(	event,		event		)
++		__string(	next_state,	next_state	)
++		__field(	bool,		final_state	)
+ 	),
+ 
+ 	TP_fast_assign(
+-		memcpy(__entry->state,		state,		MAX_DA_NAME_LEN);
+-		memcpy(__entry->event,		event,		MAX_DA_NAME_LEN);
+-		memcpy(__entry->next_state,	next_state,	MAX_DA_NAME_LEN);
+-		__entry->final_state		= final_state;
++		__assign_str(state);
++		__assign_str(event);
++		__assign_str(next_state);
++		__entry->final_state = final_state;
+ 	),
+ 
+-	TP_printk("%s x %s -> %s %s",
+-		__entry->state,
+-		__entry->event,
+-		__entry->next_state,
+-		__entry->final_state ? "(final)" : "")
++	TP_printk("%s x %s -> %s%s",
++		__get_str(state),
++		__get_str(event),
++		__get_str(next_state),
++		__entry->final_state ? " (final)" : "")
+ );
+ 
+ DECLARE_EVENT_CLASS(error_da_monitor,
+@@ -43,18 +43,18 @@ DECLARE_EVENT_CLASS(error_da_monitor,
+ 	TP_ARGS(state, event),
+ 
+ 	TP_STRUCT__entry(
+-		__array(	char,	state,		MAX_DA_NAME_LEN	)
+-		__array(	char,	event,		MAX_DA_NAME_LEN	)
++		__string(	state,	state	)
++		__string(	event,	event	)
+ 	),
+ 
+ 	TP_fast_assign(
+-		memcpy(__entry->state,		state,		MAX_DA_NAME_LEN);
+-		memcpy(__entry->event,		event,		MAX_DA_NAME_LEN);
++		__assign_str(state);
++		__assign_str(event);
+ 	),
+ 
+ 	TP_printk("event %s not expected in the state %s",
+-		__entry->event,
+-		__entry->state)
++		__get_str(event),
++		__get_str(state))
+ );
+ 
+ #include <monitors/wip/wip_trace.h>
+@@ -75,27 +75,27 @@ DECLARE_EVENT_CLASS(event_da_monitor_id,
+ 	TP_ARGS(id, state, event, next_state, final_state),
+ 
+ 	TP_STRUCT__entry(
+-		__field(	int,	id				)
+-		__array(	char,	state,		MAX_DA_NAME_LEN	)
+-		__array(	char,	event,		MAX_DA_NAME_LEN	)
+-		__array(	char,	next_state,	MAX_DA_NAME_LEN	)
+-		__field(	bool,	final_state			)
++		__field(	int,		id		)
++		__string(	state,		state		)
++		__string(	event,		event		)
++		__string(	next_state,	next_state	)
++		__field(	bool,		final_state	)
+ 	),
+ 
+ 	TP_fast_assign(
+-		memcpy(__entry->state,		state,		MAX_DA_NAME_LEN);
+-		memcpy(__entry->event,		event,		MAX_DA_NAME_LEN);
+-		memcpy(__entry->next_state,	next_state,	MAX_DA_NAME_LEN);
+-		__entry->id			= id;
+-		__entry->final_state		= final_state;
++		__assign_str(state);
++		__assign_str(event);
++		__assign_str(next_state);
++		__entry->id		= id;
++		__entry->final_state	= final_state;
+ 	),
+ 
+-	TP_printk("%d: %s x %s -> %s %s",
++	TP_printk("%d: %s x %s -> %s%s",
+ 		__entry->id,
+-		__entry->state,
+-		__entry->event,
+-		__entry->next_state,
+-		__entry->final_state ? "(final)" : "")
++		__get_str(state),
++		__get_str(event),
++		__get_str(next_state),
++		__entry->final_state ? " (final)" : "")
+ );
+ 
+ DECLARE_EVENT_CLASS(error_da_monitor_id,
+@@ -105,21 +105,21 @@ DECLARE_EVENT_CLASS(error_da_monitor_id,
+ 	TP_ARGS(id, state, event),
+ 
+ 	TP_STRUCT__entry(
+-		__field(	int,	id				)
+-		__array(	char,	state,		MAX_DA_NAME_LEN	)
+-		__array(	char,	event,		MAX_DA_NAME_LEN	)
++		__field(	int,	id	)
++		__string(	state,	state	)
++		__string(	event,	event	)
+ 	),
+ 
+ 	TP_fast_assign(
+-		memcpy(__entry->state,		state,		MAX_DA_NAME_LEN);
+-		memcpy(__entry->event,		event,		MAX_DA_NAME_LEN);
+-		__entry->id			= id;
++		__assign_str(state);
++		__assign_str(event);
++		__entry->id	= id;
+ 	),
+ 
+ 	TP_printk("%d: event %s not expected in the state %s",
+ 		__entry->id,
+-		__entry->event,
+-		__entry->state)
++		__get_str(event),
++		__get_str(state))
+ );
+ 
+ #include <monitors/wwnr/wwnr_trace.h>
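[Editorial note: a schematic TRACE_EVENT showing the __array -> __string conversion used above: __string() reserves only strlen()+1 bytes in the ring buffer instead of a fixed MAX_DA_NAME_LEN array, and __assign_str()/__get_str() store and fetch it. The event name is made up, and the usual trace header boilerplate (TRACE_SYSTEM, define_trace.h) is omitted.]

    #include <linux/tracepoint.h>

    TRACE_EVENT(demo_state,
        TP_PROTO(char *state),
        TP_ARGS(state),

        TP_STRUCT__entry(
            __string(state, state)          /* dynamic, not fixed-size */
        ),

        TP_fast_assign(
            __assign_str(state);            /* one-arg form, as in this patch */
        ),

        TP_printk("state %s", __get_str(state))
    );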
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 95ae7c4e58357f..7996f26c3f46d2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4735,21 +4735,15 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter->buffer_iter[cpu] =
+-				ring_buffer_read_prepare(iter->array_buffer->buffer,
+-							 cpu, GFP_KERNEL);
+-		}
+-		ring_buffer_read_prepare_sync();
+-		for_each_tracing_cpu(cpu) {
+-			ring_buffer_read_start(iter->buffer_iter[cpu]);
++				ring_buffer_read_start(iter->array_buffer->buffer,
++						       cpu, GFP_KERNEL);
+ 			tracing_iter_reset(iter, cpu);
+ 		}
+ 	} else {
+ 		cpu = iter->cpu_file;
+ 		iter->buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter->array_buffer->buffer,
+-						 cpu, GFP_KERNEL);
+-		ring_buffer_read_prepare_sync();
+-		ring_buffer_read_start(iter->buffer_iter[cpu]);
++			ring_buffer_read_start(iter->array_buffer->buffer,
++					       cpu, GFP_KERNEL);
+ 		tracing_iter_reset(iter, cpu);
+ 	}
+ 
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 3885aadc434d93..196c8bf349704a 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1344,13 +1344,14 @@ struct filter_list {
+ 
+ struct filter_head {
+ 	struct list_head	list;
+-	struct rcu_head		rcu;
++	union {
++		struct rcu_head		rcu;
++		struct rcu_work		rwork;
++	};
+ };
+ 
+-
+-static void free_filter_list(struct rcu_head *rhp)
++static void free_filter_list(struct filter_head *filter_list)
+ {
+-	struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
+ 	struct filter_list *filter_item, *tmp;
+ 
+ 	list_for_each_entry_safe(filter_item, tmp, &filter_list->list, list) {
+@@ -1361,9 +1362,20 @@ static void free_filter_list(struct rcu_head *rhp)
+ 	kfree(filter_list);
+ }
+ 
++static void free_filter_list_work(struct work_struct *work)
++{
++	struct filter_head *filter_list;
++
++	filter_list = container_of(to_rcu_work(work), struct filter_head, rwork);
++	free_filter_list(filter_list);
++}
++
+ static void free_filter_list_tasks(struct rcu_head *rhp)
+ {
+-	call_rcu(rhp, free_filter_list);
++	struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
++
++	INIT_RCU_WORK(&filter_list->rwork, free_filter_list_work);
++	queue_rcu_work(system_wq, &filter_list->rwork);
+ }
+ 
+ /*
+@@ -1460,7 +1472,7 @@ static void filter_free_subsystem_filters(struct trace_subsystem_dir *dir,
+ 	tracepoint_synchronize_unregister();
+ 
+ 	if (head)
+-		free_filter_list(&head->rcu);
++		free_filter_list(head);
+ 
+ 	list_for_each_entry(file, &tr->events, list) {
+ 		if (file->system != dir || !file->filter)
+@@ -2305,7 +2317,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 	return 0;
+  fail:
+ 	/* No call succeeded */
+-	free_filter_list(&filter_list->rcu);
++	free_filter_list(filter_list);
+ 	parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
+ 	return -EINVAL;
+  fail_mem:
+@@ -2315,7 +2327,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
+ 	if (!fail)
+ 		delay_free_filter(filter_list);
+ 	else
+-		free_filter_list(&filter_list->rcu);
++		free_filter_list(filter_list);
+ 
+ 	return -ENOMEM;
+ }
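[Editorial note: a kernel-style sketch of the deferral pattern introduced above, simplified to its final leg: rather than kfree()ing from an RCU callback in softirq context, the rcu_head is unioned with an rcu_work so the free runs from a workqueue in process context after a grace period. Names are illustrative.]

    #include <linux/workqueue.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct demo_obj {
        union {
            struct rcu_head rcu;      /* for the grace-period phase */
            struct rcu_work rwork;    /* for the workqueue phase */
        };
    };

    static void demo_free_work(struct work_struct *work)
    {
        kfree(container_of(to_rcu_work(work), struct demo_obj, rwork));
    }

    static void demo_free_deferred(struct demo_obj *obj)
    {
        /* waits for a grace period, then queues demo_free_work() */
        INIT_RCU_WORK(&obj->rwork, demo_free_work);
        queue_rcu_work(system_wq, &obj->rwork);
    }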
+diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
+index d7b135de958a2e..896ff78b8349cb 100644
+--- a/kernel/trace/trace_kdb.c
++++ b/kernel/trace/trace_kdb.c
+@@ -43,17 +43,15 @@ static void ftrace_dump_buf(int skip_entries, long cpu_file)
+ 	if (cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter.buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter.array_buffer->buffer,
+-						 cpu, GFP_ATOMIC);
+-			ring_buffer_read_start(iter.buffer_iter[cpu]);
++			ring_buffer_read_start(iter.array_buffer->buffer,
++					       cpu, GFP_ATOMIC);
+ 			tracing_iter_reset(&iter, cpu);
+ 		}
+ 	} else {
+ 		iter.cpu_file = cpu_file;
+ 		iter.buffer_iter[cpu_file] =
+-			ring_buffer_read_prepare(iter.array_buffer->buffer,
++			ring_buffer_read_start(iter.array_buffer->buffer,
+ 						 cpu_file, GFP_ATOMIC);
+-		ring_buffer_read_start(iter.buffer_iter[cpu_file]);
+ 		tracing_iter_reset(&iter, cpu_file);
+ 	}
+ 
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 8686e329b8f2ce..f629db485a0797 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -199,7 +199,7 @@ void put_ucounts(struct ucounts *ucounts)
+ 	}
+ }
+ 
+-static inline bool atomic_long_inc_below(atomic_long_t *v, int u)
++static inline bool atomic_long_inc_below(atomic_long_t *v, long u)
+ {
+ 	long c, old;
+ 	c = atomic_long_read(v);
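[Editorial note: a userspace C11 model of atomic_long_inc_below(): increment v only while it stays below u. Widening the limit parameter to long, as the hunk above does, avoids truncating it through an int. This is an analogue, not the kernel implementation.]

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool inc_below(atomic_long *v, long u)
    {
        long c = atomic_load(v);

        do {
            if (c >= u)                       /* limit reached: refuse */
                return false;
        } while (!atomic_compare_exchange_weak(v, &c, c + 1));
        return true;
    }

    int main(void)
    {
        atomic_long v = 0;
        long limit = 3;

        while (inc_below(&v, limit))
            ;
        printf("stopped at %ld\n", atomic_load(&v));   /* stopped at 3 */
        return 0;
    }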
+diff --git a/lib/tests/fortify_kunit.c b/lib/tests/fortify_kunit.c
+index 29ffc62a71e3f9..fc9c76f026d636 100644
+--- a/lib/tests/fortify_kunit.c
++++ b/lib/tests/fortify_kunit.c
+@@ -1003,8 +1003,8 @@ static void fortify_test_memcmp(struct kunit *test)
+ {
+ 	char one[] = "My mind is going ...";
+ 	char two[] = "My mind is going ... I can feel it.";
+-	size_t one_len = sizeof(one) - 1;
+-	size_t two_len = sizeof(two) - 1;
++	volatile size_t one_len = sizeof(one) - 1;
++	volatile size_t two_len = sizeof(two) - 1;
+ 
+ 	OPTIMIZER_HIDE_VAR(one_len);
+ 	OPTIMIZER_HIDE_VAR(two_len);
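[Editorial note: a userspace sketch of why those lengths become volatile: with constant strings and lengths the compiler can fold memcmp() at compile time, so the fortify runtime check never executes. The empty asm below mirrors what OPTIMIZER_HIDE_VAR() does in the kernel; everything else is illustrative.]

    #include <stdio.h>
    #include <string.h>

    #define HIDE(var) __asm__("" : "+r"(var))   /* opaque to the optimizer */

    int main(void)
    {
        char one[] = "abc";
        char two[] = "abd";
        volatile size_t len = sizeof(one) - 1;   /* volatile: re-read at use */
        size_t n = len;

        HIDE(n);                                 /* value no longer constant */
        printf("one < two: %d\n", memcmp(one, two, n) < 0);
        return 0;
    }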
+diff --git a/mm/hmm.c b/mm/hmm.c
+index feac86196a65f5..4078fc0ccd68da 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -183,6 +183,7 @@ static inline unsigned long hmm_pfn_flags_order(unsigned long order)
+ 	return order << HMM_PFN_ORDER_SHIFT;
+ }
+ 
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ 						 pmd_t pmd)
+ {
+@@ -193,7 +194,6 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+ 	       hmm_pfn_flags_order(PMD_SHIFT - PAGE_SHIFT);
+ }
+ 
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
+ 			      unsigned long end, unsigned long hmm_pfns[],
+ 			      pmd_t pmd)
+diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
+index 5f725cc67334f6..5cd2b07895007f 100644
+--- a/mm/mmap_lock.c
++++ b/mm/mmap_lock.c
+@@ -164,8 +164,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+ 	 */
+ 
+ 	/* Check if the vma we locked is the right one. */
+-	if (unlikely(vma->vm_mm != mm ||
+-		     address < vma->vm_start || address >= vma->vm_end))
++	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+ 		goto inval_end_read;
+ 
+ 	rcu_read_unlock();
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 3a5a65b1f41a3c..c67dfc17a81920 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -5928,8 +5928,8 @@ struct folio *shmem_read_folio_gfp(struct address_space *mapping,
+ 	struct folio *folio;
+ 	int error;
+ 
+-	error = shmem_get_folio_gfp(inode, index, 0, &folio, SGP_CACHE,
+-				    gfp, NULL, NULL);
++	error = shmem_get_folio_gfp(inode, index, i_size_read(inode),
++				    &folio, SGP_CACHE, gfp, NULL, NULL);
+ 	if (error)
+ 		return ERR_PTR(error);
+ 
+diff --git a/mm/slub.c b/mm/slub.c
+index 31e11ef256f90a..45a963e363d32b 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4930,12 +4930,12 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
+  * When slub_debug_orig_size() is off, krealloc() only knows about the bucket
+  * size of an allocation (but not the exact size it was allocated with) and
+  * hence implements the following semantics for shrinking and growing buffers
+- * with __GFP_ZERO.
++ * with __GFP_ZERO::
+  *
+- *         new             bucket
+- * 0       size             size
+- * |--------|----------------|
+- * |  keep  |      zero      |
++ *           new             bucket
++ *   0       size             size
++ *   |--------|----------------|
++ *   |  keep  |      zero      |
+  *
+  * Otherwise, the original allocation size 'orig_size' could be used to
+  * precisely clear the requested size, and the new size will also be stored
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 68ce283e84be82..4f47ec9118f84a 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1115,6 +1115,7 @@ static void swap_range_alloc(struct swap_info_struct *si,
+ 		if (vm_swap_full())
+ 			schedule_work(&si->reclaim_work);
+ 	}
++	atomic_long_sub(nr_entries, &nr_swap_pages);
+ }
+ 
+ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
+@@ -1313,7 +1314,6 @@ int folio_alloc_swap(struct folio *folio, gfp_t gfp)
+ 	if (add_to_swap_cache(folio, entry, gfp | __GFP_NOMEMALLOC, NULL))
+ 		goto out_free;
+ 
+-	atomic_long_sub(size, &nr_swap_pages);
+ 	return 0;
+ 
+ out_free:
+@@ -3141,43 +3141,30 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
+ 	return maxpages;
+ }
+ 
+-static int setup_swap_map_and_extents(struct swap_info_struct *si,
+-					union swap_header *swap_header,
+-					unsigned char *swap_map,
+-					unsigned long maxpages,
+-					sector_t *span)
++static int setup_swap_map(struct swap_info_struct *si,
++			  union swap_header *swap_header,
++			  unsigned char *swap_map,
++			  unsigned long maxpages)
+ {
+-	unsigned int nr_good_pages;
+ 	unsigned long i;
+-	int nr_extents;
+-
+-	nr_good_pages = maxpages - 1;	/* omit header page */
+ 
++	swap_map[0] = SWAP_MAP_BAD; /* omit header page */
+ 	for (i = 0; i < swap_header->info.nr_badpages; i++) {
+ 		unsigned int page_nr = swap_header->info.badpages[i];
+ 		if (page_nr == 0 || page_nr > swap_header->info.last_page)
+ 			return -EINVAL;
+ 		if (page_nr < maxpages) {
+ 			swap_map[page_nr] = SWAP_MAP_BAD;
+-			nr_good_pages--;
++			si->pages--;
+ 		}
+ 	}
+ 
+-	if (nr_good_pages) {
+-		swap_map[0] = SWAP_MAP_BAD;
+-		si->max = maxpages;
+-		si->pages = nr_good_pages;
+-		nr_extents = setup_swap_extents(si, span);
+-		if (nr_extents < 0)
+-			return nr_extents;
+-		nr_good_pages = si->pages;
+-	}
+-	if (!nr_good_pages) {
++	if (!si->pages) {
+ 		pr_warn("Empty swap-file\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	return nr_extents;
++	return 0;
+ }
+ 
+ #define SWAP_CLUSTER_INFO_COLS						\
+@@ -3217,13 +3204,17 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
+ 	 * Mark unusable pages as unavailable. The clusters aren't
+ 	 * marked free yet, so no list operations are involved yet.
+ 	 *
+-	 * See setup_swap_map_and_extents(): header page, bad pages,
++	 * See setup_swap_map(): header page, bad pages,
+ 	 * and the EOF part of the last cluster.
+ 	 */
+ 	inc_cluster_info_page(si, cluster_info, 0);
+-	for (i = 0; i < swap_header->info.nr_badpages; i++)
+-		inc_cluster_info_page(si, cluster_info,
+-				      swap_header->info.badpages[i]);
++	for (i = 0; i < swap_header->info.nr_badpages; i++) {
++		unsigned int page_nr = swap_header->info.badpages[i];
++
++		if (page_nr >= maxpages)
++			continue;
++		inc_cluster_info_page(si, cluster_info, page_nr);
++	}
+ 	for (i = maxpages; i < round_up(maxpages, SWAPFILE_CLUSTER); i++)
+ 		inc_cluster_info_page(si, cluster_info, i);
+ 
+@@ -3363,6 +3354,21 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
+ 		goto bad_swap_unlock_inode;
+ 	}
+ 
++	si->max = maxpages;
++	si->pages = maxpages - 1;
++	nr_extents = setup_swap_extents(si, &span);
++	if (nr_extents < 0) {
++		error = nr_extents;
++		goto bad_swap_unlock_inode;
++	}
++	if (si->pages != si->max - 1) {
++		pr_err("swap:%u != (max:%u - 1)\n", si->pages, si->max);
++		error = -EINVAL;
++		goto bad_swap_unlock_inode;
++	}
++
++	maxpages = si->max;
++
+ 	/* OK, set up the swap map and apply the bad block list */
+ 	swap_map = vzalloc(maxpages);
+ 	if (!swap_map) {
+@@ -3374,12 +3380,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
+ 	if (error)
+ 		goto bad_swap_unlock_inode;
+ 
+-	nr_extents = setup_swap_map_and_extents(si, swap_header, swap_map,
+-						maxpages, &span);
+-	if (unlikely(nr_extents < 0)) {
+-		error = nr_extents;
++	error = setup_swap_map(si, swap_header, swap_map, maxpages);
++	if (error)
+ 		goto bad_swap_unlock_inode;
+-	}
+ 
+ 	/*
+ 	 * Use kvmalloc_array instead of bitmap_zalloc as the allocation order might
+diff --git a/net/bluetooth/coredump.c b/net/bluetooth/coredump.c
+index 819eacb3876228..720cb79adf9648 100644
+--- a/net/bluetooth/coredump.c
++++ b/net/bluetooth/coredump.c
+@@ -249,15 +249,15 @@ static void hci_devcd_dump(struct hci_dev *hdev)
+ 
+ 	size = hdev->dump.tail - hdev->dump.head;
+ 
+-	/* Emit a devcoredump with the available data */
+-	dev_coredumpv(&hdev->dev, hdev->dump.head, size, GFP_KERNEL);
+-
+ 	/* Send a copy to monitor as a diagnostic packet */
+ 	skb = bt_skb_alloc(size, GFP_ATOMIC);
+ 	if (skb) {
+ 		skb_put_data(skb, hdev->dump.head, size);
+ 		hci_recv_diag(hdev, skb);
+ 	}
++
++	/* Emit a devcoredump with the available data */
++	dev_coredumpv(&hdev->dev, hdev->dump.head, size, GFP_KERNEL);
+ }
+ 
+ static void hci_devcd_handle_pkt_complete(struct hci_dev *hdev,
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index cf4b30ac9e0e57..c1dd8d78701fe5 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6239,6 +6239,11 @@ static void hci_le_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ static u8 ext_evt_type_to_legacy(struct hci_dev *hdev, u16 evt_type)
+ {
++	u16 pdu_type = evt_type & ~LE_EXT_ADV_DATA_STATUS_MASK;
++
++	if (!pdu_type)
++		return LE_ADV_NONCONN_IND;
++
+ 	if (evt_type & LE_EXT_ADV_LEGACY_PDU) {
+ 		switch (evt_type) {
+ 		case LE_LEGACY_ADV_IND:
+@@ -6270,8 +6275,7 @@ static u8 ext_evt_type_to_legacy(struct hci_dev *hdev, u16 evt_type)
+ 	if (evt_type & LE_EXT_ADV_SCAN_IND)
+ 		return LE_ADV_SCAN_IND;
+ 
+-	if (evt_type == LE_EXT_ADV_NON_CONN_IND ||
+-	    evt_type & LE_EXT_ADV_DIRECT_IND)
++	if (evt_type & LE_EXT_ADV_DIRECT_IND)
+ 		return LE_ADV_NONCONN_IND;
+ 
+ invalid:
+diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c
+index 20139fa1be1fff..06b604cf9d58c0 100644
+--- a/net/caif/cfctrl.c
++++ b/net/caif/cfctrl.c
+@@ -351,17 +351,154 @@ int cfctrl_cancel_req(struct cflayer *layr, struct cflayer *adap_layer)
+ 	return found;
+ }
+ 
++static int cfctrl_link_setup(struct cfctrl *cfctrl, struct cfpkt *pkt, u8 cmdrsp)
++{
++	u8 len;
++	u8 linkid = 0;
++	enum cfctrl_srv serv;
++	enum cfctrl_srv servtype;
++	u8 endpoint;
++	u8 physlinkid;
++	u8 prio;
++	u8 tmp;
++	u8 *cp;
++	int i;
++	struct cfctrl_link_param linkparam;
++	struct cfctrl_request_info rsp, *req;
++
++	memset(&linkparam, 0, sizeof(linkparam));
++
++	tmp = cfpkt_extr_head_u8(pkt);
++
++	serv = tmp & CFCTRL_SRV_MASK;
++	linkparam.linktype = serv;
++
++	servtype = tmp >> 4;
++	linkparam.chtype = servtype;
++
++	tmp = cfpkt_extr_head_u8(pkt);
++	physlinkid = tmp & 0x07;
++	prio = tmp >> 3;
++
++	linkparam.priority = prio;
++	linkparam.phyid = physlinkid;
++	endpoint = cfpkt_extr_head_u8(pkt);
++	linkparam.endpoint = endpoint & 0x03;
++
++	switch (serv) {
++	case CFCTRL_SRV_VEI:
++	case CFCTRL_SRV_DBG:
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		break;
++	case CFCTRL_SRV_VIDEO:
++		tmp = cfpkt_extr_head_u8(pkt);
++		linkparam.u.video.connid = tmp;
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		break;
++
++	case CFCTRL_SRV_DATAGRAM:
++		linkparam.u.datagram.connid = cfpkt_extr_head_u32(pkt);
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		break;
++	case CFCTRL_SRV_RFM:
++		/* Construct a frame, convert
++		 * DatagramConnectionID
++		 * to network format long and copy it out...
++		 */
++		linkparam.u.rfm.connid = cfpkt_extr_head_u32(pkt);
++		cp = (u8 *) linkparam.u.rfm.volume;
++		for (tmp = cfpkt_extr_head_u8(pkt);
++		     cfpkt_more(pkt) && tmp != '\0';
++		     tmp = cfpkt_extr_head_u8(pkt))
++			*cp++ = tmp;
++		*cp = '\0';
++
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++
++		break;
++	case CFCTRL_SRV_UTIL:
++		/* Construct a frame, convert
++		 * DatagramConnectionID
++		 * to network format long and copy it out...
++		 */
++		/* Fifosize KB */
++		linkparam.u.utility.fifosize_kb = cfpkt_extr_head_u16(pkt);
++		/* Fifosize bufs */
++		linkparam.u.utility.fifosize_bufs = cfpkt_extr_head_u16(pkt);
++		/* name */
++		cp = (u8 *) linkparam.u.utility.name;
++		caif_assert(sizeof(linkparam.u.utility.name)
++			     >= UTILITY_NAME_LENGTH);
++		for (i = 0; i < UTILITY_NAME_LENGTH && cfpkt_more(pkt); i++) {
++			tmp = cfpkt_extr_head_u8(pkt);
++			*cp++ = tmp;
++		}
++		/* Length */
++		len = cfpkt_extr_head_u8(pkt);
++		linkparam.u.utility.paramlen = len;
++		/* Param Data */
++		cp = linkparam.u.utility.params;
++		while (cfpkt_more(pkt) && len--) {
++			tmp = cfpkt_extr_head_u8(pkt);
++			*cp++ = tmp;
++		}
++		if (CFCTRL_ERR_BIT & cmdrsp)
++			break;
++		/* Link ID */
++		linkid = cfpkt_extr_head_u8(pkt);
++		/* Length */
++		len = cfpkt_extr_head_u8(pkt);
++		/* Param Data */
++		cfpkt_extr_head(pkt, NULL, len);
++		break;
++	default:
++		pr_warn("Request setup, invalid type (%d)\n", serv);
++		return -1;
++	}
++
++	rsp.cmd = CFCTRL_CMD_LINK_SETUP;
++	rsp.param = linkparam;
++	spin_lock_bh(&cfctrl->info_list_lock);
++	req = cfctrl_remove_req(cfctrl, &rsp);
++
++	if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
++		cfpkt_erroneous(pkt)) {
++		pr_err("Invalid O/E bit or parse error "
++				"on CAIF control channel\n");
++		cfctrl->res.reject_rsp(cfctrl->serv.layer.up, 0,
++				       req ? req->client_layer : NULL);
++	} else {
++		cfctrl->res.linksetup_rsp(cfctrl->serv.layer.up, linkid,
++					  serv, physlinkid,
++					  req ?  req->client_layer : NULL);
++	}
++
++	kfree(req);
++
++	spin_unlock_bh(&cfctrl->info_list_lock);
++
++	return 0;
++}
++
+ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ {
+ 	u8 cmdrsp;
+ 	u8 cmd;
+-	int ret = -1;
+-	u8 len;
+-	u8 param[255];
++	int ret = 0;
+ 	u8 linkid = 0;
+ 	struct cfctrl *cfctrl = container_obj(layer);
+-	struct cfctrl_request_info rsp, *req;
+-
+ 
+ 	cmdrsp = cfpkt_extr_head_u8(pkt);
+ 	cmd = cmdrsp & CFCTRL_CMD_MASK;
+@@ -374,150 +511,7 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ 
+ 	switch (cmd) {
+ 	case CFCTRL_CMD_LINK_SETUP:
+-		{
+-			enum cfctrl_srv serv;
+-			enum cfctrl_srv servtype;
+-			u8 endpoint;
+-			u8 physlinkid;
+-			u8 prio;
+-			u8 tmp;
+-			u8 *cp;
+-			int i;
+-			struct cfctrl_link_param linkparam;
+-			memset(&linkparam, 0, sizeof(linkparam));
+-
+-			tmp = cfpkt_extr_head_u8(pkt);
+-
+-			serv = tmp & CFCTRL_SRV_MASK;
+-			linkparam.linktype = serv;
+-
+-			servtype = tmp >> 4;
+-			linkparam.chtype = servtype;
+-
+-			tmp = cfpkt_extr_head_u8(pkt);
+-			physlinkid = tmp & 0x07;
+-			prio = tmp >> 3;
+-
+-			linkparam.priority = prio;
+-			linkparam.phyid = physlinkid;
+-			endpoint = cfpkt_extr_head_u8(pkt);
+-			linkparam.endpoint = endpoint & 0x03;
+-
+-			switch (serv) {
+-			case CFCTRL_SRV_VEI:
+-			case CFCTRL_SRV_DBG:
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				break;
+-			case CFCTRL_SRV_VIDEO:
+-				tmp = cfpkt_extr_head_u8(pkt);
+-				linkparam.u.video.connid = tmp;
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				break;
+-
+-			case CFCTRL_SRV_DATAGRAM:
+-				linkparam.u.datagram.connid =
+-				    cfpkt_extr_head_u32(pkt);
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				break;
+-			case CFCTRL_SRV_RFM:
+-				/* Construct a frame, convert
+-				 * DatagramConnectionID
+-				 * to network format long and copy it out...
+-				 */
+-				linkparam.u.rfm.connid =
+-				    cfpkt_extr_head_u32(pkt);
+-				cp = (u8 *) linkparam.u.rfm.volume;
+-				for (tmp = cfpkt_extr_head_u8(pkt);
+-				     cfpkt_more(pkt) && tmp != '\0';
+-				     tmp = cfpkt_extr_head_u8(pkt))
+-					*cp++ = tmp;
+-				*cp = '\0';
+-
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-
+-				break;
+-			case CFCTRL_SRV_UTIL:
+-				/* Construct a frame, convert
+-				 * DatagramConnectionID
+-				 * to network format long and copy it out...
+-				 */
+-				/* Fifosize KB */
+-				linkparam.u.utility.fifosize_kb =
+-				    cfpkt_extr_head_u16(pkt);
+-				/* Fifosize bufs */
+-				linkparam.u.utility.fifosize_bufs =
+-				    cfpkt_extr_head_u16(pkt);
+-				/* name */
+-				cp = (u8 *) linkparam.u.utility.name;
+-				caif_assert(sizeof(linkparam.u.utility.name)
+-					     >= UTILITY_NAME_LENGTH);
+-				for (i = 0;
+-				     i < UTILITY_NAME_LENGTH
+-				     && cfpkt_more(pkt); i++) {
+-					tmp = cfpkt_extr_head_u8(pkt);
+-					*cp++ = tmp;
+-				}
+-				/* Length */
+-				len = cfpkt_extr_head_u8(pkt);
+-				linkparam.u.utility.paramlen = len;
+-				/* Param Data */
+-				cp = linkparam.u.utility.params;
+-				while (cfpkt_more(pkt) && len--) {
+-					tmp = cfpkt_extr_head_u8(pkt);
+-					*cp++ = tmp;
+-				}
+-				if (CFCTRL_ERR_BIT & cmdrsp)
+-					break;
+-				/* Link ID */
+-				linkid = cfpkt_extr_head_u8(pkt);
+-				/* Length */
+-				len = cfpkt_extr_head_u8(pkt);
+-				/* Param Data */
+-				cfpkt_extr_head(pkt, &param, len);
+-				break;
+-			default:
+-				pr_warn("Request setup, invalid type (%d)\n",
+-					serv);
+-				goto error;
+-			}
+-
+-			rsp.cmd = cmd;
+-			rsp.param = linkparam;
+-			spin_lock_bh(&cfctrl->info_list_lock);
+-			req = cfctrl_remove_req(cfctrl, &rsp);
+-
+-			if (CFCTRL_ERR_BIT == (CFCTRL_ERR_BIT & cmdrsp) ||
+-				cfpkt_erroneous(pkt)) {
+-				pr_err("Invalid O/E bit or parse error "
+-						"on CAIF control channel\n");
+-				cfctrl->res.reject_rsp(cfctrl->serv.layer.up,
+-						       0,
+-						       req ? req->client_layer
+-						       : NULL);
+-			} else {
+-				cfctrl->res.linksetup_rsp(cfctrl->serv.
+-							  layer.up, linkid,
+-							  serv, physlinkid,
+-							  req ? req->
+-							  client_layer : NULL);
+-			}
+-
+-			kfree(req);
+-
+-			spin_unlock_bh(&cfctrl->info_list_lock);
+-		}
++		ret = cfctrl_link_setup(cfctrl, pkt, cmdrsp);
+ 		break;
+ 	case CFCTRL_CMD_LINK_DESTROY:
+ 		linkid = cfpkt_extr_head_u8(pkt);
+@@ -544,9 +538,9 @@ static int cfctrl_recv(struct cflayer *layer, struct cfpkt *pkt)
+ 		break;
+ 	default:
+ 		pr_err("Unrecognized Control Frame\n");
++		ret = -1;
+ 		goto error;
+ 	}
+-	ret = 0;
+ error:
+ 	cfpkt_destroy(pkt);
+ 	return ret;
+diff --git a/net/core/devmem.c b/net/core/devmem.c
+index b3a62ca0df653a..24c591ab38aec1 100644
+--- a/net/core/devmem.c
++++ b/net/core/devmem.c
+@@ -70,14 +70,13 @@ void __net_devmem_dmabuf_binding_free(struct work_struct *wq)
+ 		gen_pool_destroy(binding->chunk_pool);
+ 
+ 	dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt,
+-					  DMA_FROM_DEVICE);
++					  binding->direction);
+ 	dma_buf_detach(binding->dmabuf, binding->attachment);
+ 	dma_buf_put(binding->dmabuf);
+ 	xa_destroy(&binding->bound_rxqs);
+ 	kvfree(binding->tx_vec);
+ 	kfree(binding);
+ }
+-EXPORT_SYMBOL(__net_devmem_dmabuf_binding_free);
+ 
+ struct net_iov *
+ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
+@@ -208,6 +207,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
+ 	mutex_init(&binding->lock);
+ 
+ 	binding->dmabuf = dmabuf;
++	binding->direction = direction;
+ 
+ 	binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
+ 	if (IS_ERR(binding->attachment)) {
+@@ -312,7 +312,7 @@ net_devmem_bind_dmabuf(struct net_device *dev,
+ 	kvfree(binding->tx_vec);
+ err_unmap:
+ 	dma_buf_unmap_attachment_unlocked(binding->attachment, binding->sgt,
+-					  DMA_FROM_DEVICE);
++					  direction);
+ err_detach:
+ 	dma_buf_detach(dmabuf, binding->attachment);
+ err_free_binding:
+diff --git a/net/core/devmem.h b/net/core/devmem.h
+index 0a3b28ba5c1377..41cd6e1c914128 100644
+--- a/net/core/devmem.h
++++ b/net/core/devmem.h
+@@ -56,6 +56,9 @@ struct net_devmem_dmabuf_binding {
+ 	 */
+ 	u32 id;
+ 
++	/* DMA direction, FROM_DEVICE for Rx binding, TO_DEVICE for Tx. */
++	enum dma_data_direction direction;
++
+ 	/* Array of net_iov pointers for this binding, sorted by virtual
+ 	 * address. This array is convenient to map the virtual addresses to
+ 	 * net_iovs in the TX path.
+@@ -165,10 +168,6 @@ static inline void net_devmem_put_net_iov(struct net_iov *niov)
+ {
+ }
+ 
+-static inline void __net_devmem_dmabuf_binding_free(struct work_struct *wq)
+-{
+-}
+-
+ static inline struct net_devmem_dmabuf_binding *
+ net_devmem_bind_dmabuf(struct net_device *dev,
+ 		       enum dma_data_direction direction,
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 795ca07e28a4ef..b3a12c7c08af0c 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -148,9 +148,9 @@ void dst_dev_put(struct dst_entry *dst)
+ 	dst->obsolete = DST_OBSOLETE_DEAD;
+ 	if (dst->ops->ifdown)
+ 		dst->ops->ifdown(dst, dev);
+-	dst->input = dst_discard;
+-	dst->output = dst_discard_out;
+-	dst->dev = blackhole_netdev;
++	WRITE_ONCE(dst->input, dst_discard);
++	WRITE_ONCE(dst->output, dst_discard_out);
++	WRITE_ONCE(dst->dev, blackhole_netdev);
+ 	netdev_ref_replace(dev, blackhole_netdev, &dst->dev_tracker,
+ 			   GFP_ATOMIC);
+ }
+@@ -263,7 +263,7 @@ unsigned int dst_blackhole_mtu(const struct dst_entry *dst)
+ {
+ 	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+ 
+-	return mtu ? : dst->dev->mtu;
++	return mtu ? : dst_dev(dst)->mtu;
+ }
+ EXPORT_SYMBOL_GPL(dst_blackhole_mtu);
+ 
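[Editorial note: a kernel-style sketch of the annotation added above: a field that can be rewritten while lockless readers dereference it gets WRITE_ONCE() on the writer side and READ_ONCE() on every reader, so each access is a single, untorn load or store. The struct and names are illustrative, not net/core/dst.c.]

    #include <linux/compiler.h>

    struct demo_dst {
        int (*input)(void *pkt);
    };

    static int demo_discard(void *pkt)
    {
        return 0;
    }

    static void demo_dev_put(struct demo_dst *d)
    {
        WRITE_ONCE(d->input, demo_discard);   /* publish atomically */
    }

    static int demo_deliver(struct demo_dst *d, void *pkt)
    {
        return READ_ONCE(d->input)(pkt);      /* single, untorn load */
    }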
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 7a72f766aacfaf..2c3196dadd54a6 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -8690,7 +8690,7 @@ static bool bpf_skb_is_valid_access(int off, int size, enum bpf_access_type type
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		break;
+-	case offsetof(struct __sk_buff, sk):
++	case bpf_ctx_range_ptr(struct __sk_buff, sk):
+ 		if (type == BPF_WRITE || size != sizeof(__u64))
+ 			return false;
+ 		info->reg_type = PTR_TO_SOCK_COMMON_OR_NULL;
+@@ -9268,7 +9268,7 @@ static bool sock_addr_is_valid_access(int off, int size,
+ 				return false;
+ 		}
+ 		break;
+-	case offsetof(struct bpf_sock_addr, sk):
++	case bpf_ctx_range_ptr(struct bpf_sock_addr, sk):
+ 		if (type != BPF_READ)
+ 			return false;
+ 		if (size != sizeof(__u64))
+@@ -9318,17 +9318,17 @@ static bool sock_ops_is_valid_access(int off, int size,
+ 			if (size != sizeof(__u64))
+ 				return false;
+ 			break;
+-		case offsetof(struct bpf_sock_ops, sk):
++		case bpf_ctx_range_ptr(struct bpf_sock_ops, sk):
+ 			if (size != sizeof(__u64))
+ 				return false;
+ 			info->reg_type = PTR_TO_SOCKET_OR_NULL;
+ 			break;
+-		case offsetof(struct bpf_sock_ops, skb_data):
++		case bpf_ctx_range_ptr(struct bpf_sock_ops, skb_data):
+ 			if (size != sizeof(__u64))
+ 				return false;
+ 			info->reg_type = PTR_TO_PACKET;
+ 			break;
+-		case offsetof(struct bpf_sock_ops, skb_data_end):
++		case bpf_ctx_range_ptr(struct bpf_sock_ops, skb_data_end):
+ 			if (size != sizeof(__u64))
+ 				return false;
+ 			info->reg_type = PTR_TO_PACKET_END;
+@@ -9337,7 +9337,7 @@ static bool sock_ops_is_valid_access(int off, int size,
+ 			bpf_ctx_record_field_size(info, size_default);
+ 			return bpf_ctx_narrow_access_ok(off, size,
+ 							size_default);
+-		case offsetof(struct bpf_sock_ops, skb_hwtstamp):
++		case bpf_ctx_range(struct bpf_sock_ops, skb_hwtstamp):
+ 			if (size != sizeof(__u64))
+ 				return false;
+ 			break;
+@@ -9407,17 +9407,17 @@ static bool sk_msg_is_valid_access(int off, int size,
+ 		return false;
+ 
+ 	switch (off) {
+-	case offsetof(struct sk_msg_md, data):
++	case bpf_ctx_range_ptr(struct sk_msg_md, data):
+ 		info->reg_type = PTR_TO_PACKET;
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		break;
+-	case offsetof(struct sk_msg_md, data_end):
++	case bpf_ctx_range_ptr(struct sk_msg_md, data_end):
+ 		info->reg_type = PTR_TO_PACKET_END;
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		break;
+-	case offsetof(struct sk_msg_md, sk):
++	case bpf_ctx_range_ptr(struct sk_msg_md, sk):
+ 		if (size != sizeof(__u64))
+ 			return false;
+ 		info->reg_type = PTR_TO_SOCKET;
+@@ -9449,6 +9449,9 @@ static bool flow_dissector_is_valid_access(int off, int size,
+ 	if (off < 0 || off >= sizeof(struct __sk_buff))
+ 		return false;
+ 
++	if (off % size != 0)
++		return false;
++
+ 	if (type == BPF_WRITE)
+ 		return false;
+ 
+@@ -11623,7 +11626,7 @@ static bool sk_lookup_is_valid_access(int off, int size,
+ 		return false;
+ 
+ 	switch (off) {
+-	case offsetof(struct bpf_sk_lookup, sk):
++	case bpf_ctx_range_ptr(struct bpf_sk_lookup, sk):
+ 		info->reg_type = PTR_TO_SOCKET_OR_NULL;
+ 		return size == sizeof(__u64);
+ 
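[Editorial note: a userspace demo of the switch change above. bpf_ctx_range_ptr() expands to a GCC/clang 'case lo ... hi:' range spanning every byte of the 8-byte field, so narrow accesses at an offset into the field still hit the pointer case rather than falling through. The struct and field here are made up.]

    #include <stddef.h>
    #include <stdio.h>

    struct ctx { int flags; void *sk; };

    static const char *classify(size_t off)
    {
        switch (off) {
        case offsetof(struct ctx, sk) ...
             offsetof(struct ctx, sk) + sizeof(void *) - 1:
            return "sk (pointer field)";          /* any byte of the field */
        default:
            return "other";
        }
    }

    int main(void)
    {
        size_t off = offsetof(struct ctx, sk) + 2;   /* narrow, mid-field */

        printf("%zu -> %s\n", off, classify(off));
        return 0;
    }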
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 49dce9a82295b1..a8dc72eda20271 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -368,6 +368,43 @@ static void pneigh_queue_purge(struct sk_buff_head *list, struct net *net,
+ 	}
+ }
+ 
++static void neigh_flush_one(struct neighbour *n)
++{
++	hlist_del_rcu(&n->hash);
++	hlist_del_rcu(&n->dev_list);
++
++	write_lock(&n->lock);
++
++	neigh_del_timer(n);
++	neigh_mark_dead(n);
++
++	if (refcount_read(&n->refcnt) != 1) {
++		/* The most unpleasant situation.
++		 * We must destroy neighbour entry,
++		 * but someone still uses it.
++		 *
++		 * The destroy will be delayed until
++		 * the last user releases us, but
++		 * we must kill timers etc. and move
++		 * it to safe state.
++		 */
++		__skb_queue_purge(&n->arp_queue);
++		n->arp_queue_len_bytes = 0;
++		WRITE_ONCE(n->output, neigh_blackhole);
++
++		if (n->nud_state & NUD_VALID)
++			n->nud_state = NUD_NOARP;
++		else
++			n->nud_state = NUD_NONE;
++
++		neigh_dbg(2, "neigh %p is stray\n", n);
++	}
++
++	write_unlock(&n->lock);
++
++	neigh_cleanup_and_release(n);
++}
++
+ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev,
+ 			    bool skip_perm)
+ {
+@@ -381,32 +418,24 @@ static void neigh_flush_dev(struct neigh_table *tbl, struct net_device *dev,
+ 		if (skip_perm && n->nud_state & NUD_PERMANENT)
+ 			continue;
+ 
+-		hlist_del_rcu(&n->hash);
+-		hlist_del_rcu(&n->dev_list);
+-		write_lock(&n->lock);
+-		neigh_del_timer(n);
+-		neigh_mark_dead(n);
+-		if (refcount_read(&n->refcnt) != 1) {
+-			/* The most unpleasant situation.
+-			 * We must destroy neighbour entry,
+-			 * but someone still uses it.
+-			 *
+-			 * The destroy will be delayed until
+-			 * the last user releases us, but
+-			 * we must kill timers etc. and move
+-			 * it to safe state.
+-			 */
+-			__skb_queue_purge(&n->arp_queue);
+-			n->arp_queue_len_bytes = 0;
+-			WRITE_ONCE(n->output, neigh_blackhole);
+-			if (n->nud_state & NUD_VALID)
+-				n->nud_state = NUD_NOARP;
+-			else
+-				n->nud_state = NUD_NONE;
+-			neigh_dbg(2, "neigh %p is stray\n", n);
+-		}
+-		write_unlock(&n->lock);
+-		neigh_cleanup_and_release(n);
++		neigh_flush_one(n);
++	}
++}
++
++static void neigh_flush_table(struct neigh_table *tbl)
++{
++	struct neigh_hash_table *nht;
++	int i;
++
++	nht = rcu_dereference_protected(tbl->nht,
++					lockdep_is_held(&tbl->lock));
++
++	for (i = 0; i < (1 << nht->hash_shift); i++) {
++		struct hlist_node *tmp;
++		struct neighbour *n;
++
++		neigh_for_each_in_bucket_safe(n, tmp, &nht->hash_heads[i])
++			neigh_flush_one(n);
+ 	}
+ }
+ 
+@@ -422,7 +451,12 @@ static int __neigh_ifdown(struct neigh_table *tbl, struct net_device *dev,
+ 			  bool skip_perm)
+ {
+ 	write_lock_bh(&tbl->lock);
+-	neigh_flush_dev(tbl, dev, skip_perm);
++	if (likely(dev)) {
++		neigh_flush_dev(tbl, dev, skip_perm);
++	} else {
++		DEBUG_NET_WARN_ON_ONCE(skip_perm);
++		neigh_flush_table(tbl);
++	}
+ 	pneigh_ifdown_and_unlock(tbl, dev);
+ 	pneigh_queue_purge(&tbl->proxy_queue, dev ? dev_net(dev) : NULL,
+ 			   tbl->family);
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index d22f0919821e93..dff66d8fb325d2 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -21,7 +21,9 @@ static inline struct cgroup_cls_state *css_cls_state(struct cgroup_subsys_state
+ struct cgroup_cls_state *task_cls_state(struct task_struct *p)
+ {
+ 	return css_cls_state(task_css_check(p, net_cls_cgrp_id,
+-					    rcu_read_lock_bh_held()));
++					    rcu_read_lock_held() ||
++					    rcu_read_lock_bh_held() ||
++					    rcu_read_lock_trace_held()));
+ }
+ EXPORT_SYMBOL_GPL(task_cls_state);
+ 
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 6ad84d4a2b464b..63477a6dd6e965 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -831,6 +831,13 @@ int netpoll_setup(struct netpoll *np)
+ 	if (err)
+ 		goto flush;
+ 	rtnl_unlock();
++
++	/* Make sure all NAPI polls which started before dev->npinfo
++	 * was visible have exited before we start calling NAPI poll.
++	 * NAPI skips locking if dev->npinfo is NULL.
++	 */
++	synchronize_rcu();
++
+ 	return 0;
+ 
+ flush:
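[Editorial note: a kernel-style sketch of the ordering fix above: publish the pointer first, then wait one RCU grace period so every reader that may have sampled the old NULL value (and therefore skipped locking) has finished before the locked path is relied on. Names are illustrative.]

    #include <linux/rcupdate.h>

    struct demo_info { int x; };

    static struct demo_info __rcu *demo_info_ptr;

    static void demo_publish(struct demo_info *info)
    {
        rcu_assign_pointer(demo_info_ptr, info);  /* make it visible */
        synchronize_rcu();   /* readers that saw NULL have all exited */
    }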
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 34c51eb1a14fb4..83c78379932e23 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -656,6 +656,13 @@ static void sk_psock_backlog(struct work_struct *work)
+ 	bool ingress;
+ 	int ret;
+ 
++	/* If sk is quickly removed from the map and then added back, the old
++	 * psock should not be scheduled, because there are now two psocks
++	 * pointing to the same sk.
++	 */
++	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
++		return;
++
+ 	/* Increment the psock refcnt to synchronize with close(fd) path in
+ 	 * sock_map_close(), ensuring we wait for backlog thread completion
+ 	 * before sk_socket freed. If refcnt increment fails, it indicates
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 3b409bc8ef6d81..9fae9239f93934 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2602,8 +2602,8 @@ static u32 sk_dst_gso_max_size(struct sock *sk, struct dst_entry *dst)
+ 		   !ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr));
+ #endif
+ 	/* pairs with the WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */
+-	max_size = is_ipv6 ? READ_ONCE(dst->dev->gso_max_size) :
+-			READ_ONCE(dst->dev->gso_ipv4_max_size);
++	max_size = is_ipv6 ? READ_ONCE(dst_dev(dst)->gso_max_size) :
++			READ_ONCE(dst_dev(dst)->gso_ipv4_max_size);
+ 	if (max_size > GSO_LEGACY_MAX_SIZE && !sk_is_tcp(sk))
+ 		max_size = GSO_LEGACY_MAX_SIZE;
+ 
+@@ -2614,7 +2614,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ {
+ 	u32 max_segs = 1;
+ 
+-	sk->sk_route_caps = dst->dev->features;
++	sk->sk_route_caps = dst_dev(dst)->features;
+ 	if (sk_is_tcp(sk)) {
+ 		struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+@@ -2632,7 +2632,7 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
+ 			sk->sk_route_caps |= NETIF_F_SG | NETIF_F_HW_CSUM;
+ 			sk->sk_gso_max_size = sk_dst_gso_max_size(sk, dst);
+ 			/* pairs with the WRITE_ONCE() in netif_set_gso_max_segs() */
+-			max_segs = max_t(u32, READ_ONCE(dst->dev->gso_max_segs), 1);
++			max_segs = max_t(u32, READ_ONCE(dst_dev(dst)->gso_max_segs), 1);
+ 		}
+ 	}
+ 	sk->sk_gso_max_segs = max_segs;
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 6906bedad19a13..46750c96d08ea3 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -812,7 +812,7 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 			   sk->sk_protocol, inet_sk_flowi_flags(sk),
+ 			   (opt && opt->opt.srr) ? opt->opt.faddr : ireq->ir_rmt_addr,
+ 			   ireq->ir_loc_addr, ireq->ir_rmt_port,
+-			   htons(ireq->ir_num), sk->sk_uid);
++			   htons(ireq->ir_num), sk_uid(sk));
+ 	security_req_classify_flow(req, flowi4_to_flowi_common(fl4));
+ 	rt = ip_route_output_flow(net, fl4, sk);
+ 	if (IS_ERR(rt))
+@@ -849,7 +849,7 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
+ 			   sk->sk_protocol, inet_sk_flowi_flags(sk),
+ 			   (opt && opt->opt.srr) ? opt->opt.faddr : ireq->ir_rmt_addr,
+ 			   ireq->ir_loc_addr, ireq->ir_rmt_port,
+-			   htons(ireq->ir_num), sk->sk_uid);
++			   htons(ireq->ir_num), sk_uid(sk));
+ 	security_req_classify_flow(req, flowi4_to_flowi_common(fl4));
+ 	rt = ip_route_output_flow(net, fl4, sk);
+ 	if (IS_ERR(rt))
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index c14baa6589c748..4eacaf00e2e9b7 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -781,7 +781,7 @@ static int ping_v4_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	flowi4_init_output(&fl4, ipc.oif, ipc.sockc.mark,
+ 			   ipc.tos & INET_DSCP_MASK, scope,
+ 			   sk->sk_protocol, inet_sk_flowi_flags(sk), faddr,
+-			   saddr, 0, 0, sk->sk_uid);
++			   saddr, 0, 0, sk_uid(sk));
+ 
+ 	fl4.fl4_icmp_type = user_icmph.type;
+ 	fl4.fl4_icmp_code = user_icmph.code;
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 6aace4d55733e2..32f942d0f944cc 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -610,7 +610,7 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 			   hdrincl ? ipc.protocol : sk->sk_protocol,
+ 			   inet_sk_flowi_flags(sk) |
+ 			    (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
+-			   daddr, saddr, 0, 0, sk->sk_uid);
++			   daddr, saddr, 0, 0, sk_uid(sk));
+ 
+ 	fl4.fl4_icmp_type = 0;
+ 	fl4.fl4_icmp_code = 0;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index fccb05fb3a794b..bd5d48fdd62ace 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -556,7 +556,8 @@ static void build_sk_flow_key(struct flowi4 *fl4, const struct sock *sk)
+ 			   inet_test_bit(HDRINCL, sk) ?
+ 				IPPROTO_RAW : sk->sk_protocol,
+ 			   inet_sk_flowi_flags(sk),
+-			   daddr, inet->inet_saddr, 0, 0, sk->sk_uid);
++			   daddr, inet->inet_saddr, 0, 0,
++			   sk_uid(sk));
+ 	rcu_read_unlock();
+ }
+ 
+@@ -1684,8 +1685,8 @@ struct rtable *rt_dst_clone(struct net_device *dev, struct rtable *rt)
+ 		else if (rt->rt_gw_family == AF_INET6)
+ 			new_rt->rt_gw6 = rt->rt_gw6;
+ 
+-		new_rt->dst.input = rt->dst.input;
+-		new_rt->dst.output = rt->dst.output;
++		new_rt->dst.input = READ_ONCE(rt->dst.input);
++		new_rt->dst.output = READ_ONCE(rt->dst.output);
+ 		new_rt->dst.error = rt->dst.error;
+ 		new_rt->dst.lastuse = jiffies;
+ 		new_rt->dst.lwtstate = lwtstate_get(rt->dst.lwtstate);
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 5459a78b980959..eb0819463faed7 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -454,7 +454,8 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
+ 			   ip_sock_rt_tos(sk), ip_sock_rt_scope(sk),
+ 			   IPPROTO_TCP, inet_sk_flowi_flags(sk),
+ 			   opt->srr ? opt->faddr : ireq->ir_rmt_addr,
+-			   ireq->ir_loc_addr, th->source, th->dest, sk->sk_uid);
++			   ireq->ir_loc_addr, th->source, th->dest,
++			   sk_uid(sk));
+ 	security_req_classify_flow(req, flowi4_to_flowi_common(&fl4));
+ 	rt = ip_route_output_key(net, &fl4);
+ 	if (IS_ERR(rt)) {
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 68bc79eb90197b..94391f32a5d879 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4985,8 +4985,10 @@ static void tcp_ofo_queue(struct sock *sk)
+ 
+ 		if (before(TCP_SKB_CB(skb)->seq, dsack_high)) {
+ 			__u32 dsack = dsack_high;
++
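++			/* trim the DSACK for this skb only; dsack_high still covers later skbs */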
+ 			if (before(TCP_SKB_CB(skb)->end_seq, dsack_high))
+-				dsack_high = TCP_SKB_CB(skb)->end_seq;
++				dsack = TCP_SKB_CB(skb)->end_seq;
+ 			tcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq, dsack);
+ 		}
+ 		p = rb_next(p);
+@@ -5054,6 +5056,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	tcp_measure_rcv_mss(sk, skb);
+ 	/* Disable header prediction. */
+ 	tp->pred_flags = 0;
+ 	inet_csk_schedule_ack(sk);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index dde52b8050b8ca..f94bb222aa2d49 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1445,7 +1445,8 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 		flowi4_init_output(fl4, ipc.oif, ipc.sockc.mark,
+ 				   ipc.tos & INET_DSCP_MASK, scope,
+ 				   sk->sk_protocol, flow_flags, faddr, saddr,
+-				   dport, inet->inet_sport, sk->sk_uid);
++				   dport, inet->inet_sport,
++				   sk_uid(sk));
+ 
+ 		security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
+ 		rt = ip_route_output_flow(net, fl4, sk);
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index acaff129678353..1992621e3f3f4b 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -842,7 +842,7 @@ int inet6_sk_rebuild_header(struct sock *sk)
+ 		fl6.flowi6_mark = sk->sk_mark;
+ 		fl6.fl6_dport = inet->inet_dport;
+ 		fl6.fl6_sport = inet->inet_sport;
+-		fl6.flowi6_uid = sk->sk_uid;
++		fl6.flowi6_uid = sk_uid(sk);
+ 		security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+ 
+ 		rcu_read_lock();
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index fff78496803da6..83f5aa5e133ab2 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -53,7 +53,7 @@ static void ip6_datagram_flow_key_init(struct flowi6 *fl6,
+ 	fl6->fl6_dport = inet->inet_dport;
+ 	fl6->fl6_sport = inet->inet_sport;
+ 	fl6->flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label);
+-	fl6->flowi6_uid = sk->sk_uid;
++	fl6->flowi6_uid = sk_uid(sk);
+ 
+ 	if (!oif)
+ 		oif = np->sticky_pktinfo.ipi6_ifindex;
+diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
+index 8f500eaf33cfc4..333e43434dd78d 100644
+--- a/net/ipv6/inet6_connection_sock.c
++++ b/net/ipv6/inet6_connection_sock.c
+@@ -45,7 +45,7 @@ struct dst_entry *inet6_csk_route_req(const struct sock *sk,
+ 	fl6->flowi6_mark = ireq->ir_mark;
+ 	fl6->fl6_dport = ireq->ir_rmt_port;
+ 	fl6->fl6_sport = htons(ireq->ir_num);
+-	fl6->flowi6_uid = sk->sk_uid;
++	fl6->flowi6_uid = sk_uid(sk);
+ 	security_req_classify_flow(req, flowi6_to_flowi_common(fl6));
+ 
+ 	dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+@@ -79,7 +79,7 @@ static struct dst_entry *inet6_csk_route_socket(struct sock *sk,
+ 	fl6->flowi6_mark = sk->sk_mark;
+ 	fl6->fl6_sport = inet->inet_sport;
+ 	fl6->fl6_dport = inet->inet_dport;
+-	fl6->flowi6_uid = sk->sk_uid;
++	fl6->flowi6_uid = sk_uid(sk);
+ 	security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6));
+ 
+ 	rcu_read_lock();
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 93578b2ec35fb5..4d68bd853dbae9 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -445,15 +445,17 @@ struct fib6_dump_arg {
+ static int fib6_rt_dump(struct fib6_info *rt, struct fib6_dump_arg *arg)
+ {
+ 	enum fib_event_type fib_event = FIB_EVENT_ENTRY_REPLACE;
++	unsigned int nsiblings;
+ 	int err;
+ 
+ 	if (!rt || rt == arg->net->ipv6.fib6_null_entry)
+ 		return 0;
+ 
+-	if (rt->fib6_nsiblings)
++	nsiblings = READ_ONCE(rt->fib6_nsiblings);
++	if (nsiblings)
+ 		err = call_fib6_multipath_entry_notifier(arg->nb, fib_event,
+ 							 rt,
+-							 rt->fib6_nsiblings,
++							 nsiblings,
+ 							 arg->extack);
+ 	else
+ 		err = call_fib6_entry_notifier(arg->nb, fib_event, rt,
+@@ -1138,7 +1140,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 
+ 			if (rt6_duplicate_nexthop(iter, rt)) {
+ 				if (rt->fib6_nsiblings)
+-					rt->fib6_nsiblings = 0;
++					WRITE_ONCE(rt->fib6_nsiblings, 0);
+ 				if (!(iter->fib6_flags & RTF_EXPIRES))
+ 					return -EEXIST;
+ 				if (!(rt->fib6_flags & RTF_EXPIRES)) {
+@@ -1167,7 +1169,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 			 */
+ 			if (rt_can_ecmp &&
+ 			    rt6_qualify_for_ecmp(iter))
+-				rt->fib6_nsiblings++;
++				WRITE_ONCE(rt->fib6_nsiblings,
++					   rt->fib6_nsiblings + 1);
+ 		}
+ 
+ 		if (iter->fib6_metric > rt->fib6_metric)
+@@ -1217,7 +1220,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 		fib6_nsiblings = 0;
+ 		list_for_each_entry_safe(sibling, temp_sibling,
+ 					 &rt->fib6_siblings, fib6_siblings) {
+-			sibling->fib6_nsiblings++;
++			WRITE_ONCE(sibling->fib6_nsiblings,
++				   sibling->fib6_nsiblings + 1);
+ 			BUG_ON(sibling->fib6_nsiblings != rt->fib6_nsiblings);
+ 			fib6_nsiblings++;
+ 		}
+@@ -1264,8 +1268,9 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 				list_for_each_entry_safe(sibling, next_sibling,
+ 							 &rt->fib6_siblings,
+ 							 fib6_siblings)
+-					sibling->fib6_nsiblings--;
+-				rt->fib6_nsiblings = 0;
++					WRITE_ONCE(sibling->fib6_nsiblings,
++						   sibling->fib6_nsiblings - 1);
++				WRITE_ONCE(rt->fib6_nsiblings, 0);
+ 				list_del_rcu(&rt->fib6_siblings);
+ 				rcu_read_lock();
+ 				rt6_multipath_rebalance(next_sibling);
+@@ -2014,8 +2019,9 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ 			notify_del = true;
+ 		list_for_each_entry_safe(sibling, next_sibling,
+ 					 &rt->fib6_siblings, fib6_siblings)
+-			sibling->fib6_nsiblings--;
+-		rt->fib6_nsiblings = 0;
++			WRITE_ONCE(sibling->fib6_nsiblings,
++				   sibling->fib6_nsiblings - 1);
++		WRITE_ONCE(rt->fib6_nsiblings, 0);
+ 		list_del_rcu(&rt->fib6_siblings);
+ 		rt6_multipath_rebalance(next_sibling);
+ 	}
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 9822163428b028..fce91183797a60 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -148,7 +148,9 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 
+ 	ops = rcu_dereference(inet6_offloads[proto]);
+ 	if (likely(ops && ops->callbacks.gso_segment)) {
+-		skb_reset_transport_header(skb);
++		if (!skb_reset_transport_header_careful(skb))
++			goto out;
++
+ 		segs = ops->callbacks.gso_segment(skb, features);
+ 		if (!segs)
+ 			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 9db31e5b998c1a..426859cd340923 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -2039,6 +2039,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
+ 			  struct sk_buff *skb, int vifi)
+ {
+ 	struct vif_device *vif = &mrt->vif_table[vifi];
++	struct net_device *indev = skb->dev;
+ 	struct net_device *vif_dev;
+ 	struct ipv6hdr *ipv6h;
+ 	struct dst_entry *dst;
+@@ -2101,7 +2102,7 @@ static int ip6mr_forward2(struct net *net, struct mr_table *mrt,
+ 	IP6CB(skb)->flags |= IP6SKB_FORWARDED;
+ 
+ 	return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
+-		       net, NULL, skb, skb->dev, vif_dev,
++		       net, NULL, skb, indev, skb->dev,
+ 		       ip6mr_forward2_finish);
+ 
+ out_free:
+diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
+index 84d90dd8b3f0f7..82b0492923d458 100644
+--- a/net/ipv6/ping.c
++++ b/net/ipv6/ping.c
+@@ -142,7 +142,7 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	fl6.saddr = np->saddr;
+ 	fl6.daddr = *daddr;
+ 	fl6.flowi6_mark = ipc6.sockc.mark;
+-	fl6.flowi6_uid = sk->sk_uid;
++	fl6.flowi6_uid = sk_uid(sk);
+ 	fl6.fl6_icmp_type = user_icmph.icmp6_type;
+ 	fl6.fl6_icmp_code = user_icmph.icmp6_code;
+ 	security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6));
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index fda640ebd53f86..4c3f8245c40f15 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -777,7 +777,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	memset(&fl6, 0, sizeof(fl6));
+ 
+ 	fl6.flowi6_mark = ipc6.sockc.mark;
+-	fl6.flowi6_uid = sk->sk_uid;
++	fl6.flowi6_uid = sk_uid(sk);
+ 
+ 	if (sin6) {
+ 		if (addr_len < SIN6_LEN_RFC2133)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 79c8f1acf8a35e..8adae86fbe722e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3010,7 +3010,7 @@ void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, __be32 mtu)
+ 		oif = l3mdev_master_ifindex(skb->dev);
+ 
+ 	ip6_update_pmtu(skb, sock_net(sk), mtu, oif, READ_ONCE(sk->sk_mark),
+-			sk->sk_uid);
++			sk_uid(sk));
+ 
+ 	dst = __sk_dst_get(sk);
+ 	if (!dst || !dst->obsolete ||
+@@ -3232,7 +3232,7 @@ void ip6_redirect_no_header(struct sk_buff *skb, struct net *net, int oif)
+ void ip6_sk_redirect(struct sk_buff *skb, struct sock *sk)
+ {
+ 	ip6_redirect(skb, sock_net(sk), sk->sk_bound_dev_if,
+-		     READ_ONCE(sk->sk_mark), sk->sk_uid);
++		     READ_ONCE(sk->sk_mark), sk_uid(sk));
+ }
+ EXPORT_SYMBOL_GPL(ip6_sk_redirect);
+ 
+@@ -5346,7 +5346,8 @@ static void ip6_route_mpath_notify(struct fib6_info *rt,
+ 	 */
+ 	rcu_read_lock();
+ 
+-	if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) {
++	if ((nlflags & NLM_F_APPEND) && rt_last &&
++	    READ_ONCE(rt_last->fib6_nsiblings)) {
+ 		rt = list_first_or_null_rcu(&rt_last->fib6_siblings,
+ 					    struct fib6_info,
+ 					    fib6_siblings);
+@@ -5670,32 +5671,34 @@ static int rt6_nh_nlmsg_size(struct fib6_nh *nh, void *arg)
+ 
+ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ {
++	struct fib6_info *sibling;
++	struct fib6_nh *nh;
+ 	int nexthop_len;
+ 
+ 	if (f6i->nh) {
+ 		nexthop_len = nla_total_size(4); /* RTA_NH_ID */
+ 		nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ 					 &nexthop_len);
+-	} else {
+-		struct fib6_nh *nh = f6i->fib6_nh;
+-		struct fib6_info *sibling;
+-
+-		nexthop_len = 0;
+-		if (f6i->fib6_nsiblings) {
+-			rt6_nh_nlmsg_size(nh, &nexthop_len);
+-
+-			rcu_read_lock();
++		goto common;
++	}
+ 
+-			list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
+-						fib6_siblings) {
+-				rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+-			}
++	rcu_read_lock();
++retry:
++	nh = f6i->fib6_nh;
++	nexthop_len = 0;
++	if (READ_ONCE(f6i->fib6_nsiblings)) {
++		rt6_nh_nlmsg_size(nh, &nexthop_len);
+ 
+-			rcu_read_unlock();
++		list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++					fib6_siblings) {
++			rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
++			if (!READ_ONCE(f6i->fib6_nsiblings))
++				goto retry;
+ 		}
+-		nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ 	}
+-
++	rcu_read_unlock();
++	nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
++common:
+ 	return NLMSG_ALIGN(sizeof(struct rtmsg))
+ 	       + nla_total_size(16) /* RTA_SRC */
+ 	       + nla_total_size(16) /* RTA_DST */
+@@ -5854,7 +5857,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 		if (dst->lwtstate &&
+ 		    lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
+ 			goto nla_put_failure;
+-	} else if (rt->fib6_nsiblings) {
++	} else if (READ_ONCE(rt->fib6_nsiblings)) {
+ 		struct fib6_info *sibling;
+ 		struct nlattr *mp;
+ 
+@@ -5956,16 +5959,21 @@ static bool fib6_info_uses_dev(const struct fib6_info *f6i,
+ 	if (f6i->fib6_nh->fib_nh_dev == dev)
+ 		return true;
+ 
+-	if (f6i->fib6_nsiblings) {
+-		struct fib6_info *sibling, *next_sibling;
++	if (READ_ONCE(f6i->fib6_nsiblings)) {
++		const struct fib6_info *sibling;
+ 
+-		list_for_each_entry_safe(sibling, next_sibling,
+-					 &f6i->fib6_siblings, fib6_siblings) {
+-			if (sibling->fib6_nh->fib_nh_dev == dev)
++		rcu_read_lock();
++		list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++					fib6_siblings) {
++			if (sibling->fib6_nh->fib_nh_dev == dev) {
++				rcu_read_unlock();
+ 				return true;
++			}
++			if (!READ_ONCE(f6i->fib6_nsiblings))
++				break;
+ 		}
++		rcu_read_unlock();
+ 	}
+-
+ 	return false;
+ }
+ 
+@@ -6321,8 +6329,9 @@ static int inet6_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ 		     unsigned int nlm_flags)
+ {
+-	struct sk_buff *skb;
+ 	struct net *net = info->nl_net;
++	struct sk_buff *skb;
++	size_t sz;
+ 	u32 seq;
+ 	int err;
+ 
+@@ -6330,17 +6339,21 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ 	seq = info->nlh ? info->nlh->nlmsg_seq : 0;
+ 
+ 	rcu_read_lock();
+-
+-	skb = nlmsg_new(rt6_nlmsg_size(rt), GFP_ATOMIC);
++	sz = rt6_nlmsg_size(rt);
++retry:
++	skb = nlmsg_new(sz, GFP_ATOMIC);
+ 	if (!skb)
+ 		goto errout;
+ 
+ 	err = rt6_fill_node(net, skb, rt, NULL, NULL, NULL, 0,
+ 			    event, info->portid, seq, nlm_flags);
+ 	if (err < 0) {
+-		/* -EMSGSIZE implies BUG in rt6_nlmsg_size() */
+-		WARN_ON(err == -EMSGSIZE);
+ 		kfree_skb(skb);
++		/* -EMSGSIZE implies the needed size grew under us; recompute and retry. */
++		if (err == -EMSGSIZE) {
++			sz = max(rt6_nlmsg_size(rt), sz << 1);
++			goto retry;
++		}
+ 		goto errout;
+ 	}
+ 
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index 9d83eadd308b0c..f0ee1a90977166 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -236,7 +236,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ 		fl6.flowi6_mark = ireq->ir_mark;
+ 		fl6.fl6_dport = ireq->ir_rmt_port;
+ 		fl6.fl6_sport = inet_sk(sk)->inet_sport;
+-		fl6.flowi6_uid = sk->sk_uid;
++		fl6.flowi6_uid = sk_uid(sk);
+ 		security_req_classify_flow(req, flowi6_to_flowi_common(&fl6));
+ 
+ 		dst = ip6_dst_lookup_flow(net, sk, &fl6, final_p);
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index e8e68a14264991..f61b0396ef6b18 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -269,7 +269,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
+ 	fl6.fl6_sport = inet->inet_sport;
+ 	if (IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) && !fl6.fl6_sport)
+ 		fl6.flowi6_flags = FLOWI_FLAG_ANY_SPORT;
+-	fl6.flowi6_uid = sk->sk_uid;
++	fl6.flowi6_uid = sk_uid(sk);
+ 
+ 	opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk));
+ 	final_p = fl6_update_dst(&fl6, opt, &final);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 7317f8e053f1c2..ebb95d8bc6819f 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -750,7 +750,8 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	if (type == NDISC_REDIRECT) {
+ 		if (tunnel) {
+ 			ip6_redirect(skb, sock_net(sk), inet6_iif(skb),
+-				     READ_ONCE(sk->sk_mark), sk->sk_uid);
++				     READ_ONCE(sk->sk_mark),
++				     sk_uid(sk));
+ 		} else {
+ 			ip6_sk_redirect(skb, sk);
+ 		}
+@@ -1620,7 +1621,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	if (!fl6->flowi6_oif)
+ 		fl6->flowi6_oif = np->sticky_pktinfo.ipi6_ifindex;
+ 
+-	fl6->flowi6_uid = sk->sk_uid;
++	fl6->flowi6_uid = sk_uid(sk);
+ 
+ 	if (msg->msg_controllen) {
+ 		opt = &opt_space;
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 24aec295a51cf7..c05047dad62d7e 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -19,6 +19,7 @@
+ #include <linux/rculist.h>
+ #include <linux/skbuff.h>
+ #include <linux/socket.h>
++#include <linux/splice.h>
+ #include <linux/uaccess.h>
+ #include <linux/workqueue.h>
+ #include <linux/syscalls.h>
+@@ -1030,6 +1031,14 @@ static ssize_t kcm_splice_read(struct socket *sock, loff_t *ppos,
+ 	ssize_t copied;
+ 	struct sk_buff *skb;
+ 
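++	/* splice callers pass SPLICE_F_* flags rather than MSG_* flags;
++	 * translate them before handing off to skb_recv_datagram().
++	 */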
++	if (sock->file->f_flags & O_NONBLOCK || flags & SPLICE_F_NONBLOCK)
++		flags = MSG_DONTWAIT;
++	else
++		flags = 0;
++
+ 	/* Only support splice for SOCKSEQPACKET */
+ 
+ 	skb = skb_recv_datagram(sk, flags, &err);
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index b98d13584c81f0..ea232f338dcb65 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -545,7 +545,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	memset(&fl6, 0, sizeof(fl6));
+ 
+ 	fl6.flowi6_mark = READ_ONCE(sk->sk_mark);
+-	fl6.flowi6_uid = sk->sk_uid;
++	fl6.flowi6_uid = sk_uid(sk);
+ 
+ 	ipcm6_init_sk(&ipc6, sk);
+ 
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 954795b0fe48a8..7b17591a861086 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -178,6 +178,7 @@ static int ieee80211_set_ap_mbssid_options(struct ieee80211_sub_if_data *sdata,
+ 
+ 		link_conf->nontransmitted = true;
+ 		link_conf->bssid_index = params->index;
++		link_conf->bssid_indicator = tx_bss_conf->bssid_indicator;
+ 	}
+ 	if (params->ema)
+ 		link_conf->ema_ap = true;
+@@ -1121,13 +1122,13 @@ ieee80211_copy_rnr_beacon(u8 *pos, struct cfg80211_rnr_elems *dst,
+ {
+ 	int i, offset = 0;
+ 
++	dst->cnt = src->cnt;
+ 	for (i = 0; i < src->cnt; i++) {
+ 		memcpy(pos + offset, src->elem[i].data, src->elem[i].len);
+ 		dst->elem[i].len = src->elem[i].len;
+ 		dst->elem[i].data = pos + offset;
+ 		offset += dst->elem[i].len;
+ 	}
+-	dst->cnt = src->cnt;
+ 
+ 	return offset;
+ }
+@@ -1218,8 +1219,11 @@ ieee80211_assign_beacon(struct ieee80211_sub_if_data *sdata,
+ 			ieee80211_copy_rnr_beacon(pos, new->rnr_ies, rnr);
+ 		}
+ 		/* update bssid_indicator */
+-		link_conf->bssid_indicator =
+-			ilog2(__roundup_pow_of_two(mbssid->cnt + 1));
++		if (new->mbssid_ies->cnt && new->mbssid_ies->elem[0].len > 2)
++			link_conf->bssid_indicator =
++					*(new->mbssid_ies->elem[0].data + 2);
++		else
++			link_conf->bssid_indicator = 0;
+ 	}
+ 
+ 	if (csa) {
+@@ -3756,7 +3760,7 @@ void ieee80211_csa_finish(struct ieee80211_vif *vif, unsigned int link_id)
+ 		 */
+ 		struct ieee80211_link_data *iter;
+ 
+-		for_each_sdata_link(local, iter) {
++		for_each_sdata_link_rcu(local, iter) {
+ 			if (iter->sdata == sdata ||
+ 			    rcu_access_pointer(iter->conf->tx_bss_conf) != tx_bss_conf)
+ 				continue;
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 30809f0b35f73e..f71d9eeb8abc80 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1226,6 +1226,21 @@ struct ieee80211_sub_if_data *vif_to_sdata(struct ieee80211_vif *p)
+ 	if ((_link = wiphy_dereference((_local)->hw.wiphy,		\
+ 				       ___sdata->link[___link_id])))
+ 
++/*
++ * for_each_sdata_link_rcu() must be used under RCU read lock.
++ */
++#define for_each_sdata_link_rcu(_local, _link)						\
++	/* outer loop just to define the variables ... */				\
++	for (struct ieee80211_sub_if_data *___sdata = NULL;				\
++	     !___sdata;									\
++	     ___sdata = (void *)~0 /* always stop */)					\
++	list_for_each_entry_rcu(___sdata, &(_local)->interfaces, list)			\
++	if (ieee80211_sdata_running(___sdata))						\
++	for (int ___link_id = 0;							\
++	     ___link_id < ARRAY_SIZE((___sdata)->link);					\
++	     ___link_id++)								\
++	if ((_link = rcu_dereference((___sdata)->link[___link_id])))
++
+ #define for_each_link_data(sdata, __link)					\
+ 	struct ieee80211_sub_if_data *__sdata = sdata;				\
+ 	for (int __link_id = 0;							\
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 6b6de43d9420ac..1bad353d8a772b 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -407,9 +407,20 @@ void ieee80211_link_info_change_notify(struct ieee80211_sub_if_data *sdata,
+ 
+ 	WARN_ON_ONCE(changed & BSS_CHANGED_VIF_CFG_FLAGS);
+ 
+-	if (!changed || sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
++	if (!changed)
+ 		return;
+ 
++	switch (sdata->vif.type) {
++	case NL80211_IFTYPE_AP_VLAN:
++		return;
++	case NL80211_IFTYPE_MONITOR:
++		if (!ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
++			return;
++		break;
++	default:
++		break;
++	}
++
+ 	if (!check_sdata_in_driver(sdata))
+ 		return;
+ 
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index 94714f8ffd2249..ba5fbacbeeda63 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -1422,7 +1422,7 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
+ 	if (!(wiphy->flags & WIPHY_FLAG_SUPPORTS_TDLS))
+ 		return -EOPNOTSUPP;
+ 
+-	if (sdata->vif.type != NL80211_IFTYPE_STATION)
++	if (sdata->vif.type != NL80211_IFTYPE_STATION || !sdata->vif.cfg.assoc)
+ 		return -EINVAL;
+ 
+ 	switch (oper) {
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index d58b80813bdd78..8aaa59a27bc4cc 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -612,6 +612,12 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx)
+ 	else
+ 		tx->key = NULL;
+ 
++	if (info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) {
++		if (tx->key && tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE)
++			info->control.hw_key = &tx->key->conf;
++		return TX_CONTINUE;
++	}
++
+ 	if (tx->key) {
+ 		bool skip_hw = false;
+ 
+@@ -1428,7 +1434,7 @@ static void ieee80211_txq_enqueue(struct ieee80211_local *local,
+ {
+ 	struct fq *fq = &local->fq;
+ 	struct fq_tin *tin = &txqi->tin;
+-	u32 flow_idx = fq_flow_idx(fq, skb);
++	u32 flow_idx;
+ 
+ 	ieee80211_set_skb_enqueue_time(skb);
+ 
+@@ -1444,6 +1450,7 @@ static void ieee80211_txq_enqueue(struct ieee80211_local *local,
+ 			IEEE80211_TX_INTCFL_NEED_TXPROCESSING;
+ 		__skb_queue_tail(&txqi->frags, skb);
+ 	} else {
++		flow_idx = fq_flow_idx(fq, skb);
+ 		fq_tin_enqueue(fq, tin, flow_idx, skb,
+ 			       fq_skb_free_func);
+ 	}
+@@ -3876,6 +3883,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
+ 	 * The key can be removed while the packet was queued, so need to call
+ 	 * this here to get the current key.
+ 	 */
++	info->control.hw_key = NULL;
+ 	r = ieee80211_tx_h_select_key(&tx);
+ 	if (r != TX_CONTINUE) {
+ 		ieee80211_free_txskb(&local->hw, skb);
+@@ -4098,7 +4106,9 @@ void __ieee80211_schedule_txq(struct ieee80211_hw *hw,
+ 
+ 	spin_lock_bh(&local->active_txq_lock[txq->ac]);
+ 
+-	has_queue = force || txq_has_queue(txq);
++	has_queue = force ||
++		    (!test_bit(IEEE80211_TXQ_STOP, &txqi->flags) &&
++		     txq_has_queue(txq));
+ 	if (list_empty(&txqi->schedule_order) &&
+ 	    (has_queue || ieee80211_txq_keep_active(txqi))) {
+ 		/* If airtime accounting is active, always enqueue STAs at the
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 6a817a13b1549c..76cb699885b388 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -3537,7 +3537,7 @@ void mptcp_sock_graft(struct sock *sk, struct socket *parent)
+ 	write_lock_bh(&sk->sk_callback_lock);
+ 	rcu_assign_pointer(sk->sk_wq, &parent->wq);
+ 	sk_set_socket(sk, parent);
+-	sk->sk_uid = SOCK_INODE(parent)->i_uid;
++	WRITE_ONCE(sk->sk_uid, SOCK_INODE(parent)->i_uid);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+diff --git a/net/netfilter/nf_bpf_link.c b/net/netfilter/nf_bpf_link.c
+index 06b08484470034..c12250e50a8b29 100644
+--- a/net/netfilter/nf_bpf_link.c
++++ b/net/netfilter/nf_bpf_link.c
+@@ -17,7 +17,8 @@ static unsigned int nf_hook_run_bpf(void *bpf_prog, struct sk_buff *skb,
+ 		.skb = skb,
+ 	};
+ 
+-	return bpf_prog_run(prog, &ctx);
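++	/* run with migration disabled so per-CPU BPF state stays consistent */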
++	return bpf_prog_run_pin_on_cpu(prog, &ctx);
+ }
+ 
+ struct bpf_nf_link {
+@@ -295,6 +296,9 @@ static bool nf_is_valid_access(int off, int size, enum bpf_access_type type,
+ 	if (off < 0 || off >= sizeof(struct bpf_nf_ctx))
+ 		return false;
+ 
++	if (off % size != 0)
++		return false;
++
+ 	if (type == BPF_WRITE)
+ 		return false;
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index a7240736f98e6a..064f18792d9847 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1165,11 +1165,6 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_TABLE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELTABLE) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nla_put_be32(skb, NFTA_TABLE_FLAGS,
+ 			 htonl(table->flags & NFT_TABLE_F_MASK)))
+ 		goto nla_put_failure;
+@@ -2028,11 +2023,6 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_CHAIN_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELCHAIN && !hook_list) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nft_is_base_chain(chain)) {
+ 		const struct nft_base_chain *basechain = nft_base_chain(chain);
+ 		struct nft_stats __percpu *stats;
+@@ -4039,7 +4029,7 @@ void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ /* can only be used if rule is no longer visible to dumps */
+ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
+-	lockdep_commit_lock_is_held(ctx->net);
++	WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+ 
+ 	nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ 	nf_tables_rule_destroy(ctx, rule);
+@@ -4859,11 +4849,6 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 			 NFTA_SET_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELSET) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (set->flags != 0)
+ 		if (nla_put_be32(skb, NFTA_SET_FLAGS, htonl(set->flags)))
+ 			goto nla_put_failure;
+@@ -5859,7 +5844,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 			      struct nft_set_binding *binding,
+ 			      enum nft_trans_phase phase)
+ {
+-	lockdep_commit_lock_is_held(ctx->net);
++	WARN_ON_ONCE(!lockdep_commit_lock_is_held(ctx->net));
+ 
+ 	switch (phase) {
+ 	case NFT_TRANS_PREPARE_ERROR:
+@@ -8350,11 +8335,6 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_OBJ_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELOBJ) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+ 	    nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
+ 	    nft_object_dump(skb, NFTA_OBJ_DATA, obj, reset))
+@@ -9394,11 +9374,6 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_FLOWTABLE_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (event == NFT_MSG_DELFLOWTABLE && !hook_list) {
+-		nlmsg_end(skb, nlh);
+-		return 0;
+-	}
+-
+ 	if (nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
+ 	    nla_put_be32(skb, NFTA_FLOWTABLE_FLAGS, htonl(flowtable->data.flags)))
+ 		goto nla_put_failure;
+diff --git a/net/netfilter/xt_nfacct.c b/net/netfilter/xt_nfacct.c
+index 7c6bf1c168131a..0ca1cdfc4095b6 100644
+--- a/net/netfilter/xt_nfacct.c
++++ b/net/netfilter/xt_nfacct.c
+@@ -38,8 +38,8 @@ nfacct_mt_checkentry(const struct xt_mtchk_param *par)
+ 
+ 	nfacct = nfnl_acct_find_get(par->net, info->name);
+ 	if (nfacct == NULL) {
+-		pr_info_ratelimited("accounting object `%s' does not exists\n",
+-				    info->name);
++		pr_info_ratelimited("accounting object `%.*s' does not exist\n",
++				    NFACCT_NAME_MAX, info->name);
+ 		return -ENOENT;
+ 	}
+ 	info->nfacct = nfacct;
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index be608f07441f4f..c7c7de3403f76e 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4573,10 +4573,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	spin_lock(&po->bind_lock);
+ 	was_running = packet_sock_flag(po, PACKET_SOCK_RUNNING);
+ 	num = po->num;
+-	if (was_running) {
+-		WRITE_ONCE(po->num, 0);
++	WRITE_ONCE(po->num, 0);
++	if (was_running)
+ 		__unregister_prot_hook(sk, false);
+-	}
++
+ 	spin_unlock(&po->bind_lock);
+ 
+ 	synchronize_net();
+@@ -4608,10 +4608,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	mutex_unlock(&po->pg_vec_lock);
+ 
+ 	spin_lock(&po->bind_lock);
+-	if (was_running) {
+-		WRITE_ONCE(po->num, num);
++	WRITE_ONCE(po->num, num);
++	if (was_running)
+ 		register_prot_hook(sk);
+-	}
++
+ 	spin_unlock(&po->bind_lock);
+ 	if (pg_vec && (po->tp_version > TPACKET_V2)) {
+ 		/* Because we don't support block-based V3 on tx-ring */
+diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
+index 5b1241ddc75851..93ab3bcd6d3106 100644
+--- a/net/sched/act_ctinfo.c
++++ b/net/sched/act_ctinfo.c
+@@ -44,9 +44,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				ipv4_change_dsfield(ip_hdr(skb),
+ 						    INET_ECN_MASK,
+ 						    newdscp);
+-				ca->stats_dscp_set++;
++				atomic64_inc(&ca->stats_dscp_set);
+ 			} else {
+-				ca->stats_dscp_error++;
++				atomic64_inc(&ca->stats_dscp_error);
+ 			}
+ 		}
+ 		break;
+@@ -57,9 +57,9 @@ static void tcf_ctinfo_dscp_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				ipv6_change_dsfield(ipv6_hdr(skb),
+ 						    INET_ECN_MASK,
+ 						    newdscp);
+-				ca->stats_dscp_set++;
++				atomic64_inc(&ca->stats_dscp_set);
+ 			} else {
+-				ca->stats_dscp_error++;
++				atomic64_inc(&ca->stats_dscp_error);
+ 			}
+ 		}
+ 		break;
+@@ -72,7 +72,7 @@ static void tcf_ctinfo_cpmark_set(struct nf_conn *ct, struct tcf_ctinfo *ca,
+ 				  struct tcf_ctinfo_params *cp,
+ 				  struct sk_buff *skb)
+ {
+-	ca->stats_cpmark_set++;
++	atomic64_inc(&ca->stats_cpmark_set);
+ 	skb->mark = READ_ONCE(ct->mark) & cp->cpmarkmask;
+ }
+ 
+@@ -323,15 +323,18 @@ static int tcf_ctinfo_dump(struct sk_buff *skb, struct tc_action *a,
+ 	}
+ 
+ 	if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_SET,
+-			      ci->stats_dscp_set, TCA_CTINFO_PAD))
++			      atomic64_read(&ci->stats_dscp_set),
++			      TCA_CTINFO_PAD))
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_DSCP_ERROR,
+-			      ci->stats_dscp_error, TCA_CTINFO_PAD))
++			      atomic64_read(&ci->stats_dscp_error),
++			      TCA_CTINFO_PAD))
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_u64_64bit(skb, TCA_CTINFO_STATS_CPMARK_SET,
+-			      ci->stats_cpmark_set, TCA_CTINFO_PAD))
++			      atomic64_read(&ci->stats_cpmark_set),
++			      TCA_CTINFO_PAD))
+ 		goto nla_put_failure;
+ 
+ 	spin_unlock_bh(&ci->tcf_lock);
+diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
+index 51d4013b612198..f3e5ef9a959256 100644
+--- a/net/sched/sch_mqprio.c
++++ b/net/sched/sch_mqprio.c
+@@ -152,7 +152,7 @@ static int mqprio_parse_opt(struct net_device *dev, struct tc_mqprio_qopt *qopt,
+ static const struct
+ nla_policy mqprio_tc_entry_policy[TCA_MQPRIO_TC_ENTRY_MAX + 1] = {
+ 	[TCA_MQPRIO_TC_ENTRY_INDEX]	= NLA_POLICY_MAX(NLA_U32,
+-							 TC_QOPT_MAX_QUEUE),
++							 TC_QOPT_MAX_QUEUE - 1),
+ 	[TCA_MQPRIO_TC_ENTRY_FP]	= NLA_POLICY_RANGE(NLA_U32,
+ 							   TC_FP_EXPRESS,
+ 							   TC_FP_PREEMPTIBLE),
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index fdd79d3ccd8ce7..eafc316ae319e3 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -973,6 +973,45 @@ static int parse_attr(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+ 	return 0;
+ }
+ 
++static const struct Qdisc_class_ops netem_class_ops;
++
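++/* netem duplication re-enqueues copies at the root of the qdisc tree;
++ * another netem in the same tree could then duplicate those copies
++ * again, multiplying packets without bound, so reject such setups.
++ */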
++static int check_netem_in_tree(struct Qdisc *sch, bool duplicates,
++			       struct netlink_ext_ack *extack)
++{
++	struct Qdisc *root, *q;
++	unsigned int i;
++
++	root = qdisc_root_sleeping(sch);
++
++	if (sch != root && root->ops->cl_ops == &netem_class_ops) {
++		if (duplicates ||
++		    ((struct netem_sched_data *)qdisc_priv(root))->duplicate)
++			goto err;
++	}
++
++	if (!qdisc_dev(root))
++		return 0;
++
++	hash_for_each(qdisc_dev(root)->qdisc_hash, i, q, hash) {
++		if (sch != q && q->ops->cl_ops == &netem_class_ops) {
++			if (duplicates ||
++			    ((struct netem_sched_data *)qdisc_priv(q))->duplicate)
++				goto err;
++		}
++	}
++
++	return 0;
++
++err:
++	NL_SET_ERR_MSG(extack,
++		       "netem: cannot mix duplicating netems with other netems in tree");
++	return -EINVAL;
++}
++
+ /* Parse netlink message to set options */
+ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 			struct netlink_ext_ack *extack)
+@@ -1031,6 +1070,11 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	q->gap = qopt->gap;
+ 	q->counter = 0;
+ 	q->loss = qopt->loss;
++
++	ret = check_netem_in_tree(sch, qopt->duplicate, extack);
++	if (ret)
++		goto unlock;
++
+ 	q->duplicate = qopt->duplicate;
+ 
+ 	/* for compatibility with earlier versions.
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 2b14c81a87e5c4..85d84f39e220c7 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -43,6 +43,11 @@ static struct static_key_false taprio_have_working_mqprio;
+ #define TAPRIO_SUPPORTED_FLAGS \
+ 	(TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST | TCA_TAPRIO_ATTR_FLAG_FULL_OFFLOAD)
+ #define TAPRIO_FLAGS_INVALID U32_MAX
++/* Minimum value for picos_per_byte to ensure non-zero duration
++ * for minimum-sized Ethernet frames (ETH_ZLEN = 60).
++ * 60 bytes * 17 ps/byte = 1020 ps > PSEC_PER_NSEC (1000 ps), i.e. at least 1 ns
++ */
++#define TAPRIO_PICOS_PER_BYTE_MIN 17
+ 
+ struct sched_entry {
+ 	/* Durations between this GCL entry and the GCL entry where the
+@@ -1284,7 +1289,8 @@ static void taprio_start_sched(struct Qdisc *sch,
+ }
+ 
+ static void taprio_set_picos_per_byte(struct net_device *dev,
+-				      struct taprio_sched *q)
++				      struct taprio_sched *q,
++				      struct netlink_ext_ack *extack)
+ {
+ 	struct ethtool_link_ksettings ecmd;
+ 	int speed = SPEED_10;
+@@ -1300,6 +1306,15 @@ static void taprio_set_picos_per_byte(struct net_device *dev,
+ 
+ skip:
+ 	picos_per_byte = (USEC_PER_SEC * 8) / speed;
++	if (picos_per_byte < TAPRIO_PICOS_PER_BYTE_MIN) {
++		if (!extack)
++			pr_warn("Link speed %d is too high. Schedule may be inaccurate.\n",
++				speed);
++		NL_SET_ERR_MSG_FMT_MOD(extack,
++				       "Link speed %d is too high. Schedule may be inaccurate.",
++				       speed);
++		picos_per_byte = TAPRIO_PICOS_PER_BYTE_MIN;
++	}
+ 
+ 	atomic64_set(&q->picos_per_byte, picos_per_byte);
+ 	netdev_dbg(dev, "taprio: set %s's picos_per_byte to: %lld, linkspeed: %d\n",
+@@ -1324,7 +1339,7 @@ static int taprio_dev_notifier(struct notifier_block *nb, unsigned long event,
+ 		if (dev != qdisc_dev(q->root))
+ 			continue;
+ 
+-		taprio_set_picos_per_byte(dev, q);
++		taprio_set_picos_per_byte(dev, q, NULL);
+ 
+ 		stab = rtnl_dereference(q->root->stab);
+ 
+@@ -1848,7 +1863,7 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ 	q->flags = taprio_flags;
+ 
+ 	/* Needed for length_to_duration() during netlink attribute parsing */
+-	taprio_set_picos_per_byte(dev, q);
++	taprio_set_picos_per_byte(dev, q, extack);
+ 
+ 	err = taprio_parse_mqprio_opt(dev, mqprio, extack, q->flags);
+ 	if (err < 0)
+diff --git a/net/socket.c b/net/socket.c
+index 9a0e720f08598a..c706601a4c1639 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -592,10 +592,12 @@ static int sockfs_setattr(struct mnt_idmap *idmap,
+ 	if (!err && (iattr->ia_valid & ATTR_UID)) {
+ 		struct socket *sock = SOCKET_I(d_inode(dentry));
+ 
+-		if (sock->sk)
+-			sock->sk->sk_uid = iattr->ia_uid;
+-		else
++		if (sock->sk) {
++			/* Paired with READ_ONCE() in sk_uid() */
++			WRITE_ONCE(sock->sk->sk_uid, iattr->ia_uid);
++		} else {
+ 			err = -ENOENT;
++		}
+ 	}
+ 
+ 	return err;
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index e1c85123b445bf..dd20ccf8d3533f 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -257,20 +257,48 @@ svc_tcp_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+ }
+ 
+ static int
+-svc_tcp_sock_recv_cmsg(struct svc_sock *svsk, struct msghdr *msg)
++svc_tcp_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags)
+ {
+ 	union {
+ 		struct cmsghdr	cmsg;
+ 		u8		buf[CMSG_SPACE(sizeof(u8))];
+ 	} u;
+-	struct socket *sock = svsk->sk_sock;
++	u8 alert[2];
++	struct kvec alert_kvec = {
++		.iov_base = alert,
++		.iov_len = sizeof(alert),
++	};
++	struct msghdr msg = {
++		.msg_flags = *msg_flags,
++		.msg_control = &u,
++		.msg_controllen = sizeof(u),
++	};
++	int ret;
++
++	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
++		      alert_kvec.iov_len);
++	ret = sock_recvmsg(sock, &msg, MSG_DONTWAIT);
++	if (ret > 0 &&
++	    tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
++		iov_iter_revert(&msg.msg_iter, ret);
++		ret = svc_tcp_sock_process_cmsg(sock, &msg, &u.cmsg, -EAGAIN);
++	}
++	return ret;
++}
++
++static int
++svc_tcp_sock_recvmsg(struct svc_sock *svsk, struct msghdr *msg)
++{
+ 	int ret;
++	struct socket *sock = svsk->sk_sock;
+ 
+-	msg->msg_control = &u;
+-	msg->msg_controllen = sizeof(u);
+ 	ret = sock_recvmsg(sock, msg, MSG_DONTWAIT);
+-	if (unlikely(msg->msg_controllen != sizeof(u)))
+-		ret = svc_tcp_sock_process_cmsg(sock, msg, &u.cmsg, ret);
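++	/* Handle a TLS in-band control record (e.g. an alert) lazily */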
++	if (msg->msg_flags & MSG_CTRUNC) {
++		msg->msg_flags &= ~(MSG_CTRUNC | MSG_EOR);
++		if (ret == 0 || ret == -EIO)
++			ret = svc_tcp_sock_recv_cmsg(sock, &msg->msg_flags);
++	}
+ 	return ret;
+ }
+ 
+@@ -321,7 +349,7 @@ static ssize_t svc_tcp_read_msg(struct svc_rqst *rqstp, size_t buflen,
+ 		iov_iter_advance(&msg.msg_iter, seek);
+ 		buflen -= seek;
+ 	}
+-	len = svc_tcp_sock_recv_cmsg(svsk, &msg);
++	len = svc_tcp_sock_recvmsg(svsk, &msg);
+ 	if (len > 0)
+ 		svc_flush_bvec(bvec, len, seek);
+ 
+@@ -1018,7 +1046,7 @@ static ssize_t svc_tcp_read_marker(struct svc_sock *svsk,
+ 		iov.iov_base = ((char *)&svsk->sk_marker) + svsk->sk_tcplen;
+ 		iov.iov_len  = want;
+ 		iov_iter_kvec(&msg.msg_iter, ITER_DEST, &iov, 1, want);
+-		len = svc_tcp_sock_recv_cmsg(svsk, &msg);
++		len = svc_tcp_sock_recvmsg(svsk, &msg);
+ 		if (len < 0)
+ 			return len;
+ 		svsk->sk_tcplen += len;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 04ff66758fc3e8..c5f7bbf5775ff8 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -358,7 +358,7 @@ xs_alloc_sparse_pages(struct xdr_buf *buf, size_t want, gfp_t gfp)
+ 
+ static int
+ xs_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+-		     struct cmsghdr *cmsg, int ret)
++		     unsigned int *msg_flags, struct cmsghdr *cmsg, int ret)
+ {
+ 	u8 content_type = tls_get_record_type(sock->sk, cmsg);
+ 	u8 level, description;
+@@ -371,7 +371,7 @@ xs_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+ 		 * record, even though there might be more frames
+ 		 * waiting to be decrypted.
+ 		 */
+-		msg->msg_flags &= ~MSG_EOR;
++		*msg_flags &= ~MSG_EOR;
+ 		break;
+ 	case TLS_RECORD_TYPE_ALERT:
+ 		tls_alert_recv(sock->sk, msg, &level, &description);
+@@ -386,19 +386,33 @@ xs_sock_process_cmsg(struct socket *sock, struct msghdr *msg,
+ }
+ 
+ static int
+-xs_sock_recv_cmsg(struct socket *sock, struct msghdr *msg, int flags)
++xs_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags, int flags)
+ {
+ 	union {
+ 		struct cmsghdr	cmsg;
+ 		u8		buf[CMSG_SPACE(sizeof(u8))];
+ 	} u;
++	u8 alert[2];
++	struct kvec alert_kvec = {
++		.iov_base = alert,
++		.iov_len = sizeof(alert),
++	};
++	struct msghdr msg = {
++		.msg_flags = *msg_flags,
++		.msg_control = &u,
++		.msg_controllen = sizeof(u),
++	};
+ 	int ret;
+ 
+-	msg->msg_control = &u;
+-	msg->msg_controllen = sizeof(u);
+-	ret = sock_recvmsg(sock, msg, flags);
+-	if (msg->msg_controllen != sizeof(u))
+-		ret = xs_sock_process_cmsg(sock, msg, &u.cmsg, ret);
++	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
++		      alert_kvec.iov_len);
++	ret = sock_recvmsg(sock, &msg, flags);
++	if (ret > 0 &&
++	    tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
++		iov_iter_revert(&msg.msg_iter, ret);
++		ret = xs_sock_process_cmsg(sock, &msg, msg_flags, &u.cmsg,
++					   -EAGAIN);
++	}
+ 	return ret;
+ }
+ 
+@@ -408,7 +422,13 @@ xs_sock_recvmsg(struct socket *sock, struct msghdr *msg, int flags, size_t seek)
+ 	ssize_t ret;
+ 	if (seek != 0)
+ 		iov_iter_advance(&msg->msg_iter, seek);
+-	ret = xs_sock_recv_cmsg(sock, msg, flags);
++	ret = sock_recvmsg(sock, msg, flags);
++	/* Handle TLS inband control message lazily */
++	if (msg->msg_flags & MSG_CTRUNC) {
++		msg->msg_flags &= ~(MSG_CTRUNC | MSG_EOR);
++		if (ret == 0 || ret == -EIO)
++			ret = xs_sock_recv_cmsg(sock, &msg->msg_flags, flags);
++	}
+ 	return ret > 0 ? ret + seek : ret;
+ }
+ 
+@@ -434,7 +454,7 @@ xs_read_discard(struct socket *sock, struct msghdr *msg, int flags,
+ 		size_t count)
+ {
+ 	iov_iter_discard(&msg->msg_iter, ITER_DEST, count);
+-	return xs_sock_recv_cmsg(sock, msg, flags);
++	return xs_sock_recvmsg(sock, msg, flags, 0);
+ }
+ 
+ #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index fc88e34b7f33fe..549d1ea01a72a7 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -872,6 +872,19 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 		delta = msg->sg.size;
+ 		psock->eval = sk_psock_msg_verdict(sk, psock, msg);
+ 		delta -= msg->sg.size;
++
++		if ((s32)delta > 0) {
++			/* A positive delta means the verdict program ran
++			 * bpf_msg_pop_data() and shrank the plaintext, so
++			 * the encrypted data size must shrink by the same
++			 * amount. Subtracting delta gives the new
++			 * ciphertext length, since ktls does not support
++			 * block encryption.
++			 */
++			struct sk_msg *enc = &ctx->open_rec->msg_encrypted;
++
++			sk_msg_trim(sk, enc, enc->sg.size - delta);
++		}
+ 	}
+ 	if (msg->cork_bytes && msg->cork_bytes > msg->sg.size &&
+ 	    !enospc && !full_record) {
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 1053662725f8f0..4da8289a3ef5aa 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -689,7 +689,9 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
+ 		unsigned int i;
+ 
+ 		for (i = 0; i < MAX_PORT_RETRIES; i++) {
+-			if (port <= LAST_RESERVED_PORT)
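++			/* port++ may wrap around to VMADDR_PORT_ANY (U32_MAX); skip the wildcard */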
++			if (port == VMADDR_PORT_ANY ||
++			    port <= LAST_RESERVED_PORT)
+ 				port = LAST_RESERVED_PORT + 1;
+ 
+ 			new_addr.svm_port = port++;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 50202d170f3a74..bcdccd7dea062a 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -16932,6 +16932,7 @@ static int nl80211_set_sar_specs(struct sk_buff *skb, struct genl_info *info)
+ 	if (!sar_spec)
+ 		return -ENOMEM;
+ 
++	sar_spec->num_sub_specs = specs;
+ 	sar_spec->type = type;
+ 	specs = 0;
+ 	nla_for_each_nested(spec_list, tb[NL80211_SAR_ATTR_SPECS], rem) {
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index c1752b31734faa..92e04370fa63a8 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -4229,6 +4229,8 @@ static void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
+ 	struct wireless_dev *wdev;
+ 	unsigned int link_id;
+ 
++	guard(wiphy)(&rdev->wiphy);
++
+ 	/* If we finished CAC or received radar, we should end any
+ 	 * CAC running on the same channels.
+ 	 * the check !cfg80211_chandef_dfs_usable contain 2 options:
+diff --git a/rust/kernel/devres.rs b/rust/kernel/devres.rs
+index 57502534d98500..8ede607414fd58 100644
+--- a/rust/kernel/devres.rs
++++ b/rust/kernel/devres.rs
+@@ -18,7 +18,7 @@
+ };
+ 
+ #[pin_data]
+-struct DevresInner<T> {
++struct DevresInner<T: Send> {
+     dev: ARef<Device>,
+     callback: unsafe extern "C" fn(*mut c_void),
+     #[pin]
+@@ -95,9 +95,9 @@ struct DevresInner<T> {
+ /// # Ok(())
+ /// # }
+ /// ```
+-pub struct Devres<T>(Arc<DevresInner<T>>);
++pub struct Devres<T: Send>(Arc<DevresInner<T>>);
+ 
+-impl<T> DevresInner<T> {
++impl<T: Send> DevresInner<T> {
+     fn new(dev: &Device<Bound>, data: T, flags: Flags) -> Result<Arc<DevresInner<T>>> {
+         let inner = Arc::pin_init(
+             pin_init!( DevresInner {
+@@ -175,7 +175,7 @@ fn remove_action(this: &Arc<Self>) -> bool {
+     }
+ }
+ 
+-impl<T> Devres<T> {
++impl<T: Send> Devres<T> {
+     /// Creates a new [`Devres`] instance of the given `data`. The `data` encapsulated within the
+     /// returned `Devres` instance' `data` will be revoked once the device is detached.
+     pub fn new(dev: &Device<Bound>, data: T, flags: Flags) -> Result<Self> {
+@@ -247,7 +247,7 @@ pub fn try_access_with_guard<'a>(&'a self, guard: &'a rcu::Guard) -> Option<&'a
+     }
+ }
+ 
+-impl<T> Drop for Devres<T> {
++impl<T: Send> Drop for Devres<T> {
+     fn drop(&mut self) {
+         // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data
+         // anymore, hence it is safe not to wait for the grace period to finish.
+diff --git a/rust/kernel/miscdevice.rs b/rust/kernel/miscdevice.rs
+index 939278bc7b0348..4f7a8714ad361b 100644
+--- a/rust/kernel/miscdevice.rs
++++ b/rust/kernel/miscdevice.rs
+@@ -45,7 +45,13 @@ pub const fn into_raw<T: MiscDevice>(self) -> bindings::miscdevice {
+ ///
+ /// # Invariants
+ ///
+-/// `inner` is a registered misc device.
++/// - `inner` contains a `struct miscdevice` that is registered using
++///   `misc_register()`.
++/// - This registration remains valid for the entire lifetime of the
++///   [`MiscDeviceRegistration`] instance.
++/// - Deregistration occurs exactly once in [`Drop`] via `misc_deregister()`.
++/// - `inner` wraps a valid, pinned `miscdevice` created using
++///   [`MiscDeviceOptions::into_raw`].
+ #[repr(transparent)]
+ #[pin_data(PinnedDrop)]
+ pub struct MiscDeviceRegistration<T> {
+diff --git a/samples/mei/mei-amt-version.c b/samples/mei/mei-amt-version.c
+index 867debd3b9124c..1d7254bcb44cb7 100644
+--- a/samples/mei/mei-amt-version.c
++++ b/samples/mei/mei-amt-version.c
+@@ -69,11 +69,11 @@
+ #include <string.h>
+ #include <fcntl.h>
+ #include <sys/ioctl.h>
++#include <sys/time.h>
+ #include <unistd.h>
+ #include <errno.h>
+ #include <stdint.h>
+ #include <stdbool.h>
+-#include <bits/wordsize.h>
+ #include <linux/mei.h>
+ 
+ /*****************************************************************************
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index f795302ddfa8b3..c3886739a02891 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -74,12 +74,12 @@ if IS_BUILTIN(CONFIG_MODULES):
+     LX_GDBPARSED(MOD_RO_AFTER_INIT)
+ 
+ /* linux/mount.h */
+-LX_VALUE(MNT_NOSUID)
+-LX_VALUE(MNT_NODEV)
+-LX_VALUE(MNT_NOEXEC)
+-LX_VALUE(MNT_NOATIME)
+-LX_VALUE(MNT_NODIRATIME)
+-LX_VALUE(MNT_RELATIME)
++LX_GDBPARSED(MNT_NOSUID)
++LX_GDBPARSED(MNT_NODEV)
++LX_GDBPARSED(MNT_NOEXEC)
++LX_GDBPARSED(MNT_NOATIME)
++LX_GDBPARSED(MNT_NODIRATIME)
++LX_GDBPARSED(MNT_RELATIME)
+ 
+ /* linux/threads.h */
+ LX_VALUE(NR_CPUS)
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index eaa465b0ccf9c4..49607555d343bb 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -478,7 +478,7 @@ void ConfigList::updateListAllForAll()
+ 	while (it.hasNext()) {
+ 		ConfigList *list = it.next();
+ 
+-		list->updateList();
++		list->updateListAll();
+ 	}
+ }
+ 
+diff --git a/security/apparmor/include/match.h b/security/apparmor/include/match.h
+index 536ce3abd5986a..27cf23b0396bc8 100644
+--- a/security/apparmor/include/match.h
++++ b/security/apparmor/include/match.h
+@@ -137,17 +137,15 @@ aa_state_t aa_dfa_matchn_until(struct aa_dfa *dfa, aa_state_t start,
+ 
+ void aa_dfa_free_kref(struct kref *kref);
+ 
+-#define WB_HISTORY_SIZE 24
++/* This needs to be a power of 2 */
++#define WB_HISTORY_SIZE 32
+ struct match_workbuf {
+-	unsigned int count;
+ 	unsigned int pos;
+ 	unsigned int len;
+-	unsigned int size;	/* power of 2, same as history size */
+-	unsigned int history[WB_HISTORY_SIZE];
++	aa_state_t history[WB_HISTORY_SIZE];
+ };
+ #define DEFINE_MATCH_WB(N)		\
+ struct match_workbuf N = {		\
+-	.count = 0,			\
+ 	.pos = 0,			\
+ 	.len = 0,			\
+ }
+diff --git a/security/apparmor/match.c b/security/apparmor/match.c
+index f2d9c57f879439..c5a91600842a16 100644
+--- a/security/apparmor/match.c
++++ b/security/apparmor/match.c
+@@ -679,34 +679,35 @@ aa_state_t aa_dfa_matchn_until(struct aa_dfa *dfa, aa_state_t start,
+ 	return state;
+ }
+ 
+-#define inc_wb_pos(wb)						\
+-do {								\
++#define inc_wb_pos(wb)							\
++do {									\
++	BUILD_BUG_ON_NOT_POWER_OF_2(WB_HISTORY_SIZE);			\
+ 	wb->pos = (wb->pos + 1) & (WB_HISTORY_SIZE - 1);		\
+-	wb->len = (wb->len + 1) & (WB_HISTORY_SIZE - 1);		\
++	wb->len = (wb->len + 1) > WB_HISTORY_SIZE ? WB_HISTORY_SIZE :	\
++				wb->len + 1;				\
+ } while (0)
+ 
+ /* For DFAs that don't support extended tagging of states */
++/* adjust is only set if is_loop returns true */
+ static bool is_loop(struct match_workbuf *wb, aa_state_t state,
+ 		    unsigned int *adjust)
+ {
+-	aa_state_t pos = wb->pos;
+-	aa_state_t i;
++	int pos = wb->pos;
++	int i;
+ 
+ 	if (wb->history[pos] < state)
+ 		return false;
+ 
+-	for (i = 0; i <= wb->len; i++) {
++	for (i = 0; i < wb->len; i++) {
+ 		if (wb->history[pos] == state) {
+ 			*adjust = i;
+ 			return true;
+ 		}
+-		if (pos == 0)
+-			pos = WB_HISTORY_SIZE;
+-		pos--;
++		/* -1 wraps to WB_HISTORY_SIZE - 1 */
++		pos = (pos - 1) & (WB_HISTORY_SIZE - 1);
+ 	}
+ 
+-	*adjust = i;
+-	return true;
++	return false;
+ }
+ 
+ static aa_state_t leftmatch_fb(struct aa_dfa *dfa, aa_state_t start,
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index 5b2ba88ae9e24b..cf18744dafe264 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -9,6 +9,8 @@
+ #include "include/policy.h"
+ #include "include/policy_unpack.h"
+ 
++#include <linux/unaligned.h>
++
+ #define TEST_STRING_NAME "TEST_STRING"
+ #define TEST_STRING_DATA "testing"
+ #define TEST_STRING_BUF_OFFSET \
+@@ -80,7 +82,7 @@ static struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_U32_NAME) + 1;
+ 	strscpy(buf + 3, TEST_U32_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_U32_NAME) + 1) = AA_U32;
+-	*((__le32 *)(buf + 3 + strlen(TEST_U32_NAME) + 2)) = cpu_to_le32(TEST_U32_DATA);
++	put_unaligned_le32(TEST_U32_DATA, buf + 3 + strlen(TEST_U32_NAME) + 2);
+ 
+ 	buf = e->start + TEST_NAMED_U64_BUF_OFFSET;
+ 	*buf = AA_NAME;
+@@ -103,7 +105,7 @@ static struct aa_ext *build_aa_ext_struct(struct policy_unpack_fixture *puf,
+ 	*(buf + 1) = strlen(TEST_ARRAY_NAME) + 1;
+ 	strscpy(buf + 3, TEST_ARRAY_NAME, e->end - (void *)(buf + 3));
+ 	*(buf + 3 + strlen(TEST_ARRAY_NAME) + 1) = AA_ARRAY;
+-	*((__le16 *)(buf + 3 + strlen(TEST_ARRAY_NAME) + 2)) = cpu_to_le16(TEST_ARRAY_SIZE);
++	put_unaligned_le16(TEST_ARRAY_SIZE, buf + 3 + strlen(TEST_ARRAY_NAME) + 2);
+ 
+ 	return e;
+ }
+diff --git a/security/landlock/id.c b/security/landlock/id.c
+index 56f7cc0fc7440f..838c3ed7bb822e 100644
+--- a/security/landlock/id.c
++++ b/security/landlock/id.c
+@@ -119,6 +119,12 @@ static u64 get_id_range(size_t number_of_ids, atomic64_t *const counter,
+ 
+ #ifdef CONFIG_SECURITY_LANDLOCK_KUNIT_TEST
+ 
++static u8 get_random_u8_positive(void)
++{
++	/* max() evaluates its arguments once. */
++	return max(1, get_random_u8());
++}
++
+ static void test_range1_rand0(struct kunit *const test)
+ {
+ 	atomic64_t counter;
+@@ -127,9 +133,10 @@ static void test_range1_rand0(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 0), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 1);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 1);
+ }
+ 
+ static void test_range1_rand1(struct kunit *const test)
+@@ -140,9 +147,10 @@ static void test_range1_rand1(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 1), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 2);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 2);
+ }
+ 
+ static void test_range1_rand15(struct kunit *const test)
+@@ -153,9 +161,10 @@ static void test_range1_rand15(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 15), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 16);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 16);
+ }
+ 
+ static void test_range1_rand16(struct kunit *const test)
+@@ -166,9 +175,10 @@ static void test_range1_rand16(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(1, &counter, 16), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 1);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 1);
+ }
+ 
+ static void test_range2_rand0(struct kunit *const test)
+@@ -179,9 +189,10 @@ static void test_range2_rand0(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 0), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 2);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 2);
+ }
+ 
+ static void test_range2_rand1(struct kunit *const test)
+@@ -192,9 +203,10 @@ static void test_range2_rand1(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 1), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 3);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 3);
+ }
+ 
+ static void test_range2_rand2(struct kunit *const test)
+@@ -205,9 +217,10 @@ static void test_range2_rand2(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 2), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 4);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 4);
+ }
+ 
+ static void test_range2_rand15(struct kunit *const test)
+@@ -218,9 +231,10 @@ static void test_range2_rand15(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 15), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 17);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 17);
+ }
+ 
+ static void test_range2_rand16(struct kunit *const test)
+@@ -231,9 +245,10 @@ static void test_range2_rand16(struct kunit *const test)
+ 	init = get_random_u32();
+ 	atomic64_set(&counter, init);
+ 	KUNIT_EXPECT_EQ(test, get_id_range(2, &counter, 16), init);
+-	KUNIT_EXPECT_EQ(
+-		test, get_id_range(get_random_u8(), &counter, get_random_u8()),
+-		init + 2);
++	KUNIT_EXPECT_EQ(test,
++			get_id_range(get_random_u8_positive(), &counter,
++				     get_random_u8()),
++			init + 2);
+ }
+ 
+ #endif /* CONFIG_SECURITY_LANDLOCK_KUNIT_TEST */
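
[The helper above clamps a random byte to at least 1 before it is passed as number_of_ids, and its comment stresses that the kernel's max() evaluates each argument exactly once. A minimal userspace sketch of the double-evaluation hazard that a naive macro would reintroduce; rand() stands in for get_random_u8() and NAIVE_MAX is hypothetical:]

    #include <stdio.h>
    #include <stdlib.h>

    /* Naive macro: expands both arguments twice. */
    #define NAIVE_MAX(a, b) ((a) > (b) ? (a) : (b))

    static unsigned char random_u8(void)
    {
        return (unsigned char)rand();
    }

    int main(void)
    {
        /* NAIVE_MAX(1, random_u8()) calls random_u8() twice: it may
         * compare one draw but return a second, independent draw,
         * which can still be 0.  The kernel's max() assigns each
         * argument to a typed temporary first, so the clamp above is
         * safe even with a side-effecting argument. */
        unsigned char v = NAIVE_MAX(1, random_u8());

        printf("%u\n", v);  /* not guaranteed >= 1 */
        return 0;
    }
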
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index d40197fb5fbd58..77432e06f3e32c 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4802,7 +4802,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ 	if (err < 0)
+ 		goto exit;
+ 
+-	if (ca0132_alt_select_out_quirk_set(codec) < 0)
++	err = ca0132_alt_select_out_quirk_set(codec);
++	if (err < 0)
+ 		goto exit;
+ 
+ 	switch (spec->cur_out_type) {
+@@ -4892,6 +4893,8 @@ static int ca0132_alt_select_out(struct hda_codec *codec)
+ 				spec->bass_redirection_val);
+ 	else
+ 		err = ca0132_alt_surround_set_bass_redirection(codec, 0);
++	if (err < 0)
++		goto exit;
+ 
+ 	/* Unmute DSP now that we're done with output selection. */
+ 	err = dspio_set_uint_param(codec, 0x96,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2627e2f49316c9..4031eeb4357b15 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10764,6 +10764,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8a0f, "HP Pavilion 14-ec1xxx", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8a20, "HP Laptop 15s-fq5xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a25, "HP Victus 16-d1xxx (MB 8A25)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
++	SND_PCI_QUIRK(0x103c, 0x8a26, "HP Victus 16-d1xxx (MB 8A26)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8a28, "HP Envy 13", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a29, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8a2a, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10822,6 +10823,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8bbe, "HP Victus 16-r0xxx (MB 8BBE)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
++	SND_PCI_QUIRK(0x103c, 0x8bd4, "HP Victus 16-s0xxx (MB 8BD4)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8bdf, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10874,6 +10876,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8c99, "HP Victus 16-r1xxx (MB 8C99)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8c9c, "HP Victus 16-s1xxx (MB 8C9C)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/amd/acp/acp-pci.c b/sound/soc/amd/acp/acp-pci.c
+index 0b2aa33cc426f9..2591b1a1c5e002 100644
+--- a/sound/soc/amd/acp/acp-pci.c
++++ b/sound/soc/amd/acp/acp-pci.c
+@@ -137,26 +137,26 @@ static int acp_pci_probe(struct pci_dev *pci, const struct pci_device_id *pci_id
+ 		chip->name = "acp_asoc_renoir";
+ 		chip->rsrc = &rn_rsrc;
+ 		chip->acp_hw_ops_init = acp31_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_acp_machines;
++		chip->machines = snd_soc_acpi_amd_acp_machines;
+ 		break;
+ 	case 0x6f:
+ 		chip->name = "acp_asoc_rembrandt";
+ 		chip->rsrc = &rmb_rsrc;
+ 		chip->acp_hw_ops_init = acp6x_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_rmb_acp_machines;
++		chip->machines = snd_soc_acpi_amd_rmb_acp_machines;
+ 		break;
+ 	case 0x63:
+ 		chip->name = "acp_asoc_acp63";
+ 		chip->rsrc = &acp63_rsrc;
+ 		chip->acp_hw_ops_init = acp63_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_acp63_acp_machines;
++		chip->machines = snd_soc_acpi_amd_acp63_acp_machines;
+ 		break;
+ 	case 0x70:
+ 	case 0x71:
+ 		chip->name = "acp_asoc_acp70";
+ 		chip->rsrc = &acp70_rsrc;
+ 		chip->acp_hw_ops_init = acp70_hw_ops_init;
+-		chip->machines = &snd_soc_acpi_amd_acp70_acp_machines;
++		chip->machines = snd_soc_acpi_amd_acp70_acp_machines;
+ 		break;
+ 	default:
+ 		dev_err(dev, "Unsupported device revision:0x%x\n", pci->revision);
+diff --git a/sound/soc/amd/acp/amd-acpi-mach.c b/sound/soc/amd/acp/amd-acpi-mach.c
+index d95047d2ee945e..27da2a862f1c22 100644
+--- a/sound/soc/amd/acp/amd-acpi-mach.c
++++ b/sound/soc/amd/acp/amd-acpi-mach.c
+@@ -8,12 +8,12 @@
+ 
+ #include <sound/soc-acpi.h>
+ 
+-struct snd_soc_acpi_codecs amp_rt1019 = {
++static struct snd_soc_acpi_codecs amp_rt1019 = {
+ 	.num_codecs = 1,
+ 	.codecs = {"10EC1019"}
+ };
+ 
+-struct snd_soc_acpi_codecs amp_max = {
++static struct snd_soc_acpi_codecs amp_max = {
+ 	.num_codecs = 1,
+ 	.codecs = {"MX98360A"}
+ };
+diff --git a/sound/soc/amd/acp/amd.h b/sound/soc/amd/acp/amd.h
+index 863e74fcee437e..cb8d97122f95c7 100644
+--- a/sound/soc/amd/acp/amd.h
++++ b/sound/soc/amd/acp/amd.h
+@@ -243,10 +243,10 @@ extern struct acp_resource rmb_rsrc;
+ extern struct acp_resource acp63_rsrc;
+ extern struct acp_resource acp70_rsrc;
+ 
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp_machines;
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_rmb_acp_machines;
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp63_acp_machines;
+-extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp70_acp_machines;
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp_machines[];
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_rmb_acp_machines[];
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp63_acp_machines[];
++extern struct snd_soc_acpi_mach snd_soc_acpi_amd_acp70_acp_machines[];
+ 
+ extern const struct snd_soc_dai_ops asoc_acp_cpu_dai_ops;
+ extern const struct snd_soc_dai_ops acp_dmic_dai_ops;
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index e3111dd80be486..5d804860f7d8c8 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -1395,7 +1395,7 @@ static irqreturn_t irq0_isr(int irq, void *devid)
+ 	if (isr & FSL_XCVR_IRQ_NEW_CS) {
+ 		dev_dbg(dev, "Received new CS block\n");
+ 		isr_clr |= FSL_XCVR_IRQ_NEW_CS;
+-		if (!xcvr->soc_data->spdif_only) {
++		if (xcvr->soc_data->fw_name) {
+ 			/* Data RAM is 4KiB, last two pages: 8 and 9. Select page 8. */
+ 			regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
+ 					   FSL_XCVR_EXT_CTRL_PAGE_MASK,
+@@ -1423,6 +1423,26 @@ static irqreturn_t irq0_isr(int irq, void *devid)
+ 				/* clear CS control register */
+ 				memset_io(reg_ctrl, 0, sizeof(val));
+ 			}
++		} else {
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_0,
++				    (u32 *)&xcvr->rx_iec958.status[0]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_1,
++				    (u32 *)&xcvr->rx_iec958.status[4]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_2,
++				    (u32 *)&xcvr->rx_iec958.status[8]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_3,
++				    (u32 *)&xcvr->rx_iec958.status[12]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_4,
++				    (u32 *)&xcvr->rx_iec958.status[16]);
++			regmap_read(xcvr->regmap, FSL_XCVR_RX_CS_DATA_5,
++				    (u32 *)&xcvr->rx_iec958.status[20]);
++			for (i = 0; i < 6; i++) {
++				val = *(u32 *)(xcvr->rx_iec958.status + i * 4);
++				*(u32 *)(xcvr->rx_iec958.status + i * 4) =
++					bitrev32(val);
++			}
++			regmap_set_bits(xcvr->regmap, FSL_XCVR_RX_DPTH_CTRL,
++					FSL_XCVR_RX_DPTH_CTRL_CSA);
+ 		}
+ 	}
+ 	if (isr & FSL_XCVR_IRQ_NEW_UD) {
+@@ -1497,6 +1517,7 @@ static const struct fsl_xcvr_soc_data fsl_xcvr_imx93_data = {
+ };
+ 
+ static const struct fsl_xcvr_soc_data fsl_xcvr_imx95_data = {
++	.fw_name = "imx/xcvr/xcvr-imx95.bin",
+ 	.spdif_only = true,
+ 	.use_phy = true,
+ 	.use_edma = true,
+@@ -1786,7 +1807,7 @@ static int fsl_xcvr_runtime_resume(struct device *dev)
+ 		}
+ 	}
+ 
+-	if (xcvr->mode == FSL_XCVR_MODE_EARC) {
++	if (xcvr->soc_data->fw_name) {
+ 		ret = fsl_xcvr_load_firmware(xcvr);
+ 		if (ret) {
+ 			dev_err(dev, "failed to load firmware.\n");
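
[The new SPDIF RX path above reads six 32-bit IEC958 channel-status words and bit-reverses each one, presumably because the XCVR datapath latches the status bits in the opposite bit order from the layout ALSA expects. A self-contained sketch of the classic swap-based reversal behind the kernel's bitrev32(); the in-tree helper may be table-driven, this is illustrative only:]

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t bitrev32(uint32_t x)
    {
        x = (x >> 16) | (x << 16);                              /* halves */
        x = ((x & 0xff00ff00u) >> 8) | ((x & 0x00ff00ffu) << 8); /* bytes */
        x = ((x & 0xf0f0f0f0u) >> 4) | ((x & 0x0f0f0f0fu) << 4); /* nibbles */
        x = ((x & 0xccccccccu) >> 2) | ((x & 0x33333333u) << 2); /* pairs */
        x = ((x & 0xaaaaaaaau) >> 1) | ((x & 0x55555555u) << 1); /* bits */
        return x;
    }

    int main(void)
    {
        printf("%08x\n", bitrev32(0x00000001));  /* 80000000 */
        return 0;
    }
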
+diff --git a/sound/soc/mediatek/common/mtk-afe-platform-driver.c b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+index 6b633058394140..70fd05d5ff486c 100644
+--- a/sound/soc/mediatek/common/mtk-afe-platform-driver.c
++++ b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+@@ -120,7 +120,9 @@ int mtk_afe_pcm_new(struct snd_soc_component *component,
+ 	struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component);
+ 
+ 	size = afe->mtk_afe_hardware->buffer_bytes_max;
+-	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size);
++	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev,
++				       afe->preallocate_buffers ? size : 0,
++				       size);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/mediatek/common/mtk-base-afe.h b/sound/soc/mediatek/common/mtk-base-afe.h
+index f51578b6c50a35..a406f2e3e7a878 100644
+--- a/sound/soc/mediatek/common/mtk-base-afe.h
++++ b/sound/soc/mediatek/common/mtk-base-afe.h
+@@ -117,6 +117,7 @@ struct mtk_base_afe {
+ 	struct mtk_base_afe_irq *irqs;
+ 	int irqs_size;
+ 	int memif_32bit_supported;
++	bool preallocate_buffers;
+ 
+ 	struct list_head sub_dais;
+ 	struct snd_soc_dai_driver *dai_drivers;
+diff --git a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+index 04ed0cfec1741e..f93d6348fdf89a 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
++++ b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+@@ -13,6 +13,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <sound/soc.h>
+@@ -1070,6 +1071,12 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = &pdev->dev;
+ 
++	ret = of_reserved_mem_device_init(&pdev->dev);
++	if (ret) {
++		dev_info(&pdev->dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	irq_id = platform_get_irq(pdev, 0);
+ 	if (irq_id <= 0)
+ 		return irq_id < 0 ? irq_id : -ENXIO;
+diff --git a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+index e8884354995cb1..7383184097a48d 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
++++ b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+@@ -6,10 +6,12 @@
+ // Author: KaiChieh Chuang <kaichieh.chuang@mediatek.com>
+ 
+ #include <linux/delay.h>
++#include <linux/dma-mapping.h>
+ #include <linux/module.h>
+ #include <linux/mfd/syscon.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ 
+@@ -431,6 +433,9 @@ static const struct snd_soc_component_driver mt8183_afe_pcm_dai_component = {
+ 		.reg_ofs_base = AFE_##_id##_BASE,	\
+ 		.reg_ofs_cur = AFE_##_id##_CUR,		\
+ 		.reg_ofs_end = AFE_##_id##_END,		\
++		.reg_ofs_base_msb = AFE_##_id##_BASE_MSB,	\
++		.reg_ofs_cur_msb = AFE_##_id##_CUR_MSB,		\
++		.reg_ofs_end_msb = AFE_##_id##_END_MSB,		\
+ 		.fs_reg = (_fs_reg),			\
+ 		.fs_shift = _id##_MODE_SFT,		\
+ 		.fs_maskbit = _id##_MODE_MASK,		\
+@@ -462,11 +467,17 @@ static const struct snd_soc_component_driver mt8183_afe_pcm_dai_component = {
+ #define AFE_VUL12_BASE		AFE_VUL_D2_BASE
+ #define AFE_VUL12_CUR		AFE_VUL_D2_CUR
+ #define AFE_VUL12_END		AFE_VUL_D2_END
++#define AFE_VUL12_BASE_MSB	AFE_VUL_D2_BASE_MSB
++#define AFE_VUL12_CUR_MSB	AFE_VUL_D2_CUR_MSB
++#define AFE_VUL12_END_MSB	AFE_VUL_D2_END_MSB
+ #define AWB2_HD_ALIGN_SFT	AWB2_ALIGN_SFT
+ #define VUL12_DATA_SFT		VUL12_MONO_SFT
+ #define AFE_HDMI_BASE		AFE_HDMI_OUT_BASE
+ #define AFE_HDMI_CUR		AFE_HDMI_OUT_CUR
+ #define AFE_HDMI_END		AFE_HDMI_OUT_END
++#define AFE_HDMI_BASE_MSB	AFE_HDMI_OUT_BASE_MSB
++#define AFE_HDMI_CUR_MSB	AFE_HDMI_OUT_CUR_MSB
++#define AFE_HDMI_END_MSB	AFE_HDMI_OUT_END_MSB
+ 
+ static const struct mtk_base_memif_data memif_data[MT8183_MEMIF_NUM] = {
+ 	MT8183_MEMIF(DL1, AFE_DAC_CON1, AFE_DAC_CON1),
+@@ -763,6 +774,10 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	struct reset_control *rstc;
+ 	int i, irq_id, ret;
+ 
++	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(34));
++	if (ret)
++		return ret;
++
+ 	afe = devm_kzalloc(&pdev->dev, sizeof(*afe), GFP_KERNEL);
+ 	if (!afe)
+ 		return -ENOMEM;
+@@ -777,6 +792,12 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->dev = &pdev->dev;
+ 	dev = afe->dev;
+ 
++	ret = of_reserved_mem_device_init(dev);
++	if (ret) {
++		dev_info(dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	/* initial audio related clock */
+ 	ret = mt8183_init_clock(afe);
+ 	if (ret) {
+diff --git a/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c b/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
+index db7c93401bee69..c73b4664e53e1b 100644
+--- a/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
++++ b/sound/soc/mediatek/mt8186/mt8186-afe-pcm.c
+@@ -10,6 +10,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <sound/soc.h>
+@@ -2835,6 +2836,12 @@ static int mt8186_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe_priv = afe->platform_priv;
+ 	afe->dev = &pdev->dev;
+ 
++	ret = of_reserved_mem_device_init(dev);
++	if (ret) {
++		dev_info(dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	afe->base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(afe->base_addr))
+ 		return PTR_ERR(afe->base_addr);
+diff --git a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+index fd6af74d799579..3d32fe46118ece 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
++++ b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+@@ -12,6 +12,7 @@
+ #include <linux/mfd/syscon.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
++#include <linux/of_reserved_mem.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <sound/soc.h>
+@@ -2179,6 +2180,12 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = dev;
+ 
++	ret = of_reserved_mem_device_init(dev);
++	if (ret) {
++		dev_info(dev, "no reserved memory found, pre-allocating buffers instead\n");
++		afe->preallocate_buffers = true;
++	}
++
+ 	/* init audio related clock */
+ 	ret = mt8192_init_clock(afe);
+ 	if (ret) {
+diff --git a/sound/soc/rockchip/rockchip_sai.c b/sound/soc/rockchip/rockchip_sai.c
+index 602f1ddfad0060..916af63f1c2cba 100644
+--- a/sound/soc/rockchip/rockchip_sai.c
++++ b/sound/soc/rockchip/rockchip_sai.c
+@@ -378,19 +378,9 @@ static void rockchip_sai_xfer_start(struct rk_sai_dev *sai, int stream)
+ static void rockchip_sai_xfer_stop(struct rk_sai_dev *sai, int stream)
+ {
+ 	unsigned int msk = 0, val = 0, clr = 0;
+-	bool playback;
+-	bool capture;
+-
+-	if (stream < 0) {
+-		playback = true;
+-		capture = true;
+-	} else if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+-		playback = true;
+-		capture = false;
+-	} else {
+-		playback = true;
+-		capture = false;
+-	}
++	bool capture = stream == SNDRV_PCM_STREAM_CAPTURE || stream < 0;
++	bool playback = stream == SNDRV_PCM_STREAM_PLAYBACK || stream < 0;
++	/* could be <= 0 but we don't want to depend on enum values */
+ 
+ 	if (playback) {
+ 		msk |= SAI_XFER_TXS_MASK;
+diff --git a/sound/soc/sdca/sdca_asoc.c b/sound/soc/sdca/sdca_asoc.c
+index 7bc8f6069f3d41..febc57b2a0b5d1 100644
+--- a/sound/soc/sdca/sdca_asoc.c
++++ b/sound/soc/sdca/sdca_asoc.c
+@@ -229,11 +229,11 @@ static int entity_early_parse_ge(struct device *dev,
+ 	if (!control_name)
+ 		return -ENOMEM;
+ 
+-	kctl = devm_kmalloc(dev, sizeof(*kctl), GFP_KERNEL);
++	kctl = devm_kzalloc(dev, sizeof(*kctl), GFP_KERNEL);
+ 	if (!kctl)
+ 		return -ENOMEM;
+ 
+-	soc_enum = devm_kmalloc(dev, sizeof(*soc_enum), GFP_KERNEL);
++	soc_enum = devm_kzalloc(dev, sizeof(*soc_enum), GFP_KERNEL);
+ 	if (!soc_enum)
+ 		return -ENOMEM;
+ 
+@@ -397,6 +397,8 @@ static int entity_pde_event(struct snd_soc_dapm_widget *widget,
+ 		from = widget->off_val;
+ 		to = widget->on_val;
+ 		break;
++	default:
++		return 0;
+ 	}
+ 
+ 	for (i = 0; i < entity->pde.num_max_delay; i++) {
+@@ -558,11 +560,11 @@ static int entity_parse_su_class(struct device *dev,
+ 	const char **texts;
+ 	int i;
+ 
+-	kctl = devm_kmalloc(dev, sizeof(*kctl), GFP_KERNEL);
++	kctl = devm_kzalloc(dev, sizeof(*kctl), GFP_KERNEL);
+ 	if (!kctl)
+ 		return -ENOMEM;
+ 
+-	soc_enum = devm_kmalloc(dev, sizeof(*soc_enum), GFP_KERNEL);
++	soc_enum = devm_kzalloc(dev, sizeof(*soc_enum), GFP_KERNEL);
+ 	if (!soc_enum)
+ 		return -ENOMEM;
+ 
+@@ -669,7 +671,7 @@ static int entity_parse_mu(struct device *dev,
+ 		if (!control_name)
+ 			return -ENOMEM;
+ 
+-		mc = devm_kmalloc(dev, sizeof(*mc), GFP_KERNEL);
++		mc = devm_kzalloc(dev, sizeof(*mc), GFP_KERNEL);
+ 		if (!mc)
+ 			return -ENOMEM;
+ 
+@@ -923,7 +925,7 @@ static int populate_control(struct device *dev,
+ 	if (!control_name)
+ 		return -ENOMEM;
+ 
+-	mc = devm_kmalloc(dev, sizeof(*mc), GFP_KERNEL);
++	mc = devm_kzalloc(dev, sizeof(*mc), GFP_KERNEL);
+ 	if (!mc)
+ 		return -ENOMEM;
+ 
+diff --git a/sound/soc/sdca/sdca_functions.c b/sound/soc/sdca/sdca_functions.c
+index de213a69e0dacc..28e9e6de6d5dba 100644
+--- a/sound/soc/sdca/sdca_functions.c
++++ b/sound/soc/sdca/sdca_functions.c
+@@ -880,7 +880,8 @@ static int find_sdca_entity_control(struct device *dev, struct sdca_entity *enti
+ 			control->value = tmp;
+ 			control->has_fixed = true;
+ 		}
+-
++		fallthrough;
++	case SDCA_ACCESS_MODE_RO:
+ 		control->deferrable = fwnode_property_read_bool(control_node,
+ 								"mipi-sdca-control-deferrable");
+ 		break;
+diff --git a/sound/soc/sdca/sdca_regmap.c b/sound/soc/sdca/sdca_regmap.c
+index 66e7eee7d7f498..c41c67c2204a41 100644
+--- a/sound/soc/sdca/sdca_regmap.c
++++ b/sound/soc/sdca/sdca_regmap.c
+@@ -72,12 +72,18 @@ bool sdca_regmap_readable(struct sdca_function_data *function, unsigned int reg)
+ 	if (!control)
+ 		return false;
+ 
++	if (!(BIT(SDW_SDCA_CTL_CNUM(reg)) & control->cn_list))
++		return false;
++
+ 	switch (control->mode) {
+ 	case SDCA_ACCESS_MODE_RW:
+ 	case SDCA_ACCESS_MODE_RO:
+-	case SDCA_ACCESS_MODE_DUAL:
+ 	case SDCA_ACCESS_MODE_RW1S:
+ 	case SDCA_ACCESS_MODE_RW1C:
++		if (SDW_SDCA_NEXT_CTL(0) & reg)
++			return false;
++		fallthrough;
++	case SDCA_ACCESS_MODE_DUAL:
+ 		/* No access to registers marked solely for device use */
+ 		return control->layers & ~SDCA_ACCESS_LAYER_DEVICE;
+ 	default:
+@@ -104,11 +110,17 @@ bool sdca_regmap_writeable(struct sdca_function_data *function, unsigned int reg
+ 	if (!control)
+ 		return false;
+ 
++	if (!(BIT(SDW_SDCA_CTL_CNUM(reg)) & control->cn_list))
++		return false;
++
+ 	switch (control->mode) {
+ 	case SDCA_ACCESS_MODE_RW:
+-	case SDCA_ACCESS_MODE_DUAL:
+ 	case SDCA_ACCESS_MODE_RW1S:
+ 	case SDCA_ACCESS_MODE_RW1C:
++		if (SDW_SDCA_NEXT_CTL(0) & reg)
++			return false;
++		fallthrough;
++	case SDCA_ACCESS_MODE_DUAL:
+ 		/* No access to registers marked solely for device use */
+ 		return control->layers & ~SDCA_ACCESS_LAYER_DEVICE;
+ 	default:
+diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
+index a210089747d004..32f46a38682b79 100644
+--- a/sound/soc/soc-dai.c
++++ b/sound/soc/soc-dai.c
+@@ -259,13 +259,15 @@ int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
+ 		&rx_mask,
+ 	};
+ 
+-	if (dai->driver->ops &&
+-	    dai->driver->ops->xlate_tdm_slot_mask)
+-		ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+-	else
+-		ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
+-	if (ret)
+-		goto err;
++	if (slots) {
++		if (dai->driver->ops &&
++		    dai->driver->ops->xlate_tdm_slot_mask)
++			ret = dai->driver->ops->xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++		else
++			ret = snd_soc_xlate_tdm_slot_mask(slots, &tx_mask, &rx_mask);
++		if (ret)
++			goto err;
++	}
+ 
+ 	for_each_pcm_streams(stream)
+ 		snd_soc_dai_tdm_mask_set(dai, stream, *tdm_mask[stream]);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 8d4dd11c9aef1d..a629e0eacb20eb 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -399,28 +399,32 @@ EXPORT_SYMBOL_GPL(snd_soc_put_volsw_sx);
+ static int snd_soc_clip_to_platform_max(struct snd_kcontrol *kctl)
+ {
+ 	struct soc_mixer_control *mc = (struct soc_mixer_control *)kctl->private_value;
+-	struct snd_ctl_elem_value uctl;
++	struct snd_ctl_elem_value *uctl;
+ 	int ret;
+ 
+ 	if (!mc->platform_max)
+ 		return 0;
+ 
+-	ret = kctl->get(kctl, &uctl);
++	uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
++	if (!uctl)
++		return -ENOMEM;
++
++	ret = kctl->get(kctl, uctl);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+-	if (uctl.value.integer.value[0] > mc->platform_max)
+-		uctl.value.integer.value[0] = mc->platform_max;
++	if (uctl->value.integer.value[0] > mc->platform_max)
++		uctl->value.integer.value[0] = mc->platform_max;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc) &&
+-	    uctl.value.integer.value[1] > mc->platform_max)
+-		uctl.value.integer.value[1] = mc->platform_max;
++	    uctl->value.integer.value[1] > mc->platform_max)
++		uctl->value.integer.value[1] = mc->platform_max;
+ 
+-	ret = kctl->put(kctl, &uctl);
+-	if (ret < 0)
+-		return ret;
++	ret = kctl->put(kctl, uctl);
+ 
+-	return 0;
++out:
++	kfree(uctl);
++	return ret;
+ }
+ 
+ /**
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index dc1d21de4ab792..4f27f8c8debf8a 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -266,9 +266,10 @@ config SND_SOC_SOF_METEORLAKE
+ 
+ config SND_SOC_SOF_INTEL_LNL
+ 	tristate
++	select SOUNDWIRE_INTEL if SND_SOC_SOF_INTEL_SOUNDWIRE != n
+ 	select SND_SOC_SOF_HDA_GENERIC
+ 	select SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE
+-	select SND_SOF_SOF_HDA_SDW_BPT if SND_SOC_SOF_INTEL_SOUNDWIRE
++	select SND_SOF_SOF_HDA_SDW_BPT if SND_SOC_SOF_INTEL_SOUNDWIRE != n
+ 	select SND_SOC_SOF_IPC4
+ 	select SND_SOC_SOF_INTEL_MTL
+ 
+diff --git a/sound/usb/mixer_scarlett2.c b/sound/usb/mixer_scarlett2.c
+index 93589e86828a3e..e06a7a60ac634e 100644
+--- a/sound/usb/mixer_scarlett2.c
++++ b/sound/usb/mixer_scarlett2.c
+@@ -2351,6 +2351,8 @@ static int scarlett2_usb(
+ 	struct scarlett2_usb_packet *req, *resp = NULL;
+ 	size_t req_buf_size = struct_size(req, data, req_size);
+ 	size_t resp_buf_size = struct_size(resp, data, resp_size);
++	int retries = 0;
++	const int max_retries = 5;
+ 	int err;
+ 
+ 	req = kmalloc(req_buf_size, GFP_KERNEL);
+@@ -2374,10 +2376,15 @@ static int scarlett2_usb(
+ 	if (req_size)
+ 		memcpy(req->data, req_data, req_size);
+ 
++retry:
+ 	err = scarlett2_usb_tx(dev, private->bInterfaceNumber,
+ 			       req, req_buf_size);
+ 
+ 	if (err != req_buf_size) {
++		if (err == -EPROTO && ++retries <= max_retries) {
++			msleep(5 * (1 << (retries - 1)));
++			goto retry;
++		}
+ 		usb_audio_err(
+ 			mixer->chip,
+ 			"%s USB request result cmd %x was %d\n",
+@@ -3971,8 +3978,13 @@ static int scarlett2_input_select_ctl_info(
+ 		goto unlock;
+ 
+ 	/* Loop through each input */
+-	for (i = 0; i < inputs; i++)
++	for (i = 0; i < inputs; i++) {
+ 		values[i] = kasprintf(GFP_KERNEL, "Input %d", i + 1);
++		if (!values[i]) {
++			err = -ENOMEM;
++			goto unlock;
++		}
++	}
+ 
+ 	err = snd_ctl_enum_info(uinfo, 1, i,
+ 				(const char * const *)values);
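
[The retry added above handles transient -EPROTO failures from scarlett2_usb_tx() with a capped exponential backoff: msleep(5 * (1 << (retries - 1))) sleeps 5, 10, 20, 40, then 80 ms across the five attempts. A minimal userspace sketch of the same backoff shape; do_tx() is a hypothetical stand-in for the USB transfer:]

    #include <stdio.h>
    #include <unistd.h>

    #define EPROTO_ERR (-71)  /* mirrors the kernel's -EPROTO */

    /* Hypothetical transfer that fails twice, then succeeds. */
    static int do_tx(void)
    {
        static int calls;

        return (++calls < 3) ? EPROTO_ERR : 0;
    }

    int main(void)
    {
        const int max_retries = 5;
        int retries = 0, err;

        while ((err = do_tx()) == EPROTO_ERR && ++retries <= max_retries) {
            unsigned int delay_ms = 5u * (1u << (retries - 1));

            printf("retry %d after %u ms\n", retries, delay_ms);
            usleep(delay_ms * 1000);
        }
        return err ? 1 : 0;
    }
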
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index fe5cb41390883c..9df143b5deea8c 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1768,7 +1768,7 @@ static int __hdmi_lpe_audio_probe(struct platform_device *pdev)
+ 		/* setup private data which can be retrieved when required */
+ 		pcm->private_data = ctx;
+ 		pcm->info_flags = 0;
+-		strscpy(pcm->name, card->shortname, strlen(card->shortname));
++		strscpy(pcm->name, card->shortname, sizeof(pcm->name));
+ 		/* setup the ops for playback */
+ 		snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &had_pcm_ops);
+ 
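
[The one-liner above fixes a classic bounded-copy mistake: the size argument of strscpy() must be the capacity of the destination, not the length of the source. Bounding by strlen(card->shortname) forfeits the overflow check entirely when the source outgrows pcm->name. A tiny portable demonstration of the right shape, with snprintf() standing in for the kernel-only strscpy():]

    #include <stdio.h>

    int main(void)
    {
        char name[8];
        const char *shortname = "a-card-shortname-longer-than-name";

        /* Wrong: snprintf(name, strlen(shortname), ...) would size the
         * copy by the SOURCE and defeat truncation protection.
         * Right: bound by the destination's capacity, as the fix does
         * with sizeof(pcm->name). */
        snprintf(name, sizeof(name), "%s", shortname);
        printf("%s\n", name);  /* truncated but NUL-terminated */
        return 0;
    }
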
+diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
+index 64f958f437b01e..cfc6f944f7c33a 100644
+--- a/tools/bpf/bpftool/net.c
++++ b/tools/bpf/bpftool/net.c
+@@ -366,17 +366,18 @@ static int dump_link_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ 	struct bpf_netdev_t *netinfo = cookie;
+ 	struct ifinfomsg *ifinfo = msg;
++	struct ip_devname_ifindex *tmp;
+ 
+ 	if (netinfo->filter_idx > 0 && netinfo->filter_idx != ifinfo->ifi_index)
+ 		return 0;
+ 
+ 	if (netinfo->used_len == netinfo->array_len) {
+-		netinfo->devices = realloc(netinfo->devices,
+-			(netinfo->array_len + 16) *
+-			sizeof(struct ip_devname_ifindex));
+-		if (!netinfo->devices)
++		tmp = realloc(netinfo->devices,
++			(netinfo->array_len + 16) * sizeof(struct ip_devname_ifindex));
++		if (!tmp)
+ 			return -ENOMEM;
+ 
++		netinfo->devices = tmp;
+ 		netinfo->array_len += 16;
+ 	}
+ 	netinfo->devices[netinfo->used_len].ifindex = ifinfo->ifi_index;
+@@ -395,6 +396,7 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ {
+ 	struct bpf_tcinfo_t *tcinfo = cookie;
+ 	struct tcmsg *info = msg;
++	struct tc_kind_handle *tmp;
+ 
+ 	if (tcinfo->is_qdisc) {
+ 		/* skip clsact qdisc */
+@@ -406,11 +408,12 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+ 	}
+ 
+ 	if (tcinfo->used_len == tcinfo->array_len) {
+-		tcinfo->handle_array = realloc(tcinfo->handle_array,
++		tmp = realloc(tcinfo->handle_array,
+ 			(tcinfo->array_len + 16) * sizeof(struct tc_kind_handle));
+-		if (!tcinfo->handle_array)
++		if (!tmp)
+ 			return -ENOMEM;
+ 
++		tcinfo->handle_array = tmp;
+ 		tcinfo->array_len += 16;
+ 	}
+ 	tcinfo->handle_array[tcinfo->used_len].handle = info->tcm_handle;
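
[Both bpftool hunks above apply the same fix: never assign realloc()'s result directly to the only pointer that owns the buffer. On failure realloc() returns NULL while the original allocation stays live, so the direct overwrite leaks it and discards everything collected so far. The safe shape as a self-contained sketch:]

    #include <stdlib.h>

    /* Grow an int array by 16 elements without leaking on failure. */
    static int grow(int **arr, size_t *len)
    {
        int *tmp = realloc(*arr, (*len + 16) * sizeof(**arr));

        if (!tmp)
            return -1;  /* *arr is still valid and still owned */

        *arr = tmp;
        *len += 16;
        return 0;
    }
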
+diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
+index 270c28a0d09801..6bf4bde77903c3 100644
+--- a/tools/cgroup/memcg_slabinfo.py
++++ b/tools/cgroup/memcg_slabinfo.py
+@@ -146,11 +146,11 @@ def detect_kernel_config():
+ 
+ 
+ def for_each_slab(prog):
+-    PGSlab = ~prog.constant('PG_slab')
++    slabtype = prog.constant('PGTY_slab')
+ 
+     for page in for_each_page(prog):
+         try:
+-            if page.page_type.value_() == PGSlab:
++            if (page.page_type.value_() >> 24) == slabtype:
+                 yield cast('struct slab *', page)
+         except FaultError:
+             pass
+diff --git a/tools/include/nolibc/stdio.h b/tools/include/nolibc/stdio.h
+index c470d334ef3f41..7630234408c584 100644
+--- a/tools/include/nolibc/stdio.h
++++ b/tools/include/nolibc/stdio.h
+@@ -358,11 +358,11 @@ int __nolibc_printf(__nolibc_printf_cb cb, intptr_t state, size_t n, const char
+ 				n -= w;
+ 				while (width-- > w) {
+ 					if (cb(state, " ", 1) != 0)
+-						break;
++						return -1;
+ 					written += 1;
+ 				}
+ 				if (cb(state, outstr, w) != 0)
+-					break;
++					return -1;
+ 			}
+ 
+ 			written += len;
+diff --git a/tools/include/nolibc/sys/wait.h b/tools/include/nolibc/sys/wait.h
+index 4d44e3da0ba814..56ddb806da7f24 100644
+--- a/tools/include/nolibc/sys/wait.h
++++ b/tools/include/nolibc/sys/wait.h
+@@ -78,7 +78,7 @@ pid_t waitpid(pid_t pid, int *status, int options)
+ 
+ 	ret = waitid(idtype, id, &info, options);
+ 	if (ret)
+-		return ret;
++		return -1;
+ 
+ 	switch (info.si_code) {
+ 	case 0:
+diff --git a/tools/lib/subcmd/help.c b/tools/lib/subcmd/help.c
+index 8561b0f01a2476..9ef569492560ef 100644
+--- a/tools/lib/subcmd/help.c
++++ b/tools/lib/subcmd/help.c
+@@ -9,6 +9,7 @@
+ #include <sys/stat.h>
+ #include <unistd.h>
+ #include <dirent.h>
++#include <assert.h>
+ #include "subcmd-util.h"
+ #include "help.h"
+ #include "exec-cmd.h"
+@@ -82,10 +83,11 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
+ 				ci++;
+ 				cj++;
+ 			} else {
+-				zfree(&cmds->names[cj]);
+-				cmds->names[cj++] = cmds->names[ci++];
++				cmds->names[cj++] = cmds->names[ci];
++				cmds->names[ci++] = NULL;
+ 			}
+ 		} else if (cmp == 0) {
++			zfree(&cmds->names[ci]);
+ 			ci++;
+ 			ei++;
+ 		} else if (cmp > 0) {
+@@ -94,12 +96,12 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
+ 	}
+ 	if (ci != cj) {
+ 		while (ci < cmds->cnt) {
+-			zfree(&cmds->names[cj]);
+-			cmds->names[cj++] = cmds->names[ci++];
++			cmds->names[cj++] = cmds->names[ci];
++			cmds->names[ci++] = NULL;
+ 		}
+ 	}
+ 	for (ci = cj; ci < cmds->cnt; ci++)
+-		zfree(&cmds->names[ci]);
++		assert(cmds->names[ci] == NULL);
+ 	cmds->cnt = cj;
+ }
+ 
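
[The exclude_cmds() rework above frees excluded names at the point of the match and turns the compaction loop into an ownership transfer: the pointer is moved to its new slot and the source slot is NULLed, so the trailing loop can merely assert that the tail is empty instead of freeing slots that may alias live entries. The general pattern in a hedged miniature:]

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Compact 'names' in place, dropping entries equal to 'drop'.
     * Moved slots are NULLed at the source so each string has exactly
     * one owner; nothing in the tail can be double-freed. */
    static size_t compact(char **names, size_t cnt, const char *drop)
    {
        size_t ci, cj = 0;

        for (ci = 0; ci < cnt; ci++) {
            if (strcmp(names[ci], drop) == 0) {
                free(names[ci]);        /* excluded: free here */
                names[ci] = NULL;
            } else {
                names[cj++] = names[ci];  /* transfer ownership */
                if (cj - 1 != ci)
                    names[ci] = NULL;
            }
        }
        for (ci = cj; ci < cnt; ci++)
            assert(names[ci] == NULL);
        return cj;
    }
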
+diff --git a/tools/lib/subcmd/run-command.c b/tools/lib/subcmd/run-command.c
+index 0a764c25c384f0..b7510f83209a0a 100644
+--- a/tools/lib/subcmd/run-command.c
++++ b/tools/lib/subcmd/run-command.c
+@@ -5,6 +5,7 @@
+ #include <ctype.h>
+ #include <fcntl.h>
+ #include <string.h>
++#include <linux/compiler.h>
+ #include <linux/string.h>
+ #include <errno.h>
+ #include <sys/wait.h>
+@@ -216,10 +217,20 @@ static int wait_or_whine(struct child_process *cmd, bool block)
+ 	return result;
+ }
+ 
++/*
++ * Conservative estimate of the number of characters needed to hold a decoded
++ * integer: assume each 3 bits needs a character byte, plus a possible sign
++ * character.
++ */
++#ifndef is_signed_type
++#define is_signed_type(type) (((type)(-1)) < (type)1)
++#endif
++#define MAX_STRLEN_TYPE(type) (sizeof(type) * 8 / 3 + (is_signed_type(type) ? 1 : 0))
++
+ int check_if_command_finished(struct child_process *cmd)
+ {
+ #ifdef __linux__
+-	char filename[FILENAME_MAX + 12];
++	char filename[6 + MAX_STRLEN_TYPE(typeof(cmd->pid)) + 7 + 1];
+ 	char status_line[256];
+ 	FILE *status_file;
+ 
+@@ -227,7 +238,7 @@ int check_if_command_finished(struct child_process *cmd)
+ 	 * Check by reading /proc/<pid>/status as calling waitpid causes
+ 	 * stdout/stderr to be closed and data lost.
+ 	 */
+-	sprintf(filename, "/proc/%d/status", cmd->pid);
++	sprintf(filename, "/proc/%u/status", cmd->pid);
+ 	status_file = fopen(filename, "r");
+ 	if (status_file == NULL) {
+ 		/* Open failed assume finish_command was called. */
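
[The macro above replaces an oversized FILENAME_MAX buffer with a worst-case bound derived from the integer type itself: a decimal digit covers slightly more than 3 bits (bits * log10(2) is about 0.30 * bits, and bits / 3 is about 0.33 * bits), so sizeof(type) * 8 / 3 digits plus an optional sign is always enough. A worked check of the arithmetic, mirroring the patch's buffer layout:]

    #include <stdio.h>

    #define is_signed_type(type) (((type)(-1)) < (type)1)
    #define MAX_STRLEN_TYPE(type) \
        (sizeof(type) * 8 / 3 + (is_signed_type(type) ? 1 : 0))

    int main(void)
    {
        /* 32-bit int: 32/3 = 10 digits + 1 sign = 11 bytes, which
         * covers INT_MIN's "-2147483648" (11 chars).  Layout mirrors
         * the patch: "/proc/" (6) + pid + "/status" (7) + NUL (1). */
        char filename[6 + MAX_STRLEN_TYPE(int) + 7 + 1];
        int n = snprintf(filename, sizeof(filename), "/proc/%u/status",
                         (unsigned int)12345);

        printf("%s (%d chars, %zu capacity)\n", filename, n,
               sizeof(filename));
        return 0;
    }
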
+diff --git a/tools/perf/.gitignore b/tools/perf/.gitignore
+index 5aaf73df6700dc..b64302a761444d 100644
+--- a/tools/perf/.gitignore
++++ b/tools/perf/.gitignore
+@@ -48,8 +48,6 @@ libbpf/
+ libperf/
+ libsubcmd/
+ libsymbol/
+-libtraceevent/
+-libtraceevent_plugins/
+ fixdep
+ Documentation/doc.dep
+ python_ext_build/
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 26ece6e9bfd167..4bbebd6ef2e4a7 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -994,7 +994,7 @@ thread_atoms_search(struct rb_root_cached *root, struct thread *thread,
+ 		else if (cmp < 0)
+ 			node = node->rb_right;
+ 		else {
+-			BUG_ON(thread != atoms->thread);
++			BUG_ON(!RC_CHK_EQUAL(thread, atoms->thread));
+ 			return atoms;
+ 		}
+ 	}
+@@ -1111,6 +1111,21 @@ add_sched_in_event(struct work_atoms *atoms, u64 timestamp)
+ 	atoms->nb_atoms++;
+ }
+ 
++static void free_work_atoms(struct work_atoms *atoms)
++{
++	struct work_atom *atom, *tmp;
++
++	if (atoms == NULL)
++		return;
++
++	list_for_each_entry_safe(atom, tmp, &atoms->work_list, list) {
++		list_del(&atom->list);
++		free(atom);
++	}
++	thread__zput(atoms->thread);
++	free(atoms);
++}
++
+ static int latency_switch_event(struct perf_sched *sched,
+ 				struct evsel *evsel,
+ 				struct perf_sample *sample,
+@@ -1634,6 +1649,7 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
+ 	const char *color = PERF_COLOR_NORMAL;
+ 	char stimestamp[32];
+ 	const char *str;
++	int ret = -1;
+ 
+ 	BUG_ON(this_cpu.cpu >= MAX_CPUS || this_cpu.cpu < 0);
+ 
+@@ -1664,17 +1680,20 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
+ 	sched_in = map__findnew_thread(sched, machine, -1, next_pid);
+ 	sched_out = map__findnew_thread(sched, machine, -1, prev_pid);
+ 	if (sched_in == NULL || sched_out == NULL)
+-		return -1;
++		goto out;
+ 
+ 	tr = thread__get_runtime(sched_in);
+-	if (tr == NULL) {
+-		thread__put(sched_in);
+-		return -1;
+-	}
++	if (tr == NULL)
++		goto out;
++
++	thread__put(sched->curr_thread[this_cpu.cpu]);
++	thread__put(sched->curr_out_thread[this_cpu.cpu]);
+ 
+ 	sched->curr_thread[this_cpu.cpu] = thread__get(sched_in);
+ 	sched->curr_out_thread[this_cpu.cpu] = thread__get(sched_out);
+ 
++	ret = 0;
++
+ 	str = thread__comm_str(sched_in);
+ 	new_shortname = 0;
+ 	if (!tr->shortname[0]) {
+@@ -1769,12 +1788,10 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
+ 	color_fprintf(stdout, color, "\n");
+ 
+ out:
+-	if (sched->map.task_name)
+-		thread__put(sched_out);
+-
++	thread__put(sched_out);
+ 	thread__put(sched_in);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int process_sched_switch_event(const struct perf_tool *tool,
+@@ -2018,6 +2035,16 @@ static u64 evsel__get_time(struct evsel *evsel, u32 cpu)
+ 	return r->last_time[cpu];
+ }
+ 
++static void timehist__evsel_priv_destructor(void *priv)
++{
++	struct evsel_runtime *r = priv;
++
++	if (r) {
++		free(r->last_time);
++		free(r);
++	}
++}
++
+ static int comm_width = 30;
+ 
+ static char *timehist_get_commstr(struct thread *thread)
+@@ -2311,8 +2338,10 @@ static void save_task_callchain(struct perf_sched *sched,
+ 		return;
+ 	}
+ 
+-	if (!sched->show_callchain || sample->callchain == NULL)
++	if (!sched->show_callchain || sample->callchain == NULL) {
++		thread__put(thread);
+ 		return;
++	}
+ 
+ 	cursor = get_tls_callchain_cursor();
+ 
+@@ -2321,10 +2350,12 @@ static void save_task_callchain(struct perf_sched *sched,
+ 		if (verbose > 0)
+ 			pr_err("Failed to resolve callchain. Skipping\n");
+ 
++		thread__put(thread);
+ 		return;
+ 	}
+ 
+ 	callchain_cursor_commit(cursor);
++	thread__put(thread);
+ 
+ 	while (true) {
+ 		struct callchain_cursor_node *node;
+@@ -2401,8 +2432,17 @@ static void free_idle_threads(void)
+ 		return;
+ 
+ 	for (i = 0; i < idle_max_cpu; ++i) {
+-		if ((idle_threads[i]))
+-			thread__delete(idle_threads[i]);
++		struct thread *idle = idle_threads[i];
++
++		if (idle) {
++			struct idle_thread_runtime *itr;
++
++			itr = thread__priv(idle);
++			if (itr)
++				thread__put(itr->last_thread);
++
++			thread__delete(idle);
++		}
+ 	}
+ 
+ 	free(idle_threads);
+@@ -2439,7 +2479,7 @@ static struct thread *get_idle_thread(int cpu)
+ 		}
+ 	}
+ 
+-	return idle_threads[cpu];
++	return thread__get(idle_threads[cpu]);
+ }
+ 
+ static void save_idle_callchain(struct perf_sched *sched,
+@@ -2494,7 +2534,8 @@ static struct thread *timehist_get_thread(struct perf_sched *sched,
+ 			if (itr == NULL)
+ 				return NULL;
+ 
+-			itr->last_thread = thread;
++			thread__put(itr->last_thread);
++			itr->last_thread = thread__get(thread);
+ 
+ 			/* copy task callchain when entering to idle */
+ 			if (evsel__intval(evsel, sample, "next_pid") == 0)
+@@ -2565,6 +2606,7 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
+ 	/* show wakeup unless both awakee and awaker are filtered */
+ 	if (timehist_skip_sample(sched, thread, evsel, sample) &&
+ 	    timehist_skip_sample(sched, awakened, evsel, sample)) {
++		thread__put(thread);
+ 		return;
+ 	}
+ 
+@@ -2581,6 +2623,8 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
+ 	printf("awakened: %s", timehist_get_commstr(awakened));
+ 
+ 	printf("\n");
++
++	thread__put(thread);
+ }
+ 
+ static int timehist_sched_wakeup_ignore(const struct perf_tool *tool __maybe_unused,
+@@ -2609,8 +2653,10 @@ static int timehist_sched_wakeup_event(const struct perf_tool *tool,
+ 		return -1;
+ 
+ 	tr = thread__get_runtime(thread);
+-	if (tr == NULL)
++	if (tr == NULL) {
++		thread__put(thread);
+ 		return -1;
++	}
+ 
+ 	if (tr->ready_to_run == 0)
+ 		tr->ready_to_run = sample->time;
+@@ -2620,6 +2666,7 @@ static int timehist_sched_wakeup_event(const struct perf_tool *tool,
+ 	    !perf_time__skip_sample(&sched->ptime, sample->time))
+ 		timehist_print_wakeup_event(sched, evsel, sample, machine, thread);
+ 
++	thread__put(thread);
+ 	return 0;
+ }
+ 
+@@ -2647,6 +2694,7 @@ static void timehist_print_migration_event(struct perf_sched *sched,
+ 
+ 	if (timehist_skip_sample(sched, thread, evsel, sample) &&
+ 	    timehist_skip_sample(sched, migrated, evsel, sample)) {
++		thread__put(thread);
+ 		return;
+ 	}
+ 
+@@ -2674,6 +2722,7 @@ static void timehist_print_migration_event(struct perf_sched *sched,
+ 	printf(" cpu %d => %d", ocpu, dcpu);
+ 
+ 	printf("\n");
++	thread__put(thread);
+ }
+ 
+ static int timehist_migrate_task_event(const struct perf_tool *tool,
+@@ -2693,8 +2742,10 @@ static int timehist_migrate_task_event(const struct perf_tool *tool,
+ 		return -1;
+ 
+ 	tr = thread__get_runtime(thread);
+-	if (tr == NULL)
++	if (tr == NULL) {
++		thread__put(thread);
+ 		return -1;
++	}
+ 
+ 	tr->migrations++;
+ 	tr->migrated = sample->time;
+@@ -2704,6 +2755,7 @@ static int timehist_migrate_task_event(const struct perf_tool *tool,
+ 		timehist_print_migration_event(sched, evsel, sample,
+ 							machine, thread);
+ 	}
++	thread__put(thread);
+ 
+ 	return 0;
+ }
+@@ -2726,10 +2778,10 @@ static void timehist_update_task_prio(struct evsel *evsel,
+ 		return;
+ 
+ 	tr = thread__get_runtime(thread);
+-	if (tr == NULL)
+-		return;
++	if (tr != NULL)
++		tr->prio = next_prio;
+ 
+-	tr->prio = next_prio;
++	thread__put(thread);
+ }
+ 
+ static int timehist_sched_change_event(const struct perf_tool *tool,
+@@ -2741,7 +2793,7 @@ static int timehist_sched_change_event(const struct perf_tool *tool,
+ 	struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
+ 	struct perf_time_interval *ptime = &sched->ptime;
+ 	struct addr_location al;
+-	struct thread *thread;
++	struct thread *thread = NULL;
+ 	struct thread_runtime *tr = NULL;
+ 	u64 tprev, t = sample->time;
+ 	int rc = 0;
+@@ -2865,6 +2917,7 @@ static int timehist_sched_change_event(const struct perf_tool *tool,
+ 
+ 	evsel__save_time(evsel, sample->time, sample->cpu);
+ 
++	thread__put(thread);
+ 	addr_location__exit(&al);
+ 	return rc;
+ }
+@@ -3286,6 +3339,8 @@ static int perf_sched__timehist(struct perf_sched *sched)
+ 
+ 	setup_pager();
+ 
++	evsel__set_priv_destructor(timehist__evsel_priv_destructor);
++
+ 	/* prefer sched_waking if it is captured */
+ 	if (evlist__find_tracepoint_by_name(session->evlist, "sched:sched_waking"))
+ 		handlers[1].handler = timehist_sched_wakeup_ignore;
+@@ -3386,13 +3441,13 @@ static void __merge_work_atoms(struct rb_root_cached *root, struct work_atoms *d
+ 			this->total_runtime += data->total_runtime;
+ 			this->nb_atoms += data->nb_atoms;
+ 			this->total_lat += data->total_lat;
+-			list_splice(&data->work_list, &this->work_list);
++			list_splice_init(&data->work_list, &this->work_list);
+ 			if (this->max_lat < data->max_lat) {
+ 				this->max_lat = data->max_lat;
+ 				this->max_lat_start = data->max_lat_start;
+ 				this->max_lat_end = data->max_lat_end;
+ 			}
+-			zfree(&data);
++			free_work_atoms(data);
+ 			return;
+ 		}
+ 	}
+@@ -3471,7 +3526,6 @@ static int perf_sched__lat(struct perf_sched *sched)
+ 		work_list = rb_entry(next, struct work_atoms, node);
+ 		output_lat_thread(sched, work_list);
+ 		next = rb_next(next);
+-		thread__zput(work_list->thread);
+ 	}
+ 
+ 	printf(" -----------------------------------------------------------------------------------------------------------------\n");
+@@ -3485,6 +3539,13 @@ static int perf_sched__lat(struct perf_sched *sched)
+ 
+ 	rc = 0;
+ 
++	while ((next = rb_first_cached(&sched->sorted_atom_root))) {
++		struct work_atoms *data;
++
++		data = rb_entry(next, struct work_atoms, node);
++		rb_erase_cached(next, &sched->sorted_atom_root);
++		free_work_atoms(data);
++	}
+ out_free_cpus_switch_event:
+ 	free_cpus_switch_event(sched);
+ 	return rc;
+@@ -3556,10 +3617,10 @@ static int perf_sched__map(struct perf_sched *sched)
+ 
+ 	sched->curr_out_thread = calloc(MAX_CPUS, sizeof(*(sched->curr_out_thread)));
+ 	if (!sched->curr_out_thread)
+-		return rc;
++		goto out_free_curr_thread;
+ 
+ 	if (setup_cpus_switch_event(sched))
+-		goto out_free_curr_thread;
++		goto out_free_curr_out_thread;
+ 
+ 	if (setup_map_cpus(sched))
+ 		goto out_free_cpus_switch_event;
+@@ -3590,7 +3651,14 @@ static int perf_sched__map(struct perf_sched *sched)
+ out_free_cpus_switch_event:
+ 	free_cpus_switch_event(sched);
+ 
++out_free_curr_out_thread:
++	for (int i = 0; i < MAX_CPUS; i++)
++		thread__put(sched->curr_out_thread[i]);
++	zfree(&sched->curr_out_thread);
++
+ out_free_curr_thread:
++	for (int i = 0; i < MAX_CPUS; i++)
++		thread__put(sched->curr_thread[i]);
+ 	zfree(&sched->curr_thread);
+ 	return rc;
+ }
+@@ -3898,13 +3966,15 @@ int cmd_sched(int argc, const char **argv)
+ 	if (!argc)
+ 		usage_with_options(sched_usage, sched_options);
+ 
++	thread__set_priv_destructor(free);
++
+ 	/*
+ 	 * Aliased to 'perf script' for now:
+ 	 */
+ 	if (!strcmp(argv[0], "script")) {
+-		return cmd_script(argc, argv);
++		ret = cmd_script(argc, argv);
+ 	} else if (strlen(argv[0]) > 2 && strstarts("record", argv[0])) {
+-		return __cmd_record(argc, argv);
++		ret = __cmd_record(argc, argv);
+ 	} else if (strlen(argv[0]) > 2 && strstarts("latency", argv[0])) {
+ 		sched.tp_handler = &lat_ops;
+ 		if (argc > 1) {
+@@ -3913,7 +3983,7 @@ int cmd_sched(int argc, const char **argv)
+ 				usage_with_options(latency_usage, latency_options);
+ 		}
+ 		setup_sorting(&sched, latency_options, latency_usage);
+-		return perf_sched__lat(&sched);
++		ret = perf_sched__lat(&sched);
+ 	} else if (!strcmp(argv[0], "map")) {
+ 		if (argc) {
+ 			argc = parse_options(argc, argv, map_options, map_usage, 0);
+@@ -3924,13 +3994,14 @@ int cmd_sched(int argc, const char **argv)
+ 				sched.map.task_names = strlist__new(sched.map.task_name, NULL);
+ 				if (sched.map.task_names == NULL) {
+ 					fprintf(stderr, "Failed to parse task names\n");
+-					return -1;
++					ret = -1;
++					goto out;
+ 				}
+ 			}
+ 		}
+ 		sched.tp_handler = &map_ops;
+ 		setup_sorting(&sched, latency_options, latency_usage);
+-		return perf_sched__map(&sched);
++		ret = perf_sched__map(&sched);
+ 	} else if (strlen(argv[0]) > 2 && strstarts("replay", argv[0])) {
+ 		sched.tp_handler = &replay_ops;
+ 		if (argc) {
+@@ -3938,7 +4009,7 @@ int cmd_sched(int argc, const char **argv)
+ 			if (argc)
+ 				usage_with_options(replay_usage, replay_options);
+ 		}
+-		return perf_sched__replay(&sched);
++		ret = perf_sched__replay(&sched);
+ 	} else if (!strcmp(argv[0], "timehist")) {
+ 		if (argc) {
+ 			argc = parse_options(argc, argv, timehist_options,
+@@ -3954,19 +4025,19 @@ int cmd_sched(int argc, const char **argv)
+ 				parse_options_usage(NULL, timehist_options, "w", true);
+ 			if (sched.show_next)
+ 				parse_options_usage(NULL, timehist_options, "n", true);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto out;
+ 		}
+ 		ret = symbol__validate_sym_arguments();
+-		if (ret)
+-			return ret;
+-
+-		return perf_sched__timehist(&sched);
++		if (!ret)
++			ret = perf_sched__timehist(&sched);
+ 	} else {
+ 		usage_with_options(sched_usage, sched_options);
+ 	}
+ 
++out:
+ 	/* free usage string allocated by parse_options_subcommand */
+ 	free((void *)sched_usage[0]);
+ 
+-	return 0;
++	return ret;
+ }
+diff --git a/tools/perf/tests/bp_account.c b/tools/perf/tests/bp_account.c
+index 4cb7d486b5c178..047433c977bc9d 100644
+--- a/tools/perf/tests/bp_account.c
++++ b/tools/perf/tests/bp_account.c
+@@ -104,6 +104,7 @@ static int bp_accounting(int wp_cnt, int share)
+ 		fd_wp = wp_event((void *)&the_var, &attr_new);
+ 		TEST_ASSERT_VAL("failed to create max wp\n", fd_wp != -1);
+ 		pr_debug("wp max created\n");
++		close(fd_wp);
+ 	}
+ 
+ 	for (i = 0; i < wp_cnt; i++)
+diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
+index e763e8d99a4367..ee00313d5d7e2a 100644
+--- a/tools/perf/util/build-id.c
++++ b/tools/perf/util/build-id.c
+@@ -864,7 +864,7 @@ static int dso__cache_build_id(struct dso *dso, struct machine *machine,
+ 	char *allocated_name = NULL;
+ 	int ret = 0;
+ 
+-	if (!dso__has_build_id(dso))
++	if (!dso__has_build_id(dso) || !dso__hit(dso))
+ 		return 0;
+ 
+ 	if (dso__is_kcore(dso)) {
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index d55482f094bf95..1dc1f7b3bfb80c 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1656,6 +1656,15 @@ static void evsel__free_config_terms(struct evsel *evsel)
+ 	free_config_terms(&evsel->config_terms);
+ }
+ 
++static void (*evsel__priv_destructor)(void *priv);
++
++void evsel__set_priv_destructor(void (*destructor)(void *priv))
++{
++	assert(evsel__priv_destructor == NULL);
++
++	evsel__priv_destructor = destructor;
++}
++
+ void evsel__exit(struct evsel *evsel)
+ {
+ 	assert(list_empty(&evsel->core.node));
+@@ -1686,6 +1695,8 @@ void evsel__exit(struct evsel *evsel)
+ 	hashmap__free(evsel->per_pkg_mask);
+ 	evsel->per_pkg_mask = NULL;
+ 	zfree(&evsel->metric_events);
++	if (evsel__priv_destructor)
++		evsel__priv_destructor(evsel->priv);
+ 	perf_evsel__object.fini(evsel);
+ 	if (evsel__tool_event(evsel) == TOOL_PMU__EVENT_SYSTEM_TIME ||
+ 	    evsel__tool_event(evsel) == TOOL_PMU__EVENT_USER_TIME)
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 6dbc9690e0c925..b84ee274602d7d 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -280,6 +280,8 @@ void evsel__init(struct evsel *evsel, struct perf_event_attr *attr, int idx);
+ void evsel__exit(struct evsel *evsel);
+ void evsel__delete(struct evsel *evsel);
+ 
++void evsel__set_priv_destructor(void (*destructor)(void *priv));
++
+ struct callchain_param;
+ 
+ void evsel__config(struct evsel *evsel, struct record_opts *opts,
+diff --git a/tools/perf/util/hwmon_pmu.c b/tools/perf/util/hwmon_pmu.c
+index c25e7296f1c10c..75683c543994e5 100644
+--- a/tools/perf/util/hwmon_pmu.c
++++ b/tools/perf/util/hwmon_pmu.c
+@@ -344,7 +344,7 @@ static int hwmon_pmu__read_events(struct hwmon_pmu *pmu)
+ 
+ struct perf_pmu *hwmon_pmu__new(struct list_head *pmus, int hwmon_dir, const char *sysfs_name, const char *name)
+ {
+-	char buf[32];
++	char buf[64];
+ 	struct hwmon_pmu *hwm;
+ 	__u32 type = PERF_PMU_TYPE_HWMON_START + strtoul(sysfs_name + 5, NULL, 10);
+ 
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 2380de56a20746..d07c83ba6f1a03 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -1829,13 +1829,11 @@ static int parse_events__modifier_list(struct parse_events_state *parse_state,
+ 		int eH = group ? evsel->core.attr.exclude_host : 0;
+ 		int eG = group ? evsel->core.attr.exclude_guest : 0;
+ 		int exclude = eu | ek | eh;
+-		int exclude_GH = group ? evsel->exclude_GH : 0;
++		int exclude_GH = eG | eH;
+ 
+ 		if (mod.user) {
+ 			if (!exclude)
+ 				exclude = eu = ek = eh = 1;
+-			if (!exclude_GH && !perf_guest && exclude_GH_default)
+-				eG = 1;
+ 			eu = 0;
+ 		}
+ 		if (mod.kernel) {
+@@ -1858,6 +1856,13 @@ static int parse_events__modifier_list(struct parse_events_state *parse_state,
+ 				exclude_GH = eG = eH = 1;
+ 			eH = 0;
+ 		}
++		if (!exclude_GH && exclude_GH_default) {
++			if (perf_host)
++				eG = 1;
++			else if (perf_guest)
++				eH = 1;
++		}
++
+ 		evsel->core.attr.exclude_user   = eu;
+ 		evsel->core.attr.exclude_kernel = ek;
+ 		evsel->core.attr.exclude_hv     = eh;
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 609828513f6cfb..55ee17082c7fea 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -452,7 +452,7 @@ static struct perf_pmu_alias *perf_pmu__find_alias(struct perf_pmu *pmu,
+ {
+ 	struct perf_pmu_alias *alias;
+ 	bool has_sysfs_event;
+-	char event_file_name[FILENAME_MAX + 8];
++	char event_file_name[NAME_MAX + 8];
+ 
+ 	if (hashmap__find(pmu->aliases, name, &alias))
+ 		return alias;
+@@ -752,7 +752,7 @@ static int pmu_aliases_parse(struct perf_pmu *pmu)
+ 
+ static int pmu_aliases_parse_eager(struct perf_pmu *pmu, int sysfs_fd)
+ {
+-	char path[FILENAME_MAX + 7];
++	char path[NAME_MAX + 8];
+ 	int ret, events_dir_fd;
+ 
+ 	scnprintf(path, sizeof(path), "%s/events", pmu->name);
+diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
+index 321c333877fa70..b9fe7f2c14af68 100644
+--- a/tools/perf/util/python.c
++++ b/tools/perf/util/python.c
+@@ -10,6 +10,7 @@
+ #endif
+ #include <perf/mmap.h>
+ #include "callchain.h"
++#include "counts.h"
+ #include "evlist.h"
+ #include "evsel.h"
+ #include "event.h"
+@@ -888,12 +889,38 @@ static PyObject *pyrf_evsel__threads(struct pyrf_evsel *pevsel)
+ 	return (PyObject *)pthread_map;
+ }
+ 
++/*
++ * Ensure evsel's counts and prev_raw_counts are allocated, the latter
++ * used by tool PMUs to compute the cumulative count as expected by
++ * stat's process_counter_values.
++ */
++static int evsel__ensure_counts(struct evsel *evsel)
++{
++	int nthreads, ncpus;
++
++	if (evsel->counts != NULL)
++		return 0;
++
++	nthreads = perf_thread_map__nr(evsel->core.threads);
++	ncpus = perf_cpu_map__nr(evsel->core.cpus);
++
++	evsel->counts = perf_counts__new(ncpus, nthreads);
++	if (evsel->counts == NULL)
++		return -ENOMEM;
++
++	evsel->prev_raw_counts = perf_counts__new(ncpus, nthreads);
++	if (evsel->prev_raw_counts == NULL)
++		return -ENOMEM;
++
++	return 0;
++}
++
+ static PyObject *pyrf_evsel__read(struct pyrf_evsel *pevsel,
+ 				  PyObject *args, PyObject *kwargs)
+ {
+ 	struct evsel *evsel = &pevsel->evsel;
+ 	int cpu = 0, cpu_idx, thread = 0, thread_idx;
+-	struct perf_counts_values counts;
++	struct perf_counts_values *old_count, *new_count;
+ 	struct pyrf_counts_values *count_values = PyObject_New(struct pyrf_counts_values,
+ 							       &pyrf_counts_values__type);
+ 
+@@ -909,13 +936,27 @@ static PyObject *pyrf_evsel__read(struct pyrf_evsel *pevsel,
+ 		return NULL;
+ 	}
+ 	thread_idx = perf_thread_map__idx(evsel->core.threads, thread);
+-	if (cpu_idx < 0) {
++	if (thread_idx < 0) {
+ 		PyErr_Format(PyExc_TypeError, "Thread %d is not part of evsel's threads",
+ 			     thread);
+ 		return NULL;
+ 	}
+-	perf_evsel__read(&(evsel->core), cpu_idx, thread_idx, &counts);
+-	count_values->values = counts;
++
++	if (evsel__ensure_counts(evsel))
++		return PyErr_NoMemory();
++
++	/* Set up pointers to the old and newly read counter values. */
++	old_count = perf_counts(evsel->prev_raw_counts, cpu_idx, thread_idx);
++	new_count = perf_counts(evsel->counts, cpu_idx, thread_idx);
++	/* Update the value in evsel->counts. */
++	evsel__read_counter(evsel, cpu_idx, thread_idx);
++	/* Copy the value and turn it into the delta from old_count. */
++	count_values->values = *new_count;
++	count_values->values.val -= old_count->val;
++	count_values->values.ena -= old_count->ena;
++	count_values->values.run -= old_count->run;
++	/* Save the new count over the old_count for the next read. */
++	*old_count = *new_count;
+ 	return (PyObject *)count_values;
+ }
+ 
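
[The read path above converts a raw, monotonically accumulating counter into the per-read delta that stat-style consumers expect: read the new value, subtract the previous snapshot, then save the new value as the next baseline. The val/ena/run triple is differenced together so enabled/running scaling stays consistent. A compact sketch of the pattern, with perf_counts_values reduced to a plain struct:]

    #include <stdio.h>
    #include <stdint.h>

    struct counts { uint64_t val, ena, run; };

    /* Return the delta since the previous call and advance the baseline. */
    static struct counts read_delta(struct counts *prev,
                                    const struct counts *now)
    {
        struct counts d = {
            .val = now->val - prev->val,
            .ena = now->ena - prev->ena,
            .run = now->run - prev->run,
        };

        *prev = *now;  /* new baseline for the next read */
        return d;
    }

    int main(void)
    {
        struct counts prev = {0}, now = { 1000, 50, 50 };
        struct counts d = read_delta(&prev, &now);

        printf("delta=%llu\n", (unsigned long long)d.val);  /* 1000 */
        now = (struct counts){ 1500, 80, 80 };
        d = read_delta(&prev, &now);
        printf("delta=%llu\n", (unsigned long long)d.val);  /* 500 */
        return 0;
    }
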
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 8b30c6f16a9eea..fd4583718eabf3 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -1422,6 +1422,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
+ 				goto out_err;
+ 			}
+ 		}
++		map__zput(new_node->map);
+ 		free(new_node);
+ 	}
+ 
+diff --git a/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c b/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
+index ad493157f826f3..e8b3841d5c0f81 100644
+--- a/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
+@@ -121,10 +121,8 @@ void print_header(int topology_depth)
+ 	switch (topology_depth) {
+ 	case TOPOLOGY_DEPTH_PKG:
+ 		printf(" PKG|");
+-		break;
+ 	case TOPOLOGY_DEPTH_CORE:
+ 		printf("CORE|");
+-		break;
+ 	case	TOPOLOGY_DEPTH_CPU:
+ 		printf(" CPU|");
+ 		break;
+@@ -167,10 +165,8 @@ void print_results(int topology_depth, int cpu)
+ 	switch (topology_depth) {
+ 	case TOPOLOGY_DEPTH_PKG:
+ 		printf("%4d|", cpu_top.core_info[cpu].pkg);
+-		break;
+ 	case TOPOLOGY_DEPTH_CORE:
+ 		printf("%4d|", cpu_top.core_info[cpu].core);
+-		break;
+ 	case TOPOLOGY_DEPTH_CPU:
+ 		printf("%4d|", cpu_top.core_info[cpu].cpu);
+ 		break;
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 5230e072e414c6..426eabc10d765f 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -840,20 +840,21 @@ static const struct platform_features spr_features = {
+ };
+ 
+ static const struct platform_features dmr_features = {
+-	.has_msr_misc_feature_control = spr_features.has_msr_misc_feature_control,
+-	.has_msr_misc_pwr_mgmt = spr_features.has_msr_misc_pwr_mgmt,
+-	.has_nhm_msrs = spr_features.has_nhm_msrs,
+-	.has_config_tdp = spr_features.has_config_tdp,
+-	.bclk_freq = spr_features.bclk_freq,
+-	.supported_cstates = spr_features.supported_cstates,
+-	.cst_limit = spr_features.cst_limit,
+-	.has_msr_core_c1_res = spr_features.has_msr_core_c1_res,
+-	.has_msr_module_c6_res_ms = 1,	/* DMR has Dual Core Module and MC6 MSR */
+-	.has_irtl_msrs = spr_features.has_irtl_msrs,
+-	.has_cst_prewake_bit = spr_features.has_cst_prewake_bit,
+-	.has_fixed_rapl_psys_unit = spr_features.has_fixed_rapl_psys_unit,
+-	.trl_msrs = spr_features.trl_msrs,
+-	.rapl_msrs = 0,		/* DMR does not have RAPL MSRs */
++	.has_msr_misc_feature_control	= spr_features.has_msr_misc_feature_control,
++	.has_msr_misc_pwr_mgmt		= spr_features.has_msr_misc_pwr_mgmt,
++	.has_nhm_msrs			= spr_features.has_nhm_msrs,
++	.bclk_freq			= spr_features.bclk_freq,
++	.supported_cstates		= spr_features.supported_cstates,
++	.cst_limit			= spr_features.cst_limit,
++	.has_msr_core_c1_res		= spr_features.has_msr_core_c1_res,
++	.has_cst_prewake_bit		= spr_features.has_cst_prewake_bit,
++	.has_fixed_rapl_psys_unit	= spr_features.has_fixed_rapl_psys_unit,
++	.trl_msrs			= spr_features.trl_msrs,
++	.has_msr_module_c6_res_ms	= 1,	/* DMR has Dual-Core-Module and MC6 MSR */
++	.rapl_msrs			= 0,	/* DMR does not have RAPL MSRs */
++	.plr_msrs			= 0,	/* DMR does not have PLR  MSRs */
++	.has_irtl_msrs			= 0,	/* DMR does not have IRTL MSRs */
++	.has_config_tdp			= 0,	/* DMR does not have CTDP MSRs */
+ };
+ 
+ static const struct platform_features srf_features = {
+@@ -2429,7 +2430,6 @@ unsigned long long bic_lookup(char *name_list, enum show_hide_mode mode)
+ 
+ 		}
+ 		if (i == MAX_BIC) {
+-			fprintf(stderr, "deferred %s\n", name_list);
+ 			if (mode == SHOW_LIST) {
+ 				deferred_add_names[deferred_add_index++] = name_list;
+ 				if (deferred_add_index >= MAX_DEFERRED) {
+@@ -9817,6 +9817,7 @@ int fork_it(char **argv)
+ 	timersub(&tv_odd, &tv_even, &tv_delta);
+ 	if (for_all_cpus_2(delta_cpu, ODD_COUNTERS, EVEN_COUNTERS))
+ 		fprintf(outf, "%s: Counter reset detected\n", progname);
++	delta_platform(&platform_counters_odd, &platform_counters_even);
+ 
+ 	compute_average(EVEN_COUNTERS);
+ 	format_all_counters(EVEN_COUNTERS);
+@@ -10537,9 +10538,6 @@ void probe_cpuidle_residency(void)
+ 	int min_state = 1024, max_state = 0;
+ 	char *sp;
+ 
+-	if (!DO_BIC(BIC_pct_idle))
+-		return;
+-
+ 	for (state = 10; state >= 0; --state) {
+ 
+ 		sprintf(path, "/sys/devices/system/cpu/cpu%d/cpuidle/state%d/name", base_cpu, state);
+diff --git a/tools/testing/selftests/alsa/utimer-test.c b/tools/testing/selftests/alsa/utimer-test.c
+index 32ee3ce577216b..37964f311a3397 100644
+--- a/tools/testing/selftests/alsa/utimer-test.c
++++ b/tools/testing/selftests/alsa/utimer-test.c
+@@ -135,6 +135,7 @@ TEST_F(timer_f, utimer) {
+ 	pthread_join(ticking_thread, NULL);
+ 	ASSERT_EQ(total_ticks, TICKS_COUNT);
+ 	pclose(rfp);
++	free(buf);
+ }
+ 
+ TEST(wrong_timers_test) {
+diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+index 577b6e05e860c9..c499d5789dd53f 100644
+--- a/tools/testing/selftests/arm64/fp/sve-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+@@ -253,7 +253,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
+ 		return;
+ 	}
+ 
+-	ksft_test_result(new_sve->vl = prctl_vl, "Set %s VL %u\n",
++	ksft_test_result(new_sve->vl == prctl_vl, "Set %s VL %u\n",
+ 			 type->name, vl);
+ 
+ 	free(new_sve);
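(The one-character fix above matters: with `=` the condition was an assignment,
which always evaluates truthy for a nonzero VL, so the ksft check reported
success regardless of the vector length the kernel actually set.)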
+diff --git a/tools/testing/selftests/bpf/bpf_atomic.h b/tools/testing/selftests/bpf/bpf_atomic.h
+index a9674e54432228..c550e571196774 100644
+--- a/tools/testing/selftests/bpf/bpf_atomic.h
++++ b/tools/testing/selftests/bpf/bpf_atomic.h
+@@ -61,7 +61,7 @@ extern bool CONFIG_X86_64 __kconfig __weak;
+ 
+ #define smp_mb()                                 \
+ 	({                                       \
+-		unsigned long __val;             \
++		volatile unsigned long __val;    \
+ 		__sync_fetch_and_add(&__val, 0); \
+ 	})
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+index 1d98eee7a2c3a7..f1bdccc7e4e794 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+@@ -924,6 +924,8 @@ static void redir_partial(int family, int sotype, int sock_map, int parser_map)
+ 		goto close;
+ 
+ 	n = xsend(c1, buf, sizeof(buf), 0);
++	if (n == -1)
++		goto close;
+ 	if (n < sizeof(buf))
+ 		FAIL("incomplete write");
+ 
+diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
+index b2bb20b009524d..adf948fff21106 100644
+--- a/tools/testing/selftests/bpf/veristat.c
++++ b/tools/testing/selftests/bpf/veristat.c
+@@ -344,6 +344,7 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state)
+ 			fprintf(stderr, "invalid top N specifier: %s\n", arg);
+ 			argp_usage(state);
+ 		}
++		break;
+ 	case 'C':
+ 		env.comparison_mode = true;
+ 		break;
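(Without the added break, a successfully parsed top-N value fell straight
through into the 'C' case and silently switched veristat into comparison mode.)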
+diff --git a/tools/testing/selftests/breakpoints/step_after_suspend_test.c b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+index 8d275f03e977f5..8d233ac95696be 100644
+--- a/tools/testing/selftests/breakpoints/step_after_suspend_test.c
++++ b/tools/testing/selftests/breakpoints/step_after_suspend_test.c
+@@ -127,22 +127,42 @@ int run_test(int cpu)
+ 	return KSFT_PASS;
+ }
+ 
++/*
++ * Reads the suspend success count from sysfs.
++ * Returns the count on success or exits on failure.
++ */
++static int get_suspend_success_count_or_fail(void)
++{
++	FILE *fp;
++	int val;
++
++	fp = fopen("/sys/power/suspend_stats/success", "r");
++	if (!fp)
++		ksft_exit_fail_msg(
++			"Failed to open suspend_stats/success: %s\n",
++			strerror(errno));
++
++	if (fscanf(fp, "%d", &val) != 1) {
++		fclose(fp);
++		ksft_exit_fail_msg(
++			"Failed to read suspend success count\n");
++	}
++
++	fclose(fp);
++	return val;
++}
++
+ void suspend(void)
+ {
+-	int power_state_fd;
+ 	int timerfd;
+ 	int err;
++	int count_before;
++	int count_after;
+ 	struct itimerspec spec = {};
+ 
+ 	if (getuid() != 0)
+ 		ksft_exit_skip("Please run the test as root - Exiting.\n");
+ 
+-	power_state_fd = open("/sys/power/state", O_RDWR);
+-	if (power_state_fd < 0)
+-		ksft_exit_fail_msg(
+-			"open(\"/sys/power/state\") failed %s)\n",
+-			strerror(errno));
+-
+ 	timerfd = timerfd_create(CLOCK_BOOTTIME_ALARM, 0);
+ 	if (timerfd < 0)
+ 		ksft_exit_fail_msg("timerfd_create() failed\n");
+@@ -152,14 +172,15 @@ void suspend(void)
+ 	if (err < 0)
+ 		ksft_exit_fail_msg("timerfd_settime() failed\n");
+ 
++	count_before = get_suspend_success_count_or_fail();
++
+ 	system("(echo mem > /sys/power/state) 2> /dev/null");
+ 
+-	timerfd_gettime(timerfd, &spec);
+-	if (spec.it_value.tv_sec != 0 || spec.it_value.tv_nsec != 0)
++	count_after = get_suspend_success_count_or_fail();
++	if (count_after <= count_before)
+ 		ksft_exit_fail_msg("Failed to enter Suspend state\n");
+ 
+ 	close(timerfd);
+-	close(power_state_fd);
+ }
+ 
+ int main(int argc, char **argv)
+diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
+index a2b50af8e9eeed..2a60e6c41940c5 100644
+--- a/tools/testing/selftests/cgroup/test_cpu.c
++++ b/tools/testing/selftests/cgroup/test_cpu.c
+@@ -2,6 +2,7 @@
+ 
+ #define _GNU_SOURCE
+ #include <linux/limits.h>
++#include <sys/param.h>
+ #include <sys/sysinfo.h>
+ #include <sys/wait.h>
+ #include <errno.h>
+@@ -645,10 +646,16 @@ test_cpucg_nested_weight_underprovisioned(const char *root)
+ static int test_cpucg_max(const char *root)
+ {
+ 	int ret = KSFT_FAIL;
+-	long usage_usec, user_usec;
+-	long usage_seconds = 1;
+-	long expected_usage_usec = usage_seconds * USEC_PER_SEC;
++	long quota_usec = 1000;
++	long default_period_usec = 100000; /* cpu.max's default period */
++	long duration_seconds = 1;
++
++	long duration_usec = duration_seconds * USEC_PER_SEC;
++	long usage_usec, n_periods, remainder_usec, expected_usage_usec;
+ 	char *cpucg;
++	char quota_buf[32];
++
++	snprintf(quota_buf, sizeof(quota_buf), "%ld", quota_usec);
+ 
+ 	cpucg = cg_name(root, "cpucg_test");
+ 	if (!cpucg)
+@@ -657,13 +664,13 @@ static int test_cpucg_max(const char *root)
+ 	if (cg_create(cpucg))
+ 		goto cleanup;
+ 
+-	if (cg_write(cpucg, "cpu.max", "1000"))
++	if (cg_write(cpucg, "cpu.max", quota_buf))
+ 		goto cleanup;
+ 
+ 	struct cpu_hog_func_param param = {
+ 		.nprocs = 1,
+ 		.ts = {
+-			.tv_sec = usage_seconds,
++			.tv_sec = duration_seconds,
+ 			.tv_nsec = 0,
+ 		},
+ 		.clock_type = CPU_HOG_CLOCK_WALL,
+@@ -672,14 +679,19 @@ static int test_cpucg_max(const char *root)
+ 		goto cleanup;
+ 
+ 	usage_usec = cg_read_key_long(cpucg, "cpu.stat", "usage_usec");
+-	user_usec = cg_read_key_long(cpucg, "cpu.stat", "user_usec");
+-	if (user_usec <= 0)
++	if (usage_usec <= 0)
+ 		goto cleanup;
+ 
+-	if (user_usec >= expected_usage_usec)
+-		goto cleanup;
++	/*
++	 * The following calculation applies only because
++	 * the cpu hog is set to run on wall-clock time
++	 */
++	n_periods = duration_usec / default_period_usec;
++	remainder_usec = duration_usec - n_periods * default_period_usec;
++	expected_usage_usec
++		= n_periods * quota_usec + MIN(remainder_usec, quota_usec);
+ 
+-	if (values_close(usage_usec, expected_usage_usec, 95))
++	if (!values_close(usage_usec, expected_usage_usec, 10))
+ 		goto cleanup;
+ 
+ 	ret = KSFT_PASS;
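(With the defaults above: duration_usec = 1,000,000 and the 100,000 usec
default period give n_periods = 10 and remainder_usec = 0, so
expected_usage_usec = 10 * 1000 + MIN(0, 1000) = 10,000 usec; the hog should
be throttled to roughly 10 ms of CPU over the one-second run.)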
+@@ -698,10 +710,16 @@ static int test_cpucg_max(const char *root)
+ static int test_cpucg_max_nested(const char *root)
+ {
+ 	int ret = KSFT_FAIL;
+-	long usage_usec, user_usec;
+-	long usage_seconds = 1;
+-	long expected_usage_usec = usage_seconds * USEC_PER_SEC;
++	long quota_usec = 1000;
++	long default_period_usec = 100000; /* cpu.max's default period */
++	long duration_seconds = 1;
++
++	long duration_usec = duration_seconds * USEC_PER_SEC;
++	long usage_usec, n_periods, remainder_usec, expected_usage_usec;
+ 	char *parent, *child;
++	char quota_buf[32];
++
++	snprintf(quota_buf, sizeof(quota_buf), "%ld", quota_usec);
+ 
+ 	parent = cg_name(root, "cpucg_parent");
+ 	child = cg_name(parent, "cpucg_child");
+@@ -717,13 +735,13 @@ static int test_cpucg_max_nested(const char *root)
+ 	if (cg_create(child))
+ 		goto cleanup;
+ 
+-	if (cg_write(parent, "cpu.max", "1000"))
++	if (cg_write(parent, "cpu.max", quota_buf))
+ 		goto cleanup;
+ 
+ 	struct cpu_hog_func_param param = {
+ 		.nprocs = 1,
+ 		.ts = {
+-			.tv_sec = usage_seconds,
++			.tv_sec = duration_seconds,
+ 			.tv_nsec = 0,
+ 		},
+ 		.clock_type = CPU_HOG_CLOCK_WALL,
+@@ -732,14 +750,19 @@ static int test_cpucg_max_nested(const char *root)
+ 		goto cleanup;
+ 
+ 	usage_usec = cg_read_key_long(child, "cpu.stat", "usage_usec");
+-	user_usec = cg_read_key_long(child, "cpu.stat", "user_usec");
+-	if (user_usec <= 0)
++	if (usage_usec <= 0)
+ 		goto cleanup;
+ 
+-	if (user_usec >= expected_usage_usec)
+-		goto cleanup;
++	/*
++	 * The following calculation applies only because
++	 * the cpu hog is set to run on wall-clock time
++	 */
++	n_periods = duration_usec / default_period_usec;
++	remainder_usec = duration_usec - n_periods * default_period_usec;
++	expected_usage_usec
++		= n_periods * quota_usec + MIN(remainder_usec, quota_usec);
+ 
+-	if (values_close(usage_usec, expected_usage_usec, 95))
++	if (!values_close(usage_usec, expected_usage_usec, 10))
+ 		goto cleanup;
+ 
+ 	ret = KSFT_PASS;
+diff --git a/tools/testing/selftests/drivers/net/hw/tso.py b/tools/testing/selftests/drivers/net/hw/tso.py
+index 3370827409aa02..5fddb5056a2053 100755
+--- a/tools/testing/selftests/drivers/net/hw/tso.py
++++ b/tools/testing/selftests/drivers/net/hw/tso.py
+@@ -102,7 +102,7 @@ def build_tunnel(cfg, outer_ipver, tun_info):
+     remote_addr = cfg.remote_addr_v[outer_ipver]
+ 
+     tun_type = tun_info[0]
+-    tun_arg  = tun_info[2]
++    tun_arg  = tun_info[1]
+     ip(f"link add {tun_type}-ksft type {tun_type} {tun_arg} local {local_addr} remote {remote_addr} dev {cfg.ifname}")
+     defer(ip, f"link del {tun_type}-ksft")
+     ip(f"link set dev {tun_type}-ksft up")
+@@ -119,15 +119,30 @@ def build_tunnel(cfg, outer_ipver, tun_info):
+     return remote_v4, remote_v6
+ 
+ 
++def restore_wanted_features(cfg):
++    features_cmd = ""
++    for feature in cfg.hw_features:
++        setting = "on" if feature in cfg.wanted_features else "off"
++        features_cmd += f" {feature} {setting}"
++    try:
++        ethtool(f"-K {cfg.ifname} {features_cmd}")
++    except Exception as e:
++        ksft_pr(f"WARNING: failure restoring wanted features: {e}")
++
++
+ def test_builder(name, cfg, outer_ipver, feature, tun=None, inner_ipver=None):
+     """Construct specific tests from the common template."""
+     def f(cfg):
+         cfg.require_ipver(outer_ipver)
++        defer(restore_wanted_features, cfg)
+ 
+         if not cfg.have_stat_super_count and \
+            not cfg.have_stat_wire_count:
+             raise KsftSkipEx(f"Device does not support LSO queue stats")
+ 
++        if feature not in cfg.hw_features:
++            raise KsftSkipEx(f"Device does not support {feature}")
++
+         ipver = outer_ipver
+         if tun:
+             remote_v4, remote_v6 = build_tunnel(cfg, ipver, tun)
+@@ -136,36 +151,21 @@ def test_builder(name, cfg, outer_ipver, feature, tun=None, inner_ipver=None):
+             remote_v4 = cfg.remote_addr_v["4"]
+             remote_v6 = cfg.remote_addr_v["6"]
+ 
+-        tun_partial = tun and tun[1]
+-        # Tunnel which can silently fall back to gso-partial
+-        has_gso_partial = tun and 'tx-gso-partial' in cfg.features
+-
+-        # For TSO4 via partial we need mangleid
+-        if ipver == "4" and feature in cfg.partial_features:
+-            ksft_pr("Testing with mangleid enabled")
+-            if 'tx-tcp-mangleid-segmentation' not in cfg.features:
+-                ethtool(f"-K {cfg.ifname} tx-tcp-mangleid-segmentation on")
+-                defer(ethtool, f"-K {cfg.ifname} tx-tcp-mangleid-segmentation off")
+-
+         # First test without the feature enabled.
+         ethtool(f"-K {cfg.ifname} {feature} off")
+-        if has_gso_partial:
+-            ethtool(f"-K {cfg.ifname} tx-gso-partial off")
+         run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso=False)
+ 
+-        # Now test with the feature enabled.
+-        # For compatible tunnels only - just GSO partial, not specific feature.
+-        if has_gso_partial:
++        ethtool(f"-K {cfg.ifname} tx-gso-partial off")
++        ethtool(f"-K {cfg.ifname} tx-tcp-mangleid-segmentation off")
++        if feature in cfg.partial_features:
+             ethtool(f"-K {cfg.ifname} tx-gso-partial on")
+-            run_one_stream(cfg, ipver, remote_v4, remote_v6,
+-                           should_lso=tun_partial)
++            if ipver == "4":
++                ksft_pr("Testing with mangleid enabled")
++                ethtool(f"-K {cfg.ifname} tx-tcp-mangleid-segmentation on")
+ 
+         # Full feature enabled.
+-        if feature in cfg.features:
+-            ethtool(f"-K {cfg.ifname} {feature} on")
+-            run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso=True)
+-        else:
+-            raise KsftXfailEx(f"Device does not support {feature}")
++        ethtool(f"-K {cfg.ifname} {feature} on")
++        run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso=True)
+ 
+     f.__name__ = name + ((outer_ipver + "_") if tun else "") + "ipv" + inner_ipver
+     return f
+@@ -176,23 +176,39 @@ def query_nic_features(cfg) -> None:
+     cfg.have_stat_super_count = False
+     cfg.have_stat_wire_count = False
+ 
+-    cfg.features = set()
+     features = cfg.ethnl.features_get({"header": {"dev-index": cfg.ifindex}})
+-    for f in features["active"]["bits"]["bit"]:
+-        cfg.features.add(f["name"])
++
++    cfg.wanted_features = set()
++    for f in features["wanted"]["bits"]["bit"]:
++        cfg.wanted_features.add(f["name"])
++
++    cfg.hw_features = set()
++    hw_all_features_cmd = ""
++    for f in features["hw"]["bits"]["bit"]:
++        if f.get("value", False):
++            feature = f["name"]
++            cfg.hw_features.add(feature)
++            hw_all_features_cmd += f" {feature} on"
++    try:
++        ethtool(f"-K {cfg.ifname} {hw_all_features_cmd}")
++    except Exception as e:
++        ksft_pr(f"WARNING: failure enabling all hw features: {e}")
++        ksft_pr("partial gso feature detection may be impacted")
+ 
+     # Check which features are supported via GSO partial
+     cfg.partial_features = set()
+-    if 'tx-gso-partial' in cfg.features:
++    if 'tx-gso-partial' in cfg.hw_features:
+         ethtool(f"-K {cfg.ifname} tx-gso-partial off")
+ 
+         no_partial = set()
+         features = cfg.ethnl.features_get({"header": {"dev-index": cfg.ifindex}})
+         for f in features["active"]["bits"]["bit"]:
+             no_partial.add(f["name"])
+-        cfg.partial_features = cfg.features - no_partial
++        cfg.partial_features = cfg.hw_features - no_partial
+         ethtool(f"-K {cfg.ifname} tx-gso-partial on")
+ 
++    restore_wanted_features(cfg)
++
+     stats = cfg.netnl.qstats_get({"ifindex": cfg.ifindex}, dump=True)
+     if stats:
+         if 'tx-hw-gso-packets' in stats[0]:
+@@ -211,13 +227,14 @@ def main() -> None:
+         query_nic_features(cfg)
+ 
+         test_info = (
+-            # name,       v4/v6  ethtool_feature              tun:(type,    partial, args)
+-            ("",            "4", "tx-tcp-segmentation",           None),
+-            ("",            "6", "tx-tcp6-segmentation",          None),
+-            ("vxlan",        "", "tx-udp_tnl-segmentation",       ("vxlan",  True,  "id 100 dstport 4789 noudpcsum")),
+-            ("vxlan_csum",   "", "tx-udp_tnl-csum-segmentation",  ("vxlan",  False, "id 100 dstport 4789 udpcsum")),
+-            ("gre",         "4", "tx-gre-segmentation",           ("gre",    False,  "")),
+-            ("gre",         "6", "tx-gre-segmentation",           ("ip6gre", False,  "")),
++            # name,       v4/v6  ethtool_feature               tun:(type, args, inner ip versions)
++            ("",           "4", "tx-tcp-segmentation",         None),
++            ("",           "6", "tx-tcp6-segmentation",        None),
++            ("vxlan",      "4", "tx-udp_tnl-segmentation",     ("vxlan", "id 100 dstport 4789 noudpcsum", ("4", "6"))),
++            ("vxlan",      "6", "tx-udp_tnl-segmentation",     ("vxlan", "id 100 dstport 4789 udp6zerocsumtx udp6zerocsumrx", ("4", "6"))),
++            ("vxlan_csum", "", "tx-udp_tnl-csum-segmentation", ("vxlan", "id 100 dstport 4789 udpcsum", ("4", "6"))),
++            ("gre",        "4", "tx-gre-segmentation",         ("gre",   "", ("4", "6"))),
++            ("gre",        "6", "tx-gre-segmentation",         ("ip6gre","", ("4", "6"))),
+         )
+ 
+         cases = []
+@@ -227,11 +244,13 @@ def main() -> None:
+                 if info[1] and outer_ipver != info[1]:
+                     continue
+ 
+-                cases.append(test_builder(info[0], cfg, outer_ipver, info[2],
+-                                          tun=info[3], inner_ipver="4"))
+                 if info[3]:
+-                    cases.append(test_builder(info[0], cfg, outer_ipver, info[2],
+-                                              tun=info[3], inner_ipver="6"))
++                    cases += [
++                        test_builder(info[0], cfg, outer_ipver, info[2], info[3], inner_ipver)
++                        for inner_ipver in info[3][2]
++                    ]
++                else:
++                    cases.append(test_builder(info[0], cfg, outer_ipver, info[2], None, outer_ipver))
+ 
+         ksft_run(cases=cases, args=(cfg, ))
+     ksft_exit()
+diff --git a/tools/testing/selftests/drivers/net/lib/py/env.py b/tools/testing/selftests/drivers/net/lib/py/env.py
+index 3bccddf8cbc5a2..1b8bd648048f74 100644
+--- a/tools/testing/selftests/drivers/net/lib/py/env.py
++++ b/tools/testing/selftests/drivers/net/lib/py/env.py
+@@ -259,7 +259,7 @@ class NetDrvEpEnv(NetDrvEnvBase):
+             if not self._require_cmd(comm, "local"):
+                 raise KsftSkipEx("Test requires command: " + comm)
+         if remote:
+-            if not self._require_cmd(comm, "remote"):
++            if not self._require_cmd(comm, "remote", host=self.remote):
+                 raise KsftSkipEx("Test requires (remote) command: " + comm)
+ 
+     def wait_hw_stats_settle(self):
+diff --git a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+index b7c8f29c09a978..65916bb55dfbbf 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -14,11 +14,35 @@ fail() { #msg
+     exit_fail
+ }
+ 
++# As reading trace can last forever, simply look for 3 different
++# events, then stop reading the file. If there are not 3 different
++# events, the test has failed.
++check_unique() {
++    cat trace | grep -v '^#' | awk '
++	BEGIN { cnt = 0; }
++	{
++	    for (i = 0; i < cnt; i++) {
++		if (event[i] == $5) {
++		    break;
++		}
++	    }
++	    if (i == cnt) {
++		event[cnt++] = $5;
++		if (cnt > 2) {
++		    exit;
++		}
++	    }
++	}
++	END {
++	    printf "%d", cnt;
++	}'
++}
++
+ echo 'sched:*' > set_event
+ 
+ yield
+ 
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+     fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +53,7 @@ echo 1 > events/sched/enable
+ 
+ yield
+ 
+-count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`check_unique`
+ if [ $count -lt 3 ]; then
+     fail "at least fork, exec and exit events should be recorded"
+ fi
+diff --git a/tools/testing/selftests/landlock/audit.h b/tools/testing/selftests/landlock/audit.h
+index 18a6014920b5f8..b16986aa64427b 100644
+--- a/tools/testing/selftests/landlock/audit.h
++++ b/tools/testing/selftests/landlock/audit.h
+@@ -403,11 +403,12 @@ static int audit_init_filter_exe(struct audit_filter *filter, const char *path)
+ 	/* It is assumed that there are no filtering rules already. */
+ 	filter->record_type = AUDIT_EXE;
+ 	if (!path) {
+-		filter->exe_len = readlink("/proc/self/exe", filter->exe,
+-					   sizeof(filter->exe) - 1);
+-		if (filter->exe_len < 0)
++		int ret = readlink("/proc/self/exe", filter->exe,
++				   sizeof(filter->exe) - 1);
++		if (ret < 0)
+ 			return -errno;
+ 
++		filter->exe_len = ret;
+ 		return 0;
+ 	}
+ 
+diff --git a/tools/testing/selftests/landlock/audit_test.c b/tools/testing/selftests/landlock/audit_test.c
+index cfc571afd0eb81..46d02d49835aae 100644
+--- a/tools/testing/selftests/landlock/audit_test.c
++++ b/tools/testing/selftests/landlock/audit_test.c
+@@ -7,6 +7,7 @@
+ 
+ #define _GNU_SOURCE
+ #include <errno.h>
++#include <fcntl.h>
+ #include <limits.h>
+ #include <linux/landlock.h>
+ #include <pthread.h>
+diff --git a/tools/testing/selftests/net/netfilter/ipvs.sh b/tools/testing/selftests/net/netfilter/ipvs.sh
+index 6af2ea3ad6b820..9c9d5b38ab7122 100755
+--- a/tools/testing/selftests/net/netfilter/ipvs.sh
++++ b/tools/testing/selftests/net/netfilter/ipvs.sh
+@@ -151,7 +151,7 @@ test_nat() {
+ test_tun() {
+ 	ip netns exec "${ns0}" ip route add "${vip_v4}" via "${gip_v4}" dev br0
+ 
+-	ip netns exec "${ns1}" modprobe -q ipip
++	modprobe -q ipip
+ 	ip netns exec "${ns1}" ip link set tunl0 up
+ 	ip netns exec "${ns1}" sysctl -qw net.ipv4.ip_forward=0
+ 	ip netns exec "${ns1}" sysctl -qw net.ipv4.conf.all.send_redirects=0
+@@ -160,10 +160,10 @@ test_tun() {
+ 	ip netns exec "${ns1}" ipvsadm -a -i -t "${vip_v4}:${port}" -r ${rip_v4}:${port}
+ 	ip netns exec "${ns1}" ip addr add ${vip_v4}/32 dev lo:1
+ 
+-	ip netns exec "${ns2}" modprobe -q ipip
+ 	ip netns exec "${ns2}" ip link set tunl0 up
+ 	ip netns exec "${ns2}" sysctl -qw net.ipv4.conf.all.arp_ignore=1
+ 	ip netns exec "${ns2}" sysctl -qw net.ipv4.conf.all.arp_announce=2
++	ip netns exec "${ns2}" sysctl -qw net.ipv4.conf.tunl0.rp_filter=0
+ 	ip netns exec "${ns2}" ip addr add "${vip_v4}/32" dev lo:1
+ 
+ 	test_service
+diff --git a/tools/testing/selftests/net/netfilter/nft_interface_stress.sh b/tools/testing/selftests/net/netfilter/nft_interface_stress.sh
+index 5ff7be9daeee1c..c0fffaa6dbd9a8 100755
+--- a/tools/testing/selftests/net/netfilter/nft_interface_stress.sh
++++ b/tools/testing/selftests/net/netfilter/nft_interface_stress.sh
+@@ -10,6 +10,8 @@ source lib.sh
+ checktool "nft --version" "run test without nft tool"
+ checktool "iperf3 --version" "run test without iperf3 tool"
+ 
++read kernel_tainted < /proc/sys/kernel/tainted
++
+ # how many seconds to torture the kernel?
+ # default to 80% of max run time but don't exceed 48s
+ TEST_RUNTIME=$((${kselftest_timeout:-60} * 8 / 10))
+@@ -135,7 +137,8 @@ else
+ 	wait
+ fi
+ 
+-[[ $(</proc/sys/kernel/tainted) -eq 0 ]] || {
++
++[[ $kernel_tainted -eq 0 && $(</proc/sys/kernel/tainted) -ne 0 ]] && {
+ 	echo "FAIL: Kernel is tainted!"
+ 	exit $ksft_fail
+ }
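(The reworked check records the taint value before the stress run and fails
only if the kernel becomes tainted during the test, so a kernel that was
already tainted at startup no longer causes a spurious failure.)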
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 2e8243a65b5070..d2298da320a673 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -673,6 +673,11 @@ kci_test_ipsec_offload()
+ 	sysfsf=$sysfsd/ipsec
+ 	sysfsnet=/sys/bus/netdevsim/devices/netdevsim0/net/
+ 	probed=false
++	esp4_offload_probed_default=false
++
++	if lsmod | grep -q esp4_offload; then
++		esp4_offload_probed_default=true
++	fi
+ 
+ 	if ! mount | grep -q debugfs; then
+ 		mount -t debugfs none /sys/kernel/debug/ &> /dev/null
+@@ -766,6 +771,7 @@ EOF
+ 	fi
+ 
+ 	# clean up any leftovers
++	! "$esp4_offload_probed_default" && lsmod | grep -q esp4_offload && rmmod esp4_offload
+ 	echo 0 > /sys/bus/netdevsim/del_device
+ 	$probed && rmmod netdevsim
+ 
+diff --git a/tools/testing/selftests/net/vlan_hw_filter.sh b/tools/testing/selftests/net/vlan_hw_filter.sh
+index 0fb56baf28e4a4..e195d5cab6f75a 100755
+--- a/tools/testing/selftests/net/vlan_hw_filter.sh
++++ b/tools/testing/selftests/net/vlan_hw_filter.sh
+@@ -55,10 +55,10 @@ test_vlan0_del_crash_01() {
+ 	ip netns exec ${NETNS} ip link add bond0 type bond mode 0
+ 	ip netns exec ${NETNS} ip link add link bond0 name vlan0 type vlan id 0 protocol 802.1q
+ 	ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off
+-	ip netns exec ${NETNS} ifconfig bond0 up
++	ip netns exec ${NETNS} ip link set dev bond0 up
+ 	ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter on
+-	ip netns exec ${NETNS} ifconfig bond0 down
+-	ip netns exec ${NETNS} ifconfig bond0 up
++	ip netns exec ${NETNS} ip link set dev bond0 down
++	ip netns exec ${NETNS} ip link set dev bond0 up
+ 	ip netns exec ${NETNS} ip link del vlan0 || fail "Please check vlan HW filter function"
+ 	cleanup
+ }
+@@ -68,11 +68,11 @@ test_vlan0_del_crash_02() {
+ 	setup
+ 	ip netns exec ${NETNS} ip link add bond0 type bond mode 0
+ 	ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off
+-	ip netns exec ${NETNS} ifconfig bond0 up
++	ip netns exec ${NETNS} ip link set dev bond0 up
+ 	ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter on
+ 	ip netns exec ${NETNS} ip link add link bond0 name vlan0 type vlan id 0 protocol 802.1q
+-	ip netns exec ${NETNS} ifconfig bond0 down
+-	ip netns exec ${NETNS} ifconfig bond0 up
++	ip netns exec ${NETNS} ip link set dev bond0 down
++	ip netns exec ${NETNS} ip link set dev bond0 up
+ 	ip netns exec ${NETNS} ip link del vlan0 || fail "Please check vlan HW filter function"
+ 	cleanup
+ }
+@@ -84,9 +84,9 @@ test_vlan0_del_crash_03() {
+ 	ip netns exec ${NETNS} ip link add bond0 type bond mode 0
+ 	ip netns exec ${NETNS} ip link add link bond0 name vlan0 type vlan id 0 protocol 802.1q
+ 	ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter off
+-	ip netns exec ${NETNS} ifconfig bond0 up
++	ip netns exec ${NETNS} ip link set dev bond0 up
+ 	ip netns exec ${NETNS} ethtool -K bond0 rx-vlan-filter on
+-	ip netns exec ${NETNS} ifconfig bond0 down
++	ip netns exec ${NETNS} ip link set dev bond0 down
+ 	ip netns exec ${NETNS} ip link del vlan0 || fail "Please check vlan HW filter function"
+ 	cleanup
+ }
+diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
+index dbe13000fb1ac1..b5c04c13724956 100644
+--- a/tools/testing/selftests/nolibc/nolibc-test.c
++++ b/tools/testing/selftests/nolibc/nolibc-test.c
+@@ -1646,6 +1646,28 @@ int test_strerror(void)
+ 	return 0;
+ }
+ 
++static int test_printf_error(void)
++{
++	int fd, ret, saved_errno;
++
++	fd = open("/dev/full", O_RDWR);
++	if (fd == -1)
++		return 1;
++
++	errno = 0;
++	ret = dprintf(fd, "foo");
++	saved_errno = errno;
++	close(fd);
++
++	if (ret != -1)
++		return 2;
++
++	if (saved_errno != ENOSPC)
++		return 3;
++
++	return 0;
++}
++
+ static int run_printf(int min, int max)
+ {
+ 	int test;
+@@ -1675,6 +1697,7 @@ static int run_printf(int min, int max)
+ 		CASE_TEST(width_trunc);  EXPECT_VFPRINTF(25, "                    ", "%25d", 1); break;
+ 		CASE_TEST(scanf);        EXPECT_ZR(1, test_scanf()); break;
+ 		CASE_TEST(strerror);     EXPECT_ZR(1, test_strerror()); break;
++		CASE_TEST(printf_error); EXPECT_ZR(1, test_printf_error()); break;
+ 		case __LINE__:
+ 			return ret; /* must be last */
+ 		/* note: do not set any defaults so as to permit holes above */
+diff --git a/tools/testing/selftests/perf_events/.gitignore b/tools/testing/selftests/perf_events/.gitignore
+index ee93dc4969b8b5..4931b3b6bbd397 100644
+--- a/tools/testing/selftests/perf_events/.gitignore
++++ b/tools/testing/selftests/perf_events/.gitignore
+@@ -2,3 +2,4 @@
+ sigtrap_threads
+ remove_on_exec
+ watermark_signal
++mmap
+diff --git a/tools/testing/selftests/perf_events/Makefile b/tools/testing/selftests/perf_events/Makefile
+index 70e3ff21127890..2e5d85770dfead 100644
+--- a/tools/testing/selftests/perf_events/Makefile
++++ b/tools/testing/selftests/perf_events/Makefile
+@@ -2,5 +2,5 @@
+ CFLAGS += -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+ LDFLAGS += -lpthread
+ 
+-TEST_GEN_PROGS := sigtrap_threads remove_on_exec watermark_signal
++TEST_GEN_PROGS := sigtrap_threads remove_on_exec watermark_signal mmap
+ include ../lib.mk
+diff --git a/tools/testing/selftests/perf_events/mmap.c b/tools/testing/selftests/perf_events/mmap.c
+new file mode 100644
+index 00000000000000..ea0427aac1f98f
+--- /dev/null
++++ b/tools/testing/selftests/perf_events/mmap.c
+@@ -0,0 +1,236 @@
++// SPDX-License-Identifier: GPL-2.0-only
++#define _GNU_SOURCE
++
++#include <dirent.h>
++#include <sched.h>
++#include <stdbool.h>
++#include <stdio.h>
++#include <unistd.h>
++
++#include <sys/ioctl.h>
++#include <sys/mman.h>
++#include <sys/syscall.h>
++#include <sys/types.h>
++
++#include <linux/perf_event.h>
++
++#include "../kselftest_harness.h"
++
++#define RB_SIZE		0x3000
++#define AUX_SIZE	0x10000
++#define AUX_OFFS	0x4000
++
++#define HOLE_SIZE	0x1000
++
++/* Reserve space for rb, aux with space for shrink-beyond-vma testing. */
++#define REGION_SIZE	(2 * RB_SIZE + 2 * AUX_SIZE)
++#define REGION_AUX_OFFS (2 * RB_SIZE)
++
++#define MAP_BASE	1
++#define MAP_AUX		2
++
++#define EVENT_SRC_DIR	"/sys/bus/event_source/devices"
++
++FIXTURE(perf_mmap)
++{
++	int		fd;
++	void		*ptr;
++	void		*region;
++};
++
++FIXTURE_VARIANT(perf_mmap)
++{
++	bool		aux;
++	unsigned long	ptr_size;
++};
++
++FIXTURE_VARIANT_ADD(perf_mmap, rb)
++{
++	.aux = false,
++	.ptr_size = RB_SIZE,
++};
++
++FIXTURE_VARIANT_ADD(perf_mmap, aux)
++{
++	.aux = true,
++	.ptr_size = AUX_SIZE,
++};
++
++static bool read_event_type(struct dirent *dent, __u32 *type)
++{
++	char typefn[512];
++	FILE *fp;
++	int res;
++
++	snprintf(typefn, sizeof(typefn), "%s/%s/type", EVENT_SRC_DIR, dent->d_name);
++	fp = fopen(typefn, "r");
++	if (!fp)
++		return false;
++
++	res = fscanf(fp, "%u", type);
++	fclose(fp);
++	return res > 0;
++}
++
++FIXTURE_SETUP(perf_mmap)
++{
++	struct perf_event_attr attr = {
++		.size		= sizeof(attr),
++		.disabled	= 1,
++		.exclude_kernel	= 1,
++		.exclude_hv	= 1,
++	};
++	struct perf_event_attr attr_ok = {};
++	unsigned int eacces = 0, map = 0;
++	struct perf_event_mmap_page *rb;
++	struct dirent *dent;
++	void *aux, *region;
++	DIR *dir;
++
++	self->ptr = NULL;
++
++	dir = opendir(EVENT_SRC_DIR);
++	if (!dir)
++		SKIP(return, "perf not available.");
++
++	region = mmap(NULL, REGION_SIZE, PROT_NONE, MAP_ANON | MAP_PRIVATE, -1, 0);
++	ASSERT_NE(region, MAP_FAILED);
++	self->region = region;
++
++	// Try to find a suitable event on this system
++	while ((dent = readdir(dir))) {
++		int fd;
++
++		if (!read_event_type(dent, &attr.type))
++			continue;
++
++		fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
++		if (fd < 0) {
++			if (errno == EACCES)
++				eacces++;
++			continue;
++		}
++
++		// Check whether the event supports mmap()
++		rb = mmap(region, RB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
++		if (rb == MAP_FAILED) {
++			close(fd);
++			continue;
++		}
++
++		if (!map) {
++			// Save the event in case no AUX-capable event is found
++			attr_ok = attr;
++			map = MAP_BASE;
++		}
++
++		if (!variant->aux)
++			continue;
++
++		rb->aux_offset = AUX_OFFS;
++		rb->aux_size = AUX_SIZE;
++
++		// Check whether it supports an AUX buffer
++		aux = mmap(region + REGION_AUX_OFFS, AUX_SIZE, PROT_READ | PROT_WRITE,
++			   MAP_SHARED | MAP_FIXED, fd, AUX_OFFS);
++		if (aux == MAP_FAILED) {
++			munmap(rb, RB_SIZE);
++			close(fd);
++			continue;
++		}
++
++		attr_ok = attr;
++		map = MAP_AUX;
++		munmap(aux, AUX_SIZE);
++		munmap(rb, RB_SIZE);
++		close(fd);
++		break;
++	}
++	closedir(dir);
++
++	if (!map) {
++		if (!eacces)
++			SKIP(return, "No mappable perf event found.");
++		else
++			SKIP(return, "No permissions for perf_event_open()");
++	}
++
++	self->fd = syscall(SYS_perf_event_open, &attr_ok, 0, -1, -1, 0);
++	ASSERT_NE(self->fd, -1);
++
++	rb = mmap(region, RB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, self->fd, 0);
++	ASSERT_NE(rb, MAP_FAILED);
++
++	if (!variant->aux) {
++		self->ptr = rb;
++		return;
++	}
++
++	if (map != MAP_AUX)
++		SKIP(return, "No AUX event found.");
++
++	rb->aux_offset = AUX_OFFS;
++	rb->aux_size = AUX_SIZE;
++	aux = mmap(region + REGION_AUX_OFFS, AUX_SIZE, PROT_READ | PROT_WRITE,
++		   MAP_SHARED | MAP_FIXED, self->fd, AUX_OFFS);
++	ASSERT_NE(aux, MAP_FAILED);
++	self->ptr = aux;
++}
++
++FIXTURE_TEARDOWN(perf_mmap)
++{
++	ASSERT_EQ(munmap(self->region, REGION_SIZE), 0);
++	if (self->fd != -1)
++		ASSERT_EQ(close(self->fd), 0);
++}
++
++TEST_F(perf_mmap, remap)
++{
++	void *tmp, *ptr = self->ptr;
++	unsigned long size = variant->ptr_size;
++
++	// Test the invalid remaps
++	ASSERT_EQ(mremap(ptr, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++	ASSERT_EQ(mremap(ptr + HOLE_SIZE, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++	ASSERT_EQ(mremap(ptr + size - HOLE_SIZE, HOLE_SIZE, size, MREMAP_MAYMOVE), MAP_FAILED);
++	// Shrink the end of the mapping such that we only unmap past the end of
++	// the VMA, which should succeed and poke a hole into the PROT_NONE region
++	ASSERT_NE(mremap(ptr + size - HOLE_SIZE, size, HOLE_SIZE, MREMAP_MAYMOVE), MAP_FAILED);
++
++	// Remap the whole buffer to a new address
++	tmp = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
++	ASSERT_NE(tmp, MAP_FAILED);
++
++	// Try remapping a split starting one hole size into the VMA; this should fail
++	ASSERT_EQ(mremap(ptr + HOLE_SIZE, size - HOLE_SIZE, size - HOLE_SIZE,
++			 MREMAP_MAYMOVE | MREMAP_FIXED, tmp), MAP_FAILED);
++	// Remapping the whole thing should succeed fine
++	ptr = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, tmp);
++	ASSERT_EQ(ptr, tmp);
++	ASSERT_EQ(munmap(tmp, size), 0);
++}
++
++TEST_F(perf_mmap, unmap)
++{
++	unsigned long size = variant->ptr_size;
++
++	// Try to poke holes into the mappings
++	ASSERT_NE(munmap(self->ptr, HOLE_SIZE), 0);
++	ASSERT_NE(munmap(self->ptr + HOLE_SIZE, HOLE_SIZE), 0);
++	ASSERT_NE(munmap(self->ptr + size - HOLE_SIZE, HOLE_SIZE), 0);
++}
++
++TEST_F(perf_mmap, map)
++{
++	unsigned long size = variant->ptr_size;
++
++	// Try to poke holes into the mappings by mapping anonymous memory over it
++	ASSERT_EQ(mmap(self->ptr, HOLE_SIZE, PROT_READ | PROT_WRITE,
++		       MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++	ASSERT_EQ(mmap(self->ptr + HOLE_SIZE, HOLE_SIZE, PROT_READ | PROT_WRITE,
++		       MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++	ASSERT_EQ(mmap(self->ptr + size - HOLE_SIZE, HOLE_SIZE, PROT_READ | PROT_WRITE,
++		       MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0), MAP_FAILED);
++}
++
++TEST_HARNESS_MAIN
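(The new test reserves a single PROT_NONE region up front so the ring-buffer
and AUX mappings land at fixed offsets inside it; the mremap shrink case can
then legally poke a hole into the reserved area without touching unrelated
mappings.)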
+diff --git a/tools/testing/selftests/syscall_user_dispatch/sud_test.c b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+index d975a67673299f..48cf01aeec3e77 100644
+--- a/tools/testing/selftests/syscall_user_dispatch/sud_test.c
++++ b/tools/testing/selftests/syscall_user_dispatch/sud_test.c
+@@ -79,6 +79,21 @@ TEST_SIGNAL(dispatch_trigger_sigsys, SIGSYS)
+ 	}
+ }
+ 
++static void prctl_valid(struct __test_metadata *_metadata,
++			unsigned long op, unsigned long off,
++			unsigned long size, void *sel)
++{
++	EXPECT_EQ(0, prctl(PR_SET_SYSCALL_USER_DISPATCH, op, off, size, sel));
++}
++
++static void prctl_invalid(struct __test_metadata *_metadata,
++			  unsigned long op, unsigned long off,
++			  unsigned long size, void *sel, int err)
++{
++	EXPECT_EQ(-1, prctl(PR_SET_SYSCALL_USER_DISPATCH, op, off, size, sel));
++	EXPECT_EQ(err, errno);
++}
++
+ TEST(bad_prctl_param)
+ {
+ 	char sel = SYSCALL_DISPATCH_FILTER_ALLOW;
+@@ -86,57 +101,42 @@ TEST(bad_prctl_param)
+ 
+ 	/* Invalid op */
+ 	op = -1;
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0, 0, &sel);
+-	ASSERT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0, 0, &sel, EINVAL);
+ 
+ 	/* PR_SYS_DISPATCH_OFF */
+ 	op = PR_SYS_DISPATCH_OFF;
+ 
+ 	/* offset != 0 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x1, 0x0, 0);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x1, 0x0, 0, EINVAL);
+ 
+ 	/* len != 0 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0xff, 0);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x0, 0xff, 0, EINVAL);
+ 
+ 	/* sel != NULL */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x0, &sel);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x0, 0x0, &sel, EINVAL);
+ 
+ 	/* Valid parameter */
+-	errno = 0;
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x0, 0x0);
+-	EXPECT_EQ(0, errno);
++	prctl_valid(_metadata, op, 0x0, 0x0, 0x0);
+ 
+ 	/* PR_SYS_DISPATCH_ON */
+ 	op = PR_SYS_DISPATCH_ON;
+ 
+ 	/* Dispatcher region is bad (offset > 0 && len == 0) */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x1, 0x0, &sel);
+-	EXPECT_EQ(EINVAL, errno);
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, -1L, 0x0, &sel);
+-	EXPECT_EQ(EINVAL, errno);
++	prctl_invalid(_metadata, op, 0x1, 0x0, &sel, EINVAL);
++	prctl_invalid(_metadata, op, -1L, 0x0, &sel, EINVAL);
+ 
+ 	/* Invalid selector */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, op, 0x0, 0x1, (void *) -1);
+-	ASSERT_EQ(EFAULT, errno);
++	prctl_invalid(_metadata, op, 0x0, 0x1, (void *) -1, EFAULT);
+ 
+ 	/*
+ 	 * Dispatcher range overflows unsigned long
+ 	 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON, 1, -1L, &sel);
+-	ASSERT_EQ(EINVAL, errno) {
+-		TH_LOG("Should reject bad syscall range");
+-	}
++	prctl_invalid(_metadata, PR_SYS_DISPATCH_ON, 1, -1L, &sel, EINVAL);
+ 
+ 	/*
+ 	 * Allowed range overflows unsigned long
+ 	 */
+-	prctl(PR_SET_SYSCALL_USER_DISPATCH, PR_SYS_DISPATCH_ON, -1L, 0x1, &sel);
+-	ASSERT_EQ(EINVAL, errno) {
+-		TH_LOG("Should reject bad syscall range");
+-	}
++	prctl_invalid(_metadata, PR_SYS_DISPATCH_ON, -1L, 0x1, &sel, EINVAL);
+ }
+ 
+ /*
+diff --git a/tools/testing/selftests/vDSO/vdso_test_chacha.c b/tools/testing/selftests/vDSO/vdso_test_chacha.c
+index 8757f738b0b1a7..0aad682b12c883 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_chacha.c
++++ b/tools/testing/selftests/vDSO/vdso_test_chacha.c
+@@ -76,7 +76,8 @@ static void reference_chacha20_blocks(uint8_t *dst_bytes, const uint32_t *key, u
+ 
+ void __weak __arch_chacha20_blocks_nostack(uint8_t *dst_bytes, const uint32_t *key, uint32_t *counter, size_t nblocks)
+ {
+-	ksft_exit_skip("Not implemented on architecture\n");
++	ksft_test_result_skip("Not implemented on architecture\n");
++	ksft_finished();
+ }
+ 
+ int main(int argc, char *argv[])
+diff --git a/tools/verification/rv/src/in_kernel.c b/tools/verification/rv/src/in_kernel.c
+index c0dcee795c0de0..4bb746ea6e1735 100644
+--- a/tools/verification/rv/src/in_kernel.c
++++ b/tools/verification/rv/src/in_kernel.c
+@@ -431,7 +431,7 @@ ikm_event_handler(struct trace_seq *s, struct tep_record *record,
+ 
+ 	if (config_has_id && (config_my_pid == id))
+ 		return 0;
+-	else if (config_my_pid && (config_my_pid == pid))
++	else if (config_my_pid == pid)
+ 		return 0;
+ 
+ 	tep_print_event(trace_event->tep, s, record, "%16s-%-8d [%.3d] ",
+@@ -734,7 +734,7 @@ static int parse_arguments(char *monitor_name, int argc, char **argv)
+ 			config_reactor = optarg;
+ 			break;
+ 		case 's':
+-			config_my_pid = 0;
++			config_my_pid = -1;
+ 			break;
+ 		case 't':
+ 			config_trace = 1;


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-16  4:02 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-16  4:02 UTC (permalink / raw
  To: gentoo-commits

commit:     504b6e005618786ee0976cb46ced082c6085d11a
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 03:59:02 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 03:59:02 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=504b6e00

BMQ(BitMap Queue) Scheduler v6.16-r0

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                  |     7 +
 5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch | 11661 +++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch       |    13 +
 3 files changed, 11681 insertions(+)

diff --git a/0000_README b/0000_README
index 3d350c4c..60a31446 100644
--- a/0000_README
+++ b/0000_README
@@ -83,3 +83,10 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch:  5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch
+From:   https://gitlab.com/alfredchen/projectc
Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set defaults for BMQ. Add archs as people test, default to N

diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch
new file mode 100644
index 00000000..4c21e3d7
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch
@@ -0,0 +1,11661 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index dd49a89a62d3..4118f8c92125 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1700,3 +1700,12 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield a call
++to sched_yield() will perform.
++
++  0 - No yield.
++  1 - Requeue task. (default)
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the earlier Priority and Deadline based Skiplist multiple queue
++scheduler (PDS) and is inspired by the Zircon scheduler. Its goal is to keep
++the scheduler code simple while remaining efficient and scalable for
++interactive workloads such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks placed into it.
++
++The run queue is a set of priority queues. Structurally these are FIFO queues
++for non-rt tasks and priority queues for rt tasks; see BitMap Queue below for
++details. BMQ is optimized for non-rt tasks, since most applications are
++non-rt. Whether a queue is FIFO or priority, each queue is an ordered list of
++runnable tasks awaiting execution and the data structures are the same. When
++it is time for a new task to run, the scheduler simply looks for the lowest
++numbered queue that contains a task and runs the first task from the head of
++that queue. The per-CPU idle task is also kept in the run queue, so the
++scheduler can always find a task to run.
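++
++A minimal, illustrative sketch of that lookup (the names and the fixed
++64-queue bound are assumptions for illustration, not the in-kernel code):
++
++	#define NR_QUEUES	64
++
++	struct task { struct task *next; };
++
++	struct bmq_rq {
++		unsigned long	bitmap;		/* bit n set: queue n is non-empty */
++		struct task	*head[NR_QUEUES];
++	};
++
++	struct task *pick_next(struct bmq_rq *rq)
++	{
++		/* Lowest set bit = lowest numbered queue = highest priority.
++		 * The idle task is always queued, so the bitmap is never 0. */
++		int idx = __builtin_ctzl(rq->bitmap);
++
++		return rq->head[idx];
++	}
++
++The in-kernel implementation additionally covers the full priority range and
++run queue locking, which this sketch omits.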
++
++Each task is assigned the same timeslice (default 4ms) when it is picked to
++start running, and is reinserted at the end of the appropriate priority queue
++when it uses up its whole timeslice. When the scheduler selects a new task
++from the priority queue it sets the CPU's preemption timer for the remainder
++of the previous timeslice. When that timer fires the scheduler stops that
++task, selects another one and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler, but is heavily optimized for non-rt tasks,
++that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation details
++of each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share a single priority queue in the BMQ run queue design. The
++complexity of the insert operation is O(n). BMQ is not designed for systems
++that mostly run rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they simply never get boosted. To control the
++priority of NORMAL/BATCH/IDLE tasks, use the nice level.
++
++ISO
++	ISO policy is not supported in BMQ. Please use a nice level of -20 with a
++NORMAL policy task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities 0-99. For non-rt tasks, there are three different
++factors used to determine the effective priority of a task, where the
++effective priority is what decides which queue the task will be in.
++
++The first factor is simply the task's static priority, which is assigned from
++the task's nice level: within [-20, 19] from userland's point of view and
++[0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded to
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] that offsets the base priority; it is
++modified in the following cases:
++
++*When a thread has used up its entire timeslice, it is always deboosted by
++increasing its boost value by one.
++*When a thread gives up CPU control (voluntarily or not) to reschedule, and
++its switch-in time (the time since it last started running) is below the
++threshold derived from its priority boost, it is boosted by decreasing its
++boost value by one, capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
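++
++A hedged sketch of the boost rule above (the field name and helper names are
++assumptions for illustration; MAX_PRIORITY_ADJ matches the BMQ definition
++elsewhere in this patch, and this is not the exact in-kernel code):
++
++	#define MAX_PRIORITY_ADJ	12
++
++	/* timeslice used up: deboost (a larger value means lower priority) */
++	void deboost(int *boost_prio)
++	{
++		if (*boost_prio < MAX_PRIORITY_ADJ)
++			(*boost_prio)++;
++	}
++
++	/* quick voluntary switch: boost, capped at 0 so it never goes negative */
++	void boost(int *boost_prio)
++	{
++		if (*boost_prio > 0)
++			(*boost_prio)--;
++	}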
+diff --git a/fs/bcachefs/io_write.c b/fs/bcachefs/io_write.c
+index 88b1eec8eff3..4619aa57cd9f 100644
+--- a/fs/bcachefs/io_write.c
++++ b/fs/bcachefs/io_write.c
+@@ -640,8 +640,14 @@ static inline void __wp_update_state(struct write_point *wp, enum write_point_st
+ 	if (state != wp->state) {
+ 		struct task_struct *p = current;
+ 		u64 now = ktime_get_ns();
++
++#ifdef CONFIG_SCHED_ALT
++		u64 runtime = tsk_seruntime(p) +
++			(now - p->last_ran);
++#else
+ 		u64 runtime = p->se.sum_exec_runtime +
+ 			(now - p->se.exec_start);
++#endif
+ 
+ 		if (state == WRITE_POINT_runnable)
+ 			wp->last_runtime = runtime;
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index c667702dc69b..6694bdc2c1c0 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -515,7 +515,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index aa9c5be7a632..512a53701b9a 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -838,9 +838,13 @@ struct task_struct {
+ 	struct alloc_tag		*alloc_tag;
+ #endif
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ 	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ 	struct __call_single_node	wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -854,6 +858,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -862,6 +867,19 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
+ 	struct sched_dl_entity		dl;
+@@ -876,6 +894,7 @@ struct task_struct {
+ 	unsigned long			core_cookie;
+ 	unsigned int			core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+@@ -912,11 +931,15 @@ struct task_struct {
+ 	const cpumask_t			*cpus_ptr;
+ 	cpumask_t			*user_cpus_ptr;
+ 	cpumask_t			cpus_mask;
++#ifndef CONFIG_SCHED_ALT
+ 	void				*migration_pending;
++#endif
+ #ifdef CONFIG_SMP
+ 	unsigned short			migration_disabled;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned short			migration_flags;
++#endif
+ 
+ #ifdef CONFIG_PREEMPT_RCU
+ 	int				rcu_read_lock_nesting;
+@@ -948,8 +971,10 @@ struct task_struct {
+ 
+ 	struct list_head		tasks;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 	struct plist_node		pushable_tasks;
+ 	struct rb_node			pushable_dl_tasks;
++#endif
+ #endif
+ 
+ 	struct mm_struct		*mm;
+@@ -1660,6 +1685,15 @@ struct task_struct {
+ 	randomized_struct_fields_end
+ } __attribute__ ((aligned (64)));
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE	(TASK_REPORT + 1)
+ #define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)
+ 
+@@ -2195,7 +2229,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ 
+ static inline bool task_is_runnable(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return p->on_rq;
++#else
+ 	return p->on_rq && !p->se.sched_delayed;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ extern bool sched_task_on_rq(struct task_struct *p);
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index f9aabbc9d22e..1f9109a84286 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline bool dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index 6ab43b4f72f9..ef1cff556c5e 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -19,6 +19,28 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++#endif
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 4e3338103654..6dfef878fe3b 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -45,8 +45,10 @@ static inline bool rt_or_dl_task_policy(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 198bb5cc1774..f1aaf3827d1d 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -231,7 +231,8 @@ static inline void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_p
+ 
+ #endif	/* !CONFIG_SMP */
+ 
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++	!defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index 666783eb50ab..0448eeb1dd2d 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -664,6 +664,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	depends on !SCHED_ALT
+ 	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -875,6 +876,35 @@ config UCLAMP_BUCKETS_COUNT
+ 
+ 	  If in doubt, use the default value.
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
+	  This feature enables alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+ 
+ #
+@@ -940,6 +970,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -1383,6 +1414,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index e557f622bd90..99e59c2082e0 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -72,9 +72,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.on_cpu		= 1,
++	.prio		= DEFAULT_PRIO,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.user_cpus_ptr	= NULL,
+@@ -87,6 +94,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -94,10 +111,13 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+ #endif
++#endif
+ #ifdef CONFIG_CGROUP_SCHED
+ 	.sched_task_group = &root_task_group,
+ #endif
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index 54ea59ff8fbe..a6d3560cef75 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -134,7 +134,7 @@ config PREEMPT_DYNAMIC
+ 
+ config SCHED_CORE
+ 	bool "Core Scheduling for SMT"
+-	depends on SCHED_SMT
++	depends on SCHED_SMT && !SCHED_ALT
+ 	help
+ 	  This option permits Core Scheduling, a means of coordinated task
+ 	  selection across SMT siblings. When enabled -- see
+@@ -152,7 +152,7 @@ config SCHED_CORE
+ 
+ config SCHED_CLASS_EXT
+ 	bool "Extensible Scheduling Class"
+-	depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF
++	depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF && !SCHED_ALT
+ 	select STACKTRACE if STACKTRACE_SUPPORT
+ 	help
+ 	  This option enables a new scheduler class sched_ext (SCX), which
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 3bc4301466f3..45e7fbc3dead 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -662,7 +662,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1075,7 +1075,7 @@ void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -3049,12 +3049,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 				goto out_unlock;
+ 		}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		if (dl_task(task)) {
+ 			cs->nr_migrate_dl_tasks++;
+ 			cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ 		}
++#endif
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (!cs->nr_migrate_dl_tasks)
+ 		goto out_success;
+ 
+@@ -3075,6 +3078,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ 	}
+ 
+ out_success:
++#endif
+ 	/*
+ 	 * Mark attach is in progress.  This makes validate_change() fail
+ 	 * changes which zero cpus/mems_allowed.
+@@ -3096,12 +3100,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ 	mutex_lock(&cpuset_mutex);
+ 	dec_attach_in_progress_locked(cs);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (cs->nr_migrate_dl_tasks) {
+ 		int cpu = cpumask_any(cs->effective_cpus);
+ 
+ 		dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ 		reset_migrate_dl_data(cs);
+ 	}
++#endif
+ 
+ 	mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index 30e7912ebb0d..f6b7e29d2018 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -164,7 +164,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index bb184a67ac73..bd1ea422fd14 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -206,7 +206,7 @@ static void __exit_signal(struct release_task_post *post, struct task_struct *ts
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(post, tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+@@ -290,8 +290,8 @@ void release_task(struct task_struct *p)
+ 	write_unlock_irq(&tasklist_lock);
+ 	/* @thread_pid can't go away until free_pids() below */
+ 	proc_flush_pid(thread_pid);
+-	add_device_randomness(&p->se.sum_exec_runtime,
+-			      sizeof(p->se.sum_exec_runtime));
++	add_device_randomness((const void*) &tsk_seruntime(p),
++			      sizeof(unsigned long long));
+ 	free_pids(post.pids);
+ 	release_thread(p);
+ 	/*
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index c80902eacd79..b1d388145968 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -366,7 +366,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ 	lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+ 
+ 	waiter->tree.prio = __waiter_prio(task);
+-	waiter->tree.deadline = task->dl.deadline;
++	waiter->tree.deadline = __tsk_deadline(task);
+ }
+ 
+ /*
+@@ -387,16 +387,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+  * Only use with rt_waiter_node_{less,equal}()
+  */
+ #define task_to_waiter_node(p)	\
+-	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++	&(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p)	\
+ 	&(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+ 
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 					       struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -405,16 +409,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 						 struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -423,8 +433,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
+index 37f025a096c9..45ae7a6fd9ac 100644
+--- a/kernel/locking/ww_mutex.h
++++ b/kernel/locking/ww_mutex.h
+@@ -247,6 +247,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ 
+ 		/* equal static prio */
+ 
++#ifndef	CONFIG_SCHED_ALT
+ 		if (dl_prio(a_prio)) {
+ 			if (dl_time_before(b->task->dl.deadline,
+ 					   a->task->dl.deadline))
+@@ -256,6 +257,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ 					   b->task->dl.deadline))
+ 				return false;
+ 		}
++#endif
+ 
+ 		/* equal prio */
+ 	}
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 8ae86371ddcd..a972ef1e31a7 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -33,7 +33,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..0de153fa42ef
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7777 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++
++#define ALT_SCHED_VERSION "v6.16-r0"
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly	= (4 << 20);
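++/* 4 << 20 == 4,194,304 ns, i.e. just over 4 ms per slice. */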
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++/* Reschedule if less than this much time is left, in ns (100 << 10 ns ≈ 100 μs) */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - The type of yield sched_yield() will perform.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPU's number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++	if (!p->user_cpus_ptr)
++		return cpu_possible_mask; /* &init_task.cpus_mask */
++	return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++	for(i = 0; i < SCHED_LEVELS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Initialize the idle task and put it into the rq's queue structure.
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++	list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++	idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu)			\
++	do {								\
++		if ((low) < (pr) && (pr) <= (high))			\
++			cpumask_clear_cpu(cpu, sched_preempt_mask + (pr));\
++	} while (0)
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu)			\
++	do {								\
++		if ((low) < (pr) && (pr) <= (high))			\
++			cpumask_set_cpu(cpu, sched_preempt_mask + (pr));\
++	} while (0)
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
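++/*
++ * As far as can be inferred from update_sched_preempt_mask() below,
++ * sched_preempt_mask[pr] caches the set of CPUs whose runqueues currently
++ * run at a priority level no better than pr, i.e. CPUs that a task of
++ * priority pr could preempt.  Only the single level recorded in
++ * sched_prio_record is kept up to date; the top two slots of the array
++ * double as the dedicated idle masks, which appears to be why the generic
++ * update skips them via the "prio -= 2" adjustments.
++ */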
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++	int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	int last_prio = rq->prio;
++	int cpu, pr;
++
++	if (prio == last_prio)
++		return;
++
++	rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++	cpu = cpu_of(rq);
++	pr = atomic_read(&sched_prio_record);
++
++	if (prio < last_prio) {
++		if (IDLE_TASK_SCHED_PRIO == last_prio) {
++			rq->clear_idle_mask_func(cpu, sched_idle_mask);
++			last_prio -= 2;
++		}
++		CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++		return;
++	}
++	/* last_prio < prio */
++	if (IDLE_TASK_SCHED_PRIO == prio) {
++		rq->set_idle_mask_func(cpu, sched_idle_mask);
++		prio -= 2;
++	}
++	SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/* need a wrapper since we may need to trace from modules */
++EXPORT_TRACEPOINT_SYMBOL(sched_set_state_tp);
++
++/* Call via the helper macro trace_set_current_state. */
++void __trace_set_current_state(int state_value)
++{
++	trace_sched_set_state_tp(current, state_value);
++}
++EXPORT_SYMBOL(__trace_set_current_state);
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task():        p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ *   Additionally it is possible to be ->on_rq but still be considered not
++ *   runnable when p->se.sched_delayed is true. These tasks are on the runqueue
++ *   but will be dequeued as soon as they get picked again. See the
++ *   task_is_runnable() helper.
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Lock whichever lock currently serializes @p's scheduling state: rq->lock
++ * while @p is running or queued, p->pi_lock otherwise.  On return, *plock
++ * identifies the lock that was taken.
++ */
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++		    rq_lock_irqsave(_T->lock, &_T->rf),
++		    rq_unlock_irqrestore(_T->lock, &_T->rf),
++		    struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++	raw_spinlock_t *lock;
++
++	/* Matches synchronize_rcu() in __sched_core_enable() */
++	preempt_disable();
++
++	for (;;) {
++		lock = __rq_lockp(rq);
++		raw_spin_lock_nested(lock, subclass);
++		if (likely(lock == __rq_lockp(rq))) {
++			/* preempt_count *MUST* be > 1 */
++			preempt_enable_no_resched();
++			return;
++		}
++		raw_spin_unlock(lock);
++	}
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++	raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	if (irqtime_enabled()) {
++		irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++		/*
++		 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++		 * this case when a previous update_rq_clock() happened inside a
++		 * {soft,}IRQ region.
++		 *
++		 * When this happens, we stop ->clock_task and only update the
++		 * prev_irq_time stamp to account for the part that fit, so that a next
++		 * update will consume the rest. This ensures ->clock_task is
++		 * monotonic.
++		 *
++		 * It does however cause some slight miss-attribution of {soft,}IRQ
++		 * time, a more accurate solution would be to update the irq_time using
++		 * the current rq->clock timestamp, except that would require using
++		 * atomic ops.
++		 */
++		if (irq_delta > delta)
++			irq_delta = delta;
++
++		rq->prev_irq_time += irq_delta;
++		delta -= irq_delta;
++		delayacct_irq(rq->curr, irq_delta);
++	}
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		u64 prev_steal;
++
++		steal = prev_steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq = prev_steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	sched_update_rq_clock(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS		(sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT			(8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l)	(((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
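++/*
++ * A sketch of the scheme encoded above (inferred from the macros and
++ * rq_load_update() below): rq->load_history is a 32-bit sliding window
++ * with one bit per 2^17 ns (~131 us) block, each bit recording whether the
++ * runqueue was busy (nr_running != 0) for the better part of that block.
++ * The window shifts right as blocks elapse, so newer samples sit in the
++ * upper bits, and RQ_LOAD_HISTORY_TO_UTIL() reads the eight most recently
++ * completed blocks as a 0-255 value, giving the newest block the highest
++ * weight.
++ */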
++static inline void rq_load_update(struct rq *rq)
++{
++	u64 time = rq->clock;
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!rq->nr_running;
++
++	if (delta) {
++		rq->load_history = rq->load_history >> delta;
++
++		if (delta < RQ_UTIL_SHIFT) {
++			rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++				rq->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		rq->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		rq->load_block += (time - rq->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		rq->load_history ^= CURRENT_LOAD_BIT;
++	rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++	return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
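++/*
++ * Example: with max == 1024 (SCHED_CAPACITY_SCALE), a fully busy history
++ * (top eight bits all set, 0xff) yields 255 * (1024 >> 8) == 1020, close
++ * to full capacity, while an idle history yields 0.
++ */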
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++	return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++	rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
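++/*
++ * Note that, unlike the mainline scheduler, the utilization handed to
++ * cpufreq here ultimately derives from the bitmap load history above (via
++ * rq_load_update()/rq_load_util()) rather than from PELT.
++ */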
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, let's send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
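++/*
++ * sched_rq_pending_mask (maintained below) appears to track the CPUs that
++ * have more than one runnable task, i.e. the CPUs an idle CPU could try to
++ * pull work from.
++ */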
++static inline void add_nr_running(struct rq *rq, unsigned count)
++{
++	rq->nr_running += count;
++#ifdef CONFIG_SMP
++	if (rq->nr_running > 1) {
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++		rq->prio_balance_time = rq->clock;
++	}
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void sub_nr_running(struct rq *rq, unsigned count)
++{
++	rq->nr_running -= count;
++#ifdef CONFIG_SMP
++	if (rq->nr_running < 2) {
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++		rq->prio_balance_time = 0;
++	}
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++	return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++	unsigned long ip = 0;
++	unsigned int state;
++
++	if (!p || p == current)
++		return 0;
++
++	/* Only get wchan if task is blocked and we can keep it that way. */
++	raw_spin_lock_irq(&p->pi_lock);
++	state = READ_ONCE(p->__state);
++	smp_rmb(); /* see try_to_wake_up() */
++	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++		ip = __get_wchan(p);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	return ip;
++}
++
++/*
++ * Routines that add/remove/requeue a task to/from the runqueue.
++ * Context: rq->lock
++ */
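++/*
++ * The run queue is an array of per-priority list heads plus a bitmap of
++ * the non-empty levels.  In the dequeue macro below, the "prev == next"
++ * test after __list_del_entry() detects that only the list head remains,
++ * i.e. the level just became empty, so its bit is cleared from the bitmap.
++ */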
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_dequeue(rq, p);							\
++											\
++	__list_del_entry(&p->sq_node);							\
++	if (p->sq_node.prev == p->sq_node.next) {					\
++		clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq),	\
++			  rq->queue.bitmap);						\
++		func;									\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func)					\
++	sched_info_enqueue(rq, p);							\
++	{										\
++	int idx, prio;									\
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);						\
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);				\
++	if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) {			\
++		set_bit(prio, rq->queue.bitmap);					\
++		func;									\
++	}										\
++	}
++
++static inline void __dequeue_task(struct task_struct *p, struct rq *rq)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++}
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++	__dequeue_task(p, rq);
++	sub_nr_running(rq, 1);
++}
++
++static inline void __enqueue_task(struct task_struct *p, struct rq *rq)
++{
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++#endif
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++	__enqueue_task(p, rq);
++	add_nr_running(rq, 1);
++}
++
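++/*
++ * Move @p to the tail of the queue level matching its current priority;
++ * this is apparently what sched_yield_type == 1 ("requeue task") relies on.
++ */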
++void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *node = &p->sq_node;
++	int deq_idx, idx, prio;
++
++	TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++#endif
++	if (list_is_last(node, &rq->queue.heads[idx]))
++		return;
++
++	__list_del_entry(node);
++	if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++		clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++	list_add_tail(node, &rq->queue.heads[idx]);
++	if (list_is_first(node, &rq->queue.heads[idx]))
++		set_bit(prio, rq->queue.bitmap);
++	update_sched_preempt_mask(rq);
++}
++
++/*
++ * try_cmpxchg based fetch_or() macro so it works for different integer types:
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _val = *_ptr;				\
++									\
++		do {							\
++		} while (!try_cmpxchg(_ptr, &_val, _val | _mask));	\
++	_val;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
++{
++	return !(fetch_or(&ti->flags, 1 << tif) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++	do {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++	return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
++{
++	set_ti_thread_flag(ti, tif);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold a reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		node = node->next;
++		/* pairs with cmpxchg_relaxed() in __wake_q_add() */
++		WRITE_ONCE(task->wake_q.next, NULL);
++		/* Task can safely be re-inserted now. */
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void __resched_curr(struct rq *rq, int tif)
++{
++	struct task_struct *curr = rq->curr;
++	struct thread_info *cti = task_thread_info(curr);
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Always immediately preempt the idle task; no point in delaying doing
++	 * actual work.
++	 */
++	if (is_idle_task(curr) && tif == TIF_NEED_RESCHED_LAZY)
++		tif = TIF_NEED_RESCHED;
++
++	if (cti->flags & ((1 << tif) | _TIF_NEED_RESCHED))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_ti_thread_flag(cti, tif);
++		if (tif == TIF_NEED_RESCHED)
++			set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(cti, tif)) {
++		if (tif == TIF_NEED_RESCHED)
++			smp_send_reschedule(cpu);
++	} else {
++		trace_sched_wake_idle_without_ipi(cpu);
++	}
++}
++
++static inline void resched_curr(struct rq *rq)
++{
++	__resched_curr(rq, TIF_NEED_RESCHED);
++}
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_preempt_lazy);
++static __always_inline bool dynamic_preempt_lazy(void)
++{
++	return static_branch_unlikely(&sk_dynamic_preempt_lazy);
++}
++#else
++static __always_inline bool dynamic_preempt_lazy(void)
++{
++	return IS_ENABLED(CONFIG_PREEMPT_LAZY);
++}
++#endif
++
++static __always_inline int get_lazy_tif_bit(void)
++{
++	if (dynamic_preempt_lazy())
++		return TIF_NEED_RESCHED_LAZY;
++
++	return TIF_NEED_RESCHED;
++}
++
++static inline void resched_curr_lazy(struct rq *rq)
++{
++	__resched_curr(rq, get_lazy_tif_bit());
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++	const struct cpumask *hk_mask;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	hk_mask = housekeeping_cpumask(HK_TYPE_KERNEL_NOISE);
++
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, hk_mask)
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_TYPE_KERNEL_NOISE);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	/*
++	 * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++	 * part of the idle loop. This forces an exit from the idle loop
++	 * and a round trip to schedule(). Now this could be optimized
++	 * because a simple new idle loop iteration is enough to
++	 * re-evaluate the next tick. Provided some re-ordering of tick
++	 * nohz functions that would need to follow TIF_POLLING_NRFLAG
++	 * clearing:
++	 *
++	 * - On most architectures, a simple fetch_or on ti::flags with a
++	 *   "0" value would be enough to know if an IPI needs to be sent.
++	 *
++	 * - x86 needs to perform a last need_resched() check between
++	 *   monitor and mwait which doesn't take timers into account.
++	 *   There a dedicated TIF_TIMER flag would be required to
++	 *   fetch_or here and be checked along with TIF_NEED_RESCHED
++	 *   before mwait().
++	 *
++	 * However, remote timer enqueue is not such a frequent event
++	 * and testing of the above solutions didn't appear to report
++	 * much benefits.
++	 */
++	if (set_nr_and_not_polling(task_thread_info(rq->idle), TIF_NEED_RESCHED))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance) {
++		rq->nohz_idle_balance = flags;
++		__raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
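++/*
++ * Match @state against p->__state and p->saved_state: returns 1 on a
++ * direct match, -1 when only the saved state matches, 0 otherwise.
++ */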
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++	if (READ_ONCE(p->__state) & state)
++		return 1;
++
++	if (READ_ONCE(p->saved_state) & state)
++		return -1;
++
++	return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++	/*
++	 * Serialize against current_save_and_set_rtlock_wait_state(),
++	 * current_restore_rtlock_saved_state(), and __refrigerator().
++	 */
++	guard(raw_spinlock_irq)(&p->pi_lock);
++
++	return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero.  When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count).  If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++	unsigned long flags;
++	int running, queued, match;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_on_cpu(p)) {
++			if (!task_state_match(p, match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_on_cpu(p);
++		queued = p->on_rq;
++		ncsw = 0;
++		if ((match = __task_state_match(p, match_state))) {
++			/*
++			 * When matching on p->saved_state, consider this task
++			 * still queued so it will wait.
++			 */
++			if (match < 0)
++				queued = 1;
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		}
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(queued)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/*
++	 * The alternative scheduler framework doesn't support sched_feat yet:
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_setup(&rq->hrtick_timer, hrtick, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++static void block_task(struct rq *rq, struct task_struct *p)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible++;
++
++	if (p->in_iowait) {
++		atomic_inc(&rq->nr_iowait);
++		delayacct_blkio_start();
++	}
++
++	ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++	/*
++	 * The moment this write goes through, ttwu() can swoop in and migrate
++	 * this task, rendering our rq->__lock ineffective.
++	 *
++	 * __schedule()				try_to_wake_up()
++	 *   LOCK rq->__lock			  LOCK p->pi_lock
++	 *   pick_next_task()
++	 *     pick_next_task_fair()
++	 *       pick_next_entity()
++	 *         dequeue_entities()
++	 *           __block_task()
++	 *             RELEASE p->on_rq = 0	  if (p->on_rq && ...)
++	 *					    break;
++	 *
++	 *					  ACQUIRE (after ctrl-dep)
++	 *
++	 *					  cpu = select_task_rq();
++	 *					  set_task_cpu(p, cpu);
++	 *					  ttwu_queue()
++	 *					    ttwu_do_activate()
++	 *					      LOCK rq->__lock
++	 *					      activate_task()
++	 *					        STORE p->on_rq = 1
++	 *   UNLOCK rq->__lock
++	 *
++	 * Callers must ensure to not reference @p after this -- we no longer
++	 * own it.
++	 */
++	smp_store_release(&p->on_rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++	trace_sched_migrate_task(p, new_cpu);
++
++	if (task_cpu(p) != new_cpu) {
++		rseq_migrate(p);
++		sched_mm_cid_migrate_from(p);
++		perf_event_task_migrate(p);
++	}
++
++	__set_task_cpu(p, new_cpu);
++}
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	WARN_ON_ONCE(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++#ifdef CONFIG_DEBUG_PREEMPT
++		/*
++		 * Warn about overflow half-way through the range.
++		 */
++		WARN_ON_ONCE((s16)p->migration_disabled < 0);
++#endif
++		p->migration_disabled++;
++		return;
++	}
++
++	guard(preempt)();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Check both overflow from migrate_disable() and superfluous
++	 * migrate_enable().
++	 */
++	if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
++		return;
++#endif
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	guard(preempt)();
++	/*
++	 * Assumption: current should be running on an allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
++
++static void __migrate_force_enable(struct task_struct *p, struct rq *rq)
++{
++	if (likely(p->cpus_ptr != &p->cpus_mask))
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	p->migration_disabled = 0;
++	/* When p is migrate_disabled, rq->lock should be held */
++	rq->nr_pinned--;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++	lockdep_assert_held(&rq->lock);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++	sched_mm_cid_migrate_to(rq, p);
++
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	wakeup_preempt(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a high-prio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_queue();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	}
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++	cpumask_copy(&p->cpus_mask, ctx->new_mask);
++	p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++	/*
++	 * Swap in a new user_cpus_ptr if SCA_USER flag set
++	 */
++	if (ctx->flags & SCA_USER)
++		swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, ctx);
++	mm_set_cpus_allowed(p->mm, ctx->new_mask);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(); in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.user_mask = NULL,
++		.flags     = SCA_USER,	/* clear the user requested mask */
++	};
++	union cpumask_rcuhead {
++		cpumask_t cpumask;
++		struct rcu_head rcu;
++	};
++
++	__do_set_cpus_allowed(p, &ac);
++
++	if (is_migration_disabled(p) && !cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
++		__migrate_force_enable(p, task_rq(p));
++
++	/*
++	 * Because this is called with p->pi_lock held, it is not possible
++	 * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++	 * kfree_rcu().
++	 */
++	kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++		      int node)
++{
++	cpumask_t *user_mask;
++	unsigned long flags;
++
++	/*
++	 * Always clear dst->user_cpus_ptr first, as the two tasks'
++	 * user_cpus_ptr values may differ by now due to racing.
++	 */
++	dst->user_cpus_ptr = NULL;
++
++	/*
++	 * This check is racy and losing the race is a valid situation.
++	 * It is not worth the extra overhead of taking the pi_lock on
++	 * every fork/clone.
++	 */
++	if (data_race(!src->user_cpus_ptr))
++		return 0;
++
++	user_mask = alloc_user_cpus_ptr(node);
++	if (!user_mask)
++		return -ENOMEM;
++
++	/*
++	 * Use pi_lock to protect content of user_cpus_ptr
++	 *
++	 * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++	 * do_set_cpus_allowed().
++	 */
++	raw_spin_lock_irqsave(&src->pi_lock, flags);
++	if (src->user_cpus_ptr) {
++		swap(dst->user_cpus_ptr, user_mask);
++		cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++	}
++	raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++	if (unlikely(user_mask))
++		kfree(user_mask);
++
++	return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++	struct cpumask *user_mask = NULL;
++
++	swap(p->user_cpus_ptr, user_mask);
++
++	return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++	kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel mode, without any delay (e.g. to get signals handled).
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	guard(preempt)();
++	int cpu = task_cpu(p);
++
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
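++
++/*
++ * Illustrative note (not part of the original code): the classic caller of
++ * kick_process() is signal delivery, which roughly does:
++ *
++ *	set_tsk_thread_flag(t, TIF_SIGPENDING);
++ *	kick_process(t);	// force @t through a kernel entry/exit
++ *
++ * so that a task busy in userspace on another CPU notices the pending
++ * signal without waiting for its next natural kernel entry.
++ */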
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on CPU-down we clear cpu_active() to mask the sched domains and
++ *    prevent the load balancer from placing new tasks on the to-be-removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select a CPU on another node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (is_cpu_allowed(p, dest_cpu))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (cpuset_cpus_allowed_fallback(p)) {
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++	int cpu;
++
++	cpumask_copy(mask, sched_preempt_mask + ref);
++	if (prio < ref) {
++		for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++			if (prio < cpu_rq(cpu)->prio)
++				cpumask_set_cpu(cpu, mask);
++		}
++	} else {
++		for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++			if (prio >= cpu_rq(cpu)->prio)
++				cpumask_clear_cpu(cpu, mask);
++		}
++	}
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, const cpumask_t *allow_mask, int prio)
++{
++	cpumask_t *mask = sched_preempt_mask + prio;
++	int pr = atomic_read(&sched_prio_record);
++
++	if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++		sched_preempt_mask_flush(mask, prio, pr);
++		atomic_set(&sched_prio_record, prio);
++	}
++
++	return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t allow_mask, mask;
++
++	if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (idle_select_func(&mask, &allow_mask, sched_idle_mask)	||
++	    preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return best_mask_cpu(task_cpu(p), &allow_mask);
++}
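++
++/*
++ * Summary of the selection above (descriptive note, not from the original
++ * code): among the CPUs that are both allowed by @p and active, prefer an
++ * idle CPU (sched_idle_mask); failing that, prefer a CPU whose currently
++ * running priority can be preempted by @p (the per-priority
++ * sched_preempt_mask); otherwise fall back to best_mask_cpu() over the
++ * whole allowed mask.
++ */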
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task; it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++			    raw_spinlock_t *lock, unsigned long irq_flags)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++		if (is_migration_disabled(p))
++			__migrate_force_enable(p, rq);
++
++		if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++			struct migration_arg arg = { p, dest_cpu };
++
++			/* Need help from migration thread: drop lock and wait. */
++			__task_access_unlock(p, lock);
++			raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++			stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++			return 0;
++		}
++		if (task_on_rq_queued(p)) {
++			/*
++			 * OK, since we're going to drop the lock immediately
++			 * afterwards anyway.
++			 */
++			update_rq_clock(rq);
++			rq = move_queued_task(rq, p, dest_cpu);
++			lock = &rq->lock;
++		}
++	}
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++					 struct affinity_context *ctx,
++					 struct rq *rq,
++					 raw_spinlock_t *lock,
++					 unsigned long irq_flags)
++{
++	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	bool kthread = p->flags & PF_KTHREAD;
++	int dest_cpu;
++	int ret = 0;
++
++	if (kthread || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, ctx);
++
++	return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it is executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++int __set_cpus_allowed_ptr(struct task_struct *p,
++			   struct affinity_context *ctx)
++{
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++	 * flags are set.
++	 */
++	if (p->user_cpus_ptr &&
++	    !(ctx->flags & SCA_USER) &&
++	    cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++		ctx->new_mask = rq->scratch_mask;
++
++	return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++
++	return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
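++
++/*
++ * Illustrative sketch (not part of the original code): a driver pinning a
++ * kthread to one CPU and later restoring full affinity might do:
++ *
++ *	set_cpus_allowed_ptr(tsk, cpumask_of(3));
++ *	...
++ *	set_cpus_allowed_ptr(tsk, cpu_possible_mask);
++ */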
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++				     struct cpumask *new_mask,
++				     const struct cpumask *subset_mask)
++{
++	struct affinity_context ac = {
++		.new_mask  = new_mask,
++		.flags     = 0,
++	};
++	unsigned long irq_flags;
++	raw_spinlock_t *lock;
++	struct rq *rq;
++	int err;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++		err = -EINVAL;
++		goto err_unlock;
++	}
++
++	return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++	return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	cpumask_var_t new_mask;
++	const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++	alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++	/*
++	 * __migrate_task() can fail silently in the face of concurrent
++	 * offlining of the chosen destination CPU, so take the hotplug
++	 * lock to ensure that the migration succeeds.
++	 */
++	cpus_read_lock();
++	if (!cpumask_available(new_mask))
++		goto out_set_mask;
++
++	if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++		goto out_free_mask;
++
++	/*
++	 * We failed to find a valid subset of the affinity mask for the
++	 * task, so override it based on its cpuset hierarchy.
++	 */
++	cpuset_cpus_allowed(p, new_mask);
++	override_mask = new_mask;
++
++out_set_mask:
++	if (printk_ratelimit()) {
++		printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++				task_pid_nr(p), p->comm,
++				cpumask_pr_args(override_mask));
++	}
++
++	WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++	cpus_read_unlock();
++	free_cpumask_var(new_mask);
++}
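++
++/*
++ * Usage note (an assumption, not stated in the original code): a typical
++ * caller is an architecture with asymmetric ISA support, e.g. arm64
++ * systems where only some CPUs can run 32-bit tasks; on exec of a 32-bit
++ * binary the task's affinity must be forced into task_cpu_possible_mask().
++ */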
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++	struct affinity_context ac = {
++		.new_mask  = task_user_cpus(p),
++		.flags     = 0,
++	};
++	int ret;
++
++	/*
++	 * Try to restore the old affinity mask with __sched_setaffinity().
++	 * Cpuset masking will be done there too.
++	 */
++	ret = __sched_setaffinity(p, &ac);
++	WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu) {
++		__schedstat_inc(rq->ttwu_local);
++		__schedstat_inc(p->stats.nr_wakeups_local);
++	} else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++	__schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	wakeup_preempt(rq);
++
++	ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * A wakeup may happen between set_current_state() and schedule(). In this
++ * case @p is still runnable, so all that needs doing is to change p->state
++ * back to TASK_RUNNING in an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(); if @p->on_rq
++ * is set, then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		if (!task_on_cpu(p)) {
++			/*
++			 * When on_rq && !on_cpu the task is preempted, see if
++			 * it should preempt the task that is current now.
++			 */
++			update_rq_clock(rq);
++			wakeup_preempt(rq);
++		}
++		ttwu_do_wakeup(p);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	/*
++	 * Must be after enqueueing at least one task such that
++	 * idle_cpu() does not observe a false-negative -- if it does,
++	 * it is possible for select_idle_siblings() to stack a number
++	 * of tasks on this CPU during that window.
++	 *
++	 * It is OK to clear ttwu_pending while another task is still pending.
++	 * We will receive an IPI once local IRQs are re-enabled and then enqueue it.
++	 * Since nr_running > 0 by now, idle_cpu() will always return the correct result.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++		return false;
++	}
++
++	return true;
++}
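++
++/*
++ * Descriptive note (not from the original code): set_nr_if_polling() sets
++ * TIF_NEED_RESCHED on the remote idle task if it is polling (e.g. in
++ * mwait); the polling idle loop notices the flag by itself, so the IPI
++ * can be elided entirely -- hence the wake_idle_without_ipi tracepoint.
++ */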
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU, on receipt of the IPI, will queue the task
++ * via sched_ttwu_pending() for activation so that the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/* Ensure the task will still be allowed to run on the CPU. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	if (cpu == smp_processor_id())
++		return false;
++
++	/*
++	 * If the wakee CPU is idle, or the task is descheduling and is the
++	 * only running task on the CPU, then use the wakelist to offload
++	 * the task activation to the idle (or soon-to-be-idle) CPU as
++	 * the current CPU is likely busy. nr_running is checked to
++	 * avoid unnecessary task stacking.
++	 *
++	 * Note that we can only get here with (wakee) p->on_rq=0,
++	 * p->on_cpu can be whatever, we've done the dequeue, so
++	 * the wakee has been accounted out of ->nr_running.
++	 */
++	if (!cpu_rq(cpu)->nr_running)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	guard(rcu)();
++	if (is_idle_task(rcu_dereference(rq->curr))) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		if (is_idle_task(rq->curr))
++			resched_curr(rq);
++	}
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++	return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++	if (!sched_asym_cpucap_active())
++		return true;
++
++	if (this_cpu == that_cpu)
++		return true;
++
++	return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	if (this_cpu == that_cpu)
++		return true;
++
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ *   The related locking code always holds p::pi_lock when updating
++ *   p::saved_state, which means the code is fully serialized in both cases.
++ *
++ *  For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ *  No other bits are set. This allows us to distinguish all wakeup scenarios.
++ *
++ *  For FREEZER, the wakeup happens via TASK_FROZEN. No other bits are set.
++ *  This allows us to prevent early wakeup of tasks before they can be run on
++ *  asymmetric ISA architectures (e.g. ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++	int match;
++
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++			     state != TASK_RTLOCK_WAIT);
++	}
++
++	*success = !!(match = __task_state_match(p, state));
++
++	/*
++	 * Saved state preserves the task state across blocking on
++	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
++	 * set p::saved_state to TASK_RUNNING, but do not wake the task
++	 * because it waits for a lock wakeup or __thaw_task(). Also
++	 * indicate success because from the regular waker's point of
++	 * view this has succeeded.
++	 *
++	 * After acquiring the lock the task will restore p::__state
++	 * from p::saved_state which ensures that the regular
++	 * wakeup is not lost. The restore will also set
++	 * p::saved_state to TASK_RUNNING so any further tests will
++	 * not result in false positives vs. @success.
++	 */
++	if (match < 0)
++		p->saved_state = TASK_RUNNING;
++
++	return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++	guard(preempt)();
++	int cpu, success = 0;
++
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!ttwu_state_match(p, state, &success))
++			goto out;
++
++		trace_sched_waking(p);
++		ttwu_do_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++		smp_mb__after_spinlock();
++		if (!ttwu_state_match(p, state, &success))
++			break;
++
++		trace_sched_waking(p);
++
++		/*
++		 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++		 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++		 * in smp_cond_load_acquire() below.
++		 *
++		 * sched_ttwu_pending()			try_to_wake_up()
++		 *   STORE p->on_rq = 1			  LOAD p->state
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   UNLOCK rq->lock
++		 *
++		 * [task p]
++		 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * A similar smp_rmb() lives in __task_needs_rq_lock().
++		 */
++		smp_rmb();
++		if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++			break;
++
++#ifdef CONFIG_SMP
++		/*
++		 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++		 * possible to, falsely, observe p->on_cpu == 0.
++		 *
++		 * One must be running (->on_cpu == 1) in order to remove oneself
++		 * from the runqueue.
++		 *
++		 * __schedule() (switch to task 'p')	try_to_wake_up()
++		 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++		 *   UNLOCK rq->lock
++		 *
++		 * __schedule() (put 'p' to sleep)
++		 *   LOCK rq->lock			  smp_rmb();
++		 *   smp_mb__after_spinlock();
++		 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++		 *
++		 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++		 * __schedule().  See the comment for smp_mb__after_spinlock().
++		 *
++		 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++		 * schedule()'s deactivate_task() has 'happened' and p will no longer
++		 * care about its own p->state. See the comment in __schedule().
++		 */
++		smp_acquire__after_ctrl_dep();
++
++		/*
++		 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++		 * == 0), which means we need to do an enqueue, change p->state to
++		 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++		 * enqueue, such as ttwu_queue_wakelist().
++		 */
++		WRITE_ONCE(p->__state, TASK_WAKING);
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, consider queueing p on the remote CPU's wake_list,
++		 * which potentially sends an IPI instead of spinning on p->on_cpu to
++		 * let the waker make forward progress. This is safe because IRQs are
++		 * disabled and the IPI will deliver after on_cpu is cleared.
++		 *
++		 * Ensure we load task_cpu(p) after p->on_cpu:
++		 *
++		 * set_task_cpu(p, cpu);
++		 *   STORE p->cpu = @cpu
++		 * __schedule() (switch to task 'p')
++		 *   LOCK rq->lock
++		 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++		 *   STORE p->on_cpu = 1                LOAD p->cpu
++		 *
++		 * to ensure we observe the correct CPU on which the task is currently
++		 * scheduling.
++		 */
++		if (smp_load_acquire(&p->on_cpu) &&
++		    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++			break;
++
++		/*
++		 * If the owning (remote) CPU is still in the middle of schedule() with
++		 * this task as prev, wait until it's done referencing the task.
++		 *
++		 * Pairs with the smp_store_release() in finish_task().
++		 *
++		 * This ensures that tasks getting woken will be fully ordered against
++		 * their previous state and preserve Program Order.
++		 */
++		smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		sched_task_ttwu(p);
++
++		if ((wake_flags & WF_CURRENT_CPU) &&
++		    cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++			cpu = smp_processor_id();
++		else
++			cpu = select_task_rq(p);
++
++		if (cpu != task_cpu(p)) {
++			if (p->in_iowait) {
++				delayacct_blkio_end(p);
++				atomic_dec(&task_rq(p)->nr_iowait);
++			}
++
++			wake_flags |= WF_MIGRATED;
++			set_task_cpu(p, cpu);
++		}
++#else
++		sched_task_ttwu(p);
++
++		cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++		ttwu_queue(p, cpu, wake_flags);
++	}
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++
++	return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/*
++	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++	 * the task is blocked. Make sure to check @state since ttwu() can drop
++	 * locks at the end, see ttwu_queue_wakelist().
++	 */
++	if (state == TASK_RUNNING || state == TASK_WAKING)
++		return true;
++
++	/*
++	 * Ensure we load p->on_rq after p->__state, otherwise it would be
++	 * possible to, falsely, observe p->on_rq == 0.
++	 *
++	 * See try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	if (p->on_rq)
++		return true;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure the task has finished __schedule() and will not be referenced
++	 * anymore. Again, see try_to_wake_up() for a longer comment.
++	 */
++	smp_rmb();
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++	return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it.  This function can use task_is_runnable() and
++ * task_curr() to work out what the state is, if required.  Given that @func
++ * can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ *   Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++	struct rq *rq = NULL;
++	struct rq_flags rf;
++	int ret;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++	if (__task_needs_rq_lock(p))
++		rq = __task_rq_lock(p, &rf);
++
++	/*
++	 * At this point the task is pinned; either:
++	 *  - blocked and we're holding off wakeups      (pi->lock)
++	 *  - woken, and we're holding off enqueue       (rq->lock)
++	 *  - queued, and we're holding off schedule     (rq->lock)
++	 *  - running, and we're holding off de-schedule (rq->lock)
++	 *
++	 * The called function (@func) can use: task_curr(), p->on_rq and
++	 * p->__state to differentiate between these states.
++	 */
++	ret = func(p, arg);
++
++	if (rq)
++		__task_rq_unlock(rq, &rf);
++
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
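++
++/*
++ * Illustrative sketch (hypothetical callback, not from the original code):
++ *
++ *	static int get_state_cb(struct task_struct *p, void *arg)
++ *	{
++ *		*(unsigned int *)arg = READ_ONCE(p->__state);
++ *		return 0;
++ *	}
++ *
++ *	unsigned int state;
++ *	task_call_func(p, get_state_cb, &state);
++ *
++ * The callback runs with @p pinned in one of the states listed above, so
++ * the snapshot is consistent; it must stay lightweight since a runqueue
++ * lock may be held while it runs.
++ */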
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU.  If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee.  Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++	struct task_struct *t;
++
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	t = rcu_dereference(cpu_curr(cpu));
++	smp_mb(); /* Pairing determined by caller's synchronization design. */
++	return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
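++
++/*
++ * Illustrative sketch (not part of the original code): the waker side of
++ * the wait loop shown above ttwu_runnable() is simply:
++ *
++ *	CONDITION = 1;
++ *	wake_up_process(waiter);
++ *
++ * The store to CONDITION is ordered against the wakee's state check by
++ * the full memory barrier try_to_wake_up() executes before reading
++ * p->state, as documented above.
++ */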
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_SCHEDSTATS
++	/* Even if schedstat is disabled, there should not be garbage */
++	memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++	init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->__state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = p->static_prio;
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	/*
++	 * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++	 * required yet, but lockdep gets upset if rules are violated.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, so that the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sysctl_sched_base_slice;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, smp_processor_id());
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++void sched_cancel_fork(struct task_struct *p)
++{
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	if (!strcmp(str, "enable")) {
++		set_schedstats(true);
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		set_schedstats(false);
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
++		size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_SYSCTL
++static const struct ctl_table sched_core_sysctls[] = {
++#ifdef CONFIG_SCHEDSTATS
++	{
++		.procname       = "sched_schedstats",
++		.data           = NULL,
++		.maxlen         = sizeof(unsigned int),
++		.mode           = 0644,
++		.proc_handler   = sysctl_schedstats,
++		.extra1         = SYSCTL_ZERO,
++		.extra2         = SYSCTL_ONE,
++	},
++#endif /* CONFIG_SCHEDSTATS */
++};
++static int __init sched_core_sysctl_init(void)
++{
++	register_sysctl_init("kernel", sched_core_sysctls);
++	return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_SYSCTL */
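++
++/*
++ * Usage note (descriptive, not from the original code): schedstats can be
++ * toggled at boot via the "schedstats=enable" / "schedstats=disable"
++ * command-line options parsed above, or (with CONFIG_PROC_SYSCTL) at
++ * runtime through the sysctl registered above, e.g.:
++ *
++ *	sysctl -w kernel.sched_schedstats=1
++ */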
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	WRITE_ONCE(p->__state, TASK_RUNNING);
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	wakeup_preempt(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
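++
++/*
++ * Illustrative sketch (assumed API from <linux/preempt.h>, not part of the
++ * original code): a user such as a hypervisor wires the hooks up as:
++ *
++ *	static void my_sched_in(struct preempt_notifier *pn, int cpu) { }
++ *	static void my_sched_out(struct preempt_notifier *pn,
++ *				 struct task_struct *next) { }
++ *
++ *	static struct preempt_ops my_ops = {
++ *		.sched_in  = my_sched_in,
++ *		.sched_out = my_sched_out,
++ *	};
++ *
++ *	preempt_notifier_inc();			// enable the static branch
++ *	preempt_notifier_init(&pn, &my_ops);
++ *	preempt_notifier_register(&pn);		// affects current only
++ */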
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running; we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++	 * its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	void (*func)(struct rq *rq);
++	struct balance_callback *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++	.next = NULL,
++	.func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++	struct balance_callback *head = rq->balance_callback;
++
++	if (likely(!head))
++		return NULL;
++
++	lockdep_assert_rq_held(rq);
++	/*
++	 * Must not take balance_push_callback off the list when
++	 * splice_balance_callbacks() and balance_callbacks() are not
++	 * in the same rq->lock section.
++	 *
++	 * In that case it would be possible for __schedule() to interleave
++	 * and observe the list empty.
++	 */
++	if (split && head == &balance_push_callback)
++		head = NULL;
++	else
++		rq->balance_callback = NULL;
++
++	return head;
++}
++
++struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op, but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @prev: the current task that is being switched out
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. 'prev == current' is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	unsigned int prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for its use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	tick_nohz_task_switch();
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop_lazy_tlb(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop_lazy_tlb_sched(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock() and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	/*
++	 * This is a special case: the newly created task has just
++	 * switched the context for the first time. It is returning from
++	 * schedule for the first time in this path.
++	 */
++	trace_sched_exit_tp(true, CALLER_ADDR0);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab_lazy_tlb() active
++	 *
++	 * kernel ->   user   switch + mmdrop_lazy_tlb() active
++	 *   user ->   user   switch
++	 *
++	 * switch_mm_cid() needs to be updated if the barriers provided
++	 * by context_switch() are modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab_lazy_tlb(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++		lru_gen_use_mm(next->mm);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop_lazy_tlb() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	/* switch_mm_cid() requires the memory barriers above. */
++	switch_mm_cid(rq, prev, next);
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible to use it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++	return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU with IO-wait pending, even though that CPU might not
++ * even end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means, that when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU; it can wake up on a different CPU
++ * than the one it blocked on. This means the per CPU IO-wait number is
++ * meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++	unsigned int i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	cgroup_account_cputime(p, ns);
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Separately return the current task's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization opportunity when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is OK.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a sched_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if ((kstrtol(str, 0, &val))) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr = rq->curr;
++	u64 resched_latency;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++		arch_scale_freq_tick();
++
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	if (dynamic_preempt_lazy() && tif_test_bit(TIF_NEED_RESCHED_LAZY))
++		resched_curr(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	task_tick_mm_cid(rq, rq->curr);
++
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++
++	if (curr->flags & PF_WQ_WORKER)
++		wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (tick_nohz_tick_stopped_cpu(cpu)) {
++		guard(raw_spinlock_irqsave)(&rq->lock);
++		struct task_struct *curr = rq->curr;
++
++		if (cpu_online(cpu)) {
++			update_rq_clock(rq);
++
++			if (!is_idle_task(curr)) {
++				/*
++				 * Make sure the next tick runs within a
++				 * reasonable amount of time.
++				 */
++				u64 delta = rq_clock_task(rq) - curr->last_ran;
++				WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++			}
++			scheduler_task_tick(rq);
++
++			calc_load_nohz_remote(rq);
++		}
++	}
++
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * interval is long enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++	int os;
++
++	if (housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	/* There cannot be competing actions, but don't rely on stop-machine. */
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++	WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++	/* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	check_panic_on_warn("scheduling while atomic");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	WARN_ON_ONCE(ct_state() == CT_STATE_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++	       " ecore_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_idle_mask->bits[0],
++	       sched_pcore_idle_mask->bits[0],
++	       sched_ecore_idle_mask->bits[0]);
++}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
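++/*
++ * Bounds how many tasks a single balance pass will try to migrate; the
++ * lower PREEMPT_RT value keeps the dual-runqueue-locked section short.
++ */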
++__read_mostly unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check that rq->curr is still on the rq */
++	if (!task_on_rq_queued(skip))
++		return 0;
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			sched_task_sanity_check(p, dest_rq);
++			sched_mm_cid_migrate_to(dest_rq, p);
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	cpumask_t *topo_mask, *end_mask, chk;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
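++	/*
++	 * Walk the topology masks outward from the nearest CPUs, pulling
++	 * tasks from the first runqueue level that yields any.
++	 */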
++	do {
++		int i;
++
++		if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++			continue;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				sub_nr_running(src_rq, nr_migrated);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				add_nr_running(rq, nr_migrated);
++
++				update_sched_preempt_mask(rq);
++				cpufreq_update_util(rq, 0);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++topo_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++
++	sched_task_renew(p, rq);
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++static inline int balance_select_task_rq(struct task_struct *p, cpumask_t *avail_mask)
++{
++	cpumask_t mask;
++
++	if (!preempt_mask_check(&mask, avail_mask, task_sched_prio(p)))
++		return -1;
++
++	if (cpumask_and(&mask, &mask, p->cpus_ptr))
++		return best_mask_cpu(task_cpu(p), &mask);
++
++	return task_cpu(p);
++}
++
++static inline void
++__move_queued_task(struct rq *rq, struct task_struct *p, struct rq *dest_rq, int dest_cpu)
++{
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, dest_cpu);
++
++	sched_mm_cid_migrate_to(dest_rq, p);
++
++	sched_task_sanity_check(p, dest_rq);
++	enqueue_task(p, dest_rq, 0);
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++	wakeup_preempt(dest_rq);
++}
++
++static inline void prio_balance(struct rq *rq, const int cpu)
++{
++	struct task_struct *p, *next;
++	cpumask_t mask;
++
++	if (!rq->online)
++		return;
++
++	if (!cpumask_empty(sched_idle_mask))
++		return;
++
++	if (0 == rq->prio_balance_time)
++		return;
++
++	if (rq->clock - rq->prio_balance_time < sysctl_sched_base_slice << 1)
++		return;
++
++	rq->prio_balance_time = rq->clock;
++
++	cpumask_copy(&mask, cpu_active_mask);
++	cpumask_clear_cpu(cpu, &mask);
++
++	p = sched_rq_next_task(rq->curr, rq);
++	while (p != rq->idle) {
++		next = sched_rq_next_task(p, rq);
++		if (!is_migration_disabled(p)) {
++			int dest_cpu;
++
++			dest_cpu = balance_select_task_rq(p, &mask);
++			if (dest_cpu < 0)
++				return;
++
++			if (cpu != dest_cpu) {
++				struct rq *dest_rq = cpu_rq(dest_cpu);
++
++				if (do_raw_spin_trylock(&dest_rq->lock)) {
++					cpumask_clear_cpu(dest_cpu, &mask);
++
++					spin_acquire(&dest_rq->lock.dep_map,
++						     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++					__move_queued_task(rq, p, dest_rq, dest_cpu);
++
++					spin_release(&dest_rq->lock.dep_map, _RET_IP_);
++					do_raw_spin_unlock(&dest_rq->lock);
++				}
++			}
++		}
++		p = next;
++	}
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired, since
++ * there's no point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++			if (likely(rq->balance_func && rq->online))
++				rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++	return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock.
++ */
++#define SM_IDLE		(-1)
++#define SM_NONE		0
++#define SM_PREEMPT		1
++#define SM_RTLOCK_WAIT		2
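++
++/*
++ * SM_IDLE is negative so that "sched_mode > SM_NONE" in __schedule() is
++ * true only for real preemptions (SM_PREEMPT and SM_RTLOCK_WAIT).
++ */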
++
++/*
++ * Helper function for __schedule()
++ *
++ * If a task does not have signals pending, deactivate it;
++ * otherwise mark the task's __state as RUNNING.
++ */
++static bool try_to_block_task(struct rq *rq, struct task_struct *p,
++			      unsigned long *task_state_p)
++{
++	unsigned long task_state = *task_state_p;
++
++	if (signal_pending_state(task_state, p)) {
++		WRITE_ONCE(p->__state, TASK_RUNNING);
++		*task_state_p = TASK_RUNNING;
++		return false;
++	}
++	p->sched_contributes_to_load =
++		(task_state & TASK_UNINTERRUPTIBLE) &&
++		!(task_state & TASK_NOLOAD) &&
++		!(task_state & TASK_FROZEN);
++
++	/*
++	 * __schedule()			ttwu()
++	 *   prev_state = prev->state;    if (p->on_rq && ...)
++	 *   if (prev_state)		    goto out;
++	 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++	 *				  p->state = TASK_WAKING
++	 *
++	 * Where __schedule() and ttwu() have matching control dependencies.
++	 *
++	 * After this, schedule() must not care about p->state any more.
++	 */
++	sched_task_deactivate(p, rq);
++	block_task(rq, p);
++	return true;
++}
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler sched_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(int sched_mode)
++{
++	struct task_struct *prev, *next;
++	/*
++	 * On PREEMPT_RT kernels, SM_RTLOCK_WAIT is noted
++	 * as a preemption by schedule_debug() and RCU.
++	 */
++	bool preempt = sched_mode > SM_NONE;
++	bool is_switch = false;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	trace_sched_entry_tp(preempt, CALLER_ADDR0);
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, preempt);
++
++	/* Bypass the sched_feat(HRTICK) check, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	klp_sched_try_switch(prev);
++
++	local_irq_disable();
++	rcu_note_context_switch(preempt);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr; this
++	 * barrier matches a full barrier in the proximity of the membarrier
++	 * system call exit.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++
++	/* Task state changes only consider SM_PREEMPT as preemption */
++	preempt = sched_mode == SM_PREEMPT;
++
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that we form a control dependency vs deactivate_task() below.
++	 */
++	prev_state = READ_ONCE(prev->__state);
++	if (sched_mode == SM_IDLE) {
++		if (!rq->nr_running) {
++			next = prev;
++			goto picked;
++		}
++	} else if (!preempt && prev_state) {
++		try_to_block_task(rq, prev, &prev_state);
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu);
++picked:
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++	rq->last_seen_need_resched_ns = 0;
++
++	is_switch = prev != next;
++	if (likely(is_switch)) {
++		next->last_ran = rq->clock_task;
++
++		/*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++		 *   RISC-V.  switch_mm() relies on membarrier_arch_switch_mm()
++		 *   on PowerPC and on RISC-V.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 *
++		 * The barrier matches a full barrier in the proximity of
++		 * the membarrier system call entry.
++		 *
++		 * On RISC-V, this barrier pairing is also needed for the
++		 * SYNC_CORE command when switching between processes, cf.
++		 * the inline comments in membarrier_arch_switch_mm().
++		 */
++		++*switch_count;
++
++		trace_sched_switch(preempt, prev, next, prev_state);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++
++		cpu = cpu_of(rq);
++	} else {
++		__balance_callbacks(rq);
++		prio_balance(rq, cpu);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++	trace_sched_exit_tp(is_switch, CALLER_ADDR0);
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(SM_NONE);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++	unsigned int task_flags;
++
++	/*
++	 * Establish LD_WAIT_CONFIG context to ensure none of the code called
++	 * will use a blocking primitive -- which would lead to recursion.
++	 */
++	lock_map_acquire_try(&sched_map);
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker goes to sleep, notify and ask workqueue whether it
++	 * wants to wake up a task to maintain concurrency.
++	 */
++	if (task_flags & PF_WQ_WORKER)
++		wq_worker_sleeping(tsk);
++	else if (task_flags & PF_IO_WORKER)
++		io_wq_worker_sleeping(tsk);
++
++	/*
++	 * spinlock and rwlock must not flush block requests.  This will
++	 * deadlock if the callback attempts to acquire a lock which is
++	 * already acquired.
++	 */
++	WARN_ON_ONCE(current->__state & TASK_RTLOCK_WAIT);
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	blk_flush_plug(tsk->plug, true);
++
++	lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++		if (tsk->flags & PF_BLOCK_TS)
++			blk_plug_invalidate_ts(tsk);
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else if (tsk->flags & PF_IO_WORKER)
++			io_wq_worker_running(tsk);
++	}
++}
++
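++/*
++ * Keep re-invoking __schedule() for as long as need_resched() is set: a
++ * wakeup may mark the task for rescheduling again before preemption is
++ * re-enabled.
++ */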
++static __always_inline void __schedule_loop(int sched_mode)
++{
++	do {
++		preempt_disable();
++		__schedule(sched_mode);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++	lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++	if (!task_is_running(tsk))
++		sched_submit_work(tsk);
++	__schedule_loop(SM_NONE);
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (i.e. has scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a NOP when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->__state);
++	do {
++		__schedule(SM_IDLE);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CT_STATE_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++	__schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(SM_PREEMPT);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return.
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled	preempt_schedule
++#define preempt_schedule_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++		return;
++	preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(SM_PREEMPT);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled	NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++		return;
++	preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of IRQ context.
++ * Note that this is called and returns with IRQs disabled. This will
++ * protect us against recursive calling from IRQ contexts.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(SM_PREEMPT);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		requeue_task(p, rq);
++		wakeup_preempt(rq);
++	}
++}
++
++void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic-sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
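++
++/*
++ * Usage: fetch_and_set(current->sched_rt_mutex, 1) returns the old flag
++ * value while setting the new one, as in the lockdep assertions below.
++ */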
++
++void rt_mutex_pre_schedule(void)
++{
++	lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++	sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++	lockdep_assert(current->sched_rt_mutex);
++	__schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++	sched_update_worker(current);
++	lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that there is a load of trickery involved in making this
++	 * pointer cache work right. rt_mutex_slowunlock()+rt_mutex_postunlock()
++	 * work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++
++	__setscheduler_prio(p, prio);
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	if (task_on_rq_queued(p))
++		__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#endif
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0) && !irqs_disabled()) {
++		preempt_schedule_common();
++		return 1;
++	}
++	/*
++	 * In PREEMPT_RCU kernels, ->rcu_read_lock_nesting tells the tick
++	 * whether the current CPU is in an RCU read-side critical section,
++	 * so the tick can report quiescent states even for CPUs looping
++	 * in kernel context.  In contrast, in non-preemptible kernels,
++	 * RCU readers leave no in-memory hints, which means that CPU-bound
++	 * processes executing in kernel context might never report an
++	 * RCU quiescent state.  Therefore, the following code causes
++	 * cond_resched() to report a quiescent state, but only when RCU
++	 * is in urgent need of one.
++	 * A third case (preemptible but non-PREEMPT_RCU) provides for
++	 * urgently needed quiescent states via rcu_flavor_sched_clock_irq().
++	 */
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled	__cond_resched
++#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled	__cond_resched
++#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++	if (!static_branch_unlikely(&sk_dynamic_might_resched))
++		return 0;
++	return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (!_cond_resched())
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *   dynamic_preempt_lazy       <- false
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *   dynamic_preempt_lazy       <- false
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ *   dynamic_preempt_lazy       <- false
++ *
++ * LAZY:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ *   dynamic_preempt_lazy       <- true
++ */
++
++enum {
++	preempt_dynamic_undefined = -1,
++	preempt_dynamic_none,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++	preempt_dynamic_lazy,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++#ifndef CONFIG_PREEMPT_RT
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++#endif
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++#ifdef CONFIG_ARCH_HAS_PREEMPT_LAZY
++	if (!strcmp(str, "lazy"))
++		return preempt_dynamic_lazy;
++#endif
++
++	return -EINVAL;
++}
++
++#define preempt_dynamic_key_enable(f)  static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_key_disable(f) static_key_disable(&sk_dynamic_##f.key)
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f)	preempt_dynamic_key_enable(f)
++#define preempt_dynamic_disable(f)	preempt_dynamic_key_disable(f)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++
++static void __sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	preempt_dynamic_enable(cond_resched);
++	preempt_dynamic_enable(might_resched);
++	preempt_dynamic_enable(preempt_schedule);
++	preempt_dynamic_enable(preempt_schedule_notrace);
++	preempt_dynamic_enable(irqentry_exit_cond_resched);
++	preempt_dynamic_key_disable(preempt_lazy);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_disable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_enable(might_resched);
++		preempt_dynamic_disable(preempt_schedule);
++		preempt_dynamic_disable(preempt_schedule_notrace);
++		preempt_dynamic_disable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_disable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		preempt_dynamic_enable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_disable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: full\n");
++		break;
++
++	case preempt_dynamic_lazy:
++		preempt_dynamic_disable(cond_resched);
++		preempt_dynamic_disable(might_resched);
++		preempt_dynamic_enable(preempt_schedule);
++		preempt_dynamic_enable(preempt_schedule_notrace);
++		preempt_dynamic_enable(irqentry_exit_cond_resched);
++		preempt_dynamic_key_enable(preempt_lazy);
++		if (mode != preempt_dynamic_mode)
++			pr_info("Dynamic Preempt: lazy\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++	mutex_lock(&sched_dynamic_mutex);
++	__sched_dynamic_update(mode);
++	mutex_unlock(&sched_dynamic_mutex);
++}
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 0;
++	}
++
++	sched_dynamic_update(mode);
++	return 1;
++}
++__setup("preempt=", setup_preempt_mode);
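++
++/*
++ * Usage sketch: the preemption model is chosen on the kernel command line,
++ * e.g. "preempt=none", "preempt=voluntary", "preempt=full" or, where
++ * CONFIG_ARCH_HAS_PREEMPT_LAZY is set, "preempt=lazy". Upstream also
++ * exposes a runtime knob backed by the same helpers:
++ *
++ *   # echo full > /sys/kernel/debug/sched/preempt
++ *
++ * Both paths end up in sched_dynamic_update() above.
++ */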
++
++static void __init preempt_dynamic_init(void)
++{
++	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++			sched_dynamic_update(preempt_dynamic_none);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++			sched_dynamic_update(preempt_dynamic_voluntary);
++		} else if (IS_ENABLED(CONFIG_PREEMPT_LAZY)) {
++			sched_dynamic_update(preempt_dynamic_lazy);
++		} else {
++			/* Default static call setting, nothing to do */
++			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++			preempt_dynamic_mode = preempt_dynamic_full;
++			pr_info("Dynamic Preempt: full\n");
++		}
++	}
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++	bool preempt_model_##mode(void)						 \
++	{									 \
++		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++		return preempt_dynamic_mode == preempt_dynamic_##mode;		 \
++	}									 \
++	EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
++PREEMPT_MODEL_ACCESSOR(lazy);
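++
++/*
++ * For reference, PREEMPT_MODEL_ACCESSOR(full) above expands to roughly:
++ *
++ *	bool preempt_model_full(void)
++ *	{
++ *		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined);
++ *		return preempt_dynamic_mode == preempt_dynamic_full;
++ *	}
++ *	EXPORT_SYMBOL_GPL(preempt_model_full);
++ */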
++
++#else /* !CONFIG_PREEMPT_DYNAMIC: */
++
++#define preempt_dynamic_mode -1
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++const char *preempt_modes[] = {
++	"none", "voluntary", "full", "lazy", NULL,
++};
++
++const char *preempt_model_str(void)
++{
++	bool brace = IS_ENABLED(CONFIG_PREEMPT_RT) &&
++		(IS_ENABLED(CONFIG_PREEMPT_DYNAMIC) ||
++		 IS_ENABLED(CONFIG_PREEMPT_LAZY));
++	static char buf[128];
++
++	if (IS_ENABLED(CONFIG_PREEMPT_BUILD)) {
++		struct seq_buf s;
++
++		seq_buf_init(&s, buf, sizeof(buf));
++		seq_buf_puts(&s, "PREEMPT");
++
++		if (IS_ENABLED(CONFIG_PREEMPT_RT))
++			seq_buf_printf(&s, "%sRT%s",
++				       brace ? "_{" : "_",
++				       brace ? "," : "");
++
++		if (IS_ENABLED(CONFIG_PREEMPT_DYNAMIC)) {
++			seq_buf_printf(&s, "(%s)%s",
++				       preempt_dynamic_mode >= 0 ?
++				       preempt_modes[preempt_dynamic_mode] : "undef",
++				       brace ? "}" : "");
++			return seq_buf_str(&s);
++		}
++
++		if (IS_ENABLED(CONFIG_PREEMPT_LAZY)) {
++			seq_buf_printf(&s, "LAZY%s",
++				       brace ? "}" : "");
++			return seq_buf_str(&s);
++		}
++
++		return seq_buf_str(&s);
++	}
++
++	if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY_BUILD))
++		return "VOLUNTARY";
++
++	return "NONE";
++}
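++
++/*
++ * Example outputs, derived from the branches above:
++ *
++ *   PREEMPT_BUILD only:                       "PREEMPT"
++ *   PREEMPT_BUILD + PREEMPT_DYNAMIC (full):   "PREEMPT(full)"
++ *   PREEMPT_BUILD + PREEMPT_RT:               "PREEMPT_RT"
++ *   PREEMPT_RT + PREEMPT_DYNAMIC (full):      "PREEMPT_{RT,(full)}"
++ *   PREEMPT_VOLUNTARY_BUILD:                  "VOLUNTARY"
++ *   otherwise:                                "NONE"
++ */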
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_flush_plug(current->plug, true);
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (task_is_running(p))
++		pr_cont("  running task    ");
++	free = stack_not_used(p);
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d task_flags:0x%04x flags:0x%08lx\n",
++		free, task_pid_nr(p), task_tgid_nr(p),
++		ppid, p->flags, read_task_thread_flags(p));
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	unsigned int state = READ_ONCE(p->__state);
++
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++		return false;
++
++	return true;
++}
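++
++/*
++ * Example: sysrq-w reaches show_state_filter() below with
++ * state_filter == TASK_UNINTERRUPTIBLE, so plain D-state (and TASK_KILLABLE)
++ * tasks are dumped while TASK_IDLE kthreads
++ * (TASK_UNINTERRUPTIBLE | TASK_NOLOAD) are skipped.
++ */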
++
++
++void show_state_filter(unsigned int state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all tasks on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	if (in_hardirq() && cpu == smp_processor_id()) {
++		struct pt_regs *regs;
++
++		regs = get_irq_regs();
++		if (regs) {
++			show_regs(regs);
++			return;
++		}
++	}
++
++	if (trigger_single_cpu_backtrace(cpu))
++		return;
++
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++	struct affinity_context ac = (struct affinity_context) {
++		.new_mask  = cpumask_of(cpu),
++		.flags     = 0,
++	};
++#endif
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++
++	idle->last_ran = rq->clock_task;
++	idle->__state = TASK_RUNNING;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
++
++	sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * No validation and serialization required at boot time and for
++	 * setting up the idle tasks of not yet online CPUs.
++	 */
++	set_cpus_allowed_common(idle, &ac);
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Invoked on the outgoing CPU in context of the CPU hotplug thread
++ * after ensuring that there are no user space tasks left on the CPU.
++ *
++ * If there is a lazy mm in use on the hotplug thread, drop it and
++ * switch to init_mm.
++ *
++ * The reference count on init_mm is dropped in finish_cpu().
++ */
++static void sched_force_init_mm(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	if (mm != &init_mm) {
++		mmgrab_lazy_tlb(&init_mm);
++		local_irq_disable();
++		current->active_mm = &init_mm;
++		switch_mm_irqs_off(mm, &init_mm, current);
++		local_irq_enable();
++		finish_arch_post_lock_switch();
++		mmdrop_lazy_tlb(mm);
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
++ * effective when the hotplug motion is down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline and when invoked on the outgoing
++	 * CPU.
++	 */
++	if (!cpu_dying(rq->cpu) || rq != this_rq())
++		return;
++
++	/*
++	 * Both the cpu-hotplug thread and the stop task fall into this case
++	 * and are required to complete the hotplug process.
++	 */
++	if (kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	preempt_disable();
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	preempt_enable();
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online) {
++		update_rq_clock(rq);
++		rq->online = false;
++	}
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in the
++		 * suspend/resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		cpuset_reset_sched_domains();
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static void cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		cpuset_reset_sched_domains();
++	}
++}
++
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_inc_cpuslocked(&sched_smt_present);
++		cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++	}
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(sched_pcore_idle_mask);
++		cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++	}
++#endif
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	sched_set_rq_online(rq, cpu);
++
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	sched_smt_present_inc(cpu);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Synchronize before parking the smpboot threads to take care of the
++	 * RCU boost case.
++	 */
++	synchronize_rcu();
++
++	sched_set_rq_offline(rq, cpu);
++
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	sched_smt_present_dec(cpu);
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	cpuset_cpu_inactive(cpu);
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	sched_force_init_mm();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the tear-down thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++	}
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
++	}									\
++	if (!last)								\
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(mask),	\
++				  nr_cpumask_bits);
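++
++/*
++ * Expansion sketch: TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false)
++ * becomes roughly:
++ *
++ *	if (cpumask_and(topo, topo, topology_sibling_cpumask(cpu))) {
++ *		cpumask_copy(topo, topology_sibling_cpumask(cpu));
++ *		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - smt",
++ *		       cpu, (topo++)->bits[0]);
++ *	}
++ *	bitmap_complement(cpumask_bits(topo),
++ *			  cpumask_bits(topology_sibling_cpumask(cpu)),
++ *			  nr_cpumask_bits);
++ *
++ * i.e. each level records its mask (advancing topo on a hit) and then seeds
++ * the next slot with the complement of the current level's mask, so the
++ * per-CPU array walks outward from the CPU itself towards "everything else".
++ */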
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *topo;
++
++	for_each_online_cpu(cpu) {
++		topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++		bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++				  nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++		BUG();
++	current->flags &= ~PF_NO_SETAFFINITY;
++
++	sched_init_topology();
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++	sched_cpu_starting(smp_processor_id());
++	return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++			 " by Alfred Chen.\n");
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_QUEUE_BITS; i++)
++		cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->prio = IDLE_TASK_SCHED_PRIO;
++		rq->prio_balance_time = 0;
++#ifdef CONFIG_SCHED_PDS
++		rq->prio_idx = rq->prio;
++#endif
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++		rq->clear_idle_mask_func = cpumask_clear_cpu;
++		rq->set_idle_mask_func = cpumask_set_cpu;
++		rq->balance_func = NULL;
++		rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++
++		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab_lazy_tlb(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	WARN_ON(!set_kthread_struct(current));
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	__sched_fork(0, current);
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++	unsigned int state = get_current_state();
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%x set at [<%p>] %pS\n", state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	__might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++	if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++		return;
++
++	if (preempt_count() == preempt_offset)
++		return;
++
++	pr_err("Preemption disabled at:");
++	print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++	unsigned int nested = preempt_count();
++
++	nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++	return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++	       file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), current->non_block_count,
++	       current->pid, current->comm);
++	pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++	       offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++	if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++		pr_err("RCU nest depth: %d, expected: %u\n",
++		       rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++	}
++
++	if (task_stack_end_corrupted(current))
++		pr_emerg("Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++
++	print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++				 preempt_disable_ip);
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		schedstat_set(p->stats.wait_start,  0);
++		schedstat_set(p->stats.sleep_start, 0);
++		schedstat_set(p->stats.block_start, 0);
++
++		if (!rt_or_dl_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for KDB.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++	/*
++	 * We have to wait for yet another RCU grace period to expire, as
++	 * print_cfs_stats() might run concurrently.
++	 */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* RCU callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs: */
++	sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++	call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	return 0;
++}
++
++static int sched_group_set_idle(struct task_group *tg, long idle)
++{
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	return sched_group_set_shares(css_tg(css), shareval);
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 idle)
++{
++	return sched_group_set_idle(css_tg(css), idle);
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, s64 cfs_quota_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 cfs_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++				  struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++				   struct cftype *cftype, u64 cfs_burst_us)
++{
++	return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++				struct cftype *cft, s64 val)
++{
++	return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++				    struct cftype *cftype, u64 rt_period_us)
++{
++	return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++				   struct cftype *cft)
++{
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++				    char *buf, size_t nbytes,
++				    loff_t off)
++{
++	return nbytes;
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++	{
++		.name = "idle",
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
++	{
++		.name = "cfs_quota_us",
++		.read_s64 = cpu_cfs_quota_read_s64,
++		.write_s64 = cpu_cfs_quota_write_s64,
++	},
++	{
++		.name = "cfs_period_us",
++		.read_u64 = cpu_cfs_period_read_u64,
++		.write_u64 = cpu_cfs_period_write_u64,
++	},
++	{
++		.name = "cfs_burst_us",
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++	{
++		.name = "stat",
++		.seq_show = cpu_cfs_stat_show,
++	},
++	{
++		.name = "stat.local",
++		.seq_show = cpu_cfs_local_stat_show,
++	},
++#endif
++#ifdef CONFIG_RT_GROUP_SCHED
++	{
++		.name = "rt_runtime_us",
++		.read_s64 = cpu_rt_runtime_read,
++		.write_s64 = cpu_rt_runtime_write,
++	},
++	{
++		.name = "rt_period_us",
++		.read_u64 = cpu_rt_period_read_uint,
++		.write_u64 = cpu_rt_period_write_uint,
++	},
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++#endif
++	{ }	/* Terminate */
++};
++
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cft, u64 weight)
++{
++	return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++				    struct cftype *cft)
++{
++	return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++				     struct cftype *cft, s64 nice)
++{
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++	return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++			     char *buf, size_t nbytes, loff_t off)
++{
++	return nbytes;
++}
++#endif
++
++static struct cftype cpu_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++	{
++		.name = "weight",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_weight_read_u64,
++		.write_u64 = cpu_weight_write_u64,
++	},
++	{
++		.name = "weight.nice",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_weight_nice_read_s64,
++		.write_s64 = cpu_weight_nice_write_s64,
++	},
++	{
++		.name = "idle",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_s64 = cpu_idle_read_s64,
++		.write_s64 = cpu_idle_write_s64,
++	},
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
++	{
++		.name = "max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_max_show,
++		.write = cpu_max_write,
++	},
++	{
++		.name = "max.burst",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.read_u64 = cpu_cfs_burst_read_u64,
++		.write_u64 = cpu_cfs_burst_write_u64,
++	},
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
++	{
++		.name = "uclamp.min",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_min_show,
++		.write = cpu_uclamp_min_write,
++	},
++	{
++		.name = "uclamp.max",
++		.flags = CFTYPE_NOT_ON_ROOT,
++		.seq_show = cpu_uclamp_max_show,
++		.write = cpu_uclamp_max_write,
++	},
++#endif
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++	.can_attach	= cpu_cgroup_can_attach,
++#endif
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ *      X = Y = 0
++ *
++ *      w[X]=1          w[Y]=1
++ *      MB              MB
++ *      r[Y]=y          r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0                                      CPU1
++ *
++ * Context switch CS-1                       Remote-clear
++ *   - store to rq->curr: (N)->(Y) (TSA)     - cmpxchg to *pcpu_id to LAZY (TMA)
++ *                                             (implied barrier after cmpxchg)
++ *   - switch_mm_cid()
++ *     - memory barrier (see switch_mm_cid()
++ *       comment explaining how this barrier
++ *       is combined with other scheduler
++ *       barriers)
++ *     - mm_cid_get (next)
++ *       - READ_ONCE(*pcpu_cid)              - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
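++
++/*
++ * Mapping Scenario A onto the generic Dekker sketch above:
++ *
++ *   w[X] = store to rq->curr, (N)->(Y)       (TSA, CPU0)
++ *   r[Y] = READ_ONCE(*pcpu_cid)              (mm_cid_get, CPU0)
++ *   w[Y] = cmpxchg *pcpu_cid to LAZY         (TMA, CPU1)
++ *   r[X] = rcu_dereference(src_rq->curr)     (remote-clear, CPU1)
++ *
++ * "x==0 && y==0 is impossible" translates to: CPU1 cannot miss task (Y)
++ * while CPU0 also misses the LAZY flag, which is exactly the failure mode
++ * that property (1) forbids.
++ */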
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++	t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++					  struct task_struct *t,
++					  struct mm_cid *src_pcpu_cid)
++{
++	struct mm_struct *mm = t->mm;
++	struct task_struct *src_task;
++	int src_cid, last_mm_cid;
++
++	if (!mm)
++		return -1;
++
++	last_mm_cid = t->last_mm_cid;
++	/*
++	 * If the migrated task has no last cid, or if the current
++	 * task on src rq uses the cid, it means the source cid does not need
++	 * to be moved to the destination cpu.
++	 */
++	if (last_mm_cid == -1)
++		return -1;
++	src_cid = READ_ONCE(src_pcpu_cid->cid);
++	if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++		return -1;
++
++	/*
++	 * If we observe an active task using the mm on this rq, it means we
++	 * are not the last task to be migrated from this cpu for this mm, so
++	 * there is no need to move src_cid to the destination cpu.
++	 */
++	guard(rcu)();
++	src_task = rcu_dereference(src_rq->curr);
++	if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++		t->last_mm_cid = -1;
++		return -1;
++	}
++
++	return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++					      struct task_struct *t,
++					      struct mm_cid *src_pcpu_cid,
++					      int src_cid)
++{
++	struct task_struct *src_task;
++	struct mm_struct *mm = t->mm;
++	int lazy_cid;
++
++	if (src_cid == -1)
++		return -1;
++
++	/*
++	 * Attempt to clear the source cpu cid to move it to the destination
++	 * cpu.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(src_cid);
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++		return -1;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, this task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		src_task = rcu_dereference(src_rq->curr);
++		if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++			/*
++			 * We observed an active task for this mm, there is therefore
++			 * no point in moving this cid to the destination cpu.
++			 */
++			t->last_mm_cid = -1;
++			return -1;
++		}
++	}
++
++	/*
++	 * The src_cid is unused, so it can be unset.
++	 */
++	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++		return -1;
++	WRITE_ONCE(src_pcpu_cid->recent_cid, MM_CID_UNSET);
++	return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
++{
++	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++	struct mm_struct *mm = t->mm;
++	int src_cid, src_cpu;
++	bool dst_cid_is_set;
++	struct rq *src_rq;
++
++	lockdep_assert_rq_held(dst_rq);
++
++	if (!mm)
++		return;
++	src_cpu = t->migrate_from_cpu;
++	if (src_cpu == -1) {
++		t->last_mm_cid = -1;
++		return;
++	}
++	/*
++	 * Move the src cid if the dst cid is unset. This keeps id
++	 * allocation closest to 0 in cases where few threads migrate around
++	 * many CPUs.
++	 *
++	 * If destination cid or recent cid is already set, we may have
++	 * to just clear the src cid to ensure compactness in frequent
++	 * migrations scenarios.
++	 *
++	 * It is not useful to clear the src cid when the number of threads is
++	 * greater than or equal to the number of allowed CPUs, because user-space
++	 * can expect that the number of allowed cids can reach the number of
++	 * allowed CPUs.
++	 */
++	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++	dst_cid_is_set = !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->cid)) ||
++			 !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->recent_cid));
++	if (dst_cid_is_set && atomic_read(&mm->mm_users) >= READ_ONCE(mm->nr_cpus_allowed))
++		return;
++	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++	src_rq = cpu_rq(src_cpu);
++	src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++	if (src_cid == -1)
++		return;
++	src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++							    src_cid);
++	if (src_cid == -1)
++		return;
++	if (dst_cid_is_set) {
++		__mm_cid_put(mm, src_cid);
++		return;
++	}
++	/* Move src_cid to dst cpu. */
++	mm_cid_snapshot_time(dst_rq, mm);
++	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++	WRITE_ONCE(dst_pcpu_cid->recent_cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++				      int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *t;
++	int cid, lazy_cid;
++
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid))
++		return;
++
++	/*
++	 * Clear the cpu cid if it is set to keep cid allocation compact.  If
++	 * there happens to be other tasks left on the source cpu using this
++	 * mm, the next task using this mm will reallocate its cid on context
++	 * switch.
++	 */
++	lazy_cid = mm_cid_set_lazy_put(cid);
++	if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++		return;
++
++	/*
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm matches the scheduler barrier in context_switch()
++	 * between store to rq->curr and load of prev and next task's
++	 * per-mm/cpu cid.
++	 *
++	 * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++	 * rq->curr->mm_cid_active matches the barrier in
++	 * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++	 * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++	 * load of per-mm/cpu cid.
++	 */
++
++	/*
++	 * If we observe an active task using the mm on this rq after setting
++	 * the lazy-put flag, that task will be responsible for transitioning
++	 * from lazy-put flag set to MM_CID_UNSET.
++	 */
++	scoped_guard (rcu) {
++		t = rcu_dereference(rq->curr);
++		if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++			return;
++	}
++
++	/*
++	 * The cid is unused, so it can be unset.
++	 * Disable interrupts to keep the window of cid ownership without rq
++	 * lock small.
++	 */
++	scoped_guard (irqsave) {
++		if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++			__mm_cid_put(mm, cid);
++	}
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct mm_cid *pcpu_cid;
++	struct task_struct *curr;
++	u64 rq_clock;
++
++	/*
++	 * rq->clock load is racy on 32-bit but one spurious clear once in a
++	 * while is irrelevant.
++	 */
++	rq_clock = READ_ONCE(rq->clock);
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++	/*
++	 * In order to take care of infrequently scheduled tasks, bump the time
++	 * snapshot associated with this cid if an active task using the mm is
++	 * observed on this rq.
++	 */
++	scoped_guard (rcu) {
++		curr = rcu_dereference(rq->curr);
++		if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++			WRITE_ONCE(pcpu_cid->time, rq_clock);
++			return;
++		}
++	}
++
++	if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++					     int weight)
++{
++	struct mm_cid *pcpu_cid;
++	int cid;
++
++	pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++	cid = READ_ONCE(pcpu_cid->cid);
++	if (!mm_cid_is_valid(cid) || cid < weight)
++		return;
++	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void task_mm_cid_work(struct callback_head *work)
++{
++	unsigned long now = jiffies, old_scan, next_scan;
++	struct task_struct *t = current;
++	struct cpumask *cidmask;
++	struct mm_struct *mm;
++	int weight, cpu;
++
++	WARN_ON_ONCE(t != container_of(work, struct task_struct, cid_work));
++
++	work->next = work;	/* Prevent double-add */
++	if (t->flags & PF_EXITING)
++		return;
++	mm = t->mm;
++	if (!mm)
++		return;
++	old_scan = READ_ONCE(mm->mm_cid_next_scan);
++	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	if (!old_scan) {
++		unsigned long res;
++
++		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++		if (res != old_scan)
++			old_scan = res;
++		else
++			old_scan = next_scan;
++	}
++	if (time_before(now, old_scan))
++		return;
++	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++		return;
++	cidmask = mm_cidmask(mm);
++	/* Clear cids that were not recently used. */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_old(mm, cpu);
++	weight = cpumask_weight(cidmask);
++	/*
++	 * Clear cids that are greater or equal to the cidmask weight to
++	 * recompact it.
++	 */
++	for_each_possible_cpu(cpu)
++		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	int mm_users = 0;
++
++	if (mm) {
++		mm_users = atomic_read(&mm->mm_users);
++		if (mm_users == 1)
++			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++	}
++	t->cid_work.next = &t->cid_work;	/* Protect against double add */
++	init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++	struct callback_head *work = &curr->cid_work;
++	unsigned long now = jiffies;
++
++	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++	    work->next != work)
++		return;
++	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++		return;
++
++	/* No page allocation under rq lock */
++	task_work_add(curr, work, TWA_RESUME);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	guard(rq_lock_irqsave)(rq);
++	preempt_enable_no_resched();	/* holding spinlock */
++	WRITE_ONCE(t->mm_cid_active, 0);
++	/*
++	 * Store t->mm_cid_active before loading per-mm/cpu cid.
++	 * Matches barrier in sched_mm_cid_remote_clear_old().
++	 */
++	smp_mb();
++	mm_cid_put(mm);
++	t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct rq *rq;
++
++	if (!mm)
++		return;
++
++	preempt_disable();
++	rq = this_rq();
++	scoped_guard (rq_lock_irqsave, rq) {
++		preempt_enable_no_resched();	/* holding spinlock */
++		WRITE_ONCE(t->mm_cid_active, 1);
++		/*
++		 * Store t->mm_cid_active before loading per-mm/cpu cid.
++		 * Matches barrier in sched_mm_cid_remote_clear_old().
++		 */
++		smp_mb();
++		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, t, mm);
++	}
++	rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++	WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++	t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..12d76d9d290e
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,213 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++struct affinity_context {
++	const struct cpumask	*new_mask;
++	struct cpumask		*user_mask;
++	unsigned int		flags;
++};
++
++/* CONFIG_SCHED_CLASS_EXT is not supported */
++#define scx_switched_all()	false
++
++#define SCA_CHECK		0x01
++#define SCA_MIGRATE_DISABLE	0x02
++#define SCA_MIGRATE_ENABLE	0x04
++#define SCA_USER		0x08
++
++#ifdef CONFIG_SMP
++
++extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	/*
++	 * See do_set_cpus_allowed() above for the rcu_head usage.
++	 */
++	int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++	return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++#else /* !CONFIG_SMP: */
++
++static inline int __set_cpus_allowed_ptr(struct task_struct *p,
++					 struct affinity_context *ctx)
++{
++	return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++	return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++#else /* !CONFIG_RT_MUTEXES: */
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++
++#endif /* !CONFIG_RT_MUTEXES */
++
++extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
++extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++extern void __setscheduler_prio(struct task_struct *p, int prio);
++
++/*
++ * Context API
++ */
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (lock)
++		raw_spin_unlock(lock);
++}
++
++void check_task_changed(struct task_struct *p, struct rq *rq);
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	struct list_head *next = p->sq_node.next;
++
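++	/*
++	 * If @next points back into the heads[] array rather than at a task,
++	 * @p was the last task of its priority level: look up the next set
++	 * priority bit and return the first task queued at that level.
++	 */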
++	if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++		struct list_head *head;
++		unsigned long idx = next - &rq->queue.heads[0];
++
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++extern void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef ALT_SCHED_DEBUG
++extern void alt_sched_debug(void);
++#else
++static inline void alt_sched_debug(void) {}
++#endif
++
++extern int sched_yield_type;
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++				   const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++/* balance callback */
++#ifdef CONFIG_SMP
++extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
++extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
++#else
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..0366c0b35bc2
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,1037 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++			       struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO	(32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS		(64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
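++
++/*
++ * Illustrative arithmetic (assuming the kernel's usual
++ * SCHED_FIXEDPOINT_SHIFT of 10) on a 64-bit build:
++ *
++ *	scale_load(1024)      == 1024 << 10 == 1048576
++ *	scale_load_down(1536) == max(2UL, 1536 >> 10) == 2
++ *
++ * i.e. weights carry 10 extra bits of resolution internally and are
++ * clamped to at least 2 when scaled back down.
++ */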
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC         0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK         0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU         0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC         0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED     0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU  0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS	(SCHED_LEVELS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++	struct balance_callback *next;
++	void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++	struct task_struct	*task;
++	int			active;
++	cpumask_t		*cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t			lock;
++
++	struct task_struct __rcu	*curr;
++	struct task_struct		*idle;
++	struct task_struct		*stop;
++	struct mm_struct		*prev_mm;
++
++	struct sched_queue		queue		____cacheline_aligned;
++
++	int				prio;
++#ifdef CONFIG_SCHED_PDS
++	int				prio_idx;
++	u64				time_edge;
++#endif
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++	set_idle_mask_func_t	set_idle_mask_func;
++	clear_idle_mask_func_t	clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++	balance_func_t		balance_func;
++	struct balance_arg	active_balance_arg		____cacheline_aligned;
++	struct cpu_stop_work	active_balance_work;
++
++	struct balance_callback	*balance_callback;
++
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++	s32 load_history;
++	u64 load_block;
++	u64 load_stamp;
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	/* Ensure that all clocks are in the same cache line */
++	u64			clock ____cacheline_aligned;
++	u64			clock_task;
++	u64			prio_balance_time;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within a rcu lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++	/* Scratch cpumask to be temporarily used under rq_lock */
++	cpumask_var_t		scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_SYSCTL
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++	int cpu;
++
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++
++	return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
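++
++/*
++ * Illustration: sched_cpu_topo_masks is ordered from the narrowest to
++ * the widest affinity level (SMT siblings, then coregroup, then core,
++ * then the rest, per the enum above), so __best_mask_cpu() advances to
++ * ever wider masks and returns a cpu from the topologically closest
++ * non-empty intersection.
++ */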
++
++#endif /* CONFIG_SMP */
++
++extern void resched_latency_warn(int cpu, u64 latency);
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: calls to
++	 * sched_info_xxxx() may be made without holding rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: calls to
++	 * sched_info_xxxx() may be made without holding rq->lock
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are scheduler APIs used in other kernel code.
++ * They use a dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++	return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++	return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++	lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++	raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++	local_irq_disable();
++	raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++	raw_spin_rq_unlock(rq);
++	local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++extern int sched_clock_irqtime;
++
++static inline int irqtime_enabled(void)
++{
++	return sched_clock_irqtime;
++}
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted from it and never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#else
++
++static inline int irqtime_enabled(void)
++{
++	return 0;
++}
++
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++				 unsigned long min,
++				 unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++extern const char *preempt_modes[];
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline unsigned long
++uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id)
++{
++	if (clamp_id == UCLAMP_MIN)
++		return 0;
++
++	return SCHED_CAPACITY_SCALE;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++static inline bool uclamp_is_used(void)
++{
++	return false;
++}
++
++static inline unsigned long
++uclamp_rq_get(struct rq *rq, enum uclamp_id clamp_id)
++{
++	if (clamp_id == UCLAMP_MIN)
++		return 0;
++
++	return SCHED_CAPACITY_SCALE;
++}
++
++static inline void
++uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id, unsigned int value)
++{
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++	return false;
++}
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
++#define MM_CID_SCAN_DELAY	100			/* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++	if (cid < 0)
++		return;
++	cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
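++/*
++ * Sketch of the per-mm/cpu cid lifecycle implied above: a cid moves
++ * MM_CID_UNSET -> valid under the rq lock (mm_cid_get()), can be
++ * flagged MM_CID_LAZY_PUT remotely without the rq lock, and is
++ * returned to MM_CID_UNSET by either the owner (mm_cid_put_lazy())
++ * or the remote clearer, whichever wins the cmpxchg.
++ */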
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++	struct mm_struct *mm = t->mm;
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (!mm_cid_is_lazy_put(cid) ||
++	    !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, res;
++
++	lockdep_assert_irqs_disabled();
++	cid = __this_cpu_read(pcpu_cid->cid);
++	for (;;) {
++		if (mm_cid_is_unset(cid))
++			return MM_CID_UNSET;
++		/*
++		 * Attempt transition from valid or lazy-put to unset.
++		 */
++		res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++		if (res == cid)
++			break;
++		cid = res;
++	}
++	return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++	int cid;
++
++	lockdep_assert_irqs_disabled();
++	cid = mm_cid_pcpu_unset(mm);
++	if (cid == MM_CID_UNSET)
++		return;
++	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
++{
++	struct cpumask *cidmask = mm_cidmask(mm);
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	int cid, max_nr_cid, allowed_max_nr_cid;
++
++	/*
++	 * After shrinking the number of threads or reducing the number
++	 * of allowed cpus, reduce the value of max_nr_cid so expansion
++	 * of cid allocation will preserve cache locality if the number
++	 * of threads or allowed cpus increases again.
++	 */
++	max_nr_cid = atomic_read(&mm->max_nr_cid);
++	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
++					   atomic_read(&mm->mm_users))),
++	       max_nr_cid > allowed_max_nr_cid) {
++		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
++		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
++			max_nr_cid = allowed_max_nr_cid;
++			break;
++		}
++	}
++	/* Try to re-use recent cid. This improves cache locality. */
++	cid = __this_cpu_read(pcpu_cid->recent_cid);
++	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
++	    !cpumask_test_and_set_cpu(cid, cidmask))
++		return cid;
++	/*
++	 * Expand cid allocation if the maximum number of concurrency
++	 * IDs allocated (max_nr_cid) is below the number of cpus allowed
++	 * and the number of threads. Expanding cid allocation as much as
++	 * possible improves cache locality.
++	 */
++	cid = max_nr_cid;
++	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
++		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
++		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
++			continue;
++		if (!cpumask_test_and_set_cpu(cid, cidmask))
++			return cid;
++	}
++	/*
++	 * Find the first available concurrency id.
++	 * Retry finding first zero bit if the mask is temporarily
++	 * filled. This only happens during concurrent remote-clear
++	 * which owns a cid without holding a rq lock.
++	 */
++	for (;;) {
++		cid = cpumask_first_zero(cidmask);
++		if (cid < READ_ONCE(mm->nr_cpus_allowed))
++			break;
++		cpu_relax();
++	}
++	if (cpumask_test_and_set_cpu(cid, cidmask))
++		return -1;
++
++	return cid;
++}
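++
++/*
++ * Worked example (illustrative): an mm with 4 threads allowed on 16
++ * cpus keeps max_nr_cid at roughly min(16, 4) == 4, so the fast paths
++ * above hand out cids in [0, 4), and a userspace reader of the
++ * concurrency id (e.g. via rseq) can index a 4-entry array no matter
++ * which of the 16 cpus the thread runs on.
++ */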
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, allowing to estimate how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++	struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++	lockdep_assert_rq_held(rq);
++	WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct task_struct *t,
++			       struct mm_struct *mm)
++{
++	int cid;
++
++	/*
++	 * All allocations (even those using the cid_lock) are lock-free. If
++	 * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++	 * guarantee forward progress.
++	 */
++	if (!READ_ONCE(use_cid_lock)) {
++		cid = __mm_cid_try_get(t, mm);
++		if (cid >= 0)
++			goto end;
++		raw_spin_lock(&cid_lock);
++	} else {
++		raw_spin_lock(&cid_lock);
++		cid = __mm_cid_try_get(t, mm);
++		if (cid >= 0)
++			goto unlock;
++	}
++
++	/*
++	 * The cid was concurrently allocated. Retry while forcing subsequent
++	 * allocations to use the cid_lock to ensure forward progress.
++	 */
++	WRITE_ONCE(use_cid_lock, 1);
++	/*
++	 * Set use_cid_lock before allocation. Only care about program order
++	 * because this is only required for forward progress.
++	 */
++	barrier();
++	/*
++	 * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all incoming allocations observe the use_cid_lock flag set.
++	 */
++	do {
++		cid = __mm_cid_try_get(t, mm);
++		cpu_relax();
++	} while (cid < 0);
++	/*
++	 * Allocate before clearing use_cid_lock. Only care about
++	 * program order because this is for forward progress.
++	 */
++	barrier();
++	WRITE_ONCE(use_cid_lock, 0);
++unlock:
++	raw_spin_unlock(&cid_lock);
++end:
++	mm_cid_snapshot_time(rq, mm);
++	return cid;
++}
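++
++/*
++ * Note on the protocol above: the fast path allocates without cid_lock;
++ * only when a lock-free attempt fails does an allocator take cid_lock
++ * and raise use_cid_lock, funnelling every later allocation through the
++ * lock until the contended allocation succeeds. This trades temporary
++ * serialization for a forward-progress guarantee.
++ */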
++
++static inline int mm_cid_get(struct rq *rq, struct task_struct *t,
++			     struct mm_struct *mm)
++{
++	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++	struct cpumask *cpumask;
++	int cid;
++
++	lockdep_assert_rq_held(rq);
++	cpumask = mm_cidmask(mm);
++	cid = __this_cpu_read(pcpu_cid->cid);
++	if (mm_cid_is_valid(cid)) {
++		mm_cid_snapshot_time(rq, mm);
++		return cid;
++	}
++	if (mm_cid_is_lazy_put(cid)) {
++		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++	}
++	cid = __mm_cid_get(rq, t, mm);
++	__this_cpu_write(pcpu_cid->cid, cid);
++	__this_cpu_write(pcpu_cid->recent_cid, cid);
++
++	return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++				 struct task_struct *prev,
++				 struct task_struct *next)
++{
++	/*
++	 * Provide a memory barrier between rq->curr store and load of
++	 * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++	 *
++	 * Should be adapted if context_switch() is modified.
++	 */
++	if (!next->mm) {                                // to kernel
++		/*
++		 * user -> kernel transition does not guarantee a barrier, but
++		 * we can use the fact that it performs an atomic operation in
++		 * mmgrab().
++		 */
++		if (prev->mm)                           // from user
++			smp_mb__after_mmgrab();
++		/*
++		 * kernel -> kernel transition does not change rq->curr->mm
++		 * state. It stays NULL.
++		 */
++	} else {                                        // to user
++		/*
++		 * kernel -> user transition does not provide a barrier
++		 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++		 * Provide it here.
++		 */
++		if (!prev->mm)                          // from kernel
++			smp_mb();
++		/*
++		 * user -> user transition guarantees a memory barrier through
++		 * switch_mm() when current->mm changes. If current->mm is
++		 * unchanged, no barrier is needed.
++		 */
++	}
++	if (prev->mm_cid_active) {
++		mm_cid_snapshot_time(rq, prev->mm);
++		mm_cid_put_lazy(prev);
++		prev->mm_cid = -1;
++	}
++	if (next->mm_cid_active)
++		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++		       struct balance_callback *head,
++		       void (*func)(struct rq *rq))
++{
++	lockdep_assert_rq_held(rq);
++
++	/*
++	 * Don't (re)queue an already queued item; nor queue anything when
++	 * balance_push() is active, see the comment with
++	 * balance_push_callback.
++	 */
++	if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++		return;
++
++	head->func = func;
++	head->next = rq->balance_callback;
++	rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++	if (cpulist_parse(str, &sched_pcore_mask))
++		pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++	return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++		cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++	cpumask_set_cpu(cpu, dstp);
++	cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++	cpumask_clear_cpu(cpu, dstp);
++	cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++				 const struct cpumask *src2p)
++{
++	return cpumask_and(dstp, src1p, src2p + 1)	||
++	       cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++					const struct cpumask *src2p)
++{
++	return cpumask_and(dstp, src1p, src2p + 1)	||
++	       cpumask_and(dstp, src1p, src2p + 2)	||
++	       cpumask_and(dstp, src1p, src2p);
++}
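++
++/*
++ * Both selectors assume @src2p is the first entry of consecutive idle
++ * cpu masks, with src2p + 1 (and src2p + 2 for p1p2) holding the more
++ * preferred masks, which are tried before falling back to the base
++ * idle mask.
++ */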
++
++/* common balance functions */
++static int active_balance_cpu_stop(void *data)
++{
++	struct balance_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++	cpumask_t tmp;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	arg->active = 0;
++
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++	    !is_migration_disabled(p)) {
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++	struct balance_arg *arg;
++	unsigned long flags;
++	struct task_struct *p;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++
++	arg = &rq->active_balance_arg;
++	res = (rq->nr_running == 1) &&
++	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&
++	      cpumask_intersects(p->cpus_ptr, target_mask) &&
++	      !arg->active;
++	if (res) {
++		arg->task = p;
++		arg->cpumask = target_mask;
++
++		arg->active = 1;
++	}
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res) {
++		preempt_disable();
++		raw_spin_unlock(&src_rq->lock);
++
++		stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++				    &rq->active_balance_work);
++
++		preempt_enable();
++		raw_spin_lock(&src_rq->lock);
++	}
++
++	return res;
++}
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++	if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, single_task_mask, cpu)
++			if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++				return 1;
++	}
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++	cpumask_t smt_single_mask;
++
++	if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++		int i, cpu = cpu_of(rq);
++
++		for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++			    trigger_active_balance(rq, cpu_rq(i), target_mask))
++				return 1;
++		}
++	}
++
++	return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    (/* smt core group balance */
++	     (static_key_count(&sched_smt_present.key) > 1 &&
++	      smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++	     ) ||
++	     /* e core to idle smt core balance */
++	     ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++		return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++	if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++		queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    static_key_count(&sched_smt_present.key) > 1 &&
++	    smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++		return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++	if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++		queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    /* smt occupied p core to idle e core balance */
++	    smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++		return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++	queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++	cpumask_t single_task_mask;
++
++	if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++	    cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++	    /* idle e core to p core balance */
++	    ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++		return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++	queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...)	printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...)	do { } while(0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func)						\
++{										\
++	idle_select_func = func;						\
++	printk(KERN_INFO "sched: "#func);					\
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func)					\
++{										\
++	rq->balance_func = func;						\
++	SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func, cpu);			\
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func)			\
++{										\
++	rq->set_idle_mask_func		= set_func;				\
++	rq->clear_idle_mask_func	= clear_func;				\
++	SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func, cpu);	\
++}
++
++void sched_init_topology(void)
++{
++	int cpu;
++	struct rq *rq;
++	cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++	int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++	if (!cpumask_empty(&sched_smt_mask))
++		printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++	if (!cpumask_empty(&sched_pcore_mask)) {
++		cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++		printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++		       sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++		ecore_present = !cpumask_empty(&sched_ecore_mask);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	/* idle select function */
++	if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++		SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++	} else
++#endif
++	if (!cpumask_empty(&sched_pcore_mask)) {
++		SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++	}
++
++	for_each_online_cpu(cpu) {
++		rq = cpu_rq(cpu);
++		/* take the chance to reset the time slice for idle tasks */
++		rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++		if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++			if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++			    !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++				SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++			} else {
++				SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++			}
++
++			continue;
++		}
++#endif
++		/* !SMT or only one cpu in sg */
++		if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++			if (ecore_present)
++				SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++			continue;
++		}
++		if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++			SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++			if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++				SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++		}
++	}
++}
++#endif /* CONFIG_SMP */
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
++static inline void boost_task(struct task_struct *p, int n)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); its return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
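++
++/*
++ * Illustrative mapping (assuming the usual MIN_NORMAL_PRIO of 128,
++ * with MIN_SCHED_NORMAL_PRIO == 32): an RT task with p->prio in
++ * 0..127 lands on level p->prio >> 2, i.e. 0..31, while a SCHED_NORMAL
++ * task with p->prio 128 and boost_prio 0 lands on
++ * 32 + (128 + 0 - 128) / 2 == 32, the first normal level.
++ */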
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)	\
++	prio = task_sched_prio(p);		\
++	idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
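++	/* delta >> 22 counts ~4ms (2^22 ns) units slept, used as the boost step */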
++	if (likely(delta > 0))
++		boost_task(p, delta >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index 72d97aa8b726..60ce3eecaa7b 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -49,15 +49,21 @@
+ 
+ #include "idle.c"
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+ 
+ #include "cputime.c"
++#ifndef CONFIG_SCHED_ALT
+ #include "deadline.c"
++#endif
+ 
+ #ifdef CONFIG_SCHED_CLASS_EXT
+ # include "ext.c"
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index bf9d8db94b70..1c5443b89013 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+ 
+ #include "clock.c"
+ 
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -68,7 +72,7 @@
+ # include "cpufreq_schedutil.c"
+ #endif
+ 
+-#include "debug.c"
++# include "debug.c"
+ 
+ #ifdef CONFIG_SCHEDSTATS
+ # include "stats.c"
+@@ -82,7 +86,9 @@
+ 
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 461242ec958a..c50e2cdd4444 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -223,6 +223,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+ 
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned long min, max, util = scx_cpuperf_target(sg_cpu->cpu);
+ 
+ 	if (!scx_switched_all())
+@@ -231,6 +232,10 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ 	util = max(util, boost);
+ 	sg_cpu->bw_min = min;
+ 	sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++	sg_cpu->bw_min = 0;
++	sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -390,8 +395,10 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+-		sg_cpu->sg_policy->need_freq_update = true;
++		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -685,6 +692,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 6dab4854c6c0..24705643a077 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -124,7 +124,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -148,7 +148,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		task_group_account_field(p, CPUTIME_NICE, cputime);
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -286,7 +286,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -296,7 +296,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -621,7 +621,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 557246880a7e..1236d8fb5f9b 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+  * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /sys/kernel/debug/sched/debug and
+  * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -281,6 +283,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ #ifdef CONFIG_SMP
+@@ -471,9 +474,11 @@ static const struct file_operations fair_server_period_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= single_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static void debugfs_fair_server_init(void)
+ {
+ 	struct dentry *d_fair;
+@@ -494,6 +499,7 @@ static void debugfs_fair_server_init(void)
+ 		debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_period_fops);
+ 	}
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static __init int sched_init_debug(void)
+ {
+@@ -501,14 +507,17 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
+ 	debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ 	debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+ 
+@@ -533,13 +542,17 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_fair_server_init();
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1288,6 +1301,11 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
+ 
+ 	sched_show_numa(p, m);
+ }
++#else
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++						  struct seq_file *m)
++{ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void proc_sched_set_task(struct task_struct *p)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 2c85c86b455f..4369a4b123c9 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -423,6 +423,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -538,3 +539,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM	(32)
++#define SCHED_EDGE_DELTA	(SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++	if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++		return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++	return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++		MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio)							\
++	if (p->prio < MIN_NORMAL_PRIO) {							\
++		prio = p->prio >> 2;								\
++		idx = prio;									\
++	} else {										\
++		u64 sched_dl = max(p->deadline, rq->time_edge);					\
++		prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge;			\
++		idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl);			\
++	}
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++		sched_prio :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++	return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++		sched_idx :
++		MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++	return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++	DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++	if (now == old)
++		return;
++
++	rq->time_edge = now;
++	delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	prio = MIN_SCHED_NORMAL_PRIO;
++	for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++		list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++				      SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++	bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++	if (!list_empty(&head)) {
++		u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++		__list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++		set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++	}
++	bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++		       (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++	if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++		return;
++
++	rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++	rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MIN_NORMAL_PRIO)
++		p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++			      (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sysctl_sched_base_slice;
++	sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index 7a8534a2deff..c57eb8f000d1 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * hardware:
+  *
+@@ -468,6 +470,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * Load avg and utiliztion metrics need to be updated periodically and before
+  * consumption. This function updates the metrics for all subsystems except for
+@@ -487,3 +490,4 @@ bool update_other_load_avgs(struct rq *rq)
+ 		update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ 		update_irq_load_avg(rq, 0);
+ }
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index f4f6a0875c66..ee780f2b6c17 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,14 +1,16 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+ bool update_other_load_avgs(struct rq *rq);
++#endif
+ 
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -45,6 +47,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -181,9 +184,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -201,6 +206,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 83e3aa917142..b633905baadb 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3985,4 +3989,9 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx);
+ 
+ #include "ext.h"
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 4346fd81c31f..11f05554b538 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -163,6 +166,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 452826df6ae1..b980bfc4ec95 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart  (struct rq *rq, unsigned long long delt
+ 
+ #endif /* CONFIG_SCHEDSTATS */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ 	struct sched_entity     se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ 	return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 547c1f05b667..90bfb06bb34e 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -16,6 +16,14 @@
+ #include "sched.h"
+ #include "autogroup.h"
+ 
++#ifdef CONFIG_SCHED_ALT
++#include "alt_core.h"
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++	return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++#else /* !CONFIG_SCHED_ALT */
+ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+ 	int prio;
+@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ 
+ 	return prio;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ /*
+  * Calculate the expected normal priority: i.e. priority
+@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+  */
+ static inline int normal_prio(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++#else /* !CONFIG_SCHED_ALT */
+ 	return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ /*
+@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
+ 
+ void set_user_nice(struct task_struct *p, long nice)
+ {
++#ifdef CONFIG_SCHED_ALT
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling until the task
++	 * becomes SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++#else
+ 	bool queued, running;
+ 	struct rq *rq;
+ 	int old_prio;
+@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ 	 * lowered its priority, then reschedule its CPU:
+ 	 */
+ 	p->sched_class->prio_changed(rq, p, old_prio);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL(set_user_nice);
+ 
+@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
+  */
+ int task_prio(const struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++/*
++ * sched policy              return value    kernel prio     user prio/nice
++ *
++ * (BMQ)normal, batch, idle  [0 ... 53]      [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
++ * (PDS)normal, batch, idle  [0 ... 39]      100             0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]   [99 ... 0]      [0 ... 99]
++ */
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++#else
+ 	return p->prio - MAX_RT_PRIO;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ /**
+@@ -300,11 +357,16 @@ static void __setscheduler_params(struct task_struct *p,
+ 
+ 	p->policy = policy;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_policy(policy))
+ 		__setparam_dl(p, attr);
+ 	else if (fair_policy(policy))
+ 		__setparam_fair(p, attr);
++#else	/* !CONFIG_SCHED_ALT */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++#endif /* CONFIG_SCHED_ALT */
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	/* rt-policy tasks do not have a timerslack */
+ 	if (rt_or_dl_task_policy(p)) {
+ 		p->timer_slack_ns = 0;
+@@ -312,6 +374,7 @@ static void __setscheduler_params(struct task_struct *p,
+ 		/* when switching back to non-rt policy, restore timerslack */
+ 		p->timer_slack_ns = p->default_timer_slack_ns;
+ 	}
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	/*
+ 	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
+@@ -320,7 +383,9 @@ static void __setscheduler_params(struct task_struct *p,
+ 	 */
+ 	p->rt_priority = attr->sched_priority;
+ 	p->normal_prio = normal_prio(p);
++#ifndef CONFIG_SCHED_ALT
+ 	set_load_weight(p, true);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ /*
+@@ -336,6 +401,8 @@ static bool check_same_owner(struct task_struct *p)
+ 		uid_eq(cred->euid, pcred->uid));
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
++
+ #ifdef CONFIG_UCLAMP_TASK
+ 
+ static int uclamp_validate(struct task_struct *p,
+@@ -449,6 +516,7 @@ static inline int uclamp_validate(struct task_struct *p,
+ static void __setscheduler_uclamp(struct task_struct *p,
+ 				  const struct sched_attr *attr) { }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ /*
+  * Allow unprivileged RT tasks to decrease priority.
+@@ -459,11 +527,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ 					 const struct sched_attr *attr,
+ 					 int policy, int reset_on_fork)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (fair_policy(policy)) {
+ 		if (attr->sched_nice < task_nice(p) &&
+ 		    !is_nice_reduction(p, attr->sched_nice))
+ 			goto req_priv;
+ 	}
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	if (rt_policy(policy)) {
+ 		unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
+@@ -478,6 +548,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ 			goto req_priv;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	/*
+ 	 * Can't set/change SCHED_DEADLINE policy at all for now
+ 	 * (safest behavior); in the future we would like to allow
+@@ -495,6 +566,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ 		if (!is_nice_reduction(p, task_nice(p)))
+ 			goto req_priv;
+ 	}
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	/* Can't change other user's priorities: */
+ 	if (!check_same_owner(p))
+@@ -517,6 +589,158 @@ int __sched_setscheduler(struct task_struct *p,
+ 			 const struct sched_attr *attr,
+ 			 bool user, bool pi)
+ {
++#ifdef CONFIG_SCHED_ALT
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, newprio;
++	struct balance_callback *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio 0 SCHED_FIFO
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	if (user) {
++		retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++		if (retval)
++			return retval;
++
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here:
++	 * for a task p which is not running, reading rq->stop is
++	 * racy but acceptable, as ->stop doesn't change much.
++	 * An enhancement can be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop threads is a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboost
++		 * itself.
++		 */
++		newprio = rt_effective_prio(p, newprio);
++	}
++
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi)
++		rt_mutex_adjust_pi(p);
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	return retval;
++#else /* !CONFIG_SCHED_ALT */
+ 	int oldpolicy = -1, policy = attr->sched_policy;
+ 	int retval, oldprio, newprio, queued, running;
+ 	const struct sched_class *prev_class, *next_class;
+@@ -755,6 +979,7 @@ int __sched_setscheduler(struct task_struct *p,
+ 	if (cpuset_locked)
+ 		cpuset_unlock();
+ 	return retval;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ 
+ static int _sched_setscheduler(struct task_struct *p, int policy,
+@@ -766,8 +991,10 @@ static int _sched_setscheduler(struct task_struct *p, int policy,
+ 		.sched_nice	= PRIO_TO_NICE(p->static_prio),
+ 	};
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (p->se.custom_slice)
+ 		attr.sched_runtime = p->se.slice;
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
+ 	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
+@@ -935,13 +1162,18 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
+ 
+ static void get_params(struct task_struct *p, struct sched_attr *attr)
+ {
+-	if (task_has_dl_policy(p)) {
++#ifndef CONFIG_SCHED_ALT
++	if (task_has_dl_policy(p))
+ 		__getparam_dl(p, attr);
+-	} else if (task_has_rt_policy(p)) {
++	else
++#endif
++	if (task_has_rt_policy(p)) {
+ 		attr->sched_priority = p->rt_priority;
+ 	} else {
+ 		attr->sched_nice = task_nice(p);
++#ifndef CONFIG_SCHED_ALT
+ 		attr->sched_runtime = p->se.slice;
++#endif
+ 	}
+ }
+ 
+@@ -1123,6 +1355,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
+ #ifdef CONFIG_SMP
+ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	/*
+ 	 * If the task isn't a deadline task or admission control is
+ 	 * disabled then we don't care about affinity changes.
+@@ -1146,6 +1379,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ 	guard(rcu)();
+ 	if (!cpumask_subset(task_rq(p)->rd->span, mask))
+ 		return -EBUSY;
++#endif
+ 
+ 	return 0;
+ }
+@@ -1170,9 +1404,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ 	ctx->new_mask = new_mask;
+ 	ctx->flags |= SCA_CHECK;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	retval = dl_task_check_affinity(p, new_mask);
+ 	if (retval)
+ 		goto out_free_new_mask;
++#endif
+ 
+ 	retval = __set_cpus_allowed_ptr(p, ctx);
+ 	if (retval)
+@@ -1352,13 +1588,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+ 
+ static void do_sched_yield(void)
+ {
+-	struct rq_flags rf;
+ 	struct rq *rq;
++	struct rq_flags rf;
++
++#ifdef CONFIG_SCHED_ALT
++	struct task_struct *p;
++
++	if (!sched_yield_type)
++		return;
+ 
+ 	rq = this_rq_lock_irq(&rf);
+ 
++	schedstat_inc(rq->yld_count);
++
++	p = current;
++	if (rt_task(p)) {
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	} else if (rq->nr_running > 1) {
++		do_sched_yield_type_1(p, rq);
++		if (task_on_rq_queued(p))
++			requeue_task(p, rq);
++	}
++#else /* !CONFIG_SCHED_ALT */
++	rq = this_rq_lock_irq(&rf);
++
+ 	schedstat_inc(rq->yld_count);
+ 	current->sched_class->yield_task(rq);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	preempt_disable();
+ 	rq_unlock_irq(rq, &rf);
+@@ -1427,6 +1684,9 @@ EXPORT_SYMBOL(yield);
+  */
+ int __sched yield_to(struct task_struct *p, bool preempt)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return 0;
++#else /* !CONFIG_SCHED_ALT */
+ 	struct task_struct *curr = current;
+ 	struct rq *rq, *p_rq;
+ 	int yielded = 0;
+@@ -1472,6 +1732,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ 		schedule();
+ 
+ 	return yielded;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL_GPL(yield_to);
+ 
+@@ -1492,7 +1753,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
+ 	case SCHED_RR:
+ 		ret = MAX_RT_PRIO-1;
+ 		break;
++#ifndef CONFIG_SCHED_ALT
+ 	case SCHED_DEADLINE:
++#endif
+ 	case SCHED_NORMAL:
+ 	case SCHED_BATCH:
+ 	case SCHED_IDLE:
+@@ -1520,7 +1783,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ 	case SCHED_RR:
+ 		ret = 1;
+ 		break;
++#ifndef CONFIG_SCHED_ALT
+ 	case SCHED_DEADLINE:
++#endif
+ 	case SCHED_NORMAL:
+ 	case SCHED_BATCH:
+ 	case SCHED_IDLE:
+@@ -1532,7 +1797,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ 
+ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int time_slice = 0;
++#endif
+ 	int retval;
+ 
+ 	if (pid < 0)
+@@ -1547,6 +1814,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ 		if (retval)
+ 			return retval;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 		scoped_guard (task_rq_lock, p) {
+ 			struct rq *rq = scope.rq;
+ 			if (p->sched_class->get_rr_interval)
+@@ -1555,6 +1823,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ 	}
+ 
+ 	jiffies_to_timespec64(time_slice, t);
++#else
++	}
++
++	alt_sched_debug();
++
++	*t = ns_to_timespec64(sysctl_sched_base_slice);
++#endif /* !CONFIG_SCHED_ALT */
+ 	return 0;
+ }
+ 
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index b958fe48e020..dbe8882be278 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+  * Scheduler topology setup/handling methods
+  */
+ 
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+ 
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1499,8 +1500,10 @@ static void asym_cpu_capacity_scan(void)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1733,6 +1736,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1769,6 +1773,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology_saved = NULL;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2840,3 +2845,31 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	sched_domains_mutex_unlock();
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++	return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++	return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++
++void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
++{}
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 9b4f0cff76ea..1ac3182c8a87 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -77,6 +77,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+ static const int ngroups_max = NGROUPS_MAX;
+ static const int cap_last_cap = CAP_LAST_CAP;
+ 
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PROC_SYSCTL
+ 
+ /**
+@@ -1720,6 +1724,17 @@ static const struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_TWO,
++	},
++#endif
+ #ifdef CONFIG_SYSCTL_ARCH_UNALIGN_NO_WARN
+ 	{
+ 		.procname	= "ignore-unaligned-usertrap",
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 2e5b89d7d866..38c4526f5bc7 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -835,6 +835,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -842,6 +843,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -869,8 +871,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -884,7 +888,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1120,8 +1124,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 553fa469d7cc..d7c90f6ff009 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -2453,7 +2453,11 @@ static void run_local_timers(void)
+ 		 */
+ 		if (time_after_eq(jiffies, READ_ONCE(base->next_expiry)) ||
+ 		    (i == BASE_DEF && tmigr_requires_handle_remote())) {
++#ifdef CONFIG_SCHED_BMQ
++			__raise_softirq_irqoff(TIMER_SOFTIRQ);
++#else
+ 			raise_timer_softirq(TIMER_SOFTIRQ);
++#endif
+ 			return;
+ 		}
+ 	}
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index fd259da0aa64..c51da39ec059 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1645,6 +1645,9 @@ static void osnoise_sleep(bool skip_period)
+  */
+ static inline int osnoise_migration_pending(void)
+ {
++#ifdef CONFIG_SCHED_ALT
++	return 0;
++#else
+ 	if (!current->migration_pending)
+ 		return 0;
+ 
+@@ -1666,6 +1669,7 @@ static inline int osnoise_migration_pending(void)
+ 	mutex_unlock(&interface_lock);
+ 
+ 	return 1;
++#endif
+ }
+ 
+ /*
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index d88c44f1dfa5..4af3cbbdcccb 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1423,10 +1423,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 9f9148075828..dd183bc5968f 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1247,6 +1247,7 @@ static bool kick_pool(struct worker_pool *pool)
+ 
+ 	p = worker->task;
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 	/*
+ 	 * Idle @worker is about to execute @work and waking up provides an
+@@ -1276,6 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
+ 		}
+ 	}
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ 	wake_up_process(p);
+ 	return true;
+ }
+@@ -1404,7 +1407,11 @@ void wq_worker_running(struct task_struct *task)
+ 	 * CPU intensive auto-detection cares about how long a work item hogged
+ 	 * CPU without sleeping. Reset the starting timestamp on wakeup.
+ 	 */
++#ifdef CONFIG_SCHED_ALT
++	worker->current_at = worker->task->sched_time;
++#else
+ 	worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 
+ 	WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1489,7 +1496,11 @@ void wq_worker_tick(struct task_struct *task)
+ 	 * We probably want to make this prettier in the future.
+ 	 */
+ 	if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++	    worker->task->sched_time - worker->current_at <
++#else
+ 	    worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ 	    wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ 		return;
+ 
+@@ -3166,7 +3177,11 @@ __acquires(&pool->lock)
+ 	worker->current_func = work->func;
+ 	worker->current_pwq = pwq;
+ 	if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++		worker->current_at = worker->task->sched_time;
++#else
+ 		worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ 	work_data = *work_data_bits(work);
+ 	worker->current_color = get_work_color(work_data);
+ 

diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..6dc48eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2023-02-13 08:16:09.534315265 -0500
++++ b/init/Kconfig	2023-02-13 08:17:24.130237204 -0500
+@@ -867,8 +867,9 @@ config UCLAMP_BUCKETS_COUNT
+ 	  If in doubt, use the default value.
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+ 	  This feature enable alternative CPU scheduler"
+ 
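
A note on the pds.h file added earlier in this patch: the circular run queue only works because SCHED_NORMAL_PRIO_NUM is a power of two, which lets sched_prio2idx() and sched_idx2prio() rotate priorities by rq->time_edge and back with a single mask. The standalone C sketch below is not part of the patch; it hard-codes illustrative values and pretends MIN_SCHED_NORMAL_PRIO is 0 (the real constants live in alt_sched.h, also added by this commit), purely to show the round trip:

/* Sketch of the PDS circular priority mapping; illustration only. */
#include <assert.h>
#include <stdio.h>

#define SCHED_NORMAL_PRIO_NUM	32	/* must be a power of 2 */
#define SCHED_NORMAL_PRIO_MOD(x)	((x) & (SCHED_NORMAL_PRIO_NUM - 1))

int main(void)
{
	unsigned long long time_edge = 1000;	/* stand-in for rq->time_edge */

	for (int prio = 0; prio < SCHED_NORMAL_PRIO_NUM; prio++) {
		/* sched_prio2idx(): rotate forward by the current time edge */
		int idx = SCHED_NORMAL_PRIO_MOD(prio + time_edge);
		/* sched_idx2prio(): rotate back */
		int back = SCHED_NORMAL_PRIO_MOD(idx - time_edge);

		assert(back == prio);	/* the mapping round-trips exactly */
		printf("prio %2d -> idx %2d\n", prio, idx);
	}
	return 0;
}

Because the mask trick only holds for power-of-two sizes, the "PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2" comment in the header is a hard requirement, not a hint.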


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-16  5:21 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-16  5:21 UTC (permalink / raw
  To: gentoo-commits

commit:     cae8c76b9da5adc2674fdeb2ce15c61d57b6b511
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 04:57:07 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 04:57:07 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cae8c76b

Add patch 1900 to fix log tree replay failure on btrfs

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                  |   4 +
 1900_btrfs_fix_log_tree_replay_failure.patch | 143 +++++++++++++++++++++++++++
 2 files changed, 147 insertions(+)

diff --git a/0000_README b/0000_README
index 60a31446..208a4856 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
+Patch:  1900_btrfs_fix_log_tree_replay_failure.patch
+From:   https://gitlab.com/cki-project/kernel-ark/-/commit/e6c71b29fab08fd0ab55d2f83c4539d68d543895
+Desc:   btrfs: fix log tree replay failure due to file with 0 links and extents
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_btrfs_fix_log_tree_replay_failure.patch b/1900_btrfs_fix_log_tree_replay_failure.patch
new file mode 100644
index 00000000..335bb7f2
--- /dev/null
+++ b/1900_btrfs_fix_log_tree_replay_failure.patch
@@ -0,0 +1,143 @@
+From e6c71b29fab08fd0ab55d2f83c4539d68d543895 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Wed, 30 Jul 2025 19:18:37 +0100
+Subject: [PATCH] btrfs: fix log tree replay failure due to file with 0 links
+ and extents
+
+If we log a new inode (not persisted in a past transaction) that has 0
+links and extents, then log another inode with a higher inode number, we
+end up failing to replay the log tree with -EINVAL. The steps for
+this are:
+
+1) create new file A
+2) write some data to file A
+3) open an fd on file A
+4) unlink file A
+5) fsync file A using the previously open fd
+6) create file B (has higher inode number than file A)
+7) fsync file B
+8) power fail before current transaction commits
+
+Now when attempting to mount the fs, the log replay will fail with
+-ENOENT at replay_one_extent() when attempting to replay the first
+extent of file A. The failure comes when trying to open the inode for
+file A in the subvolume tree, since it doesn't exist.
+
+Before commit 5f61b961599a ("btrfs: fix inode lookup error handling
+during log replay"), the returned error was -EIO instead of -ENOENT,
+since we converted any errors when attempting to read an inode during
+log replay to -EIO.
+
+The reason for this is that the log replay procedure fails to ignore
+the current inode when we are at the stage LOG_WALK_REPLAY_ALL, our
+current inode has 0 links and the last inode we processed in the
+previous stage has a non-zero link count. In other words, the issue is
+that at replay_one_extent() we only update wc->ignore_cur_inode if the
+current replay stage is LOG_WALK_REPLAY_INODES.
+
+Fix this by updating wc->ignore_cur_inode whenever we find an inode item
+regardless of the current replay stage. This is a simple solution and easy
+to backport, but later we can do other alternatives like avoid logging
+extents or inode items other than the inode item for inodes with a link
+count of 0.
+
+The problem with the wc->ignore_cur_inode logic has been around since
+commit f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync
+of a tmpfile"), but it only became frequently hit since the more recent
+commit 5e85262e542d ("btrfs: fix fsync of files with no hard links not
+persisting deletion"), because we stopped skipping inodes with a link
+count of 0 when logging, while before the problem would only be triggered
+if trying to replay a log tree created with an older kernel which has a
+logged inode with 0 links.
+
+A test case for fstests will be submitted soon.
+
+Reported-by: Peter Jung <ptr1337@cachyos.org>
+Link: https://lore.kernel.org/linux-btrfs/fce139db-4458-4788-bb97-c29acf6cb1df@cachyos.org/
+Reported-by: burneddi <burneddi@protonmail.com>
+Link: https://lore.kernel.org/linux-btrfs/lh4W-Lwc0Mbk-QvBhhQyZxf6VbM3E8VtIvU3fPIQgweP_Q1n7wtlUZQc33sYlCKYd-o6rryJQfhHaNAOWWRKxpAXhM8NZPojzsJPyHMf2qY=@protonmail.com/#t
+Reported-by: Russell Haley <yumpusamongus@gmail.com>
+Link: https://lore.kernel.org/linux-btrfs/598ecc75-eb80-41b3-83c2-f2317fbb9864@gmail.com/
+Fixes: f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync of a tmpfile")
+Reviewed-by: Boris Burkov <boris@bur.io>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+---
+ fs/btrfs/tree-log.c | 45 +++++++++++++++++++++++++++++----------------
+ 1 file changed, 29 insertions(+), 16 deletions(-)
+
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index e05140ce95be9..2fb9e7bfc9077 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -321,8 +321,7 @@ struct walk_control {
+ 
+ 	/*
+ 	 * Ignore any items from the inode currently being processed. Needs
+-	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
+-	 * the LOG_WALK_REPLAY_INODES stage.
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY.
+ 	 */
+ 	bool ignore_cur_inode;
+ 
+@@ -2410,23 +2409,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 	nritems = btrfs_header_nritems(eb);
+ 	for (i = 0; i < nritems; i++) {
+-		btrfs_item_key_to_cpu(eb, &key, i);
++		struct btrfs_inode_item *inode_item;
+ 
+-		/* inode keys are done during the first stage */
+-		if (key.type == BTRFS_INODE_ITEM_KEY &&
+-		    wc->stage == LOG_WALK_REPLAY_INODES) {
+-			struct btrfs_inode_item *inode_item;
+-			u32 mode;
++		btrfs_item_key_to_cpu(eb, &key, i);
+ 
+-			inode_item = btrfs_item_ptr(eb, i,
+-					    struct btrfs_inode_item);
++		if (key.type == BTRFS_INODE_ITEM_KEY) {
++			inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
+ 			/*
+-			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
+-			 * and never got linked before the fsync, skip it, as
+-			 * replaying it is pointless since it would be deleted
+-			 * later. We skip logging tmpfiles, but it's always
+-			 * possible we are replaying a log created with a kernel
+-			 * that used to log tmpfiles.
++			 * An inode with no links is either:
++			 *
++			 * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
++			 *    got linked before the fsync, skip it, as replaying
++			 *    it is pointless since it would be deleted later.
++			 *    We skip logging tmpfiles, but it's always possible
++			 *    we are replaying a log created with a kernel that
++			 *    used to log tmpfiles;
++			 *
++			 * 2) A non-tmpfile which got its last link deleted
++			 *    while holding an open fd on it and later got
++			 *    fsynced through that fd. We always log the
++			 *    parent inodes when inode->last_unlink_trans is
++			 *    set to the current transaction, so ignore all the
++			 *    inode items for this inode. We will delete the
++			 *    inode when processing the parent directory with
++			 *    replay_dir_deletes().
+ 			 */
+ 			if (btrfs_inode_nlink(eb, inode_item) == 0) {
+ 				wc->ignore_cur_inode = true;
+@@ -2434,6 +2440,13 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 			} else {
+ 				wc->ignore_cur_inode = false;
+ 			}
++		}
++
++		/* Inode keys are done during the first stage. */
++		if (key.type == BTRFS_INODE_ITEM_KEY &&
++		    wc->stage == LOG_WALK_REPLAY_INODES) {
++			 u32 mode;
++
+ 			ret = replay_xattr_deletes(wc->trans, root, log,
+ 						   path, key.objectid);
+ 			if (ret)
+-- 
+GitLab
+
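
As a reading aid, the eight reproduction steps listed in the commit message correspond roughly to the following userspace sequence. This is a sketch only: error handling is omitted, the file names A and B follow the commit message, and the power failure of step 8 has to be injected externally (e.g. with dm-flakey, or by cutting power):

/* Illustrative reproducer; run on a btrfs mount. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	char buf[4096] = { 0 };

	int fd = open("A", O_CREAT | O_WRONLY, 0644);	/* 1) create file A   */
	write(fd, buf, sizeof(buf));			/* 2) write some data */
							/* 3) fd stays open   */
	unlink("A");					/* 4) unlink file A   */
	fsync(fd);					/* 5) fsync via fd    */

	int fd2 = open("B", O_CREAT | O_WRONLY, 0644);	/* 6) create file B,
							      higher inode no. */
	fsync(fd2);					/* 7) fsync file B    */

	/* 8) power fail here, before the current transaction commits */
	return 0;
}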


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-16  5:54 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-16  5:54 UTC (permalink / raw
  To: gentoo-commits

commit:     fdfbe5e683e8c920990b61f7f6a4cf5ea042cfd9
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 05:40:15 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 05:40:33 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fdfbe5e6

Add libbpf patch to suppress adding '-Werror' if WERROR=0

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                         |  4 ++++
 2991_libbpf_add_WERROR_option.patch | 47 +++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/0000_README b/0000_README
index 208a4856..eb7b0c95 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
 From:   https://lore.kernel.org/bpf/
 Desc:   libbpf: workaround -Wmaybe-uninitialized false positive
 
+Patch:  2991_libbpf_add_WERROR_option.patch
+From:   https://lore.kernel.org/bpf/
+Desc:   libbpf: suppress adding -Werror if WERROR=0
+
 Patch:  3000_Support-printing-firmware-info.patch
 From:   https://bugs.gentoo.org/732852
 Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev

diff --git a/2991_libbpf_add_WERROR_option.patch b/2991_libbpf_add_WERROR_option.patch
new file mode 100644
index 00000000..e8649909
--- /dev/null
+++ b/2991_libbpf_add_WERROR_option.patch
@@ -0,0 +1,47 @@
+Subject: [PATCH] tools/libbpf: add WERROR option
+Date: Sat,  5 Jul 2025 11:43:12 +0100
+Message-ID: <7e6c41e47c6a8ab73945e6aac319e0dd53337e1b.1751712192.git.sam@gentoo.org>
+X-Mailer: git-send-email 2.50.0
+Precedence: bulk
+X-Mailing-List: bpf@vger.kernel.org
+List-Id: <bpf.vger.kernel.org>
+List-Subscribe: <mailto:bpf+subscribe@vger.kernel.org>
+List-Unsubscribe: <mailto:bpf+unsubscribe@vger.kernel.org>
+MIME-Version: 1.0
+Content-Transfer-Encoding: 8bit
+
+Check the 'WERROR' variable and suppress adding '-Werror' if WERROR=0.
+
+This mirrors what tools/perf and other directories in tools do to handle
+-Werror rather than adding it unconditionally.
+
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+ tools/lib/bpf/Makefile | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 168140f8e646..9563d37265da 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -77,10 +77,15 @@ else
+   CFLAGS := -g -O2
+ endif
+ 
++# Treat warnings as errors unless directed not to
++ifneq ($(WERROR),0)
++  CFLAGS += -Werror
++endif
++
+ # Append required CFLAGS
+ override CFLAGS += -std=gnu89
+ override CFLAGS += $(EXTRA_WARNINGS) -Wno-switch-enum
+-override CFLAGS += -Werror -Wall
++override CFLAGS += -Wall
+ override CFLAGS += $(INCLUDES)
+ override CFLAGS += -fvisibility=hidden
+ override CFLAGS += -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
+-- 
+2.50.0
+
+


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-16  5:54 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-16  5:54 UTC (permalink / raw
  To: gentoo-commits

commit:     d3ddf27ad3ee96237fd459b25310341da8b077ea
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 05:45:29 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 05:45:29 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d3ddf27a

Add patch to make it possible to override the tar executable

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                      |  4 ++
 2910_kheaders_override_TAR.patch | 96 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)

diff --git a/0000_README b/0000_README
index eb7b0c95..77cb0499 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically
 
+Patch:  2910_kheaders_override_TAR.patch
+From:   https://lore.kernel.org/
+Desc:   Add TAR make variable to override the tar executable
+
 Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL

diff --git a/2910_kheaders_override_TAR.patch b/2910_kheaders_override_TAR.patch
new file mode 100644
index 00000000..e8511e1a
--- /dev/null
+++ b/2910_kheaders_override_TAR.patch
@@ -0,0 +1,96 @@
+Subject: [PATCH v3] kheaders: make it possible to override TAR
+Date: Sat, 19 Jul 2025 16:24:05 +0100
+Message-ID: <277557da458c5fa07eba7d785b4f527cc37a023f.1752938644.git.sam@gentoo.org>
+X-Mailer: git-send-email 2.50.1
+In-Reply-To: <20230412082743.350699-1-mgorny@gentoo.org>
+References: <20230412082743.350699-1-mgorny@gentoo.org>
+Precedence: bulk
+X-Mailing-List: linux-kernel@vger.kernel.org
+List-Id: <linux-kernel.vger.kernel.org>
+List-Subscribe: <mailto:linux-kernel+subscribe@vger.kernel.org>
+List-Unsubscribe: <mailto:linux-kernel+unsubscribe@vger.kernel.org>
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Michał Górny <mgorny@gentoo.org>
+
+Commit 86cdd2fdc4e39c388d39c7ba2396d1a9dfd66226 ("kheaders: make headers
+archive reproducible") introduced a number of options specific to GNU
+tar to the `tar` invocation in `gen_kheaders.sh` script.  This causes
+the script to fail to work on systems where `tar` is not GNU tar.  This
+can occur e.g. on recent Gentoo Linux installations that support using
+bsdtar from libarchive instead.
+
+Add a `TAR` make variable to make it possible to override the tar
+executable used, e.g. by specifying:
+
+  make TAR=gtar
+
+Link: https://bugs.gentoo.org/884061
+Reported-by: Sam James <sam@gentoo.org>
+Tested-by: Sam James <sam@gentoo.org>
+Co-developed-by: Masahiro Yamada <masahiroy@kernel.org>
+Signed-off-by: Michał Górny <mgorny@gentoo.org>
+Signed-off-by: Sam James <sam@gentoo.org>
+---
+v3: Rebase, cover more tar instances.
+
+ Makefile               | 3 ++-
+ kernel/gen_kheaders.sh | 6 +++---
+ 2 files changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/Makefile b/Makefile
+index c09766beb7eff..22d6037d738fe 100644
+--- a/Makefile
++++ b/Makefile
+@@ -543,6 +543,7 @@ LZMA		= lzma
+ LZ4		= lz4
+ XZ		= xz
+ ZSTD		= zstd
++TAR		= tar
+ 
+ CHECKFLAGS     := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
+ 		  -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF)
+@@ -622,7 +623,7 @@ export RUSTC RUSTDOC RUSTFMT RUSTC_OR_CLIPPY_QUIET RUSTC_OR_CLIPPY BINDGEN
+ export HOSTRUSTC KBUILD_HOSTRUSTFLAGS
+ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL
+ export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+-export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
++export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD TAR
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS KBUILD_PROCMACROLDFLAGS LDFLAGS_MODULE
+ export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
+ 
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index c9e5dc068e854..bb609a9ed72b4 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -66,13 +66,13 @@ if [ "$building_out_of_srctree" ]; then
+ 		cd $srctree
+ 		for f in $dir_list
+ 			do find "$f" -name "*.h";
+-		done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
++		done | ${TAR:-tar} -c -f - -T - | ${TAR:-tar} -xf - -C "${tmpdir}"
+ 	)
+ fi
+ 
+ for f in $dir_list;
+ 	do find "$f" -name "*.h";
+-done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
++done | ${TAR:-tar} -c -f - -T - | ${TAR:-tar} -xf - -C "${tmpdir}"
+ 
+ # Always exclude include/generated/utsversion.h
+ # Otherwise, the contents of the tarball may vary depending on the build steps.
+@@ -88,7 +88,7 @@ xargs -0 -P8 -n1 \
+ rm -f "${tmpdir}.contents.txt"
+ 
+ # Create archive and try to normalize metadata for reproducibility.
+-tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
++${TAR:-tar} "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
+     --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
+     -I $XZ -cf $tarfile -C "${tmpdir}/" . > /dev/null
+ 
+-- 
+2.50.1
+
+


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-21  0:27 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-21  0:27 UTC (permalink / raw
  To: gentoo-commits

commit:     110d64b0083ce8ab9ca1928905b4c27b21014981
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 00:26:44 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 00:26:44 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=110d64b0

Add updated CPU optimization patch for 6.16+

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                   |   4 +
 5010_enable-cpu-optimizations-universal.patch | 846 ++++++++++++++++++++++++++
 2 files changed, 850 insertions(+)

diff --git a/0000_README b/0000_README
index 77cb0499..8ee44525 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch:  5010_enable-cpu-optimizations-universal.patch
+From:   https://github.com/graysky2/kernel_compiler_patch
+Desc:   More ISA levels and uarches for kernel 6.16+
+
 Patch:  5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch
 From:   https://gitlab.com/alfredchen/projectc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
new file mode 100644
index 00000000..962a82a6
--- /dev/null
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -0,0 +1,846 @@
+From 6b1d270f55e3143bcb3ad914adf920774351a6b9 Mon Sep 17 00:00:00 2001
+From: graysky <therealgraysky AT proton DOT me>
+Date: Mon, 18 Aug 2025 04:14:48 -0400
+
+1. New generic x86-64 ISA levels
+
+These are selectable under:
+	Processor type and features ---> x86-64 compiler ISA level
+
+• x86-64     A value of (1) is the default
+• x86-64-v2  A value of (2) brings support for vector
+             instructions up to Streaming SIMD Extensions 4.2 (SSE4.2)
+	     and Supplemental Streaming SIMD Extensions 3 (SSSE3), the
+	     POPCNT instruction, and CMPXCHG16B.
+• x86-64-v3  A value of (3) adds vector instructions up to AVX2, MOVBE,
+             and additional bit-manipulation instructions.
+
+There is also x86-64-v4 but including this makes little sense as
+the kernel does not use any of the AVX512 instructions anyway.
+
+Users of glibc 2.33 and above can see which level is supported by running:
+	/lib/ld-linux-x86-64.so.2 --help | grep supported
+Or
+	/lib64/ld-linux-x86-64.so.2 --help | grep supported
+
+2. New micro-architectures
+
+These are selectable under:
+	Processor type and features ---> Processor family
+
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)**
+• AMD Family 19h (Zen 4)‡
+• AMD Family 1Ah (Zen 5)§
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 4th Gen 10nm++ Xeon (Sapphire Rapids)†
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)†
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)†
+• Intel 13th Gen i3/i5/i7/i9-family (Raptor Lake)‡
+• Intel 14th Gen i3/i5/i7/i9-family (Meteor Lake)‡
+• Intel 5th Gen 10nm++ Xeon (Emerald Rapids)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+       *Requires gcc >=10.1 or clang >=10.0
+      **Requires gcc >=10.3 or clang >=12.0
+       †Requires gcc >=11.1 or clang >=12.0
+       ‡Requires gcc >=13.0 or clang >=15.0.5
+       §Requires gcc >14.0 or clang >=19.1
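+
+A quick, illustrative way to check the local toolchain against these
+minimums:
+	gcc --version | head -n1
+	clang --version | head -n1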
+
+3. Auto-detected micro-architecture levels
+
+Compile by passing the '-march=native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[1]
+
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
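+
+To preview what 'native' resolves to on a given build host, one can ask
+the compiler directly (illustrative, GCC-specific invocation):
+	gcc -march=native -Q --help=target | grep -- '-march='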
+
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream uses the deprecated -march=atom flag when I
+believe it should use the newer -march=bonnell flag for Atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[3] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+endpoint that compares a generic kernel to one built with one of the
+respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_compiler_patch?tab=readme-ov-file#benchmarks
+
+REQUIREMENTS
+linux version 6.16+
+gcc version >=9.0 or clang version >=9.0
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[4]
+
+REFERENCES
+1.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+2.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+4.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+---
+ arch/x86/Kconfig.cpu | 427 ++++++++++++++++++++++++++++++++++++++++++-
+ arch/x86/Makefile    | 213 ++++++++++++++++++++-
+ 2 files changed, 631 insertions(+), 9 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index f928cf6e3252..2802936f2e62 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -31,6 +31,7 @@ choice
+ 	  - "Pentium-4" for the Intel Pentium 4 or P4-based Celeron.
+ 	  - "K6" for the AMD K6, K6-II and K6-III (aka K6-3D).
+ 	  - "Athlon" for the AMD K7 family (Athlon/Duron/Thunderbird).
++	  - "Opteron/Athlon64/Hammer/K8" for all K8 and newer AMD CPUs.
+ 	  - "Crusoe" for the Transmeta Crusoe series.
+ 	  - "Efficeon" for the Transmeta Efficeon series.
+ 	  - "Winchip-C6" for original IDT Winchip.
+@@ -41,7 +42,10 @@ choice
+ 	  - "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
+ 	  - "VIA C3-2" for VIA C3-2 "Nehemiah" (model 9 and above).
+ 	  - "VIA C7" for VIA C7.
++	  - "Intel P4" for the Pentium 4/Netburst microarchitecture.
++	  - "Core 2/newer Xeon" for all core2 and newer Intel CPUs.
+ 	  - "Intel Atom" for the Atom-microarchitecture CPUs.
++	  - "Generic-x86-64" for a kernel which runs on any x86-64 CPU.
+ 
+ 	  See each option's help text for additional details. If you don't know
+ 	  what to do, choose "Pentium-Pro".
+@@ -135,10 +139,21 @@ config MPENTIUM4
+ 		-Mobile Pentium 4
+ 		-Mobile Pentium 4 M
+ 		-Extreme Edition (Gallatin)
++		-Prescott
++		-Prescott 2M
++		-Cedar Mill
++		-Presler
++		-Smithfield
+ 	    Xeons (Intel Xeon, Xeon MP, Xeon LV, Xeon MV) corename:
+ 		-Foster
+ 		-Prestonia
+ 		-Gallatin
++		-Nocona
++		-Irwindale
++		-Cranford
++		-Potomac
++		-Paxville
++		-Dempsey
+ 
+ config MK6
+ 	bool "K6/K6-II/K6-III"
+@@ -281,6 +296,402 @@ config X86_GENERIC
+ 	  This is really intended for distributors who need more
+ 	  generic optimizations.
+ 
++choice
++	prompt "x86_64 Compiler Build Optimization"
++	depends on !X86_NATIVE_CPU
++	default GENERIC_CPU
++
++config GENERIC_CPU
++	bool "Generic-x86-64"
++	depends on X86_64
++	help
++	  Generic x86-64 CPU.
++	  Runs equally well on all x86-64 CPUs.
++
++config MK8
++	bool "AMD Opteron/Athlon64/Hammer/K8"
++	help
++	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	help
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	help
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	help
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	help
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	help
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	help
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	help
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	help
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	help
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Ryzen"
++	help
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
++config MZEN2
++	bool "AMD Ryzen 2"
++	help
++	  Select this for AMD Family 17h Zen 2 processors.
++
++	  Enables -march=znver2
++
++config MZEN3
++	bool "AMD Ryzen 3"
++	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++	  Select this for AMD Family 19h Zen 3 processors.
++
++	  Enables -march=znver3
++
++config MZEN4
++	bool "AMD Ryzen 4"
++	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 160000)
++	help
++	  Select this for AMD Family 19h Zen 4 processors.
++
++	  Enables -march=znver4
++
++config MZEN5
++	bool "AMD Ryzen 5"
++	depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 190100)
++	help
++	  Select this for AMD Family 19h Zen 5 processors.
++
++	  Enables -march=znver5
++
++config MPSC
++	bool "Intel P4 / older Netburst based Xeon"
++	depends on X86_64
++	help
++	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
++	  Xeon CPUs with Intel 64bit which is compatible with x86-64.
++	  Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
++	  Netburst core and shouldn't use this option. You can distinguish them
++	  using the cpu family field
++	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
++
++config MCORE2
++	bool "Intel Core 2"
++	depends on X86_64
++	help
++
++	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
++	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
++	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
++	  (not a typo)
++
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
++	depends on X86_64
++	help
++
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MGOLDMONT
++	bool "Intel Goldmont"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++	  Enables -march=goldmont
++
++config MGOLDMONTPLUS
++	bool "Intel Goldmont Plus"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++	  Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	depends on X86_64
++	help
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	depends on X86_64
++	help
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	depends on X86_64
++	help
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	depends on X86_64
++	help
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	depends on X86_64
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake-X (7th Gen Core i7/i9)"
++	depends on X86_64
++	help
++
++	  Select this for 7th Gen Core i7/i9 processors in the Skylake-X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Coffee Lake/Kaby Lake Refresh (8th Gen Core i3/i5/i7)"
++	depends on X86_64
++	help
++
++	  Select this for 8th Gen Core i3/i5/i7 processors in the Coffee Lake or Kaby Lake Refresh families.
++
++	  Enables -march=cannonlake
++
++config MICELAKE_CLIENT
++	bool "Intel Ice Lake"
++	depends on X86_64
++	help
++
++	  Select this for 10th Gen Core client processors in the Ice Lake family.
++
++	  Enables -march=icelake-client
++
++config MICELAKE_SERVER
++	bool "Intel Ice Lake-SP (3rd Gen Xeon Scalable)"
++	depends on X86_64
++	help
++
++	  Select this for 3rd Gen Xeon Scalable processors in the Ice Lake-SP family.
++
++	  Enables -march=icelake-server
++
++config MCOOPERLAKE
++	bool "Intel Cooper Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	help
++
++	  Select this for Xeon processors in the Cooper Lake family.
++
++	  Enables -march=cooperlake
++
++config MCASCADELAKE
++	bool "Intel Cascade Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	help
++
++	  Select this for Xeon processors in the Cascade Lake family.
++
++	  Enables -march=cascadelake
++
++config MTIGERLAKE
++	bool "Intel Tiger Lake"
++	depends on X86_64
++	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	help
++
++	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++	  Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++	bool "Intel Sapphire Rapids"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++
++	  Select this for fourth-generation 10 nm process processors in the Sapphire Rapids family.
++
++	  Enables -march=sapphirerapids
++
++config MROCKETLAKE
++	bool "Intel Rocket Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++
++	  Select this for eleventh-generation processors in the Rocket Lake family.
++
++	  Enables -march=rocketlake
++
++config MALDERLAKE
++	bool "Intel Alder Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++
++	  Select this for twelfth-generation processors in the Alder Lake family.
++
++	  Enables -march=alderlake
++
++config MRAPTORLAKE
++	bool "Intel Raptor Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++	help
++
++	  Select this for thirteenth-generation processors in the Raptor Lake family.
++
++	  Enables -march=raptorlake
++
++config MMETEORLAKE
++	bool "Intel Meteor Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++	help
++
++	  Select this for fourteenth-generation processors in the Meteor Lake family.
++
++	  Enables -march=meteorlake
++
++config MEMERALDRAPIDS
++	bool "Intel Emerald Rapids"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++	help
++
++	  Select this for fifth-generation Xeon Scalable processors in the Emerald Rapids family.
++
++	  Enables -march=emeraldrapids
++
++config MDIAMONDRAPIDS
++	bool "Intel Diamond Rapids (7th Gen Xeon Scalable)"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 150000) || (CC_IS_CLANG && CLANG_VERSION >= 200000)
++	help
++
++	  Select this for seventh-generation Xeon Scalable processors in the Diamond Rapids family.
++
++	  Enables -march=diamondrapids
++
++endchoice
++
++config X86_64_VERSION
++	int "x86-64 compiler ISA level"
++	range 1 3
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	depends on X86_64 && GENERIC_CPU
++	help
++	  Select a specific x86-64 compiler ISA level.
++
++	  There are two x86-64 ISA levels that work on top of
++	  the x86-64 baseline, namely x86-64-v2 and x86-64-v3.
++
++	  x86-64-v2 brings support for vector instructions up to Streaming SIMD
++	  Extensions 4.2 (SSE4.2) and Supplemental Streaming SIMD Extensions 3
++	  (SSSE3), the POPCNT instruction, and CMPXCHG16B.
++
++	  x86-64-v3 adds vector instructions up to AVX2, MOVBE, and additional
++	  bit-manipulation instructions.
++
++	  x86-64-v4 is not included since the kernel does not use AVX512 instructions.
++
++	  You can find the best version for your CPU by running one of the following:
++	  /lib/ld-linux-x86-64.so.2 --help | grep supported
++	  /lib64/ld-linux-x86-64.so.2 --help | grep supported
++
+ #
+ # Define implied options from the CPU selection here
+ config X86_INTERNODE_CACHE_SHIFT
+@@ -290,8 +701,8 @@ config X86_INTERNODE_CACHE_SHIFT
+ 
+ config X86_L1_CACHE_SHIFT
+ 	int
+-	default "7" if MPENTIUM4
+-	default "6" if MK7 || MPENTIUMM || MATOM || MVIAC7 || X86_GENERIC || X86_64
++	default "7" if MPENTIUM4 || MPSC
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS || X86_NATIVE_CPU
+ 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -309,19 +720,19 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK7 || MEFFICEON
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
+ 
+ config X86_HAVE_PAE
+ 	def_bool y
+-	depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC7 || MATOM || X86_64
++	depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC7 || MCORE2 || MATOM || X86_64
+ 
+ config X86_CX8
+ 	def_bool y
+@@ -331,12 +742,12 @@ config X86_CX8
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || MATOM || MGEODE_LX || X86_64)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MK7)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
+ 	default "5" if X86_32 && X86_CX8
+ 	default "4"
+ 
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 1913d342969b..6c165daccb3d 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -177,10 +177,221 @@ ifdef CONFIG_X86_NATIVE_CPU
+         KBUILD_CFLAGS += -march=native
+         KBUILD_RUSTFLAGS += -Ctarget-cpu=native
+ else
++ifdef CONFIG_MK8
++        KBUILD_CFLAGS += -march=k8
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=k8
++endif
++
++ifdef CONFIG_MK8SSE3
++        KBUILD_CFLAGS += -march=k8-sse3
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=k8-sse3
++endif
++
++ifdef CONFIG_MK10
++        KBUILD_CFLAGS += -march=amdfam10
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=amdfam10
++endif
++
++ifdef CONFIG_MBARCELONA
++        KBUILD_CFLAGS += -march=barcelona
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=barcelona
++endif
++
++ifdef CONFIG_MBOBCAT
++        KBUILD_CFLAGS += -march=btver1
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=btver1
++endif
++
++ifdef CONFIG_MJAGUAR
++        KBUILD_CFLAGS += -march=btver2
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=btver2
++endif
++
++ifdef CONFIG_MBULLDOZER
++        KBUILD_CFLAGS += -march=bdver1
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver1
++endif
++
++ifdef CONFIG_MPILEDRIVER
++        KBUILD_CFLAGS += -march=bdver2 -mno-tbm
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver2 -mno-tbm
++endif
++
++ifdef CONFIG_MSTEAMROLLER
++        KBUILD_CFLAGS += -march=bdver3 -mno-tbm
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver3 -mno-tbm
++endif
++
++ifdef CONFIG_MEXCAVATOR
++        KBUILD_CFLAGS += -march=bdver4 -mno-tbm
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver4 -mno-tbm
++endif
++
++ifdef CONFIG_MZEN
++        KBUILD_CFLAGS += -march=znver1
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver1
++endif
++
++ifdef CONFIG_MZEN2
++        KBUILD_CFLAGS += -march=znver2
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver2
++endif
++
++ifdef CONFIG_MZEN3
++        KBUILD_CFLAGS += -march=znver3
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver3
++endif
++
++ifdef CONFIG_MZEN4
++        KBUILD_CFLAGS += -march=znver4
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver4
++endif
++
++ifdef CONFIG_MZEN5
++        KBUILD_CFLAGS += -march=znver5
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver5
++endif
++
++ifdef CONFIG_MPSC
++        KBUILD_CFLAGS += -march=nocona
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=nocona
++endif
++
++ifdef CONFIG_MCORE2
++        KBUILD_CFLAGS += -march=core2
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=core2
++endif
++
++ifdef CONFIG_MNEHALEM
++        KBUILD_CFLAGS += -march=nehalem
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=nehalem
++endif
++
++ifdef CONFIG_MWESTMERE
++        KBUILD_CFLAGS += -march=westmere
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=westmere
++endif
++
++ifdef CONFIG_MSILVERMONT
++        KBUILD_CFLAGS += -march=silvermont
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=silvermont
++endif
++
++ifdef CONFIG_MGOLDMONT
++        KBUILD_CFLAGS += -march=goldmont
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=goldmont
++endif
++
++ifdef CONFIG_MGOLDMONTPLUS
++        KBUILD_CFLAGS += -march=goldmont-plus
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=goldmont-plus
++endif
++
++ifdef CONFIG_MSANDYBRIDGE
++        KBUILD_CFLAGS += -march=sandybridge
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=sandybridge
++endif
++
++ifdef CONFIG_MIVYBRIDGE
++        KBUILD_CFLAGS += -march=ivybridge
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=ivybridge
++endif
++
++ifdef CONFIG_MHASWELL
++        KBUILD_CFLAGS += -march=haswell
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=haswell
++endif
++
++ifdef CONFIG_MBROADWELL
++        KBUILD_CFLAGS += -march=broadwell
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=broadwell
++endif
++
++ifdef CONFIG_MSKYLAKE
++        KBUILD_CFLAGS += -march=skylake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=skylake
++endif
++
++ifdef CONFIG_MSKYLAKEX
++        KBUILD_CFLAGS += -march=skylake-avx512
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=skylake-avx512
++endif
++
++ifdef CONFIG_MCANNONLAKE
++        KBUILD_CFLAGS += -march=cannonlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=cannonlake
++endif
++
++ifdef CONFIG_MICELAKE_CLIENT
++        KBUILD_CFLAGS += -march=icelake-client
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=icelake-client
++endif
++
++ifdef CONFIG_MICELAKE_SERVER
++        KBUILD_CFLAGS += -march=icelake-server
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=icelake-server
++endif
++
++ifdef CONFIG_MCOOPERLAKE
++        KBUILD_CFLAGS += -march=cooperlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=cooperlake
++endif
++
++ifdef CONFIG_MCASCADELAKE
++        KBUILD_CFLAGS += -march=cascadelake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=cascadelake
++endif
++
++ifdef CONFIG_MTIGERLAKE
++        KBUILD_CFLAGS += -march=tigerlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=tigerlake
++endif
++
++ifdef CONFIG_MSAPPHIRERAPIDS
++        KBUILD_CFLAGS += -march=sapphirerapids
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=sapphirerapids
++endif
++
++ifdef CONFIG_MROCKETLAKE
++        KBUILD_CFLAGS += -march=rocketlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=rocketlake
++endif
++
++ifdef CONFIG_MALDERLAKE
++        KBUILD_CFLAGS += -march=alderlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=alderlake
++endif
++
++ifdef CONFIG_MRAPTORLAKE
++        KBUILD_CFLAGS += -march=raptorlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=raptorlake
++endif
++
++ifdef CONFIG_MMETEORLAKE
++        KBUILD_CFLAGS += -march=meteorlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=meteorlake
++endif
++
++ifdef CONFIG_MEMERALDRAPIDS
++        KBUILD_CFLAGS += -march=emeraldrapids
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=emeraldrapids
++endif
++
++ifdef CONFIG_MDIAMONDRAPIDS
++        KBUILD_CFLAGS += -march=diamondrapids
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=diamondrapids
++endif
++
++ifdef CONFIG_GENERIC_CPU
++ifeq ($(CONFIG_X86_64_VERSION),1)
+         KBUILD_CFLAGS += -march=x86-64 -mtune=generic
+         KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64 -Ztune-cpu=generic
++else
++        KBUILD_CFLAGS +=-march=x86-64-v$(CONFIG_X86_64_VERSION)
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64-v$(CONFIG_X86_64_VERSION)
++endif
++endif
+ endif
+-
+         KBUILD_CFLAGS += -mno-red-zone
+         KBUILD_CFLAGS += -mcmodel=kernel
+         KBUILD_RUSTFLAGS += -Cno-redzone=y
+-- 
+2.50.1
+


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-21  1:00 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-21  1:00 UTC (permalink / raw
  To: gentoo-commits

commit:     26563a34c895bf0923dee40f0bc4cfddbd1a6d1d
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 00:56:46 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 00:59:54 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=26563a34

Revert "Add Update CPU Optimization patch for 6.16+"

This reverts commit 110d64b0083ce8ab9ca1928905b4c27b21014981.

"cc1: error: bad value (‘x86-64-v’) for ‘-march=’ switch"

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                   |   4 -
 5010_enable-cpu-optimizations-universal.patch | 846 --------------------------
 2 files changed, 850 deletions(-)

diff --git a/0000_README b/0000_README
index 8ee44525..77cb0499 100644
--- a/0000_README
+++ b/0000_README
@@ -95,10 +95,6 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
-Patch:  5010_enable-cpu-optimizations-universal.patch
-From:   https://github.com/graysky2/kernel_compiler_patch
-Desc:   More ISA levels and uarches for kernel 6.16+
-
 Patch:  5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch
 From:   https://gitlab.com/alfredchen/projectc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
deleted file mode 100644
index 962a82a6..00000000
--- a/5010_enable-cpu-optimizations-universal.patch
+++ /dev/null
@@ -1,846 +0,0 @@
-From 6b1d270f55e3143bcb3ad914adf920774351a6b9 Mon Sep 17 00:00:00 2001
-From: graysky <therealgraysky AT proton DOT me>
-Date: Mon, 18 Aug 2025 04:14:48 -0400
-
-1. New generic x86-64 ISA levels
-
-These are selectable under:
-	Processor type and features ---> x86-64 compiler ISA level
-
-• x86-64     A value of (1) is the default
-• x86-64-v2  A value of (2) brings support for vector
-             instructions up to Streaming SIMD Extensions 4.2 (SSE4.2)
-	     and Supplemental Streaming SIMD Extensions 3 (SSSE3), the
-	     POPCNT instruction, and CMPXCHG16B.
-• x86-64-v3  A value of (3) adds vector instructions up to AVX2, MOVBE,
-             and additional bit-manipulation instructions.
-
-There is also x86-64-v4 but including this makes little sense as
-the kernel does not use any of the AVX512 instructions anyway.
-
-Users of glibc 2.33 and above can see which level is supported by running:
-	/lib/ld-linux-x86-64.so.2 --help | grep supported
-Or
-	/lib64/ld-linux-x86-64.so.2 --help | grep supported
-
-2. New micro-architectures
-
-These are selectable under:
-	Processor type and features ---> Processor family
-
-• AMD Improved K8-family
-• AMD K10-family
-• AMD Family 10h (Barcelona)
-• AMD Family 14h (Bobcat)
-• AMD Family 16h (Jaguar)
-• AMD Family 15h (Bulldozer)
-• AMD Family 15h (Piledriver)
-• AMD Family 15h (Steamroller)
-• AMD Family 15h (Excavator)
-• AMD Family 17h (Zen)
-• AMD Family 17h (Zen 2)
-• AMD Family 19h (Zen 3)**
-• AMD Family 19h (Zen 4)‡
-• AMD Family 1Ah (Zen 5)§
-• Intel Silvermont low-power processors
-• Intel Goldmont low-power processors (Apollo Lake and Denverton)
-• Intel Goldmont Plus low-power processors (Gemini Lake)
-• Intel 1st Gen Core i3/i5/i7 (Nehalem)
-• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
-• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
-• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
-• Intel 4th Gen Core i3/i5/i7 (Haswell)
-• Intel 5th Gen Core i3/i5/i7 (Broadwell)
-• Intel 6th Gen Core i3/i5/i7 (Skylake)
-• Intel 6th Gen Core i7/i9 (Skylake X)
-• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
-• Intel 10th Gen Core i7/i9 (Ice Lake)
-• Intel Xeon (Cascade Lake)
-• Intel Xeon (Cooper Lake)*
-• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
-• Intel 4th Gen 10nm++ Xeon (Sapphire Rapids)†
-• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)†
-• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)†
-• Intel 13th Gen i3/i5/i7/i9-family (Raptor Lake)‡
-• Intel 14th Gen i3/i5/i7/i9-family (Meteor Lake)‡
-• Intel 5th Gen 10nm++ Xeon (Emerald Rapids)‡
-
-Notes: If not otherwise noted, gcc >=9.1 is required for support.
-       *Requires gcc >=10.1 or clang >=10.0
-      **Requires gcc >=10.3 or clang >=12.0
-       †Requires gcc >=11.1 or clang >=12.0
-       ‡Requires gcc >=13.0 or clang >=15.0.5
-       §Requires gcc >14.0 or clang >=19.1
-
-3. Auto-detected micro-architecture levels
-
-Compile by passing the '-march=native' option which, "selects the CPU
-to generate code for at compilation time by determining the processor type of
-the compiling machine. Using -march=native enables all instruction subsets
-supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[1]
-
-Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
-CPUs should select the 'AMD-Native' option.
-
-MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
-This patch also changes -march=atom to -march=bonnell in accordance with the
-gcc v4.9 changes. Upstream uses the deprecated -march=atom flag when I
-believe it should use the newer -march=bonnell flag for Atom processors.[2]
-
-It is not recommended to compile on Atom-CPUs with the 'native' option.[3] The
-recommendation is to use the 'atom' option instead.
-
-BENEFITS
-Small but real speed increases are measurable using a make-based benchmark
-endpoint that compares a generic kernel to one built with one of the
-respective microarchs.
-
-See the following experimental evidence supporting this statement:
-https://github.com/graysky2/kernel_compiler_patch?tab=readme-ov-file#benchmarks
-
-REQUIREMENTS
-linux version 6.16+
-gcc version >=9.0 or clang version >=9.0
-
-ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[4]
-
-REFERENCES
-1.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
-2.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
-3.  https://github.com/graysky2/kernel_gcc_patch/issues/15
-4.  http://www.linuxforge.net/docs/linux/linux-gcc.php
-
----
- arch/x86/Kconfig.cpu | 427 ++++++++++++++++++++++++++++++++++++++++++-
- arch/x86/Makefile    | 213 ++++++++++++++++++++-
- 2 files changed, 631 insertions(+), 9 deletions(-)
-
-diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index f928cf6e3252..2802936f2e62 100644
---- a/arch/x86/Kconfig.cpu
-+++ b/arch/x86/Kconfig.cpu
-@@ -31,6 +31,7 @@ choice
- 	  - "Pentium-4" for the Intel Pentium 4 or P4-based Celeron.
- 	  - "K6" for the AMD K6, K6-II and K6-III (aka K6-3D).
- 	  - "Athlon" for the AMD K7 family (Athlon/Duron/Thunderbird).
-+	  - "Opteron/Athlon64/Hammer/K8" for all K8 and newer AMD CPUs.
- 	  - "Crusoe" for the Transmeta Crusoe series.
- 	  - "Efficeon" for the Transmeta Efficeon series.
- 	  - "Winchip-C6" for original IDT Winchip.
-@@ -41,7 +42,10 @@ choice
- 	  - "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
- 	  - "VIA C3-2" for VIA C3-2 "Nehemiah" (model 9 and above).
- 	  - "VIA C7" for VIA C7.
-+	  - "Intel P4" for the Pentium 4/Netburst microarchitecture.
-+	  - "Core 2/newer Xeon" for all core2 and newer Intel CPUs.
- 	  - "Intel Atom" for the Atom-microarchitecture CPUs.
-+	  - "Generic-x86-64" for a kernel which runs on any x86-64 CPU.
- 
- 	  See each option's help text for additional details. If you don't know
- 	  what to do, choose "Pentium-Pro".
-@@ -135,10 +139,21 @@ config MPENTIUM4
- 		-Mobile Pentium 4
- 		-Mobile Pentium 4 M
- 		-Extreme Edition (Gallatin)
-+		-Prescott
-+		-Prescott 2M
-+		-Cedar Mill
-+		-Presler
-+		-Smithfield
- 	    Xeons (Intel Xeon, Xeon MP, Xeon LV, Xeon MV) corename:
- 		-Foster
- 		-Prestonia
- 		-Gallatin
-+		-Nocona
-+		-Irwindale
-+		-Cranford
-+		-Potomac
-+		-Paxville
-+		-Dempsey
- 
- config MK6
- 	bool "K6/K6-II/K6-III"
-@@ -281,6 +296,402 @@ config X86_GENERIC
- 	  This is really intended for distributors who need more
- 	  generic optimizations.
- 
-+choice
-+	prompt "x86_64 Compiler Build Optimization"
-+	depends on !X86_NATIVE_CPU
-+	default GENERIC_CPU
-+
-+config GENERIC_CPU
-+	bool "Generic-x86-64"
-+	depends on X86_64
-+	help
-+	  Generic x86-64 CPU.
-+	  Runs equally well on all x86-64 CPUs.
-+
-+config MK8
-+	bool "AMD Opteron/Athlon64/Hammer/K8"
-+	help
-+	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MK8SSE3
-+	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
-+	help
-+	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MK10
-+	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
-+	help
-+	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
-+	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
-+	  Enables use of some extended instructions, and passes appropriate
-+	  optimization flags to GCC.
-+
-+config MBARCELONA
-+	bool "AMD Barcelona"
-+	help
-+	  Select this for AMD Family 10h Barcelona processors.
-+
-+	  Enables -march=barcelona
-+
-+config MBOBCAT
-+	bool "AMD Bobcat"
-+	help
-+	  Select this for AMD Family 14h Bobcat processors.
-+
-+	  Enables -march=btver1
-+
-+config MJAGUAR
-+	bool "AMD Jaguar"
-+	help
-+	  Select this for AMD Family 16h Jaguar processors.
-+
-+	  Enables -march=btver2
-+
-+config MBULLDOZER
-+	bool "AMD Bulldozer"
-+	help
-+	  Select this for AMD Family 15h Bulldozer processors.
-+
-+	  Enables -march=bdver1
-+
-+config MPILEDRIVER
-+	bool "AMD Piledriver"
-+	help
-+	  Select this for AMD Family 15h Piledriver processors.
-+
-+	  Enables -march=bdver2
-+
-+config MSTEAMROLLER
-+	bool "AMD Steamroller"
-+	help
-+	  Select this for AMD Family 15h Steamroller processors.
-+
-+	  Enables -march=bdver3
-+
-+config MEXCAVATOR
-+	bool "AMD Excavator"
-+	help
-+	  Select this for AMD Family 15h Excavator processors.
-+
-+	  Enables -march=bdver4
-+
-+config MZEN
-+	bool "AMD Ryzen"
-+	help
-+	  Select this for AMD Family 17h Zen processors.
-+
-+	  Enables -march=znver1
-+
-+config MZEN2
-+	bool "AMD Ryzen 2"
-+	help
-+	  Select this for AMD Family 17h Zen 2 processors.
-+
-+	  Enables -march=znver2
-+
-+config MZEN3
-+	bool "AMD Ryzen 3"
-+	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	help
-+	  Select this for AMD Family 19h Zen 3 processors.
-+
-+	  Enables -march=znver3
-+
-+config MZEN4
-+	bool "AMD Ryzen 4"
-+	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 160000)
-+	help
-+	  Select this for AMD Family 19h Zen 4 processors.
-+
-+	  Enables -march=znver4
-+
-+config MZEN5
-+	bool "AMD Ryzen 5"
-+	depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 190100)
-+	help
-+	  Select this for AMD Family 19h Zen 5 processors.
-+
-+	  Enables -march=znver5
-+
-+config MPSC
-+	bool "Intel P4 / older Netburst based Xeon"
-+	depends on X86_64
-+	help
-+	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
-+	  Xeon CPUs with Intel 64bit which is compatible with x86-64.
-+	  Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
-+	  Netburst core and shouldn't use this option. You can distinguish them
-+	  using the cpu family field
-+	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-+
-+config MCORE2
-+	bool "Intel Core 2"
-+	depends on X86_64
-+	help
-+
-+	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-+	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
-+	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
-+	  (not a typo)
-+
-+	  Enables -march=core2
-+
-+config MNEHALEM
-+	bool "Intel Nehalem"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 1st Gen Core processors in the Nehalem family.
-+
-+	  Enables -march=nehalem
-+
-+config MWESTMERE
-+	bool "Intel Westmere"
-+	depends on X86_64
-+	help
-+
-+	  Select this for the Intel Westmere formerly Nehalem-C family.
-+
-+	  Enables -march=westmere
-+
-+config MSILVERMONT
-+	bool "Intel Silvermont"
-+	depends on X86_64
-+	help
-+
-+	  Select this for the Intel Silvermont platform.
-+
-+	  Enables -march=silvermont
-+
-+config MGOLDMONT
-+	bool "Intel Goldmont"
-+	depends on X86_64
-+	help
-+
-+	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
-+
-+	  Enables -march=goldmont
-+
-+config MGOLDMONTPLUS
-+	bool "Intel Goldmont Plus"
-+	depends on X86_64
-+	help
-+
-+	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
-+
-+	  Enables -march=goldmont-plus
-+
-+config MSANDYBRIDGE
-+	bool "Intel Sandy Bridge"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
-+
-+	  Enables -march=sandybridge
-+
-+config MIVYBRIDGE
-+	bool "Intel Ivy Bridge"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
-+
-+	  Enables -march=ivybridge
-+
-+config MHASWELL
-+	bool "Intel Haswell"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 4th Gen Core processors in the Haswell family.
-+
-+	  Enables -march=haswell
-+
-+config MBROADWELL
-+	bool "Intel Broadwell"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 5th Gen Core processors in the Broadwell family.
-+
-+	  Enables -march=broadwell
-+
-+config MSKYLAKE
-+	bool "Intel Skylake"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 6th Gen Core processors in the Skylake family.
-+
-+	  Enables -march=skylake
-+
-+config MSKYLAKEX
-+	bool "Intel Skylake-X (7th Gen Core i7/i9)"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 7th Gen Core i7/i9 processors in the Skylake-X family.
-+
-+	  Enables -march=skylake-avx512
-+
-+config MCANNONLAKE
-+	bool "Intel Coffee Lake/Kaby Lake Refresh (8th Gen Core i3/i5/i7)"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 8th Gen Core i3/i5/i7 processors in the Coffee Lake or Kaby Lake Refresh families.
-+
-+	  Enables -march=cannonlake
-+
-+config MICELAKE_CLIENT
-+	bool "Intel Ice Lake"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 10th Gen Core client processors in the Ice Lake family.
-+
-+	  Enables -march=icelake-client
-+
-+config MICELAKE_SERVER
-+	bool "Intel Ice Lake-SP (3rd Gen Xeon Scalable)"
-+	depends on X86_64
-+	help
-+
-+	  Select this for 3rd Gen Xeon Scalable processors in the Ice Lake-SP family.
-+
-+	  Enables -march=icelake-server
-+
-+config MCOOPERLAKE
-+	bool "Intel Cooper Lake"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+	help
-+
-+	  Select this for Xeon processors in the Cooper Lake family.
-+
-+	  Enables -march=cooperlake
-+
-+config MCASCADELAKE
-+	bool "Intel Cascade Lake"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+	help
-+
-+	  Select this for Xeon processors in the Cascade Lake family.
-+
-+	  Enables -march=cascadelake
-+
-+config MTIGERLAKE
-+	bool "Intel Tiger Lake"
-+	depends on X86_64
-+	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
-+	help
-+
-+	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
-+
-+	  Enables -march=tigerlake
-+
-+config MSAPPHIRERAPIDS
-+	bool "Intel Sapphire Rapids"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	help
-+
-+	  Select this for fourth-generation 10 nm process processors in the Sapphire Rapids family.
-+
-+	  Enables -march=sapphirerapids
-+
-+config MROCKETLAKE
-+	bool "Intel Rocket Lake"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	help
-+
-+	  Select this for eleventh-generation processors in the Rocket Lake family.
-+
-+	  Enables -march=rocketlake
-+
-+config MALDERLAKE
-+	bool "Intel Alder Lake"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	help
-+
-+	  Select this for twelfth-generation processors in the Alder Lake family.
-+
-+	  Enables -march=alderlake
-+
-+config MRAPTORLAKE
-+	bool "Intel Raptor Lake"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
-+	help
-+
-+	  Select this for thirteenth-generation processors in the Raptor Lake family.
-+
-+	  Enables -march=raptorlake
-+
-+config MMETEORLAKE
-+	bool "Intel Meteor Lake"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
-+	help
-+
-+	  Select this for fourteenth-generation processors in the Meteor Lake family.
-+
-+	  Enables -march=meteorlake
-+
-+config MEMERALDRAPIDS
-+	bool "Intel Emerald Rapids"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
-+	help
-+
-+	  Select this for fifth-generation Xeon Scalable processors in the Emerald Rapids family.
-+
-+	  Enables -march=emeraldrapids
-+
-+config MDIAMONDRAPIDS
-+	bool "Intel Diamond Rapids (7th Gen Xeon Scalable)"
-+	depends on X86_64
-+	depends on (CC_IS_GCC && GCC_VERSION > 150000) || (CC_IS_CLANG && CLANG_VERSION >= 200000)
-+	help
-+
-+	  Select this for seventh-generation Xeon Scalable processors in the Diamond Rapids family.
-+
-+	  Enables -march=diamondrapids
-+
-+endchoice
-+
-+config X86_64_VERSION
-+	int "x86-64 compiler ISA level"
-+	range 1 3
-+	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
-+	depends on X86_64 && GENERIC_CPU
-+	help
-+	  Select a specific x86-64 compiler ISA level.
-+
-+	  There are two x86-64 ISA levels that work on top of
-+	  the x86-64 baseline, namely x86-64-v2 and x86-64-v3.
-+
-+	  x86-64-v2 brings support for vector instructions up to Streaming SIMD
-+	  Extensions 4.2 (SSE4.2) and Supplemental Streaming SIMD Extensions 3
-+	  (SSSE3), the POPCNT instruction, and CMPXCHG16B.
-+
-+	  x86-64-v3 adds vector instructions up to AVX2, MOVBE, and additional
-+	  bit-manipulation instructions.
-+
-+	  x86-64-v4 is not included since the kernel does not use AVX512 instructions.
-+
-+	  You can find the best version for your CPU by running one of the following:
-+	  /lib/ld-linux-x86-64.so.2 --help | grep supported
-+	  /lib64/ld-linux-x86-64.so.2 --help | grep supported
-+
- #
- # Define implied options from the CPU selection here
- config X86_INTERNODE_CACHE_SHIFT
-@@ -290,8 +701,8 @@ config X86_INTERNODE_CACHE_SHIFT
- 
- config X86_L1_CACHE_SHIFT
- 	int
--	default "7" if MPENTIUM4
--	default "6" if MK7 || MPENTIUMM || MATOM || MVIAC7 || X86_GENERIC || X86_64
-+	default "7" if MPENTIUM4 || MPSC
-+	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS || X86_NATIVE_CPU
- 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
- 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
-@@ -309,19 +720,19 @@ config X86_ALIGNMENT_16
- 
- config X86_INTEL_USERCOPY
- 	def_bool y
--	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK7 || MEFFICEON
-+	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS
- 
- config X86_USE_PPRO_CHECKSUM
- 	def_bool y
--	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MATOM
-+	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS
- 
- config X86_TSC
- 	def_bool y
--	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MATOM) || X86_64
-+	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
- 
- config X86_HAVE_PAE
- 	def_bool y
--	depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC7 || MATOM || X86_64
-+	depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC7 || MCORE2 || MATOM || X86_64
- 
- config X86_CX8
- 	def_bool y
-@@ -331,12 +742,12 @@ config X86_CX8
- # generates cmov.
- config X86_CMOV
- 	def_bool y
--	depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || MATOM || MGEODE_LX || X86_64)
-+	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
- 
- config X86_MINIMUM_CPU_FAMILY
- 	int
- 	default "64" if X86_64
--	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MK7)
-+	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
- 	default "5" if X86_32 && X86_CX8
- 	default "4"
- 
-diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 1913d342969b..6c165daccb3d 100644
---- a/arch/x86/Makefile
-+++ b/arch/x86/Makefile
-@@ -177,10 +177,221 @@ ifdef CONFIG_X86_NATIVE_CPU
-         KBUILD_CFLAGS += -march=native
-         KBUILD_RUSTFLAGS += -Ctarget-cpu=native
- else
-+ifdef CONFIG_MK8
-+        KBUILD_CFLAGS += -march=k8
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=k8
-+endif
-+
-+ifdef CONFIG_MK8SSE3
-+        KBUILD_CFLAGS += -march=k8-sse3
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=k8-sse3
-+endif
-+
-+ifdef CONFIG_MK10
-+        KBUILD_CFLAGS += -march=amdfam10
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=amdfam10
-+endif
-+
-+ifdef CONFIG_MBARCELONA
-+        KBUILD_CFLAGS += -march=barcelona
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=barcelona
-+endif
-+
-+ifdef CONFIG_MBOBCAT
-+        KBUILD_CFLAGS += -march=btver1
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=btver1
-+endif
-+
-+ifdef CONFIG_MJAGUAR
-+        KBUILD_CFLAGS += -march=btver2
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=btver2
-+endif
-+
-+ifdef CONFIG_MBULLDOZER
-+        KBUILD_CFLAGS += -march=bdver1
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver1
-+endif
-+
-+ifdef CONFIG_MPILEDRIVER
-+        KBUILD_CFLAGS += -march=bdver2 -mno-tbm
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver2 -mno-tbm
-+endif
-+
-+ifdef CONFIG_MSTEAMROLLER
-+        KBUILD_CFLAGS += -march=bdver3 -mno-tbm
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver3 -mno-tbm
-+endif
-+
-+ifdef CONFIG_MEXCAVATOR
-+        KBUILD_CFLAGS += -march=bdver4 -mno-tbm
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver4 -mno-tbm
-+endif
-+
-+ifdef CONFIG_MZEN
-+        KBUILD_CFLAGS += -march=znver1
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver1
-+endif
-+
-+ifdef CONFIG_MZEN2
-+        KBUILD_CFLAGS += -march=znver2
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver2
-+endif
-+
-+ifdef CONFIG_MZEN3
-+        KBUILD_CFLAGS += -march=znver3
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver3
-+endif
-+
-+ifdef CONFIG_MZEN4
-+        KBUILD_CFLAGS += -march=znver4
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver4
-+endif
-+
-+ifdef CONFIG_MZEN5
-+        KBUILD_CFLAGS += -march=znver5
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver5
-+endif
-+
-+ifdef CONFIG_MPSC
-+        KBUILD_CFLAGS += -march=nocona
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=nocona
-+endif
-+
-+ifdef CONFIG_MCORE2
-+        KBUILD_CFLAGS += -march=core2
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=core2
-+endif
-+
-+ifdef CONFIG_MNEHALEM
-+        KBUILD_CFLAGS += -march=nehalem
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=nehalem
-+endif
-+
-+ifdef CONFIG_MWESTMERE
-+        KBUILD_CFLAGS += -march=westmere
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=westmere
-+endif
-+
-+ifdef CONFIG_MSILVERMONT
-+        KBUILD_CFLAGS += -march=silvermont
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=silvermont
-+endif
-+
-+ifdef CONFIG_MGOLDMONT
-+        KBUILD_CFLAGS += -march=goldmont
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=goldmont
-+endif
-+
-+ifdef CONFIG_MGOLDMONTPLUS
-+        KBUILD_CFLAGS += -march=goldmont-plus
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=goldmont-plus
-+endif
-+
-+ifdef CONFIG_MSANDYBRIDGE
-+        KBUILD_CFLAGS += -march=sandybridge
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=sandybridge
-+endif
-+
-+ifdef CONFIG_MIVYBRIDGE
-+        KBUILD_CFLAGS += -march=ivybridge
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=ivybridge
-+endif
-+
-+ifdef CONFIG_MHASWELL
-+        KBUILD_CFLAGS += -march=haswell
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=haswell
-+endif
-+
-+ifdef CONFIG_MBROADWELL
-+        KBUILD_CFLAGS += -march=broadwell
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=broadwell
-+endif
-+
-+ifdef CONFIG_MSKYLAKE
-+        KBUILD_CFLAGS += -march=skylake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=skylake
-+endif
-+
-+ifdef CONFIG_MSKYLAKEX
-+        KBUILD_CFLAGS += -march=skylake-avx512
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=skylake-avx512
-+endif
-+
-+ifdef CONFIG_MCANNONLAKE
-+        KBUILD_CFLAGS += -march=cannonlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=cannonlake
-+endif
-+
-+ifdef CONFIG_MICELAKE_CLIENT
-+        KBUILD_CFLAGS += -march=icelake-client
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=icelake-client
-+endif
-+
-+ifdef CONFIG_MICELAKE_SERVER
-+        KBUILD_CFLAGS += -march=icelake-server
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=icelake-server
-+endif
-+
-+ifdef CONFIG_MCOOPERLAKE
-+        KBUILD_CFLAGS += -march=cooperlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=cooperlake
-+endif
-+
-+ifdef CONFIG_MCASCADELAKE
-+        KBUILD_CFLAGS += -march=cascadelake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=cascadelake
-+endif
-+
-+ifdef CONFIG_MTIGERLAKE
-+        KBUILD_CFLAGS += -march=tigerlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=tigerlake
-+endif
-+
-+ifdef CONFIG_MSAPPHIRERAPIDS
-+        KBUILD_CFLAGS += -march=sapphirerapids
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=sapphirerapids
-+endif
-+
-+ifdef CONFIG_MROCKETLAKE
-+        KBUILD_CFLAGS += -march=rocketlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=rocketlake
-+endif
-+
-+ifdef CONFIG_MALDERLAKE
-+        KBUILD_CFLAGS += -march=alderlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=alderlake
-+endif
-+
-+ifdef CONFIG_MRAPTORLAKE
-+        KBUILD_CFLAGS += -march=raptorlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=raptorlake
-+endif
-+
-+ifdef CONFIG_MMETEORLAKE
-+        KBUILD_CFLAGS += -march=meteorlake
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=meteorlake
-+endif
-+
-+ifdef CONFIG_MEMERALDRAPIDS
-+        KBUILD_CFLAGS += -march=emeraldrapids
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=emeraldrapids
-+endif
-+
-+ifdef CONFIG_MDIAMONDRAPIDS
-+        KBUILD_CFLAGS += -march=diamondrapids
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=diamondrapids
-+endif
-+
-+ifdef CONFIG_GENERIC_CPU
-+ifeq ($(CONFIG_X86_64_VERSION),1)
-         KBUILD_CFLAGS += -march=x86-64 -mtune=generic
-         KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64 -Ztune-cpu=generic
-+else
-+        KBUILD_CFLAGS +=-march=x86-64-v$(CONFIG_X86_64_VERSION)
-+        KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64-v$(CONFIG_X86_64_VERSION)
-+endif
-+endif
- endif
--
-         KBUILD_CFLAGS += -mno-red-zone
-         KBUILD_CFLAGS += -mcmodel=kernel
-         KBUILD_RUSTFLAGS += -Cno-redzone=y
--- 
-2.50.1
-


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-21  1:07 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-21  1:07 UTC (permalink / raw
  To: gentoo-commits

commit:     bb0abc00f63f90c3046725accfb6440344735627
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 01:04:21 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 01:04:21 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bb0abc00

Linux patch 6.16.2

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |     4 +
 1001_linux-6.16.2.patch | 28136 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 28140 insertions(+)

diff --git a/0000_README b/0000_README
index 77cb0499..8d6c88e6 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-6.16.1.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.1
 
+Patch:  1001_linux-6.16.2.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.2
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1001_linux-6.16.2.patch b/1001_linux-6.16.2.patch
new file mode 100644
index 00000000..8ce08428
--- /dev/null
+++ b/1001_linux-6.16.2.patch
@@ -0,0 +1,28136 @@
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index 29e84d125e0244..4a3e844b790c43 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -147,9 +147,8 @@ However, these ioctls have some limitations:
+   were wiped.  To partially solve this, you can add init_on_free=1 to
+   your kernel command line.  However, this has a performance cost.
+ 
+-- Secret keys might still exist in CPU registers, in crypto
+-  accelerator hardware (if used by the crypto API to implement any of
+-  the algorithms), or in other places not explicitly considered here.
++- Secret keys might still exist in CPU registers or in other places
++  not explicitly considered here.
+ 
+ Full system compromise
+ ~~~~~~~~~~~~~~~~~~~~~~
+@@ -406,9 +405,12 @@ the work is done by XChaCha12, which is much faster than AES when AES
+ acceleration is unavailable.  For more information about Adiantum, see
+ `the Adiantum paper <https://eprint.iacr.org/2018/720.pdf>`_.
+ 
+-The (AES-128-CBC-ESSIV, AES-128-CBC-CTS) pair exists only to support
+-systems whose only form of AES acceleration is an off-CPU crypto
+-accelerator such as CAAM or CESA that does not support XTS.
++The (AES-128-CBC-ESSIV, AES-128-CBC-CTS) pair was added to try to
++provide a more efficient option for systems that lack AES instructions
++in the CPU but do have a non-inline crypto engine such as CAAM or CESA
++that supports AES-CBC (and not AES-XTS).  This is deprecated.  It has
++been shown that just doing AES on the CPU is actually faster.
++Moreover, Adiantum is faster still and is recommended on such systems.
+ 
+ The remaining mode pairs are the "national pride ciphers":
+ 
+@@ -1326,22 +1328,13 @@ this by validating all top-level encryption policies prior to access.
+ Inline encryption support
+ =========================
+ 
+-By default, fscrypt uses the kernel crypto API for all cryptographic
+-operations (other than HKDF, which fscrypt partially implements
+-itself).  The kernel crypto API supports hardware crypto accelerators,
+-but only ones that work in the traditional way where all inputs and
+-outputs (e.g. plaintexts and ciphertexts) are in memory.  fscrypt can
+-take advantage of such hardware, but the traditional acceleration
+-model isn't particularly efficient and fscrypt hasn't been optimized
+-for it.
+-
+-Instead, many newer systems (especially mobile SoCs) have *inline
+-encryption hardware* that can encrypt/decrypt data while it is on its
+-way to/from the storage device.  Linux supports inline encryption
+-through a set of extensions to the block layer called *blk-crypto*.
+-blk-crypto allows filesystems to attach encryption contexts to bios
+-(I/O requests) to specify how the data will be encrypted or decrypted
+-in-line.  For more information about blk-crypto, see
++Many newer systems (especially mobile SoCs) have *inline encryption
++hardware* that can encrypt/decrypt data while it is on its way to/from
++the storage device.  Linux supports inline encryption through a set of
++extensions to the block layer called *blk-crypto*.  blk-crypto allows
++filesystems to attach encryption contexts to bios (I/O requests) to
++specify how the data will be encrypted or decrypted in-line.  For more
++information about blk-crypto, see
+ :ref:`Documentation/block/inline-encryption.rst <inline_encryption>`.
+ 
+ On supported filesystems (currently ext4 and f2fs), fscrypt can use
+diff --git a/Documentation/firmware-guide/acpi/i2c-muxes.rst b/Documentation/firmware-guide/acpi/i2c-muxes.rst
+index 3a8997ccd7c4b6..f366539acd792a 100644
+--- a/Documentation/firmware-guide/acpi/i2c-muxes.rst
++++ b/Documentation/firmware-guide/acpi/i2c-muxes.rst
+@@ -14,7 +14,7 @@ Consider this topology::
+     |      |   | 0x70 |--CH01--> i2c client B (0x50)
+     +------+   +------+
+ 
+-which corresponds to the following ASL::
++which corresponds to the following ASL (in the scope of \_SB)::
+ 
+     Device (SMB1)
+     {
+@@ -24,7 +24,7 @@ which corresponds to the following ASL::
+             Name (_HID, ...)
+             Name (_CRS, ResourceTemplate () {
+                 I2cSerialBus (0x70, ControllerInitiated, I2C_SPEED,
+-                            AddressingMode7Bit, "^SMB1", 0x00,
++                            AddressingMode7Bit, "\\_SB.SMB1", 0x00,
+                             ResourceConsumer,,)
+             }
+ 
+@@ -37,7 +37,7 @@ which corresponds to the following ASL::
+                     Name (_HID, ...)
+                     Name (_CRS, ResourceTemplate () {
+                         I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
+-                                    AddressingMode7Bit, "^CH00", 0x00,
++                                    AddressingMode7Bit, "\\_SB.SMB1.CH00", 0x00,
+                                     ResourceConsumer,,)
+                     }
+                 }
+@@ -52,7 +52,7 @@ which corresponds to the following ASL::
+                     Name (_HID, ...)
+                     Name (_CRS, ResourceTemplate () {
+                         I2cSerialBus (0x50, ControllerInitiated, I2C_SPEED,
+-                                    AddressingMode7Bit, "^CH01", 0x00,
++                                    AddressingMode7Bit, "\\_SB.SMB1.CH01", 0x00,
+                                     ResourceConsumer,,)
+                     }
+                 }
+diff --git a/Documentation/sphinx/kernel_abi.py b/Documentation/sphinx/kernel_abi.py
+index db6f0380de94cb..4c4375201b9ec3 100644
+--- a/Documentation/sphinx/kernel_abi.py
++++ b/Documentation/sphinx/kernel_abi.py
+@@ -146,8 +146,10 @@ class KernelCmd(Directive):
+                 n += 1
+ 
+             if f != old_f:
+-                # Add the file to Sphinx build dependencies
+-                env.note_dependency(os.path.abspath(f))
++                # Add the file to Sphinx build dependencies if the file exists
++                fname = os.path.join(srctree, f)
++                if os.path.isfile(fname):
++                    env.note_dependency(fname)
+ 
+                 old_f = f
+ 
+diff --git a/Makefile b/Makefile
+index d18dae20b7af39..ed2967dd07d5e2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c
+index 36915a073c2340..f432d22bfed844 100644
+--- a/arch/arm/mach-rockchip/platsmp.c
++++ b/arch/arm/mach-rockchip/platsmp.c
+@@ -279,11 +279,6 @@ static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus)
+ 	}
+ 
+ 	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
+-		if (rockchip_smp_prepare_sram(node)) {
+-			of_node_put(node);
+-			return;
+-		}
+-
+ 		/* enable the SCU power domain */
+ 		pmu_set_power_domain(PMU_PWRDN_SCU, true);
+ 
+@@ -316,11 +311,19 @@ static void __init rockchip_smp_prepare_cpus(unsigned int max_cpus)
+ 		asm ("mrc p15, 1, %0, c9, c0, 2\n" : "=r" (l2ctlr));
+ 		ncores = ((l2ctlr >> 24) & 0x3) + 1;
+ 	}
+-	of_node_put(node);
+ 
+ 	/* Make sure that all cores except the first are really off */
+ 	for (i = 1; i < ncores; i++)
+ 		pmu_set_power_domain(0 + i, false);
++
++	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
++		if (rockchip_smp_prepare_sram(node)) {
++			of_node_put(node);
++			return;
++		}
++	}
++
++	of_node_put(node);
+ }
+ 
+ static void __init rk3036_smp_prepare_cpus(unsigned int max_cpus)
+diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c
+index d5c805adf7a82b..ea706fac63587a 100644
+--- a/arch/arm/mach-tegra/reset.c
++++ b/arch/arm/mach-tegra/reset.c
+@@ -63,7 +63,7 @@ static void __init tegra_cpu_reset_handler_enable(void)
+ 	BUG_ON(is_enabled);
+ 	BUG_ON(tegra_cpu_reset_handler_size > TEGRA_IRAM_RESET_HANDLER_SIZE);
+ 
+-	memcpy(iram_base, (void *)__tegra_cpu_reset_handler_start,
++	memcpy_toio(iram_base, (void *)__tegra_cpu_reset_handler_start,
+ 			tegra_cpu_reset_handler_size);
+ 
+ 	err = call_firmware_op(set_cpu_boot_addr, 0, reset_address);
+diff --git a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+index a47852fdca70c4..d0533723412a88 100644
+--- a/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j722s-evm.dts
+@@ -634,7 +634,7 @@ p05-hog {
+ 			/* P05 - USB2.0_MUX_SEL */
+ 			gpio-hog;
+ 			gpios = <5 GPIO_ACTIVE_LOW>;
+-			output-high;
++			output-low;
+ 		};
+ 
+ 		p01_hog: p01-hog {
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index a407f9cd549edc..c07a58b96329d8 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -150,7 +150,7 @@ acpi_set_mailbox_entry(int cpu, struct acpi_madt_generic_interrupt *processor)
+ {}
+ #endif
+ 
+-static inline const char *acpi_get_enable_method(int cpu)
++static __always_inline const char *acpi_get_enable_method(int cpu)
+ {
+ 	if (acpi_psci_present())
+ 		return "psci";
+diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
+index b9a66fc146c9fa..4d529ff7ba513a 100644
+--- a/arch/arm64/kernel/acpi.c
++++ b/arch/arm64/kernel/acpi.c
+@@ -197,6 +197,8 @@ static int __init acpi_fadt_sanity_check(void)
+  */
+ void __init acpi_boot_table_init(void)
+ {
++	int ret;
++
+ 	/*
+ 	 * Enable ACPI instead of device tree unless
+ 	 * - ACPI has been disabled explicitly (acpi=off), or
+@@ -250,10 +252,12 @@ void __init acpi_boot_table_init(void)
+ 		 * behaviour, use acpi=nospcr to disable console in ACPI SPCR
+ 		 * table as default serial console.
+ 		 */
+-		acpi_parse_spcr(earlycon_acpi_spcr_enable,
++		ret = acpi_parse_spcr(earlycon_acpi_spcr_enable,
+ 			!param_acpi_nospcr);
+-		pr_info("Use ACPI SPCR as default console: %s\n",
+-				param_acpi_nospcr ? "No" : "Yes");
++		if (!ret || param_acpi_nospcr || !IS_ENABLED(CONFIG_ACPI_SPCR_TABLE))
++			pr_info("Use ACPI SPCR as default console: No\n");
++		else
++			pr_info("Use ACPI SPCR as default console: Yes\n");
+ 
+ 		if (IS_ENABLED(CONFIG_ACPI_BGRT))
+ 			acpi_table_parse(ACPI_SIG_BGRT, acpi_parse_bgrt);
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index 1d9d51d7627fd4..f6494c09421442 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -152,6 +152,8 @@ kunwind_recover_return_address(struct kunwind_state *state)
+ 		orig_pc = kretprobe_find_ret_addr(state->task,
+ 						  (void *)state->common.fp,
+ 						  &state->kr_cur);
++		if (!orig_pc)
++			return -EINVAL;
+ 		state->common.pc = orig_pc;
+ 		state->flags.kretprobe = 1;
+ 	}
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 9bfa5c944379d3..7468b22585cef0 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -931,6 +931,7 @@ void __noreturn panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigne
+ 
+ void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr)
+ {
++	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
+ 	console_verbose();
+ 
+ 	pr_crit("SError Interrupt on CPU%d, code 0x%016lx -- %s\n",
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 11eb8d1adc8418..f590dc71ce9980 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -838,6 +838,7 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
+ 		 */
+ 		siaddr  = untagged_addr(far);
+ 	}
++	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
+ 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+ 
+ 	return 0;
+diff --git a/arch/arm64/mm/ptdump_debugfs.c b/arch/arm64/mm/ptdump_debugfs.c
+index 68bf1a125502da..1e308328c07966 100644
+--- a/arch/arm64/mm/ptdump_debugfs.c
++++ b/arch/arm64/mm/ptdump_debugfs.c
+@@ -1,6 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/debugfs.h>
+-#include <linux/memory_hotplug.h>
+ #include <linux/seq_file.h>
+ 
+ #include <asm/ptdump.h>
+@@ -9,9 +8,7 @@ static int ptdump_show(struct seq_file *m, void *v)
+ {
+ 	struct ptdump_info *info = m->private;
+ 
+-	get_online_mems();
+ 	ptdump_walk(m, info);
+-	put_online_mems();
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/loongarch/kernel/env.c b/arch/loongarch/kernel/env.c
+index 27144de5c5fe4f..c0a5dc9aeae287 100644
+--- a/arch/loongarch/kernel/env.c
++++ b/arch/loongarch/kernel/env.c
+@@ -39,16 +39,19 @@ void __init init_environ(void)
+ 
+ static int __init init_cpu_fullname(void)
+ {
+-	struct device_node *root;
+ 	int cpu, ret;
+-	char *model;
++	char *cpuname;
++	const char *model;
++	struct device_node *root;
+ 
+ 	/* Parsing cpuname from DTS model property */
+ 	root = of_find_node_by_path("/");
+-	ret = of_property_read_string(root, "model", (const char **)&model);
++	ret = of_property_read_string(root, "model", &model);
++	if (ret == 0) {
++		cpuname = kstrdup(model, GFP_KERNEL);
++		loongson_sysconf.cpuname = strsep(&cpuname, " ");
++	}
+ 	of_node_put(root);
+-	if (ret == 0)
+-		loongson_sysconf.cpuname = strsep(&model, " ");
+ 
+ 	if (loongson_sysconf.cpuname && !strncmp(loongson_sysconf.cpuname, "Loongson", 8)) {
+ 		for (cpu = 0; cpu < NR_CPUS; cpu++)
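
The env.c fix above duplicates the devicetree "model" string before tokenizing it: of_property_read_string() hands back a pointer into const DT property data, and strsep() writes a NUL terminator into whatever buffer it is given. Below is a minimal userspace sketch of the same pattern, with plain strdup() standing in for kstrdup() and an assumed example model string:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Stand-in for the const string a DT "model" property would give us
	 * (the value here is only an illustrative example). */
	const char *model = "Loongson-3A5000 Reference Board";
	char *dup, *cursor, *first;

	/* strsep() modifies its argument, so work on a private copy; the
	 * kernel fix uses kstrdup(model, GFP_KERNEL) for the same reason --
	 * the DT property data must not be written to. */
	dup = strdup(model);
	if (!dup)
		return 1;

	cursor = dup;
	first = strsep(&cursor, " ");	/* take the first space-separated token */
	printf("cpuname: %s\n", first);	/* -> "Loongson-3A5000" */

	/* 'first' points into 'dup'; the kernel code likewise keeps the
	 * duplicated buffer alive for as long as the token is in use. */
	free(dup);
	return 0;
}
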
+diff --git a/arch/loongarch/kernel/relocate_kernel.S b/arch/loongarch/kernel/relocate_kernel.S
+index 84e6de2fd97354..8b5140ac9ea112 100644
+--- a/arch/loongarch/kernel/relocate_kernel.S
++++ b/arch/loongarch/kernel/relocate_kernel.S
+@@ -109,4 +109,4 @@ SYM_CODE_END(kexec_smp_wait)
+ relocate_new_kernel_end:
+ 
+ 	.section ".data"
+-SYM_DATA(relocate_new_kernel_size, .long relocate_new_kernel_end - relocate_new_kernel)
++SYM_DATA(relocate_new_kernel_size, .quad relocate_new_kernel_end - relocate_new_kernel)
+diff --git a/arch/loongarch/kernel/unwind_orc.c b/arch/loongarch/kernel/unwind_orc.c
+index 0005be49b0569f..0d5fa64a222522 100644
+--- a/arch/loongarch/kernel/unwind_orc.c
++++ b/arch/loongarch/kernel/unwind_orc.c
+@@ -508,7 +508,7 @@ bool unwind_next_frame(struct unwind_state *state)
+ 
+ 	state->pc = bt_address(pc);
+ 	if (!state->pc) {
+-		pr_err("cannot find unwind pc at %pK\n", (void *)pc);
++		pr_err("cannot find unwind pc at %p\n", (void *)pc);
+ 		goto err;
+ 	}
+ 
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index fa1500d4aa3e3a..5ba3249cea98a2 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -208,11 +208,9 @@ bool bpf_jit_supports_far_kfunc_call(void)
+ 	return true;
+ }
+ 
+-/* initialized on the first pass of build_body() */
+-static int out_offset = -1;
+-static int emit_bpf_tail_call(struct jit_ctx *ctx)
++static int emit_bpf_tail_call(struct jit_ctx *ctx, int insn)
+ {
+-	int off;
++	int off, tc_ninsn = 0;
+ 	u8 tcc = tail_call_reg(ctx);
+ 	u8 a1 = LOONGARCH_GPR_A1;
+ 	u8 a2 = LOONGARCH_GPR_A2;
+@@ -222,7 +220,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	const int idx0 = ctx->idx;
+ 
+ #define cur_offset (ctx->idx - idx0)
+-#define jmp_offset (out_offset - (cur_offset))
++#define jmp_offset (tc_ninsn - (cur_offset))
+ 
+ 	/*
+ 	 * a0: &ctx
+@@ -232,6 +230,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	 * if (index >= array->map.max_entries)
+ 	 *	 goto out;
+ 	 */
++	tc_ninsn = insn ? ctx->offset[insn+1] - ctx->offset[insn] : ctx->offset[0];
+ 	off = offsetof(struct bpf_array, map.max_entries);
+ 	emit_insn(ctx, ldwu, t1, a1, off);
+ 	/* bgeu $a2, $t1, jmp_offset */
+@@ -263,15 +262,6 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ 	emit_insn(ctx, ldd, t3, t2, off);
+ 	__build_epilogue(ctx, true);
+ 
+-	/* out: */
+-	if (out_offset == -1)
+-		out_offset = cur_offset;
+-	if (cur_offset != out_offset) {
+-		pr_err_once("tail_call out_offset = %d, expected %d!\n",
+-			    cur_offset, out_offset);
+-		return -1;
+-	}
+-
+ 	return 0;
+ 
+ toofar:
+@@ -916,7 +906,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ 	/* tail call */
+ 	case BPF_JMP | BPF_TAIL_CALL:
+ 		mark_tail_call(ctx);
+-		if (emit_bpf_tail_call(ctx) < 0)
++		if (emit_bpf_tail_call(ctx, i) < 0)
+ 			return -EINVAL;
+ 		break;
+ 
+@@ -1342,7 +1332,6 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ 	if (tmp_blinded)
+ 		bpf_jit_prog_release_other(prog, prog == orig_prog ? tmp : orig_prog);
+ 
+-	out_offset = -1;
+ 
+ 	return prog;
+ 
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index ccd2c5e135c66f..d8316f99348240 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -36,7 +36,7 @@ endif
+ 
+ # VDSO linker flags.
+ ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
+-	$(filter -E%,$(KBUILD_CFLAGS)) -nostdlib -shared --build-id -T
++	$(filter -E%,$(KBUILD_CFLAGS)) -shared --build-id -T
+ 
+ #
+ # Shared build commands.
+diff --git a/arch/mips/include/asm/vpe.h b/arch/mips/include/asm/vpe.h
+index 61fd4d0aeda41f..c0769dc4b85321 100644
+--- a/arch/mips/include/asm/vpe.h
++++ b/arch/mips/include/asm/vpe.h
+@@ -119,4 +119,12 @@ void cleanup_tc(struct tc *tc);
+ 
+ int __init vpe_module_init(void);
+ void __exit vpe_module_exit(void);
++
++#ifdef CONFIG_MIPS_VPE_LOADER_MT
++void *vpe_alloc(void);
++int vpe_start(void *vpe, unsigned long start);
++int vpe_stop(void *vpe);
++int vpe_free(void *vpe);
++#endif /* CONFIG_MIPS_VPE_LOADER_MT */
++
+ #endif /* _ASM_VPE_H */
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index b630604c577f9f..02aa6a04a21da4 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -690,18 +690,20 @@ unsigned long mips_stack_top(void)
+ 	}
+ 
+ 	/* Space for the VDSO, data page & GIC user page */
+-	top -= PAGE_ALIGN(current->thread.abi->vdso->size);
+-	top -= PAGE_SIZE;
+-	top -= mips_gic_present() ? PAGE_SIZE : 0;
++	if (current->thread.abi) {
++		top -= PAGE_ALIGN(current->thread.abi->vdso->size);
++		top -= PAGE_SIZE;
++		top -= mips_gic_present() ? PAGE_SIZE : 0;
++
++		/* Space to randomize the VDSO base */
++		if (current->flags & PF_RANDOMIZE)
++			top -= VDSO_RANDOMIZE_SIZE;
++	}
+ 
+ 	/* Space for cache colour alignment */
+ 	if (cpu_has_dc_aliases)
+ 		top -= shm_align_mask + 1;
+ 
+-	/* Space to randomize the VDSO base */
+-	if (current->flags & PF_RANDOMIZE)
+-		top -= VDSO_RANDOMIZE_SIZE;
+-
+ 	return top;
+ }
+ 
+diff --git a/arch/mips/lantiq/falcon/sysctrl.c b/arch/mips/lantiq/falcon/sysctrl.c
+index 1187729d8cbb1b..357543996ee661 100644
+--- a/arch/mips/lantiq/falcon/sysctrl.c
++++ b/arch/mips/lantiq/falcon/sysctrl.c
+@@ -214,19 +214,16 @@ void __init ltq_soc_init(void)
+ 	of_node_put(np_syseth);
+ 	of_node_put(np_sysgpe);
+ 
+-	if ((request_mem_region(res_status.start, resource_size(&res_status),
+-				res_status.name) < 0) ||
+-		(request_mem_region(res_ebu.start, resource_size(&res_ebu),
+-				res_ebu.name) < 0) ||
+-		(request_mem_region(res_sys[0].start,
+-				resource_size(&res_sys[0]),
+-				res_sys[0].name) < 0) ||
+-		(request_mem_region(res_sys[1].start,
+-				resource_size(&res_sys[1]),
+-				res_sys[1].name) < 0) ||
+-		(request_mem_region(res_sys[2].start,
+-				resource_size(&res_sys[2]),
+-				res_sys[2].name) < 0))
++	if ((!request_mem_region(res_status.start, resource_size(&res_status),
++				 res_status.name)) ||
++	    (!request_mem_region(res_ebu.start, resource_size(&res_ebu),
++				 res_ebu.name)) ||
++	    (!request_mem_region(res_sys[0].start, resource_size(&res_sys[0]),
++				 res_sys[0].name)) ||
++	    (!request_mem_region(res_sys[1].start, resource_size(&res_sys[1]),
++				 res_sys[1].name)) ||
++	    (!request_mem_region(res_sys[2].start, resource_size(&res_sys[2]),
++				 res_sys[2].name)))
+ 		pr_err("Failed to request core resources");
+ 
+ 	status_membase = ioremap(res_status.start,
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index 21b8166a688394..9cd9aa3d16f29a 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -139,7 +139,7 @@ palo lifimage: vmlinuz
+ 	fi
+ 	@if test ! -f "$(PALOCONF)"; then \
+ 		cp $(srctree)/arch/parisc/defpalo.conf $(objtree)/palo.conf; \
+-		echo 'A generic palo config file ($(objree)/palo.conf) has been created for you.'; \
++		echo 'A generic palo config file ($(objtree)/palo.conf) has been created for you.'; \
+ 		echo 'You should check it and re-run "make palo".'; \
+ 		echo 'WARNING: the "lifimage" file is now placed in this directory by default!'; \
+ 		false; \
+diff --git a/arch/powerpc/include/asm/floppy.h b/arch/powerpc/include/asm/floppy.h
+index f8ce178b43b783..34abf8bea2ccd6 100644
+--- a/arch/powerpc/include/asm/floppy.h
++++ b/arch/powerpc/include/asm/floppy.h
+@@ -144,9 +144,12 @@ static int hard_dma_setup(char *addr, unsigned long size, int mode, int io)
+ 		bus_addr = 0;
+ 	}
+ 
+-	if (!bus_addr)	/* need to map it */
++	if (!bus_addr) {	/* need to map it */
+ 		bus_addr = dma_map_single(&isa_bridge_pcidev->dev, addr, size,
+ 					  dir);
++		if (dma_mapping_error(&isa_bridge_pcidev->dev, bus_addr))
++			return -ENOMEM;
++	}
+ 
+ 	/* remember this one as prev */
+ 	prev_addr = addr;
+diff --git a/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c b/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
+index 9668b052cd4b3a..f251e0f6826204 100644
+--- a/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
++++ b/arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
+@@ -240,10 +240,8 @@ static int mpc512x_lpbfifo_kick(void)
+ 	dma_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 
+ 	/* Make DMA channel work with LPB FIFO data register */
+-	if (dma_dev->device_config(lpbfifo.chan, &dma_conf)) {
+-		ret = -EINVAL;
+-		goto err_dma_prep;
+-	}
++	if (dma_dev->device_config(lpbfifo.chan, &dma_conf))
++		return -EINVAL;
+ 
+ 	sg_init_table(&sg, 1);
+ 
+diff --git a/arch/riscv/boot/dts/thead/th1520.dtsi b/arch/riscv/boot/dts/thead/th1520.dtsi
+index 1db0054c4e0934..93135e0f5a77b8 100644
+--- a/arch/riscv/boot/dts/thead/th1520.dtsi
++++ b/arch/riscv/boot/dts/thead/th1520.dtsi
+@@ -294,8 +294,9 @@ gmac1: ethernet@ffe7060000 {
+ 			reg-names = "dwmac", "apb";
+ 			interrupts = <67 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "macirq";
+-			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC1>;
+-			clock-names = "stmmaceth", "pclk";
++			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC1>,
++				 <&clk CLK_PERISYS_APB4_HCLK>;
++			clock-names = "stmmaceth", "pclk", "apb";
+ 			snps,pbl = <32>;
+ 			snps,fixed-burst;
+ 			snps,multicast-filter-bins = <64>;
+@@ -316,8 +317,9 @@ gmac0: ethernet@ffe7070000 {
+ 			reg-names = "dwmac", "apb";
+ 			interrupts = <66 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "macirq";
+-			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC0>;
+-			clock-names = "stmmaceth", "pclk";
++			clocks = <&clk CLK_GMAC_AXI>, <&clk CLK_GMAC0>,
++				 <&clk CLK_PERISYS_APB4_HCLK>;
++			clock-names = "stmmaceth", "pclk", "apb";
+ 			snps,pbl = <32>;
+ 			snps,fixed-burst;
+ 			snps,multicast-filter-bins = <64>;
+diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
+index 32922550a50a3a..3b51690cc87605 100644
+--- a/arch/riscv/mm/ptdump.c
++++ b/arch/riscv/mm/ptdump.c
+@@ -6,7 +6,6 @@
+ #include <linux/efi.h>
+ #include <linux/init.h>
+ #include <linux/debugfs.h>
+-#include <linux/memory_hotplug.h>
+ #include <linux/seq_file.h>
+ #include <linux/ptdump.h>
+ 
+@@ -413,9 +412,7 @@ bool ptdump_check_wx(void)
+ 
+ static int ptdump_show(struct seq_file *m, void *v)
+ {
+-	get_online_mems();
+ 	ptdump_walk(m, m->private);
+-	put_online_mems();
+ 
+ 	return 0;
+ }
+diff --git a/arch/s390/include/asm/timex.h b/arch/s390/include/asm/timex.h
+index bed8d0b5a282c0..59dfb8780f62ad 100644
+--- a/arch/s390/include/asm/timex.h
++++ b/arch/s390/include/asm/timex.h
+@@ -196,13 +196,6 @@ static inline unsigned long get_tod_clock_fast(void)
+ 	asm volatile("stckf %0" : "=Q" (clk) : : "cc");
+ 	return clk;
+ }
+-
+-static inline cycles_t get_cycles(void)
+-{
+-	return (cycles_t) get_tod_clock() >> 2;
+-}
+-#define get_cycles get_cycles
+-
+ int get_phys_clock(unsigned long *clock);
+ void init_cpu_timer(void);
+ 
+@@ -230,6 +223,12 @@ static inline unsigned long get_tod_clock_monotonic(void)
+ 	return tod;
+ }
+ 
++static inline cycles_t get_cycles(void)
++{
++	return (cycles_t)get_tod_clock_monotonic() >> 2;
++}
++#define get_cycles get_cycles
++
+ /**
+  * tod_to_ns - convert a TOD format value to nanoseconds
+  * @todval: to be converted TOD format value
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index 54cf0923050f2d..9b4b5ccda323ac 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -154,6 +154,7 @@ void __init __do_early_pgm_check(struct pt_regs *regs)
+ 
+ 	regs->int_code = lc->pgm_int_code;
+ 	regs->int_parm_long = lc->trans_exc_code;
++	regs->last_break = lc->pgm_last_break;
+ 	ip = __rewind_psw(regs->psw, regs->int_code >> 16);
+ 
+ 	/* Monitor Event? Might be a warning */
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index fed17d407a4442..cb7ed55e24d206 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -580,7 +580,7 @@ static int stp_sync_clock(void *data)
+ 		atomic_dec(&sync->cpus);
+ 		/* Wait for in_sync to be set. */
+ 		while (READ_ONCE(sync->in_sync) == 0)
+-			__udelay(1);
++			;
+ 	}
+ 	if (sync->in_sync != 1)
+ 		/* Didn't work. Clear per-cpu in sync bit again. */
+diff --git a/arch/s390/mm/dump_pagetables.c b/arch/s390/mm/dump_pagetables.c
+index ac604b17666095..9af2aae0a5152a 100644
+--- a/arch/s390/mm/dump_pagetables.c
++++ b/arch/s390/mm/dump_pagetables.c
+@@ -247,11 +247,9 @@ static int ptdump_show(struct seq_file *m, void *v)
+ 		.marker = markers,
+ 	};
+ 
+-	get_online_mems();
+ 	mutex_lock(&cpa_mutex);
+ 	ptdump_walk_pgd(&st.ptdump, &init_mm, NULL);
+ 	mutex_unlock(&cpa_mutex);
+-	put_online_mems();
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h
+index f9ad06fcc991a2..eb9b3a6d99e847 100644
+--- a/arch/um/include/asm/thread_info.h
++++ b/arch/um/include/asm/thread_info.h
+@@ -50,7 +50,11 @@ struct thread_info {
+ #define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)
+ #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
++#define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+ #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ 
++#define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL | \
++				 _TIF_NOTIFY_RESUME)
++
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 0cd6fad3d908d4..1be644de9e41ec 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -82,14 +82,18 @@ struct task_struct *__switch_to(struct task_struct *from, struct task_struct *to
+ void interrupt_end(void)
+ {
+ 	struct pt_regs *regs = &current->thread.regs;
+-
+-	if (need_resched())
+-		schedule();
+-	if (test_thread_flag(TIF_SIGPENDING) ||
+-	    test_thread_flag(TIF_NOTIFY_SIGNAL))
+-		do_signal(regs);
+-	if (test_thread_flag(TIF_NOTIFY_RESUME))
+-		resume_user_mode_work(regs);
++	unsigned long thread_flags;
++
++	thread_flags = read_thread_flags();
++	while (thread_flags & _TIF_WORK_MASK) {
++		if (thread_flags & _TIF_NEED_RESCHED)
++			schedule();
++		if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
++			do_signal(regs);
++		if (thread_flags & _TIF_NOTIFY_RESUME)
++			resume_user_mode_work(regs);
++		thread_flags = read_thread_flags();
++	}
+ }
+ 
+ int get_current_pid(void)
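
The interrupt_end() rework above is the classic exit-to-userspace work loop: instead of checking each flag once, it re-reads the flag word after every pass, so work queued while earlier work ran (say, a signal posted during schedule()) is still handled before returning. A standalone sketch of that loop shape, with an ordinary integer standing in for the thread-info flags (flag names here are illustrative, not the kernel's):

#include <stdio.h>

#define WORK_RESCHED	(1u << 0)
#define WORK_SIGNAL	(1u << 1)
#define WORK_RESUME	(1u << 2)
#define WORK_MASK	(WORK_RESCHED | WORK_SIGNAL | WORK_RESUME)

static unsigned int flags = WORK_RESCHED | WORK_SIGNAL;

static void handle_resched(void)
{
	flags &= ~WORK_RESCHED;
	/* Simulate new work arriving while we were "scheduling": this is
	 * exactly the case the re-read in the kernel loop exists for. */
	flags |= WORK_RESUME;
	puts("rescheduled");
}

static void handle_signal(void) { flags &= ~WORK_SIGNAL; puts("signal delivered"); }
static void handle_resume(void) { flags &= ~WORK_RESUME; puts("resume work done"); }

int main(void)
{
	unsigned int f = flags;		/* read_thread_flags() analogue */

	while (f & WORK_MASK) {
		if (f & WORK_RESCHED)
			handle_resched();
		if (f & WORK_SIGNAL)
			handle_signal();
		if (f & WORK_RESUME)
			handle_resume();
		f = flags;		/* re-read before deciding to exit */
	}
	puts("returning to userspace with no pending work");
	return 0;
}

A one-shot version would have exited after the first pass and missed the resume work queued by the reschedule step.
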
+diff --git a/arch/x86/boot/startup/sev-shared.c b/arch/x86/boot/startup/sev-shared.c
+index ac7dfd21ddd4d5..a34cd19796f9ac 100644
+--- a/arch/x86/boot/startup/sev-shared.c
++++ b/arch/x86/boot/startup/sev-shared.c
+@@ -785,6 +785,7 @@ static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
+ 	pc->entry[0].page_size = RMP_PG_SIZE_4K;
+ 	pc->entry[0].action    = validate;
+ 	pc->entry[0].ignore_cf = 0;
++	pc->entry[0].rsvd      = 0;
+ 	pc->entry[0].pfn       = paddr >> PAGE_SHIFT;
+ 
+ 	/* Protocol 0, Call ID 1 */
+diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
+index eb80445e8e5d38..92a9ce93f0e55a 100644
+--- a/arch/x86/coco/sev/core.c
++++ b/arch/x86/coco/sev/core.c
+@@ -227,6 +227,7 @@ static u64 svsm_build_ca_from_pfn_range(u64 pfn, u64 pfn_end, bool action,
+ 		pe->page_size = RMP_PG_SIZE_4K;
+ 		pe->action    = action;
+ 		pe->ignore_cf = 0;
++		pe->rsvd      = 0;
+ 		pe->pfn       = pfn;
+ 
+ 		pe++;
+@@ -257,6 +258,7 @@ static int svsm_build_ca_from_psc_desc(struct snp_psc_desc *desc, unsigned int d
+ 		pe->page_size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K;
+ 		pe->action    = e->operation == SNP_PAGE_STATE_PRIVATE;
+ 		pe->ignore_cf = 0;
++		pe->rsvd      = 0;
+ 		pe->pfn       = e->gfn;
+ 
+ 		pe++;
+diff --git a/arch/x86/coco/sev/vc-handle.c b/arch/x86/coco/sev/vc-handle.c
+index 0989d98da1303e..c3b4acbde0d8c6 100644
+--- a/arch/x86/coco/sev/vc-handle.c
++++ b/arch/x86/coco/sev/vc-handle.c
+@@ -17,6 +17,7 @@
+ #include <linux/mm.h>
+ #include <linux/io.h>
+ #include <linux/psp-sev.h>
++#include <linux/efi.h>
+ #include <uapi/linux/sev-guest.h>
+ 
+ #include <asm/init.h>
+@@ -178,9 +179,15 @@ static enum es_result __vc_decode_kern_insn(struct es_em_ctxt *ctxt)
+ 		return ES_OK;
+ }
+ 
++/*
++ * User instruction decoding is also required for the EFI runtime. Even though
++ * the EFI runtime is running in kernel mode, it uses special EFI virtual
++ * address mappings that require the use of efi_mm to properly address and
++ * decode.
++ */
+ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
+ {
+-	if (user_mode(ctxt->regs))
++	if (user_mode(ctxt->regs) || mm_is_efi(current->active_mm))
+ 		return __vc_decode_user_insn(ctxt);
+ 	else
+ 		return __vc_decode_kern_insn(ctxt);
+@@ -364,29 +371,30 @@ static enum es_result __vc_handle_msr_caa(struct pt_regs *regs, bool write)
+  * executing with Secure TSC enabled, so special handling is required for
+  * accesses of MSR_IA32_TSC and MSR_AMD64_GUEST_TSC_FREQ.
+  */
+-static enum es_result __vc_handle_secure_tsc_msrs(struct pt_regs *regs, bool write)
++static enum es_result __vc_handle_secure_tsc_msrs(struct es_em_ctxt *ctxt, bool write)
+ {
++	struct pt_regs *regs = ctxt->regs;
+ 	u64 tsc;
+ 
+ 	/*
+-	 * GUEST_TSC_FREQ should not be intercepted when Secure TSC is enabled.
+-	 * Terminate the SNP guest when the interception is enabled.
++	 * Writing to MSR_IA32_TSC can cause subsequent reads of the TSC to
++	 * return undefined values, and GUEST_TSC_FREQ is read-only. Generate
++	 * a #GP on all writes.
+ 	 */
+-	if (regs->cx == MSR_AMD64_GUEST_TSC_FREQ)
+-		return ES_VMM_ERROR;
++	if (write) {
++		ctxt->fi.vector = X86_TRAP_GP;
++		ctxt->fi.error_code = 0;
++		return ES_EXCEPTION;
++	}
+ 
+ 	/*
+-	 * Writes: Writing to MSR_IA32_TSC can cause subsequent reads of the TSC
+-	 *         to return undefined values, so ignore all writes.
+-	 *
+-	 * Reads: Reads of MSR_IA32_TSC should return the current TSC value, use
+-	 *        the value returned by rdtsc_ordered().
++	 * GUEST_TSC_FREQ read should not be intercepted when Secure TSC is
++	 * enabled. Terminate the guest if a read is attempted.
+ 	 */
+-	if (write) {
+-		WARN_ONCE(1, "TSC MSR writes are verboten!\n");
+-		return ES_OK;
+-	}
++	if (regs->cx == MSR_AMD64_GUEST_TSC_FREQ)
++		return ES_VMM_ERROR;
+ 
++	/* Reads of MSR_IA32_TSC should return the current TSC value. */
+ 	tsc = rdtsc_ordered();
+ 	regs->ax = lower_32_bits(tsc);
+ 	regs->dx = upper_32_bits(tsc);
+@@ -409,7 +417,7 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
+ 	case MSR_IA32_TSC:
+ 	case MSR_AMD64_GUEST_TSC_FREQ:
+ 		if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
+-			return __vc_handle_secure_tsc_msrs(regs, write);
++			return __vc_handle_secure_tsc_msrs(ctxt, write);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 7e45a20d3ebc3f..cbe76e0a008b4c 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1683,6 +1683,7 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
+ enum kvm_x86_run_flags {
+ 	KVM_RUN_FORCE_IMMEDIATE_EXIT	= BIT(0),
+ 	KVM_RUN_LOAD_GUEST_DR6		= BIT(1),
++	KVM_RUN_LOAD_DEBUGCTL		= BIT(2),
+ };
+ 
+ struct kvm_x86_ops {
+@@ -1713,6 +1714,12 @@ struct kvm_x86_ops {
+ 	void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
+ 	void (*vcpu_put)(struct kvm_vcpu *vcpu);
+ 
++	/*
++	 * Mask of DEBUGCTL bits that are owned by the host, i.e. that need to
++	 * match the host's value even while the guest is active.
++	 */
++	const u64 HOST_OWNED_DEBUGCTL;
++
+ 	void (*update_exception_bitmap)(struct kvm_vcpu *vcpu);
+ 	int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
+ 	int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr);
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index f2721801d8d423..d19972d5d72955 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -115,10 +115,9 @@ void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
+ 
+ static void __init set_return_thunk(void *thunk)
+ {
+-	if (x86_return_thunk != __x86_return_thunk)
+-		pr_warn("x86/bugs: return thunk changed\n");
+-
+ 	x86_return_thunk = thunk;
++
++	pr_info("active return thunk: %ps\n", thunk);
+ }
+ 
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 9aa9ac8399aee6..81058414deda16 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1855,19 +1855,20 @@ long fpu_xstate_prctl(int option, unsigned long arg2)
+ #ifdef CONFIG_PROC_PID_ARCH_STATUS
+ /*
+  * Report the amount of time elapsed in millisecond since last AVX512
+- * use in the task.
++ * use in the task. Report -1 if no AVX-512 usage.
+  */
+ static void avx512_status(struct seq_file *m, struct task_struct *task)
+ {
+-	unsigned long timestamp = READ_ONCE(x86_task_fpu(task)->avx512_timestamp);
+-	long delta;
++	unsigned long timestamp;
++	long delta = -1;
+ 
+-	if (!timestamp) {
+-		/*
+-		 * Report -1 if no AVX512 usage
+-		 */
+-		delta = -1;
+-	} else {
++	/* AVX-512 usage is not tracked for kernel threads. Don't report anything. */
++	if (task->flags & (PF_KTHREAD | PF_USER_WORKER))
++		return;
++
++	timestamp = READ_ONCE(x86_task_fpu(task)->avx512_timestamp);
++
++	if (timestamp) {
+ 		delta = (long)(jiffies - timestamp);
+ 		/*
+ 		 * Cap to LONG_MAX if time difference > LONG_MAX
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index c85cbce6d2f6ef..4a6d4460f94715 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -915,6 +915,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ 	.vcpu_load = vt_op(vcpu_load),
+ 	.vcpu_put = vt_op(vcpu_put),
+ 
++	.HOST_OWNED_DEBUGCTL = DEBUGCTLMSR_FREEZE_IN_SMM,
++
+ 	.update_exception_bitmap = vt_op(update_exception_bitmap),
+ 	.get_feature_msr = vmx_get_feature_msr,
+ 	.get_msr = vt_op(get_msr),
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 7211c71d424135..c69df3aba8d1f9 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2663,10 +2663,11 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 	if (vmx->nested.nested_run_pending &&
+ 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS)) {
+ 		kvm_set_dr(vcpu, 7, vmcs12->guest_dr7);
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, vmcs12->guest_ia32_debugctl);
++		vmx_guest_debugctl_write(vcpu, vmcs12->guest_ia32_debugctl &
++					       vmx_get_supported_debugctl(vcpu, false));
+ 	} else {
+ 		kvm_set_dr(vcpu, 7, vcpu->arch.dr7);
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, vmx->nested.pre_vmenter_debugctl);
++		vmx_guest_debugctl_write(vcpu, vmx->nested.pre_vmenter_debugctl);
+ 	}
+ 	if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
+ 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
+@@ -3156,7 +3157,8 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 
+ 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
+-	    CC(!kvm_dr7_valid(vmcs12->guest_dr7)))
++	    (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
++	     CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
+ 		return -EINVAL;
+ 
+ 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
+@@ -3530,7 +3532,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
+ 
+ 	if (!vmx->nested.nested_run_pending ||
+ 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+-		vmx->nested.pre_vmenter_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
++		vmx->nested.pre_vmenter_debugctl = vmx_guest_debugctl_read();
+ 	if (kvm_mpx_supported() &&
+ 	    (!vmx->nested.nested_run_pending ||
+ 	     !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
+@@ -4608,6 +4610,12 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 		(vmcs12->vm_entry_controls & ~VM_ENTRY_IA32E_MODE) |
+ 		(vm_entry_controls_get(to_vmx(vcpu)) & VM_ENTRY_IA32E_MODE);
+ 
++	/*
++	 * Note!  Save DR7, but intentionally don't grab DEBUGCTL from vmcs02.
++	 * Writes to DEBUGCTL that aren't intercepted by L1 are immediately
++	 * propagated to vmcs12 (see vmx_set_msr()), as the value loaded into
++	 * vmcs02 doesn't strictly track vmcs12.
++	 */
+ 	if (vmcs12->vm_exit_controls & VM_EXIT_SAVE_DEBUG_CONTROLS)
+ 		vmcs12->guest_dr7 = vcpu->arch.dr7;
+ 
+@@ -4798,7 +4806,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
+ 	__vmx_set_segment(vcpu, &seg, VCPU_SREG_LDTR);
+ 
+ 	kvm_set_dr(vcpu, 7, 0x400);
+-	vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
++	vmx_guest_debugctl_write(vcpu, 0);
+ 
+ 	if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
+ 				vmcs12->vm_exit_msr_load_count))
+@@ -4853,6 +4861,9 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
+ 			WARN_ON(kvm_set_dr(vcpu, 7, vmcs_readl(GUEST_DR7)));
+ 	}
+ 
++	/* Reload DEBUGCTL to ensure vmcs01 has a fresh FREEZE_IN_SMM value. */
++	vmx_reload_guest_debugctl(vcpu);
++
+ 	/*
+ 	 * Note that calling vmx_set_{efer,cr0,cr4} is important as they
+ 	 * handle a variety of side effects to KVM's software model.
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index bbf4509f32d0fd..0b173602821ba3 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -653,11 +653,11 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
+  */
+ static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)
+ {
+-	u64 data = vmcs_read64(GUEST_IA32_DEBUGCTL);
++	u64 data = vmx_guest_debugctl_read();
+ 
+ 	if (data & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI) {
+ 		data &= ~DEBUGCTLMSR_LBR;
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
++		vmx_guest_debugctl_write(vcpu, data);
+ 	}
+ }
+ 
+@@ -730,7 +730,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
+ 
+ 	if (!lbr_desc->event) {
+ 		vmx_disable_lbr_msrs_passthrough(vcpu);
+-		if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
++		if (vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR)
+ 			goto warn;
+ 		if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
+ 			goto warn;
+@@ -752,7 +752,7 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
+ 
+ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
+ {
+-	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
++	if (!(vmx_guest_debugctl_read() & DEBUGCTLMSR_LBR))
+ 		intel_pmu_release_guest_lbr_event(vcpu);
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 91fbddbbc3ba7c..7fddb0abbeaad3 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2149,7 +2149,7 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			msr_info->data = vmx->pt_desc.guest.addr_a[index / 2];
+ 		break;
+ 	case MSR_IA32_DEBUGCTLMSR:
+-		msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
++		msr_info->data = vmx_guest_debugctl_read();
+ 		break;
+ 	default:
+ 	find_uret_msr:
+@@ -2174,7 +2174,7 @@ static u64 nested_vmx_truncate_sysenter_addr(struct kvm_vcpu *vcpu,
+ 	return (unsigned long)data;
+ }
+ 
+-static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
++u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated)
+ {
+ 	u64 debugctl = 0;
+ 
+@@ -2193,6 +2193,18 @@ static u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated
+ 	return debugctl;
+ }
+ 
++bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated)
++{
++	u64 invalid;
++
++	invalid = data & ~vmx_get_supported_debugctl(vcpu, host_initiated);
++	if (invalid & (DEBUGCTLMSR_BTF | DEBUGCTLMSR_LBR)) {
++		kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
++		invalid &= ~(DEBUGCTLMSR_BTF | DEBUGCTLMSR_LBR);
++	}
++	return !invalid;
++}
++
+ /*
+  * Writes msr value into the appropriate "register".
+  * Returns 0 on success, non-0 otherwise.
+@@ -2261,29 +2273,22 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		}
+ 		vmcs_writel(GUEST_SYSENTER_ESP, data);
+ 		break;
+-	case MSR_IA32_DEBUGCTLMSR: {
+-		u64 invalid;
+-
+-		invalid = data & ~vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
+-		if (invalid & (DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR)) {
+-			kvm_pr_unimpl_wrmsr(vcpu, msr_index, data);
+-			data &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
+-			invalid &= ~(DEBUGCTLMSR_BTF|DEBUGCTLMSR_LBR);
+-		}
+-
+-		if (invalid)
++	case MSR_IA32_DEBUGCTLMSR:
++		if (!vmx_is_valid_debugctl(vcpu, data, msr_info->host_initiated))
+ 			return 1;
+ 
++		data &= vmx_get_supported_debugctl(vcpu, msr_info->host_initiated);
++
+ 		if (is_guest_mode(vcpu) && get_vmcs12(vcpu)->vm_exit_controls &
+ 						VM_EXIT_SAVE_DEBUG_CONTROLS)
+ 			get_vmcs12(vcpu)->guest_ia32_debugctl = data;
+ 
+-		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
++		vmx_guest_debugctl_write(vcpu, data);
++
+ 		if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
+ 		    (data & DEBUGCTLMSR_LBR))
+ 			intel_pmu_create_guest_lbr_event(vcpu);
+ 		return 0;
+-	}
+ 	case MSR_IA32_BNDCFGS:
+ 		if (!kvm_mpx_supported() ||
+ 		    (!msr_info->host_initiated &&
+@@ -4794,7 +4799,8 @@ static void init_vmcs(struct vcpu_vmx *vmx)
+ 	vmcs_write32(GUEST_SYSENTER_CS, 0);
+ 	vmcs_writel(GUEST_SYSENTER_ESP, 0);
+ 	vmcs_writel(GUEST_SYSENTER_EIP, 0);
+-	vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
++
++	vmx_guest_debugctl_write(&vmx->vcpu, 0);
+ 
+ 	if (cpu_has_vmx_tpr_shadow()) {
+ 		vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, 0);
+@@ -7371,6 +7377,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
+ 	if (run_flags & KVM_RUN_LOAD_GUEST_DR6)
+ 		set_debugreg(vcpu->arch.dr6, 6);
+ 
++	if (run_flags & KVM_RUN_LOAD_DEBUGCTL)
++		vmx_reload_guest_debugctl(vcpu);
++
+ 	/*
+ 	 * Refresh vmcs.HOST_CR3 if necessary.  This must be done immediately
+ 	 * prior to VM-Enter, as the kernel may load a new ASID (PCID) any time
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index b5758c33c60f9d..076af78af15118 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -414,6 +414,32 @@ static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
+ 
+ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
+ 
++u64 vmx_get_supported_debugctl(struct kvm_vcpu *vcpu, bool host_initiated);
++bool vmx_is_valid_debugctl(struct kvm_vcpu *vcpu, u64 data, bool host_initiated);
++
++static inline void vmx_guest_debugctl_write(struct kvm_vcpu *vcpu, u64 val)
++{
++	WARN_ON_ONCE(val & DEBUGCTLMSR_FREEZE_IN_SMM);
++
++	val |= vcpu->arch.host_debugctl & DEBUGCTLMSR_FREEZE_IN_SMM;
++	vmcs_write64(GUEST_IA32_DEBUGCTL, val);
++}
++
++static inline u64 vmx_guest_debugctl_read(void)
++{
++	return vmcs_read64(GUEST_IA32_DEBUGCTL) & ~DEBUGCTLMSR_FREEZE_IN_SMM;
++}
++
++static inline void vmx_reload_guest_debugctl(struct kvm_vcpu *vcpu)
++{
++	u64 val = vmcs_read64(GUEST_IA32_DEBUGCTL);
++
++	if (!((val ^ vcpu->arch.host_debugctl) & DEBUGCTLMSR_FREEZE_IN_SMM))
++		return;
++
++	vmx_guest_debugctl_write(vcpu, val & ~DEBUGCTLMSR_FREEZE_IN_SMM);
++}
++
+ /*
+  * Note, early Intel manuals have the write-low and read-high bitmap offsets
+  * the wrong way round.  The bitmaps control MSRs 0x00000000-0x00001fff and
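
The new vmx.h helpers above treat DEBUGCTLMSR_FREEZE_IN_SMM as a host-owned bit: writes into the VMCS always splice in the host's copy, reads mask it back out, and a rewrite is needed only when that bit actually differs. A standalone model of the invariant, using plain variables in place of the VMCS field (an assertion-style sketch, not the KVM API):

#include <assert.h>
#include <stdint.h>

#define FREEZE_IN_SMM	(1ull << 14)	/* DEBUGCTLMSR_FREEZE_IN_SMM */

static uint64_t vmcs_debugctl;	/* stands in for GUEST_IA32_DEBUGCTL */

/* Mirror of vmx_guest_debugctl_write(): guest value plus host-owned bit. */
static void guest_debugctl_write(uint64_t guest_val, uint64_t host_val)
{
	assert(!(guest_val & FREEZE_IN_SMM));	/* guest never owns this bit */
	vmcs_debugctl = guest_val | (host_val & FREEZE_IN_SMM);
}

/* Mirror of vmx_guest_debugctl_read(): hide the host-owned bit. */
static uint64_t guest_debugctl_read(void)
{
	return vmcs_debugctl & ~FREEZE_IN_SMM;
}

/* Mirror of vmx_reload_guest_debugctl(): rewrite only on a real change. */
static int reload_needed(uint64_t host_val)
{
	return (vmcs_debugctl ^ host_val) & FREEZE_IN_SMM ? 1 : 0;
}

int main(void)
{
	uint64_t host = FREEZE_IN_SMM;

	guest_debugctl_write(0x1, host);	/* guest sets e.g. DEBUGCTLMSR_LBR */
	assert(vmcs_debugctl == (0x1 | FREEZE_IN_SMM));
	assert(guest_debugctl_read() == 0x1);	/* guest never sees the host bit */

	host = 0;				/* host bit flipped, e.g. in IRQ context */
	assert(reload_needed(host));
	guest_debugctl_write(guest_debugctl_read(), host);
	assert(!reload_needed(host));
	return 0;
}
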
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 05de6c5949a470..45c8cabba524ac 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10785,7 +10785,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		dm_request_for_irq_injection(vcpu) &&
+ 		kvm_cpu_accept_dm_intr(vcpu);
+ 	fastpath_t exit_fastpath;
+-	u64 run_flags;
++	u64 run_flags, debug_ctl;
+ 
+ 	bool req_immediate_exit = false;
+ 
+@@ -11057,7 +11057,17 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		set_debugreg(DR7_FIXED_1, 7);
+ 	}
+ 
+-	vcpu->arch.host_debugctl = get_debugctlmsr();
++	/*
++	 * Refresh the host DEBUGCTL snapshot after disabling IRQs, as DEBUGCTL
++	 * can be modified in IRQ context, e.g. via SMP function calls.  Inform
++	 * vendor code if any host-owned bits were changed, e.g. so that the
++	 * value loaded into hardware while running the guest can be updated.
++	 */
++	debug_ctl = get_debugctlmsr();
++	if ((debug_ctl ^ vcpu->arch.host_debugctl) & kvm_x86_ops.HOST_OWNED_DEBUGCTL &&
++	    !vcpu->arch.guest_state_protected)
++		run_flags |= KVM_RUN_LOAD_DEBUGCTL;
++	vcpu->arch.host_debugctl = debug_ctl;
+ 
+ 	guest_timing_enter_irqoff();
+ 
+diff --git a/arch/x86/lib/crypto/poly1305_glue.c b/arch/x86/lib/crypto/poly1305_glue.c
+index b7e78a583e07f7..856d48fd422b02 100644
+--- a/arch/x86/lib/crypto/poly1305_glue.c
++++ b/arch/x86/lib/crypto/poly1305_glue.c
+@@ -25,6 +25,42 @@ struct poly1305_arch_internal {
+ 	struct { u32 r2, r1, r4, r3; } rn[9];
+ };
+ 
++/*
++ * The AVX code uses base 2^26, while the scalar code uses base 2^64. If we hit
++ * the unfortunate situation of using AVX and then having to go back to scalar
++ * -- because the user is silly and has called the update function from two
++ * separate contexts -- then we need to convert back to the original base before
++ * proceeding. It is possible to reason that the initial reduction below is
++ * sufficient given the implementation invariants. However, for an avoidance of
++ * doubt and because this is not performance critical, we do the full reduction
++ * anyway. Z3 proof of below function: https://xn--4db.cc/ltPtHCKN/py
++ */
++static void convert_to_base2_64(void *ctx)
++{
++	struct poly1305_arch_internal *state = ctx;
++	u32 cy;
++
++	if (!state->is_base2_26)
++		return;
++
++	cy = state->h[0] >> 26; state->h[0] &= 0x3ffffff; state->h[1] += cy;
++	cy = state->h[1] >> 26; state->h[1] &= 0x3ffffff; state->h[2] += cy;
++	cy = state->h[2] >> 26; state->h[2] &= 0x3ffffff; state->h[3] += cy;
++	cy = state->h[3] >> 26; state->h[3] &= 0x3ffffff; state->h[4] += cy;
++	state->hs[0] = ((u64)state->h[2] << 52) | ((u64)state->h[1] << 26) | state->h[0];
++	state->hs[1] = ((u64)state->h[4] << 40) | ((u64)state->h[3] << 14) | (state->h[2] >> 12);
++	state->hs[2] = state->h[4] >> 24;
++	/* Unsigned Less Than: branchlessly produces 1 if a < b, else 0. */
++#define ULT(a, b) ((a ^ ((a ^ b) | ((a - b) ^ b))) >> (sizeof(a) * 8 - 1))
++	cy = (state->hs[2] >> 2) + (state->hs[2] & ~3ULL);
++	state->hs[2] &= 3;
++	state->hs[0] += cy;
++	state->hs[1] += (cy = ULT(state->hs[0], cy));
++	state->hs[2] += ULT(state->hs[1], cy);
++#undef ULT
++	state->is_base2_26 = 0;
++}
++
+ asmlinkage void poly1305_block_init_arch(
+ 	struct poly1305_block_state *state,
+ 	const u8 raw_key[POLY1305_BLOCK_SIZE]);
+@@ -62,7 +98,17 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *inp,
+ 	BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
+ 		     SZ_4K % POLY1305_BLOCK_SIZE);
+ 
+-	if (!static_branch_likely(&poly1305_use_avx)) {
++	/*
++	 * The AVX implementations have significant setup overhead (e.g. key
++	 * power computation, kernel FPU enabling) which makes them slower for
++	 * short messages.  Fall back to the scalar implementation for messages
++	 * shorter than 288 bytes, unless the AVX-specific key setup has already
++	 * been performed (indicated by ctx->is_base2_26).
++	 */
++	if (!static_branch_likely(&poly1305_use_avx) ||
++	    (len < POLY1305_BLOCK_SIZE * 18 && !ctx->is_base2_26) ||
++	    unlikely(!irq_fpu_usable())) {
++		convert_to_base2_64(ctx);
+ 		poly1305_blocks_x86_64(ctx, inp, len, padbit);
+ 		return;
+ 	}
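
convert_to_base2_64() above folds its final carries with a branchless unsigned less-than, the ULT macro, which extracts the borrow of a - b from XOR/OR arithmetic alone. A standalone check of that identity against the ordinary < operator on a few edge cases:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Same macro as in the patch: the top bit of this expression is 1 iff
 * a < b (unsigned), computed without a branch or a flags register. */
#define ULT(a, b) ((a ^ ((a ^ b) | ((a - b) ^ b))) >> (sizeof(a) * 8 - 1))

int main(void)
{
	const uint64_t cases[] = {
		0, 1, 2, 3, UINT64_C(0x3ffffff),
		UINT64_C(1) << 63, UINT64_MAX - 1, UINT64_MAX,
	};
	const size_t n = sizeof(cases) / sizeof(cases[0]);

	for (size_t i = 0; i < n; i++)
		for (size_t j = 0; j < n; j++) {
			uint64_t a = cases[i], b = cases[j];
			assert(ULT(a, b) == (a < b ? 1 : 0));
		}
	puts("ULT matches < on all sampled pairs");
	return 0;
}
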
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 0cb1e9873aabb2..d68da9e92e1eee 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -701,17 +701,13 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ {
+ 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
+ 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
+-	int depth;
+-	unsigned limit = data->q->nr_requests;
+-	unsigned int act_idx;
++	unsigned int limit, act_idx;
+ 
+ 	/* Sync reads have full depth available */
+-	if (op_is_sync(opf) && !op_is_write(opf)) {
+-		depth = 0;
+-	} else {
+-		depth = bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
+-		limit = (limit * depth) >> bfqd->full_depth_shift;
+-	}
++	if (op_is_sync(opf) && !op_is_write(opf))
++		limit = data->q->nr_requests;
++	else
++		limit = bfqd->async_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];
+ 
+ 	for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
+ 		/* Fast path to check if bfqq is already allocated. */
+@@ -725,14 +721,16 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ 		 * available requests and thus starve other entities.
+ 		 */
+ 		if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
+-			depth = 1;
++			limit = 1;
+ 			break;
+ 		}
+ 	}
++
+ 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
+-		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
+-	if (depth)
+-		data->shallow_depth = depth;
++		__func__, bfqd->wr_busy_queues, op_is_sync(opf), limit);
++
++	if (limit < data->q->nr_requests)
++		data->shallow_depth = limit;
+ }
+ 
+ static struct bfq_queue *
+@@ -7128,9 +7126,8 @@ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
+  */
+ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+ {
+-	unsigned int depth = 1U << bt->sb.shift;
++	unsigned int nr_requests = bfqd->queue->nr_requests;
+ 
+-	bfqd->full_depth_shift = bt->sb.shift;
+ 	/*
+ 	 * In-word depths if no bfq_queue is being weight-raised:
+ 	 * leaving 25% of tags only for sync reads.
+@@ -7142,13 +7139,13 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+ 	 * limit 'something'.
+ 	 */
+ 	/* no more than 50% of tags for async I/O */
+-	bfqd->word_depths[0][0] = max(depth >> 1, 1U);
++	bfqd->async_depths[0][0] = max(nr_requests >> 1, 1U);
+ 	/*
+ 	 * no more than 75% of tags for sync writes (25% extra tags
+ 	 * w.r.t. async I/O, to prevent async I/O from starving sync
+ 	 * writes)
+ 	 */
+-	bfqd->word_depths[0][1] = max((depth * 3) >> 2, 1U);
++	bfqd->async_depths[0][1] = max((nr_requests * 3) >> 2, 1U);
+ 
+ 	/*
+ 	 * In-word depths in case some bfq_queue is being weight-
+@@ -7158,9 +7155,9 @@ static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+ 	 * shortage.
+ 	 */
+ 	/* no more than ~18% of tags for async I/O */
+-	bfqd->word_depths[1][0] = max((depth * 3) >> 4, 1U);
++	bfqd->async_depths[1][0] = max((nr_requests * 3) >> 4, 1U);
+ 	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+-	bfqd->word_depths[1][1] = max((depth * 6) >> 4, 1U);
++	bfqd->async_depths[1][1] = max((nr_requests * 6) >> 4, 1U);
+ }
+ 
+ static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx)
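
With word_depths replaced by async_depths, the limits computed in bfq_update_depths() are plain request counts rather than per-sbitmap-word depths, so the advertised fractions can be read straight off the queue depth. A quick standalone check of the four formulas for a hypothetical queue of 64 requests (an example value, not a kernel default):

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	unsigned int nr_requests = 64;
	unsigned int d[2][2];

	/* Same formulas as bfq_update_depths() after the patch; the first
	 * index is "some queue is weight-raised", the second is op_is_sync. */
	d[0][0] = MAX(nr_requests >> 1, 1u);		/* async, no wr:    50% -> 32 */
	d[0][1] = MAX((nr_requests * 3) >> 2, 1u);	/* sync write, no wr: 75% -> 48 */
	d[1][0] = MAX((nr_requests * 3) >> 4, 1u);	/* async, wr:      ~18% -> 12 */
	d[1][1] = MAX((nr_requests * 6) >> 4, 1u);	/* sync write, wr: ~37% -> 24 */

	printf("async/no-wr=%u sync-wr/no-wr=%u async/wr=%u sync-wr/wr=%u\n",
	       d[0][0], d[0][1], d[1][0], d[1][1]);
	return 0;
}

Sync reads are not limited at all; bfq_limit_depth() leaves them at the full nr_requests, as the hunk above shows.
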
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 687a3a7ba78478..31217f196f4f1b 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -813,8 +813,7 @@ struct bfq_data {
+ 	 * Depth limits used in bfq_limit_depth (see comments on the
+ 	 * function)
+ 	 */
+-	unsigned int word_depths[2][2];
+-	unsigned int full_depth_shift;
++	unsigned int async_depths[2][2];
+ 
+ 	/*
+ 	 * Number of independent actuators. This is equal to 1 in
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index dec1cd4f1f5b6e..32d11305d51bb2 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3169,8 +3169,10 @@ void blk_mq_submit_bio(struct bio *bio)
+ 	if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
+ 		goto queue_exit;
+ 
+-	if (blk_queue_is_zoned(q) && blk_zone_plug_bio(bio, nr_segs))
+-		goto queue_exit;
++	if (bio_needs_zone_write_plugging(bio)) {
++		if (blk_zone_plug_bio(bio, nr_segs))
++			goto queue_exit;
++	}
+ 
+ new_request:
+ 	if (rq) {
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 1a82980d52e93a..44dabc636a592f 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -792,7 +792,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
+ 	}
+ 
+ 	/* chunk_sectors a multiple of the physical block size? */
+-	if ((t->chunk_sectors << 9) & (t->physical_block_size - 1)) {
++	if (t->chunk_sectors % (t->physical_block_size >> SECTOR_SHIFT)) {
+ 		t->chunk_sectors = 0;
+ 		t->flags |= BLK_FLAG_MISALIGNED;
+ 		ret = -1;
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index c611444480b320..de39746de18b48 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -821,7 +821,7 @@ static void blk_queue_release(struct kobject *kobj)
+ 	/* nothing to do here, all data is associated with the parent gendisk */
+ }
+ 
+-static const struct kobj_type blk_queue_ktype = {
++const struct kobj_type blk_queue_ktype = {
+ 	.default_groups = blk_queue_attr_groups,
+ 	.sysfs_ops	= &queue_sysfs_ops,
+ 	.release	= blk_queue_release,
+@@ -849,15 +849,14 @@ int blk_register_queue(struct gendisk *disk)
+ 	struct request_queue *q = disk->queue;
+ 	int ret;
+ 
+-	kobject_init(&disk->queue_kobj, &blk_queue_ktype);
+ 	ret = kobject_add(&disk->queue_kobj, &disk_to_dev(disk)->kobj, "queue");
+ 	if (ret < 0)
+-		goto out_put_queue_kobj;
++		return ret;
+ 
+ 	if (queue_is_mq(q)) {
+ 		ret = blk_mq_sysfs_register(disk);
+ 		if (ret)
+-			goto out_put_queue_kobj;
++			goto out_del_queue_kobj;
+ 	}
+ 	mutex_lock(&q->sysfs_lock);
+ 
+@@ -908,8 +907,8 @@ int blk_register_queue(struct gendisk *disk)
+ 	mutex_unlock(&q->sysfs_lock);
+ 	if (queue_is_mq(q))
+ 		blk_mq_sysfs_unregister(disk);
+-out_put_queue_kobj:
+-	kobject_put(&disk->queue_kobj);
++out_del_queue_kobj:
++	kobject_del(&disk->queue_kobj);
+ 	return ret;
+ }
+ 
+@@ -960,5 +959,4 @@ void blk_unregister_queue(struct gendisk *disk)
+ 		elevator_set_none(q);
+ 
+ 	blk_debugfs_remove(disk);
+-	kobject_put(&disk->queue_kobj);
+ }
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 351d659280e116..efe71b1a1da138 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1116,25 +1116,7 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ {
+ 	struct block_device *bdev = bio->bi_bdev;
+ 
+-	if (!bdev->bd_disk->zone_wplugs_hash)
+-		return false;
+-
+-	/*
+-	 * If the BIO already has the plugging flag set, then it was already
+-	 * handled through this path and this is a submission from the zone
+-	 * plug bio submit work.
+-	 */
+-	if (bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING))
+-		return false;
+-
+-	/*
+-	 * We do not need to do anything special for empty flush BIOs, e.g
+-	 * BIOs such as issued by blkdev_issue_flush(). The is because it is
+-	 * the responsibility of the user to first wait for the completion of
+-	 * write operations for flush to have any effect on the persistence of
+-	 * the written data.
+-	 */
+-	if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++	if (WARN_ON_ONCE(!bdev->bd_disk->zone_wplugs_hash))
+ 		return false;
+ 
+ 	/*
+diff --git a/block/blk.h b/block/blk.h
+index fae7653a941f5a..4746a7704856af 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -19,6 +19,7 @@ struct elevator_type;
+ /* Max future timer expiry for timeouts */
+ #define BLK_MAX_TIMEOUT		(5 * HZ)
+ 
++extern const struct kobj_type blk_queue_ktype;
+ extern struct dentry *blk_debugfs_root;
+ 
+ struct blk_flush_queue {
+diff --git a/block/genhd.c b/block/genhd.c
+index c26733f6324b25..9bbc38d1279266 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -1303,6 +1303,7 @@ static void disk_release(struct device *dev)
+ 	disk_free_zone_resources(disk);
+ 	xa_destroy(&disk->part_tbl);
+ 
++	kobject_put(&disk->queue_kobj);
+ 	disk->queue->disk = NULL;
+ 	blk_put_queue(disk->queue);
+ 
+@@ -1486,6 +1487,7 @@ struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
+ 	INIT_LIST_HEAD(&disk->slave_bdevs);
+ #endif
+ 	mutex_init(&disk->rqos_state_mutex);
++	kobject_init(&disk->queue_kobj, &blk_queue_ktype);
+ 	return disk;
+ 
+ out_erase_part0:
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index 4dba8405bd015d..bfd9a40bb33d44 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -157,10 +157,7 @@ struct kyber_queue_data {
+ 	 */
+ 	struct sbitmap_queue domain_tokens[KYBER_NUM_DOMAINS];
+ 
+-	/*
+-	 * Async request percentage, converted to per-word depth for
+-	 * sbitmap_get_shallow().
+-	 */
++	/* Number of allowed async requests. */
+ 	unsigned int async_depth;
+ 
+ 	struct kyber_cpu_latency __percpu *cpu_latency;
+@@ -454,10 +451,8 @@ static void kyber_depth_updated(struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct kyber_queue_data *kqd = hctx->queue->elevator->elevator_data;
+ 	struct blk_mq_tags *tags = hctx->sched_tags;
+-	unsigned int shift = tags->bitmap_tags.sb.shift;
+-
+-	kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
+ 
++	kqd->async_depth = hctx->queue->nr_requests * KYBER_ASYNC_PERCENT / 100U;
+ 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, kqd->async_depth);
+ }
+ 
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 2edf1cac06d5b8..9ab6c62566952b 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -487,20 +487,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ 	return rq;
+ }
+ 
+-/*
+- * 'depth' is a number in the range 1..INT_MAX representing a number of
+- * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since
+- * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow().
+- * Values larger than q->nr_requests have the same effect as q->nr_requests.
+- */
+-static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
+-{
+-	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
+-	const unsigned int nrr = hctx->queue->nr_requests;
+-
+-	return ((qdepth << bt->sb.shift) + nrr - 1) / nrr;
+-}
+-
+ /*
+  * Called by __blk_mq_alloc_request(). The shallow_depth value set by this
+  * function is used by __blk_mq_get_tag().
+@@ -517,7 +503,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ 	 * Throttle asynchronous requests and writes such that these requests
+ 	 * do not block the allocation of synchronous requests.
+ 	 */
+-	data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth);
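++	/* shallow_depth is now in requests; the sbitmap core handles any per-word scaling. */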
++	data->shallow_depth = dd->async_depth;
+ }
+ 
+ /* Called by blk_mq_update_nr_requests(). */
+diff --git a/crypto/jitterentropy-kcapi.c b/crypto/jitterentropy-kcapi.c
+index c24d4ff2b4a8b0..1266eb790708b8 100644
+--- a/crypto/jitterentropy-kcapi.c
++++ b/crypto/jitterentropy-kcapi.c
+@@ -144,7 +144,7 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
+ 	 * Inject the data from the previous loop into the pool. This data is
+ 	 * not considered to contain any entropy, but it stirs the pool a bit.
+ 	 */
+-	ret = crypto_shash_update(desc, intermediary, sizeof(intermediary));
++	ret = crypto_shash_update(hash_state_desc, intermediary, sizeof(intermediary));
+ 	if (ret)
+ 		goto err;
+ 
+@@ -157,11 +157,12 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
+ 	 * conditioning operation to have an identical amount of input data
+ 	 * according to section 3.1.5.
+ 	 */
+-	if (!stuck) {
+-		ret = crypto_shash_update(hash_state_desc, (u8 *)&time,
+-					  sizeof(__u64));
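++	/*
++	 * A stuck measurement must not be credited as entropy, so feed a
++	 * constant instead while keeping the conditioner input size fixed.
++	 */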
++	if (stuck) {
++		time = 0;
+ 	}
+ 
++	ret = crypto_shash_update(hash_state_desc, (u8 *)&time, sizeof(__u64));
++
+ err:
+ 	shash_desc_zero(desc);
+ 	memzero_explicit(intermediary, sizeof(intermediary));
+diff --git a/drivers/accel/habanalabs/common/memory.c b/drivers/accel/habanalabs/common/memory.c
+index 601fdbe701790b..61472a381904ec 100644
+--- a/drivers/accel/habanalabs/common/memory.c
++++ b/drivers/accel/habanalabs/common/memory.c
+@@ -1829,9 +1829,6 @@ static void hl_release_dmabuf(struct dma_buf *dmabuf)
+ 	struct hl_dmabuf_priv *hl_dmabuf = dmabuf->priv;
+ 	struct hl_ctx *ctx;
+ 
+-	if (!hl_dmabuf)
+-		return;
+-
+ 	ctx = hl_dmabuf->ctx;
+ 
+ 	if (hl_dmabuf->memhash_hnode)
+@@ -1859,7 +1856,12 @@ static int export_dmabuf(struct hl_ctx *ctx,
+ {
+ 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ 	struct hl_device *hdev = ctx->hdev;
+-	int rc, fd;
++	CLASS(get_unused_fd, fd)(flags);
++
++	if (fd < 0) {
++		dev_err(hdev->dev, "failed to get a file descriptor for a dma-buf, %d\n", fd);
++		return fd;
++	}
+ 
+ 	exp_info.ops = &habanalabs_dmabuf_ops;
+ 	exp_info.size = total_size;
+@@ -1872,13 +1874,6 @@ static int export_dmabuf(struct hl_ctx *ctx,
+ 		return PTR_ERR(hl_dmabuf->dmabuf);
+ 	}
+ 
+-	fd = dma_buf_fd(hl_dmabuf->dmabuf, flags);
+-	if (fd < 0) {
+-		dev_err(hdev->dev, "failed to get a file descriptor for a dma-buf, %d\n", fd);
+-		rc = fd;
+-		goto err_dma_buf_put;
+-	}
+-
+ 	hl_dmabuf->ctx = ctx;
+ 	hl_ctx_get(hl_dmabuf->ctx);
+ 	atomic_inc(&ctx->hdev->dmabuf_export_cnt);
+@@ -1890,13 +1885,9 @@ static int export_dmabuf(struct hl_ctx *ctx,
+ 	get_file(ctx->hpriv->file_priv->filp);
+ 
+ 	*dmabuf_fd = fd;
++	fd_install(take_fd(fd), hl_dmabuf->dmabuf->file);
+ 
+ 	return 0;
+-
+-err_dma_buf_put:
+-	hl_dmabuf->dmabuf->priv = NULL;
+-	dma_buf_put(hl_dmabuf->dmabuf);
+-	return rc;
+ }
+ 
+ static int validate_export_params_common(struct hl_device *hdev, u64 addr, u64 size, u64 offset)
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 7cf6101cb4c731..2a99f5eb69629a 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -275,7 +275,7 @@ static inline int acpi_processor_hotadd_init(struct acpi_processor *pr,
+ 
+ static int acpi_processor_get_info(struct acpi_device *device)
+ {
+-	union acpi_object object = { 0 };
++	union acpi_object object = { .processor = { 0 } };
+ 	struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
+ 	struct acpi_processor *pr = acpi_driver_data(device);
+ 	int device_declaration = 0;
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index f0584ccad45191..bda33a0f0a0131 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -902,6 +902,17 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 		}
+ 	}
+ 
++	/*
++	 * If no memory failure work is queued for abnormal synchronous
++	 * errors, do a force kill.
++	 */
++	if (sync && !queued) {
++		dev_err(ghes->dev,
++			HW_ERR GHES_PFX "%s:%d: synchronous unrecoverable error (SIGBUS)\n",
++			current->comm, task_pid_nr(current));
++		force_sig(SIGBUS);
++	}
++
+ 	return queued;
+ }
+ 
+@@ -1088,6 +1099,8 @@ static void __ghes_panic(struct ghes *ghes,
+ 
+ 	__ghes_print_estatus(KERN_EMERG, ghes->generic, estatus);
+ 
++	add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
++
+ 	ghes_clear_estatus(ghes, estatus, buf_paddr, fixmap_idx);
+ 
+ 	if (!panic_timeout)
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 75c7db8b156a20..7855bbf752b1e8 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2033,7 +2033,7 @@ void __init acpi_ec_ecdt_probe(void)
+ 		goto out;
+ 	}
+ 
+-	if (!strstarts(ecdt_ptr->id, "\\")) {
++	if (!strlen(ecdt_ptr->id)) {
+ 		/*
+ 		 * The ECDT table on some MSI notebooks contains invalid data, together
+ 		 * with an empty ID string ("").
+@@ -2042,9 +2042,13 @@ void __init acpi_ec_ecdt_probe(void)
+ 		 * a "fully qualified reference to the (...) embedded controller device",
+ 		 * so this string always has to start with a backslash.
+ 		 *
+-		 * By verifying this we can avoid such faulty ECDT tables in a safe way.
++		 * However, some ThinkBook machines have an ECDT table with a valid EC
++		 * description but an invalid ID string ("_SB.PC00.LPCB.EC0").
++		 *
++		 * Because of this, only check whether the ID string is empty, so
++		 * that just the obviously broken tables are rejected.
+ 		 */
+-		pr_err(FW_BUG "Ignoring ECDT due to invalid ID string \"%s\"\n", ecdt_ptr->id);
++		pr_err(FW_BUG "Ignoring ECDT due to empty ID string\n");
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index e549914a636c66..be033bbb126a44 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -85,8 +85,6 @@ static u64 efi_pa_va_lookup(efi_guid_t *guid, u64 pa)
+ 		}
+ 	}
+ 
+-	pr_warn("Failed to find VA for GUID: %pUL, PA: 0x%llx", guid, pa);
+-
+ 	return 0;
+ }
+ 
+@@ -154,13 +152,37 @@ acpi_parse_prmt(union acpi_subtable_headers *header, const unsigned long end)
+ 		guid_copy(&th->guid, (guid_t *)handler_info->handler_guid);
+ 		th->handler_addr =
+ 			(void *)efi_pa_va_lookup(&th->guid, handler_info->handler_address);
++		/*
++		 * Print a warning message if handler_addr is zero, which is
++		 * not expected to ever happen.
++		 */
++		if (unlikely(!th->handler_addr))
++			pr_warn("Failed to find VA of handler for GUID: %pUL, PA: 0x%llx",
++				&th->guid, handler_info->handler_address);
+ 
+ 		th->static_data_buffer_addr =
+ 			efi_pa_va_lookup(&th->guid, handler_info->static_data_buffer_address);
++		/*
++		 * According to the PRM specification, static_data_buffer_address can be zero,
++		 * so avoid printing a warning message in that case.  Otherwise, if the
++		 * return value of efi_pa_va_lookup() is zero, print the message.
++		 */
++		if (unlikely(!th->static_data_buffer_addr && handler_info->static_data_buffer_address))
++			pr_warn("Failed to find VA of static data buffer for GUID: %pUL, PA: 0x%llx",
++				&th->guid, handler_info->static_data_buffer_address);
+ 
+ 		th->acpi_param_buffer_addr =
+ 			efi_pa_va_lookup(&th->guid, handler_info->acpi_param_buffer_address);
+ 
++		/*
++		 * According to the PRM specification, acpi_param_buffer_address can be zero,
++		 * so avoid printing a warning message in that case.  Otherwise, if the
++		 * return value of efi_pa_va_lookup() is zero, print the message.
++		 */
++		if (unlikely(!th->acpi_param_buffer_addr && handler_info->acpi_param_buffer_address))
++			pr_warn("Failed to find VA of acpi param buffer for GUID: %pUL, PA: 0x%llx",
++				&th->guid, handler_info->acpi_param_buffer_address);
++
+ 	} while (++cur_handler < tm->handler_count && (handler_info = get_next_handler(handler_info)));
+ 
+ 	return 0;
+diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c
+index 64b8d1e1959431..8972446b71625a 100644
+--- a/drivers/acpi/processor_perflib.c
++++ b/drivers/acpi/processor_perflib.c
+@@ -173,6 +173,9 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ {
+ 	unsigned int cpu;
+ 
++	if (ignore_ppc == 1)
++		return;
++
+ 	for_each_cpu(cpu, policy->related_cpus) {
+ 		struct acpi_processor *pr = per_cpu(processors, cpu);
+ 		int ret;
+@@ -193,6 +196,14 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
+ 		if (ret < 0)
+ 			pr_err("Failed to add freq constraint for CPU%d (%d)\n",
+ 			       cpu, ret);
++
++		if (!pr->performance)
++			continue;
++
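++		/*
++		 * Re-evaluate _PPC so the current platform limit is applied
++		 * to the new policy right away.
++		 */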
++		ret = acpi_processor_get_platform_limit(pr);
++		if (ret)
++			pr_err("Failed to update freq constraint for CPU%d (%d)\n",
++			       cpu, ret);
+ 	}
+ }
+ 
+diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
+index c88735c5484854..486af3ec3c0272 100644
+--- a/drivers/android/binder_alloc_selftest.c
++++ b/drivers/android/binder_alloc_selftest.c
+@@ -142,12 +142,12 @@ static void binder_selftest_free_buf(struct binder_alloc *alloc,
+ 	for (i = 0; i < BUFFER_NUM; i++)
+ 		binder_alloc_free_buf(alloc, buffers[seq[i]]);
+ 
+-	for (i = 0; i < end / PAGE_SIZE; i++) {
+ 		/**
+ 		 * Error message on a free page can be false positive
+ 		 * if binder shrinker ran during binder_alloc_free_buf
+ 		 * calls above.
+ 		 */
++	for (i = 0; i <= (end - 1) / PAGE_SIZE; i++) {
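++		/*
++		 * (end - 1) / PAGE_SIZE indexes the last page touched by the
++		 * allocations, so a trailing partial page is checked too.
++		 */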
+ 		if (list_empty(page_to_lru(alloc->pages[i]))) {
+ 			pr_err_size_seq(sizes, seq);
+ 			pr_err("expect lru but is %s at page index %d\n",
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index aa93b0ecbbc692..4f69da480e8669 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -1778,11 +1778,21 @@ static void ahci_update_initial_lpm_policy(struct ata_port *ap)
+ 		return;
+ 	}
+ 
++	/* If no Partial or no Slumber, we cannot support DIPM. */
++	if ((ap->host->flags & ATA_HOST_NO_PART) ||
++	    (ap->host->flags & ATA_HOST_NO_SSC)) {
++		ata_port_dbg(ap, "Host does not support DIPM\n");
++		ap->flags |= ATA_FLAG_NO_DIPM;
++	}
++
+ 	/* If no LPM states are supported by the HBA, do not bother with LPM */
+ 	if ((ap->host->flags & ATA_HOST_NO_PART) &&
+ 	    (ap->host->flags & ATA_HOST_NO_SSC) &&
+ 	    (ap->host->flags & ATA_HOST_NO_DEVSLP)) {
+-		ata_port_dbg(ap, "no LPM states supported, not enabling LPM\n");
++		ata_port_dbg(ap,
++			"No LPM states supported, forcing LPM max_power\n");
++		ap->flags |= ATA_FLAG_NO_LPM;
++		ap->target_lpm_policy = ATA_LPM_MAX_POWER;
+ 		return;
+ 	}
+ 
+diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
+index d441246fa357a1..6e19ae97e6c87f 100644
+--- a/drivers/ata/ata_piix.c
++++ b/drivers/ata/ata_piix.c
+@@ -1089,6 +1089,7 @@ static struct ata_port_operations ich_pata_ops = {
+ };
+ 
+ static struct attribute *piix_sidpr_shost_attrs[] = {
++	&dev_attr_link_power_management_supported.attr,
+ 	&dev_attr_link_power_management_policy.attr,
+ 	NULL
+ };
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 4e9c82f36df17f..41130abe90c4c2 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -111,6 +111,7 @@ static DEVICE_ATTR(em_buffer, S_IWUSR | S_IRUGO,
+ static DEVICE_ATTR(em_message_supported, S_IRUGO, ahci_show_em_supported, NULL);
+ 
+ static struct attribute *ahci_shost_attrs[] = {
++	&dev_attr_link_power_management_supported.attr,
+ 	&dev_attr_link_power_management_policy.attr,
+ 	&dev_attr_em_message_type.attr,
+ 	&dev_attr_em_message.attr,
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index cb46ce276bb119..3d5cb38d8557d4 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -900,14 +900,52 @@ static const char *ata_lpm_policy_names[] = {
+ 	[ATA_LPM_MIN_POWER]		= "min_power",
+ };
+ 
++/*
++ * Check if a port supports link power management.
++ * Must be called with the port locked.
++ */
++static bool ata_scsi_lpm_supported(struct ata_port *ap)
++{
++	struct ata_link *link;
++	struct ata_device *dev;
++
++	if (ap->flags & ATA_FLAG_NO_LPM)
++		return false;
++
++	ata_for_each_link(link, ap, EDGE) {
++		ata_for_each_dev(dev, &ap->link, ENABLED) {
++			if (dev->quirks & ATA_QUIRK_NOLPM)
++				return false;
++		}
++	}
++
++	return true;
++}
++
++static ssize_t ata_scsi_lpm_supported_show(struct device *dev,
++				 struct device_attribute *attr, char *buf)
++{
++	struct Scsi_Host *shost = class_to_shost(dev);
++	struct ata_port *ap = ata_shost_to_port(shost);
++	unsigned long flags;
++	bool supported;
++
++	spin_lock_irqsave(ap->lock, flags);
++	supported = ata_scsi_lpm_supported(ap);
++	spin_unlock_irqrestore(ap->lock, flags);
++
++	return sysfs_emit(buf, "%d\n", supported);
++}
++DEVICE_ATTR(link_power_management_supported, S_IRUGO,
++	    ata_scsi_lpm_supported_show, NULL);
++EXPORT_SYMBOL_GPL(dev_attr_link_power_management_supported);
++
+ static ssize_t ata_scsi_lpm_store(struct device *device,
+ 				  struct device_attribute *attr,
+ 				  const char *buf, size_t count)
+ {
+ 	struct Scsi_Host *shost = class_to_shost(device);
+ 	struct ata_port *ap = ata_shost_to_port(shost);
+-	struct ata_link *link;
+-	struct ata_device *dev;
+ 	enum ata_lpm_policy policy;
+ 	unsigned long flags;
+ 
+@@ -924,13 +962,9 @@ static ssize_t ata_scsi_lpm_store(struct device *device,
+ 
+ 	spin_lock_irqsave(ap->lock, flags);
+ 
+-	ata_for_each_link(link, ap, EDGE) {
+-		ata_for_each_dev(dev, &ap->link, ENABLED) {
+-			if (dev->quirks & ATA_QUIRK_NOLPM) {
+-				count = -EOPNOTSUPP;
+-				goto out_unlock;
+-			}
+-		}
++	if (!ata_scsi_lpm_supported(ap)) {
++		count = -EOPNOTSUPP;
++		goto out_unlock;
+ 	}
+ 
+ 	ap->target_lpm_policy = policy;
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index c55a7c70bc1a88..1ef26216f9718a 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1854,6 +1854,11 @@ void pm_runtime_reinit(struct device *dev)
+ 				pm_runtime_put(dev->parent);
+ 		}
+ 	}
++	/*
++	 * Clear power.needs_force_resume in case it has been set by
++	 * pm_runtime_force_suspend() invoked from a driver remove callback.
++	 */
++	dev->power.needs_force_resume = false;
+ }
+ 
+ /**
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index d1585f07377606..4aac12d38215f1 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -816,7 +816,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 						     d->mask_buf[i],
+ 						     chip->irq_drv_data);
+ 			if (ret)
+-				goto err_alloc;
++				goto err_mutex;
+ 		}
+ 
+ 		if (chip->mask_base && !chip->handle_mask_sync) {
+@@ -827,7 +827,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 			if (ret) {
+ 				dev_err(map->dev, "Failed to set masks in 0x%x: %d\n",
+ 					reg, ret);
+-				goto err_alloc;
++				goto err_mutex;
+ 			}
+ 		}
+ 
+@@ -838,7 +838,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 			if (ret) {
+ 				dev_err(map->dev, "Failed to set masks in 0x%x: %d\n",
+ 					reg, ret);
+-				goto err_alloc;
++				goto err_mutex;
+ 			}
+ 		}
+ 
+@@ -855,7 +855,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 			if (ret != 0) {
+ 				dev_err(map->dev, "Failed to read IRQ status: %d\n",
+ 					ret);
+-				goto err_alloc;
++				goto err_mutex;
+ 			}
+ 		}
+ 
+@@ -879,7 +879,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 			if (ret != 0) {
+ 				dev_err(map->dev, "Failed to ack 0x%x: %d\n",
+ 					reg, ret);
+-				goto err_alloc;
++				goto err_mutex;
+ 			}
+ 		}
+ 	}
+@@ -901,7 +901,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 			if (ret != 0) {
+ 				dev_err(map->dev, "Failed to set masks in 0x%x: %d\n",
+ 					reg, ret);
+-				goto err_alloc;
++				goto err_mutex;
+ 			}
+ 		}
+ 	}
+@@ -910,7 +910,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 	if (chip->status_is_level) {
+ 		ret = read_irq_data(d);
+ 		if (ret < 0)
+-			goto err_alloc;
++			goto err_mutex;
+ 
+ 		memcpy(d->prev_status_buf, d->status_buf,
+ 		       array_size(d->chip->num_regs, sizeof(d->prev_status_buf[0])));
+@@ -918,7 +918,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 
+ 	ret = regmap_irq_create_domain(fwnode, irq_base, chip, d);
+ 	if (ret)
+-		goto err_alloc;
++		goto err_mutex;
+ 
+ 	ret = request_threaded_irq(irq, NULL, regmap_irq_thread,
+ 				   irq_flags | IRQF_ONESHOT,
+@@ -935,6 +935,8 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ 
+ err_domain:
+ 	/* Should really dispose of the domain but... */
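++	/*
++	 * d->lock is initialized earlier in this function, so all failure
++	 * paths from that point on must destroy it.
++	 */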
++err_mutex:
++	mutex_destroy(&d->lock);
+ err_alloc:
+ 	kfree(d->type_buf);
+ 	kfree(d->type_buf_def);
+@@ -1027,6 +1029,7 @@ void regmap_del_irq_chip(int irq, struct regmap_irq_chip_data *d)
+ 			kfree(d->config_buf[i]);
+ 		kfree(d->config_buf);
+ 	}
++	mutex_destroy(&d->lock);
+ 	kfree(d);
+ }
+ EXPORT_SYMBOL_GPL(regmap_del_irq_chip);
+diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
+index e5a2e5f7887b86..975024cf03c594 100644
+--- a/drivers/block/drbd/drbd_receiver.c
++++ b/drivers/block/drbd/drbd_receiver.c
+@@ -2500,7 +2500,11 @@ static int handle_write_conflicts(struct drbd_device *device,
+ 			peer_req->w.cb = superseded ? e_send_superseded :
+ 						   e_send_retry_write;
+ 			list_add_tail(&peer_req->w.list, &device->done_ee);
+-			queue_work(connection->ack_sender, &peer_req->peer_device->send_acks_work);
++			/* put is in drbd_send_acks_wf() */
++			kref_get(&device->kref);
++			if (!queue_work(connection->ack_sender,
++					&peer_req->peer_device->send_acks_work))
++				kref_put(&device->kref, drbd_destroy_device);
+ 
+ 			err = -ENOENT;
+ 			goto out;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 8d994cae3b83ba..1b6ee91f8eb987 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1431,17 +1431,34 @@ static int loop_set_dio(struct loop_device *lo, unsigned long arg)
+ 	return 0;
+ }
+ 
+-static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
++static int loop_set_block_size(struct loop_device *lo, blk_mode_t mode,
++			       struct block_device *bdev, unsigned long arg)
+ {
+ 	struct queue_limits lim;
+ 	unsigned int memflags;
+ 	int err = 0;
+ 
+-	if (lo->lo_state != Lo_bound)
+-		return -ENXIO;
++	/*
++	 * If we don't hold an exclusive handle for the device, upgrade to it
++	 * here to avoid changing the device under its exclusive owner.
++	 */
++	if (!(mode & BLK_OPEN_EXCL)) {
++		err = bd_prepare_to_claim(bdev, loop_set_block_size, NULL);
++		if (err)
++			return err;
++	}
++
++	err = mutex_lock_killable(&lo->lo_mutex);
++	if (err)
++		goto abort_claim;
++
++	if (lo->lo_state != Lo_bound) {
++		err = -ENXIO;
++		goto unlock;
++	}
+ 
+ 	if (lo->lo_queue->limits.logical_block_size == arg)
+-		return 0;
++		goto unlock;
+ 
+ 	sync_blockdev(lo->lo_device);
+ 	invalidate_bdev(lo->lo_device);
+@@ -1454,6 +1471,11 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
+ 	loop_update_dio(lo);
+ 	blk_mq_unfreeze_queue(lo->lo_queue, memflags);
+ 
++unlock:
++	mutex_unlock(&lo->lo_mutex);
++abort_claim:
++	if (!(mode & BLK_OPEN_EXCL))
++		bd_abort_claiming(bdev, loop_set_block_size);
+ 	return err;
+ }
+ 
+@@ -1472,9 +1494,6 @@ static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd,
+ 	case LOOP_SET_DIRECT_IO:
+ 		err = loop_set_dio(lo, arg);
+ 		break;
+-	case LOOP_SET_BLOCK_SIZE:
+-		err = loop_set_block_size(lo, arg);
+-		break;
+ 	default:
+ 		err = -EINVAL;
+ 	}
+@@ -1529,9 +1548,12 @@ static int lo_ioctl(struct block_device *bdev, blk_mode_t mode,
+ 		break;
+ 	case LOOP_GET_STATUS64:
+ 		return loop_get_status64(lo, argp);
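++	/*
++	 * LOOP_SET_BLOCK_SIZE is handled here rather than in
++	 * lo_simple_ioctl() because it needs the open mode and the bdev to
++	 * manage an exclusive claim on the device.
++	 */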
++	case LOOP_SET_BLOCK_SIZE:
++		if (!(mode & BLK_OPEN_WRITE) && !capable(CAP_SYS_ADMIN))
++			return -EPERM;
++		return loop_set_block_size(lo, mode, bdev, arg);
+ 	case LOOP_SET_CAPACITY:
+ 	case LOOP_SET_DIRECT_IO:
+-	case LOOP_SET_BLOCK_SIZE:
+ 		if (!(mode & BLK_OPEN_WRITE) && !capable(CAP_SYS_ADMIN))
+ 			return -EPERM;
+ 		fallthrough;
+diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
+index b5727dea15bde7..7af21fe6767172 100644
+--- a/drivers/block/sunvdc.c
++++ b/drivers/block/sunvdc.c
+@@ -957,8 +957,10 @@ static bool vdc_port_mpgroup_check(struct vio_dev *vdev)
+ 	dev = device_find_child(vdev->dev.parent, &port_data,
+ 				vdc_device_probed);
+ 
+-	if (dev)
++	if (dev) {
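++		/* Drop the reference device_find_child() took on the child. */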
++		put_device(dev);
+ 		return true;
++	}
+ 
+ 	return false;
+ }
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 3e60558bf52595..dabb468fa0b99a 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -215,7 +215,7 @@ struct ublk_device {
+ 
+ 	struct completion	completion;
+ 	unsigned int		nr_queues_ready;
+-	unsigned int		nr_privileged_daemon;
++	bool 			unprivileged_daemons;
+ 	struct mutex cancel_mutex;
+ 	bool canceling;
+ 	pid_t 	ublksrv_tgid;
+@@ -1532,7 +1532,7 @@ static void ublk_reset_ch_dev(struct ublk_device *ub)
+ 	/* set to NULL, otherwise new tasks cannot mmap io_cmd_buf */
+ 	ub->mm = NULL;
+ 	ub->nr_queues_ready = 0;
+-	ub->nr_privileged_daemon = 0;
++	ub->unprivileged_daemons = false;
+ 	ub->ublksrv_tgid = -1;
+ }
+ 
+@@ -1945,12 +1945,10 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
+ 	__must_hold(&ub->mutex)
+ {
+ 	ubq->nr_io_ready++;
+-	if (ublk_queue_ready(ubq)) {
++	if (ublk_queue_ready(ubq))
+ 		ub->nr_queues_ready++;
+-
+-		if (capable(CAP_SYS_ADMIN))
+-			ub->nr_privileged_daemon++;
+-	}
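++	/*
++	 * Sticky until ublk_reset_ch_dev(): a single untrusted daemon task
++	 * is enough to suppress partition scanning at start time.
++	 */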
++	if (!ub->unprivileged_daemons && !capable(CAP_SYS_ADMIN))
++		ub->unprivileged_daemons = true;
+ 
+ 	if (ub->nr_queues_ready == ub->dev_info.nr_hw_queues) {
+ 		/* now we are ready for handling ublk io request */
+@@ -2759,8 +2757,8 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
+ 
+ 	ublk_apply_params(ub);
+ 
+-	/* don't probe partitions if any one ubq daemon is un-trusted */
+-	if (ub->nr_privileged_daemon != ub->nr_queues_ready)
++	/* don't probe partitions if any daemon task is untrusted */
++	if (ub->unprivileged_daemons)
+ 		set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
+ 
+ 	ublk_get_device(ub);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e8977fff421226..9efdd111baf5f2 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -515,6 +515,7 @@ static const struct usb_device_id quirks_table[] = {
+ 	/* Realtek 8851BE Bluetooth devices */
+ 	{ USB_DEVICE(0x0bda, 0xb850), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3600), .driver_info = BTUSB_REALTEK },
++	{ USB_DEVICE(0x13d3, 0x3601), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Realtek 8851BU Bluetooth devices */
+ 	{ USB_DEVICE(0x3625, 0x010b), .driver_info = BTUSB_REALTEK |
+@@ -709,6 +710,8 @@ static const struct usb_device_id quirks_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe139), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0489, 0xe14e), .driver_info = BTUSB_MEDIATEK |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe14f), .driver_info = BTUSB_MEDIATEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe150), .driver_info = BTUSB_MEDIATEK |
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 92bd133e7c456e..7655a389dc5975 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -43,6 +43,7 @@
+  * @mru_default: default MRU size for MBIM network packets
+  * @sideband_wake: Devices using dedicated sideband GPIO for wakeup instead
+  *		   of inband wake support (such as sdx24)
++ * @no_m3: M3 not supported
+  */
+ struct mhi_pci_dev_info {
+ 	const struct mhi_controller_config *config;
+@@ -54,6 +55,7 @@ struct mhi_pci_dev_info {
+ 	unsigned int dma_data_width;
+ 	unsigned int mru_default;
+ 	bool sideband_wake;
++	bool no_m3;
+ };
+ 
+ #define MHI_CHANNEL_CONFIG_UL(ch_num, ch_name, el_count, ev_ring) \
+@@ -295,6 +297,7 @@ static const struct mhi_pci_dev_info mhi_qcom_qdu100_info = {
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ 	.dma_data_width = 32,
+ 	.sideband_wake = false,
++	.no_m3 = true,
+ };
+ 
+ static const struct mhi_channel_config mhi_qcom_sa8775p_channels[] = {
+@@ -818,6 +821,16 @@ static const struct mhi_pci_dev_info mhi_telit_fn920c04_info = {
+ 	.edl_trigger = true,
+ };
+ 
++static const struct mhi_pci_dev_info mhi_telit_fn990b40_info = {
++	.name = "telit-fn990b40",
++	.config = &modem_telit_fn920c04_config,
++	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
++	.dma_data_width = 32,
++	.sideband_wake = false,
++	.mru_default = 32768,
++	.edl_trigger = true,
++};
++
+ static const struct mhi_pci_dev_info mhi_netprisma_lcur57_info = {
+ 	.name = "netprisma-lcur57",
+ 	.edl = "qcom/prog_firehose_sdx24.mbn",
+@@ -865,6 +878,9 @@ static const struct pci_device_id mhi_pci_id_table[] = {
+ 		.driver_data = (kernel_ulong_t) &mhi_telit_fe990a_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
++	/* Telit FN990B40 (sdx72) */
++	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0309, 0x1c5d, 0x201a),
++		.driver_data = (kernel_ulong_t) &mhi_telit_fn990b40_info },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0309),
+ 		.driver_data = (kernel_ulong_t) &mhi_qcom_sdx75_info },
+ 	/* QDU100, x100-DU */
+@@ -1306,8 +1322,8 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	/* start health check */
+ 	mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
+ 
+-	/* Only allow runtime-suspend if PME capable (for wakeup) */
+-	if (pci_pme_capable(pdev, PCI_D3hot)) {
++	/* Allow runtime suspend only if both PME from D3Hot and M3 are supported */
++	if (pci_pme_capable(pdev, PCI_D3hot) && !(info->no_m3)) {
+ 		pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
+ 		pm_runtime_use_autosuspend(&pdev->dev);
+ 		pm_runtime_mark_last_busy(&pdev->dev);
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 064944ae9fdc39..8e9050f99e9eff 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -4607,10 +4607,10 @@ static int handle_one_recv_msg(struct ipmi_smi *intf,
+ 		 * The NetFN and Command in the response is not even
+ 		 * marginally correct.
+ 		 */
+-		dev_warn(intf->si_dev,
+-			 "BMC returned incorrect response, expected netfn %x cmd %x, got netfn %x cmd %x\n",
+-			 (msg->data[0] >> 2) | 1, msg->data[1],
+-			 msg->rsp[0] >> 2, msg->rsp[1]);
++		dev_warn_ratelimited(intf->si_dev,
++				     "BMC returned incorrect response, expected netfn %x cmd %x, got netfn %x cmd %x\n",
++				     (msg->data[0] >> 2) | 1, msg->data[1],
++				     msg->rsp[0] >> 2, msg->rsp[1]);
+ 
+ 		goto return_unspecified;
+ 	}
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index ab759b492fdd66..a013ddbf14662f 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -1146,14 +1146,8 @@ static struct ipmi_smi_watcher smi_watcher = {
+ 	.smi_gone = ipmi_smi_gone
+ };
+ 
+-static int action_op(const char *inval, char *outval)
++static int action_op_set_val(const char *inval)
+ {
+-	if (outval)
+-		strcpy(outval, action);
+-
+-	if (!inval)
+-		return 0;
+-
+ 	if (strcmp(inval, "reset") == 0)
+ 		action_val = WDOG_TIMEOUT_RESET;
+ 	else if (strcmp(inval, "none") == 0)
+@@ -1164,18 +1158,26 @@ static int action_op(const char *inval, char *outval)
+ 		action_val = WDOG_TIMEOUT_POWER_DOWN;
+ 	else
+ 		return -EINVAL;
+-	strcpy(action, inval);
+ 	return 0;
+ }
+ 
+-static int preaction_op(const char *inval, char *outval)
++static int action_op(const char *inval, char *outval)
+ {
++	int rv;
++
+ 	if (outval)
+-		strcpy(outval, preaction);
++		strcpy(outval, action);
+ 
+ 	if (!inval)
+ 		return 0;
++	rv = action_op_set_val(inval);
++	if (!rv)
++		strcpy(action, inval);
++	return rv;
++}
+ 
++static int preaction_op_set_val(const char *inval)
++{
+ 	if (strcmp(inval, "pre_none") == 0)
+ 		preaction_val = WDOG_PRETIMEOUT_NONE;
+ 	else if (strcmp(inval, "pre_smi") == 0)
+@@ -1188,18 +1190,26 @@ static int preaction_op(const char *inval, char *outval)
+ 		preaction_val = WDOG_PRETIMEOUT_MSG_INT;
+ 	else
+ 		return -EINVAL;
+-	strcpy(preaction, inval);
+ 	return 0;
+ }
+ 
+-static int preop_op(const char *inval, char *outval)
++static int preaction_op(const char *inval, char *outval)
+ {
++	int rv;
++
+ 	if (outval)
+-		strcpy(outval, preop);
++		strcpy(outval, preaction);
+ 
+ 	if (!inval)
+ 		return 0;
++	rv = preaction_op_set_val(inval);
++	if (!rv)
++		strcpy(preaction, inval);
++	return rv;
++}
+ 
++static int preop_op_set_val(const char *inval)
++{
+ 	if (strcmp(inval, "preop_none") == 0)
+ 		preop_val = WDOG_PREOP_NONE;
+ 	else if (strcmp(inval, "preop_panic") == 0)
+@@ -1208,7 +1218,22 @@ static int preop_op(const char *inval, char *outval)
+ 		preop_val = WDOG_PREOP_GIVE_DATA;
+ 	else
+ 		return -EINVAL;
+-	strcpy(preop, inval);
++	return 0;
++}
++
++static int preop_op(const char *inval, char *outval)
++{
++	int rv;
++
++	if (outval)
++		strcpy(outval, preop);
++
++	if (!inval)
++		return 0;
++
++	rv = preop_op_set_val(inval);
++	if (!rv)
++		strcpy(preop, inval);
+ 	return 0;
+ }
+ 
+@@ -1245,18 +1270,18 @@ static int __init ipmi_wdog_init(void)
+ {
+ 	int rv;
+ 
+-	if (action_op(action, NULL)) {
++	if (action_op_set_val(action)) {
+ 		action_op("reset", NULL);
+ 		pr_info("Unknown action '%s', defaulting to reset\n", action);
+ 	}
+ 
+-	if (preaction_op(preaction, NULL)) {
++	if (preaction_op_set_val(preaction)) {
+ 		preaction_op("pre_none", NULL);
+ 		pr_info("Unknown preaction '%s', defaulting to none\n",
+ 			preaction);
+ 	}
+ 
+-	if (preop_op(preop, NULL)) {
++	if (preop_op_set_val(preop)) {
+ 		preop_op("preop_none", NULL);
+ 		pr_info("Unknown preop '%s', defaulting to none\n", preop);
+ 	}
+diff --git a/drivers/char/misc.c b/drivers/char/misc.c
+index d5accc10a11009..5247d0ec0f4c53 100644
+--- a/drivers/char/misc.c
++++ b/drivers/char/misc.c
+@@ -296,8 +296,8 @@ static int __init misc_init(void)
+ 	if (err)
+ 		goto fail_remove;
+ 
+-	err = -EIO;
+-	if (__register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops))
++	err = __register_chrdev(MISC_MAJOR, 0, MINORMASK + 1, "misc", &misc_fops);
++	if (err < 0)
+ 		goto fail_printk;
+ 	return 0;
+ 
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 8d7e4da6ed538a..8d18b33aa62d08 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -82,6 +82,13 @@ static bool tpm_chip_req_canceled(struct tpm_chip *chip, u8 status)
+ 	return chip->ops->req_canceled(chip, status);
+ }
+ 
++static bool tpm_transmit_completed(u8 status, struct tpm_chip *chip)
++{
++	u8 status_masked = status & chip->ops->req_complete_mask;
++
++	return status_masked == chip->ops->req_complete_val;
++}
++
+ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
+ {
+ 	struct tpm_header *header = buf;
+@@ -129,8 +136,7 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
+ 	stop = jiffies + tpm_calc_ordinal_duration(chip, ordinal);
+ 	do {
+ 		u8 status = tpm_chip_status(chip);
+-		if ((status & chip->ops->req_complete_mask) ==
+-		    chip->ops->req_complete_val)
++		if (tpm_transmit_completed(status, chip))
+ 			goto out_recv;
+ 
+ 		if (tpm_chip_req_canceled(chip, status)) {
+@@ -142,6 +148,13 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip, void *buf, size_t bufsiz)
+ 		rmb();
+ 	} while (time_before(jiffies, stop));
+ 
++	/*
++	 * Check for completion one more time, just in case the device reported
++	 * it while the driver was sleeping in the busy loop above.
++	 */
++	if (tpm_transmit_completed(tpm_chip_status(chip), chip))
++		goto out_recv;
++
+ 	tpm_chip_cancel(chip);
+ 	dev_err(&chip->dev, "Operation Timed out\n");
+ 	return -ETIME;
+diff --git a/drivers/char/tpm/tpm_crb_ffa.c b/drivers/char/tpm/tpm_crb_ffa.c
+index 4ead61f012996f..462fcf6100208c 100644
+--- a/drivers/char/tpm/tpm_crb_ffa.c
++++ b/drivers/char/tpm/tpm_crb_ffa.c
+@@ -115,6 +115,7 @@ struct tpm_crb_ffa {
+ };
+ 
+ static struct tpm_crb_ffa *tpm_crb_ffa;
++static struct ffa_driver tpm_crb_ffa_driver;
+ 
+ static int tpm_crb_ffa_to_linux_errno(int errno)
+ {
+@@ -168,13 +169,23 @@ static int tpm_crb_ffa_to_linux_errno(int errno)
+  */
+ int tpm_crb_ffa_init(void)
+ {
++	int ret = 0;
++
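++	/*
++	 * When built in, the FF-A driver is registered here; when built as
++	 * a module, module_ffa_driver() takes care of registration instead.
++	 */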
++	if (!IS_MODULE(CONFIG_TCG_ARM_CRB_FFA)) {
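++	/* async_depth is now a number of requests, not a per-word sbitmap depth. */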
++		ret = ffa_register(&tpm_crb_ffa_driver);
++		if (ret) {
++			tpm_crb_ffa = ERR_PTR(-ENODEV);
++			return ret;
++		}
++	}
++
+ 	if (!tpm_crb_ffa)
+-		return -ENOENT;
++		ret = -ENOENT;
+ 
+ 	if (IS_ERR_VALUE(tpm_crb_ffa))
+-		return -ENODEV;
++		ret = -ENODEV;
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(tpm_crb_ffa_init);
+ 
+@@ -369,7 +380,9 @@ static struct ffa_driver tpm_crb_ffa_driver = {
+ 	.id_table = tpm_crb_ffa_device_id,
+ };
+ 
++#ifdef MODULE
+ module_ffa_driver(tpm_crb_ffa_driver);
++#endif
+ 
+ MODULE_AUTHOR("Arm");
+ MODULE_DESCRIPTION("TPM CRB FFA driver");
+diff --git a/drivers/clk/qcom/dispcc-sm8750.c b/drivers/clk/qcom/dispcc-sm8750.c
+index 877b40d50e6ff5..ca09da111a50e8 100644
+--- a/drivers/clk/qcom/dispcc-sm8750.c
++++ b/drivers/clk/qcom/dispcc-sm8750.c
+@@ -393,7 +393,7 @@ static struct clk_rcg2 disp_cc_mdss_byte0_clk_src = {
+ 		.name = "disp_cc_mdss_byte0_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_byte2_ops,
+ 	},
+ };
+@@ -408,7 +408,7 @@ static struct clk_rcg2 disp_cc_mdss_byte1_clk_src = {
+ 		.name = "disp_cc_mdss_byte1_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_byte2_ops,
+ 	},
+ };
+@@ -712,7 +712,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk0_clk_src = {
+ 		.name = "disp_cc_mdss_pclk0_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_pixel_ops,
+ 	},
+ };
+@@ -727,7 +727,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk1_clk_src = {
+ 		.name = "disp_cc_mdss_pclk1_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_pixel_ops,
+ 	},
+ };
+@@ -742,7 +742,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk2_clk_src = {
+ 		.name = "disp_cc_mdss_pclk2_clk_src",
+ 		.parent_data = disp_cc_parent_data_1,
+ 		.num_parents = ARRAY_SIZE(disp_cc_parent_data_1),
+-		.flags = CLK_SET_RATE_PARENT,
++		.flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_pixel_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-ipq5018.c b/drivers/clk/qcom/gcc-ipq5018.c
+index 70f5dcb96700f5..24eb4c40da6346 100644
+--- a/drivers/clk/qcom/gcc-ipq5018.c
++++ b/drivers/clk/qcom/gcc-ipq5018.c
+@@ -1371,7 +1371,7 @@ static struct clk_branch gcc_xo_clk = {
+ 				&gcc_xo_clk_src.clkr.hw,
+ 			},
+ 			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
++			.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
+ 			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 7258ba5c09001e..1329ea28d70313 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -1895,10 +1895,10 @@ static const struct freq_conf ftbl_nss_port6_tx_clk_src_125[] = {
+ static const struct freq_multi_tbl ftbl_nss_port6_tx_clk_src[] = {
+ 	FMS(19200000, P_XO, 1, 0, 0),
+ 	FM(25000000, ftbl_nss_port6_tx_clk_src_25),
+-	FMS(78125000, P_UNIPHY1_RX, 4, 0, 0),
++	FMS(78125000, P_UNIPHY2_TX, 4, 0, 0),
+ 	FM(125000000, ftbl_nss_port6_tx_clk_src_125),
+-	FMS(156250000, P_UNIPHY1_RX, 2, 0, 0),
+-	FMS(312500000, P_UNIPHY1_RX, 1, 0, 0),
++	FMS(156250000, P_UNIPHY2_TX, 2, 0, 0),
++	FMS(312500000, P_UNIPHY2_TX, 1, 0, 0),
+ 	{ }
+ };
+ 
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index a8628f64a03bd7..c87ad5a972b766 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -1389,10 +1389,6 @@ rzg2l_cpg_register_mod_clk(const struct rzg2l_mod_clk *mod,
+ 		goto fail;
+ 	}
+ 
+-	clk = clock->hw.clk;
+-	dev_dbg(dev, "Module clock %pC at %lu Hz\n", clk, clk_get_rate(clk));
+-	priv->clks[id] = clk;
+-
+ 	if (mod->is_coupled) {
+ 		struct mstp_clock *sibling;
+ 
+@@ -1404,6 +1400,10 @@ rzg2l_cpg_register_mod_clk(const struct rzg2l_mod_clk *mod,
+ 		}
+ 	}
+ 
++	clk = clock->hw.clk;
++	dev_dbg(dev, "Module clock %pC at %lu Hz\n", clk, clk_get_rate(clk));
++	priv->clks[id] = clk;
++
+ 	return;
+ 
+ fail:
+diff --git a/drivers/clk/samsung/clk-exynos850.c b/drivers/clk/samsung/clk-exynos850.c
+index cf7e08cca78e04..56f27697c76b13 100644
+--- a/drivers/clk/samsung/clk-exynos850.c
++++ b/drivers/clk/samsung/clk-exynos850.c
+@@ -1360,7 +1360,7 @@ static const unsigned long cpucl1_clk_regs[] __initconst = {
+ 	CLK_CON_GAT_GATE_CLK_CPUCL1_CPU,
+ };
+ 
+-/* List of parent clocks for Muxes in CMU_CPUCL0 */
++/* List of parent clocks for Muxes in CMU_CPUCL1 */
+ PNAME(mout_pll_cpucl1_p)		 = { "oscclk", "fout_cpucl1_pll" };
+ PNAME(mout_cpucl1_switch_user_p)	 = { "oscclk", "dout_cpucl1_switch" };
+ PNAME(mout_cpucl1_dbg_user_p)		 = { "oscclk", "dout_cpucl1_dbg" };
+diff --git a/drivers/clk/samsung/clk-gs101.c b/drivers/clk/samsung/clk-gs101.c
+index f9c3d68d449c35..70b26db9b95ad0 100644
+--- a/drivers/clk/samsung/clk-gs101.c
++++ b/drivers/clk/samsung/clk-gs101.c
+@@ -1154,7 +1154,7 @@ static const struct samsung_div_clock cmu_top_div_clks[] __initconst = {
+ 	    CLK_CON_DIV_CLKCMU_G2D_MSCL, 0, 4),
+ 	DIV(CLK_DOUT_CMU_G3AA_G3AA, "dout_cmu_g3aa_g3aa", "gout_cmu_g3aa_g3aa",
+ 	    CLK_CON_DIV_CLKCMU_G3AA_G3AA, 0, 4),
+-	DIV(CLK_DOUT_CMU_G3D_SWITCH, "dout_cmu_g3d_busd", "gout_cmu_g3d_busd",
++	DIV(CLK_DOUT_CMU_G3D_BUSD, "dout_cmu_g3d_busd", "gout_cmu_g3d_busd",
+ 	    CLK_CON_DIV_CLKCMU_G3D_BUSD, 0, 4),
+ 	DIV(CLK_DOUT_CMU_G3D_GLB, "dout_cmu_g3d_glb", "gout_cmu_g3d_glb",
+ 	    CLK_CON_DIV_CLKCMU_G3D_GLB, 0, 4),
+@@ -2129,7 +2129,7 @@ PNAME(mout_hsi0_usbdpdbg_user_p)	= { "oscclk",
+ 					    "dout_cmu_hsi0_usbdpdbg" };
+ PNAME(mout_hsi0_bus_p)			= { "mout_hsi0_bus_user",
+ 					    "mout_hsi0_alt_user" };
+-PNAME(mout_hsi0_usb20_ref_p)		= { "fout_usb_pll",
++PNAME(mout_hsi0_usb20_ref_p)		= { "mout_pll_usb",
+ 					    "mout_hsi0_tcxo_user" };
+ PNAME(mout_hsi0_usb31drd_p)		= { "fout_usb_pll",
+ 					    "mout_hsi0_usb31drd_user",
+diff --git a/drivers/clk/tegra/clk-periph.c b/drivers/clk/tegra/clk-periph.c
+index 0626650a7011cc..c9fc52a36fce9c 100644
+--- a/drivers/clk/tegra/clk-periph.c
++++ b/drivers/clk/tegra/clk-periph.c
+@@ -51,7 +51,7 @@ static int clk_periph_determine_rate(struct clk_hw *hw,
+ 	struct tegra_clk_periph *periph = to_clk_periph(hw);
+ 	const struct clk_ops *div_ops = periph->div_ops;
+ 	struct clk_hw *div_hw = &periph->divider.hw;
+-	unsigned long rate;
++	long rate;
+ 
+ 	__clk_hw_set_clk(div_hw, hw);
+ 
+@@ -59,7 +59,7 @@ static int clk_periph_determine_rate(struct clk_hw *hw,
+ 	if (rate < 0)
+ 		return rate;
+ 
+-	req->rate = rate;
++	req->rate = (unsigned long)rate;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 485b1d5cfd18c6..cf1bba58f641e9 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -792,11 +792,12 @@ static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_ac
+ 		0x134, BIT(8), 0);
+ static CCU_GATE(CLK_X2X_CPUSYS, x2x_cpusys_clk, "x2x-cpusys", axi4_cpusys2_aclk_pd,
+ 		0x134, BIT(7), 0);
+-static CCU_GATE(CLK_CPU2AON_X2H, cpu2aon_x2h_clk, "cpu2aon-x2h", axi_aclk_pd, 0x138, BIT(8), 0);
++static CCU_GATE(CLK_CPU2AON_X2H, cpu2aon_x2h_clk, "cpu2aon-x2h", axi_aclk_pd,
++		0x138, BIT(8), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_CPU2PERI_X2H, cpu2peri_x2h_clk, "cpu2peri-x2h", axi4_cpusys2_aclk_pd,
+ 		0x140, BIT(9), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB1_HCLK, perisys_apb1_hclk, "perisys-apb1-hclk", perisys_ahb_hclk_pd,
+-		0x150, BIT(9), 0);
++		0x150, BIT(9), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB2_HCLK, perisys_apb2_hclk, "perisys-apb2-hclk", perisys_ahb_hclk_pd,
+ 		0x150, BIT(10), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB3_HCLK, perisys_apb3_hclk, "perisys-apb3-hclk", perisys_ahb_hclk_pd,
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index c83fd14dd7ad56..23b7178522ae03 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -787,6 +787,7 @@ static int is_device_busy(struct comedi_device *dev)
+ 	struct comedi_subdevice *s;
+ 	int i;
+ 
++	lockdep_assert_held_write(&dev->attach_lock);
+ 	lockdep_assert_held(&dev->mutex);
+ 	if (!dev->attached)
+ 		return 0;
+@@ -795,7 +796,16 @@ static int is_device_busy(struct comedi_device *dev)
+ 		s = &dev->subdevices[i];
+ 		if (s->busy)
+ 			return 1;
+-		if (s->async && comedi_buf_is_mmapped(s))
++		if (!s->async)
++			continue;
++		if (comedi_buf_is_mmapped(s))
++			return 1;
++		/*
++		 * There may be tasks still waiting on the subdevice's wait
++		 * queue, although they should already be about to be removed
++		 * from it since the subdevice has no active async command.
++		 */
++		if (wq_has_sleeper(&s->async->wait_head))
+ 			return 1;
+ 	}
+ 
+@@ -825,15 +835,22 @@ static int do_devconfig_ioctl(struct comedi_device *dev,
+ 		return -EPERM;
+ 
+ 	if (!arg) {
+-		if (is_device_busy(dev))
+-			return -EBUSY;
++		int rc = 0;
++
+ 		if (dev->attached) {
+-			struct module *driver_module = dev->driver->module;
++			down_write(&dev->attach_lock);
++			if (is_device_busy(dev)) {
++				rc = -EBUSY;
++			} else {
++				struct module *driver_module =
++					dev->driver->module;
+ 
+-			comedi_device_detach(dev);
+-			module_put(driver_module);
++				comedi_device_detach_locked(dev);
++				module_put(driver_module);
++			}
++			up_write(&dev->attach_lock);
+ 		}
+-		return 0;
++		return rc;
+ 	}
+ 
+ 	if (copy_from_user(&it, arg, sizeof(it)))
+diff --git a/drivers/comedi/comedi_internal.h b/drivers/comedi/comedi_internal.h
+index 9b3631a654c895..cf10ba016ebc81 100644
+--- a/drivers/comedi/comedi_internal.h
++++ b/drivers/comedi/comedi_internal.h
+@@ -50,6 +50,7 @@ extern struct mutex comedi_drivers_list_lock;
+ int insn_inval(struct comedi_device *dev, struct comedi_subdevice *s,
+ 	       struct comedi_insn *insn, unsigned int *data);
+ 
++void comedi_device_detach_locked(struct comedi_device *dev);
+ void comedi_device_detach(struct comedi_device *dev);
+ int comedi_device_attach(struct comedi_device *dev,
+ 			 struct comedi_devconfig *it);
+diff --git a/drivers/comedi/drivers.c b/drivers/comedi/drivers.c
+index 9e4b7c840a8f5a..f1dc854928c176 100644
+--- a/drivers/comedi/drivers.c
++++ b/drivers/comedi/drivers.c
+@@ -158,7 +158,7 @@ static void comedi_device_detach_cleanup(struct comedi_device *dev)
+ 	int i;
+ 	struct comedi_subdevice *s;
+ 
+-	lockdep_assert_held(&dev->attach_lock);
++	lockdep_assert_held_write(&dev->attach_lock);
+ 	lockdep_assert_held(&dev->mutex);
+ 	if (dev->subdevices) {
+ 		for (i = 0; i < dev->n_subdevices; i++) {
+@@ -196,16 +196,23 @@ static void comedi_device_detach_cleanup(struct comedi_device *dev)
+ 	comedi_clear_hw_dev(dev);
+ }
+ 
+-void comedi_device_detach(struct comedi_device *dev)
++void comedi_device_detach_locked(struct comedi_device *dev)
+ {
++	lockdep_assert_held_write(&dev->attach_lock);
+ 	lockdep_assert_held(&dev->mutex);
+ 	comedi_device_cancel_all(dev);
+-	down_write(&dev->attach_lock);
+ 	dev->attached = false;
+ 	dev->detach_count++;
+ 	if (dev->driver)
+ 		dev->driver->detach(dev);
+ 	comedi_device_detach_cleanup(dev);
++}
++
++void comedi_device_detach(struct comedi_device *dev)
++{
++	lockdep_assert_held(&dev->mutex);
++	down_write(&dev->attach_lock);
++	comedi_device_detach_locked(dev);
+ 	up_write(&dev->attach_lock);
+ }
+ 
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index b7c688a5659c01..c8e63c7aa9a34c 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -925,7 +925,7 @@ static struct freq_attr *cppc_cpufreq_attr[] = {
+ };
+ 
+ static struct cpufreq_driver cppc_cpufreq_driver = {
+-	.flags = CPUFREQ_CONST_LOOPS,
++	.flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
+ 	.verify = cppc_verify_policy,
+ 	.target = cppc_cpufreq_set_target,
+ 	.get = cppc_cpufreq_get_rate,
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index c1c6f11ac551bb..628f5b633b61fe 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2716,10 +2716,12 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
+ 	pr_debug("starting governor %s failed\n", policy->governor->name);
+ 	if (old_gov) {
+ 		policy->governor = old_gov;
+-		if (cpufreq_init_governor(policy))
++		if (cpufreq_init_governor(policy)) {
+ 			policy->governor = NULL;
+-		else
+-			cpufreq_start_governor(policy);
++		} else if (cpufreq_start_governor(policy)) {
++			cpufreq_exit_governor(policy);
++			policy->governor = NULL;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 60326ab5475f9d..06a1c7dd081ffb 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2775,6 +2775,8 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
+ 	X86_MATCH(INTEL_TIGERLAKE,		core_funcs),
+ 	X86_MATCH(INTEL_SAPPHIRERAPIDS_X,	core_funcs),
+ 	X86_MATCH(INTEL_EMERALDRAPIDS_X,	core_funcs),
++	X86_MATCH(INTEL_GRANITERAPIDS_D,	core_funcs),
++	X86_MATCH(INTEL_GRANITERAPIDS_X,	core_funcs),
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 52d5d26fc7c643..81306612a5c67d 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -97,6 +97,14 @@ static inline int which_bucket(u64 duration_ns)
+ 
+ static DEFINE_PER_CPU(struct menu_device, menu_devices);
+ 
++static void menu_update_intervals(struct menu_device *data, unsigned int interval_us)
++{
++	/* Update the repeating-pattern data. */
++	data->intervals[data->interval_ptr++] = interval_us;
++	if (data->interval_ptr >= INTERVALS)
++		data->interval_ptr = 0;
++}
++
+ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev);
+ 
+ /*
+@@ -222,6 +230,14 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 	if (data->needs_update) {
+ 		menu_update(drv, dev);
+ 		data->needs_update = 0;
++	} else if (!dev->last_residency_ns) {
++		/*
++		 * This happens when the driver rejects the previously selected
++		 * idle state and returns an error, so update the recent
++		 * intervals table to prevent invalid information from being
++		 * used going forward.
++		 */
++		menu_update_intervals(data, UINT_MAX);
+ 	}
+ 
+ 	/* Find the shortest expected idle interval. */
+@@ -482,10 +498,7 @@ static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+ 
+ 	data->correction_factor[data->bucket] = new_factor;
+ 
+-	/* update the repeating-pattern data */
+-	data->intervals[data->interval_ptr++] = ktime_to_us(measured_ns);
+-	if (data->interval_ptr >= INTERVALS)
+-		data->interval_ptr = 0;
++	menu_update_intervals(data, ktime_to_us(measured_ns));
+ }
+ 
+ /**
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 38ff931059b49d..9cd5e3d54d9d0e 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -573,7 +573,7 @@ static const struct soc_device_attribute caam_imx_soc_table[] = {
+ 	{ .soc_id = "i.MX7*",  .data = &caam_imx7_data },
+ 	{ .soc_id = "i.MX8M*", .data = &caam_imx7_data },
+ 	{ .soc_id = "i.MX8ULP", .data = &caam_imx8ulp_data },
+-	{ .soc_id = "i.MX8QM", .data = &caam_imx8ulp_data },
++	{ .soc_id = "i.MX8Q*", .data = &caam_imx8ulp_data },
+ 	{ .soc_id = "VF*",     .data = &caam_vf610_data },
+ 	{ .family = "Freescale i.MX" },
+ 	{ /* sentinel */ }
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index e1be2072d680e0..e7bb803912a6d3 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -453,6 +453,7 @@ static const struct psp_vdata pspv6 = {
+ 	.cmdresp_reg		= 0x10944,	/* C2PMSG_17 */
+ 	.cmdbuff_addr_lo_reg	= 0x10948,	/* C2PMSG_18 */
+ 	.cmdbuff_addr_hi_reg	= 0x1094c,	/* C2PMSG_19 */
++	.bootloader_info_reg	= 0x109ec,	/* C2PMSG_59 */
+ 	.feature_reg            = 0x109fc,	/* C2PMSG_63 */
+ 	.inten_reg              = 0x10510,	/* P2CMSG_INTEN */
+ 	.intsts_reg             = 0x10514,	/* P2CMSG_INTSTS */
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+index 61b5e1c5d0192c..1550c3818383ab 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+@@ -1491,11 +1491,13 @@ static void hpre_ecdh_cb(struct hpre_ctx *ctx, void *resp)
+ 	if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+ 		atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+ 
++	/* Do unmap before data processing */
++	hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
++
+ 	p = sg_virt(areq->dst);
+ 	memmove(p, p + ctx->key_sz - curve_sz, curve_sz);
+ 	memmove(p + curve_sz, p + areq->dst_len - curve_sz, curve_sz);
+ 
+-	hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
+ 	kpp_request_complete(areq, ret);
+ 
+ 	atomic64_inc(&dfx[HPRE_RECV_CNT].value);
+@@ -1808,9 +1810,11 @@ static void hpre_curve25519_cb(struct hpre_ctx *ctx, void *resp)
+ 	if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
+ 		atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
+ 
++	/* Do unmap before data processing */
++	hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
++
+ 	hpre_key_to_big_end(sg_virt(areq->dst), CURVE25519_KEY_SIZE);
+ 
+-	hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
+ 	kpp_request_complete(areq, ret);
+ 
+ 	atomic64_inc(&dfx[HPRE_RECV_CNT].value);
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+index 78367849c3d551..9095dea2748d5e 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+@@ -1494,6 +1494,7 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 	dma_addr_t rptr_baddr;
+ 	struct pci_dev *pdev;
+ 	u32 len, compl_rlen;
++	int timeout = 10000;
+ 	int ret, etype;
+ 	void *rptr;
+ 
+@@ -1554,16 +1555,27 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 							 etype);
+ 		otx2_cpt_fill_inst(&inst, &iq_cmd, rptr_baddr);
+ 		lfs->ops->send_cmd(&inst, 1, &cptpf->lfs.lf[0]);
++		timeout = 10000;
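++		/* Fresh poll budget per engine type: ~10000 iterations of 1 us. */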
+ 
+ 		while (lfs->ops->cpt_get_compcode(result) ==
+-						OTX2_CPT_COMPLETION_CODE_INIT)
++						OTX2_CPT_COMPLETION_CODE_INIT) {
+ 			cpu_relax();
++			udelay(1);
++			timeout--;
++			if (!timeout) {
++				ret = -ENODEV;
++				cptpf->is_eng_caps_discovered = false;
++				dev_warn(&pdev->dev, "Timeout on CPT load_fvc completion poll\n");
++				goto error_no_response;
++			}
++		}
+ 
+ 		cptpf->eng_caps[etype].u = be64_to_cpup(rptr);
+ 	}
+-	dma_unmap_single(&pdev->dev, rptr_baddr, len, DMA_BIDIRECTIONAL);
+ 	cptpf->is_eng_caps_discovered = true;
+ 
++error_no_response:
++	dma_unmap_single(&pdev->dev, rptr_baddr, len, DMA_BIDIRECTIONAL);
+ free_result:
+ 	kfree(result);
+ lf_cleanup:
+diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
+index d1aa6806b683ac..175de0c0b50e08 100644
+--- a/drivers/devfreq/governor_userspace.c
++++ b/drivers/devfreq/governor_userspace.c
+@@ -9,6 +9,7 @@
+ #include <linux/slab.h>
+ #include <linux/device.h>
+ #include <linux/devfreq.h>
++#include <linux/kstrtox.h>
+ #include <linux/pm.h>
+ #include <linux/mutex.h>
+ #include <linux/module.h>
+@@ -39,10 +40,13 @@ static ssize_t set_freq_store(struct device *dev, struct device_attribute *attr,
+ 	unsigned long wanted;
+ 	int err = 0;
+ 
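++	/* Parse and validate the input before taking the devfreq lock. */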
++	err = kstrtoul(buf, 0, &wanted);
++	if (err)
++		return err;
++
+ 	mutex_lock(&devfreq->lock);
+ 	data = devfreq->governor_data;
+ 
+-	sscanf(buf, "%lu", &wanted);
+ 	data->user_frequency = wanted;
+ 	data->valid = true;
+ 	err = update_devfreq(devfreq);
+diff --git a/drivers/dma/stm32/stm32-dma.c b/drivers/dma/stm32/stm32-dma.c
+index 917f8e9223739a..0e39f99bce8be8 100644
+--- a/drivers/dma/stm32/stm32-dma.c
++++ b/drivers/dma/stm32/stm32-dma.c
+@@ -744,7 +744,7 @@ static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan, u32 scr)
+ 		/* cyclic while CIRC/DBM disable => post resume reconfiguration needed */
+ 		if (!(scr & (STM32_DMA_SCR_CIRC | STM32_DMA_SCR_DBM)))
+ 			stm32_dma_post_resume_reconfigure(chan);
+-		else if (scr & STM32_DMA_SCR_DBM)
++		else if (scr & STM32_DMA_SCR_DBM && chan->desc->num_sgs > 2)
+ 			stm32_dma_configure_next_sg(chan);
+ 	} else {
+ 		chan->busy = false;
+diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c
+index a53612be4b2fe0..6aac6672ba3806 100644
+--- a/drivers/edac/ie31200_edac.c
++++ b/drivers/edac/ie31200_edac.c
+@@ -91,6 +91,8 @@
+ #define PCI_DEVICE_ID_INTEL_IE31200_RPL_S_2	0x4640
+ #define PCI_DEVICE_ID_INTEL_IE31200_RPL_S_3	0x4630
+ #define PCI_DEVICE_ID_INTEL_IE31200_RPL_S_4	0xa700
++#define PCI_DEVICE_ID_INTEL_IE31200_RPL_S_5	0xa740
++#define PCI_DEVICE_ID_INTEL_IE31200_RPL_S_6	0xa704
+ 
+ /* Alder Lake-S */
+ #define PCI_DEVICE_ID_INTEL_IE31200_ADL_S_1	0x4660
+@@ -740,6 +742,8 @@ static const struct pci_device_id ie31200_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IE31200_RPL_S_2), (kernel_ulong_t)&rpl_s_cfg},
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IE31200_RPL_S_3), (kernel_ulong_t)&rpl_s_cfg},
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IE31200_RPL_S_4), (kernel_ulong_t)&rpl_s_cfg},
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IE31200_RPL_S_5), (kernel_ulong_t)&rpl_s_cfg},
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IE31200_RPL_S_6), (kernel_ulong_t)&rpl_s_cfg},
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_IE31200_ADL_S_1), (kernel_ulong_t)&rpl_s_cfg},
+ 	{ 0, } /* 0 terminated list. */
+ };
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index 5ed32a3299c44e..51143b3257de2d 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -332,20 +332,26 @@ struct synps_edac_priv {
+ #endif
+ };
+ 
++enum synps_platform_type {
++	ZYNQ,
++	ZYNQMP,
++	SYNPS,
++};
++
+ /**
+  * struct synps_platform_data -  synps platform data structure.
++ * @platform:		Identifies the target hardware platform
+  * @get_error_info:	Get EDAC error info.
+  * @get_mtype:		Get mtype.
+  * @get_dtype:		Get dtype.
+- * @get_ecc_state:	Get ECC state.
+  * @get_mem_info:	Get EDAC memory info
+  * @quirks:		To differentiate IPs.
+  */
+ struct synps_platform_data {
++	enum synps_platform_type platform;
+ 	int (*get_error_info)(struct synps_edac_priv *priv);
+ 	enum mem_type (*get_mtype)(const void __iomem *base);
+ 	enum dev_type (*get_dtype)(const void __iomem *base);
+-	bool (*get_ecc_state)(void __iomem *base);
+ #ifdef CONFIG_EDAC_DEBUG
+ 	u64 (*get_mem_info)(struct synps_edac_priv *priv);
+ #endif
+@@ -720,51 +726,38 @@ static enum dev_type zynqmp_get_dtype(const void __iomem *base)
+ 	return dt;
+ }
+ 
+-/**
+- * zynq_get_ecc_state - Return the controller ECC enable/disable status.
+- * @base:	DDR memory controller base address.
+- *
+- * Get the ECC enable/disable status of the controller.
+- *
+- * Return: true if enabled, otherwise false.
+- */
+-static bool zynq_get_ecc_state(void __iomem *base)
++static bool get_ecc_state(struct synps_edac_priv *priv)
+ {
++	u32 ecctype, clearval;
+ 	enum dev_type dt;
+-	u32 ecctype;
+-
+-	dt = zynq_get_dtype(base);
+-	if (dt == DEV_UNKNOWN)
+-		return false;
+ 
+-	ecctype = readl(base + SCRUB_OFST) & SCRUB_MODE_MASK;
+-	if ((ecctype == SCRUB_MODE_SECDED) && (dt == DEV_X2))
+-		return true;
+-
+-	return false;
+-}
+-
+-/**
+- * zynqmp_get_ecc_state - Return the controller ECC enable/disable status.
+- * @base:	DDR memory controller base address.
+- *
+- * Get the ECC enable/disable status for the controller.
+- *
+- * Return: a ECC status boolean i.e true/false - enabled/disabled.
+- */
+-static bool zynqmp_get_ecc_state(void __iomem *base)
+-{
+-	enum dev_type dt;
+-	u32 ecctype;
+-
+-	dt = zynqmp_get_dtype(base);
+-	if (dt == DEV_UNKNOWN)
+-		return false;
+-
+-	ecctype = readl(base + ECC_CFG0_OFST) & SCRUB_MODE_MASK;
+-	if ((ecctype == SCRUB_MODE_SECDED) &&
+-	    ((dt == DEV_X2) || (dt == DEV_X4) || (dt == DEV_X8)))
+-		return true;
++	if (priv->p_data->platform == ZYNQ) {
++		dt = zynq_get_dtype(priv->baseaddr);
++		if (dt == DEV_UNKNOWN)
++			return false;
++
++		ecctype = readl(priv->baseaddr + SCRUB_OFST) & SCRUB_MODE_MASK;
++		if (ecctype == SCRUB_MODE_SECDED && dt == DEV_X2) {
++			clearval = ECC_CTRL_CLR_CE_ERR | ECC_CTRL_CLR_UE_ERR;
++			writel(clearval, priv->baseaddr + ECC_CTRL_OFST);
++			writel(0x0, priv->baseaddr + ECC_CTRL_OFST);
++			return true;
++		}
++	} else {
++		dt = zynqmp_get_dtype(priv->baseaddr);
++		if (dt == DEV_UNKNOWN)
++			return false;
++
++		ecctype = readl(priv->baseaddr + ECC_CFG0_OFST) & SCRUB_MODE_MASK;
++		if (ecctype == SCRUB_MODE_SECDED &&
++		    (dt == DEV_X2 || dt == DEV_X4 || dt == DEV_X8)) {
++			clearval = readl(priv->baseaddr + ECC_CLR_OFST) |
++			ECC_CTRL_CLR_CE_ERR | ECC_CTRL_CLR_CE_ERRCNT |
++			ECC_CTRL_CLR_UE_ERR | ECC_CTRL_CLR_UE_ERRCNT;
++			writel(clearval, priv->baseaddr + ECC_CLR_OFST);
++			return true;
++		}
++	}
+ 
+ 	return false;
+ }
+@@ -934,18 +927,18 @@ static int setup_irq(struct mem_ctl_info *mci,
+ }
+ 
+ static const struct synps_platform_data zynq_edac_def = {
++	.platform = ZYNQ,
+ 	.get_error_info	= zynq_get_error_info,
+ 	.get_mtype	= zynq_get_mtype,
+ 	.get_dtype	= zynq_get_dtype,
+-	.get_ecc_state	= zynq_get_ecc_state,
+ 	.quirks		= 0,
+ };
+ 
+ static const struct synps_platform_data zynqmp_edac_def = {
++	.platform = ZYNQMP,
+ 	.get_error_info	= zynqmp_get_error_info,
+ 	.get_mtype	= zynqmp_get_mtype,
+ 	.get_dtype	= zynqmp_get_dtype,
+-	.get_ecc_state	= zynqmp_get_ecc_state,
+ #ifdef CONFIG_EDAC_DEBUG
+ 	.get_mem_info	= zynqmp_get_mem_info,
+ #endif
+@@ -957,10 +950,10 @@ static const struct synps_platform_data zynqmp_edac_def = {
+ };
+ 
+ static const struct synps_platform_data synopsys_edac_def = {
++	.platform = SYNPS,
+ 	.get_error_info	= zynqmp_get_error_info,
+ 	.get_mtype	= zynqmp_get_mtype,
+ 	.get_dtype	= zynqmp_get_dtype,
+-	.get_ecc_state	= zynqmp_get_ecc_state,
+ 	.quirks         = (DDR_ECC_INTR_SUPPORT | DDR_ECC_INTR_SELF_CLEAR
+ #ifdef CONFIG_EDAC_DEBUG
+ 			  | DDR_ECC_DATA_POISON_SUPPORT
+@@ -1390,10 +1383,6 @@ static int mc_probe(struct platform_device *pdev)
+ 	if (!p_data)
+ 		return -ENODEV;
+ 
+-	if (!p_data->get_ecc_state(baseaddr)) {
+-		edac_printk(KERN_INFO, EDAC_MC, "ECC not enabled\n");
+-		return -ENXIO;
+-	}
+ 
+ 	layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
+ 	layers[0].size = SYNPS_EDAC_NR_CSROWS;
+@@ -1413,6 +1402,12 @@ static int mc_probe(struct platform_device *pdev)
+ 	priv = mci->pvt_info;
+ 	priv->baseaddr = baseaddr;
+ 	priv->p_data = p_data;
++	if (!get_ecc_state(priv)) {
++		edac_printk(KERN_INFO, EDAC_MC, "ECC not enabled\n");
++		rc = -ENODEV;
++		goto free_edac_mc;
++	}
++
+ 	spin_lock_init(&priv->reglock);
+ 
+ 	mc_init(mci, pdev);
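
The synopsys_edac refactor folds the two get_ecc_state() callbacks into
one helper keyed by the new platform enum, clears stale CE/UE status
while checking, and moves the check after edac_mc_alloc() so the failure
path releases the freshly allocated MCI via goto free_edac_mc (the old
check ran before any allocation and could simply return -ENXIO; the new
path returns -ENODEV). A runnable reduction of the
allocate-then-validate-then-unwind shape (all names illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct ctl { int ecc_on; };          /* stand-in for mem_ctl_info */

    static struct ctl *alloc_ctl(void) { return calloc(1, sizeof(struct ctl)); }
    static int ecc_enabled(const struct ctl *c) { return c->ecc_on; }

    /* Allocate first, validate second, release on the error path --
     * the same shape mc_probe() now has. */
    static int probe(void)
    {
            struct ctl *c;
            int rc = 0;

            c = alloc_ctl();
            if (!c)
                    return -ENOMEM;

            if (!ecc_enabled(c)) {
                    rc = -ENODEV;
                    goto free_ctl;       /* unwind what was taken above */
            }
            /* ... register with the subsystem here ... */
            return 0;

    free_ctl:
            free(c);
            return rc;
    }

    int main(void) { printf("probe: %d\n", probe()); return 0; }
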
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 37eb2e6c2f9f4d..65bf1685350a2a 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -2059,7 +2059,7 @@ static int __init ffa_init(void)
+ 	kfree(drv_info);
+ 	return ret;
+ }
+-module_init(ffa_init);
++rootfs_initcall(ffa_init);
+ 
+ static void __exit ffa_exit(void)
+ {
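
For built-in code, module_init() is device_initcall(); switching
ffa_init() to rootfs_initcall() runs it one level earlier, so the FF-A
transport is available before ordinary device probes. For reference, the
built-in initcall levels fire in this order (per include/linux/init.h):

    /* Built-in initcall order, earliest first:
     *   core_initcall, postcore_initcall, arch_initcall,
     *   subsys_initcall, fs_initcall,
     *   rootfs_initcall,   <- ffa_init() now sits here
     *   device_initcall (== module_init when built in),
     *   late_initcall
     */
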
+diff --git a/drivers/firmware/arm_scmi/scmi_power_control.c b/drivers/firmware/arm_scmi/scmi_power_control.c
+index 21f467a9294288..955736336061d2 100644
+--- a/drivers/firmware/arm_scmi/scmi_power_control.c
++++ b/drivers/firmware/arm_scmi/scmi_power_control.c
+@@ -46,6 +46,7 @@
+ #include <linux/math.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
++#include <linux/pm.h>
+ #include <linux/printk.h>
+ #include <linux/reboot.h>
+ #include <linux/scmi_protocol.h>
+@@ -324,12 +325,7 @@ static int scmi_userspace_notifier(struct notifier_block *nb,
+ 
+ static void scmi_suspend_work_func(struct work_struct *work)
+ {
+-	struct scmi_syspower_conf *sc =
+-		container_of(work, struct scmi_syspower_conf, suspend_work);
+-
+ 	pm_suspend(PM_SUSPEND_MEM);
+-
+-	sc->state = SCMI_SYSPOWER_IDLE;
+ }
+ 
+ static int scmi_syspower_probe(struct scmi_device *sdev)
+@@ -354,6 +350,7 @@ static int scmi_syspower_probe(struct scmi_device *sdev)
+ 	sc->required_transition = SCMI_SYSTEM_MAX;
+ 	sc->userspace_nb.notifier_call = &scmi_userspace_notifier;
+ 	sc->dev = &sdev->dev;
++	dev_set_drvdata(&sdev->dev, sc);
+ 
+ 	INIT_WORK(&sc->suspend_work, scmi_suspend_work_func);
+ 
+@@ -363,6 +360,18 @@ static int scmi_syspower_probe(struct scmi_device *sdev)
+ 						       NULL, &sc->userspace_nb);
+ }
+ 
++static int scmi_system_power_resume(struct device *dev)
++{
++	struct scmi_syspower_conf *sc = dev_get_drvdata(dev);
++
++	sc->state = SCMI_SYSPOWER_IDLE;
++	return 0;
++}
++
++static const struct dev_pm_ops scmi_system_power_pmops = {
++	SYSTEM_SLEEP_PM_OPS(NULL, scmi_system_power_resume)
++};
++
+ static const struct scmi_device_id scmi_id_table[] = {
+ 	{ SCMI_PROTOCOL_SYSTEM, "syspower" },
+ 	{ },
+@@ -370,6 +379,9 @@ static const struct scmi_device_id scmi_id_table[] = {
+ MODULE_DEVICE_TABLE(scmi, scmi_id_table);
+ 
+ static struct scmi_driver scmi_system_power_driver = {
++	.driver	= {
++		.pm = pm_sleep_ptr(&scmi_system_power_pmops),
++	},
+ 	.name = "scmi-system-power",
+ 	.probe = scmi_syspower_probe,
+ 	.id_table = scmi_id_table,
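
The SCMI syspower race being fixed: the worker reset sc->state to IDLE as
soon as pm_suspend() returned, which can happen before the platform has
fully resumed; the state flip now lives in a system-sleep ->resume
callback, reached through drvdata set at probe time. Minimal shape of
that wiring (kernel-style sketch with an illustrative state type, not a
complete driver):

    static int example_resume(struct device *dev)
    {
            struct example_state *st = dev_get_drvdata(dev); /* set at probe */

            st->state = EXAMPLE_IDLE;  /* runs only once resume completes */
            return 0;
    }

    static const struct dev_pm_ops example_pm_ops = {
            SYSTEM_SLEEP_PM_OPS(NULL, example_resume)  /* no suspend hook */
    };

    /* hooked up via: .driver = { .pm = pm_sleep_ptr(&example_pm_ops) } */
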
+diff --git a/drivers/firmware/tegra/Kconfig b/drivers/firmware/tegra/Kconfig
+index cde1ab8bd9d1cb..91f2320c0d0f89 100644
+--- a/drivers/firmware/tegra/Kconfig
++++ b/drivers/firmware/tegra/Kconfig
+@@ -2,7 +2,7 @@
+ menu "Tegra firmware driver"
+ 
+ config TEGRA_IVC
+-	bool "Tegra IVC protocol"
++	bool "Tegra IVC protocol" if COMPILE_TEST
+ 	depends on ARCH_TEGRA
+ 	help
+ 	  IVC (Inter-VM Communication) protocol is part of the IPC
+@@ -13,8 +13,9 @@ config TEGRA_IVC
+ 
+ config TEGRA_BPMP
+ 	bool "Tegra BPMP driver"
+-	depends on ARCH_TEGRA && TEGRA_HSP_MBOX && TEGRA_IVC
++	depends on ARCH_TEGRA && TEGRA_HSP_MBOX
+ 	depends on !CPU_BIG_ENDIAN
++	select TEGRA_IVC
+ 	help
+ 	  BPMP (Boot and Power Management Processor) is designed to off-loading
+ 	  the PM functions which include clock/DVFS/thermal/power from the CPU.
+diff --git a/drivers/gpio/gpio-loongson-64bit.c b/drivers/gpio/gpio-loongson-64bit.c
+index 70a01c5b8ad176..add09971d26a18 100644
+--- a/drivers/gpio/gpio-loongson-64bit.c
++++ b/drivers/gpio/gpio-loongson-64bit.c
+@@ -222,6 +222,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data0 = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0xc,
+ 	.out_offset = 0x8,
++	.inten_offset = 0x14,
+ };
+ 
+ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
+@@ -230,6 +231,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0x20,
+ 	.out_offset = 0x10,
++	.inten_offset = 0x30,
+ };
+ 
+ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = {
+@@ -246,6 +248,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0xc,
+ 	.out_offset = 0x8,
++	.inten_offset = 0x14,
+ };
+ 
+ static const struct loongson_gpio_chip_data loongson_gpio_ls7a_data = {
+@@ -254,6 +257,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls7a_data = {
+ 	.conf_offset = 0x800,
+ 	.in_offset = 0xa00,
+ 	.out_offset = 0x900,
++	.inten_offset = 0xb00,
+ };
+ 
+ /* LS7A2000 chipset GPIO */
+@@ -263,6 +267,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data0 = {
+ 	.conf_offset = 0x800,
+ 	.in_offset = 0xa00,
+ 	.out_offset = 0x900,
++	.inten_offset = 0xb00,
+ };
+ 
+ /* LS7A2000 ACPI GPIO */
+@@ -281,6 +286,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls3a6000_data = {
+ 	.conf_offset = 0x0,
+ 	.in_offset = 0xc,
+ 	.out_offset = 0x8,
++	.inten_offset = 0x14,
+ };
+ 
+ static const struct of_device_id loongson_gpio_of_match[] = {
+diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
+index 6f3dda6b635fa2..390f2e74a9d819 100644
+--- a/drivers/gpio/gpio-mlxbf2.c
++++ b/drivers/gpio/gpio-mlxbf2.c
+@@ -397,7 +397,7 @@ mlxbf2_gpio_probe(struct platform_device *pdev)
+ 	gc->ngpio = npins;
+ 	gc->owner = THIS_MODULE;
+ 
+-	irq = platform_get_irq(pdev, 0);
++	irq = platform_get_irq_optional(pdev, 0);
+ 	if (irq >= 0) {
+ 		girq = &gs->gc.irq;
+ 		gpio_irq_chip_set_chip(girq, &mlxbf2_gpio_irq_chip);
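
platform_get_irq() logs an error when no IRQ is described, which is noisy
for boards where the interrupt is genuinely optional;
platform_get_irq_optional() has the same return convention but stays
silent. The resulting probe pattern (sketch; negative codes are
deliberately ignored here, matching the hunk above -- the same conversion
is applied to gpio-mlxbf3 just below):

    irq = platform_get_irq_optional(pdev, 0);  /* quiet when absent */
    if (irq >= 0) {
            /* interrupt exists: set up the gpio_irq_chip and request it */
    }
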
+diff --git a/drivers/gpio/gpio-mlxbf3.c b/drivers/gpio/gpio-mlxbf3.c
+index 9875e34bde72a4..ed29b07d16c190 100644
+--- a/drivers/gpio/gpio-mlxbf3.c
++++ b/drivers/gpio/gpio-mlxbf3.c
+@@ -190,9 +190,7 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	struct mlxbf3_gpio_context *gs;
+ 	struct gpio_irq_chip *girq;
+ 	struct gpio_chip *gc;
+-	char *colon_ptr;
+ 	int ret, irq;
+-	long num;
+ 
+ 	gs = devm_kzalloc(dev, sizeof(*gs), GFP_KERNEL);
+ 	if (!gs)
+@@ -229,39 +227,25 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
+ 	gc->owner = THIS_MODULE;
+ 	gc->add_pin_ranges = mlxbf3_gpio_add_pin_ranges;
+ 
+-	colon_ptr = strchr(dev_name(dev), ':');
+-	if (!colon_ptr) {
+-		dev_err(dev, "invalid device name format\n");
+-		return -EINVAL;
+-	}
+-
+-	ret = kstrtol(++colon_ptr, 16, &num);
+-	if (ret) {
+-		dev_err(dev, "invalid device instance\n");
+-		return ret;
+-	}
+-
+-	if (!num) {
+-		irq = platform_get_irq(pdev, 0);
+-		if (irq >= 0) {
+-			girq = &gs->gc.irq;
+-			gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
+-			girq->default_type = IRQ_TYPE_NONE;
+-			/* This will let us handle the parent IRQ in the driver */
+-			girq->num_parents = 0;
+-			girq->parents = NULL;
+-			girq->parent_handler = NULL;
+-			girq->handler = handle_bad_irq;
+-
+-			/*
+-			 * Directly request the irq here instead of passing
+-			 * a flow-handler because the irq is shared.
+-			 */
+-			ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
+-					       IRQF_SHARED, dev_name(dev), gs);
+-			if (ret)
+-				return dev_err_probe(dev, ret, "failed to request IRQ");
+-		}
++	irq = platform_get_irq_optional(pdev, 0);
++	if (irq >= 0) {
++		girq = &gs->gc.irq;
++		gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
++		girq->default_type = IRQ_TYPE_NONE;
++		/* This will let us handle the parent IRQ in the driver */
++		girq->num_parents = 0;
++		girq->parents = NULL;
++		girq->parent_handler = NULL;
++		girq->handler = handle_bad_irq;
++
++		/*
++		 * Directly request the irq here instead of passing
++		 * a flow-handler because the irq is shared.
++		 */
++		ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
++				       IRQF_SHARED, dev_name(dev), gs);
++		if (ret)
++			return dev_err_probe(dev, ret, "failed to request IRQ");
+ 	}
+ 
+ 	platform_set_drvdata(pdev, gs);
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index aead35ea090e6c..c3dfaed45c4319 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -497,8 +497,6 @@ static void pxa_mask_muxed_gpio(struct irq_data *d)
+ 	gfer = readl_relaxed(base + GFER_OFFSET) & ~GPIO_bit(gpio);
+ 	writel_relaxed(grer, base + GRER_OFFSET);
+ 	writel_relaxed(gfer, base + GFER_OFFSET);
+-
+-	gpiochip_disable_irq(&pchip->chip, gpio);
+ }
+ 
+ static int pxa_gpio_set_wake(struct irq_data *d, unsigned int on)
+@@ -518,21 +516,17 @@ static void pxa_unmask_muxed_gpio(struct irq_data *d)
+ 	unsigned int gpio = irqd_to_hwirq(d);
+ 	struct pxa_gpio_bank *c = gpio_to_pxabank(&pchip->chip, gpio);
+ 
+-	gpiochip_enable_irq(&pchip->chip, gpio);
+-
+ 	c->irq_mask |= GPIO_bit(gpio);
+ 	update_edge_detect(c);
+ }
+ 
+-static const struct irq_chip pxa_muxed_gpio_chip = {
++static struct irq_chip pxa_muxed_gpio_chip = {
+ 	.name		= "GPIO",
+ 	.irq_ack	= pxa_ack_muxed_gpio,
+ 	.irq_mask	= pxa_mask_muxed_gpio,
+ 	.irq_unmask	= pxa_unmask_muxed_gpio,
+ 	.irq_set_type	= pxa_gpio_irq_type,
+ 	.irq_set_wake	= pxa_gpio_set_wake,
+-	.flags = IRQCHIP_IMMUTABLE,
+-	GPIOCHIP_IRQ_RESOURCE_HELPERS,
+ };
+ 
+ static int pxa_gpio_nums(struct platform_device *pdev)
+diff --git a/drivers/gpio/gpio-tps65912.c b/drivers/gpio/gpio-tps65912.c
+index fab771cb6a87bf..bac757c191c2ea 100644
+--- a/drivers/gpio/gpio-tps65912.c
++++ b/drivers/gpio/gpio-tps65912.c
+@@ -49,10 +49,13 @@ static int tps65912_gpio_direction_output(struct gpio_chip *gc,
+ 					  unsigned offset, int value)
+ {
+ 	struct tps65912_gpio *gpio = gpiochip_get_data(gc);
++	int ret;
+ 
+ 	/* Set the initial value */
+-	regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
+-			   GPIO_SET_MASK, value ? GPIO_SET_MASK : 0);
++	ret = regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
++				 GPIO_SET_MASK, value ? GPIO_SET_MASK : 0);
++	if (ret)
++		return ret;
+ 
+ 	return regmap_update_bits(gpio->tps->regmap, TPS65912_GPIO1 + offset,
+ 				  GPIO_CFG_MASK, GPIO_CFG_MASK);
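
regmap_update_bits() can fail -- the underlying bus write may error out --
and the old code threw that status away, so direction_output() could
report success without the level ever being written. The fix checks the
first write before issuing the second; the same pattern is applied to
gpio-wcd934x further down. Generic shape (register and mask names are
illustrative):

    /* Two dependent writes: only claim success if both took effect. */
    ret = regmap_update_bits(map, REG_LEVEL, LEVEL_MASK,
                             value ? LEVEL_MASK : 0);
    if (ret)
            return ret;

    return regmap_update_bits(map, REG_DIR, DIR_MASK, DIR_MASK);
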
+diff --git a/drivers/gpio/gpio-virtio.c b/drivers/gpio/gpio-virtio.c
+index ac39da17a29bb8..608353feb0f350 100644
+--- a/drivers/gpio/gpio-virtio.c
++++ b/drivers/gpio/gpio-virtio.c
+@@ -526,7 +526,6 @@ static const char **virtio_gpio_get_names(struct virtio_gpio *vgpio,
+ 
+ static int virtio_gpio_probe(struct virtio_device *vdev)
+ {
+-	struct virtio_gpio_config config;
+ 	struct device *dev = &vdev->dev;
+ 	struct virtio_gpio *vgpio;
+ 	struct irq_chip *gpio_irq_chip;
+@@ -539,9 +538,11 @@ static int virtio_gpio_probe(struct virtio_device *vdev)
+ 		return -ENOMEM;
+ 
+ 	/* Read configuration */
+-	virtio_cread_bytes(vdev, 0, &config, sizeof(config));
+-	gpio_names_size = le32_to_cpu(config.gpio_names_size);
+-	ngpio = le16_to_cpu(config.ngpio);
++	gpio_names_size =
++		virtio_cread32(vdev, offsetof(struct virtio_gpio_config,
++					      gpio_names_size));
++	ngpio =  virtio_cread16(vdev, offsetof(struct virtio_gpio_config,
++					       ngpio));
+ 	if (!ngpio) {
+ 		dev_err(dev, "Number of GPIOs can't be zero\n");
+ 		return -EINVAL;
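
Reading the whole config blob with virtio_cread_bytes() and byte-swapping
by hand duplicates what the per-field helpers already do: they read one
field at the given offset and convert from device to CPU endianness
internally, and they avoid touching config space beyond the fields the
driver needs. Simplified prototypes (see include/linux/virtio_config.h):

    u16 virtio_cread16(struct virtio_device *vdev, unsigned int offset);
    u32 virtio_cread32(struct virtio_device *vdev, unsigned int offset);
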
+diff --git a/drivers/gpio/gpio-wcd934x.c b/drivers/gpio/gpio-wcd934x.c
+index 2bba27b13947f1..cfa7b0a50c8e33 100644
+--- a/drivers/gpio/gpio-wcd934x.c
++++ b/drivers/gpio/gpio-wcd934x.c
+@@ -46,9 +46,12 @@ static int wcd_gpio_direction_output(struct gpio_chip *chip, unsigned int pin,
+ 				     int val)
+ {
+ 	struct wcd_gpio_data *data = gpiochip_get_data(chip);
++	int ret;
+ 
+-	regmap_update_bits(data->map, WCD_REG_DIR_CTL_OFFSET,
+-			   WCD_PIN_MASK(pin), WCD_PIN_MASK(pin));
++	ret = regmap_update_bits(data->map, WCD_REG_DIR_CTL_OFFSET,
++				 WCD_PIN_MASK(pin), WCD_PIN_MASK(pin));
++	if (ret)
++		return ret;
+ 
+ 	return regmap_update_bits(data->map, WCD_REG_VAL_CTL_OFFSET,
+ 				  WCD_PIN_MASK(pin),
+diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+index e13fbd97414126..9569dc16dd3dac 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
++++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+@@ -71,18 +71,29 @@ aldebaran_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
+ 	return NULL;
+ }
+ 
++static inline uint32_t aldebaran_get_ip_block_mask(struct amdgpu_device *adev)
++{
++	uint32_t ip_block_mask = BIT(AMD_IP_BLOCK_TYPE_GFX) |
++				 BIT(AMD_IP_BLOCK_TYPE_SDMA);
++
++	if (adev->aid_mask)
++		ip_block_mask |= BIT(AMD_IP_BLOCK_TYPE_IH);
++
++	return ip_block_mask;
++}
++
+ static int aldebaran_mode2_suspend_ip(struct amdgpu_device *adev)
+ {
++	uint32_t ip_block_mask = aldebaran_get_ip_block_mask(adev);
++	uint32_t ip_block;
+ 	int r, i;
+ 
+ 	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+ 	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ 
+ 	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
+-		if (!(adev->ip_blocks[i].version->type ==
+-			      AMD_IP_BLOCK_TYPE_GFX ||
+-		      adev->ip_blocks[i].version->type ==
+-			      AMD_IP_BLOCK_TYPE_SDMA))
++		ip_block = BIT(adev->ip_blocks[i].version->type);
++		if (!(ip_block_mask & ip_block))
+ 			continue;
+ 
+ 		r = amdgpu_ip_block_suspend(&adev->ip_blocks[i]);
+@@ -200,8 +211,10 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
+ {
+ 	struct amdgpu_firmware_info *ucode_list[AMDGPU_UCODE_ID_MAXIMUM];
++	uint32_t ip_block_mask = aldebaran_get_ip_block_mask(adev);
+ 	struct amdgpu_firmware_info *ucode;
+ 	struct amdgpu_ip_block *cmn_block;
++	struct amdgpu_ip_block *ih_block;
+ 	int ucode_count = 0;
+ 	int i, r;
+ 
+@@ -243,6 +256,18 @@ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
+ 	if (r)
+ 		return r;
+ 
++	if (ip_block_mask & BIT(AMD_IP_BLOCK_TYPE_IH)) {
++		ih_block = amdgpu_device_ip_get_ip_block(adev,
++							 AMD_IP_BLOCK_TYPE_IH);
++		if (unlikely(!ih_block)) {
++			dev_err(adev->dev, "Failed to get IH handle\n");
++			return -EINVAL;
++		}
++		r = amdgpu_ip_block_resume(ih_block);
++		if (r)
++			return r;
++	}
++
+ 	/* Reinit GFXHUB */
+ 	adev->gfxhub.funcs->init(adev);
+ 	r = adev->gfxhub.funcs->gart_enable(adev);
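
The aldebaran suspend/restore paths previously open-coded a growing
if-chain of IP block type comparisons; they now build a bitmask once
(GFX | SDMA, plus IH on parts with a populated aid_mask) and test
membership with BIT(). A runnable plain-C reduction of the idiom (enum
values illustrative):

    #include <stdio.h>

    enum blk { BLK_GFX, BLK_SDMA, BLK_IH, BLK_VCN, BLK_MAX };
    #define BIT(n) (1u << (n))

    int main(void)
    {
            unsigned int mask = BIT(BLK_GFX) | BIT(BLK_SDMA);
            int b;

            mask |= BIT(BLK_IH);            /* conditionally widen the set */

            for (b = 0; b < BLK_MAX; b++)
                    if (mask & BIT(b))      /* replaces the if-chain */
                            printf("handle block %d\n", b);
            return 0;
    }
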
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
+index 5a234eadae8b3a..15dde1f5032842 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
+@@ -212,7 +212,7 @@ int amdgpu_cper_entry_fill_bad_page_threshold_section(struct amdgpu_device *adev
+ 		   NONSTD_SEC_OFFSET(hdr->sec_cnt, idx));
+ 
+ 	amdgpu_cper_entry_fill_section_desc(adev, section_desc, true, false,
+-					    CPER_SEV_NUM, RUNTIME, NONSTD_SEC_LEN,
++					    CPER_SEV_FATAL, RUNTIME, NONSTD_SEC_LEN,
+ 					    NONSTD_SEC_OFFSET(hdr->sec_cnt, idx));
+ 
+ 	section->hdr.valid_bits.err_info_cnt = 1;
+@@ -326,7 +326,9 @@ int amdgpu_cper_generate_bp_threshold_record(struct amdgpu_device *adev)
+ 		return -ENOMEM;
+ 	}
+ 
+-	amdgpu_cper_entry_fill_hdr(adev, bp_threshold, AMDGPU_CPER_TYPE_BP_THRESHOLD, CPER_SEV_NUM);
++	amdgpu_cper_entry_fill_hdr(adev, bp_threshold,
++				   AMDGPU_CPER_TYPE_BP_THRESHOLD,
++				   CPER_SEV_FATAL);
+ 	ret = amdgpu_cper_entry_fill_bad_page_threshold_section(adev, bp_threshold, 0);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+index 02138aa557935e..dfb6cfd8376069 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+@@ -88,8 +88,8 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 	}
+ 
+ 	r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
+-			     AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
+-			     AMDGPU_PTE_EXECUTABLE);
++			     AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE |
++			     AMDGPU_VM_PAGE_EXECUTABLE);
+ 
+ 	if (r) {
+ 		DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index c14f63cefe6739..7d8b98aa5271c2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -596,6 +596,10 @@ int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
+ 		udelay(1);
+ 	}
+ 
++	dev_err(adev->dev,
++		"psp reg (0x%x) wait timed out, mask: %x, read: %x exp: %x",
++		reg_index, mask, val, reg_val);
++
+ 	return -ETIME;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index 428adc7f741de3..a4a00855d0b238 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -51,6 +51,17 @@
+ #define C2PMSG_CMD_SPI_GET_ROM_IMAGE_ADDR_HI 0x10
+ #define C2PMSG_CMD_SPI_GET_FLASH_IMAGE 0x11
+ 
++/* Command register bit 31 set to indicate readiness */
++#define MBOX_TOS_READY_FLAG (GFX_FLAG_RESPONSE)
++#define MBOX_TOS_READY_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
++
++/* Values to check for a successful GFX_CMD response wait. Check against
++ * both status bits and response state - helps to detect a command failure
++ * or other unexpected cases like a device drop reading all 0xFFs
++ */
++#define MBOX_TOS_RESP_FLAG (GFX_FLAG_RESPONSE)
++#define MBOX_TOS_RESP_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
++
+ extern const struct attribute_group amdgpu_flash_attr_group;
+ 
+ enum psp_shared_mem_size {
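
psp_wait_for() polls until (reg & mask) == value; the magic literals
0x80000000 / 0x8000FFFF in the psp_vXX hunks below encoded "bit 31 set"
and "bit 31 set with the low 16 status bits clear". The new MBOX_TOS_*
macros name those combinations, and the added dev_err() in psp_wait_for()
finally reports which register timed out and what was read. The predicate,
spelled out (bit values inferred from the old literals --
GFX_CMD_RESPONSE_MASK is the bit-31 response flag, GFX_CMD_STATUS_MASK the
low status field):

    /* The wait succeeds once:  (readl(reg) & mask) == value
     * READY/RESP: value = response bit, mask = response | status bits
     *   -> the response bit must be set AND status must be zero; a
     *      device falling off the bus reads 0xFFFFFFFF, whose nonzero
     *      status bits now correctly fail the wait. */
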
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index 2c58e09e56f95d..2ddedf476542fe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -476,6 +476,8 @@ int amdgpu_ras_eeprom_reset_table(struct amdgpu_ras_eeprom_control *control)
+ 
+ 	control->ras_num_recs = 0;
+ 	control->ras_num_bad_pages = 0;
++	control->ras_num_mca_recs = 0;
++	control->ras_num_pa_recs = 0;
+ 	control->ras_fri = 0;
+ 
+ 	amdgpu_dpm_send_hbm_bad_pages_num(adev, control->ras_num_bad_pages);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 07c936e90d8e40..78f9e86ccc0990 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -648,9 +648,8 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
+ 	list_for_each_entry(block, &vres->blocks, link)
+ 		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
+ 
+-	amdgpu_vram_mgr_do_reserve(man);
+-
+ 	drm_buddy_free_list(mm, &vres->blocks, vres->flags);
++	amdgpu_vram_mgr_do_reserve(man);
+ 	mutex_unlock(&mgr->lock);
+ 
+ 	atomic64_sub(vis_usage, &mgr->vis_usage);
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 145186a1e48f6b..2c4ebd98927ff3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -94,7 +94,7 @@ static int psp_v10_0_ring_create(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   0x80000000, 0x8000FFFF, false);
++			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	return ret;
+ }
+@@ -115,7 +115,7 @@ static int psp_v10_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   0x80000000, 0x80000000, false);
++			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+index 215543575f477c..1a4a26e6ffd24c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+@@ -277,11 +277,13 @@ static int psp_v11_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) */
+ 	if (amdgpu_sriov_vf(adev))
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	else
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	return ret;
+ }
+@@ -317,13 +319,15 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for sOS ready for ring creation\n");
+ 			return ret;
+@@ -347,8 +351,9 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+@@ -381,7 +386,8 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+ 
+-	ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
++	ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
++			   MBOX_TOS_READY_MASK, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -395,7 +401,8 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+ 
+-	ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
++	ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
++			   false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+index 5697760a819bc7..338d015c0f2ee2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+@@ -41,8 +41,9 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_64,
+@@ -50,8 +51,9 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+@@ -87,13 +89,15 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -117,8 +121,9 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index 80153f8374704a..d54b3e0fabaf40 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -163,7 +163,7 @@ static int psp_v12_0_ring_create(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   0x80000000, 0x8000FFFF, false);
++			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	return ret;
+ }
+@@ -184,11 +184,13 @@ static int psp_v12_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) */
+ 	if (amdgpu_sriov_vf(adev))
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	else
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	return ret;
+ }
+@@ -219,7 +221,8 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+ 
+-	ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
++	ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
++			   MBOX_TOS_READY_MASK, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -233,7 +236,8 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+ 
+-	ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
++	ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
++			   false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index ead616c117057f..58b6b64dcd683b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -384,8 +384,9 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -393,8 +394,9 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+@@ -430,13 +432,15 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -460,8 +464,9 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+index eaa5512a21dacd..f65af52c1c1939 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+@@ -204,8 +204,9 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -213,8 +214,9 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+@@ -250,13 +252,15 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -280,8 +284,9 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+index 256288c6cd78ef..ffa47c7d24c919 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+@@ -248,8 +248,9 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_64,
+@@ -257,8 +258,9 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+@@ -294,13 +296,15 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-				   0x80000000, 0x80000000, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -324,8 +328,9 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-				   0x80000000, 0x8000FFFF, false);
++		ret = psp_wait_for(
++			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index f58fa5da7fe558..7b5440bdad2f35 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3060,6 +3060,19 @@ static void hpd_rx_irq_work_suspend(struct amdgpu_display_manager *dm)
+ 	}
+ }
+ 
++static int dm_cache_state(struct amdgpu_device *adev)
++{
++	int r;
++
++	adev->dm.cached_state = drm_atomic_helper_suspend(adev_to_drm(adev));
++	if (IS_ERR(adev->dm.cached_state)) {
++		r = PTR_ERR(adev->dm.cached_state);
++		adev->dm.cached_state = NULL;
++	}
++
++	return adev->dm.cached_state ? 0 : r;
++}
++
+ static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
+ {
+ 	struct amdgpu_device *adev = ip_block->adev;
+@@ -3068,11 +3081,8 @@ static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
+ 		return 0;
+ 
+ 	WARN_ON(adev->dm.cached_state);
+-	adev->dm.cached_state = drm_atomic_helper_suspend(adev_to_drm(adev));
+-	if (IS_ERR(adev->dm.cached_state))
+-		return PTR_ERR(adev->dm.cached_state);
+ 
+-	return 0;
++	return dm_cache_state(adev);
+ }
+ 
+ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+@@ -3106,9 +3116,10 @@ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+ 	}
+ 
+ 	if (!adev->dm.cached_state) {
+-		adev->dm.cached_state = drm_atomic_helper_suspend(adev_to_drm(adev));
+-		if (IS_ERR(adev->dm.cached_state))
+-			return PTR_ERR(adev->dm.cached_state);
++		int r = dm_cache_state(adev);
++
++		if (r)
++			return r;
+ 	}
+ 
+ 	s3_handle_hdmi_cec(adev_to_drm(adev), true);
+@@ -5368,7 +5379,8 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ static void amdgpu_dm_destroy_drm_device(struct amdgpu_display_manager *dm)
+ {
+-	drm_atomic_private_obj_fini(&dm->atomic_obj);
++	if (dm->atomic_obj.state)
++		drm_atomic_private_obj_fini(&dm->atomic_obj);
+ }
+ 
+ /******************************************************************************
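
dm_prepare_suspend() and dm_suspend() both duplicated the
cache-the-atomic-state dance; dm_cache_state() centralizes it and, on
failure, nulls the cached pointer so nothing downstream can dereference a
stored ERR_PTR. The destroy path gets matching hardening: atomic_obj is
only finalized if it was ever initialized. The ERR_PTR idiom being
enforced (kernel-style sketch):

    state = drm_atomic_helper_suspend(drm_dev);
    if (IS_ERR(state)) {
            ret = PTR_ERR(state);   /* recover the negative errno */
            state = NULL;           /* never keep an ERR_PTR around */
            return ret;
    }
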
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index f984cb0cb88976..ff7b867ae98b88 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -119,8 +119,10 @@ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream)
+ 		psr_config.allow_multi_disp_optimizations =
+ 			(amdgpu_dc_feature_mask & DC_PSR_ALLOW_MULTI_DISP_OPT);
+ 
+-		if (!psr_su_set_dsc_slice_height(dc, link, stream, &psr_config))
+-			return false;
++		if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1) {
++			if (!psr_su_set_dsc_slice_height(dc, link, stream, &psr_config))
++				return false;
++		}
+ 
+ 		ret = dc_link_setup_psr(link, stream, &psr_config, &psr_context);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index b34b5b52236dce..0017e9991670bd 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -5439,8 +5439,8 @@ bool dc_update_planes_and_stream(struct dc *dc,
+ 	else
+ 		ret = update_planes_and_stream_v2(dc, srf_updates,
+ 			surface_count, stream, stream_update);
+-
+-	if (ret)
++	if (ret && (dc->ctx->dce_version >= DCN_VERSION_3_2 ||
++		dc->ctx->dce_version == DCN_VERSION_3_01))
+ 		clear_update_flags(srf_updates, surface_count, stream);
+ 
+ 	return ret;
+@@ -5471,7 +5471,7 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 		ret = update_planes_and_stream_v1(dc, srf_updates, surface_count, stream,
+ 				stream_update, state);
+ 
+-	if (ret)
++	if (ret && dc->ctx->dce_version >= DCN_VERSION_3_2)
+ 		clear_update_flags(srf_updates, surface_count, stream);
+ }
+ 
+@@ -6395,11 +6395,13 @@ unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context)
+  */
+ bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index)
+ {
+-	struct dc *dc = link->ctx->dc;
++	struct dc *dc;
+ 
+-	if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
++	if (!link || !host_router_index || link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
+ 		return false;
+ 
++	dc = link->ctx->dc;
++
+ 	if (link->link_index < dc->lowest_dpia_link_index)
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index c277df12c8172e..cdb8685ae7d719 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -283,14 +283,13 @@ void dcn20_setup_gsl_group_as_lock(
+ 	}
+ 
+ 	/* at this point we want to program whether it's to enable or disable */
+-	if (pipe_ctx->stream_res.tg->funcs->set_gsl != NULL &&
+-		pipe_ctx->stream_res.tg->funcs->set_gsl_source_select != NULL) {
++	if (pipe_ctx->stream_res.tg->funcs->set_gsl != NULL) {
+ 		pipe_ctx->stream_res.tg->funcs->set_gsl(
+ 			pipe_ctx->stream_res.tg,
+ 			&gsl);
+-
+-		pipe_ctx->stream_res.tg->funcs->set_gsl_source_select(
+-			pipe_ctx->stream_res.tg, group_idx,	enable ? 4 : 0);
++		if (pipe_ctx->stream_res.tg->funcs->set_gsl_source_select != NULL)
++			pipe_ctx->stream_res.tg->funcs->set_gsl_source_select(
++				pipe_ctx->stream_res.tg, group_idx, enable ? 4 : 0);
+ 	} else
+ 		BREAK_TO_DEBUGGER();
+ }
+@@ -956,7 +955,7 @@ enum dc_status dcn20_enable_stream_timing(
+ 		return DC_ERROR_UNEXPECTED;
+ 	}
+ 
+-	hws->funcs.wait_for_blank_complete(pipe_ctx->stream_res.opp);
++	fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+ 
+ 	params.vertical_total_min = stream->adjust.v_total_min;
+ 	params.vertical_total_max = stream->adjust.v_total_max;
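
The blocking wait on blank completion becomes a sleep of one frame,
computed in microseconds: h_total * 10000 / pix_clk_100hz is the line
time in us (pixels * 1e6 / pixel-clock-Hz), times v_total lines. Worked
example for a common 1080p60 CEA timing (numbers illustrative, not from
the patch):

    /* h_total = 2200, v_total = 1125, pixel clock 148.5 MHz
     * pix_clk_100hz   = 148500000 / 100        = 1485000
     * line time (us)  = 2200 * 10000 / 1485000 = 14    (integer math)
     * frame time (us) = 1125 * 14              = 15750 (~16.7 ms nominal)
     */
    fsleep(v_total * (h_total * 10000u / pix_clk_100hz));
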
+diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+index 273a3be6d593af..f16cba4b9119df 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
++++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+@@ -140,7 +140,8 @@ void link_blank_dp_stream(struct dc_link *link, bool hw_init)
+ 				}
+ 		}
+ 
+-		if ((!link->wa_flags.dp_keep_receiver_powered) || hw_init)
++		if (((!link->wa_flags.dp_keep_receiver_powered) || hw_init) &&
++			(link->type != dc_connection_none))
+ 			dpcd_write_rx_power_ctrl(link, false);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c b/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c
+index 98cf0cbd59ba0e..591af5a3d3e3ad 100644
+--- a/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/mpc/dcn401/dcn401_mpc.c
+@@ -561,7 +561,7 @@ void mpc401_get_gamut_remap(struct mpc *mpc,
+ 	struct mpc_grph_gamut_adjustment *adjust)
+ {
+ 	uint16_t arr_reg_val[12] = {0};
+-	uint32_t mode_select;
++	uint32_t mode_select = MPCC_GAMUT_REMAP_MODE_SELECT_0;
+ 
+ 	read_gamut_remap(mpc, mpcc_id, arr_reg_val, adjust->mpcc_gamut_remap_block_id, &mode_select);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+index 8383e2e59be5b4..eed64b05bc606d 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+@@ -926,6 +926,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 	.seamless_boot_odm_combine = true,
+ 	.enable_legacy_fast_update = true,
+ 	.using_dml2 = false,
++	.disable_dsc_power_gate = true,
+ };
+ 
+ static const struct dc_panel_config panel_config_defaults = {
+diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+index 72a0f078cd1a58..2884977a3dd2f0 100644
+--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
++++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn35.c
+@@ -92,19 +92,15 @@ void dmub_dcn35_reset(struct dmub_srv *dmub)
+ 	uint32_t in_reset, is_enabled, scratch, i, pwait_mode;
+ 
+ 	REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
++	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enabled);
+ 
+-	if (in_reset == 0) {
++	if (in_reset == 0 && is_enabled != 0) {
+ 		cmd.bits.status = 1;
+ 		cmd.bits.command_code = DMUB_GPINT__STOP_FW;
+ 		cmd.bits.param = 0;
+ 
+ 		dmub->hw_funcs.set_gpint(dmub, cmd);
+ 
+-		/**
+-		 * Timeout covers both the ACK and the wait
+-		 * for remaining work to finish.
+-		 */
+-
+ 		for (i = 0; i < timeout; ++i) {
+ 			if (dmub->hw_funcs.is_gpint_acked(dmub, cmd))
+ 				break;
+@@ -130,11 +126,9 @@ void dmub_dcn35_reset(struct dmub_srv *dmub)
+ 		/* Force reset in case we timed out, DMCUB is likely hung. */
+ 	}
+ 
+-	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enabled);
+-
+ 	if (is_enabled) {
+ 		REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+-		REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
++		udelay(1);
+ 		REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+ 	}
+ 
+@@ -160,11 +154,7 @@ void dmub_dcn35_reset_release(struct dmub_srv *dmub)
+ 		     LONO_SOCCLK_GATE_DISABLE, 1,
+ 		     LONO_DMCUBCLK_GATE_DISABLE, 1);
+ 
+-	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
+-	udelay(1);
+ 	REG_UPDATE_2(DMCUB_CNTL, DMCUB_ENABLE, 1, DMCUB_TRACEPORT_EN, 1);
+-	REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+-	udelay(1);
+ 	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 0);
+ 	REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 0);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index edd9895b46c024..39ee810850885c 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -1398,6 +1398,8 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
+ 			if (ret)
+ 				return -EINVAL;
+ 			parameter_size++;
++			if (!tmp_str)
++				break;
+ 			while (isspace(*tmp_str))
+ 				tmp_str++;
+ 		}
+@@ -3645,6 +3647,9 @@ static int parse_input_od_command_lines(const char *buf,
+ 			return -EINVAL;
+ 		parameter_size++;
+ 
++		if (!tmp_str)
++			break;
++
+ 		while (isspace(*tmp_str))
+ 			tmp_str++;
+ 	}
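
Both sysfs parsers above tokenize with strsep(); after the last token the
cursor becomes NULL, and the unconditional while (isspace(*tmp_str))
dereferenced it. The added if (!tmp_str) break; is the standard guard.
Runnable reduction of the bug and its fix:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char buf[] = "1 2 3";
            char *cur = buf, *tok;

            while ((tok = strsep(&cur, " ")) != NULL) {
                    printf("token: %s\n", tok);
                    if (!cur)       /* last token: cursor is now NULL */
                            break;  /* without this, *cur would crash */
                    while (isspace((unsigned char)*cur))
                            cur++;  /* skip runs of separators */
            }
            return 0;
    }
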
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index a55ea76d739969..2c9869feba610f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -666,7 +666,6 @@ static int vangogh_print_clk_levels(struct smu_context *smu,
+ {
+ 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
+ 	SmuMetrics_t metrics;
+-	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+ 	int i, idx, size = 0, ret = 0;
+ 	uint32_t cur_value = 0, value = 0, count = 0;
+ 	bool cur_value_match_level = false;
+@@ -682,31 +681,25 @@ static int vangogh_print_clk_levels(struct smu_context *smu,
+ 
+ 	switch (clk_type) {
+ 	case SMU_OD_SCLK:
+-		if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+-			size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
+-			size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
+-			(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
+-			size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
+-			(smu->gfx_actual_soft_max_freq > 0) ? smu->gfx_actual_soft_max_freq : smu->gfx_default_soft_max_freq);
+-		}
++		size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
++		size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
++		(smu->gfx_actual_hard_min_freq > 0) ? smu->gfx_actual_hard_min_freq : smu->gfx_default_hard_min_freq);
++		size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
++		(smu->gfx_actual_soft_max_freq > 0) ? smu->gfx_actual_soft_max_freq : smu->gfx_default_soft_max_freq);
+ 		break;
+ 	case SMU_OD_CCLK:
+-		if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+-			size += sysfs_emit_at(buf, size, "CCLK_RANGE in Core%d:\n",  smu->cpu_core_id_select);
+-			size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
+-			(smu->cpu_actual_soft_min_freq > 0) ? smu->cpu_actual_soft_min_freq : smu->cpu_default_soft_min_freq);
+-			size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
+-			(smu->cpu_actual_soft_max_freq > 0) ? smu->cpu_actual_soft_max_freq : smu->cpu_default_soft_max_freq);
+-		}
++		size += sysfs_emit_at(buf, size, "CCLK_RANGE in Core%d:\n",  smu->cpu_core_id_select);
++		size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
++		(smu->cpu_actual_soft_min_freq > 0) ? smu->cpu_actual_soft_min_freq : smu->cpu_default_soft_min_freq);
++		size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
++		(smu->cpu_actual_soft_max_freq > 0) ? smu->cpu_actual_soft_max_freq : smu->cpu_default_soft_max_freq);
+ 		break;
+ 	case SMU_OD_RANGE:
+-		if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
+-			size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
+-			size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
+-				smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
+-			size += sysfs_emit_at(buf, size, "CCLK: %7uMhz %10uMhz\n",
+-				smu->cpu_default_soft_min_freq, smu->cpu_default_soft_max_freq);
+-		}
++		size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
++		size += sysfs_emit_at(buf, size, "SCLK: %7uMhz %10uMhz\n",
++			smu->gfx_default_hard_min_freq, smu->gfx_default_soft_max_freq);
++		size += sysfs_emit_at(buf, size, "CCLK: %7uMhz %10uMhz\n",
++			smu->cpu_default_soft_min_freq, smu->cpu_default_soft_max_freq);
+ 		break;
+ 	case SMU_SOCCLK:
+ 		/* the level 3 ~ 6 of socclk use the same frequency for vangogh */
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 7473672abd2a91..a608cdbdada4cb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -40,28 +40,29 @@
+ #define SMU_IH_INTERRUPT_CONTEXT_ID_FAN_ABNORMAL        0x8
+ #define SMU_IH_INTERRUPT_CONTEXT_ID_FAN_RECOVERY        0x9
+ 
+-#define smu_cmn_init_soft_gpu_metrics(ptr, frev, crev)         \
+-	do {                                                   \
+-		typecheck(struct gpu_metrics_v##frev##_##crev, \
+-			  typeof(*(ptr)));                     \
+-		struct metrics_table_header *header =          \
+-			(struct metrics_table_header *)(ptr);  \
+-		memset(header, 0xFF, sizeof(*(ptr)));          \
+-		header->format_revision = frev;                \
+-		header->content_revision = crev;               \
+-		header->structure_size = sizeof(*(ptr));       \
++#define smu_cmn_init_soft_gpu_metrics(ptr, frev, crev)                   \
++	do {                                                             \
++		typecheck(struct gpu_metrics_v##frev##_##crev *, (ptr)); \
++		struct gpu_metrics_v##frev##_##crev *tmp = (ptr);        \
++		struct metrics_table_header *header =                    \
++			(struct metrics_table_header *)tmp;              \
++		memset(header, 0xFF, sizeof(*tmp));                      \
++		header->format_revision = frev;                          \
++		header->content_revision = crev;                         \
++		header->structure_size = sizeof(*tmp);                   \
+ 	} while (0)
+ 
+-#define smu_cmn_init_partition_metrics(ptr, frev, crev)                     \
+-	do {                                                                \
+-		typecheck(struct amdgpu_partition_metrics_v##frev##_##crev, \
+-			  typeof(*(ptr)));                                  \
+-		struct metrics_table_header *header =                       \
+-			(struct metrics_table_header *)(ptr);               \
+-		memset(header, 0xFF, sizeof(*(ptr)));                       \
+-		header->format_revision = frev;                             \
+-		header->content_revision = crev;                            \
+-		header->structure_size = sizeof(*(ptr));                    \
++#define smu_cmn_init_partition_metrics(ptr, fr, cr)                        \
++	do {                                                               \
++		typecheck(struct amdgpu_partition_metrics_v##fr##_##cr *,  \
++			  (ptr));                                          \
++		struct amdgpu_partition_metrics_v##fr##_##cr *tmp = (ptr); \
++		struct metrics_table_header *header =                      \
++			(struct metrics_table_header *)tmp;                \
++		memset(header, 0xFF, sizeof(*tmp));                        \
++		header->format_revision = fr;                              \
++		header->content_revision = cr;                             \
++		header->structure_size = sizeof(*tmp);                     \
+ 	} while (0)
+ 
+ extern const int link_speed[];
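
The metrics macros previously typecheck'ed the pointed-to struct through
typeof(*(ptr)); the rewrite typechecks the pointer type directly and
snapshots the argument into a local, so the macro also behaves sanely if
the argument expression has side effects. Minimal illustration of the two
ingredients, with a hypothetical struct (typecheck() comes from
<linux/typecheck.h> and warns at compile time on a mismatch):

    #define init_hdr(ptr)                                        \
            do {                                                 \
                    typecheck(struct my_metrics *, (ptr));       \
                    struct my_metrics *tmp = (ptr);  /* once */  \
                    memset(tmp, 0xFF, sizeof(*tmp));             \
                    tmp->hdr.structure_size = sizeof(*tmp);      \
            } while (0)  /* usable as a single statement */
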
+diff --git a/drivers/gpu/drm/clients/drm_client_setup.c b/drivers/gpu/drm/clients/drm_client_setup.c
+index e17265039ca800..e460ad354de281 100644
+--- a/drivers/gpu/drm/clients/drm_client_setup.c
++++ b/drivers/gpu/drm/clients/drm_client_setup.c
+@@ -2,6 +2,7 @@
+ 
+ #include <drm/clients/drm_client_setup.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_drv.h>
+ #include <drm/drm_fourcc.h>
+ #include <drm/drm_print.h>
+ 
+@@ -31,6 +32,10 @@ MODULE_PARM_DESC(active,
+  */
+ void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format)
+ {
++	if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
++		drm_dbg(dev, "driver does not support mode-setting, skipping DRM clients\n");
++		return;
++	}
+ 
+ #ifdef CONFIG_DRM_FBDEV_EMULATION
+ 	if (!strcmp(drm_client_default, "fbdev")) {
+diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
+index bed2bba20b555c..6ddbc682ec1cd2 100644
+--- a/drivers/gpu/drm/i915/display/intel_fbc.c
++++ b/drivers/gpu/drm/i915/display/intel_fbc.c
+@@ -550,10 +550,6 @@ static void ilk_fbc_deactivate(struct intel_fbc *fbc)
+ 	if (dpfc_ctl & DPFC_CTL_EN) {
+ 		dpfc_ctl &= ~DPFC_CTL_EN;
+ 		intel_de_write(display, ILK_DPFC_CONTROL(fbc->id), dpfc_ctl);
+-
+-		/* wa_18038517565 Enable DPFC clock gating after FBC disable */
+-		if (display->platform.dg2 || DISPLAY_VER(display) >= 14)
+-			fbc_compressor_clkgate_disable_wa(fbc, false);
+ 	}
+ }
+ 
+@@ -1708,6 +1704,10 @@ static void __intel_fbc_disable(struct intel_fbc *fbc)
+ 
+ 	__intel_fbc_cleanup_cfb(fbc);
+ 
++	/* wa_18038517565 Enable DPFC clock gating after FBC disable */
++	if (display->platform.dg2 || DISPLAY_VER(display) >= 14)
++		fbc_compressor_clkgate_disable_wa(fbc, false);
++
+ 	fbc->state.plane = NULL;
+ 	fbc->flip_pending = false;
+ 	fbc->busy_bits = 0;
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 430ad4ef714668..7da0ad854ed200 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -3250,7 +3250,9 @@ static void intel_psr_configure_full_frame_update(struct intel_dp *intel_dp)
+ 
+ static void _psr_invalidate_handle(struct intel_dp *intel_dp)
+ {
+-	if (intel_dp->psr.psr2_sel_fetch_enabled) {
++	struct intel_display *display = to_intel_display(intel_dp);
++
++	if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {
+ 		if (!intel_dp->psr.psr2_sel_fetch_cff_enabled) {
+ 			intel_dp->psr.psr2_sel_fetch_cff_enabled = true;
+ 			intel_psr_configure_full_frame_update(intel_dp);
+@@ -3338,7 +3340,7 @@ static void _psr_flush_handle(struct intel_dp *intel_dp)
+ 	struct intel_display *display = to_intel_display(intel_dp);
+ 	struct drm_i915_private *dev_priv = to_i915(display->drm);
+ 
+-	if (intel_dp->psr.psr2_sel_fetch_enabled) {
++	if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {
+ 		if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
+ 			/* can we turn CFF off? */
+ 			if (intel_dp->psr.busy_frontbuffer_bits == 0)
+@@ -3355,11 +3357,13 @@ static void _psr_flush_handle(struct intel_dp *intel_dp)
+ 		 * existing SU configuration
+ 		 */
+ 		intel_psr_configure_full_frame_update(intel_dp);
+-	}
+ 
+-	intel_psr_force_update(intel_dp);
++		intel_psr_force_update(intel_dp);
++	} else {
++		intel_psr_exit(intel_dp);
++	}
+ 
+-	if (!intel_dp->psr.psr2_sel_fetch_enabled && !intel_dp->psr.active &&
++	if ((!intel_dp->psr.psr2_sel_fetch_enabled || DISPLAY_VER(display) >= 20) &&
+ 	    !intel_dp->psr.busy_frontbuffer_bits)
+ 		queue_work(dev_priv->unordered_wq, &intel_dp->psr.work);
+ }
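
Both the invalidate and flush paths are now gated on DISPLAY_VER(display) < 20: the selective-fetch/CFF handling only applies to pre-Xe3 hardware, while newer display versions simply exit PSR and re-enter on a later update. A condensed, self-contained sketch of the resulting flush control flow; display_ver, sel_fetch_enabled and the helpers below are hypothetical stand-ins, not the i915 API:

#include <stdbool.h>
#include <stdio.h>

struct dp_state {
	int display_ver;
	bool sel_fetch_enabled;
};

static void full_frame_update(void) { puts("CFF full-frame update"); }
static void force_update(void)      { puts("force PSR update"); }
static void psr_exit(void)          { puts("exit PSR"); }

static void flush_handle(const struct dp_state *dp)
{
	if (dp->display_ver < 20 && dp->sel_fetch_enabled) {
		full_frame_update();	/* pre-ver-20 selective-fetch path */
		force_update();
	} else {
		psr_exit();		/* ver >= 20: no CFF path, just exit PSR */
	}
}

int main(void)
{
	struct dp_state lnl = { .display_ver = 20, .sel_fetch_enabled = true };

	flush_handle(&lnl);	/* prints "exit PSR" */
	return 0;
}
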
+diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
+index 3e349d039fc0c4..187a07e0bd9adb 100644
+--- a/drivers/gpu/drm/imagination/pvr_power.c
++++ b/drivers/gpu/drm/imagination/pvr_power.c
+@@ -340,6 +340,63 @@ pvr_power_device_idle(struct device *dev)
+ 	return pvr_power_is_idle(pvr_dev) ? 0 : -EBUSY;
+ }
+ 
++static int
++pvr_power_clear_error(struct pvr_device *pvr_dev)
++{
++	struct device *dev = from_pvr_device(pvr_dev)->dev;
++	int err;
++
++	/* Ensure the device state is known and nothing is happening past this point */
++	pm_runtime_disable(dev);
++
++	/* Attempt to clear the runtime PM error by setting the current state again */
++	if (pm_runtime_status_suspended(dev))
++		err = pm_runtime_set_suspended(dev);
++	else
++		err = pm_runtime_set_active(dev);
++
++	if (err) {
++		drm_err(from_pvr_device(pvr_dev),
++			"%s: Failed to clear runtime PM error (new error %d)\n",
++			__func__, err);
++	}
++
++	pm_runtime_enable(dev);
++
++	return err;
++}
++
++/**
++ * pvr_power_get_clear() - Acquire a power reference, correcting any errors
++ * @pvr_dev: Device pointer
++ *
++ * Attempt to acquire a power reference on the device. If the runtime PM
++ * is in error state, attempt to clear the error and retry.
++ *
++ * Returns:
++ *  * 0 on success, or
++ *  * Any error code returned by pvr_power_get() or the runtime PM API.
++ */
++static int
++pvr_power_get_clear(struct pvr_device *pvr_dev)
++{
++	int err;
++
++	err = pvr_power_get(pvr_dev);
++	if (err == 0)
++		return err;
++
++	drm_warn(from_pvr_device(pvr_dev),
++		 "%s: pvr_power_get returned error %d, attempting recovery\n",
++		 __func__, err);
++
++	err = pvr_power_clear_error(pvr_dev);
++	if (err)
++		return err;
++
++	return pvr_power_get(pvr_dev);
++}
++
+ /**
+  * pvr_power_reset() - Reset the GPU
+  * @pvr_dev: Device pointer
+@@ -364,7 +421,7 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
+ 	 * Take a power reference during the reset. This should prevent any interference with the
+ 	 * power state during reset.
+ 	 */
+-	WARN_ON(pvr_power_get(pvr_dev));
++	WARN_ON(pvr_power_get_clear(pvr_dev));
+ 
+ 	down_write(&pvr_dev->reset_sem);
+ 
+diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
+index 7a2ada6e2d74a9..44c66a29341572 100644
+--- a/drivers/gpu/drm/msm/Makefile
++++ b/drivers/gpu/drm/msm/Makefile
+@@ -195,6 +195,11 @@ ADRENO_HEADERS = \
+ 	generated/a4xx.xml.h \
+ 	generated/a5xx.xml.h \
+ 	generated/a6xx.xml.h \
++	generated/a6xx_descriptors.xml.h \
++	generated/a6xx_enums.xml.h \
++	generated/a6xx_perfcntrs.xml.h \
++	generated/a7xx_enums.xml.h \
++	generated/a7xx_perfcntrs.xml.h \
+ 	generated/a6xx_gmu.xml.h \
+ 	generated/adreno_common.xml.h \
+ 	generated/adreno_pm4.xml.h \
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+index 70f7ad806c3407..a45819e04aabc1 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+@@ -1335,7 +1335,7 @@ static const uint32_t a7xx_pwrup_reglist_regs[] = {
+ 	REG_A6XX_RB_NC_MODE_CNTL,
+ 	REG_A6XX_RB_CMP_DBG_ECO_CNTL,
+ 	REG_A7XX_GRAS_NC_MODE_CNTL,
+-	REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
++	REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE,
+ 	REG_A6XX_UCHE_GBIF_GX_CONFIG,
+ 	REG_A6XX_UCHE_CLIENT_PF,
+ 	REG_A6XX_TPL1_DBG_ECO_CNTL1,
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+index 9201a53dd341bf..6e71f617fc3d0d 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+@@ -6,6 +6,10 @@
+ 
+ 
+ #include "adreno_gpu.h"
++#include "a6xx_enums.xml.h"
++#include "a7xx_enums.xml.h"
++#include "a6xx_perfcntrs.xml.h"
++#include "a7xx_perfcntrs.xml.h"
+ #include "a6xx.xml.h"
+ 
+ #include "a6xx_gmu.h"
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 341a72a6740182..a85d3df7a5facf 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -158,7 +158,7 @@ static int a6xx_crashdumper_run(struct msm_gpu *gpu,
+ 	/* Make sure all pending memory writes are posted */
+ 	wmb();
+ 
+-	gpu_write64(gpu, REG_A6XX_CP_CRASH_SCRIPT_BASE, dumper->iova);
++	gpu_write64(gpu, REG_A6XX_CP_CRASH_DUMP_SCRIPT_BASE, dumper->iova);
+ 
+ 	gpu_write(gpu, REG_A6XX_CP_CRASH_DUMP_CNTL, 1);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+index e545106c70be71..95d93ac6812a4d 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.h
+@@ -212,7 +212,7 @@ static const struct a6xx_shader_block {
+ 	SHADER(A6XX_SP_LB_5_DATA, 0x200),
+ 	SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x800),
+ 	SHADER(A6XX_SP_CB_LEGACY_DATA, 0x280),
+-	SHADER(A6XX_SP_UAV_DATA, 0x80),
++	SHADER(A6XX_SP_GFX_UAV_BASE_DATA, 0x80),
+ 	SHADER(A6XX_SP_INST_TAG, 0x80),
+ 	SHADER(A6XX_SP_CB_BINDLESS_TAG, 0x80),
+ 	SHADER(A6XX_SP_TMO_UMO_TAG, 0x80),
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+index 3b17fd2dba8911..e6084e6999eb84 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+@@ -210,7 +210,7 @@ void a6xx_preempt_hw_init(struct msm_gpu *gpu)
+ 	gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
+ 
+ 	/* Enable the GMEM save/restore feature for preemption */
+-	gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
++	gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE, 0x1);
+ 
+ 	/* Reset the preemption state */
+ 	set_preempt_state(a6xx_gpu, PREEMPT_NONE);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+index 9a327d543f27de..e02cabb39f194c 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
++++ b/drivers/gpu/drm/msm/adreno/adreno_gen7_9_0_snapshot.h
+@@ -1311,8 +1311,8 @@ static struct a6xx_indexed_registers gen7_9_0_cp_indexed_reg_list[] = {
+ 		REG_A7XX_CP_BV_SQE_UCODE_DBG_DATA, 0x08000},
+ 	{ "CP_BV_SQE_STAT_ADDR", REG_A7XX_CP_BV_SQE_STAT_ADDR,
+ 		REG_A7XX_CP_BV_SQE_STAT_DATA, 0x00040},
+-	{ "CP_RESOURCE_TBL", REG_A7XX_CP_RESOURCE_TBL_DBG_ADDR,
+-		REG_A7XX_CP_RESOURCE_TBL_DBG_DATA, 0x04100},
++	{ "CP_RESOURCE_TBL", REG_A7XX_CP_RESOURCE_TABLE_DBG_ADDR,
++		REG_A7XX_CP_RESOURCE_TABLE_DBG_DATA, 0x04100},
+ 	{ "CP_LPAC_DRAW_STATE_ADDR", REG_A7XX_CP_LPAC_DRAW_STATE_ADDR,
+ 		REG_A7XX_CP_LPAC_DRAW_STATE_DATA, 0x00200},
+ 	{ "CP_LPAC_ROQ", REG_A7XX_CP_LPAC_ROQ_DBG_ADDR,
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index d007687c24467d..f0d016ce8e11cb 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -555,6 +555,7 @@ static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
+ 					   u32 metadata_size)
+ {
+ 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
++	void *new_metadata;
+ 	void *buf;
+ 	int ret;
+ 
+@@ -572,8 +573,14 @@ static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
+ 	if (ret)
+ 		goto out;
+ 
+-	msm_obj->metadata =
++	new_metadata =
+ 		krealloc(msm_obj->metadata, metadata_size, GFP_KERNEL);
++	if (!new_metadata) {
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	msm_obj->metadata = new_metadata;
+ 	msm_obj->metadata_size = metadata_size;
+ 	memcpy(msm_obj->metadata, buf, metadata_size);
+ 
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 2995e80fec3ba9..ebd8a360335966 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -963,7 +963,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+ 	uint64_t off = drm_vma_node_start(&obj->vma_node);
+ 	const char *madv;
+ 
+-	msm_gem_lock(obj);
++	if (!msm_gem_trylock(obj))
++		return;
+ 
+ 	stats->all.count++;
+ 	stats->all.size += obj->size;
+diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
+index ba5c4ff76292ca..ea001b0a6b6466 100644
+--- a/drivers/gpu/drm/msm/msm_gem.h
++++ b/drivers/gpu/drm/msm/msm_gem.h
+@@ -188,6 +188,12 @@ msm_gem_lock(struct drm_gem_object *obj)
+ 	dma_resv_lock(obj->resv, NULL);
+ }
+ 
++static inline bool __must_check
++msm_gem_trylock(struct drm_gem_object *obj)
++{
++	return dma_resv_trylock(obj->resv);
++}
++
+ static inline int
+ msm_gem_lock_interruptible(struct drm_gem_object *obj)
+ {
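
msm_gem_describe() is a debugfs/diagnostic path, so it now uses the new msm_gem_trylock() and skips objects whose reservation lock is contended rather than sleeping on it, which keeps an informational dump from stalling or deadlocking against an active holder. A user-space sketch of the same pattern, using a pthread mutex purely as an analogue for dma_resv_trylock():

#include <pthread.h>
#include <stdio.h>

struct object {
	pthread_mutex_t lock;
	size_t size;
};

static void describe(struct object *obj)
{
	if (pthread_mutex_trylock(&obj->lock) != 0)
		return;		/* busy: skip this object rather than block */

	printf("object size: %zu\n", obj->size);
	pthread_mutex_unlock(&obj->lock);
}

int main(void)
{
	struct object obj = { PTHREAD_MUTEX_INITIALIZER, 4096 };

	describe(&obj);		/* lock uncontended: prints the size */
	return 0;
}
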
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
+index 2db425abf0f3cc..d860fd94feae85 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
+@@ -5,6 +5,11 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ <import file="freedreno_copyright.xml"/>
+ <import file="adreno/adreno_common.xml"/>
+ <import file="adreno/adreno_pm4.xml"/>
++<import file="adreno/a6xx_enums.xml"/>
++<import file="adreno/a7xx_enums.xml"/>
++<import file="adreno/a6xx_perfcntrs.xml"/>
++<import file="adreno/a7xx_perfcntrs.xml"/>
++<import file="adreno/a6xx_descriptors.xml"/>
+ 
+ <!--
+ Each register that is actually being used by driver should have "usage" defined,
+@@ -20,2205 +25,6 @@ is either overwritten by renderpass/blit (ib2) or not used if not overwritten
+ by a particular renderpass/blit.
+ -->
+ 
+-<!-- these might be same as a5xx -->
+-<enum name="a6xx_tile_mode">
+-	<value name="TILE6_LINEAR" value="0"/>
+-	<value name="TILE6_2" value="2"/>
+-	<value name="TILE6_3" value="3"/>
+-</enum>
+-
+-<enum name="a6xx_format">
+-	<value value="0x02" name="FMT6_A8_UNORM"/>
+-	<value value="0x03" name="FMT6_8_UNORM"/>
+-	<value value="0x04" name="FMT6_8_SNORM"/>
+-	<value value="0x05" name="FMT6_8_UINT"/>
+-	<value value="0x06" name="FMT6_8_SINT"/>
+-
+-	<value value="0x08" name="FMT6_4_4_4_4_UNORM"/>
+-	<value value="0x0a" name="FMT6_5_5_5_1_UNORM"/>
+-	<value value="0x0c" name="FMT6_1_5_5_5_UNORM"/> <!-- read only -->
+-	<value value="0x0e" name="FMT6_5_6_5_UNORM"/>
+-
+-	<value value="0x0f" name="FMT6_8_8_UNORM"/>
+-	<value value="0x10" name="FMT6_8_8_SNORM"/>
+-	<value value="0x11" name="FMT6_8_8_UINT"/>
+-	<value value="0x12" name="FMT6_8_8_SINT"/>
+-	<value value="0x13" name="FMT6_L8_A8_UNORM"/>
+-
+-	<value value="0x15" name="FMT6_16_UNORM"/>
+-	<value value="0x16" name="FMT6_16_SNORM"/>
+-	<value value="0x17" name="FMT6_16_FLOAT"/>
+-	<value value="0x18" name="FMT6_16_UINT"/>
+-	<value value="0x19" name="FMT6_16_SINT"/>
+-
+-	<value value="0x21" name="FMT6_8_8_8_UNORM"/>
+-	<value value="0x22" name="FMT6_8_8_8_SNORM"/>
+-	<value value="0x23" name="FMT6_8_8_8_UINT"/>
+-	<value value="0x24" name="FMT6_8_8_8_SINT"/>
+-
+-	<value value="0x30" name="FMT6_8_8_8_8_UNORM"/>
+-	<value value="0x31" name="FMT6_8_8_8_X8_UNORM"/> <!-- samples 1 for alpha -->
+-	<value value="0x32" name="FMT6_8_8_8_8_SNORM"/>
+-	<value value="0x33" name="FMT6_8_8_8_8_UINT"/>
+-	<value value="0x34" name="FMT6_8_8_8_8_SINT"/>
+-
+-	<value value="0x35" name="FMT6_9_9_9_E5_FLOAT"/>
+-
+-	<value value="0x36" name="FMT6_10_10_10_2_UNORM"/>
+-	<value value="0x37" name="FMT6_10_10_10_2_UNORM_DEST"/>
+-	<value value="0x39" name="FMT6_10_10_10_2_SNORM"/>
+-	<value value="0x3a" name="FMT6_10_10_10_2_UINT"/>
+-	<value value="0x3b" name="FMT6_10_10_10_2_SINT"/>
+-
+-	<value value="0x42" name="FMT6_11_11_10_FLOAT"/>
+-
+-	<value value="0x43" name="FMT6_16_16_UNORM"/>
+-	<value value="0x44" name="FMT6_16_16_SNORM"/>
+-	<value value="0x45" name="FMT6_16_16_FLOAT"/>
+-	<value value="0x46" name="FMT6_16_16_UINT"/>
+-	<value value="0x47" name="FMT6_16_16_SINT"/>
+-
+-	<value value="0x48" name="FMT6_32_UNORM"/>
+-	<value value="0x49" name="FMT6_32_SNORM"/>
+-	<value value="0x4a" name="FMT6_32_FLOAT"/>
+-	<value value="0x4b" name="FMT6_32_UINT"/>
+-	<value value="0x4c" name="FMT6_32_SINT"/>
+-	<value value="0x4d" name="FMT6_32_FIXED"/>
+-
+-	<value value="0x58" name="FMT6_16_16_16_UNORM"/>
+-	<value value="0x59" name="FMT6_16_16_16_SNORM"/>
+-	<value value="0x5a" name="FMT6_16_16_16_FLOAT"/>
+-	<value value="0x5b" name="FMT6_16_16_16_UINT"/>
+-	<value value="0x5c" name="FMT6_16_16_16_SINT"/>
+-
+-	<value value="0x60" name="FMT6_16_16_16_16_UNORM"/>
+-	<value value="0x61" name="FMT6_16_16_16_16_SNORM"/>
+-	<value value="0x62" name="FMT6_16_16_16_16_FLOAT"/>
+-	<value value="0x63" name="FMT6_16_16_16_16_UINT"/>
+-	<value value="0x64" name="FMT6_16_16_16_16_SINT"/>
+-
+-	<value value="0x65" name="FMT6_32_32_UNORM"/>
+-	<value value="0x66" name="FMT6_32_32_SNORM"/>
+-	<value value="0x67" name="FMT6_32_32_FLOAT"/>
+-	<value value="0x68" name="FMT6_32_32_UINT"/>
+-	<value value="0x69" name="FMT6_32_32_SINT"/>
+-	<value value="0x6a" name="FMT6_32_32_FIXED"/>
+-
+-	<value value="0x70" name="FMT6_32_32_32_UNORM"/>
+-	<value value="0x71" name="FMT6_32_32_32_SNORM"/>
+-	<value value="0x72" name="FMT6_32_32_32_UINT"/>
+-	<value value="0x73" name="FMT6_32_32_32_SINT"/>
+-	<value value="0x74" name="FMT6_32_32_32_FLOAT"/>
+-	<value value="0x75" name="FMT6_32_32_32_FIXED"/>
+-
+-	<value value="0x80" name="FMT6_32_32_32_32_UNORM"/>
+-	<value value="0x81" name="FMT6_32_32_32_32_SNORM"/>
+-	<value value="0x82" name="FMT6_32_32_32_32_FLOAT"/>
+-	<value value="0x83" name="FMT6_32_32_32_32_UINT"/>
+-	<value value="0x84" name="FMT6_32_32_32_32_SINT"/>
+-	<value value="0x85" name="FMT6_32_32_32_32_FIXED"/>
+-
+-	<value value="0x8c" name="FMT6_G8R8B8R8_422_UNORM"/> <!-- UYVY -->
+-	<value value="0x8d" name="FMT6_R8G8R8B8_422_UNORM"/> <!-- YUYV -->
+-	<value value="0x8e" name="FMT6_R8_G8B8_2PLANE_420_UNORM"/> <!-- NV12 -->
+-	<value value="0x8f" name="FMT6_NV21"/>
+-	<value value="0x90" name="FMT6_R8_G8_B8_3PLANE_420_UNORM"/> <!-- YV12 -->
+-
+-	<value value="0x91" name="FMT6_Z24_UNORM_S8_UINT_AS_R8G8B8A8"/>
+-
+-	<!-- Note: tiling/UBWC for these may be different from equivalent formats
+-	For example FMT6_NV12_Y is not compatible with FMT6_8_UNORM
+-	-->
+-	<value value="0x94" name="FMT6_NV12_Y"/>
+-	<value value="0x95" name="FMT6_NV12_UV"/>
+-	<value value="0x96" name="FMT6_NV12_VU"/>
+-	<value value="0x97" name="FMT6_NV12_4R"/>
+-	<value value="0x98" name="FMT6_NV12_4R_Y"/>
+-	<value value="0x99" name="FMT6_NV12_4R_UV"/>
+-	<value value="0x9a" name="FMT6_P010"/>
+-	<value value="0x9b" name="FMT6_P010_Y"/>
+-	<value value="0x9c" name="FMT6_P010_UV"/>
+-	<value value="0x9d" name="FMT6_TP10"/>
+-	<value value="0x9e" name="FMT6_TP10_Y"/>
+-	<value value="0x9f" name="FMT6_TP10_UV"/>
+-
+-	<value value="0xa0" name="FMT6_Z24_UNORM_S8_UINT"/>
+-
+-	<value value="0xab" name="FMT6_ETC2_RG11_UNORM"/>
+-	<value value="0xac" name="FMT6_ETC2_RG11_SNORM"/>
+-	<value value="0xad" name="FMT6_ETC2_R11_UNORM"/>
+-	<value value="0xae" name="FMT6_ETC2_R11_SNORM"/>
+-	<value value="0xaf" name="FMT6_ETC1"/>
+-	<value value="0xb0" name="FMT6_ETC2_RGB8"/>
+-	<value value="0xb1" name="FMT6_ETC2_RGBA8"/>
+-	<value value="0xb2" name="FMT6_ETC2_RGB8A1"/>
+-	<value value="0xb3" name="FMT6_DXT1"/>
+-	<value value="0xb4" name="FMT6_DXT3"/>
+-	<value value="0xb5" name="FMT6_DXT5"/>
+-	<value value="0xb7" name="FMT6_RGTC1_UNORM"/>
+-	<value value="0xb8" name="FMT6_RGTC1_SNORM"/>
+-	<value value="0xbb" name="FMT6_RGTC2_UNORM"/>
+-	<value value="0xbc" name="FMT6_RGTC2_SNORM"/>
+-	<value value="0xbe" name="FMT6_BPTC_UFLOAT"/>
+-	<value value="0xbf" name="FMT6_BPTC_FLOAT"/>
+-	<value value="0xc0" name="FMT6_BPTC"/>
+-	<value value="0xc1" name="FMT6_ASTC_4x4"/>
+-	<value value="0xc2" name="FMT6_ASTC_5x4"/>
+-	<value value="0xc3" name="FMT6_ASTC_5x5"/>
+-	<value value="0xc4" name="FMT6_ASTC_6x5"/>
+-	<value value="0xc5" name="FMT6_ASTC_6x6"/>
+-	<value value="0xc6" name="FMT6_ASTC_8x5"/>
+-	<value value="0xc7" name="FMT6_ASTC_8x6"/>
+-	<value value="0xc8" name="FMT6_ASTC_8x8"/>
+-	<value value="0xc9" name="FMT6_ASTC_10x5"/>
+-	<value value="0xca" name="FMT6_ASTC_10x6"/>
+-	<value value="0xcb" name="FMT6_ASTC_10x8"/>
+-	<value value="0xcc" name="FMT6_ASTC_10x10"/>
+-	<value value="0xcd" name="FMT6_ASTC_12x10"/>
+-	<value value="0xce" name="FMT6_ASTC_12x12"/>
+-
+-	<!-- for sampling stencil (integer, 2nd channel), not available on a630 -->
+-	<value value="0xea" name="FMT6_Z24_UINT_S8_UINT"/>
+-
+-	<!-- Not a hw enum, used internally in driver -->
+-	<value value="0xff" name="FMT6_NONE"/>
+-
+-</enum>
+-
+-<!-- probably same as a5xx -->
+-<enum name="a6xx_polygon_mode">
+-	<value name="POLYMODE6_POINTS" value="1"/>
+-	<value name="POLYMODE6_LINES" value="2"/>
+-	<value name="POLYMODE6_TRIANGLES" value="3"/>
+-</enum>
+-
+-<enum name="a6xx_depth_format">
+-	<value name="DEPTH6_NONE" value="0"/>
+-	<value name="DEPTH6_16" value="1"/>
+-	<value name="DEPTH6_24_8" value="2"/>
+-	<value name="DEPTH6_32" value="4"/>
+-</enum>
+-
+-<bitset name="a6x_cp_protect" inline="yes">
+-	<bitfield name="BASE_ADDR" low="0" high="17"/>
+-	<bitfield name="MASK_LEN" low="18" high="30"/>
+-	<bitfield name="READ" pos="31" type="boolean"/>
+-</bitset>
+-
+-<enum name="a6xx_shader_id">
+-	<value value="0x9" name="A6XX_TP0_TMO_DATA"/>
+-	<value value="0xa" name="A6XX_TP0_SMO_DATA"/>
+-	<value value="0xb" name="A6XX_TP0_MIPMAP_BASE_DATA"/>
+-	<value value="0x19" name="A6XX_TP1_TMO_DATA"/>
+-	<value value="0x1a" name="A6XX_TP1_SMO_DATA"/>
+-	<value value="0x1b" name="A6XX_TP1_MIPMAP_BASE_DATA"/>
+-	<value value="0x29" name="A6XX_SP_INST_DATA"/>
+-	<value value="0x2a" name="A6XX_SP_LB_0_DATA"/>
+-	<value value="0x2b" name="A6XX_SP_LB_1_DATA"/>
+-	<value value="0x2c" name="A6XX_SP_LB_2_DATA"/>
+-	<value value="0x2d" name="A6XX_SP_LB_3_DATA"/>
+-	<value value="0x2e" name="A6XX_SP_LB_4_DATA"/>
+-	<value value="0x2f" name="A6XX_SP_LB_5_DATA"/>
+-	<value value="0x30" name="A6XX_SP_CB_BINDLESS_DATA"/>
+-	<value value="0x31" name="A6XX_SP_CB_LEGACY_DATA"/>
+-	<value value="0x32" name="A6XX_SP_UAV_DATA"/>
+-	<value value="0x33" name="A6XX_SP_INST_TAG"/>
+-	<value value="0x34" name="A6XX_SP_CB_BINDLESS_TAG"/>
+-	<value value="0x35" name="A6XX_SP_TMO_UMO_TAG"/>
+-	<value value="0x36" name="A6XX_SP_SMO_TAG"/>
+-	<value value="0x37" name="A6XX_SP_STATE_DATA"/>
+-	<value value="0x49" name="A6XX_HLSQ_CHUNK_CVS_RAM"/>
+-	<value value="0x4a" name="A6XX_HLSQ_CHUNK_CPS_RAM"/>
+-	<value value="0x4b" name="A6XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
+-	<value value="0x4c" name="A6XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
+-	<value value="0x4d" name="A6XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
+-	<value value="0x4e" name="A6XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
+-	<value value="0x50" name="A6XX_HLSQ_CVS_MISC_RAM"/>
+-	<value value="0x51" name="A6XX_HLSQ_CPS_MISC_RAM"/>
+-	<value value="0x52" name="A6XX_HLSQ_INST_RAM"/>
+-	<value value="0x53" name="A6XX_HLSQ_GFX_CVS_CONST_RAM"/>
+-	<value value="0x54" name="A6XX_HLSQ_GFX_CPS_CONST_RAM"/>
+-	<value value="0x55" name="A6XX_HLSQ_CVS_MISC_RAM_TAG"/>
+-	<value value="0x56" name="A6XX_HLSQ_CPS_MISC_RAM_TAG"/>
+-	<value value="0x57" name="A6XX_HLSQ_INST_RAM_TAG"/>
+-	<value value="0x58" name="A6XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
+-	<value value="0x59" name="A6XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
+-	<value value="0x5a" name="A6XX_HLSQ_PWR_REST_RAM"/>
+-	<value value="0x5b" name="A6XX_HLSQ_PWR_REST_TAG"/>
+-	<value value="0x60" name="A6XX_HLSQ_DATAPATH_META"/>
+-	<value value="0x61" name="A6XX_HLSQ_FRONTEND_META"/>
+-	<value value="0x62" name="A6XX_HLSQ_INDIRECT_META"/>
+-	<value value="0x63" name="A6XX_HLSQ_BACKEND_META"/>
+-	<value value="0x70" name="A6XX_SP_LB_6_DATA"/>
+-	<value value="0x71" name="A6XX_SP_LB_7_DATA"/>
+-	<value value="0x73" name="A6XX_HLSQ_INST_RAM_1"/>
+-</enum>
+-
+-<enum name="a7xx_statetype_id">
+-	<value value="0" name="A7XX_TP0_NCTX_REG"/>
+-	<value value="1" name="A7XX_TP0_CTX0_3D_CVS_REG"/>
+-	<value value="2" name="A7XX_TP0_CTX0_3D_CPS_REG"/>
+-	<value value="3" name="A7XX_TP0_CTX1_3D_CVS_REG"/>
+-	<value value="4" name="A7XX_TP0_CTX1_3D_CPS_REG"/>
+-	<value value="5" name="A7XX_TP0_CTX2_3D_CPS_REG"/>
+-	<value value="6" name="A7XX_TP0_CTX3_3D_CPS_REG"/>
+-	<value value="9" name="A7XX_TP0_TMO_DATA"/>
+-	<value value="10" name="A7XX_TP0_SMO_DATA"/>
+-	<value value="11" name="A7XX_TP0_MIPMAP_BASE_DATA"/>
+-	<value value="32" name="A7XX_SP_NCTX_REG"/>
+-	<value value="33" name="A7XX_SP_CTX0_3D_CVS_REG"/>
+-	<value value="34" name="A7XX_SP_CTX0_3D_CPS_REG"/>
+-	<value value="35" name="A7XX_SP_CTX1_3D_CVS_REG"/>
+-	<value value="36" name="A7XX_SP_CTX1_3D_CPS_REG"/>
+-	<value value="37" name="A7XX_SP_CTX2_3D_CPS_REG"/>
+-	<value value="38" name="A7XX_SP_CTX3_3D_CPS_REG"/>
+-	<value value="39" name="A7XX_SP_INST_DATA"/>
+-	<value value="40" name="A7XX_SP_INST_DATA_1"/>
+-	<value value="41" name="A7XX_SP_LB_0_DATA"/>
+-	<value value="42" name="A7XX_SP_LB_1_DATA"/>
+-	<value value="43" name="A7XX_SP_LB_2_DATA"/>
+-	<value value="44" name="A7XX_SP_LB_3_DATA"/>
+-	<value value="45" name="A7XX_SP_LB_4_DATA"/>
+-	<value value="46" name="A7XX_SP_LB_5_DATA"/>
+-	<value value="47" name="A7XX_SP_LB_6_DATA"/>
+-	<value value="48" name="A7XX_SP_LB_7_DATA"/>
+-	<value value="49" name="A7XX_SP_CB_RAM"/>
+-	<value value="50" name="A7XX_SP_LB_13_DATA"/>
+-	<value value="51" name="A7XX_SP_LB_14_DATA"/>
+-	<value value="52" name="A7XX_SP_INST_TAG"/>
+-	<value value="53" name="A7XX_SP_INST_DATA_2"/>
+-	<value value="54" name="A7XX_SP_TMO_TAG"/>
+-	<value value="55" name="A7XX_SP_SMO_TAG"/>
+-	<value value="56" name="A7XX_SP_STATE_DATA"/>
+-	<value value="57" name="A7XX_SP_HWAVE_RAM"/>
+-	<value value="58" name="A7XX_SP_L0_INST_BUF"/>
+-	<value value="59" name="A7XX_SP_LB_8_DATA"/>
+-	<value value="60" name="A7XX_SP_LB_9_DATA"/>
+-	<value value="61" name="A7XX_SP_LB_10_DATA"/>
+-	<value value="62" name="A7XX_SP_LB_11_DATA"/>
+-	<value value="63" name="A7XX_SP_LB_12_DATA"/>
+-	<value value="64" name="A7XX_HLSQ_DATAPATH_DSTR_META"/>
+-	<value value="67" name="A7XX_HLSQ_L2STC_TAG_RAM"/>
+-	<value value="68" name="A7XX_HLSQ_L2STC_INFO_CMD"/>
+-	<value value="69" name="A7XX_HLSQ_CVS_BE_CTXT_BUF_RAM_TAG"/>
+-	<value value="70" name="A7XX_HLSQ_CPS_BE_CTXT_BUF_RAM_TAG"/>
+-	<value value="71" name="A7XX_HLSQ_GFX_CVS_BE_CTXT_BUF_RAM"/>
+-	<value value="72" name="A7XX_HLSQ_GFX_CPS_BE_CTXT_BUF_RAM"/>
+-	<value value="73" name="A7XX_HLSQ_CHUNK_CVS_RAM"/>
+-	<value value="74" name="A7XX_HLSQ_CHUNK_CPS_RAM"/>
+-	<value value="75" name="A7XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
+-	<value value="76" name="A7XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
+-	<value value="77" name="A7XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
+-	<value value="78" name="A7XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
+-	<value value="79" name="A7XX_HLSQ_CVS_MISC_RAM"/>
+-	<value value="80" name="A7XX_HLSQ_CPS_MISC_RAM"/>
+-	<value value="81" name="A7XX_HLSQ_CPS_MISC_RAM_1"/>
+-	<value value="82" name="A7XX_HLSQ_INST_RAM"/>
+-	<value value="83" name="A7XX_HLSQ_GFX_CVS_CONST_RAM"/>
+-	<value value="84" name="A7XX_HLSQ_GFX_CPS_CONST_RAM"/>
+-	<value value="85" name="A7XX_HLSQ_CVS_MISC_RAM_TAG"/>
+-	<value value="86" name="A7XX_HLSQ_CPS_MISC_RAM_TAG"/>
+-	<value value="87" name="A7XX_HLSQ_INST_RAM_TAG"/>
+-	<value value="88" name="A7XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
+-	<value value="89" name="A7XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
+-	<value value="90" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM"/>
+-	<value value="91" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM_TAG"/>
+-	<value value="92" name="A7XX_HLSQ_INST_RAM_1"/>
+-	<value value="93" name="A7XX_HLSQ_STPROC_META"/>
+-	<value value="94" name="A7XX_HLSQ_BV_BE_META"/>
+-	<value value="95" name="A7XX_HLSQ_INST_RAM_2"/>
+-	<value value="96" name="A7XX_HLSQ_DATAPATH_META"/>
+-	<value value="97" name="A7XX_HLSQ_FRONTEND_META"/>
+-	<value value="98" name="A7XX_HLSQ_INDIRECT_META"/>
+-	<value value="99" name="A7XX_HLSQ_BACKEND_META"/>
+-</enum>
+-
+-<enum name="a6xx_debugbus_id">
+-	<value value="0x1" name="A6XX_DBGBUS_CP"/>
+-	<value value="0x2" name="A6XX_DBGBUS_RBBM"/>
+-	<value value="0x3" name="A6XX_DBGBUS_VBIF"/>
+-	<value value="0x4" name="A6XX_DBGBUS_HLSQ"/>
+-	<value value="0x5" name="A6XX_DBGBUS_UCHE"/>
+-	<value value="0x6" name="A6XX_DBGBUS_DPM"/>
+-	<value value="0x7" name="A6XX_DBGBUS_TESS"/>
+-	<value value="0x8" name="A6XX_DBGBUS_PC"/>
+-	<value value="0x9" name="A6XX_DBGBUS_VFDP"/>
+-	<value value="0xa" name="A6XX_DBGBUS_VPC"/>
+-	<value value="0xb" name="A6XX_DBGBUS_TSE"/>
+-	<value value="0xc" name="A6XX_DBGBUS_RAS"/>
+-	<value value="0xd" name="A6XX_DBGBUS_VSC"/>
+-	<value value="0xe" name="A6XX_DBGBUS_COM"/>
+-	<value value="0x10" name="A6XX_DBGBUS_LRZ"/>
+-	<value value="0x11" name="A6XX_DBGBUS_A2D"/>
+-	<value value="0x12" name="A6XX_DBGBUS_CCUFCHE"/>
+-	<value value="0x13" name="A6XX_DBGBUS_GMU_CX"/>
+-	<value value="0x14" name="A6XX_DBGBUS_RBP"/>
+-	<value value="0x15" name="A6XX_DBGBUS_DCS"/>
+-	<value value="0x16" name="A6XX_DBGBUS_DBGC"/>
+-	<value value="0x17" name="A6XX_DBGBUS_CX"/>
+-	<value value="0x18" name="A6XX_DBGBUS_GMU_GX"/>
+-	<value value="0x19" name="A6XX_DBGBUS_TPFCHE"/>
+-	<value value="0x1a" name="A6XX_DBGBUS_GBIF_GX"/>
+-	<value value="0x1d" name="A6XX_DBGBUS_GPC"/>
+-	<value value="0x1e" name="A6XX_DBGBUS_LARC"/>
+-	<value value="0x1f" name="A6XX_DBGBUS_HLSQ_SPTP"/>
+-	<value value="0x20" name="A6XX_DBGBUS_RB_0"/>
+-	<value value="0x21" name="A6XX_DBGBUS_RB_1"/>
+-	<value value="0x22" name="A6XX_DBGBUS_RB_2"/>
+-	<value value="0x24" name="A6XX_DBGBUS_UCHE_WRAPPER"/>
+-	<value value="0x28" name="A6XX_DBGBUS_CCU_0"/>
+-	<value value="0x29" name="A6XX_DBGBUS_CCU_1"/>
+-	<value value="0x2a" name="A6XX_DBGBUS_CCU_2"/>
+-	<value value="0x38" name="A6XX_DBGBUS_VFD_0"/>
+-	<value value="0x39" name="A6XX_DBGBUS_VFD_1"/>
+-	<value value="0x3a" name="A6XX_DBGBUS_VFD_2"/>
+-	<value value="0x3b" name="A6XX_DBGBUS_VFD_3"/>
+-	<value value="0x3c" name="A6XX_DBGBUS_VFD_4"/>
+-	<value value="0x3d" name="A6XX_DBGBUS_VFD_5"/>
+-	<value value="0x40" name="A6XX_DBGBUS_SP_0"/>
+-	<value value="0x41" name="A6XX_DBGBUS_SP_1"/>
+-	<value value="0x42" name="A6XX_DBGBUS_SP_2"/>
+-	<value value="0x48" name="A6XX_DBGBUS_TPL1_0"/>
+-	<value value="0x49" name="A6XX_DBGBUS_TPL1_1"/>
+-	<value value="0x4a" name="A6XX_DBGBUS_TPL1_2"/>
+-	<value value="0x4b" name="A6XX_DBGBUS_TPL1_3"/>
+-	<value value="0x4c" name="A6XX_DBGBUS_TPL1_4"/>
+-	<value value="0x4d" name="A6XX_DBGBUS_TPL1_5"/>
+-	<value value="0x58" name="A6XX_DBGBUS_SPTP_0"/>
+-	<value value="0x59" name="A6XX_DBGBUS_SPTP_1"/>
+-	<value value="0x5a" name="A6XX_DBGBUS_SPTP_2"/>
+-	<value value="0x5b" name="A6XX_DBGBUS_SPTP_3"/>
+-	<value value="0x5c" name="A6XX_DBGBUS_SPTP_4"/>
+-	<value value="0x5d" name="A6XX_DBGBUS_SPTP_5"/>
+-</enum>
+-
+-<enum name="a7xx_state_location">
+-	<value value="0" name="A7XX_HLSQ_STATE"/>
+-	<value value="1" name="A7XX_HLSQ_DP"/>
+-	<value value="2" name="A7XX_SP_TOP"/>
+-	<value value="3" name="A7XX_USPTP"/>
+-	<value value="4" name="A7XX_HLSQ_DP_STR"/>
+-</enum>
+-
+-<enum name="a7xx_pipe">
+-	<value value="0" name="A7XX_PIPE_NONE"/>
+-	<value value="1" name="A7XX_PIPE_BR"/>
+-	<value value="2" name="A7XX_PIPE_BV"/>
+-	<value value="3" name="A7XX_PIPE_LPAC"/>
+-</enum>
+-
+-<enum name="a7xx_cluster">
+-	<value value="0" name="A7XX_CLUSTER_NONE"/>
+-	<value value="1" name="A7XX_CLUSTER_FE"/>
+-	<value value="2" name="A7XX_CLUSTER_SP_VS"/>
+-	<value value="3" name="A7XX_CLUSTER_PC_VS"/>
+-	<value value="4" name="A7XX_CLUSTER_GRAS"/>
+-	<value value="5" name="A7XX_CLUSTER_SP_PS"/>
+-	<value value="6" name="A7XX_CLUSTER_VPC_PS"/>
+-	<value value="7" name="A7XX_CLUSTER_PS"/>
+-</enum>
+-
+-<enum name="a7xx_debugbus_id">
+-	<value value="1" name="A7XX_DBGBUS_CP_0_0"/>
+-	<value value="2" name="A7XX_DBGBUS_CP_0_1"/>
+-	<value value="3" name="A7XX_DBGBUS_RBBM"/>
+-	<value value="5" name="A7XX_DBGBUS_GBIF_GX"/>
+-	<value value="6" name="A7XX_DBGBUS_GBIF_CX"/>
+-	<value value="7" name="A7XX_DBGBUS_HLSQ"/>
+-	<value value="9" name="A7XX_DBGBUS_UCHE_0"/>
+-	<value value="10" name="A7XX_DBGBUS_UCHE_1"/>
+-	<value value="13" name="A7XX_DBGBUS_TESS_BR"/>
+-	<value value="14" name="A7XX_DBGBUS_TESS_BV"/>
+-	<value value="17" name="A7XX_DBGBUS_PC_BR"/>
+-	<value value="18" name="A7XX_DBGBUS_PC_BV"/>
+-	<value value="21" name="A7XX_DBGBUS_VFDP_BR"/>
+-	<value value="22" name="A7XX_DBGBUS_VFDP_BV"/>
+-	<value value="25" name="A7XX_DBGBUS_VPC_BR"/>
+-	<value value="26" name="A7XX_DBGBUS_VPC_BV"/>
+-	<value value="29" name="A7XX_DBGBUS_TSE_BR"/>
+-	<value value="30" name="A7XX_DBGBUS_TSE_BV"/>
+-	<value value="33" name="A7XX_DBGBUS_RAS_BR"/>
+-	<value value="34" name="A7XX_DBGBUS_RAS_BV"/>
+-	<value value="37" name="A7XX_DBGBUS_VSC"/>
+-	<value value="39" name="A7XX_DBGBUS_COM_0"/>
+-	<value value="43" name="A7XX_DBGBUS_LRZ_BR"/>
+-	<value value="44" name="A7XX_DBGBUS_LRZ_BV"/>
+-	<value value="47" name="A7XX_DBGBUS_UFC_0"/>
+-	<value value="48" name="A7XX_DBGBUS_UFC_1"/>
+-	<value value="55" name="A7XX_DBGBUS_GMU_GX"/>
+-	<value value="59" name="A7XX_DBGBUS_DBGC"/>
+-	<value value="60" name="A7XX_DBGBUS_CX"/>
+-	<value value="61" name="A7XX_DBGBUS_GMU_CX"/>
+-	<value value="62" name="A7XX_DBGBUS_GPC_BR"/>
+-	<value value="63" name="A7XX_DBGBUS_GPC_BV"/>
+-	<value value="66" name="A7XX_DBGBUS_LARC"/>
+-	<value value="68" name="A7XX_DBGBUS_HLSQ_SPTP"/>
+-	<value value="70" name="A7XX_DBGBUS_RB_0"/>
+-	<value value="71" name="A7XX_DBGBUS_RB_1"/>
+-	<value value="72" name="A7XX_DBGBUS_RB_2"/>
+-	<value value="73" name="A7XX_DBGBUS_RB_3"/>
+-	<value value="74" name="A7XX_DBGBUS_RB_4"/>
+-	<value value="75" name="A7XX_DBGBUS_RB_5"/>
+-	<value value="102" name="A7XX_DBGBUS_UCHE_WRAPPER"/>
+-	<value value="106" name="A7XX_DBGBUS_CCU_0"/>
+-	<value value="107" name="A7XX_DBGBUS_CCU_1"/>
+-	<value value="108" name="A7XX_DBGBUS_CCU_2"/>
+-	<value value="109" name="A7XX_DBGBUS_CCU_3"/>
+-	<value value="110" name="A7XX_DBGBUS_CCU_4"/>
+-	<value value="111" name="A7XX_DBGBUS_CCU_5"/>
+-	<value value="138" name="A7XX_DBGBUS_VFD_BR_0"/>
+-	<value value="139" name="A7XX_DBGBUS_VFD_BR_1"/>
+-	<value value="140" name="A7XX_DBGBUS_VFD_BR_2"/>
+-	<value value="141" name="A7XX_DBGBUS_VFD_BR_3"/>
+-	<value value="142" name="A7XX_DBGBUS_VFD_BR_4"/>
+-	<value value="143" name="A7XX_DBGBUS_VFD_BR_5"/>
+-	<value value="144" name="A7XX_DBGBUS_VFD_BR_6"/>
+-	<value value="145" name="A7XX_DBGBUS_VFD_BR_7"/>
+-	<value value="202" name="A7XX_DBGBUS_VFD_BV_0"/>
+-	<value value="203" name="A7XX_DBGBUS_VFD_BV_1"/>
+-	<value value="204" name="A7XX_DBGBUS_VFD_BV_2"/>
+-	<value value="205" name="A7XX_DBGBUS_VFD_BV_3"/>
+-	<value value="234" name="A7XX_DBGBUS_USP_0"/>
+-	<value value="235" name="A7XX_DBGBUS_USP_1"/>
+-	<value value="236" name="A7XX_DBGBUS_USP_2"/>
+-	<value value="237" name="A7XX_DBGBUS_USP_3"/>
+-	<value value="238" name="A7XX_DBGBUS_USP_4"/>
+-	<value value="239" name="A7XX_DBGBUS_USP_5"/>
+-	<value value="266" name="A7XX_DBGBUS_TP_0"/>
+-	<value value="267" name="A7XX_DBGBUS_TP_1"/>
+-	<value value="268" name="A7XX_DBGBUS_TP_2"/>
+-	<value value="269" name="A7XX_DBGBUS_TP_3"/>
+-	<value value="270" name="A7XX_DBGBUS_TP_4"/>
+-	<value value="271" name="A7XX_DBGBUS_TP_5"/>
+-	<value value="272" name="A7XX_DBGBUS_TP_6"/>
+-	<value value="273" name="A7XX_DBGBUS_TP_7"/>
+-	<value value="274" name="A7XX_DBGBUS_TP_8"/>
+-	<value value="275" name="A7XX_DBGBUS_TP_9"/>
+-	<value value="276" name="A7XX_DBGBUS_TP_10"/>
+-	<value value="277" name="A7XX_DBGBUS_TP_11"/>
+-	<value value="330" name="A7XX_DBGBUS_USPTP_0"/>
+-	<value value="331" name="A7XX_DBGBUS_USPTP_1"/>
+-	<value value="332" name="A7XX_DBGBUS_USPTP_2"/>
+-	<value value="333" name="A7XX_DBGBUS_USPTP_3"/>
+-	<value value="334" name="A7XX_DBGBUS_USPTP_4"/>
+-	<value value="335" name="A7XX_DBGBUS_USPTP_5"/>
+-	<value value="336" name="A7XX_DBGBUS_USPTP_6"/>
+-	<value value="337" name="A7XX_DBGBUS_USPTP_7"/>
+-	<value value="338" name="A7XX_DBGBUS_USPTP_8"/>
+-	<value value="339" name="A7XX_DBGBUS_USPTP_9"/>
+-	<value value="340" name="A7XX_DBGBUS_USPTP_10"/>
+-	<value value="341" name="A7XX_DBGBUS_USPTP_11"/>
+-	<value value="396" name="A7XX_DBGBUS_CCHE_0"/>
+-	<value value="397" name="A7XX_DBGBUS_CCHE_1"/>
+-	<value value="398" name="A7XX_DBGBUS_CCHE_2"/>
+-	<value value="408" name="A7XX_DBGBUS_VPC_DSTR_0"/>
+-	<value value="409" name="A7XX_DBGBUS_VPC_DSTR_1"/>
+-	<value value="410" name="A7XX_DBGBUS_VPC_DSTR_2"/>
+-	<value value="411" name="A7XX_DBGBUS_HLSQ_DP_STR_0"/>
+-	<value value="412" name="A7XX_DBGBUS_HLSQ_DP_STR_1"/>
+-	<value value="413" name="A7XX_DBGBUS_HLSQ_DP_STR_2"/>
+-	<value value="414" name="A7XX_DBGBUS_HLSQ_DP_STR_3"/>
+-	<value value="415" name="A7XX_DBGBUS_HLSQ_DP_STR_4"/>
+-	<value value="416" name="A7XX_DBGBUS_HLSQ_DP_STR_5"/>
+-	<value value="443" name="A7XX_DBGBUS_UFC_DSTR_0"/>
+-	<value value="444" name="A7XX_DBGBUS_UFC_DSTR_1"/>
+-	<value value="445" name="A7XX_DBGBUS_UFC_DSTR_2"/>
+-	<value value="446" name="A7XX_DBGBUS_CGC_SUBCORE"/>
+-	<value value="447" name="A7XX_DBGBUS_CGC_CORE"/>
+-</enum>
+-
+-<enum name="a6xx_cp_perfcounter_select">
+-	<value value="0" name="PERF_CP_ALWAYS_COUNT"/>
+-	<value value="1" name="PERF_CP_BUSY_GFX_CORE_IDLE"/>
+-	<value value="2" name="PERF_CP_BUSY_CYCLES"/>
+-	<value value="3" name="PERF_CP_NUM_PREEMPTIONS"/>
+-	<value value="4" name="PERF_CP_PREEMPTION_REACTION_DELAY"/>
+-	<value value="5" name="PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
+-	<value value="6" name="PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
+-	<value value="7" name="PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
+-	<value value="8" name="PERF_CP_PREDICATED_DRAWS_KILLED"/>
+-	<value value="9" name="PERF_CP_MODE_SWITCH"/>
+-	<value value="10" name="PERF_CP_ZPASS_DONE"/>
+-	<value value="11" name="PERF_CP_CONTEXT_DONE"/>
+-	<value value="12" name="PERF_CP_CACHE_FLUSH"/>
+-	<value value="13" name="PERF_CP_LONG_PREEMPTIONS"/>
+-	<value value="14" name="PERF_CP_SQE_I_CACHE_STARVE"/>
+-	<value value="15" name="PERF_CP_SQE_IDLE"/>
+-	<value value="16" name="PERF_CP_SQE_PM4_STARVE_RB_IB"/>
+-	<value value="17" name="PERF_CP_SQE_PM4_STARVE_SDS"/>
+-	<value value="18" name="PERF_CP_SQE_MRB_STARVE"/>
+-	<value value="19" name="PERF_CP_SQE_RRB_STARVE"/>
+-	<value value="20" name="PERF_CP_SQE_VSD_STARVE"/>
+-	<value value="21" name="PERF_CP_VSD_DECODE_STARVE"/>
+-	<value value="22" name="PERF_CP_SQE_PIPE_OUT_STALL"/>
+-	<value value="23" name="PERF_CP_SQE_SYNC_STALL"/>
+-	<value value="24" name="PERF_CP_SQE_PM4_WFI_STALL"/>
+-	<value value="25" name="PERF_CP_SQE_SYS_WFI_STALL"/>
+-	<value value="26" name="PERF_CP_SQE_T4_EXEC"/>
+-	<value value="27" name="PERF_CP_SQE_LOAD_STATE_EXEC"/>
+-	<value value="28" name="PERF_CP_SQE_SAVE_SDS_STATE"/>
+-	<value value="29" name="PERF_CP_SQE_DRAW_EXEC"/>
+-	<value value="30" name="PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
+-	<value value="31" name="PERF_CP_SQE_EXEC_PROFILED"/>
+-	<value value="32" name="PERF_CP_MEMORY_POOL_EMPTY"/>
+-	<value value="33" name="PERF_CP_MEMORY_POOL_SYNC_STALL"/>
+-	<value value="34" name="PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
+-	<value value="35" name="PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
+-	<value value="36" name="PERF_CP_AHB_STALL_SQE_GMU"/>
+-	<value value="37" name="PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
+-	<value value="38" name="PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
+-	<value value="39" name="PERF_CP_CLUSTER0_EMPTY"/>
+-	<value value="40" name="PERF_CP_CLUSTER1_EMPTY"/>
+-	<value value="41" name="PERF_CP_CLUSTER2_EMPTY"/>
+-	<value value="42" name="PERF_CP_CLUSTER3_EMPTY"/>
+-	<value value="43" name="PERF_CP_CLUSTER4_EMPTY"/>
+-	<value value="44" name="PERF_CP_CLUSTER5_EMPTY"/>
+-	<value value="45" name="PERF_CP_PM4_DATA"/>
+-	<value value="46" name="PERF_CP_PM4_HEADERS"/>
+-	<value value="47" name="PERF_CP_VBIF_READ_BEATS"/>
+-	<value value="48" name="PERF_CP_VBIF_WRITE_BEATS"/>
+-	<value value="49" name="PERF_CP_SQE_INSTR_COUNTER"/>
+-</enum>
+-
+-<enum name="a6xx_rbbm_perfcounter_select">
+-	<value value="0" name="PERF_RBBM_ALWAYS_COUNT"/>
+-	<value value="1" name="PERF_RBBM_ALWAYS_ON"/>
+-	<value value="2" name="PERF_RBBM_TSE_BUSY"/>
+-	<value value="3" name="PERF_RBBM_RAS_BUSY"/>
+-	<value value="4" name="PERF_RBBM_PC_DCALL_BUSY"/>
+-	<value value="5" name="PERF_RBBM_PC_VSD_BUSY"/>
+-	<value value="6" name="PERF_RBBM_STATUS_MASKED"/>
+-	<value value="7" name="PERF_RBBM_COM_BUSY"/>
+-	<value value="8" name="PERF_RBBM_DCOM_BUSY"/>
+-	<value value="9" name="PERF_RBBM_VBIF_BUSY"/>
+-	<value value="10" name="PERF_RBBM_VSC_BUSY"/>
+-	<value value="11" name="PERF_RBBM_TESS_BUSY"/>
+-	<value value="12" name="PERF_RBBM_UCHE_BUSY"/>
+-	<value value="13" name="PERF_RBBM_HLSQ_BUSY"/>
+-</enum>
+-
+-<enum name="a6xx_pc_perfcounter_select">
+-	<value value="0" name="PERF_PC_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_PC_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_PC_STALL_CYCLES_VFD"/>
+-	<value value="3" name="PERF_PC_STALL_CYCLES_TSE"/>
+-	<value value="4" name="PERF_PC_STALL_CYCLES_VPC"/>
+-	<value value="5" name="PERF_PC_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="PERF_PC_STALL_CYCLES_TESS"/>
+-	<value value="7" name="PERF_PC_STALL_CYCLES_TSE_ONLY"/>
+-	<value value="8" name="PERF_PC_STALL_CYCLES_VPC_ONLY"/>
+-	<value value="9" name="PERF_PC_PASS1_TF_STALL_CYCLES"/>
+-	<value value="10" name="PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
+-	<value value="11" name="PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
+-	<value value="12" name="PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
+-	<value value="13" name="PERF_PC_STARVE_CYCLES_FOR_POSITION"/>
+-	<value value="14" name="PERF_PC_STARVE_CYCLES_DI"/>
+-	<value value="15" name="PERF_PC_VIS_STREAMS_LOADED"/>
+-	<value value="16" name="PERF_PC_INSTANCES"/>
+-	<value value="17" name="PERF_PC_VPC_PRIMITIVES"/>
+-	<value value="18" name="PERF_PC_DEAD_PRIM"/>
+-	<value value="19" name="PERF_PC_LIVE_PRIM"/>
+-	<value value="20" name="PERF_PC_VERTEX_HITS"/>
+-	<value value="21" name="PERF_PC_IA_VERTICES"/>
+-	<value value="22" name="PERF_PC_IA_PRIMITIVES"/>
+-	<value value="23" name="PERF_PC_GS_PRIMITIVES"/>
+-	<value value="24" name="PERF_PC_HS_INVOCATIONS"/>
+-	<value value="25" name="PERF_PC_DS_INVOCATIONS"/>
+-	<value value="26" name="PERF_PC_VS_INVOCATIONS"/>
+-	<value value="27" name="PERF_PC_GS_INVOCATIONS"/>
+-	<value value="28" name="PERF_PC_DS_PRIMITIVES"/>
+-	<value value="29" name="PERF_PC_VPC_POS_DATA_TRANSACTION"/>
+-	<value value="30" name="PERF_PC_3D_DRAWCALLS"/>
+-	<value value="31" name="PERF_PC_2D_DRAWCALLS"/>
+-	<value value="32" name="PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
+-	<value value="33" name="PERF_TESS_BUSY_CYCLES"/>
+-	<value value="34" name="PERF_TESS_WORKING_CYCLES"/>
+-	<value value="35" name="PERF_TESS_STALL_CYCLES_PC"/>
+-	<value value="36" name="PERF_TESS_STARVE_CYCLES_PC"/>
+-	<value value="37" name="PERF_PC_TSE_TRANSACTION"/>
+-	<value value="38" name="PERF_PC_TSE_VERTEX"/>
+-	<value value="39" name="PERF_PC_TESS_PC_UV_TRANS"/>
+-	<value value="40" name="PERF_PC_TESS_PC_UV_PATCHES"/>
+-	<value value="41" name="PERF_PC_TESS_FACTOR_TRANS"/>
+-</enum>
+-
+-<enum name="a6xx_vfd_perfcounter_select">
+-	<value value="0" name="PERF_VFD_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_VFD_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
+-	<value value="3" name="PERF_VFD_STALL_CYCLES_SP_INFO"/>
+-	<value value="4" name="PERF_VFD_STALL_CYCLES_SP_ATTR"/>
+-	<value value="5" name="PERF_VFD_STARVE_CYCLES_UCHE"/>
+-	<value value="6" name="PERF_VFD_RBUFFER_FULL"/>
+-	<value value="7" name="PERF_VFD_ATTR_INFO_FIFO_FULL"/>
+-	<value value="8" name="PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
+-	<value value="9" name="PERF_VFD_NUM_ATTRIBUTES"/>
+-	<value value="10" name="PERF_VFD_UPPER_SHADER_FIBERS"/>
+-	<value value="11" name="PERF_VFD_LOWER_SHADER_FIBERS"/>
+-	<value value="12" name="PERF_VFD_MODE_0_FIBERS"/>
+-	<value value="13" name="PERF_VFD_MODE_1_FIBERS"/>
+-	<value value="14" name="PERF_VFD_MODE_2_FIBERS"/>
+-	<value value="15" name="PERF_VFD_MODE_3_FIBERS"/>
+-	<value value="16" name="PERF_VFD_MODE_4_FIBERS"/>
+-	<value value="17" name="PERF_VFD_TOTAL_VERTICES"/>
+-	<value value="18" name="PERF_VFDP_STALL_CYCLES_VFD"/>
+-	<value value="19" name="PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
+-	<value value="20" name="PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
+-	<value value="21" name="PERF_VFDP_STARVE_CYCLES_PC"/>
+-	<value value="22" name="PERF_VFDP_VS_STAGE_WAVES"/>
+-</enum>
+-
+-<enum name="a6xx_hlsq_perfcounter_select">
+-	<value value="0" name="PERF_HLSQ_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_HLSQ_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
+-	<value value="3" name="PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
+-	<value value="4" name="PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
+-	<value value="5" name="PERF_HLSQ_UCHE_LATENCY_COUNT"/>
+-	<value value="6" name="PERF_HLSQ_FS_STAGE_1X_WAVES"/>
+-	<value value="7" name="PERF_HLSQ_FS_STAGE_2X_WAVES"/>
+-	<value value="8" name="PERF_HLSQ_QUADS"/>
+-	<value value="9" name="PERF_HLSQ_CS_INVOCATIONS"/>
+-	<value value="10" name="PERF_HLSQ_COMPUTE_DRAWCALLS"/>
+-	<value value="11" name="PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
+-	<value value="12" name="PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
+-	<value value="13" name="PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
+-	<value value="14" name="PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
+-	<value value="15" name="PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
+-	<value value="16" name="PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
+-	<value value="17" name="PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
+-	<value value="18" name="PERF_HLSQ_STALL_CYCLES_VPC"/>
+-	<value value="19" name="PERF_HLSQ_PIXELS"/>
+-	<value value="20" name="PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
+-</enum>
+-
+-<enum name="a6xx_vpc_perfcounter_select">
+-	<value value="0" name="PERF_VPC_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_VPC_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_VPC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="PERF_VPC_STALL_CYCLES_VFD_WACK"/>
+-	<value value="4" name="PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
+-	<value value="5" name="PERF_VPC_STALL_CYCLES_PC"/>
+-	<value value="6" name="PERF_VPC_STALL_CYCLES_SP_LM"/>
+-	<value value="7" name="PERF_VPC_STARVE_CYCLES_SP"/>
+-	<value value="8" name="PERF_VPC_STARVE_CYCLES_LRZ"/>
+-	<value value="9" name="PERF_VPC_PC_PRIMITIVES"/>
+-	<value value="10" name="PERF_VPC_SP_COMPONENTS"/>
+-	<value value="11" name="PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
+-	<value value="12" name="PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
+-	<value value="13" name="PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
+-	<value value="14" name="PERF_VPC_LM_TRANSACTION"/>
+-	<value value="15" name="PERF_VPC_STREAMOUT_TRANSACTION"/>
+-	<value value="16" name="PERF_VPC_VS_BUSY_CYCLES"/>
+-	<value value="17" name="PERF_VPC_PS_BUSY_CYCLES"/>
+-	<value value="18" name="PERF_VPC_VS_WORKING_CYCLES"/>
+-	<value value="19" name="PERF_VPC_PS_WORKING_CYCLES"/>
+-	<value value="20" name="PERF_VPC_STARVE_CYCLES_RB"/>
+-	<value value="21" name="PERF_VPC_NUM_VPCRAM_READ_POS"/>
+-	<value value="22" name="PERF_VPC_WIT_FULL_CYCLES"/>
+-	<value value="23" name="PERF_VPC_VPCRAM_FULL_CYCLES"/>
+-	<value value="24" name="PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
+-	<value value="25" name="PERF_VPC_NUM_VPCRAM_WRITE"/>
+-	<value value="26" name="PERF_VPC_NUM_VPCRAM_READ_SO"/>
+-	<value value="27" name="PERF_VPC_NUM_ATTR_REQ_LM"/>
+-</enum>
+-
+-<enum name="a6xx_tse_perfcounter_select">
+-	<value value="0" name="PERF_TSE_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_TSE_CLIPPING_CYCLES"/>
+-	<value value="2" name="PERF_TSE_STALL_CYCLES_RAS"/>
+-	<value value="3" name="PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
+-	<value value="4" name="PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
+-	<value value="5" name="PERF_TSE_STARVE_CYCLES_PC"/>
+-	<value value="6" name="PERF_TSE_INPUT_PRIM"/>
+-	<value value="7" name="PERF_TSE_INPUT_NULL_PRIM"/>
+-	<value value="8" name="PERF_TSE_TRIVAL_REJ_PRIM"/>
+-	<value value="9" name="PERF_TSE_CLIPPED_PRIM"/>
+-	<value value="10" name="PERF_TSE_ZERO_AREA_PRIM"/>
+-	<value value="11" name="PERF_TSE_FACENESS_CULLED_PRIM"/>
+-	<value value="12" name="PERF_TSE_ZERO_PIXEL_PRIM"/>
+-	<value value="13" name="PERF_TSE_OUTPUT_NULL_PRIM"/>
+-	<value value="14" name="PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
+-	<value value="15" name="PERF_TSE_CINVOCATION"/>
+-	<value value="16" name="PERF_TSE_CPRIMITIVES"/>
+-	<value value="17" name="PERF_TSE_2D_INPUT_PRIM"/>
+-	<value value="18" name="PERF_TSE_2D_ALIVE_CYCLES"/>
+-	<value value="19" name="PERF_TSE_CLIP_PLANES"/>
+-</enum>
+-
+-<enum name="a6xx_ras_perfcounter_select">
+-	<value value="0" name="PERF_RAS_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
+-	<value value="2" name="PERF_RAS_STALL_CYCLES_LRZ"/>
+-	<value value="3" name="PERF_RAS_STARVE_CYCLES_TSE"/>
+-	<value value="4" name="PERF_RAS_SUPER_TILES"/>
+-	<value value="5" name="PERF_RAS_8X4_TILES"/>
+-	<value value="6" name="PERF_RAS_MASKGEN_ACTIVE"/>
+-	<value value="7" name="PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
+-	<value value="8" name="PERF_RAS_FULLY_COVERED_8X4_TILES"/>
+-	<value value="9" name="PERF_RAS_PRIM_KILLED_INVISILBE"/>
+-	<value value="10" name="PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
+-	<value value="11" name="PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
+-	<value value="12" name="PERF_RAS_BLOCKS"/>
+-</enum>
+-
+-<enum name="a6xx_uche_perfcounter_select">
+-	<value value="0" name="PERF_UCHE_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_UCHE_STALL_CYCLES_ARBITER"/>
+-	<value value="2" name="PERF_UCHE_VBIF_LATENCY_CYCLES"/>
+-	<value value="3" name="PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
+-	<value value="4" name="PERF_UCHE_VBIF_READ_BEATS_TP"/>
+-	<value value="5" name="PERF_UCHE_VBIF_READ_BEATS_VFD"/>
+-	<value value="6" name="PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
+-	<value value="7" name="PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
+-	<value value="8" name="PERF_UCHE_VBIF_READ_BEATS_SP"/>
+-	<value value="9" name="PERF_UCHE_READ_REQUESTS_TP"/>
+-	<value value="10" name="PERF_UCHE_READ_REQUESTS_VFD"/>
+-	<value value="11" name="PERF_UCHE_READ_REQUESTS_HLSQ"/>
+-	<value value="12" name="PERF_UCHE_READ_REQUESTS_LRZ"/>
+-	<value value="13" name="PERF_UCHE_READ_REQUESTS_SP"/>
+-	<value value="14" name="PERF_UCHE_WRITE_REQUESTS_LRZ"/>
+-	<value value="15" name="PERF_UCHE_WRITE_REQUESTS_SP"/>
+-	<value value="16" name="PERF_UCHE_WRITE_REQUESTS_VPC"/>
+-	<value value="17" name="PERF_UCHE_WRITE_REQUESTS_VSC"/>
+-	<value value="18" name="PERF_UCHE_EVICTS"/>
+-	<value value="19" name="PERF_UCHE_BANK_REQ0"/>
+-	<value value="20" name="PERF_UCHE_BANK_REQ1"/>
+-	<value value="21" name="PERF_UCHE_BANK_REQ2"/>
+-	<value value="22" name="PERF_UCHE_BANK_REQ3"/>
+-	<value value="23" name="PERF_UCHE_BANK_REQ4"/>
+-	<value value="24" name="PERF_UCHE_BANK_REQ5"/>
+-	<value value="25" name="PERF_UCHE_BANK_REQ6"/>
+-	<value value="26" name="PERF_UCHE_BANK_REQ7"/>
+-	<value value="27" name="PERF_UCHE_VBIF_READ_BEATS_CH0"/>
+-	<value value="28" name="PERF_UCHE_VBIF_READ_BEATS_CH1"/>
+-	<value value="29" name="PERF_UCHE_GMEM_READ_BEATS"/>
+-	<value value="30" name="PERF_UCHE_TPH_REF_FULL"/>
+-	<value value="31" name="PERF_UCHE_TPH_VICTIM_FULL"/>
+-	<value value="32" name="PERF_UCHE_TPH_EXT_FULL"/>
+-	<value value="33" name="PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
+-	<value value="34" name="PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
+-	<value value="35" name="PERF_UCHE_DCMP_LATENCY_CYCLES"/>
+-	<value value="36" name="PERF_UCHE_VBIF_READ_BEATS_PC"/>
+-	<value value="37" name="PERF_UCHE_READ_REQUESTS_PC"/>
+-	<value value="38" name="PERF_UCHE_RAM_READ_REQ"/>
+-	<value value="39" name="PERF_UCHE_RAM_WRITE_REQ"/>
+-</enum>
+-
+-<enum name="a6xx_tp_perfcounter_select">
+-	<value value="0" name="PERF_TP_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_TP_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="PERF_TP_LATENCY_CYCLES"/>
+-	<value value="3" name="PERF_TP_LATENCY_TRANS"/>
+-	<value value="4" name="PERF_TP_FLAG_CACHE_REQUEST_SAMPLES"/>
+-	<value value="5" name="PERF_TP_FLAG_CACHE_REQUEST_LATENCY"/>
+-	<value value="6" name="PERF_TP_L1_CACHELINE_REQUESTS"/>
+-	<value value="7" name="PERF_TP_L1_CACHELINE_MISSES"/>
+-	<value value="8" name="PERF_TP_SP_TP_TRANS"/>
+-	<value value="9" name="PERF_TP_TP_SP_TRANS"/>
+-	<value value="10" name="PERF_TP_OUTPUT_PIXELS"/>
+-	<value value="11" name="PERF_TP_FILTER_WORKLOAD_16BIT"/>
+-	<value value="12" name="PERF_TP_FILTER_WORKLOAD_32BIT"/>
+-	<value value="13" name="PERF_TP_QUADS_RECEIVED"/>
+-	<value value="14" name="PERF_TP_QUADS_OFFSET"/>
+-	<value value="15" name="PERF_TP_QUADS_SHADOW"/>
+-	<value value="16" name="PERF_TP_QUADS_ARRAY"/>
+-	<value value="17" name="PERF_TP_QUADS_GRADIENT"/>
+-	<value value="18" name="PERF_TP_QUADS_1D"/>
+-	<value value="19" name="PERF_TP_QUADS_2D"/>
+-	<value value="20" name="PERF_TP_QUADS_BUFFER"/>
+-	<value value="21" name="PERF_TP_QUADS_3D"/>
+-	<value value="22" name="PERF_TP_QUADS_CUBE"/>
+-	<value value="23" name="PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
+-	<value value="24" name="PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
+-	<value value="25" name="PERF_TP_OUTPUT_PIXELS_POINT"/>
+-	<value value="26" name="PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="27" name="PERF_TP_OUTPUT_PIXELS_MIP"/>
+-	<value value="28" name="PERF_TP_OUTPUT_PIXELS_ANISO"/>
+-	<value value="29" name="PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
+-	<value value="30" name="PERF_TP_FLAG_CACHE_REQUESTS"/>
+-	<value value="31" name="PERF_TP_FLAG_CACHE_MISSES"/>
+-	<value value="32" name="PERF_TP_L1_5_L2_REQUESTS"/>
+-	<value value="33" name="PERF_TP_2D_OUTPUT_PIXELS"/>
+-	<value value="34" name="PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
+-	<value value="35" name="PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="36" name="PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
+-	<value value="37" name="PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
+-	<value value="38" name="PERF_TP_TPA2TPC_TRANS"/>
+-	<value value="39" name="PERF_TP_L1_MISSES_ASTC_1TILE"/>
+-	<value value="40" name="PERF_TP_L1_MISSES_ASTC_2TILE"/>
+-	<value value="41" name="PERF_TP_L1_MISSES_ASTC_4TILE"/>
+-	<value value="42" name="PERF_TP_L1_5_L2_COMPRESS_REQS"/>
+-	<value value="43" name="PERF_TP_L1_5_L2_COMPRESS_MISS"/>
+-	<value value="44" name="PERF_TP_L1_BANK_CONFLICT"/>
+-	<value value="45" name="PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
+-	<value value="46" name="PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
+-	<value value="47" name="PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
+-	<value value="48" name="PERF_TP_FRONTEND_WORKING_CYCLES"/>
+-	<value value="49" name="PERF_TP_L1_TAG_WORKING_CYCLES"/>
+-	<value value="50" name="PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
+-	<value value="51" name="PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
+-	<value value="52" name="PERF_TP_BACKEND_WORKING_CYCLES"/>
+-	<value value="53" name="PERF_TP_FLAG_CACHE_WORKING_CYCLES"/>
+-	<value value="54" name="PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
+-	<value value="55" name="PERF_TP_STARVE_CYCLES_SP"/>
+-	<value value="56" name="PERF_TP_STARVE_CYCLES_UCHE"/>
+-</enum>
+-
+-<enum name="a6xx_sp_perfcounter_select">
+-	<value value="0" name="PERF_SP_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_SP_ALU_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_SP_EFU_WORKING_CYCLES"/>
+-	<value value="3" name="PERF_SP_STALL_CYCLES_VPC"/>
+-	<value value="4" name="PERF_SP_STALL_CYCLES_TP"/>
+-	<value value="5" name="PERF_SP_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="PERF_SP_STALL_CYCLES_RB"/>
+-	<value value="7" name="PERF_SP_NON_EXECUTION_CYCLES"/>
+-	<value value="8" name="PERF_SP_WAVE_CONTEXTS"/>
+-	<value value="9" name="PERF_SP_WAVE_CONTEXT_CYCLES"/>
+-	<value value="10" name="PERF_SP_FS_STAGE_WAVE_CYCLES"/>
+-	<value value="11" name="PERF_SP_FS_STAGE_WAVE_SAMPLES"/>
+-	<value value="12" name="PERF_SP_VS_STAGE_WAVE_CYCLES"/>
+-	<value value="13" name="PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
+-	<value value="14" name="PERF_SP_FS_STAGE_DURATION_CYCLES"/>
+-	<value value="15" name="PERF_SP_VS_STAGE_DURATION_CYCLES"/>
+-	<value value="16" name="PERF_SP_WAVE_CTRL_CYCLES"/>
+-	<value value="17" name="PERF_SP_WAVE_LOAD_CYCLES"/>
+-	<value value="18" name="PERF_SP_WAVE_EMIT_CYCLES"/>
+-	<value value="19" name="PERF_SP_WAVE_NOP_CYCLES"/>
+-	<value value="20" name="PERF_SP_WAVE_WAIT_CYCLES"/>
+-	<value value="21" name="PERF_SP_WAVE_FETCH_CYCLES"/>
+-	<value value="22" name="PERF_SP_WAVE_IDLE_CYCLES"/>
+-	<value value="23" name="PERF_SP_WAVE_END_CYCLES"/>
+-	<value value="24" name="PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
+-	<value value="25" name="PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
+-	<value value="26" name="PERF_SP_WAVE_JOIN_CYCLES"/>
+-	<value value="27" name="PERF_SP_LM_LOAD_INSTRUCTIONS"/>
+-	<value value="28" name="PERF_SP_LM_STORE_INSTRUCTIONS"/>
+-	<value value="29" name="PERF_SP_LM_ATOMICS"/>
+-	<value value="30" name="PERF_SP_GM_LOAD_INSTRUCTIONS"/>
+-	<value value="31" name="PERF_SP_GM_STORE_INSTRUCTIONS"/>
+-	<value value="32" name="PERF_SP_GM_ATOMICS"/>
+-	<value value="33" name="PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="34" name="PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="35" name="PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="36" name="PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="37" name="PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="38" name="PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
+-	<value value="39" name="PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="40" name="PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="41" name="PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="42" name="PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
+-	<value value="43" name="PERF_SP_VS_INSTRUCTIONS"/>
+-	<value value="44" name="PERF_SP_FS_INSTRUCTIONS"/>
+-	<value value="45" name="PERF_SP_ADDR_LOCK_COUNT"/>
+-	<value value="46" name="PERF_SP_UCHE_READ_TRANS"/>
+-	<value value="47" name="PERF_SP_UCHE_WRITE_TRANS"/>
+-	<value value="48" name="PERF_SP_EXPORT_VPC_TRANS"/>
+-	<value value="49" name="PERF_SP_EXPORT_RB_TRANS"/>
+-	<value value="50" name="PERF_SP_PIXELS_KILLED"/>
+-	<value value="51" name="PERF_SP_ICL1_REQUESTS"/>
+-	<value value="52" name="PERF_SP_ICL1_MISSES"/>
+-	<value value="53" name="PERF_SP_HS_INSTRUCTIONS"/>
+-	<value value="54" name="PERF_SP_DS_INSTRUCTIONS"/>
+-	<value value="55" name="PERF_SP_GS_INSTRUCTIONS"/>
+-	<value value="56" name="PERF_SP_CS_INSTRUCTIONS"/>
+-	<value value="57" name="PERF_SP_GPR_READ"/>
+-	<value value="58" name="PERF_SP_GPR_WRITE"/>
+-	<value value="59" name="PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="60" name="PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="61" name="PERF_SP_LM_BANK_CONFLICTS"/>
+-	<value value="62" name="PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
+-	<value value="63" name="PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
+-	<value value="64" name="PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
+-	<value value="65" name="PERF_SP_LM_WORKING_CYCLES"/>
+-	<value value="66" name="PERF_SP_DISPATCHER_WORKING_CYCLES"/>
+-	<value value="67" name="PERF_SP_SEQUENCER_WORKING_CYCLES"/>
+-	<value value="68" name="PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
+-	<value value="69" name="PERF_SP_STARVE_CYCLES_HLSQ"/>
+-	<value value="70" name="PERF_SP_NON_EXECUTION_LS_CYCLES"/>
+-	<value value="71" name="PERF_SP_WORKING_EU"/>
+-	<value value="72" name="PERF_SP_ANY_EU_WORKING"/>
+-	<value value="73" name="PERF_SP_WORKING_EU_FS_STAGE"/>
+-	<value value="74" name="PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
+-	<value value="75" name="PERF_SP_WORKING_EU_VS_STAGE"/>
+-	<value value="76" name="PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
+-	<value value="77" name="PERF_SP_WORKING_EU_CS_STAGE"/>
+-	<value value="78" name="PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
+-	<value value="79" name="PERF_SP_GPR_READ_PREFETCH"/>
+-	<value value="80" name="PERF_SP_GPR_READ_CONFLICT"/>
+-	<value value="81" name="PERF_SP_GPR_WRITE_CONFLICT"/>
+-	<value value="82" name="PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
+-	<value value="83" name="PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
+-	<value value="84" name="PERF_SP_EXECUTABLE_WAVES"/>
+-</enum>
+-
+-<enum name="a6xx_rb_perfcounter_select">
+-	<value value="0" name="PERF_RB_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_RB_STALL_CYCLES_HLSQ"/>
+-	<value value="2" name="PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
+-	<value value="3" name="PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
+-	<value value="4" name="PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
+-	<value value="5" name="PERF_RB_STARVE_CYCLES_SP"/>
+-	<value value="6" name="PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
+-	<value value="7" name="PERF_RB_STARVE_CYCLES_CCU"/>
+-	<value value="8" name="PERF_RB_STARVE_CYCLES_Z_PLANE"/>
+-	<value value="9" name="PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
+-	<value value="10" name="PERF_RB_Z_WORKLOAD"/>
+-	<value value="11" name="PERF_RB_HLSQ_ACTIVE"/>
+-	<value value="12" name="PERF_RB_Z_READ"/>
+-	<value value="13" name="PERF_RB_Z_WRITE"/>
+-	<value value="14" name="PERF_RB_C_READ"/>
+-	<value value="15" name="PERF_RB_C_WRITE"/>
+-	<value value="16" name="PERF_RB_TOTAL_PASS"/>
+-	<value value="17" name="PERF_RB_Z_PASS"/>
+-	<value value="18" name="PERF_RB_Z_FAIL"/>
+-	<value value="19" name="PERF_RB_S_FAIL"/>
+-	<value value="20" name="PERF_RB_BLENDED_FXP_COMPONENTS"/>
+-	<value value="21" name="PERF_RB_BLENDED_FP16_COMPONENTS"/>
+-	<value value="22" name="PERF_RB_PS_INVOCATIONS"/>
+-	<value value="23" name="PERF_RB_2D_ALIVE_CYCLES"/>
+-	<value value="24" name="PERF_RB_2D_STALL_CYCLES_A2D"/>
+-	<value value="25" name="PERF_RB_2D_STARVE_CYCLES_SRC"/>
+-	<value value="26" name="PERF_RB_2D_STARVE_CYCLES_SP"/>
+-	<value value="27" name="PERF_RB_2D_STARVE_CYCLES_DST"/>
+-	<value value="28" name="PERF_RB_2D_VALID_PIXELS"/>
+-	<value value="29" name="PERF_RB_3D_PIXELS"/>
+-	<value value="30" name="PERF_RB_BLENDER_WORKING_CYCLES"/>
+-	<value value="31" name="PERF_RB_ZPROC_WORKING_CYCLES"/>
+-	<value value="32" name="PERF_RB_CPROC_WORKING_CYCLES"/>
+-	<value value="33" name="PERF_RB_SAMPLER_WORKING_CYCLES"/>
+-	<value value="34" name="PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
+-	<value value="35" name="PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
+-	<value value="36" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
+-	<value value="37" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
+-	<value value="38" name="PERF_RB_STALL_CYCLES_VPC"/>
+-	<value value="39" name="PERF_RB_2D_INPUT_TRANS"/>
+-	<value value="40" name="PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
+-	<value value="41" name="PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
+-	<value value="42" name="PERF_RB_BLENDED_FP32_COMPONENTS"/>
+-	<value value="43" name="PERF_RB_COLOR_PIX_TILES"/>
+-	<value value="44" name="PERF_RB_STALL_CYCLES_CCU"/>
+-	<value value="45" name="PERF_RB_EARLY_Z_ARB3_GRANT"/>
+-	<value value="46" name="PERF_RB_LATE_Z_ARB3_GRANT"/>
+-	<value value="47" name="PERF_RB_EARLY_Z_SKIP_GRANT"/>
+-</enum>
+-
+-<enum name="a6xx_vsc_perfcounter_select">
+-	<value value="0" name="PERF_VSC_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_VSC_WORKING_CYCLES"/>
+-	<value value="2" name="PERF_VSC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="PERF_VSC_EOT_NUM"/>
+-	<value value="4" name="PERF_VSC_INPUT_TILES"/>
+-</enum>
+-
+-<enum name="a6xx_ccu_perfcounter_select">
+-	<value value="0" name="PERF_CCU_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
+-	<value value="2" name="PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
+-	<value value="3" name="PERF_CCU_STARVE_CYCLES_FLAG_RETURN"/>
+-	<value value="4" name="PERF_CCU_DEPTH_BLOCKS"/>
+-	<value value="5" name="PERF_CCU_COLOR_BLOCKS"/>
+-	<value value="6" name="PERF_CCU_DEPTH_BLOCK_HIT"/>
+-	<value value="7" name="PERF_CCU_COLOR_BLOCK_HIT"/>
+-	<value value="8" name="PERF_CCU_PARTIAL_BLOCK_READ"/>
+-	<value value="9" name="PERF_CCU_GMEM_READ"/>
+-	<value value="10" name="PERF_CCU_GMEM_WRITE"/>
+-	<value value="11" name="PERF_CCU_DEPTH_READ_FLAG0_COUNT"/>
+-	<value value="12" name="PERF_CCU_DEPTH_READ_FLAG1_COUNT"/>
+-	<value value="13" name="PERF_CCU_DEPTH_READ_FLAG2_COUNT"/>
+-	<value value="14" name="PERF_CCU_DEPTH_READ_FLAG3_COUNT"/>
+-	<value value="15" name="PERF_CCU_DEPTH_READ_FLAG4_COUNT"/>
+-	<value value="16" name="PERF_CCU_DEPTH_READ_FLAG5_COUNT"/>
+-	<value value="17" name="PERF_CCU_DEPTH_READ_FLAG6_COUNT"/>
+-	<value value="18" name="PERF_CCU_DEPTH_READ_FLAG8_COUNT"/>
+-	<value value="19" name="PERF_CCU_COLOR_READ_FLAG0_COUNT"/>
+-	<value value="20" name="PERF_CCU_COLOR_READ_FLAG1_COUNT"/>
+-	<value value="21" name="PERF_CCU_COLOR_READ_FLAG2_COUNT"/>
+-	<value value="22" name="PERF_CCU_COLOR_READ_FLAG3_COUNT"/>
+-	<value value="23" name="PERF_CCU_COLOR_READ_FLAG4_COUNT"/>
+-	<value value="24" name="PERF_CCU_COLOR_READ_FLAG5_COUNT"/>
+-	<value value="25" name="PERF_CCU_COLOR_READ_FLAG6_COUNT"/>
+-	<value value="26" name="PERF_CCU_COLOR_READ_FLAG8_COUNT"/>
+-	<value value="27" name="PERF_CCU_2D_RD_REQ"/>
+-	<value value="28" name="PERF_CCU_2D_WR_REQ"/>
+-</enum>
+-
+-<enum name="a6xx_lrz_perfcounter_select">
+-	<value value="0" name="PERF_LRZ_BUSY_CYCLES"/>
+-	<value value="1" name="PERF_LRZ_STARVE_CYCLES_RAS"/>
+-	<value value="2" name="PERF_LRZ_STALL_CYCLES_RB"/>
+-	<value value="3" name="PERF_LRZ_STALL_CYCLES_VSC"/>
+-	<value value="4" name="PERF_LRZ_STALL_CYCLES_VPC"/>
+-	<value value="5" name="PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
+-	<value value="6" name="PERF_LRZ_STALL_CYCLES_UCHE"/>
+-	<value value="7" name="PERF_LRZ_LRZ_READ"/>
+-	<value value="8" name="PERF_LRZ_LRZ_WRITE"/>
+-	<value value="9" name="PERF_LRZ_READ_LATENCY"/>
+-	<value value="10" name="PERF_LRZ_MERGE_CACHE_UPDATING"/>
+-	<value value="11" name="PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
+-	<value value="12" name="PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
+-	<value value="13" name="PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
+-	<value value="14" name="PERF_LRZ_FULL_8X8_TILES"/>
+-	<value value="15" name="PERF_LRZ_PARTIAL_8X8_TILES"/>
+-	<value value="16" name="PERF_LRZ_TILE_KILLED"/>
+-	<value value="17" name="PERF_LRZ_TOTAL_PIXEL"/>
+-	<value value="18" name="PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
+-	<value value="19" name="PERF_LRZ_FULLY_COVERED_TILES"/>
+-	<value value="20" name="PERF_LRZ_PARTIAL_COVERED_TILES"/>
+-	<value value="21" name="PERF_LRZ_FEEDBACK_ACCEPT"/>
+-	<value value="22" name="PERF_LRZ_FEEDBACK_DISCARD"/>
+-	<value value="23" name="PERF_LRZ_FEEDBACK_STALL"/>
+-	<value value="24" name="PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
+-	<value value="25" name="PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
+-	<value value="26" name="PERF_LRZ_STALL_CYCLES_VC"/>
+-	<value value="27" name="PERF_LRZ_RAS_MASK_TRANS"/>
+-</enum>
+-
+-<enum name="a6xx_cmp_perfcounter_select">
+-	<value value="0" name="PERF_CMPDECMP_STALL_CYCLES_ARB"/>
+-	<value value="1" name="PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
+-	<value value="2" name="PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
+-	<value value="3" name="PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
+-	<value value="4" name="PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
+-	<value value="5" name="PERF_CMPDECMP_VBIF_READ_REQUEST"/>
+-	<value value="6" name="PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
+-	<value value="7" name="PERF_CMPDECMP_VBIF_READ_DATA"/>
+-	<value value="8" name="PERF_CMPDECMP_VBIF_WRITE_DATA"/>
+-	<value value="9" name="PERF_CMPDECMP_FLAG_FETCH_CYCLES"/>
+-	<value value="10" name="PERF_CMPDECMP_FLAG_FETCH_SAMPLES"/>
+-	<value value="11" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
+-	<value value="12" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
+-	<value value="13" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
+-	<value value="14" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
+-	<value value="15" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
+-	<value value="16" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
+-	<value value="17" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
+-	<value value="18" name="PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
+-	<value value="19" name="PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
+-	<value value="20" name="PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
+-	<value value="21" name="PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
+-	<value value="22" name="PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
+-	<value value="23" name="PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
+-	<value value="24" name="PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
+-	<value value="25" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_REQ"/>
+-	<value value="26" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_WR"/>
+-	<value value="27" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_RETURN"/>
+-	<value value="28" name="PERF_CMPDECMP_2D_RD_DATA"/>
+-	<value value="29" name="PERF_CMPDECMP_2D_WR_DATA"/>
+-	<value value="30" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
+-	<value value="31" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
+-	<value value="32" name="PERF_CMPDECMP_2D_OUTPUT_TRANS"/>
+-	<value value="33" name="PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
+-	<value value="34" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
+-	<value value="35" name="PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
+-	<value value="36" name="PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
+-	<value value="37" name="PERF_CMPDECMP_2D_BUSY_CYCLES"/>
+-	<value value="38" name="PERF_CMPDECMP_2D_REORDER_STARVE_CYCLES"/>
+-	<value value="39" name="PERF_CMPDECMP_2D_PIXELS"/>
+-</enum>
+-
+-<!--
+-Used in a6xx_2d_blit_cntl.. the value mostly seems to correlate to the
+-component type/size, so I think it relates to internal format used for
+-blending?  The one exception is that 16b unorm and 32b float use the
+-same value... maybe 16b unorm is uncommon enough that it was just easier
+-to upconvert to 32b float internally?
+-
+- 8b unorm:  10 (sometimes 0, is the high bit part of something else?)
+-16b unorm:   4
+-
+-32b int:     7
+-16b int:     6
+- 8b int:     5
+-
+-32b float:   4
+-16b float:   3
+- -->
+-<enum name="a6xx_2d_ifmt">
+-	<value value="0x10" name="R2D_UNORM8"/>
+-	<value value="0x7"  name="R2D_INT32"/>
+-	<value value="0x6"  name="R2D_INT16"/>
+-	<value value="0x5"  name="R2D_INT8"/>
+-	<value value="0x4"  name="R2D_FLOAT32"/>
+-	<value value="0x3"  name="R2D_FLOAT16"/>
+-	<value value="0x1"  name="R2D_UNORM8_SRGB"/>
+-	<value value="0x0"  name="R2D_RAW"/>
+-</enum>
+-
+-<enum name="a6xx_ztest_mode">
+-	<doc>Allow early z-test and early-lrz (if applicable)</doc>
+-	<value value="0x0" name="A6XX_EARLY_Z"/>
+-	<doc>Disable early z-test and early-lrz test (if applicable)</doc>
+-	<value value="0x1" name="A6XX_LATE_Z"/>
+-	<doc>
+-		A special mode that allows early-lrz test but disables
+-		early-z test.  Which might sound a bit funny, since
+-		lrz-test happens before z-test.  But as long as a couple
+-		conditions are maintained this allows using lrz-test in
+-		cases where fragment shader has kill/discard:
+-
+-		1) Disable lrz-write in cases where it is uncertain during
+-		   binning pass that a fragment will pass.  Ie.  if frag
+-		   shader has-kill, writes-z, or alpha/stencil test is
+-		   enabled.  (For correctness, lrz-write must be disabled
+-		   when blend is enabled.)  This is analogous to how a
+-		   z-prepass works.
+-
+-		2) Disable lrz-write and test if a depth-test direction
+-		   reversal is detected.  Due to condition (1), the contents
+-		   of the lrz buffer are a conservative estimation of the
+-		   depth buffer during the draw pass.  Meaning that geometry
+-		   that we know for certain will not be visible will not pass
+-		   lrz-test.  But geometry which may be (or contributes to
+-		   blend) will pass the lrz-test.
+-
+-		This allows us to keep early-lrz-test in cases where the frag
+-		shader does not write-z (ie. we know the z-value before FS)
+-		and does not have side-effects (image/ssbo writes, etc), but
+-		does have kill/discard.  Which turns out to be a common
+-		enough case that it is useful to keep early-lrz test against
+-		the conservative lrz buffer to discard fragments that we
+-		know will definitely not be visible.
+-	</doc>
+-	<value value="0x2" name="A6XX_EARLY_LRZ_LATE_Z"/>
+-	<doc>Not a real hw value, used internally by mesa</doc>
+-	<value value="0x3" name="A6XX_INVALID_ZTEST"/>
+-</enum>
+-
+-<enum name="a6xx_tess_spacing">
+-	<value value="0x0" name="TESS_EQUAL"/>
+-	<value value="0x2" name="TESS_FRACTIONAL_ODD"/>
+-	<value value="0x3" name="TESS_FRACTIONAL_EVEN"/>
+-</enum>
+-<enum name="a6xx_tess_output">
+-	<value value="0x0" name="TESS_POINTS"/>
+-	<value value="0x1" name="TESS_LINES"/>
+-	<value value="0x2" name="TESS_CW_TRIS"/>
+-	<value value="0x3" name="TESS_CCW_TRIS"/>
+-</enum>
+-
+-<enum name="a7xx_cp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_CP_ALWAYS_COUNT"/>
+-	<value value="1" name="A7XX_PERF_CP_BUSY_GFX_CORE_IDLE"/>
+-	<value value="2" name="A7XX_PERF_CP_BUSY_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_CP_NUM_PREEMPTIONS"/>
+-	<value value="4" name="A7XX_PERF_CP_PREEMPTION_REACTION_DELAY"/>
+-	<value value="5" name="A7XX_PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
+-	<value value="6" name="A7XX_PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
+-	<value value="7" name="A7XX_PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
+-	<value value="8" name="A7XX_PERF_CP_PREDICATED_DRAWS_KILLED"/>
+-	<value value="9" name="A7XX_PERF_CP_MODE_SWITCH"/>
+-	<value value="10" name="A7XX_PERF_CP_ZPASS_DONE"/>
+-	<value value="11" name="A7XX_PERF_CP_CONTEXT_DONE"/>
+-	<value value="12" name="A7XX_PERF_CP_CACHE_FLUSH"/>
+-	<value value="13" name="A7XX_PERF_CP_LONG_PREEMPTIONS"/>
+-	<value value="14" name="A7XX_PERF_CP_SQE_I_CACHE_STARVE"/>
+-	<value value="15" name="A7XX_PERF_CP_SQE_IDLE"/>
+-	<value value="16" name="A7XX_PERF_CP_SQE_PM4_STARVE_RB_IB"/>
+-	<value value="17" name="A7XX_PERF_CP_SQE_PM4_STARVE_SDS"/>
+-	<value value="18" name="A7XX_PERF_CP_SQE_MRB_STARVE"/>
+-	<value value="19" name="A7XX_PERF_CP_SQE_RRB_STARVE"/>
+-	<value value="20" name="A7XX_PERF_CP_SQE_VSD_STARVE"/>
+-	<value value="21" name="A7XX_PERF_CP_VSD_DECODE_STARVE"/>
+-	<value value="22" name="A7XX_PERF_CP_SQE_PIPE_OUT_STALL"/>
+-	<value value="23" name="A7XX_PERF_CP_SQE_SYNC_STALL"/>
+-	<value value="24" name="A7XX_PERF_CP_SQE_PM4_WFI_STALL"/>
+-	<value value="25" name="A7XX_PERF_CP_SQE_SYS_WFI_STALL"/>
+-	<value value="26" name="A7XX_PERF_CP_SQE_T4_EXEC"/>
+-	<value value="27" name="A7XX_PERF_CP_SQE_LOAD_STATE_EXEC"/>
+-	<value value="28" name="A7XX_PERF_CP_SQE_SAVE_SDS_STATE"/>
+-	<value value="29" name="A7XX_PERF_CP_SQE_DRAW_EXEC"/>
+-	<value value="30" name="A7XX_PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
+-	<value value="31" name="A7XX_PERF_CP_SQE_EXEC_PROFILED"/>
+-	<value value="32" name="A7XX_PERF_CP_MEMORY_POOL_EMPTY"/>
+-	<value value="33" name="A7XX_PERF_CP_MEMORY_POOL_SYNC_STALL"/>
+-	<value value="34" name="A7XX_PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
+-	<value value="35" name="A7XX_PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
+-	<value value="36" name="A7XX_PERF_CP_AHB_STALL_SQE_GMU"/>
+-	<value value="37" name="A7XX_PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
+-	<value value="38" name="A7XX_PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
+-	<value value="39" name="A7XX_PERF_CP_CLUSTER0_EMPTY"/>
+-	<value value="40" name="A7XX_PERF_CP_CLUSTER1_EMPTY"/>
+-	<value value="41" name="A7XX_PERF_CP_CLUSTER2_EMPTY"/>
+-	<value value="42" name="A7XX_PERF_CP_CLUSTER3_EMPTY"/>
+-	<value value="43" name="A7XX_PERF_CP_CLUSTER4_EMPTY"/>
+-	<value value="44" name="A7XX_PERF_CP_CLUSTER5_EMPTY"/>
+-	<value value="45" name="A7XX_PERF_CP_PM4_DATA"/>
+-	<value value="46" name="A7XX_PERF_CP_PM4_HEADERS"/>
+-	<value value="47" name="A7XX_PERF_CP_VBIF_READ_BEATS"/>
+-	<value value="48" name="A7XX_PERF_CP_VBIF_WRITE_BEATS"/>
+-	<value value="49" name="A7XX_PERF_CP_SQE_INSTR_COUNTER"/>
+-	<value value="50" name="A7XX_PERF_CP_RESERVED_50"/>
+-	<value value="51" name="A7XX_PERF_CP_RESERVED_51"/>
+-	<value value="52" name="A7XX_PERF_CP_RESERVED_52"/>
+-	<value value="53" name="A7XX_PERF_CP_RESERVED_53"/>
+-	<value value="54" name="A7XX_PERF_CP_RESERVED_54"/>
+-	<value value="55" name="A7XX_PERF_CP_RESERVED_55"/>
+-	<value value="56" name="A7XX_PERF_CP_RESERVED_56"/>
+-	<value value="57" name="A7XX_PERF_CP_RESERVED_57"/>
+-	<value value="58" name="A7XX_PERF_CP_RESERVED_58"/>
+-	<value value="59" name="A7XX_PERF_CP_RESERVED_59"/>
+-	<value value="60" name="A7XX_PERF_CP_CLUSTER0_FULL"/>
+-	<value value="61" name="A7XX_PERF_CP_CLUSTER1_FULL"/>
+-	<value value="62" name="A7XX_PERF_CP_CLUSTER2_FULL"/>
+-	<value value="63" name="A7XX_PERF_CP_CLUSTER3_FULL"/>
+-	<value value="64" name="A7XX_PERF_CP_CLUSTER4_FULL"/>
+-	<value value="65" name="A7XX_PERF_CP_CLUSTER5_FULL"/>
+-	<value value="66" name="A7XX_PERF_CP_CLUSTER6_FULL"/>
+-	<value value="67" name="A7XX_PERF_CP_CLUSTER6_EMPTY"/>
+-	<value value="68" name="A7XX_PERF_CP_ICACHE_MISSES"/>
+-	<value value="69" name="A7XX_PERF_CP_ICACHE_HITS"/>
+-	<value value="70" name="A7XX_PERF_CP_ICACHE_STALL"/>
+-	<value value="71" name="A7XX_PERF_CP_DCACHE_MISSES"/>
+-	<value value="72" name="A7XX_PERF_CP_DCACHE_HITS"/>
+-	<value value="73" name="A7XX_PERF_CP_DCACHE_STALLS"/>
+-	<value value="74" name="A7XX_PERF_CP_AQE_SQE_STALL"/>
+-	<value value="75" name="A7XX_PERF_CP_SQE_AQE_STARVE"/>
+-	<value value="76" name="A7XX_PERF_CP_PREEMPT_LATENCY"/>
+-	<value value="77" name="A7XX_PERF_CP_SQE_MD8_STALL_CYCLES"/>
+-	<value value="78" name="A7XX_PERF_CP_SQE_MESH_EXEC_CYCLES"/>
+-	<value value="79" name="A7XX_PERF_CP_AQE_NUM_AS_CHUNKS"/>
+-	<value value="80" name="A7XX_PERF_CP_AQE_NUM_MS_CHUNKS"/>
+-</enum>
+-
+-<enum name="a7xx_rbbm_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_RBBM_ALWAYS_COUNT"/>
+-	<value value="1" name="A7XX_PERF_RBBM_ALWAYS_ON"/>
+-	<value value="2" name="A7XX_PERF_RBBM_TSE_BUSY"/>
+-	<value value="3" name="A7XX_PERF_RBBM_RAS_BUSY"/>
+-	<value value="4" name="A7XX_PERF_RBBM_PC_DCALL_BUSY"/>
+-	<value value="5" name="A7XX_PERF_RBBM_PC_VSD_BUSY"/>
+-	<value value="6" name="A7XX_PERF_RBBM_STATUS_MASKED"/>
+-	<value value="7" name="A7XX_PERF_RBBM_COM_BUSY"/>
+-	<value value="8" name="A7XX_PERF_RBBM_DCOM_BUSY"/>
+-	<value value="9" name="A7XX_PERF_RBBM_VBIF_BUSY"/>
+-	<value value="10" name="A7XX_PERF_RBBM_VSC_BUSY"/>
+-	<value value="11" name="A7XX_PERF_RBBM_TESS_BUSY"/>
+-	<value value="12" name="A7XX_PERF_RBBM_UCHE_BUSY"/>
+-	<value value="13" name="A7XX_PERF_RBBM_HLSQ_BUSY"/>
+-</enum>
+-
+-<enum name="a7xx_pc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_PC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_PC_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_PC_STALL_CYCLES_VFD"/>
+-	<value value="3" name="A7XX_PERF_PC_RESERVED"/>
+-	<value value="4" name="A7XX_PERF_PC_STALL_CYCLES_VPC"/>
+-	<value value="5" name="A7XX_PERF_PC_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="A7XX_PERF_PC_STALL_CYCLES_TESS"/>
+-	<value value="7" name="A7XX_PERF_PC_STALL_CYCLES_VFD_ONLY"/>
+-	<value value="8" name="A7XX_PERF_PC_STALL_CYCLES_VPC_ONLY"/>
+-	<value value="9" name="A7XX_PERF_PC_PASS1_TF_STALL_CYCLES"/>
+-	<value value="10" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
+-	<value value="11" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
+-	<value value="12" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
+-	<value value="13" name="A7XX_PERF_PC_STARVE_CYCLES_DI"/>
+-	<value value="14" name="A7XX_PERF_PC_VIS_STREAMS_LOADED"/>
+-	<value value="15" name="A7XX_PERF_PC_INSTANCES"/>
+-	<value value="16" name="A7XX_PERF_PC_VPC_PRIMITIVES"/>
+-	<value value="17" name="A7XX_PERF_PC_DEAD_PRIM"/>
+-	<value value="18" name="A7XX_PERF_PC_LIVE_PRIM"/>
+-	<value value="19" name="A7XX_PERF_PC_VERTEX_HITS"/>
+-	<value value="20" name="A7XX_PERF_PC_IA_VERTICES"/>
+-	<value value="21" name="A7XX_PERF_PC_IA_PRIMITIVES"/>
+-	<value value="22" name="A7XX_PERF_PC_RESERVED_22"/>
+-	<value value="23" name="A7XX_PERF_PC_HS_INVOCATIONS"/>
+-	<value value="24" name="A7XX_PERF_PC_DS_INVOCATIONS"/>
+-	<value value="25" name="A7XX_PERF_PC_VS_INVOCATIONS"/>
+-	<value value="26" name="A7XX_PERF_PC_GS_INVOCATIONS"/>
+-	<value value="27" name="A7XX_PERF_PC_DS_PRIMITIVES"/>
+-	<value value="28" name="A7XX_PERF_PC_3D_DRAWCALLS"/>
+-	<value value="29" name="A7XX_PERF_PC_2D_DRAWCALLS"/>
+-	<value value="30" name="A7XX_PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
+-	<value value="31" name="A7XX_PERF_PC_TESS_BUSY_CYCLES"/>
+-	<value value="32" name="A7XX_PERF_PC_TESS_WORKING_CYCLES"/>
+-	<value value="33" name="A7XX_PERF_PC_TESS_STALL_CYCLES_PC"/>
+-	<value value="34" name="A7XX_PERF_PC_TESS_STARVE_CYCLES_PC"/>
+-	<value value="35" name="A7XX_PERF_PC_TESS_SINGLE_PRIM_CYCLES"/>
+-	<value value="36" name="A7XX_PERF_PC_TESS_PC_UV_TRANS"/>
+-	<value value="37" name="A7XX_PERF_PC_TESS_PC_UV_PATCHES"/>
+-	<value value="38" name="A7XX_PERF_PC_TESS_FACTOR_TRANS"/>
+-	<value value="39" name="A7XX_PERF_PC_TAG_CHECKED_VERTICES"/>
+-	<value value="40" name="A7XX_PERF_PC_MESH_VS_WAVES"/>
+-	<value value="41" name="A7XX_PERF_PC_MESH_DRAWS"/>
+-	<value value="42" name="A7XX_PERF_PC_MESH_DEAD_DRAWS"/>
+-	<value value="43" name="A7XX_PERF_PC_MESH_MVIS_EN_DRAWS"/>
+-	<value value="44" name="A7XX_PERF_PC_MESH_DEAD_PRIM"/>
+-	<value value="45" name="A7XX_PERF_PC_MESH_LIVE_PRIM"/>
+-	<value value="46" name="A7XX_PERF_PC_MESH_PA_EN_PRIM"/>
+-	<value value="47" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_MVIS_STREAM"/>
+-	<value value="48" name="A7XX_PERF_PC_STARVE_CYCLES_PREDRAW"/>
+-	<value value="49" name="A7XX_PERF_PC_STALL_CYCLES_COMPUTE_GFX"/>
+-	<value value="50" name="A7XX_PERF_PC_STALL_CYCLES_GFX_COMPUTE"/>
+-	<value value="51" name="A7XX_PERF_PC_TESS_PC_MULTI_PATCH_TRANS"/>
+-</enum>
+-
+-<enum name="a7xx_vfd_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_VFD_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_VFD_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="A7XX_PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
+-	<value value="3" name="A7XX_PERF_VFD_STALL_CYCLES_SP_INFO"/>
+-	<value value="4" name="A7XX_PERF_VFD_STALL_CYCLES_SP_ATTR"/>
+-	<value value="5" name="A7XX_PERF_VFD_STARVE_CYCLES_UCHE"/>
+-	<value value="6" name="A7XX_PERF_VFD_RBUFFER_FULL"/>
+-	<value value="7" name="A7XX_PERF_VFD_ATTR_INFO_FIFO_FULL"/>
+-	<value value="8" name="A7XX_PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
+-	<value value="9" name="A7XX_PERF_VFD_NUM_ATTRIBUTES"/>
+-	<value value="10" name="A7XX_PERF_VFD_UPPER_SHADER_FIBERS"/>
+-	<value value="11" name="A7XX_PERF_VFD_LOWER_SHADER_FIBERS"/>
+-	<value value="12" name="A7XX_PERF_VFD_MODE_0_FIBERS"/>
+-	<value value="13" name="A7XX_PERF_VFD_MODE_1_FIBERS"/>
+-	<value value="14" name="A7XX_PERF_VFD_MODE_2_FIBERS"/>
+-	<value value="15" name="A7XX_PERF_VFD_MODE_3_FIBERS"/>
+-	<value value="16" name="A7XX_PERF_VFD_MODE_4_FIBERS"/>
+-	<value value="17" name="A7XX_PERF_VFD_TOTAL_VERTICES"/>
+-	<value value="18" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD"/>
+-	<value value="19" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
+-	<value value="20" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
+-	<value value="21" name="A7XX_PERF_VFDP_STARVE_CYCLES_PC"/>
+-	<value value="22" name="A7XX_PERF_VFDP_VS_STAGE_WAVES"/>
+-	<value value="23" name="A7XX_PERF_VFD_STALL_CYCLES_PRG_END_FE"/>
+-	<value value="24" name="A7XX_PERF_VFD_STALL_CYCLES_CBSYNC"/>
+-</enum>
+-
+-<enum name="a7xx_hlsq_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_HLSQ_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_HLSQ_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
+-	<value value="3" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
+-	<value value="4" name="A7XX_PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
+-	<value value="5" name="A7XX_PERF_HLSQ_UCHE_LATENCY_COUNT"/>
+-	<value value="6" name="A7XX_PERF_HLSQ_RESERVED_6"/>
+-	<value value="7" name="A7XX_PERF_HLSQ_RESERVED_7"/>
+-	<value value="8" name="A7XX_PERF_HLSQ_RESERVED_8"/>
+-	<value value="9" name="A7XX_PERF_HLSQ_RESERVED_9"/>
+-	<value value="10" name="A7XX_PERF_HLSQ_COMPUTE_DRAWCALLS"/>
+-	<value value="11" name="A7XX_PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
+-	<value value="12" name="A7XX_PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
+-	<value value="13" name="A7XX_PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
+-	<value value="14" name="A7XX_PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
+-	<value value="15" name="A7XX_PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
+-	<value value="16" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
+-	<value value="17" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
+-	<value value="18" name="A7XX_PERF_HLSQ_STALL_CYCLES_VPC"/>
+-	<value value="19" name="A7XX_PERF_HLSQ_RESERVED_19"/>
+-	<value value="20" name="A7XX_PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
+-	<value value="21" name="A7XX_PERF_HLSQ_VSBR_STALL_CYCLES"/>
+-	<value value="22" name="A7XX_PERF_HLSQ_FS_STALL_CYCLES"/>
+-	<value value="23" name="A7XX_PERF_HLSQ_LPAC_STALL_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_HLSQ_BV_STALL_CYCLES"/>
+-	<value value="25" name="A7XX_PERF_HLSQ_VSBR_DEREF_CYCLES"/>
+-	<value value="26" name="A7XX_PERF_HLSQ_FS_DEREF_CYCLES"/>
+-	<value value="27" name="A7XX_PERF_HLSQ_LPAC_DEREF_CYCLES"/>
+-	<value value="28" name="A7XX_PERF_HLSQ_BV_DEREF_CYCLES"/>
+-	<value value="29" name="A7XX_PERF_HLSQ_VSBR_S2W_CYCLES"/>
+-	<value value="30" name="A7XX_PERF_HLSQ_FS_S2W_CYCLES"/>
+-	<value value="31" name="A7XX_PERF_HLSQ_LPAC_S2W_CYCLES"/>
+-	<value value="32" name="A7XX_PERF_HLSQ_BV_S2W_CYCLES"/>
+-	<value value="33" name="A7XX_PERF_HLSQ_VSBR_WAIT_FS_S2W"/>
+-	<value value="34" name="A7XX_PERF_HLSQ_FS_WAIT_VS_S2W"/>
+-	<value value="35" name="A7XX_PERF_HLSQ_LPAC_WAIT_VS_S2W"/>
+-	<value value="36" name="A7XX_PERF_HLSQ_BV_WAIT_FS_S2W"/>
+-	<value value="37" name="A7XX_PERF_HLSQ_VS_WAIT_CONST_RESOURCE"/>
+-	<value value="38" name="A7XX_PERF_HLSQ_FS_WAIT_SAME_VS_S2W"/>
+-	<value value="39" name="A7XX_PERF_HLSQ_FS_STARVING_SP"/>
+-	<value value="40" name="A7XX_PERF_HLSQ_VS_DATA_WAIT_PROGRAMMING"/>
+-	<value value="41" name="A7XX_PERF_HLSQ_BV_DATA_WAIT_PROGRAMMING"/>
+-	<value value="42" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_VS"/>
+-	<value value="43" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_VS"/>
+-	<value value="44" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_FS"/>
+-	<value value="45" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_FS"/>
+-	<value value="46" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_BV"/>
+-	<value value="47" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_BV"/>
+-	<value value="48" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_LPAC"/>
+-	<value value="49" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_LPAC"/>
+-	<value value="50" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_VS"/>
+-	<value value="51" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_FS"/>
+-	<value value="52" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_BV"/>
+-	<value value="53" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_LPAC"/>
+-	<value value="54" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_VS"/>
+-	<value value="55" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_FS"/>
+-	<value value="56" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_BV"/>
+-	<value value="57" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_LPAC"/>
+-</enum>
+-
+-<enum name="a7xx_vpc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_VPC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_VPC_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_VPC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="A7XX_PERF_VPC_STALL_CYCLES_VFD_WACK"/>
+-	<value value="4" name="A7XX_PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
+-	<value value="5" name="A7XX_PERF_VPC_RESERVED_5"/>
+-	<value value="6" name="A7XX_PERF_VPC_STALL_CYCLES_SP_LM"/>
+-	<value value="7" name="A7XX_PERF_VPC_STARVE_CYCLES_SP"/>
+-	<value value="8" name="A7XX_PERF_VPC_STARVE_CYCLES_LRZ"/>
+-	<value value="9" name="A7XX_PERF_VPC_PC_PRIMITIVES"/>
+-	<value value="10" name="A7XX_PERF_VPC_SP_COMPONENTS"/>
+-	<value value="11" name="A7XX_PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
+-	<value value="12" name="A7XX_PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
+-	<value value="13" name="A7XX_PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
+-	<value value="14" name="A7XX_PERF_VPC_LM_TRANSACTION"/>
+-	<value value="15" name="A7XX_PERF_VPC_STREAMOUT_TRANSACTION"/>
+-	<value value="16" name="A7XX_PERF_VPC_VS_BUSY_CYCLES"/>
+-	<value value="17" name="A7XX_PERF_VPC_PS_BUSY_CYCLES"/>
+-	<value value="18" name="A7XX_PERF_VPC_VS_WORKING_CYCLES"/>
+-	<value value="19" name="A7XX_PERF_VPC_PS_WORKING_CYCLES"/>
+-	<value value="20" name="A7XX_PERF_VPC_STARVE_CYCLES_RB"/>
+-	<value value="21" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_POS"/>
+-	<value value="22" name="A7XX_PERF_VPC_WIT_FULL_CYCLES"/>
+-	<value value="23" name="A7XX_PERF_VPC_VPCRAM_FULL_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
+-	<value value="25" name="A7XX_PERF_VPC_NUM_VPCRAM_WRITE"/>
+-	<value value="26" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_SO"/>
+-	<value value="27" name="A7XX_PERF_VPC_NUM_ATTR_REQ_LM"/>
+-	<value value="28" name="A7XX_PERF_VPC_STALL_CYCLE_TSE"/>
+-	<value value="29" name="A7XX_PERF_VPC_TSE_PRIMITIVES"/>
+-	<value value="30" name="A7XX_PERF_VPC_GS_PRIMITIVES"/>
+-	<value value="31" name="A7XX_PERF_VPC_TSE_TRANSACTIONS"/>
+-	<value value="32" name="A7XX_PERF_VPC_STALL_CYCLES_CCU"/>
+-	<value value="33" name="A7XX_PERF_VPC_NUM_WM_HIT"/>
+-	<value value="34" name="A7XX_PERF_VPC_STALL_DQ_WACK"/>
+-	<value value="35" name="A7XX_PERF_VPC_STALL_CYCLES_CCHE"/>
+-	<value value="36" name="A7XX_PERF_VPC_STARVE_CYCLES_CCHE"/>
+-	<value value="37" name="A7XX_PERF_VPC_NUM_PA_REQ"/>
+-	<value value="38" name="A7XX_PERF_VPC_NUM_LM_REQ_HIT"/>
+-	<value value="39" name="A7XX_PERF_VPC_CCHE_REQBUF_FULL"/>
+-	<value value="40" name="A7XX_PERF_VPC_STALL_CYCLES_LM_ACK"/>
+-	<value value="41" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_FE"/>
+-	<value value="42" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_PCVS"/>
+-	<value value="43" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_VPCPS"/>
+-</enum>
+-
+-<enum name="a7xx_tse_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_TSE_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_TSE_CLIPPING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_TSE_STALL_CYCLES_RAS"/>
+-	<value value="3" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
+-	<value value="4" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
+-	<value value="5" name="A7XX_PERF_TSE_STARVE_CYCLES_PC"/>
+-	<value value="6" name="A7XX_PERF_TSE_INPUT_PRIM"/>
+-	<value value="7" name="A7XX_PERF_TSE_INPUT_NULL_PRIM"/>
+-	<value value="8" name="A7XX_PERF_TSE_TRIVAL_REJ_PRIM"/>
+-	<value value="9" name="A7XX_PERF_TSE_CLIPPED_PRIM"/>
+-	<value value="10" name="A7XX_PERF_TSE_ZERO_AREA_PRIM"/>
+-	<value value="11" name="A7XX_PERF_TSE_FACENESS_CULLED_PRIM"/>
+-	<value value="12" name="A7XX_PERF_TSE_ZERO_PIXEL_PRIM"/>
+-	<value value="13" name="A7XX_PERF_TSE_OUTPUT_NULL_PRIM"/>
+-	<value value="14" name="A7XX_PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
+-	<value value="15" name="A7XX_PERF_TSE_CINVOCATION"/>
+-	<value value="16" name="A7XX_PERF_TSE_CPRIMITIVES"/>
+-	<value value="17" name="A7XX_PERF_TSE_2D_INPUT_PRIM"/>
+-	<value value="18" name="A7XX_PERF_TSE_2D_ALIVE_CYCLES"/>
+-	<value value="19" name="A7XX_PERF_TSE_CLIP_PLANES"/>
+-</enum>
+-
+-<enum name="a7xx_ras_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_RAS_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_RAS_STALL_CYCLES_LRZ"/>
+-	<value value="3" name="A7XX_PERF_RAS_STARVE_CYCLES_TSE"/>
+-	<value value="4" name="A7XX_PERF_RAS_SUPER_TILES"/>
+-	<value value="5" name="A7XX_PERF_RAS_8X4_TILES"/>
+-	<value value="6" name="A7XX_PERF_RAS_MASKGEN_ACTIVE"/>
+-	<value value="7" name="A7XX_PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
+-	<value value="8" name="A7XX_PERF_RAS_FULLY_COVERED_8X4_TILES"/>
+-	<value value="9" name="A7XX_PERF_RAS_PRIM_KILLED_INVISILBE"/>
+-	<value value="10" name="A7XX_PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
+-	<value value="11" name="A7XX_PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
+-	<value value="12" name="A7XX_PERF_RAS_BLOCKS"/>
+-	<value value="13" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_0_WORKING_CC_l2"/>
+-	<value value="14" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_1_WORKING_CC_l2"/>
+-	<value value="15" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_2_WORKING_CC_l2"/>
+-	<value value="16" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_3_WORKING_CC_l2"/>
+-	<value value="17" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_4_WORKING_CC_l2"/>
+-	<value value="18" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_5_WORKING_CC_l2"/>
+-	<value value="19" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_6_WORKING_CC_l2"/>
+-	<value value="20" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_7_WORKING_CC_l2"/>
+-	<value value="21" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_8_WORKING_CC_l2"/>
+-	<value value="22" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_9_WORKING_CC_l2"/>
+-	<value value="23" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_10_WORKING_CC_l2"/>
+-	<value value="24" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_11_WORKING_CC_l2"/>
+-	<value value="25" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_12_WORKING_CC_l2"/>
+-	<value value="26" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_13_WORKING_CC_l2"/>
+-	<value value="27" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_14_WORKING_CC_l2"/>
+-	<value value="28" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_15_WORKING_CC_l2"/>
+-	<value value="29" name="A7XX_PERF_RAS_FALSE_PARTIAL_STILE"/>
+-
+-</enum>
+-
+-<enum name="a7xx_uche_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_UCHE_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_UCHE_STALL_CYCLES_ARBITER"/>
+-	<value value="2" name="A7XX_PERF_UCHE_VBIF_LATENCY_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
+-	<value value="4" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_TP"/>
+-	<value value="5" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_VFD"/>
+-	<value value="6" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
+-	<value value="7" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
+-	<value value="8" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_SP"/>
+-	<value value="9" name="A7XX_PERF_UCHE_READ_REQUESTS_TP"/>
+-	<value value="10" name="A7XX_PERF_UCHE_READ_REQUESTS_VFD"/>
+-	<value value="11" name="A7XX_PERF_UCHE_READ_REQUESTS_HLSQ"/>
+-	<value value="12" name="A7XX_PERF_UCHE_READ_REQUESTS_LRZ"/>
+-	<value value="13" name="A7XX_PERF_UCHE_READ_REQUESTS_SP"/>
+-	<value value="14" name="A7XX_PERF_UCHE_WRITE_REQUESTS_LRZ"/>
+-	<value value="15" name="A7XX_PERF_UCHE_WRITE_REQUESTS_SP"/>
+-	<value value="16" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VPC"/>
+-	<value value="17" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VSC"/>
+-	<value value="18" name="A7XX_PERF_UCHE_EVICTS"/>
+-	<value value="19" name="A7XX_PERF_UCHE_BANK_REQ0"/>
+-	<value value="20" name="A7XX_PERF_UCHE_BANK_REQ1"/>
+-	<value value="21" name="A7XX_PERF_UCHE_BANK_REQ2"/>
+-	<value value="22" name="A7XX_PERF_UCHE_BANK_REQ3"/>
+-	<value value="23" name="A7XX_PERF_UCHE_BANK_REQ4"/>
+-	<value value="24" name="A7XX_PERF_UCHE_BANK_REQ5"/>
+-	<value value="25" name="A7XX_PERF_UCHE_BANK_REQ6"/>
+-	<value value="26" name="A7XX_PERF_UCHE_BANK_REQ7"/>
+-	<value value="27" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH0"/>
+-	<value value="28" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH1"/>
+-	<value value="29" name="A7XX_PERF_UCHE_GMEM_READ_BEATS"/>
+-	<value value="30" name="A7XX_PERF_UCHE_TPH_REF_FULL"/>
+-	<value value="31" name="A7XX_PERF_UCHE_TPH_VICTIM_FULL"/>
+-	<value value="32" name="A7XX_PERF_UCHE_TPH_EXT_FULL"/>
+-	<value value="33" name="A7XX_PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
+-	<value value="34" name="A7XX_PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
+-	<value value="35" name="A7XX_PERF_UCHE_DCMP_LATENCY_CYCLES"/>
+-	<value value="36" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_PC"/>
+-	<value value="37" name="A7XX_PERF_UCHE_READ_REQUESTS_PC"/>
+-	<value value="38" name="A7XX_PERF_UCHE_RAM_READ_REQ"/>
+-	<value value="39" name="A7XX_PERF_UCHE_RAM_WRITE_REQ"/>
+-	<value value="40" name="A7XX_PERF_UCHE_STARVED_CYCLES_VBIF_DECMP"/>
+-	<value value="41" name="A7XX_PERF_UCHE_STALL_CYCLES_DECMP"/>
+-	<value value="42" name="A7XX_PERF_UCHE_ARBITER_STALL_CYCLES_VBIF"/>
+-	<value value="43" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_UBWC"/>
+-	<value value="44" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_NONUBWC"/>
+-	<value value="45" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_GMEM"/>
+-	<value value="46" name="A7XX_PERF_UCHE_LONG_LINE_ALL_EVICTS_KAILUA"/>
+-	<value value="47" name="A7XX_PERF_UCHE_LONG_LINE_PARTIAL_EVICTS_KAILUA"/>
+-	<value value="48" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_CCHE"/>
+-	<value value="49" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_OTHER_KAILUA"/>
+-	<value value="50" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_CCHE"/>
+-	<value value="51" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_OTHER_CLIENTS"/>
+-	<value value="52" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH0"/>
+-	<value value="53" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH1"/>
+-	<value value="54" name="A7XX_PERF_UCHE_CCHE_TPH_QUEUE_FULL"/>
+-	<value value="55" name="A7XX_PERF_UCHE_CCHE_DPH_QUEUE_FULL"/>
+-	<value value="56" name="A7XX_PERF_UCHE_GMEM_WRITE_BEATS"/>
+-	<value value="57" name="A7XX_PERF_UCHE_UBWC_READ_BEATS"/>
+-	<value value="58" name="A7XX_PERF_UCHE_UBWC_WRITE_BEATS"/>
+-</enum>
+-
+-<enum name="a7xx_tp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_TP_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_TP_STALL_CYCLES_UCHE"/>
+-	<value value="2" name="A7XX_PERF_TP_LATENCY_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_TP_LATENCY_TRANS"/>
+-	<value value="4" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_SAMPLES"/>
+-	<value value="5" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_CYCLES"/>
+-	<value value="6" name="A7XX_PERF_TP_L1_CACHELINE_REQUESTS"/>
+-	<value value="7" name="A7XX_PERF_TP_L1_CACHELINE_MISSES"/>
+-	<value value="8" name="A7XX_PERF_TP_SP_TP_TRANS"/>
+-	<value value="9" name="A7XX_PERF_TP_TP_SP_TRANS"/>
+-	<value value="10" name="A7XX_PERF_TP_OUTPUT_PIXELS"/>
+-	<value value="11" name="A7XX_PERF_TP_FILTER_WORKLOAD_16BIT"/>
+-	<value value="12" name="A7XX_PERF_TP_FILTER_WORKLOAD_32BIT"/>
+-	<value value="13" name="A7XX_PERF_TP_QUADS_RECEIVED"/>
+-	<value value="14" name="A7XX_PERF_TP_QUADS_OFFSET"/>
+-	<value value="15" name="A7XX_PERF_TP_QUADS_SHADOW"/>
+-	<value value="16" name="A7XX_PERF_TP_QUADS_ARRAY"/>
+-	<value value="17" name="A7XX_PERF_TP_QUADS_GRADIENT"/>
+-	<value value="18" name="A7XX_PERF_TP_QUADS_1D"/>
+-	<value value="19" name="A7XX_PERF_TP_QUADS_2D"/>
+-	<value value="20" name="A7XX_PERF_TP_QUADS_BUFFER"/>
+-	<value value="21" name="A7XX_PERF_TP_QUADS_3D"/>
+-	<value value="22" name="A7XX_PERF_TP_QUADS_CUBE"/>
+-	<value value="23" name="A7XX_PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
+-	<value value="24" name="A7XX_PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
+-	<value value="25" name="A7XX_PERF_TP_OUTPUT_PIXELS_POINT"/>
+-	<value value="26" name="A7XX_PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="27" name="A7XX_PERF_TP_OUTPUT_PIXELS_MIP"/>
+-	<value value="28" name="A7XX_PERF_TP_OUTPUT_PIXELS_ANISO"/>
+-	<value value="29" name="A7XX_PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
+-	<value value="30" name="A7XX_PERF_TP_FLAG_CACHE_REQUESTS"/>
+-	<value value="31" name="A7XX_PERF_TP_FLAG_CACHE_MISSES"/>
+-	<value value="32" name="A7XX_PERF_TP_L1_5_L2_REQUESTS"/>
+-	<value value="33" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS"/>
+-	<value value="34" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
+-	<value value="35" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
+-	<value value="36" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
+-	<value value="37" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
+-	<value value="38" name="A7XX_PERF_TP_TPA2TPC_TRANS"/>
+-	<value value="39" name="A7XX_PERF_TP_L1_MISSES_ASTC_1TILE"/>
+-	<value value="40" name="A7XX_PERF_TP_L1_MISSES_ASTC_2TILE"/>
+-	<value value="41" name="A7XX_PERF_TP_L1_MISSES_ASTC_4TILE"/>
+-	<value value="42" name="A7XX_PERF_TP_L1_5_COMPRESS_REQS"/>
+-	<value value="43" name="A7XX_PERF_TP_L1_5_L2_COMPRESS_MISS"/>
+-	<value value="44" name="A7XX_PERF_TP_L1_BANK_CONFLICT"/>
+-	<value value="45" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
+-	<value value="46" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
+-	<value value="47" name="A7XX_PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
+-	<value value="48" name="A7XX_PERF_TP_FRONTEND_WORKING_CYCLES"/>
+-	<value value="49" name="A7XX_PERF_TP_L1_TAG_WORKING_CYCLES"/>
+-	<value value="50" name="A7XX_PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
+-	<value value="51" name="A7XX_PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
+-	<value value="52" name="A7XX_PERF_TP_BACKEND_WORKING_CYCLES"/>
+-	<value value="53" name="A7XX_PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
+-	<value value="54" name="A7XX_PERF_TP_STARVE_CYCLES_SP"/>
+-	<value value="55" name="A7XX_PERF_TP_STARVE_CYCLES_UCHE"/>
+-	<value value="56" name="A7XX_PERF_TP_STALL_CYCLES_UFC"/>
+-	<value value="57" name="A7XX_PERF_TP_FORMAT_DECOMP"/>
+-	<value value="58" name="A7XX_PERF_TP_FILTER_POINT_FP16"/>
+-	<value value="59" name="A7XX_PERF_TP_FILTER_POINT_FP32"/>
+-	<value value="60" name="A7XX_PERF_TP_LATENCY_FIFO_FULL"/>
+-	<value value="61" name="A7XX_PERF_TP_RESERVED_61"/>
+-	<value value="62" name="A7XX_PERF_TP_RESERVED_62"/>
+-	<value value="63" name="A7XX_PERF_TP_RESERVED_63"/>
+-	<value value="64" name="A7XX_PERF_TP_RESERVED_64"/>
+-	<value value="65" name="A7XX_PERF_TP_RESERVED_65"/>
+-	<value value="66" name="A7XX_PERF_TP_RESERVED_66"/>
+-	<value value="67" name="A7XX_PERF_TP_RESERVED_67"/>
+-	<value value="68" name="A7XX_PERF_TP_RESERVED_68"/>
+-	<value value="69" name="A7XX_PERF_TP_RESERVED_69"/>
+-	<value value="70" name="A7XX_PERF_TP_RESERVED_70"/>
+-	<value value="71" name="A7XX_PERF_TP_RESERVED_71"/>
+-	<value value="72" name="A7XX_PERF_TP_RESERVED_72"/>
+-	<value value="73" name="A7XX_PERF_TP_RESERVED_73"/>
+-	<value value="74" name="A7XX_PERF_TP_RESERVED_74"/>
+-	<value value="75" name="A7XX_PERF_TP_RESERVED_75"/>
+-	<value value="76" name="A7XX_PERF_TP_RESERVED_76"/>
+-	<value value="77" name="A7XX_PERF_TP_RESERVED_77"/>
+-	<value value="78" name="A7XX_PERF_TP_RESERVED_78"/>
+-	<value value="79" name="A7XX_PERF_TP_RESERVED_79"/>
+-	<value value="80" name="A7XX_PERF_TP_RESERVED_80"/>
+-	<value value="81" name="A7XX_PERF_TP_RESERVED_81"/>
+-	<value value="82" name="A7XX_PERF_TP_RESERVED_82"/>
+-	<value value="83" name="A7XX_PERF_TP_RESERVED_83"/>
+-	<value value="84" name="A7XX_PERF_TP_RESERVED_84"/>
+-	<value value="85" name="A7XX_PERF_TP_RESERVED_85"/>
+-	<value value="86" name="A7XX_PERF_TP_RESERVED_86"/>
+-	<value value="87" name="A7XX_PERF_TP_RESERVED_87"/>
+-	<value value="88" name="A7XX_PERF_TP_RESERVED_88"/>
+-	<value value="89" name="A7XX_PERF_TP_RESERVED_89"/>
+-	<value value="90" name="A7XX_PERF_TP_RESERVED_90"/>
+-	<value value="91" name="A7XX_PERF_TP_RESERVED_91"/>
+-	<value value="92" name="A7XX_PERF_TP_RESERVED_92"/>
+-	<value value="93" name="A7XX_PERF_TP_RESERVED_93"/>
+-	<value value="94" name="A7XX_PERF_TP_RESERVED_94"/>
+-	<value value="95" name="A7XX_PERF_TP_RESERVED_95"/>
+-	<value value="96" name="A7XX_PERF_TP_RESERVED_96"/>
+-	<value value="97" name="A7XX_PERF_TP_RESERVED_97"/>
+-	<value value="98" name="A7XX_PERF_TP_RESERVED_98"/>
+-	<value value="99" name="A7XX_PERF_TP_RESERVED_99"/>
+-	<value value="100" name="A7XX_PERF_TP_RESERVED_100"/>
+-	<value value="101" name="A7XX_PERF_TP_RESERVED_101"/>
+-	<value value="102" name="A7XX_PERF_TP_RESERVED_102"/>
+-	<value value="103" name="A7XX_PERF_TP_RESERVED_103"/>
+-	<value value="104" name="A7XX_PERF_TP_RESERVED_104"/>
+-	<value value="105" name="A7XX_PERF_TP_RESERVED_105"/>
+-	<value value="106" name="A7XX_PERF_TP_RESERVED_106"/>
+-	<value value="107" name="A7XX_PERF_TP_RESERVED_107"/>
+-	<value value="108" name="A7XX_PERF_TP_RESERVED_108"/>
+-	<value value="109" name="A7XX_PERF_TP_RESERVED_109"/>
+-	<value value="110" name="A7XX_PERF_TP_RESERVED_110"/>
+-	<value value="111" name="A7XX_PERF_TP_RESERVED_111"/>
+-	<value value="112" name="A7XX_PERF_TP_RESERVED_112"/>
+-	<value value="113" name="A7XX_PERF_TP_RESERVED_113"/>
+-	<value value="114" name="A7XX_PERF_TP_RESERVED_114"/>
+-	<value value="115" name="A7XX_PERF_TP_RESERVED_115"/>
+-	<value value="116" name="A7XX_PERF_TP_RESERVED_116"/>
+-	<value value="117" name="A7XX_PERF_TP_RESERVED_117"/>
+-	<value value="118" name="A7XX_PERF_TP_RESERVED_118"/>
+-	<value value="119" name="A7XX_PERF_TP_RESERVED_119"/>
+-	<value value="120" name="A7XX_PERF_TP_RESERVED_120"/>
+-	<value value="121" name="A7XX_PERF_TP_RESERVED_121"/>
+-	<value value="122" name="A7XX_PERF_TP_RESERVED_122"/>
+-	<value value="123" name="A7XX_PERF_TP_RESERVED_123"/>
+-	<value value="124" name="A7XX_PERF_TP_RESERVED_124"/>
+-	<value value="125" name="A7XX_PERF_TP_RESERVED_125"/>
+-	<value value="126" name="A7XX_PERF_TP_RESERVED_126"/>
+-	<value value="127" name="A7XX_PERF_TP_RESERVED_127"/>
+-	<value value="128" name="A7XX_PERF_TP_FORMAT_DECOMP_BILINEAR"/>
+-	<value value="129" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP16"/>
+-	<value value="130" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP16"/>
+-	<value value="131" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP32"/>
+-	<value value="132" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP32"/>
+-</enum>
+-
+-<enum name="a7xx_sp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_SP_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_SP_ALU_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_SP_EFU_WORKING_CYCLES"/>
+-	<value value="3" name="A7XX_PERF_SP_STALL_CYCLES_VPC"/>
+-	<value value="4" name="A7XX_PERF_SP_STALL_CYCLES_TP"/>
+-	<value value="5" name="A7XX_PERF_SP_STALL_CYCLES_UCHE"/>
+-	<value value="6" name="A7XX_PERF_SP_STALL_CYCLES_RB"/>
+-	<value value="7" name="A7XX_PERF_SP_NON_EXECUTION_CYCLES"/>
+-	<value value="8" name="A7XX_PERF_SP_WAVE_CONTEXTS"/>
+-	<value value="9" name="A7XX_PERF_SP_WAVE_CONTEXT_CYCLES"/>
+-	<value value="10" name="A7XX_PERF_SP_STAGE_WAVE_CYCLES"/>
+-	<value value="11" name="A7XX_PERF_SP_STAGE_WAVE_SAMPLES"/>
+-	<value value="12" name="A7XX_PERF_SP_VS_STAGE_WAVE_CYCLES"/>
+-	<value value="13" name="A7XX_PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
+-	<value value="14" name="A7XX_PERF_SP_FS_STAGE_DURATION_CYCLES"/>
+-	<value value="15" name="A7XX_PERF_SP_VS_STAGE_DURATION_CYCLES"/>
+-	<value value="16" name="A7XX_PERF_SP_WAVE_CTRL_CYCLES"/>
+-	<value value="17" name="A7XX_PERF_SP_WAVE_LOAD_CYCLES"/>
+-	<value value="18" name="A7XX_PERF_SP_WAVE_EMIT_CYCLES"/>
+-	<value value="19" name="A7XX_PERF_SP_WAVE_NOP_CYCLES"/>
+-	<value value="20" name="A7XX_PERF_SP_WAVE_WAIT_CYCLES"/>
+-	<value value="21" name="A7XX_PERF_SP_WAVE_FETCH_CYCLES"/>
+-	<value value="22" name="A7XX_PERF_SP_WAVE_IDLE_CYCLES"/>
+-	<value value="23" name="A7XX_PERF_SP_WAVE_END_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
+-	<value value="25" name="A7XX_PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
+-	<value value="26" name="A7XX_PERF_SP_WAVE_JOIN_CYCLES"/>
+-	<value value="27" name="A7XX_PERF_SP_LM_LOAD_INSTRUCTIONS"/>
+-	<value value="28" name="A7XX_PERF_SP_LM_STORE_INSTRUCTIONS"/>
+-	<value value="29" name="A7XX_PERF_SP_LM_ATOMICS"/>
+-	<value value="30" name="A7XX_PERF_SP_GM_LOAD_INSTRUCTIONS"/>
+-	<value value="31" name="A7XX_PERF_SP_GM_STORE_INSTRUCTIONS"/>
+-	<value value="32" name="A7XX_PERF_SP_GM_ATOMICS"/>
+-	<value value="33" name="A7XX_PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="34" name="A7XX_PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="35" name="A7XX_PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="36" name="A7XX_PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="37" name="A7XX_PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
+-	<value value="38" name="A7XX_PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
+-	<value value="39" name="A7XX_PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
+-	<value value="40" name="A7XX_PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
+-	<value value="41" name="A7XX_PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
+-	<value value="42" name="A7XX_PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
+-	<value value="43" name="A7XX_PERF_SP_VS_INSTRUCTIONS"/>
+-	<value value="44" name="A7XX_PERF_SP_FS_INSTRUCTIONS"/>
+-	<value value="45" name="A7XX_PERF_SP_ADDR_LOCK_COUNT"/>
+-	<value value="46" name="A7XX_PERF_SP_UCHE_READ_TRANS"/>
+-	<value value="47" name="A7XX_PERF_SP_UCHE_WRITE_TRANS"/>
+-	<value value="48" name="A7XX_PERF_SP_EXPORT_VPC_TRANS"/>
+-	<value value="49" name="A7XX_PERF_SP_EXPORT_RB_TRANS"/>
+-	<value value="50" name="A7XX_PERF_SP_PIXELS_KILLED"/>
+-	<value value="51" name="A7XX_PERF_SP_ICL1_REQUESTS"/>
+-	<value value="52" name="A7XX_PERF_SP_ICL1_MISSES"/>
+-	<value value="53" name="A7XX_PERF_SP_HS_INSTRUCTIONS"/>
+-	<value value="54" name="A7XX_PERF_SP_DS_INSTRUCTIONS"/>
+-	<value value="55" name="A7XX_PERF_SP_GS_INSTRUCTIONS"/>
+-	<value value="56" name="A7XX_PERF_SP_CS_INSTRUCTIONS"/>
+-	<value value="57" name="A7XX_PERF_SP_GPR_READ"/>
+-	<value value="58" name="A7XX_PERF_SP_GPR_WRITE"/>
+-	<value value="59" name="A7XX_PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="60" name="A7XX_PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
+-	<value value="61" name="A7XX_PERF_SP_LM_BANK_CONFLICTS"/>
+-	<value value="62" name="A7XX_PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
+-	<value value="63" name="A7XX_PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
+-	<value value="64" name="A7XX_PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
+-	<value value="65" name="A7XX_PERF_SP_LM_WORKING_CYCLES"/>
+-	<value value="66" name="A7XX_PERF_SP_DISPATCHER_WORKING_CYCLES"/>
+-	<value value="67" name="A7XX_PERF_SP_SEQUENCER_WORKING_CYCLES"/>
+-	<value value="68" name="A7XX_PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
+-	<value value="69" name="A7XX_PERF_SP_STARVE_CYCLES_HLSQ"/>
+-	<value value="70" name="A7XX_PERF_SP_NON_EXECUTION_LS_CYCLES"/>
+-	<value value="71" name="A7XX_PERF_SP_WORKING_EU"/>
+-	<value value="72" name="A7XX_PERF_SP_ANY_EU_WORKING"/>
+-	<value value="73" name="A7XX_PERF_SP_WORKING_EU_FS_STAGE"/>
+-	<value value="74" name="A7XX_PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
+-	<value value="75" name="A7XX_PERF_SP_WORKING_EU_VS_STAGE"/>
+-	<value value="76" name="A7XX_PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
+-	<value value="77" name="A7XX_PERF_SP_WORKING_EU_CS_STAGE"/>
+-	<value value="78" name="A7XX_PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
+-	<value value="79" name="A7XX_PERF_SP_GPR_READ_PREFETCH"/>
+-	<value value="80" name="A7XX_PERF_SP_GPR_READ_CONFLICT"/>
+-	<value value="81" name="A7XX_PERF_SP_GPR_WRITE_CONFLICT"/>
+-	<value value="82" name="A7XX_PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
+-	<value value="83" name="A7XX_PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
+-	<value value="84" name="A7XX_PERF_SP_EXECUTABLE_WAVES"/>
+-	<value value="85" name="A7XX_PERF_SP_ICL1_MISS_FETCH_CYCLES"/>
+-	<value value="86" name="A7XX_PERF_SP_WORKING_EU_LPAC"/>
+-	<value value="87" name="A7XX_PERF_SP_BYPASS_BUSY_CYCLES"/>
+-	<value value="88" name="A7XX_PERF_SP_ANY_EU_WORKING_LPAC"/>
+-	<value value="89" name="A7XX_PERF_SP_WAVE_ALU_CYCLES"/>
+-	<value value="90" name="A7XX_PERF_SP_WAVE_EFU_CYCLES"/>
+-	<value value="91" name="A7XX_PERF_SP_WAVE_INT_CYCLES"/>
+-	<value value="92" name="A7XX_PERF_SP_WAVE_CSP_CYCLES"/>
+-	<value value="93" name="A7XX_PERF_SP_EWAVE_CONTEXTS"/>
+-	<value value="94" name="A7XX_PERF_SP_EWAVE_CONTEXT_CYCLES"/>
+-	<value value="95" name="A7XX_PERF_SP_LPAC_BUSY_CYCLES"/>
+-	<value value="96" name="A7XX_PERF_SP_LPAC_INSTRUCTIONS"/>
+-	<value value="97" name="A7XX_PERF_SP_FS_STAGE_1X_WAVES"/>
+-	<value value="98" name="A7XX_PERF_SP_FS_STAGE_2X_WAVES"/>
+-	<value value="99" name="A7XX_PERF_SP_QUADS"/>
+-	<value value="100" name="A7XX_PERF_SP_CS_INVOCATIONS"/>
+-	<value value="101" name="A7XX_PERF_SP_PIXELS"/>
+-	<value value="102" name="A7XX_PERF_SP_LPAC_DRAWCALLS"/>
+-	<value value="103" name="A7XX_PERF_SP_PI_WORKING_CYCLES"/>
+-	<value value="104" name="A7XX_PERF_SP_WAVE_INPUT_CYCLES"/>
+-	<value value="105" name="A7XX_PERF_SP_WAVE_OUTPUT_CYCLES"/>
+-	<value value="106" name="A7XX_PERF_SP_WAVE_HWAVE_WAIT_CYCLES"/>
+-	<value value="107" name="A7XX_PERF_SP_WAVE_HWAVE_SYNC"/>
+-	<value value="108" name="A7XX_PERF_SP_OUTPUT_3D_PIXELS"/>
+-	<value value="109" name="A7XX_PERF_SP_FULL_ALU_MAD_INSTRUCTIONS"/>
+-	<value value="110" name="A7XX_PERF_SP_HALF_ALU_MAD_INSTRUCTIONS"/>
+-	<value value="111" name="A7XX_PERF_SP_FULL_ALU_MUL_INSTRUCTIONS"/>
+-	<value value="112" name="A7XX_PERF_SP_HALF_ALU_MUL_INSTRUCTIONS"/>
+-	<value value="113" name="A7XX_PERF_SP_FULL_ALU_ADD_INSTRUCTIONS"/>
+-	<value value="114" name="A7XX_PERF_SP_HALF_ALU_ADD_INSTRUCTIONS"/>
+-	<value value="115" name="A7XX_PERF_SP_BARY_FP32_INSTRUCTIONS"/>
+-	<value value="116" name="A7XX_PERF_SP_ALU_GPR_READ_CYCLES"/>
+-	<value value="117" name="A7XX_PERF_SP_ALU_DATA_FORWARDING_CYCLES"/>
+-	<value value="118" name="A7XX_PERF_SP_LM_FULL_CYCLES"/>
+-	<value value="119" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_CYCLES"/>
+-	<value value="120" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_SAMPLES"/>
+-	<value value="121" name="A7XX_PERF_SP_FS_STAGE_PI_TEX_INSTRUCTION"/>
+-	<value value="122" name="A7XX_PERF_SP_RAY_QUERY_INSTRUCTIONS"/>
+-	<value value="123" name="A7XX_PERF_SP_RBRT_KICKOFF_FIBERS"/>
+-	<value value="124" name="A7XX_PERF_SP_RBRT_KICKOFF_DQUADS"/>
+-	<value value="125" name="A7XX_PERF_SP_RTU_BUSY_CYCLES"/>
+-	<value value="126" name="A7XX_PERF_SP_RTU_L0_HITS"/>
+-	<value value="127" name="A7XX_PERF_SP_RTU_L0_MISSES"/>
+-	<value value="128" name="A7XX_PERF_SP_RTU_L0_HIT_ON_MISS"/>
+-	<value value="129" name="A7XX_PERF_SP_RTU_STALL_CYCLES_WAVE_QUEUE"/>
+-	<value value="130" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_HIT_QUEUE"/>
+-	<value value="131" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_MISS_QUEUE"/>
+-	<value value="132" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0D_IDX_QUEUE"/>
+-	<value value="133" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0DATA"/>
+-	<value value="134" name="A7XX_PERF_SP_RTU_STALL_CYCLES_REPLACE_CNT"/>
+-	<value value="135" name="A7XX_PERF_SP_RTU_STALL_CYCLES_MRG_CNT"/>
+-	<value value="136" name="A7XX_PERF_SP_RTU_STALL_CYCLES_UCHE"/>
+-	<value value="137" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_L0"/>
+-	<value value="138" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_INS_FIFO"/>
+-	<value value="139" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_CYCLES"/>
+-	<value value="140" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_SAMPLES"/>
+-	<value value="141" name="A7XX_PERF_SP_STCHE_MISS_INC_VS"/>
+-	<value value="142" name="A7XX_PERF_SP_STCHE_MISS_INC_FS"/>
+-	<value value="143" name="A7XX_PERF_SP_STCHE_MISS_INC_BV"/>
+-	<value value="144" name="A7XX_PERF_SP_STCHE_MISS_INC_LPAC"/>
+-	<value value="145" name="A7XX_PERF_SP_VGPR_ACTIVE_CONTEXTS"/>
+-	<value value="146" name="A7XX_PERF_SP_PGPR_ALLOC_CONTEXTS"/>
+-	<value value="147" name="A7XX_PERF_SP_VGPR_ALLOC_CONTEXTS"/>
+-	<value value="148" name="A7XX_PERF_SP_RTU_RAY_BOX_INTERSECTIONS"/>
+-	<value value="149" name="A7XX_PERF_SP_RTU_RAY_TRIANGLE_INTERSECTIONS"/>
+-	<value value="150" name="A7XX_PERF_SP_SCH_STALL_CYCLES_RTU"/>
+-</enum>
+-
+-<enum name="a7xx_rb_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_RB_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_RB_STALL_CYCLES_HLSQ"/>
+-	<value value="2" name="A7XX_PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
+-	<value value="3" name="A7XX_PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
+-	<value value="4" name="A7XX_PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
+-	<value value="5" name="A7XX_PERF_RB_STARVE_CYCLES_SP"/>
+-	<value value="6" name="A7XX_PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
+-	<value value="7" name="A7XX_PERF_RB_STARVE_CYCLES_CCU"/>
+-	<value value="8" name="A7XX_PERF_RB_STARVE_CYCLES_Z_PLANE"/>
+-	<value value="9" name="A7XX_PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
+-	<value value="10" name="A7XX_PERF_RB_Z_WORKLOAD"/>
+-	<value value="11" name="A7XX_PERF_RB_HLSQ_ACTIVE"/>
+-	<value value="12" name="A7XX_PERF_RB_Z_READ"/>
+-	<value value="13" name="A7XX_PERF_RB_Z_WRITE"/>
+-	<value value="14" name="A7XX_PERF_RB_C_READ"/>
+-	<value value="15" name="A7XX_PERF_RB_C_WRITE"/>
+-	<value value="16" name="A7XX_PERF_RB_TOTAL_PASS"/>
+-	<value value="17" name="A7XX_PERF_RB_Z_PASS"/>
+-	<value value="18" name="A7XX_PERF_RB_Z_FAIL"/>
+-	<value value="19" name="A7XX_PERF_RB_S_FAIL"/>
+-	<value value="20" name="A7XX_PERF_RB_BLENDED_FXP_COMPONENTS"/>
+-	<value value="21" name="A7XX_PERF_RB_BLENDED_FP16_COMPONENTS"/>
+-	<value value="22" name="A7XX_PERF_RB_PS_INVOCATIONS"/>
+-	<value value="23" name="A7XX_PERF_RB_2D_ALIVE_CYCLES"/>
+-	<value value="24" name="A7XX_PERF_RB_2D_STALL_CYCLES_A2D"/>
+-	<value value="25" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SRC"/>
+-	<value value="26" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SP"/>
+-	<value value="27" name="A7XX_PERF_RB_2D_STARVE_CYCLES_DST"/>
+-	<value value="28" name="A7XX_PERF_RB_2D_VALID_PIXELS"/>
+-	<value value="29" name="A7XX_PERF_RB_3D_PIXELS"/>
+-	<value value="30" name="A7XX_PERF_RB_BLENDER_WORKING_CYCLES"/>
+-	<value value="31" name="A7XX_PERF_RB_ZPROC_WORKING_CYCLES"/>
+-	<value value="32" name="A7XX_PERF_RB_CPROC_WORKING_CYCLES"/>
+-	<value value="33" name="A7XX_PERF_RB_SAMPLER_WORKING_CYCLES"/>
+-	<value value="34" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
+-	<value value="35" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
+-	<value value="36" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
+-	<value value="37" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
+-	<value value="38" name="A7XX_PERF_RB_STALL_CYCLES_VPC"/>
+-	<value value="39" name="A7XX_PERF_RB_2D_INPUT_TRANS"/>
+-	<value value="40" name="A7XX_PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
+-	<value value="41" name="A7XX_PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
+-	<value value="42" name="A7XX_PERF_RB_BLENDED_FP32_COMPONENTS"/>
+-	<value value="43" name="A7XX_PERF_RB_COLOR_PIX_TILES"/>
+-	<value value="44" name="A7XX_PERF_RB_STALL_CYCLES_CCU"/>
+-	<value value="45" name="A7XX_PERF_RB_EARLY_Z_ARB3_GRANT"/>
+-	<value value="46" name="A7XX_PERF_RB_LATE_Z_ARB3_GRANT"/>
+-	<value value="47" name="A7XX_PERF_RB_EARLY_Z_SKIP_GRANT"/>
+-	<value value="48" name="A7XX_PERF_RB_VRS_1x1_QUADS"/>
+-	<value value="49" name="A7XX_PERF_RB_VRS_2x1_QUADS"/>
+-	<value value="50" name="A7XX_PERF_RB_VRS_1x2_QUADS"/>
+-	<value value="51" name="A7XX_PERF_RB_VRS_2x2_QUADS"/>
+-	<value value="52" name="A7XX_PERF_RB_VRS_4x2_QUADS"/>
+-	<value value="53" name="A7XX_PERF_RB_VRS_4x4_QUADS"/>
+-</enum>
+-
+-<enum name="a7xx_vsc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_VSC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_VSC_WORKING_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_VSC_STALL_CYCLES_UCHE"/>
+-	<value value="3" name="A7XX_PERF_VSC_EOT_NUM"/>
+-	<value value="4" name="A7XX_PERF_VSC_INPUT_TILES"/>
+-</enum>
+-
+-<enum name="a7xx_ccu_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_CCU_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
+-	<value value="2" name="A7XX_PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
+-	<value value="3" name="A7XX_PERF_CCU_DEPTH_BLOCKS"/>
+-	<value value="4" name="A7XX_PERF_CCU_COLOR_BLOCKS"/>
+-	<value value="5" name="A7XX_PERF_CCU_DEPTH_BLOCK_HIT"/>
+-	<value value="6" name="A7XX_PERF_CCU_COLOR_BLOCK_HIT"/>
+-	<value value="7" name="A7XX_PERF_CCU_PARTIAL_BLOCK_READ"/>
+-	<value value="8" name="A7XX_PERF_CCU_GMEM_READ"/>
+-	<value value="9" name="A7XX_PERF_CCU_GMEM_WRITE"/>
+-	<value value="10" name="A7XX_PERF_CCU_2D_RD_REQ"/>
+-	<value value="11" name="A7XX_PERF_CCU_2D_WR_REQ"/>
+-	<value value="12" name="A7XX_PERF_CCU_UBWC_COLOR_BLOCKS_CONCURRENT"/>
+-	<value value="13" name="A7XX_PERF_CCU_UBWC_DEPTH_BLOCKS_CONCURRENT"/>
+-	<value value="14" name="A7XX_PERF_CCU_COLOR_RESOLVE_DROPPED"/>
+-	<value value="15" name="A7XX_PERF_CCU_DEPTH_RESOLVE_DROPPED"/>
+-	<value value="16" name="A7XX_PERF_CCU_COLOR_RENDER_CONCURRENT"/>
+-	<value value="17" name="A7XX_PERF_CCU_DEPTH_RENDER_CONCURRENT"/>
+-	<value value="18" name="A7XX_PERF_CCU_COLOR_RESOLVE_AFTER_RENDER"/>
+-	<value value="19" name="A7XX_PERF_CCU_DEPTH_RESOLVE_AFTER_RENDER"/>
+-	<value value="20" name="A7XX_PERF_CCU_GMEM_EXTRA_DEPTH_READ"/>
+-	<value value="21" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA"/>
+-	<value value="22" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA_FULL"/>
+-</enum>
+-
+-<enum name="a7xx_lrz_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_LRZ_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_LRZ_STARVE_CYCLES_RAS"/>
+-	<value value="2" name="A7XX_PERF_LRZ_STALL_CYCLES_RB"/>
+-	<value value="3" name="A7XX_PERF_LRZ_STALL_CYCLES_VSC"/>
+-	<value value="4" name="A7XX_PERF_LRZ_STALL_CYCLES_VPC"/>
+-	<value value="5" name="A7XX_PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
+-	<value value="6" name="A7XX_PERF_LRZ_STALL_CYCLES_UCHE"/>
+-	<value value="7" name="A7XX_PERF_LRZ_LRZ_READ"/>
+-	<value value="8" name="A7XX_PERF_LRZ_LRZ_WRITE"/>
+-	<value value="9" name="A7XX_PERF_LRZ_READ_LATENCY"/>
+-	<value value="10" name="A7XX_PERF_LRZ_MERGE_CACHE_UPDATING"/>
+-	<value value="11" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
+-	<value value="12" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
+-	<value value="13" name="A7XX_PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
+-	<value value="14" name="A7XX_PERF_LRZ_FULL_8X8_TILES"/>
+-	<value value="15" name="A7XX_PERF_LRZ_PARTIAL_8X8_TILES"/>
+-	<value value="16" name="A7XX_PERF_LRZ_TILE_KILLED"/>
+-	<value value="17" name="A7XX_PERF_LRZ_TOTAL_PIXEL"/>
+-	<value value="18" name="A7XX_PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
+-	<value value="19" name="A7XX_PERF_LRZ_FEEDBACK_ACCEPT"/>
+-	<value value="20" name="A7XX_PERF_LRZ_FEEDBACK_DISCARD"/>
+-	<value value="21" name="A7XX_PERF_LRZ_FEEDBACK_STALL"/>
+-	<value value="22" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
+-	<value value="23" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
+-	<value value="24" name="A7XX_PERF_LRZ_RAS_MASK_TRANS"/>
+-	<value value="25" name="A7XX_PERF_LRZ_STALL_CYCLES_MVC"/>
+-	<value value="26" name="A7XX_PERF_LRZ_TILE_KILLED_BY_IMAGE_VRS"/>
+-	<value value="27" name="A7XX_PERF_LRZ_TILE_KILLED_BY_Z"/>
+-</enum>
+-
+-<enum name="a7xx_cmp_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_CMPDECMP_STALL_CYCLES_ARB"/>
+-	<value value="1" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
+-	<value value="2" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
+-	<value value="3" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
+-	<value value="4" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
+-	<value value="5" name="A7XX_PERF_CMPDECMP_VBIF_READ_REQUEST"/>
+-	<value value="6" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
+-	<value value="7" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA"/>
+-	<value value="8" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA"/>
+-	<value value="9" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
+-	<value value="10" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
+-	<value value="11" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
+-	<value value="12" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
+-	<value value="13" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
+-	<value value="14" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
+-	<value value="15" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
+-	<value value="16" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
+-	<value value="17" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
+-	<value value="18" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
+-	<value value="19" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
+-	<value value="20" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
+-	<value value="21" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
+-	<value value="22" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
+-	<value value="23" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
+-	<value value="24" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
+-	<value value="25" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
+-	<value value="26" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
+-	<value value="27" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
+-	<value value="28" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
+-	<value value="29" name="A7XX_PERF_CMPDECMP_RESOLVE_EVENTS"/>
+-	<value value="30" name="A7XX_PERF_CMPDECMP_CONCURRENT_RESOLVE_EVENTS"/>
+-	<value value="31" name="A7XX_PERF_CMPDECMP_DROPPED_CLEAR_EVENTS"/>
+-	<value value="32" name="A7XX_PERF_CMPDECMP_ST_BLOCKS_CONCURRENT"/>
+-	<value value="33" name="A7XX_PERF_CMPDECMP_LRZ_ST_BLOCKS_CONCURRENT"/>
+-	<value value="34" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG0_COUNT"/>
+-	<value value="35" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG1_COUNT"/>
+-	<value value="36" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG2_COUNT"/>
+-	<value value="37" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG3_COUNT"/>
+-	<value value="38" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG4_COUNT"/>
+-	<value value="39" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG5_COUNT"/>
+-	<value value="40" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG6_COUNT"/>
+-	<value value="41" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG8_COUNT"/>
+-	<value value="42" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG0_COUNT"/>
+-	<value value="43" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG1_COUNT"/>
+-	<value value="44" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG2_COUNT"/>
+-	<value value="45" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG3_COUNT"/>
+-	<value value="46" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG4_COUNT"/>
+-	<value value="47" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG5_COUNT"/>
+-	<value value="48" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG6_COUNT"/>
+-	<value value="49" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG8_COUNT"/>
+-</enum>
+-
+-<enum name="a7xx_gbif_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_GBIF_RESERVED_0"/>
+-	<value value="1" name="A7XX_PERF_GBIF_RESERVED_1"/>
+-	<value value="2" name="A7XX_PERF_GBIF_RESERVED_2"/>
+-	<value value="3" name="A7XX_PERF_GBIF_RESERVED_3"/>
+-	<value value="4" name="A7XX_PERF_GBIF_RESERVED_4"/>
+-	<value value="5" name="A7XX_PERF_GBIF_RESERVED_5"/>
+-	<value value="6" name="A7XX_PERF_GBIF_RESERVED_6"/>
+-	<value value="7" name="A7XX_PERF_GBIF_RESERVED_7"/>
+-	<value value="8" name="A7XX_PERF_GBIF_RESERVED_8"/>
+-	<value value="9" name="A7XX_PERF_GBIF_RESERVED_9"/>
+-	<value value="10" name="A7XX_PERF_GBIF_AXI0_READ_REQUESTS_TOTAL"/>
+-	<value value="11" name="A7XX_PERF_GBIF_AXI1_READ_REQUESTS_TOTAL"/>
+-	<value value="12" name="A7XX_PERF_GBIF_RESERVED_12"/>
+-	<value value="13" name="A7XX_PERF_GBIF_RESERVED_13"/>
+-	<value value="14" name="A7XX_PERF_GBIF_RESERVED_14"/>
+-	<value value="15" name="A7XX_PERF_GBIF_RESERVED_15"/>
+-	<value value="16" name="A7XX_PERF_GBIF_RESERVED_16"/>
+-	<value value="17" name="A7XX_PERF_GBIF_RESERVED_17"/>
+-	<value value="18" name="A7XX_PERF_GBIF_RESERVED_18"/>
+-	<value value="19" name="A7XX_PERF_GBIF_RESERVED_19"/>
+-	<value value="20" name="A7XX_PERF_GBIF_RESERVED_20"/>
+-	<value value="21" name="A7XX_PERF_GBIF_RESERVED_21"/>
+-	<value value="22" name="A7XX_PERF_GBIF_AXI0_WRITE_REQUESTS_TOTAL"/>
+-	<value value="23" name="A7XX_PERF_GBIF_AXI1_WRITE_REQUESTS_TOTAL"/>
+-	<value value="24" name="A7XX_PERF_GBIF_RESERVED_24"/>
+-	<value value="25" name="A7XX_PERF_GBIF_RESERVED_25"/>
+-	<value value="26" name="A7XX_PERF_GBIF_RESERVED_26"/>
+-	<value value="27" name="A7XX_PERF_GBIF_RESERVED_27"/>
+-	<value value="28" name="A7XX_PERF_GBIF_RESERVED_28"/>
+-	<value value="29" name="A7XX_PERF_GBIF_RESERVED_29"/>
+-	<value value="30" name="A7XX_PERF_GBIF_RESERVED_30"/>
+-	<value value="31" name="A7XX_PERF_GBIF_RESERVED_31"/>
+-	<value value="32" name="A7XX_PERF_GBIF_RESERVED_32"/>
+-	<value value="33" name="A7XX_PERF_GBIF_RESERVED_33"/>
+-	<value value="34" name="A7XX_PERF_GBIF_AXI0_READ_DATA_BEATS_TOTAL"/>
+-	<value value="35" name="A7XX_PERF_GBIF_AXI1_READ_DATA_BEATS_TOTAL"/>
+-	<value value="36" name="A7XX_PERF_GBIF_RESERVED_36"/>
+-	<value value="37" name="A7XX_PERF_GBIF_RESERVED_37"/>
+-	<value value="38" name="A7XX_PERF_GBIF_RESERVED_38"/>
+-	<value value="39" name="A7XX_PERF_GBIF_RESERVED_39"/>
+-	<value value="40" name="A7XX_PERF_GBIF_RESERVED_40"/>
+-	<value value="41" name="A7XX_PERF_GBIF_RESERVED_41"/>
+-	<value value="42" name="A7XX_PERF_GBIF_RESERVED_42"/>
+-	<value value="43" name="A7XX_PERF_GBIF_RESERVED_43"/>
+-	<value value="44" name="A7XX_PERF_GBIF_RESERVED_44"/>
+-	<value value="45" name="A7XX_PERF_GBIF_RESERVED_45"/>
+-	<value value="46" name="A7XX_PERF_GBIF_AXI0_WRITE_DATA_BEATS_TOTAL"/>
+-	<value value="47" name="A7XX_PERF_GBIF_AXI1_WRITE_DATA_BEATS_TOTAL"/>
+-	<value value="48" name="A7XX_PERF_GBIF_RESERVED_48"/>
+-	<value value="49" name="A7XX_PERF_GBIF_RESERVED_49"/>
+-	<value value="50" name="A7XX_PERF_GBIF_RESERVED_50"/>
+-	<value value="51" name="A7XX_PERF_GBIF_RESERVED_51"/>
+-	<value value="52" name="A7XX_PERF_GBIF_RESERVED_52"/>
+-	<value value="53" name="A7XX_PERF_GBIF_RESERVED_53"/>
+-	<value value="54" name="A7XX_PERF_GBIF_RESERVED_54"/>
+-	<value value="55" name="A7XX_PERF_GBIF_RESERVED_55"/>
+-	<value value="56" name="A7XX_PERF_GBIF_RESERVED_56"/>
+-	<value value="57" name="A7XX_PERF_GBIF_RESERVED_57"/>
+-	<value value="58" name="A7XX_PERF_GBIF_RESERVED_58"/>
+-	<value value="59" name="A7XX_PERF_GBIF_RESERVED_59"/>
+-	<value value="60" name="A7XX_PERF_GBIF_RESERVED_60"/>
+-	<value value="61" name="A7XX_PERF_GBIF_RESERVED_61"/>
+-	<value value="62" name="A7XX_PERF_GBIF_RESERVED_62"/>
+-	<value value="63" name="A7XX_PERF_GBIF_RESERVED_63"/>
+-	<value value="64" name="A7XX_PERF_GBIF_RESERVED_64"/>
+-	<value value="65" name="A7XX_PERF_GBIF_RESERVED_65"/>
+-	<value value="66" name="A7XX_PERF_GBIF_RESERVED_66"/>
+-	<value value="67" name="A7XX_PERF_GBIF_RESERVED_67"/>
+-	<value value="68" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_RD_ALL"/>
+-	<value value="69" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_RD_ALL"/>
+-	<value value="70" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_WR_ALL"/>
+-	<value value="71" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_WR_ALL"/>
+-	<value value="72" name="A7XX_PERF_GBIF_AXI_CH0_REQUEST_HELD_OFF"/>
+-	<value value="73" name="A7XX_PERF_GBIF_AXI_CH1_REQUEST_HELD_OFF"/>
+-	<value value="74" name="A7XX_PERF_GBIF_AXI_REQUEST_HELD_OFF"/>
+-	<value value="75" name="A7XX_PERF_GBIF_AXI_CH0_WRITE_DATA_HELD_OFF"/>
+-	<value value="76" name="A7XX_PERF_GBIF_AXI_CH1_WRITE_DATA_HELD_OFF"/>
+-	<value value="77" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_DATA_HELD_OFF"/>
+-	<value value="78" name="A7XX_PERF_GBIF_AXI_ALL_READ_BEATS"/>
+-	<value value="79" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_BEATS"/>
+-	<value value="80" name="A7XX_PERF_GBIF_AXI_ALL_BEATS"/>
+-</enum>
+-
+-<enum name="a7xx_ufc_perfcounter_select">
+-	<value value="0" name="A7XX_PERF_UFC_BUSY_CYCLES"/>
+-	<value value="1" name="A7XX_PERF_UFC_READ_DATA_VBIF"/>
+-	<value value="2" name="A7XX_PERF_UFC_WRITE_DATA_VBIF"/>
+-	<value value="3" name="A7XX_PERF_UFC_READ_REQUEST_VBIF"/>
+-	<value value="4" name="A7XX_PERF_UFC_WRITE_REQUEST_VBIF"/>
+-	<value value="5" name="A7XX_PERF_UFC_LRZ_FILTER_HIT"/>
+-	<value value="6" name="A7XX_PERF_UFC_LRZ_FILTER_MISS"/>
+-	<value value="7" name="A7XX_PERF_UFC_CRE_FILTER_HIT"/>
+-	<value value="8" name="A7XX_PERF_UFC_CRE_FILTER_MISS"/>
+-	<value value="9" name="A7XX_PERF_UFC_SP_FILTER_HIT"/>
+-	<value value="10" name="A7XX_PERF_UFC_SP_FILTER_MISS"/>
+-	<value value="11" name="A7XX_PERF_UFC_SP_REQUESTS"/>
+-	<value value="12" name="A7XX_PERF_UFC_TP_FILTER_HIT"/>
+-	<value value="13" name="A7XX_PERF_UFC_TP_FILTER_MISS"/>
+-	<value value="14" name="A7XX_PERF_UFC_TP_REQUESTS"/>
+-	<value value="15" name="A7XX_PERF_UFC_MAIN_HIT_LRZ_PREFETCH"/>
+-	<value value="16" name="A7XX_PERF_UFC_MAIN_HIT_CRE_PREFETCH"/>
+-	<value value="17" name="A7XX_PERF_UFC_MAIN_HIT_SP_PREFETCH"/>
+-	<value value="18" name="A7XX_PERF_UFC_MAIN_HIT_TP_PREFETCH"/>
+-	<value value="19" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_READ"/>
+-	<value value="20" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_WRITE"/>
+-	<value value="21" name="A7XX_PERF_UFC_MAIN_MISS_LRZ_PREFETCH"/>
+-	<value value="22" name="A7XX_PERF_UFC_MAIN_MISS_CRE_PREFETCH"/>
+-	<value value="23" name="A7XX_PERF_UFC_MAIN_MISS_SP_PREFETCH"/>
+-	<value value="24" name="A7XX_PERF_UFC_MAIN_MISS_TP_PREFETCH"/>
+-	<value value="25" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_READ"/>
+-	<value value="26" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_WRITE"/>
+-	<value value="27" name="A7XX_PERF_UFC_UBWC_READ_UFC_TRANS"/>
+-	<value value="28" name="A7XX_PERF_UFC_UBWC_WRITE_UFC_TRANS"/>
+-	<value value="29" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_CMD"/>
+-	<value value="30" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_RDATA"/>
+-	<value value="31" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_WDATA"/>
+-	<value value="32" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_WR_FLAG"/>
+-	<value value="33" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_FLAG_RTN"/>
+-	<value value="34" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_EVENT"/>
+-	<value value="35" name="A7XX_PERF_UFC_LRZ_PREFETCH_STALLED_CYCLES"/>
+-	<value value="36" name="A7XX_PERF_UFC_CRE_PREFETCH_STALLED_CYCLES"/>
+-	<value value="37" name="A7XX_PERF_UFC_SPTP_PREFETCH_STALLED_CYCLES"/>
+-	<value value="38" name="A7XX_PERF_UFC_UBWC_RD_STALLED_CYCLES"/>
+-	<value value="39" name="A7XX_PERF_UFC_UBWC_WR_STALLED_CYCLES"/>
+-	<value value="40" name="A7XX_PERF_UFC_PREFETCH_STALLED_CYCLES"/>
+-	<value value="41" name="A7XX_PERF_UFC_EVICTION_STALLED_CYCLES"/>
+-	<value value="42" name="A7XX_PERF_UFC_LOCK_STALLED_CYCLES"/>
+-	<value value="43" name="A7XX_PERF_UFC_MISS_LATENCY_CYCLES"/>
+-	<value value="44" name="A7XX_PERF_UFC_MISS_LATENCY_SAMPLES"/>
+-	<value value="45" name="A7XX_PERF_UFC_UBWC_REQ_STALLED_CYCLES"/>
+-	<value value="46" name="A7XX_PERF_UFC_TP_HINT_TAG_MISS"/>
+-	<value value="47" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_RDY"/>
+-	<value value="48" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_NRDY"/>
+-	<value value="49" name="A7XX_PERF_UFC_TP_HINT_IS_FCLEAR"/>
+-	<value value="50" name="A7XX_PERF_UFC_TP_HINT_IS_ALPHA0"/>
+-	<value value="51" name="A7XX_PERF_UFC_SP_L1_FILTER_HIT"/>
+-	<value value="52" name="A7XX_PERF_UFC_SP_L1_FILTER_MISS"/>
+-	<value value="53" name="A7XX_PERF_UFC_SP_L1_FILTER_REQUESTS"/>
+-	<value value="54" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_RDY"/>
+-	<value value="55" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_NRDY"/>
+-	<value value="56" name="A7XX_PERF_UFC_TP_L1_TAG_MISS"/>
+-	<value value="57" name="A7XX_PERF_UFC_TP_L1_FILTER_REQUESTS"/>
+-</enum>
+-
+ <domain name="A6XX" width="32" prefix="variant" varset="chip">
+ 	<bitset name="A6XX_RBBM_INT_0_MASK" inline="no" varset="chip">
+ 		<bitfield name="RBBM_GPU_IDLE" pos="0" type="boolean"/>
+@@ -2371,7 +177,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x08ab" name="CP_CONTEXT_SWITCH_LEVEL_STATUS" variants="A7XX-"/>
+ 	<array offset="0x08D0" name="CP_PERFCTR_CP_SEL" stride="1" length="14"/>
+ 	<array offset="0x08e0" name="CP_BV_PERFCTR_CP_SEL" stride="1" length="7" variants="A7XX-"/>
+-	<reg64 offset="0x0900" name="CP_CRASH_SCRIPT_BASE"/>
++	<reg64 offset="0x0900" name="CP_CRASH_DUMP_SCRIPT_BASE"/>
+ 	<reg32 offset="0x0902" name="CP_CRASH_DUMP_CNTL"/>
+ 	<reg32 offset="0x0903" name="CP_CRASH_DUMP_STATUS"/>
+ 	<reg32 offset="0x0908" name="CP_SQE_STAT_ADDR"/>
+@@ -2400,22 +206,22 @@ to upconvert to 32b float internally?
+ 	-->
+ 	<reg64 offset="0x0934" name="CP_VSD_BASE"/>
+ 
+-	<bitset name="a6xx_roq_stat" inline="yes">
++	<bitset name="a6xx_roq_status" inline="yes">
+ 		<bitfield name="RPTR" low="0" high="9"/>
+ 		<bitfield name="WPTR" low="16" high="25"/>
+ 	</bitset>
+-	<reg32 offset="0x0939" name="CP_ROQ_RB_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093a" name="CP_ROQ_IB1_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093b" name="CP_ROQ_IB2_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093c" name="CP_ROQ_SDS_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093d" name="CP_ROQ_MRB_STAT" type="a6xx_roq_stat"/>
+-	<reg32 offset="0x093e" name="CP_ROQ_VSD_STAT" type="a6xx_roq_stat"/>
+-
+-	<reg32 offset="0x0943" name="CP_IB1_DWORDS"/>
+-	<reg32 offset="0x0944" name="CP_IB2_DWORDS"/>
+-	<reg32 offset="0x0945" name="CP_SDS_DWORDS"/>
+-	<reg32 offset="0x0946" name="CP_MRB_DWORDS"/>
+-	<reg32 offset="0x0947" name="CP_VSD_DWORDS"/>
++	<reg32 offset="0x0939" name="CP_ROQ_RB_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093a" name="CP_ROQ_IB1_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093b" name="CP_ROQ_IB2_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093c" name="CP_ROQ_SDS_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093d" name="CP_ROQ_MRB_STATUS" type="a6xx_roq_status"/>
++	<reg32 offset="0x093e" name="CP_ROQ_VSD_STATUS" type="a6xx_roq_status"/>
++
++	<reg32 offset="0x0943" name="CP_IB1_INIT_SIZE"/>
++	<reg32 offset="0x0944" name="CP_IB2_INIT_SIZE"/>
++	<reg32 offset="0x0945" name="CP_SDS_INIT_SIZE"/>
++	<reg32 offset="0x0946" name="CP_MRB_INIT_SIZE"/>
++	<reg32 offset="0x0947" name="CP_VSD_INIT_SIZE"/>
+ 
+ 	<reg32 offset="0x0948" name="CP_ROQ_AVAIL_RB">
+ 		<doc>number of remaining dwords incl current dword being consumed?</doc>
+@@ -2451,6 +257,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x098D" name="CP_AHB_CNTL"/>
+ 	<reg32 offset="0x0A00" name="CP_APERTURE_CNTL_HOST" variants="A6XX"/>
+ 	<reg32 offset="0x0A00" name="CP_APERTURE_CNTL_HOST" type="a7xx_aperture_cntl" variants="A7XX-"/>
++	<reg32 offset="0x0A01" name="CP_APERTURE_CNTL_SQE" variants="A6XX"/>
+ 	<reg32 offset="0x0A03" name="CP_APERTURE_CNTL_CD" variants="A6XX"/>
+ 	<reg32 offset="0x0A03" name="CP_APERTURE_CNTL_CD" type="a7xx_aperture_cntl" variants="A7XX-"/>
+ 
+@@ -2468,8 +275,8 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x0a97" name="CP_BV_MEM_POOL_DBG_DATA" variants="A7XX-"/>
+ 	<reg64 offset="0x0a98" name="CP_BV_RB_RPTR_ADDR" variants="A7XX-"/>
+ 
+-	<reg32 offset="0x0a9a" name="CP_RESOURCE_TBL_DBG_ADDR" variants="A7XX-"/>
+-	<reg32 offset="0x0a9b" name="CP_RESOURCE_TBL_DBG_DATA" variants="A7XX-"/>
++	<reg32 offset="0x0a9a" name="CP_RESOURCE_TABLE_DBG_ADDR" variants="A7XX-"/>
++	<reg32 offset="0x0a9b" name="CP_RESOURCE_TABLE_DBG_DATA" variants="A7XX-"/>
+ 	<reg32 offset="0x0ad0" name="CP_BV_APRIV_CNTL" variants="A7XX-"/>
+ 	<reg32 offset="0x0ada" name="CP_BV_CHICKEN_DBG" variants="A7XX-"/>
+ 
+@@ -2619,28 +426,17 @@ to upconvert to 32b float internally?
+ 	    vertices in, number of primitives assembled etc.
+ 	-->
+ 
+-	<reg32 offset="0x0540" name="RBBM_PRIMCTR_0_LO"/>  <!-- vs vertices in -->
+-	<reg32 offset="0x0541" name="RBBM_PRIMCTR_0_HI"/>
+-	<reg32 offset="0x0542" name="RBBM_PRIMCTR_1_LO"/>  <!-- vs primitives out -->
+-	<reg32 offset="0x0543" name="RBBM_PRIMCTR_1_HI"/>
+-	<reg32 offset="0x0544" name="RBBM_PRIMCTR_2_LO"/>  <!-- hs vertices in -->
+-	<reg32 offset="0x0545" name="RBBM_PRIMCTR_2_HI"/>
+-	<reg32 offset="0x0546" name="RBBM_PRIMCTR_3_LO"/>  <!-- hs patches out -->
+-	<reg32 offset="0x0547" name="RBBM_PRIMCTR_3_HI"/>
+-	<reg32 offset="0x0548" name="RBBM_PRIMCTR_4_LO"/>  <!-- dss vertices in -->
+-	<reg32 offset="0x0549" name="RBBM_PRIMCTR_4_HI"/>
+-	<reg32 offset="0x054a" name="RBBM_PRIMCTR_5_LO"/>  <!-- ds primitives out -->
+-	<reg32 offset="0x054b" name="RBBM_PRIMCTR_5_HI"/>
+-	<reg32 offset="0x054c" name="RBBM_PRIMCTR_6_LO"/>  <!-- gs primitives in -->
+-	<reg32 offset="0x054d" name="RBBM_PRIMCTR_6_HI"/>
+-	<reg32 offset="0x054e" name="RBBM_PRIMCTR_7_LO"/>  <!-- gs primitives out -->
+-	<reg32 offset="0x054f" name="RBBM_PRIMCTR_7_HI"/>
+-	<reg32 offset="0x0550" name="RBBM_PRIMCTR_8_LO"/>  <!-- gs primitives out -->
+-	<reg32 offset="0x0551" name="RBBM_PRIMCTR_8_HI"/>
+-	<reg32 offset="0x0552" name="RBBM_PRIMCTR_9_LO"/>  <!-- raster primitives in -->
+-	<reg32 offset="0x0553" name="RBBM_PRIMCTR_9_HI"/>
+-	<reg32 offset="0x0554" name="RBBM_PRIMCTR_10_LO"/>
+-	<reg32 offset="0x0555" name="RBBM_PRIMCTR_10_HI"/>
++	<reg64 offset="0x0540" name="RBBM_PIPESTAT_IAVERTICES"/>
++	<reg64 offset="0x0542" name="RBBM_PIPESTAT_IAPRIMITIVES"/>
++	<reg64 offset="0x0544" name="RBBM_PIPESTAT_VSINVOCATIONS"/>
++	<reg64 offset="0x0546" name="RBBM_PIPESTAT_HSINVOCATIONS"/>
++	<reg64 offset="0x0548" name="RBBM_PIPESTAT_DSINVOCATIONS"/>
++	<reg64 offset="0x054a" name="RBBM_PIPESTAT_GSINVOCATIONS"/>
++	<reg64 offset="0x054c" name="RBBM_PIPESTAT_GSPRIMITIVES"/>
++	<reg64 offset="0x054e" name="RBBM_PIPESTAT_CINVOCATIONS"/>
++	<reg64 offset="0x0550" name="RBBM_PIPESTAT_CPRIMITIVES"/>
++	<reg64 offset="0x0552" name="RBBM_PIPESTAT_PSINVOCATIONS"/>
++	<reg64 offset="0x0554" name="RBBM_PIPESTAT_CSINVOCATIONS"/>
+ 
+ 	<reg32 offset="0xF400" name="RBBM_SECVID_TRUST_CNTL"/>
+ 	<reg64 offset="0xF800" name="RBBM_SECVID_TSB_TRUSTED_BASE"/>
+@@ -2779,7 +575,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x0011f" name="RBBM_CGC_P2S_TRIG_CMD" variants="A7XX-"/>
+ 	<reg32 offset="0x00120" name="RBBM_CLOCK_CNTL_TEX_FCHE"/>
+ 	<reg32 offset="0x00121" name="RBBM_CLOCK_DELAY_TEX_FCHE"/>
+-	<reg32 offset="0x00122" name="RBBM_CLOCK_HYST_TEX_FCHE"/>
++	<reg32 offset="0x00122" name="RBBM_CLOCK_HYST_TEX_FCHE" variants="A6XX"/>
+ 	<reg32 offset="0x00122" name="RBBM_CGC_P2S_STATUS" variants="A7XX-">
+ 		<bitfield name="TXDONE" pos="0" type="boolean"/>
+ 	</reg32>
+@@ -2840,7 +636,7 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 	<reg32 offset="0x062f" name="DBGC_CFG_DBGBUS_TRACE_BUF1"/>
+ 	<reg32 offset="0x0630" name="DBGC_CFG_DBGBUS_TRACE_BUF2"/>
+-	<array offset="0x0CD8" name="VSC_PERFCTR_VSC_SEL" stride="1" length="2"/>
++	<array offset="0x0CD8" name="VSC_PERFCTR_VSC_SEL" stride="1" length="2" variants="A6XX"/>
+ 	<reg32 offset="0x0CD8" name="VSC_UNKNOWN_0CD8" variants="A7XX">
+ 		<doc>
+ 			Set to true when binning, isn't changed afterwards
+@@ -2936,8 +732,8 @@ to upconvert to 32b float internally?
+ 		<bitfield name="WIDTH" low="0" high="7" shr="5" type="uint"/>
+ 		<bitfield name="HEIGHT" low="8" high="16" shr="4" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0x0c03" name="VSC_DRAW_STRM_SIZE_ADDRESS" type="waddress" usage="cmd"/>
+-	<reg32 offset="0x0c06" name="VSC_BIN_COUNT" usage="rp_blit">
++	<reg64 offset="0x0c03" name="VSC_SIZE_BASE" type="waddress" usage="cmd"/>
++	<reg32 offset="0x0c06" name="VSC_EXPANDED_BIN_CNTL" usage="rp_blit">
+ 		<bitfield name="NX" low="1" high="10" type="uint"/>
+ 		<bitfield name="NY" low="11" high="20" type="uint"/>
+ 	</reg32>
+@@ -2967,14 +763,14 @@ to upconvert to 32b float internally?
+ 
+ 	LIMIT is set to PITCH - 64, to make room for a bit of overflow
+ 	 -->
+-	<reg64 offset="0x0c30" name="VSC_PRIM_STRM_ADDRESS" type="waddress" usage="cmd"/>
+-	<reg32 offset="0x0c32" name="VSC_PRIM_STRM_PITCH" usage="cmd"/>
+-	<reg32 offset="0x0c33" name="VSC_PRIM_STRM_LIMIT" usage="cmd"/>
+-	<reg64 offset="0x0c34" name="VSC_DRAW_STRM_ADDRESS" type="waddress" usage="cmd"/>
+-	<reg32 offset="0x0c36" name="VSC_DRAW_STRM_PITCH" usage="cmd"/>
+-	<reg32 offset="0x0c37" name="VSC_DRAW_STRM_LIMIT" usage="cmd"/>
+-
+-	<array offset="0x0c38" name="VSC_STATE" stride="1" length="32" usage="rp_blit">
++	<reg64 offset="0x0c30" name="VSC_PIPE_DATA_PRIM_BASE" type="waddress" usage="cmd"/>
++	<reg32 offset="0x0c32" name="VSC_PIPE_DATA_PRIM_STRIDE" usage="cmd"/>
++	<reg32 offset="0x0c33" name="VSC_PIPE_DATA_PRIM_LENGTH" usage="cmd"/>
++	<reg64 offset="0x0c34" name="VSC_PIPE_DATA_DRAW_BASE" type="waddress" usage="cmd"/>
++	<reg32 offset="0x0c36" name="VSC_PIPE_DATA_DRAW_STRIDE" usage="cmd"/>
++	<reg32 offset="0x0c37" name="VSC_PIPE_DATA_DRAW_LENGTH" usage="cmd"/>
++
++	<array offset="0x0c38" name="VSC_CHANNEL_VISIBILITY" stride="1" length="32" usage="rp_blit">
+ 		<doc>
+ 			Seems to be a bitmap of which tiles mapped to the VSC
+ 			pipe contain geometry.
+@@ -2985,7 +781,7 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="0x0" name="REG"/>
+ 	</array>
+ 
+-	<array offset="0x0c58" name="VSC_PRIM_STRM_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
++	<array offset="0x0c58" name="VSC_PIPE_DATA_PRIM_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
+ 		<doc>
+ 			Has the size of data written to corresponding VSC_PRIM_STRM
+ 			buffer.
+@@ -2993,10 +789,10 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="0x0" name="REG"/>
+ 	</array>
+ 
+-	<array offset="0x0c78" name="VSC_DRAW_STRM_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
++	<array offset="0x0c78" name="VSC_PIPE_DATA_DRAW_SIZE" stride="1" length="32" variants="A6XX" usage="rp_blit">
+ 		<doc>
+ 			Has the size of data written to corresponding VSC pipe, ie.
+-			same thing that is written out to VSC_DRAW_STRM_SIZE_ADDRESS_LO/HI
++			same thing that is written out to VSC_SIZE_BASE
+ 		</doc>
+ 		<reg32 offset="0x0" name="REG"/>
+ 	</array>
+@@ -3028,17 +824,17 @@ to upconvert to 32b float internally?
+ 		<bitfield name="PERSP_DIVISION_DISABLE" pos="9" type="boolean"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_gras_xs_cl_cntl" inline="yes">
++	<bitset name="a6xx_gras_xs_clip_cull_distance" inline="yes">
+ 		<bitfield name="CLIP_MASK" low="0" high="7"/>
+ 		<bitfield name="CULL_MASK" low="8" high="15"/>
+ 	</bitset>
+-	<reg32 offset="0x8001" name="GRAS_VS_CL_CNTL" type="a6xx_gras_xs_cl_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8002" name="GRAS_DS_CL_CNTL" type="a6xx_gras_xs_cl_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8003" name="GRAS_GS_CL_CNTL" type="a6xx_gras_xs_cl_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8004" name="GRAS_MAX_LAYER_INDEX" low="0" high="10" type="uint" usage="rp_blit"/>
++	<reg32 offset="0x8001" name="GRAS_CL_VS_CLIP_CULL_DISTANCE" type="a6xx_gras_xs_clip_cull_distance" usage="rp_blit"/>
++	<reg32 offset="0x8002" name="GRAS_CL_DS_CLIP_CULL_DISTANCE" type="a6xx_gras_xs_clip_cull_distance" usage="rp_blit"/>
++	<reg32 offset="0x8003" name="GRAS_CL_GS_CLIP_CULL_DISTANCE" type="a6xx_gras_xs_clip_cull_distance" usage="rp_blit"/>
++	<reg32 offset="0x8004" name="GRAS_CL_ARRAY_SIZE" low="0" high="10" type="uint" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x8005" name="GRAS_CNTL" usage="rp_blit">
+-		<!-- see also RB_RENDER_CONTROL0 -->
++	<reg32 offset="0x8005" name="GRAS_CL_INTERP_CNTL" usage="rp_blit">
++		<!-- see also RB_INTERP_CNTL -->
+ 		<bitfield name="IJ_PERSP_PIXEL" pos="0" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_CENTROID" pos="1" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_SAMPLE" pos="2" type="boolean"/>
+@@ -3067,7 +863,7 @@ to upconvert to 32b float internally?
+ 	<!-- <reg32 offset="0x80f0" name="GRAS_UNKNOWN_80F0" type="a6xx_reg_xy"/> -->
+ 
+ 	<!-- 0x8006-0x800f invalid -->
+-	<array offset="0x8010" name="GRAS_CL_VPORT" stride="6" length="16" usage="rp_blit">
++	<array offset="0x8010" name="GRAS_CL_VIEWPORT" stride="6" length="16" usage="rp_blit">
+ 		<reg32 offset="0" name="XOFFSET" type="float"/>
+ 		<reg32 offset="1" name="XSCALE" type="float"/>
+ 		<reg32 offset="2" name="YOFFSET" type="float"/>
+@@ -3075,7 +871,7 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="4" name="ZOFFSET" type="float"/>
+ 		<reg32 offset="5" name="ZSCALE" type="float"/>
+ 	</array>
+-	<array offset="0x8070" name="GRAS_CL_Z_CLAMP" stride="2" length="16" usage="rp_blit">
++	<array offset="0x8070" name="GRAS_CL_VIEWPORT_ZCLAMP" stride="2" length="16" usage="rp_blit">
+ 		<reg32 offset="0" name="MIN" type="float"/>
+ 		<reg32 offset="1" name="MAX" type="float"/>
+ 	</array>
+@@ -3124,7 +920,12 @@ to upconvert to 32b float internally?
+ 
+ 	<reg32 offset="0x8099" name="GRAS_SU_CONSERVATIVE_RAS_CNTL" usage="cmd">
+ 		<bitfield name="CONSERVATIVERASEN" pos="0" type="boolean"/>
+-		<bitfield name="SHIFTAMOUNT" low="1" high="2"/>
++		<enum name="a6xx_shift_amount">
++			<value value="0" name="NO_SHIFT"/>
++			<value value="1" name="HALF_PIXEL_SHIFT"/>
++			<value value="2" name="FULL_PIXEL_SHIFT"/>
++		</enum>
++		<bitfield name="SHIFTAMOUNT" low="1" high="2" type="a6xx_shift_amount"/>
+ 		<bitfield name="INNERCONSERVATIVERASEN" pos="3" type="boolean"/>
+ 		<bitfield name="UNK4" low="4" high="5"/>
+ 	</reg32>
+@@ -3133,13 +934,13 @@ to upconvert to 32b float internally?
+ 		<bitfield name="LINELENGTHEN" pos="1" type="boolean"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_gras_layer_cntl" inline="yes">
++	<bitset name="a6xx_gras_us_xs_siv_cntl" inline="yes">
+ 		<bitfield name="WRITES_LAYER" pos="0" type="boolean"/>
+ 		<bitfield name="WRITES_VIEW" pos="1" type="boolean"/>
+ 	</bitset>
+-	<reg32 offset="0x809b" name="GRAS_VS_LAYER_CNTL" type="a6xx_gras_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x809c" name="GRAS_GS_LAYER_CNTL" type="a6xx_gras_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x809d" name="GRAS_DS_LAYER_CNTL" type="a6xx_gras_layer_cntl" usage="rp_blit"/>
++	<reg32 offset="0x809b" name="GRAS_SU_VS_SIV_CNTL" type="a6xx_gras_us_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x809c" name="GRAS_SU_GS_SIV_CNTL" type="a6xx_gras_us_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x809d" name="GRAS_SU_DS_SIV_CNTL" type="a6xx_gras_us_xs_siv_cntl" usage="rp_blit"/>
+ 	<!-- 0x809e/0x809f invalid -->
+ 
+ 	<enum name="a6xx_sequenced_thread_dist">
+@@ -3213,13 +1014,13 @@ to upconvert to 32b float internally?
+ 	<enum name="a6xx_lrz_feedback_mask">
+ 		<value value="0x0" name="LRZ_FEEDBACK_NONE"/>
+ 		<value value="0x1" name="LRZ_FEEDBACK_EARLY_Z"/>
+-		<value value="0x2" name="LRZ_FEEDBACK_EARLY_LRZ_LATE_Z"/>
++		<value value="0x2" name="LRZ_FEEDBACK_EARLY_Z_LATE_Z"/>
+ 		<!-- We don't have a flags type, and this combination of flags is often used -->
+-		<value value="0x3" name="LRZ_FEEDBACK_EARLY_Z_OR_EARLY_LRZ_LATE_Z"/>
++		<value value="0x3" name="LRZ_FEEDBACK_EARLY_Z_OR_EARLY_Z_LATE_Z"/>
+ 		<value value="0x4" name="LRZ_FEEDBACK_LATE_Z"/>
+ 	</enum>
+ 
+-	<reg32 offset="0x80a1" name="GRAS_BIN_CONTROL" usage="rp_blit">
++	<reg32 offset="0x80a1" name="GRAS_SC_BIN_CNTL" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 		<bitfield name="RENDER_MODE" low="18" high="20" type="a6xx_render_mode"/>
+@@ -3235,22 +1036,22 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK27" pos="27"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x80a2" name="GRAS_RAS_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0x80a2" name="GRAS_SC_RAS_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="UNK2" pos="2"/>
+ 		<bitfield name="UNK3" pos="3"/>
+ 	</reg32>
+-	<reg32 offset="0x80a3" name="GRAS_DEST_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0x80a3" name="GRAS_SC_DEST_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="MSAA_DISABLE" pos="2" type="boolean"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_sample_config" inline="yes">
++	<bitset name="a6xx_msaa_sample_pos_cntl" inline="yes">
+ 		<bitfield name="UNK0" pos="0"/>
+ 		<bitfield name="LOCATION_ENABLE" pos="1" type="boolean"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_sample_locations" inline="yes">
++	<bitset name="a6xx_programmable_msaa_pos" inline="yes">
+ 		<bitfield name="SAMPLE_0_X" low="0" high="3" radix="4" type="fixed"/>
+ 		<bitfield name="SAMPLE_0_Y" low="4" high="7" radix="4" type="fixed"/>
+ 		<bitfield name="SAMPLE_1_X" low="8" high="11" radix="4" type="fixed"/>
+@@ -3261,9 +1062,9 @@ to upconvert to 32b float internally?
+ 		<bitfield name="SAMPLE_3_Y" low="28" high="31" radix="4" type="fixed"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x80a4" name="GRAS_SAMPLE_CONFIG" type="a6xx_sample_config" usage="rp_blit"/>
+-	<reg32 offset="0x80a5" name="GRAS_SAMPLE_LOCATION_0" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0x80a6" name="GRAS_SAMPLE_LOCATION_1" type="a6xx_sample_locations" usage="rp_blit"/>
++	<reg32 offset="0x80a4" name="GRAS_SC_MSAA_SAMPLE_POS_CNTL" type="a6xx_msaa_sample_pos_cntl" usage="rp_blit"/>
++	<reg32 offset="0x80a5" name="GRAS_SC_PROGRAMMABLE_MSAA_POS_0" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0x80a6" name="GRAS_SC_PROGRAMMABLE_MSAA_POS_1" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x80a7" name="GRAS_UNKNOWN_80A7" variants="A7XX-" usage="cmd"/>
+ 
+@@ -3286,13 +1087,36 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x80f0" name="GRAS_SC_WINDOW_SCISSOR_TL" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<reg32 offset="0x80f1" name="GRAS_SC_WINDOW_SCISSOR_BR" type="a6xx_reg_xy" usage="rp_blit"/>
+ 
+-	<!-- 0x80f4 - 0x80fa are used for VK_KHR_fragment_shading_rate -->
+-	<reg64 offset="0x80f4" name="GRAS_UNKNOWN_80F4" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f5" name="GRAS_UNKNOWN_80F5" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f6" name="GRAS_UNKNOWN_80F6" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f8" name="GRAS_UNKNOWN_80F8" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80f9" name="GRAS_UNKNOWN_80F9" variants="A7XX-" usage="cmd"/>
+-	<reg64 offset="0x80fa" name="GRAS_UNKNOWN_80FA" variants="A7XX-" usage="cmd"/>
++	<enum name="a6xx_fsr_combiner">
++		<value value="0" name="FSR_COMBINER_OP_KEEP"/>
++		<value value="1" name="FSR_COMBINER_OP_REPLACE"/>
++		<value value="2" name="FSR_COMBINER_OP_MIN"/>
++		<value value="3" name="FSR_COMBINER_OP_MAX"/>
++		<value value="4" name="FSR_COMBINER_OP_MUL"/>
++	</enum>
++
++	<reg32 offset="0x80f4" name="GRAS_VRS_CONFIG" variants="A7XX-" usage="rp_blit">
++		<bitfield name="PIPELINE_FSR_ENABLE" pos="0" type="boolean"/>
++		<bitfield name="FRAG_SIZE_X" low="1" high="2" type="uint"/>
++		<bitfield name="FRAG_SIZE_Y" low="3" high="4" type="uint"/>
++		<bitfield name="COMBINER_OP_1" low="5" high="7" type="a6xx_fsr_combiner"/>
++		<bitfield name="COMBINER_OP_2" low="8" high="10" type="a6xx_fsr_combiner"/>
++		<bitfield name="ATTACHMENT_FSR_ENABLE" pos="13" type="boolean"/>
++		<bitfield name="PRIMITIVE_FSR_ENABLE" pos="20" type="boolean"/>
++	</reg32>
++	<reg32 offset="0x80f5" name="GRAS_QUALITY_BUFFER_INFO" variants="A7XX-" usage="rp_blit">
++		<bitfield name="LAYERED" pos="0" type="boolean"/>
++		<bitfield name="TILE_MODE" low="1" high="2" type="a6xx_tile_mode"/>
++	</reg32>
++	<reg32 offset="0x80f6" name="GRAS_QUALITY_BUFFER_DIMENSION" variants="A7XX-" usage="rp_blit">
++		<bitfield name="WIDTH" low="0" high="15" type="uint"/>
++		<bitfield name="HEIGHT" low="16" high="31" type="uint"/>
++	</reg32>
++	<reg64 offset="0x80f8" name="GRAS_QUALITY_BUFFER_BASE" variants="A7XX-" type="waddress" usage="rp_blit"/>
++	<reg32 offset="0x80fa" name="GRAS_QUALITY_BUFFER_PITCH" variants="A7XX-" usage="rp_blit">
++		<bitfield name="PITCH" shr="6" low="0" high="7" type="uint"/>
++		<bitfield name="ARRAY_PITCH" shr="6" low="10" high="28" type="uint"/>
++	</reg32>
+ 
+ 	<enum name="a6xx_lrz_dir_status">
+ 		<value value="0x1" name="LRZ_DIR_LE"/>
+@@ -3313,7 +1137,7 @@ to upconvert to 32b float internally?
+ 		</doc>
+ 		<bitfield name="FC_ENABLE" pos="3" type="boolean" variants="A6XX"/>
+ 		<!-- set when depth-test + depth-write enabled -->
+-		<bitfield name="Z_TEST_ENABLE" pos="4" type="boolean"/>
++		<bitfield name="Z_WRITE_ENABLE" pos="4" type="boolean"/>
+ 		<bitfield name="Z_BOUNDS_ENABLE" pos="5" type="boolean"/>
+ 		<bitfield name="DIR" low="6" high="7" type="a6xx_lrz_dir_status"/>
+ 		<doc>
+@@ -3339,14 +1163,13 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FRAGCOORDSAMPLEMODE" low="1" high="2" type="a6xx_fragcoord_sample_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8102" name="GRAS_LRZ_MRT_BUF_INFO_0" usage="rp_blit">
++	<reg32 offset="0x8102" name="GRAS_LRZ_MRT_BUFFER_INFO_0" usage="rp_blit">
+ 		<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 	</reg32>
+ 	<reg64 offset="0x8103" name="GRAS_LRZ_BUFFER_BASE" align="256" type="waddress" usage="rp_blit"/>
+ 	<reg32 offset="0x8105" name="GRAS_LRZ_BUFFER_PITCH" usage="rp_blit">
+-		<!-- TODO: fix the shr fields -->
+ 		<bitfield name="PITCH" low="0" high="7" shr="5" type="uint"/>
+-		<bitfield name="ARRAY_PITCH" low="10" high="28" shr="4" type="uint"/>
++		<bitfield name="ARRAY_PITCH" low="10" high="28" shr="8" type="uint"/>
+ 	</reg32>
+ 
+ 	<!--
+@@ -3381,18 +1204,18 @@ to upconvert to 32b float internally?
+ 	 -->
+ 	<reg64 offset="0x8106" name="GRAS_LRZ_FAST_CLEAR_BUFFER_BASE" align="64" type="waddress" usage="rp_blit"/>
+ 	<!-- 0x8108 invalid -->
+-	<reg32 offset="0x8109" name="GRAS_SAMPLE_CNTL" usage="rp_blit">
++	<reg32 offset="0x8109" name="GRAS_LRZ_PS_SAMPLEFREQ_CNTL" usage="rp_blit">
+ 		<bitfield name="PER_SAMP_MODE" pos="0" type="boolean"/>
+ 	</reg32>
+ 	<!--
+ 	LRZ buffer represents a single array layer + mip level, and there is
+ 	a single buffer per depth image. Thus to reuse LRZ between renderpasses
+ 	it is necessary to track the depth view used in the past renderpass, which
+-	GRAS_LRZ_DEPTH_VIEW is for.
+-	GRAS_LRZ_CNTL checks if current value of GRAS_LRZ_DEPTH_VIEW is equal to
++	GRAS_LRZ_VIEW_INFO is for.
++	GRAS_LRZ_CNTL checks if current value of GRAS_LRZ_VIEW_INFO is equal to
+ 	the value stored in the LRZ buffer, if not - LRZ is disabled.
+	the value stored in the LRZ buffer; if not, LRZ is disabled.
+-	<reg32 offset="0x810a" name="GRAS_LRZ_DEPTH_VIEW" usage="cmd">
++	<reg32 offset="0x810a" name="GRAS_LRZ_VIEW_INFO" usage="cmd">
+ 		<bitfield name="BASE_LAYER" low="0" high="10" type="uint"/>
+ 		<bitfield name="LAYER_COUNT" low="16" high="26" type="uint"/>
+ 		<bitfield name="BASE_MIP_LEVEL" low="28" high="31" type="uint"/>
+@@ -3408,7 +1231,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8110" name="GRAS_UNKNOWN_8110" low="0" high="1" usage="cmd"/>
+ 
+ 	<!-- A bit tentative but it's a color and it is followed by LRZ_CLEAR -->
+-	<reg32 offset="0x8111" name="GRAS_LRZ_CLEAR_DEPTH_F32" type="float" variants="A7XX-"/>
++	<reg32 offset="0x8111" name="GRAS_LRZ_DEPTH_CLEAR" type="float" variants="A7XX-"/>
+ 
+ 	<reg32 offset="0x8113" name="GRAS_LRZ_DEPTH_BUFFER_INFO" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="DEPTH_FORMAT" low="0" high="2" type="a6xx_depth_format"/>
+@@ -3430,7 +1253,7 @@ to upconvert to 32b float internally?
+ 		<value value="0x5" name="ROTATE_VFLIP"/>
+ 	</enum>
+ 
+-	<bitset name="a6xx_2d_blit_cntl" inline="yes">
++	<bitset name="a6xx_a2d_bit_cntl" inline="yes">
+ 		<bitfield name="ROTATE" low="0" high="2" type="a6xx_rotation"/>
+ 		<bitfield name="OVERWRITEEN" pos="3" type="boolean"/>
+ 		<bitfield name="UNK4" low="4" high="6"/>
+@@ -3447,22 +1270,22 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK30" pos="30" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x8400" name="GRAS_2D_BLIT_CNTL" type="a6xx_2d_blit_cntl" usage="rp_blit"/>
++	<reg32 offset="0x8400" name="GRAS_A2D_BLT_CNTL" type="a6xx_a2d_bit_cntl" usage="rp_blit"/>
+ 	<!-- note: the low 8 bits for src coords are valid, probably fixed point
+ 	     it would be a bit weird though, since we subtract 1 from BR coords
+ 	     apparently signed, gallium driver uses negative coords and it works?
+ 	 -->
+-	<reg32 offset="0x8401" name="GRAS_2D_SRC_TL_X" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8402" name="GRAS_2D_SRC_BR_X" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8403" name="GRAS_2D_SRC_TL_Y" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8404" name="GRAS_2D_SRC_BR_Y" low="8" high="24" type="int" usage="rp_blit"/>
+-	<reg32 offset="0x8405" name="GRAS_2D_DST_TL" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x8406" name="GRAS_2D_DST_BR" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x8401" name="GRAS_A2D_SRC_XMIN" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8402" name="GRAS_A2D_SRC_XMAX" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8403" name="GRAS_A2D_SRC_YMIN" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8404" name="GRAS_A2D_SRC_YMAX" low="8" high="24" type="int" usage="rp_blit"/>
++	<reg32 offset="0x8405" name="GRAS_A2D_DEST_TL" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x8406" name="GRAS_A2D_DEST_BR" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<reg32 offset="0x8407" name="GRAS_2D_UNKNOWN_8407" low="0" high="31"/>
+ 	<reg32 offset="0x8408" name="GRAS_2D_UNKNOWN_8408" low="0" high="31"/>
+ 	<reg32 offset="0x8409" name="GRAS_2D_UNKNOWN_8409" low="0" high="31"/>
+-	<reg32 offset="0x840a" name="GRAS_2D_RESOLVE_CNTL_1" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x840b" name="GRAS_2D_RESOLVE_CNTL_2" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x840a" name="GRAS_A2D_SCISSOR_TL" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x840b" name="GRAS_A2D_SCISSOR_BR" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<!-- 0x840c-0x85ff invalid -->
+ 
+ 	<!-- always 0x880 ? (and 0 in a640/a650 traces?) -->
+@@ -3481,7 +1304,7 @@ to upconvert to 32b float internally?
+ 	-->
+ 
+ 	<!-- same as GRAS_BIN_CONTROL, but without bit 27: -->
+-	<reg32 offset="0x8800" name="RB_BIN_CONTROL" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x8800" name="RB_CNTL" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 		<bitfield name="RENDER_MODE" low="18" high="20" type="a6xx_render_mode"/>
+@@ -3490,7 +1313,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="LRZ_FEEDBACK_ZMODE_MASK" low="24" high="26" type="a6xx_lrz_feedback_mask"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8800" name="RB_BIN_CONTROL" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x8800" name="RB_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 		<bitfield name="RENDER_MODE" low="18" high="20" type="a6xx_render_mode"/>
+@@ -3501,8 +1324,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8801" name="RB_RENDER_CNTL" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="CCUSINGLECACHELINESIZE" low="3" high="5"/>
+ 		<bitfield name="EARLYVIZOUTEN" pos="6" type="boolean"/>
+-		<!-- set during binning pass: -->
+-		<bitfield name="BINNING" pos="7" type="boolean"/>
++		<bitfield name="FS_DISABLE" pos="7" type="boolean"/>
+ 		<bitfield name="UNK8" low="8" high="10"/>
+ 		<bitfield name="RASTER_MODE" pos="8" type="a6xx_raster_mode"/>
+ 		<bitfield name="RASTER_DIRECTION" low="9" high="10" type="a6xx_raster_direction"/>
+@@ -3515,15 +1337,14 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 	<reg32 offset="0x8801" name="RB_RENDER_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="EARLYVIZOUTEN" pos="6" type="boolean"/>
+-		<!-- set during binning pass: -->
+-		<bitfield name="BINNING" pos="7" type="boolean"/>
++		<bitfield name="FS_DISABLE" pos="7" type="boolean"/>
+ 		<bitfield name="RASTER_MODE" pos="8" type="a6xx_raster_mode"/>
+ 		<bitfield name="RASTER_DIRECTION" low="9" high="10" type="a6xx_raster_direction"/>
+ 		<bitfield name="CONSERVATIVERASEN" pos="11" type="boolean"/>
+ 		<bitfield name="INNERCONSERVATIVERASEN" pos="12" type="boolean"/>
+ 	</reg32>
+ 	<reg32 offset="0x8116" name="GRAS_SU_RENDER_CNTL" variants="A7XX-" usage="rp_blit">
+-		<bitfield name="BINNING" pos="7" type="boolean"/>
++		<bitfield name="FS_DISABLE" pos="7" type="boolean"/>
+ 	</reg32>
+ 
+ 	<reg32 offset="0x8802" name="RB_RAS_MSAA_CNTL" usage="rp_blit">
+@@ -3536,16 +1357,16 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MSAA_DISABLE" pos="2" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8804" name="RB_SAMPLE_CONFIG" type="a6xx_sample_config" usage="rp_blit"/>
+-	<reg32 offset="0x8805" name="RB_SAMPLE_LOCATION_0" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0x8806" name="RB_SAMPLE_LOCATION_1" type="a6xx_sample_locations" usage="rp_blit"/>
++	<reg32 offset="0x8804" name="RB_MSAA_SAMPLE_POS_CNTL" type="a6xx_msaa_sample_pos_cntl" usage="rp_blit"/>
++	<reg32 offset="0x8805" name="RB_PROGRAMMABLE_MSAA_POS_0" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0x8806" name="RB_PROGRAMMABLE_MSAA_POS_1" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
+ 	<!-- 0x8807-0x8808 invalid -->
+ 	<!--
+ 	note: maybe not actually called RB_RENDER_CONTROLn (since RB_RENDER_CNTL
+ 	name comes from kernel and is probably right)
+ 	 -->
+-	<reg32 offset="0x8809" name="RB_RENDER_CONTROL0" usage="rp_blit">
+-		<!-- see also GRAS_CNTL -->
++	<reg32 offset="0x8809" name="RB_INTERP_CNTL" usage="rp_blit">
++		<!-- see also GRAS_CL_INTERP_CNTL -->
+ 		<bitfield name="IJ_PERSP_PIXEL" pos="0" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_CENTROID" pos="1" type="boolean"/>
+ 		<bitfield name="IJ_PERSP_SAMPLE" pos="2" type="boolean"/>
+@@ -3555,7 +1376,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="COORD_MASK" low="6" high="9" type="hex"/>
+ 		<bitfield name="UNK10" pos="10" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x880a" name="RB_RENDER_CONTROL1" usage="rp_blit">
++	<reg32 offset="0x880a" name="RB_PS_INPUT_CNTL" usage="rp_blit">
+ 		<!-- enable bits for various FS sysvalue regs: -->
+ 		<bitfield name="SAMPLEMASK" pos="0" type="boolean"/>
+ 		<bitfield name="POSTDEPTHCOVERAGE" pos="1" type="boolean"/>
+@@ -3567,16 +1388,16 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FOVEATION" pos="8" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x880b" name="RB_FS_OUTPUT_CNTL0" usage="rp_blit">
++	<reg32 offset="0x880b" name="RB_PS_OUTPUT_CNTL" usage="rp_blit">
+ 		<bitfield name="DUAL_COLOR_IN_ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="FRAG_WRITES_Z" pos="1" type="boolean"/>
+ 		<bitfield name="FRAG_WRITES_SAMPMASK" pos="2" type="boolean"/>
+ 		<bitfield name="FRAG_WRITES_STENCILREF" pos="3" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x880c" name="RB_FS_OUTPUT_CNTL1" usage="rp_blit">
++	<reg32 offset="0x880c" name="RB_PS_MRT_CNTL" usage="rp_blit">
+ 		<bitfield name="MRT" low="0" high="3" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0x880d" name="RB_RENDER_COMPONENTS" usage="rp_blit">
++	<reg32 offset="0x880d" name="RB_PS_OUTPUT_MASK" usage="rp_blit">
+ 		<bitfield name="RT0" low="0" high="3"/>
+ 		<bitfield name="RT1" low="4" high="7"/>
+ 		<bitfield name="RT2" low="8" high="11"/>
+@@ -3608,7 +1429,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="SRGB_MRT7" pos="7" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x8810" name="RB_SAMPLE_CNTL" usage="rp_blit">
++	<reg32 offset="0x8810" name="RB_PS_SAMPLEFREQ_CNTL" usage="rp_blit">
+ 		<bitfield name="PER_SAMP_MODE" pos="0" type="boolean"/>
+ 	</reg32>
+ 	<reg32 offset="0x8811" name="RB_UNKNOWN_8811" low="4" high="6" usage="cmd"/>
+@@ -3672,18 +1493,18 @@ to upconvert to 32b float internally?
+ 		<reg32 offset="0x7" name="BASE_GMEM" low="12" high="31" shr="12"/>
+ 	</array>
+ 
+-	<reg32 offset="0x8860" name="RB_BLEND_RED_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8861" name="RB_BLEND_GREEN_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8862" name="RB_BLEND_BLUE_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8863" name="RB_BLEND_ALPHA_F32" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8864" name="RB_ALPHA_CONTROL" usage="cmd">
++	<reg32 offset="0x8860" name="RB_BLEND_CONSTANT_RED_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8861" name="RB_BLEND_CONSTANT_GREEN_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8862" name="RB_BLEND_CONSTANT_BLUE_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8863" name="RB_BLEND_CONSTANT_ALPHA_FP32" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8864" name="RB_ALPHA_TEST_CNTL" usage="cmd">
+ 		<bitfield name="ALPHA_REF" low="0" high="7" type="hex"/>
+ 		<bitfield name="ALPHA_TEST" pos="8" type="boolean"/>
+ 		<bitfield name="ALPHA_TEST_FUNC" low="9" high="11" type="adreno_compare_func"/>
+ 	</reg32>
+ 	<reg32 offset="0x8865" name="RB_BLEND_CNTL" usage="rp_blit">
+ 		<!-- per-mrt enable bit -->
+-		<bitfield name="ENABLE_BLEND" low="0" high="7"/>
++		<bitfield name="BLEND_READS_DEST" low="0" high="7"/>
+ 		<bitfield name="INDEPENDENT_BLEND" pos="8" type="boolean"/>
+ 		<bitfield name="DUAL_COLOR_IN_ENABLE" pos="9" type="boolean"/>
+ 		<bitfield name="ALPHA_TO_COVERAGE" pos="10" type="boolean"/>
+@@ -3726,12 +1547,12 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8873" name="RB_DEPTH_BUFFER_PITCH" low="0" high="13" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0x8874" name="RB_DEPTH_BUFFER_ARRAY_PITCH" low="0" high="27" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg64 offset="0x8875" name="RB_DEPTH_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8877" name="RB_DEPTH_BUFFER_BASE_GMEM" low="12" high="31" shr="12" usage="rp_blit"/>
++	<reg32 offset="0x8877" name="RB_DEPTH_GMEM_BASE" low="12" high="31" shr="12" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x8878" name="RB_Z_BOUNDS_MIN" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x8879" name="RB_Z_BOUNDS_MAX" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8878" name="RB_DEPTH_BOUND_MIN" type="float" usage="rp_blit"/>
++	<reg32 offset="0x8879" name="RB_DEPTH_BOUND_MAX" type="float" usage="rp_blit"/>
+ 	<!-- 0x887a-0x887f invalid -->
+-	<reg32 offset="0x8880" name="RB_STENCIL_CONTROL" usage="rp_blit">
++	<reg32 offset="0x8880" name="RB_STENCIL_CNTL" usage="rp_blit">
+ 		<bitfield name="STENCIL_ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="STENCIL_ENABLE_BF" pos="1" type="boolean"/>
+ 		<!--
+@@ -3753,11 +1574,11 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8115" name="GRAS_SU_STENCIL_CNTL" usage="rp_blit">
+ 		<bitfield name="STENCIL_ENABLE" pos="0" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x8881" name="RB_STENCIL_INFO" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x8881" name="RB_STENCIL_BUFFER_INFO" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="SEPARATE_STENCIL" pos="0" type="boolean"/>
+ 		<bitfield name="UNK1" pos="1" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x8881" name="RB_STENCIL_INFO" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x8881" name="RB_STENCIL_BUFFER_INFO" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SEPARATE_STENCIL" pos="0" type="boolean"/>
+ 		<bitfield name="UNK1" pos="1" type="boolean"/>
+ 		<bitfield name="TILEMODE" low="2" high="3" type="a6xx_tile_mode"/>
+@@ -3765,22 +1586,22 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8882" name="RB_STENCIL_BUFFER_PITCH" low="0" high="11" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0x8883" name="RB_STENCIL_BUFFER_ARRAY_PITCH" low="0" high="23" shr="6" type="uint" usage="rp_blit"/>
+ 	<reg64 offset="0x8884" name="RB_STENCIL_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8886" name="RB_STENCIL_BUFFER_BASE_GMEM" low="12" high="31" shr="12" usage="rp_blit"/>
+-	<reg32 offset="0x8887" name="RB_STENCILREF" usage="rp_blit">
++	<reg32 offset="0x8886" name="RB_STENCIL_GMEM_BASE" low="12" high="31" shr="12" usage="rp_blit"/>
++	<reg32 offset="0x8887" name="RB_STENCIL_REF_CNTL" usage="rp_blit">
+ 		<bitfield name="REF" low="0" high="7"/>
+ 		<bitfield name="BFREF" low="8" high="15"/>
+ 	</reg32>
+-	<reg32 offset="0x8888" name="RB_STENCILMASK" usage="rp_blit">
++	<reg32 offset="0x8888" name="RB_STENCIL_MASK" usage="rp_blit">
+ 		<bitfield name="MASK" low="0" high="7"/>
+ 		<bitfield name="BFMASK" low="8" high="15"/>
+ 	</reg32>
+-	<reg32 offset="0x8889" name="RB_STENCILWRMASK" usage="rp_blit">
++	<reg32 offset="0x8889" name="RB_STENCIL_WRITE_MASK" usage="rp_blit">
+ 		<bitfield name="WRMASK" low="0" high="7"/>
+ 		<bitfield name="BFWRMASK" low="8" high="15"/>
+ 	</reg32>
+ 	<!-- 0x888a-0x888f invalid -->
+ 	<reg32 offset="0x8890" name="RB_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x8891" name="RB_SAMPLE_COUNT_CONTROL" usage="cmd">
++	<reg32 offset="0x8891" name="RB_SAMPLE_COUNTER_CNTL" usage="cmd">
+ 		<bitfield name="DISABLE" pos="0" type="boolean"/>
+ 		<bitfield name="COPY" pos="1" type="boolean"/>
+ 	</reg32>
+@@ -3791,27 +1612,27 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8899" name="RB_UNKNOWN_8899" variants="A7XX-" usage="cmd"/>
+ 	<!-- 0x8899-0x88bf invalid -->
+ 	<!-- clamps depth value for depth test/write -->
+-	<reg32 offset="0x88c0" name="RB_Z_CLAMP_MIN" type="float" usage="rp_blit"/>
+-	<reg32 offset="0x88c1" name="RB_Z_CLAMP_MAX" type="float" usage="rp_blit"/>
++	<reg32 offset="0x88c0" name="RB_VIEWPORT_ZCLAMP_MIN" type="float" usage="rp_blit"/>
++	<reg32 offset="0x88c1" name="RB_VIEWPORT_ZCLAMP_MAX" type="float" usage="rp_blit"/>
+ 	<!-- 0x88c2-0x88cf invalid-->
+-	<reg32 offset="0x88d0" name="RB_UNKNOWN_88D0" usage="rp_blit">
++	<reg32 offset="0x88d0" name="RB_RESOLVE_CNTL_0" usage="rp_blit">
+ 		<bitfield name="UNK0" low="0" high="12"/>
+ 		<bitfield name="UNK16" low="16" high="26"/>
+ 	</reg32>
+-	<reg32 offset="0x88d1" name="RB_BLIT_SCISSOR_TL" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x88d2" name="RB_BLIT_SCISSOR_BR" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x88d1" name="RB_RESOLVE_CNTL_1" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x88d2" name="RB_RESOLVE_CNTL_2" type="a6xx_reg_xy" usage="rp_blit"/>
+ 	<!-- weird to duplicate other regs from same block?? -->
+-	<reg32 offset="0x88d3" name="RB_BIN_CONTROL2" usage="rp_blit">
++	<reg32 offset="0x88d3" name="RB_RESOLVE_CNTL_3" usage="rp_blit">
+ 		<bitfield name="BINW" low="0" high="5" shr="5" type="uint"/>
+ 		<bitfield name="BINH" low="8" high="14" shr="4" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0x88d4" name="RB_WINDOW_OFFSET2" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0x88d5" name="RB_BLIT_GMEM_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0x88d4" name="RB_RESOLVE_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
++	<reg32 offset="0x88d5" name="RB_RESOLVE_GMEM_BUFFER_INFO" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="3" high="4" type="a3xx_msaa_samples"/>
+ 	</reg32>
+-	<reg32 offset="0x88d6" name="RB_BLIT_BASE_GMEM" low="12" high="31" shr="12" usage="rp_blit"/>
++	<reg32 offset="0x88d6" name="RB_RESOLVE_GMEM_BUFFER_BASE" low="12" high="31" shr="12" usage="rp_blit"/>
+ 	<!-- s/DST_FORMAT/DST_INFO/ probably: -->
+-	<reg32 offset="0x88d7" name="RB_BLIT_DST_INFO" usage="rp_blit">
++	<reg32 offset="0x88d7" name="RB_RESOLVE_SYSTEM_BUFFER_INFO" usage="rp_blit">
+ 		<bitfield name="TILE_MODE" low="0" high="1" type="a6xx_tile_mode"/>
+ 		<bitfield name="FLAGS" pos="2" type="boolean"/>
+ 		<bitfield name="SAMPLES" low="3" high="4" type="a3xx_msaa_samples"/>
+@@ -3820,25 +1641,31 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK15" pos="15" type="boolean"/>
+ 		<bitfield name="MUTABLEEN" pos="16" type="boolean" variants="A7XX-"/>
+ 	</reg32>
+-	<reg64 offset="0x88d8" name="RB_BLIT_DST" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x88da" name="RB_BLIT_DST_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x88d8" name="RB_RESOLVE_SYSTEM_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x88da" name="RB_RESOLVE_SYSTEM_BUFFER_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
+ 	<!-- array-pitch is size of layer -->
+-	<reg32 offset="0x88db" name="RB_BLIT_DST_ARRAY_PITCH" low="0" high="28" shr="6" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0x88dc" name="RB_BLIT_FLAG_DST" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x88de" name="RB_BLIT_FLAG_DST_PITCH" usage="rp_blit">
++	<reg32 offset="0x88db" name="RB_RESOLVE_SYSTEM_BUFFER_ARRAY_PITCH" low="0" high="28" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x88dc" name="RB_RESOLVE_SYSTEM_FLAG_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x88de" name="RB_RESOLVE_SYSTEM_FLAG_BUFFER_PITCH" usage="rp_blit">
+ 		<bitfield name="PITCH" low="0" high="10" shr="6" type="uint"/>
+ 		<bitfield name="ARRAY_PITCH" low="11" high="27" shr="7" type="uint"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x88df" name="RB_BLIT_CLEAR_COLOR_DW0" usage="rp_blit"/>
+-	<reg32 offset="0x88e0" name="RB_BLIT_CLEAR_COLOR_DW1" usage="rp_blit"/>
+-	<reg32 offset="0x88e1" name="RB_BLIT_CLEAR_COLOR_DW2" usage="rp_blit"/>
+-	<reg32 offset="0x88e2" name="RB_BLIT_CLEAR_COLOR_DW3" usage="rp_blit"/>
++	<reg32 offset="0x88df" name="RB_RESOLVE_CLEAR_COLOR_DW0" usage="rp_blit"/>
++	<reg32 offset="0x88e0" name="RB_RESOLVE_CLEAR_COLOR_DW1" usage="rp_blit"/>
++	<reg32 offset="0x88e1" name="RB_RESOLVE_CLEAR_COLOR_DW2" usage="rp_blit"/>
++	<reg32 offset="0x88e2" name="RB_RESOLVE_CLEAR_COLOR_DW3" usage="rp_blit"/>
++
++	<enum name="a6xx_blit_event_type">
++		<value value="0x0" name="BLIT_EVENT_STORE"/>
++		<value value="0x1" name="BLIT_EVENT_STORE_AND_CLEAR"/>
++		<value value="0x2" name="BLIT_EVENT_CLEAR"/>
++		<value value="0x3" name="BLIT_EVENT_LOAD"/>
++	</enum>
+ 
+ 	<!-- seems somewhat similar to what we called RB_CLEAR_CNTL on a5xx: -->
+-	<reg32 offset="0x88e3" name="RB_BLIT_INFO" usage="rp_blit">
+-		<bitfield name="UNK0" pos="0" type="boolean"/> <!-- s8 stencil restore/clear?  But also color restore? -->
+-		<bitfield name="GMEM" pos="1" type="boolean"/> <!-- set for restore and clear to gmem? -->
++	<reg32 offset="0x88e3" name="RB_RESOLVE_OPERATION" usage="rp_blit">
++		<bitfield name="TYPE" low="0" high="1" type="a6xx_blit_event_type"/>
+ 		<bitfield name="SAMPLE_0" pos="2" type="boolean"/> <!-- takes sample 0 instead of averaging -->
+ 		<bitfield name="DEPTH" pos="3" type="boolean"/> <!-- z16/z32/z24s8/x24x8 clear or resolve? -->
+ 		<doc>
+@@ -3853,16 +1680,20 @@ to upconvert to 32b float internally?
+ 		<!-- set when this is the last resolve on a650+ -->
+ 		<bitfield name="LAST" low="8" high="9"/>
+ 		<!--
+-			a618 GLES: color render target number being resolved for RM6_RESOLVE, 0x8 for depth, 0x9 for separate stencil.
+-			a618 VK: 0x8 for depth RM6_RESOLVE, 0x9 for separate stencil, 0 otherwise.
+-
+-			We believe this is related to concurrent resolves
++			a618 GLES: color render target number being resolved for CCU_RESOLVE, 0x8 for depth, 0x9 for separate stencil.
++			a618 VK: 0x8 for depth CCU_RESOLVE, 0x9 for separate stencil, 0 otherwise.
++			a7xx VK: 0x8 for depth, 0x9 for separate stencil, 0x0 to 0x7 used for concurrent resolves of color render
++			targets inside a given resolve group.
+ 		 -->
+ 		<bitfield name="BUFFER_ID" low="12" high="15"/>
+ 	</reg32>
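++	<!--
++		A minimal sketch of packing this register for a depth clear, using
++		only the fields defined above; write_reg() is a hypothetical helper,
++		not a real kernel/mesa API:
++
++		uint32_t op = 0;
++		op |= BLIT_EVENT_CLEAR << 0;  /* TYPE, bits 1:0 */
++		op |= 1u << 3;                /* DEPTH */
++		op |= 0x8u << 12;             /* BUFFER_ID: 0x8 = depth, per the note above */
++		write_reg(0x88e3, op);        /* RB_RESOLVE_OPERATION */
++	-->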
+-	<reg32 offset="0x88e4" name="RB_UNKNOWN_88E4" variants="A7XX-" usage="rp_blit">
+-		<!-- Value conditioned based on predicate, changed before blits -->
+-		<bitfield name="UNK0" pos="0" type="boolean"/>
++
++	<enum name="a7xx_blit_clear_mode">
++		<value value="0x0" name="CLEAR_MODE_SYSMEM"/>
++		<value value="0x1" name="CLEAR_MODE_GMEM"/>
++	</enum>
++	<reg32 offset="0x88e4" name="RB_CLEAR_TARGET" variants="A7XX-" usage="rp_blit">
++			<bitfield name="CLEAR_MODE" pos="0" type="a7xx_blit_clear_mode"/>
+ 	</reg32>
+ 
+ 	<enum name="a6xx_ccu_cache_size">
+@@ -3871,7 +1702,7 @@ to upconvert to 32b float internally?
+ 		<value value="0x2" name="CCU_CACHE_SIZE_QUARTER"/>
+ 		<value value="0x3" name="CCU_CACHE_SIZE_EIGHTH"/>
+ 	</enum>
+-	<reg32 offset="0x88e5" name="RB_CCU_CNTL2" variants="A7XX-" usage="cmd">
++	<reg32 offset="0x88e5" name="RB_CCU_CACHE_CNTL" variants="A7XX-" usage="cmd">
+ 		<bitfield name="DEPTH_OFFSET_HI" pos="0" type="hex"/>
+ 		<bitfield name="COLOR_OFFSET_HI" pos="2" type="hex"/>
+ 		<bitfield name="DEPTH_CACHE_SIZE" low="10" high="11" type="a6xx_ccu_cache_size"/>
+@@ -3895,7 +1726,13 @@ to upconvert to 32b float internally?
+ 		<bitfield name="PITCH" low="0" high="10" shr="6" type="uint"/>
+ 		<bitfield name="ARRAY_PITCH" low="11" high="23" shr="7" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0x88f4" name="RB_UNKNOWN_88F4" low="0" high="2"/>
++
++	<reg32 offset="0x88f4" name="RB_VRS_CONFIG" usage="rp_blit">
++		<bitfield name="UNK2" pos="2" type="boolean"/>
++		<bitfield name="PIPELINE_FSR_ENABLE" pos="4" type="boolean"/>
++		<bitfield name="ATTACHMENT_FSR_ENABLE" pos="5" type="boolean"/>
++		<bitfield name="PRIMITIVE_FSR_ENABLE" pos="18" type="boolean"/>
++	</reg32>
+ 	<!-- Connected to VK_EXT_fragment_density_map? -->
+ 	<reg32 offset="0x88f5" name="RB_UNKNOWN_88F5" variants="A7XX-"/>
+ 	<!-- 0x88f6-0x88ff invalid -->
+@@ -3906,7 +1743,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK8" low="8" high="10"/>
+ 		<bitfield name="ARRAY_PITCH" low="11" high="27" shr="7" type="uint"/>
+ 	</reg32>
+-	<array offset="0x8903" name="RB_MRT_FLAG_BUFFER" stride="3" length="8" usage="rp_blit">
++	<array offset="0x8903" name="RB_COLOR_FLAG_BUFFER" stride="3" length="8" usage="rp_blit">
+ 		<reg64 offset="0" name="ADDR" type="waddress" align="64"/>
+ 		<reg32 offset="2" name="PITCH">
+ 			<bitfield name="PITCH" low="0" high="10" shr="6" type="uint"/>
+@@ -3915,10 +1752,10 @@ to upconvert to 32b float internally?
+ 	</array>
+ 	<!-- 0x891b-0x8926 invalid -->
+ 	<doc>
+-		RB_SAMPLE_COUNT_ADDR register is used up to (and including) a730. After that
++		RB_SAMPLE_COUNTER_BASE register is used up to (and including) a730. After that
+ 		the address is specified through CP_EVENT_WRITE7::WRITE_SAMPLE_COUNT.
+ 	</doc>
+-	<reg64 offset="0x8927" name="RB_SAMPLE_COUNT_ADDR" type="waddress" align="16" usage="cmd"/>
++	<reg64 offset="0x8927" name="RB_SAMPLE_COUNTER_BASE" type="waddress" align="16" usage="cmd"/>
+ 	<!-- 0x8929-0x89ff invalid -->
+ 
+ 	<!-- TODO: there are some registers in the 0x8a00-0x8bff range -->
+@@ -3932,10 +1769,10 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8a20" name="RB_UNKNOWN_8A20" variants="A6XX" usage="rp_blit"/>
+ 	<reg32 offset="0x8a30" name="RB_UNKNOWN_8A30" variants="A6XX" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x8c00" name="RB_2D_BLIT_CNTL" type="a6xx_2d_blit_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x8c01" name="RB_2D_UNKNOWN_8C01" low="0" high="31" usage="rp_blit"/>
++	<reg32 offset="0x8c00" name="RB_A2D_BLT_CNTL" type="a6xx_a2d_bit_cntl" usage="rp_blit"/>
++	<reg32 offset="0x8c01" name="RB_A2D_PIXEL_CNTL" low="0" high="31" usage="rp_blit"/>
+ 
+-	<bitset name="a6xx_2d_src_surf_info" inline="yes">
++	<bitset name="a6xx_a2d_src_texture_info" inline="yes">
+ 		<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 		<bitfield name="TILE_MODE" low="8" high="9" type="a6xx_tile_mode"/>
+ 		<bitfield name="COLOR_SWAP" low="10" high="11" type="a3xx_color_swap"/>
+@@ -3954,7 +1791,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MUTABLEEN" pos="29" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_2d_dst_surf_info" inline="yes">
++	<bitset name="a6xx_a2d_dest_buffer_info" inline="yes">
+ 		<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 		<bitfield name="TILE_MODE" low="8" high="9" type="a6xx_tile_mode"/>
+ 		<bitfield name="COLOR_SWAP" low="10" high="11" type="a3xx_color_swap"/>
+@@ -3965,26 +1802,26 @@ to upconvert to 32b float internally?
+ 	</bitset>
+ 
+ 	<!-- 0x8c02-0x8c16 invalid -->
+-	<reg32 offset="0x8c17" name="RB_2D_DST_INFO" type="a6xx_2d_dst_surf_info" usage="rp_blit"/>
+-	<reg64 offset="0x8c18" name="RB_2D_DST" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c1a" name="RB_2D_DST_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
++	<reg32 offset="0x8c17" name="RB_A2D_DEST_BUFFER_INFO" type="a6xx_a2d_dest_buffer_info" usage="rp_blit"/>
++	<reg64 offset="0x8c18" name="RB_A2D_DEST_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c1a" name="RB_A2D_DEST_BUFFER_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
+ 	<!-- this is a guess but seems likely (for NV12/IYUV): -->
+-	<reg64 offset="0x8c1b" name="RB_2D_DST_PLANE1" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c1d" name="RB_2D_DST_PLANE_PITCH" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0x8c1e" name="RB_2D_DST_PLANE2" type="waddress" align="64" usage="rp_blit"/>
++	<reg64 offset="0x8c1b" name="RB_A2D_DEST_BUFFER_BASE_1" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c1d" name="RB_A2D_DEST_BUFFER_PITCH_1" low="0" high="15" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x8c1e" name="RB_A2D_DEST_BUFFER_BASE_2" type="waddress" align="64" usage="rp_blit"/>
+ 
+-	<reg64 offset="0x8c20" name="RB_2D_DST_FLAGS" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c22" name="RB_2D_DST_FLAGS_PITCH" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x8c20" name="RB_A2D_DEST_FLAG_BUFFER_BASE" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c22" name="RB_A2D_DEST_FLAG_BUFFER_PITCH" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
+ 	<!-- this is a guess but seems likely (for NV12 with UBWC): -->
+-	<reg64 offset="0x8c23" name="RB_2D_DST_FLAGS_PLANE" type="waddress" align="64" usage="rp_blit"/>
+-	<reg32 offset="0x8c25" name="RB_2D_DST_FLAGS_PLANE_PITCH" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
++	<reg64 offset="0x8c23" name="RB_A2D_DEST_FLAG_BUFFER_BASE_1" type="waddress" align="64" usage="rp_blit"/>
++	<reg32 offset="0x8c25" name="RB_A2D_DEST_FLAG_BUFFER_PITCH_1" low="0" high="7" shr="6" type="uint" usage="rp_blit"/>
+ 
+ 	<!-- TODO: 0x8c26-0x8c33 are all full 32-bit registers -->
+ 	<!-- unlike a5xx, these are per channel values rather than packed -->
+-	<reg32 offset="0x8c2c" name="RB_2D_SRC_SOLID_C0" usage="rp_blit"/>
+-	<reg32 offset="0x8c2d" name="RB_2D_SRC_SOLID_C1" usage="rp_blit"/>
+-	<reg32 offset="0x8c2e" name="RB_2D_SRC_SOLID_C2" usage="rp_blit"/>
+-	<reg32 offset="0x8c2f" name="RB_2D_SRC_SOLID_C3" usage="rp_blit"/>
++	<reg32 offset="0x8c2c" name="RB_A2D_CLEAR_COLOR_DW0" usage="rp_blit"/>
++	<reg32 offset="0x8c2d" name="RB_A2D_CLEAR_COLOR_DW1" usage="rp_blit"/>
++	<reg32 offset="0x8c2e" name="RB_A2D_CLEAR_COLOR_DW2" usage="rp_blit"/>
++	<reg32 offset="0x8c2f" name="RB_A2D_CLEAR_COLOR_DW3" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x8c34" name="RB_UNKNOWN_8C34" variants="A7XX-" usage="cmd"/>
+ 
+@@ -3996,7 +1833,7 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8e04" name="RB_DBG_ECO_CNTL" usage="cmd"/> <!-- TODO: valid mask 0xfffffeff -->
+ 	<reg32 offset="0x8e05" name="RB_ADDR_MODE_CNTL" pos="0" type="a5xx_address_mode"/>
+ 	<!-- 0x02080000 in GMEM, zero otherwise?  -->
+-	<reg32 offset="0x8e06" name="RB_UNKNOWN_8E06" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0x8e06" name="RB_CCU_DBG_ECO_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+ 	<reg32 offset="0x8e07" name="RB_CCU_CNTL" usage="cmd" variants="A6XX">
+ 		<bitfield name="GMEM_FAST_CLEAR_DISABLE" pos="0" type="boolean"/>
+@@ -4017,10 +1854,21 @@ to upconvert to 32b float internally?
+ 		<bitfield name="COLOR_OFFSET" low="23" high="31" shr="12" type="hex"/>
+ 		<!--TODO: valid mask 0xfffffc1f -->
+ 	</reg32>
++	<enum name="a7xx_concurrent_resolve_mode">
++		<value value="0x0" name="CONCURRENT_RESOLVE_MODE_DISABLED"/>
++		<value value="0x1" name="CONCURRENT_RESOLVE_MODE_1"/>
++		<value value="0x2" name="CONCURRENT_RESOLVE_MODE_2"/>
++	</enum>
++	<enum name="a7xx_concurrent_unresolve_mode">
++		<value value="0x0" name="CONCURRENT_UNRESOLVE_MODE_DISABLED"/>
++		<value value="0x1" name="CONCURRENT_UNRESOLVE_MODE_PARTIAL"/>
++		<value value="0x3" name="CONCURRENT_UNRESOLVE_MODE_FULL"/>
++	</enum>
+ 	<reg32 offset="0x8e07" name="RB_CCU_CNTL" usage="cmd" variants="A7XX-">
+ 		<bitfield name="GMEM_FAST_CLEAR_DISABLE" pos="0" type="boolean"/>
+-		<bitfield name="CONCURRENT_RESOLVE" pos="2" type="boolean"/>
+-		<!-- rest of the bits were moved to RB_CCU_CNTL2 -->
++		<bitfield name="CONCURRENT_RESOLVE_MODE" low="2" high="3" type="a7xx_concurrent_resolve_mode"/>
++		<bitfield name="CONCURRENT_UNRESOLVE_MODE" low="5" high="6" type="a7xx_concurrent_unresolve_mode"/>
++		<!-- rest of the bits were moved to RB_CCU_CACHE_CNTL -->
+ 	</reg32>
+ 	<reg32 offset="0x8e08" name="RB_NC_MODE_CNTL">
+ 		<bitfield name="MODE" pos="0" type="boolean"/>
+@@ -4046,9 +1894,9 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x8e3d" name="RB_RB_SUB_BLOCK_SEL_CNTL_CD"/>
+ 	<!-- 0x8e3e-0x8e4f invalid -->
+ 	<!-- GMEM save/restore for preemption: -->
+-	<reg32 offset="0x8e50" name="RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE" pos="0" type="boolean"/>
++	<reg32 offset="0x8e50" name="RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE" pos="0" type="boolean"/>
+ 	<!-- address for GMEM save/restore? -->
+-	<reg32 offset="0x8e51" name="RB_UNKNOWN_8E51" type="waddress" align="1"/>
++	<reg32 offset="0x8e51" name="RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ADDR" type="waddress" align="1"/>
+ 	<!-- 0x8e53-0x8e7f invalid -->
+ 	<reg32 offset="0x8e79" name="RB_UNKNOWN_8E79" variants="A7XX-" usage="cmd"/>
+ 	<!-- 0x8e80-0x8e83 are valid -->
+@@ -4069,38 +1917,38 @@ to upconvert to 32b float internally?
+ 		<bitfield name="CLIP_DIST_03_LOC" low="8" high="15" type="uint"/>
+ 		<bitfield name="CLIP_DIST_47_LOC" low="16" high="23" type="uint"/>
+ 	</bitset>
+-	<reg32 offset="0x9101" name="VPC_VS_CLIP_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9102" name="VPC_GS_CLIP_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9103" name="VPC_DS_CLIP_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9101" name="VPC_VS_CLIP_CULL_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9102" name="VPC_GS_CLIP_CULL_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9103" name="VPC_DS_CLIP_CULL_CNTL" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9311" name="VPC_VS_CLIP_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9312" name="VPC_GS_CLIP_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9313" name="VPC_DS_CLIP_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9311" name="VPC_VS_CLIP_CULL_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9312" name="VPC_GS_CLIP_CULL_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9313" name="VPC_DS_CLIP_CULL_CNTL_V2" type="a6xx_vpc_xs_clip_cntl" usage="rp_blit"/>
+ 
+-	<bitset name="a6xx_vpc_xs_layer_cntl" inline="yes">
++	<bitset name="a6xx_vpc_xs_siv_cntl" inline="yes">
+ 		<bitfield name="LAYERLOC" low="0" high="7" type="uint"/>
+ 		<bitfield name="VIEWLOC" low="8" high="15" type="uint"/>
+ 		<bitfield name="SHADINGRATELOC" low="16" high="23" type="uint" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x9104" name="VPC_VS_LAYER_CNTL" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9105" name="VPC_GS_LAYER_CNTL" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9106" name="VPC_DS_LAYER_CNTL" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9104" name="VPC_VS_SIV_CNTL" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9105" name="VPC_GS_SIV_CNTL" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9106" name="VPC_DS_SIV_CNTL" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9314" name="VPC_VS_LAYER_CNTL_V2" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9315" name="VPC_GS_LAYER_CNTL_V2" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9316" name="VPC_DS_LAYER_CNTL_V2" type="a6xx_vpc_xs_layer_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9314" name="VPC_VS_SIV_CNTL_V2" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9315" name="VPC_GS_SIV_CNTL_V2" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9316" name="VPC_DS_SIV_CNTL_V2" type="a6xx_vpc_xs_siv_cntl" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x9107" name="VPC_UNKNOWN_9107" variants="A6XX" usage="rp_blit">
+-		<!-- this mirrors PC_RASTER_CNTL::DISCARD, although it seems it's unused -->
++		<!-- this mirrors VPC_RAST_STREAM_CNTL::DISCARD, although it seems it's unused -->
+ 		<bitfield name="RASTER_DISCARD" pos="0" type="boolean"/>
+ 		<bitfield name="UNK2" pos="2" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x9108" name="VPC_POLYGON_MODE" usage="rp_blit">
++	<reg32 offset="0x9108" name="VPC_RAST_CNTL" usage="rp_blit">
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+ 
+-	<bitset name="a6xx_primitive_cntl_0" inline="yes">
++	<bitset name="a6xx_pc_cntl" inline="yes">
+ 		<bitfield name="PRIMITIVE_RESTART" pos="0" type="boolean"/>
+ 		<bitfield name="PROVOKING_VTX_LAST" pos="1" type="boolean"/>
+ 		<bitfield name="D3D_VERTEX_ORDERING" pos="2" type="boolean">
+@@ -4113,7 +1961,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK3" pos="3" type="boolean"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_primitive_cntl_5" inline="yes">
++	<bitset name="a6xx_gs_param_0" inline="yes">
+ 		<doc>
+ 		  geometry shader
+ 		</doc>
+@@ -4125,7 +1973,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="UNK18" pos="18"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_multiview_cntl" inline="yes">
++	<bitset name="a6xx_stereo_rendering_cntl" inline="yes">
+ 		<bitfield name="ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="DISABLEMULTIPOS" pos="1" type="boolean">
+ 			<doc>
+@@ -4139,10 +1987,10 @@ to upconvert to 32b float internally?
+ 		<bitfield name="VIEWS" low="2" high="6" type="uint"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x9109" name="VPC_PRIMITIVE_CNTL_0" type="a6xx_primitive_cntl_0" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0x910a" name="VPC_PRIMITIVE_CNTL_5" type="a6xx_primitive_cntl_5" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0x910b" name="VPC_MULTIVIEW_MASK" type="hex" low="0" high="15" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0x910c" name="VPC_MULTIVIEW_CNTL" type="a6xx_multiview_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x9109" name="VPC_PC_CNTL" type="a6xx_pc_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x910a" name="VPC_GS_PARAM_0" type="a6xx_gs_param_0" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x910b" name="VPC_STEREO_RENDERING_VIEWMASK" type="hex" low="0" high="15" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0x910c" name="VPC_STEREO_RENDERING_CNTL" type="a6xx_stereo_rendering_cntl" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<enum name="a6xx_varying_interp_mode">
+ 		<value value="0" name="INTERP_SMOOTH"/>
+@@ -4159,11 +2007,11 @@ to upconvert to 32b float internally?
+ 	</enum>
+ 
+ 	<!-- 0x9109-0x91ff invalid -->
+-	<array offset="0x9200" name="VPC_VARYING_INTERP" stride="1" length="8" usage="rp_blit">
++	<array offset="0x9200" name="VPC_VARYING_INTERP_MODE" stride="1" length="8" usage="rp_blit">
+ 		<doc>Packed array of a6xx_varying_interp_mode</doc>
+ 		<reg32 offset="0x0" name="MODE"/>
+ 	</array>
+-	<array offset="0x9208" name="VPC_VARYING_PS_REPL" stride="1" length="8" usage="rp_blit">
++	<array offset="0x9208" name="VPC_VARYING_REPLACE_MODE_0" stride="1" length="8" usage="rp_blit">
+ 		<doc>Packed array of a6xx_varying_ps_repl_mode</doc>
+ 		<reg32 offset="0x0" name="MODE"/>
+ 	</array>
+@@ -4172,12 +2020,12 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0x9210" name="VPC_UNKNOWN_9210" low="0" high="31" variants="A6XX" usage="cmd"/>
+ 	<reg32 offset="0x9211" name="VPC_UNKNOWN_9211" low="0" high="31" variants="A6XX" usage="cmd"/>
+ 
+-	<array offset="0x9212" name="VPC_VAR" stride="1" length="4" usage="rp_blit">
++	<array offset="0x9212" name="VPC_VARYING_LM_TRANSFER_CNTL_0" stride="1" length="4" usage="rp_blit">
+ 		<!-- one bit per varying component: -->
+ 		<reg32 offset="0" name="DISABLE"/>
+ 	</array>
+ 
+-	<reg32 offset="0x9216" name="VPC_SO_CNTL" usage="rp_blit">
++	<reg32 offset="0x9216" name="VPC_SO_MAPPING_WPTR" usage="rp_blit">
+ 		<!--
+ 			Choose which DWORD to write to. There is an array of
+ 			(4 * 64) DWORD's, dumped in the devcoredump at
+@@ -4198,7 +2046,7 @@ to upconvert to 32b float internally?
+ 			When EmitStreamVertex(N) happens, the HW goes to DWORD
+ 			64 * N and then "executes" the next 64 DWORD's.
+ 
+-			This field is auto-incremented when VPC_SO_PROG is
++			This field is auto-incremented when VPC_SO_MAPPING_PORT is
+ 			written to.
+ 		-->
+ 		<bitfield name="ADDR" low="0" high="7" type="hex"/>
+@@ -4206,7 +2054,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="RESET" pos="16" type="boolean"/>
+ 	</reg32>
+ 	<!-- special register, write multiple times to load SO program (not readable) -->
+-	<reg32 offset="0x9217" name="VPC_SO_PROG" usage="rp_blit">
++	<reg32 offset="0x9217" name="VPC_SO_MAPPING_PORT" usage="rp_blit">
+ 		<bitfield name="A_BUF" low="0" high="1" type="uint"/>
+ 		<bitfield name="A_OFF" low="2" high="10" shr="2" type="uint"/>
+ 		<bitfield name="A_EN" pos="11" type="boolean"/>
+@@ -4215,7 +2063,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="B_EN" pos="23" type="boolean"/>
+ 	</reg32>
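++	<!--
++		A sketch of loading the SO mapping program through the WPTR/PORT pair
++		above: point the write pointer at the 64-DWORD block for stream N,
++		then stream in packed A_*/B_* entries, relying on the auto-increment.
++		write_reg() and the prog[] encoding are hypothetical:
++
++		write_reg(0x9216, 64 * N);          /* VPC_SO_MAPPING_WPTR.ADDR */
++		for (i = 0; i < prog_len; i++)
++			write_reg(0x9217, prog[i]);  /* VPC_SO_MAPPING_PORT, ADDR++ */
++	-->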
+ 
+-	<reg64 offset="0x9218" name="VPC_SO_STREAM_COUNTS" type="waddress" align="32" usage="cmd"/>
++	<reg64 offset="0x9218" name="VPC_SO_QUERY_BASE" type="waddress" align="32" usage="cmd"/>
+ 
+ 	<array offset="0x921a" name="VPC_SO" stride="7" length="4" usage="cmd">
+ 		<reg64 offset="0" name="BUFFER_BASE" type="waddress" align="32"/>
+@@ -4225,14 +2073,14 @@ to upconvert to 32b float internally?
+ 		<reg64 offset="5" name="FLUSH_BASE" type="waddress" align="32"/>
+ 	</array>
+ 
+-	<reg32 offset="0x9236" name="VPC_POINT_COORD_INVERT" usage="cmd">
++	<reg32 offset="0x9236" name="VPC_REPLACE_MODE_CNTL" usage="cmd">
+ 		<bitfield name="INVERT" pos="0" type="boolean"/>
+ 	</reg32>
+ 	<!-- 0x9237-0x92ff invalid -->
+ 	<!-- always 0x0 ? -->
+ 	<reg32 offset="0x9300" name="VPC_UNKNOWN_9300" low="0" high="2" usage="cmd"/>
+ 
+-	<bitset name="a6xx_vpc_xs_pack" inline="yes">
++	<bitset name="a6xx_vpc_xs_cntl" inline="yes">
+ 		<doc>
+ 			num of varyings plus four for gl_Position (plus one if gl_PointSize)
+ 			plus # of transform-feedback (streamout) varyings if using the
+@@ -4249,11 +2097,11 @@ to upconvert to 32b float internally?
+ 			</doc>
+ 		</bitfield>
+ 	</bitset>
+-	<reg32 offset="0x9301" name="VPC_VS_PACK" type="a6xx_vpc_xs_pack" usage="rp_blit"/>
+-	<reg32 offset="0x9302" name="VPC_GS_PACK" type="a6xx_vpc_xs_pack" usage="rp_blit"/>
+-	<reg32 offset="0x9303" name="VPC_DS_PACK" type="a6xx_vpc_xs_pack" usage="rp_blit"/>
++	<reg32 offset="0x9301" name="VPC_VS_CNTL" type="a6xx_vpc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9302" name="VPC_GS_CNTL" type="a6xx_vpc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9303" name="VPC_DS_CNTL" type="a6xx_vpc_xs_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9304" name="VPC_CNTL_0" usage="rp_blit">
++	<reg32 offset="0x9304" name="VPC_PS_CNTL" usage="rp_blit">
+ 		<bitfield name="NUMNONPOSVAR" low="0" high="7" type="uint"/>
+ 		<!-- for fixed-function (i.e. no GS) gl_PrimitiveID in FS -->
+ 		<bitfield name="PRIMIDLOC" low="8" high="15" type="uint"/>
+@@ -4272,7 +2120,7 @@ to upconvert to 32b float internally?
+ 		</bitfield>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9305" name="VPC_SO_STREAM_CNTL" usage="rp_blit">
++	<reg32 offset="0x9305" name="VPC_SO_CNTL" usage="rp_blit">
+ 		<!--
+ 		It's offset by 1, and 0 means "disabled"
+ 		-->
+@@ -4282,19 +2130,19 @@ to upconvert to 32b float internally?
+ 		<bitfield name="BUF3_STREAM" low="9" high="11" type="uint"/>
+ 		<bitfield name="STREAM_ENABLE" low="15" high="18" type="hex"/>
+ 	</reg32>
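++	<!--
++		Example of the offset-by-1 encoding: routing SO buffer 1 to vertex
++		stream 2 means BUF1_STREAM = 3, while leaving a field at 0 disables
++		that buffer.
++	-->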
+-	<reg32 offset="0x9306" name="VPC_SO_DISABLE" usage="rp_blit">
++	<reg32 offset="0x9306" name="VPC_SO_OVERRIDE" usage="rp_blit">
+ 		<bitfield name="DISABLE" pos="0" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x9307" name="VPC_POLYGON_MODE2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9307" name="VPC_PS_RAST_CNTL" variants="A6XX-" usage="rp_blit"> <!-- A702 + A7xx -->
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+-	<reg32 offset="0x9308" name="VPC_ATTR_BUF_SIZE_GMEM" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9308" name="VPC_ATTR_BUF_GMEM_SIZE" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SIZE_GMEM" low="0" high="31"/>
+ 	</reg32>
+-	<reg32 offset="0x9309" name="VPC_ATTR_BUF_BASE_GMEM" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9309" name="VPC_ATTR_BUF_GMEM_BASE" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="BASE_GMEM" low="0" high="31"/>
+ 	</reg32>
+-	<reg32 offset="0x9b09" name="PC_ATTR_BUF_SIZE_GMEM" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9b09" name="PC_ATTR_BUF_GMEM_SIZE" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SIZE_GMEM" low="0" high="31"/>
+ 	</reg32>
+ 
+@@ -4311,15 +2159,15 @@ to upconvert to 32b float internally?
+ 	<!-- TODO: regs from 0x9624-0x963a -->
+ 	<!-- 0x963b-0x97ff invalid -->
+ 
+-	<reg32 offset="0x9800" name="PC_TESS_NUM_VERTEX" low="0" high="5" type="uint" usage="rp_blit"/>
++	<reg32 offset="0x9800" name="PC_HS_PARAM_0" low="0" high="5" type="uint" usage="rp_blit"/>
+ 
+ 	<!-- always 0x0 ? -->
+-	<reg32 offset="0x9801" name="PC_HS_INPUT_SIZE" usage="rp_blit">
++	<reg32 offset="0x9801" name="PC_HS_PARAM_1" usage="rp_blit">
+ 		<bitfield name="SIZE" low="0" high="10" type="uint"/>
+ 		<bitfield name="UNK13" pos="13"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9802" name="PC_TESS_CNTL" usage="rp_blit">
++	<reg32 offset="0x9802" name="PC_DS_PARAM" usage="rp_blit">
+ 		<bitfield name="SPACING" low="0" high="1" type="a6xx_tess_spacing"/>
+ 		<bitfield name="OUTPUT" low="2" high="3" type="a6xx_tess_output"/>
+ 	</reg32>
+@@ -4334,7 +2182,7 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 
+ 	<!-- New in a6xx gen3+ -->
+-	<reg32 offset="0x9808" name="PC_SO_STREAM_CNTL" usage="rp_blit">
++	<reg32 offset="0x9808" name="PC_DGEN_SO_CNTL" usage="rp_blit">
+ 		<bitfield name="STREAM_ENABLE" low="15" high="18" type="hex"/>
+ 	</reg32>
+ 
+@@ -4344,15 +2192,15 @@ to upconvert to 32b float internally?
+ 	<!-- 0x980b-0x983f invalid -->
+ 
+ 	<!-- 0x9840 - 0x9842 are not readable -->
+-	<reg32 offset="0x9840" name="PC_DRAW_CMD">
++	<reg32 offset="0x9840" name="PC_DRAW_INITIATOR">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9841" name="PC_DISPATCH_CMD">
++	<reg32 offset="0x9841" name="PC_KERNEL_INITIATOR">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9842" name="PC_EVENT_CMD">
++	<reg32 offset="0x9842" name="PC_EVENT_INITIATOR">
+ 		<!-- I think only the low bit is actually used? -->
+ 		<bitfield name="STATE_ID" low="16" high="23"/>
+ 		<bitfield name="EVENT" low="0" high="6" type="vgt_event_type"/>
+@@ -4367,27 +2215,27 @@ to upconvert to 32b float internally?
+ 
+ 	<!-- 0x9843-0x997f invalid -->
+ 
+-	<reg32 offset="0x9981" name="PC_POLYGON_MODE" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x9981" name="PC_DGEN_RAST_CNTL" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+-	<reg32 offset="0x9809" name="PC_POLYGON_MODE" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9809" name="PC_DGEN_RAST_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="MODE" low="0" high="1" type="a6xx_polygon_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9980" name="PC_RASTER_CNTL" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0x9980" name="VPC_RAST_STREAM_CNTL" variants="A6XX" usage="rp_blit">
+ 		<!-- which stream to send to GRAS -->
+ 		<bitfield name="STREAM" low="0" high="1" type="uint"/>
+ 		<!-- discard primitives before rasterization -->
+ 		<bitfield name="DISCARD" pos="2" type="boolean"/>
+ 	</reg32>
+-	<!-- VPC_RASTER_CNTL -->
+-	<reg32 offset="0x9107" name="PC_RASTER_CNTL" variants="A7XX-" usage="rp_blit">
++	<!-- VPC_RAST_STREAM_CNTL -->
++	<reg32 offset="0x9107" name="VPC_RAST_STREAM_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<!-- which stream to send to GRAS -->
+ 		<bitfield name="STREAM" low="0" high="1" type="uint"/>
+ 		<!-- discard primitives before rasterization -->
+ 		<bitfield name="DISCARD" pos="2" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0x9317" name="PC_RASTER_CNTL_V2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0x9317" name="VPC_RAST_STREAM_CNTL_V2" variants="A7XX-" usage="rp_blit">
+ 		<!-- which stream to send to GRAS -->
+ 		<bitfield name="STREAM" low="0" high="1" type="uint"/>
+ 		<!-- discard primitives before rasterization -->
+@@ -4397,17 +2245,17 @@ to upconvert to 32b float internally?
+ 	<!-- Both are a750+.
+ 	     Probably needed to correctly overlap execution of several draws.
+ 	-->
+-	<reg32 offset="0x9885" name="PC_TESS_PARAM_SIZE" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0x9885" name="PC_HS_BUFFER_SIZE" variants="A7XX-" usage="cmd"/>
+ 	<!-- Blob adds a bit more space {0x10, 0x20, 0x30, 0x40} bytes, but the meaning of
+ 	     this additional space is not known.
+ 	-->
+-	<reg32 offset="0x9886" name="PC_TESS_FACTOR_SIZE" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0x9886" name="PC_TF_BUFFER_SIZE" variants="A7XX-" usage="cmd"/>
+ 
+ 	<!-- 0x9982-0x9aff invalid -->
+ 
+-	<reg32 offset="0x9b00" name="PC_PRIMITIVE_CNTL_0" type="a6xx_primitive_cntl_0" usage="rp_blit"/>
++	<reg32 offset="0x9b00" name="PC_CNTL" type="a6xx_pc_cntl" usage="rp_blit"/>
+ 
+-	<bitset name="a6xx_xs_out_cntl" inline="yes">
++	<bitset name="a6xx_pc_xs_cntl" inline="yes">
+ 		<doc>
+ 			num of varyings plus four for gl_Position (plus one if gl_PointSize)
+ 			plus # of transform-feedback (streamout) varyings if using the
+@@ -4417,19 +2265,19 @@ to upconvert to 32b float internally?
+ 		<bitfield name="PSIZE" pos="8" type="boolean"/>
+ 		<bitfield name="LAYER" pos="9" type="boolean"/>
+ 		<bitfield name="VIEW" pos="10" type="boolean"/>
+-		<!-- note: PC_VS_OUT_CNTL doesn't have the PRIMITIVE_ID bit -->
++		<!-- note: PC_VS_CNTL doesn't have the PRIMITIVE_ID bit -->
+ 		<bitfield name="PRIMITIVE_ID" pos="11" type="boolean"/>
+ 		<bitfield name="CLIP_MASK" low="16" high="23" type="uint"/>
+ 		<bitfield name="SHADINGRATE" pos="24" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0x9b01" name="PC_VS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9b02" name="PC_GS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b01" name="PC_VS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b02" name="PC_GS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
+ 	<!-- since HS can't output anything, only PRIMITIVE_ID is valid -->
+-	<reg32 offset="0x9b03" name="PC_HS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
+-	<reg32 offset="0x9b04" name="PC_DS_OUT_CNTL" type="a6xx_xs_out_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b03" name="PC_HS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b04" name="PC_DS_CNTL" type="a6xx_pc_xs_cntl" usage="rp_blit"/>
+ 
+-	<reg32 offset="0x9b05" name="PC_PRIMITIVE_CNTL_5" type="a6xx_primitive_cntl_5" usage="rp_blit"/>
++	<reg32 offset="0x9b05" name="PC_GS_PARAM_0" type="a6xx_gs_param_0" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0x9b06" name="PC_PRIMITIVE_CNTL_6" variants="A6XX" usage="rp_blit">
+ 		<doc>
+@@ -4438,9 +2286,9 @@ to upconvert to 32b float internally?
+ 		<bitfield name="STRIDE_IN_VPC" low="0" high="10" type="uint"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0x9b07" name="PC_MULTIVIEW_CNTL" type="a6xx_multiview_cntl" usage="rp_blit"/>
++	<reg32 offset="0x9b07" name="PC_STEREO_RENDERING_CNTL" type="a6xx_stereo_rendering_cntl" usage="rp_blit"/>
+ 	<!-- mask of enabled views, doesn't exist on A630 -->
+-	<reg32 offset="0x9b08" name="PC_MULTIVIEW_MASK" type="hex" low="0" high="15" usage="rp_blit"/>
++	<reg32 offset="0x9b08" name="PC_STEREO_RENDERING_VIEWMASK" type="hex" low="0" high="15" usage="rp_blit"/>
+ 	<!-- 0x9b09-0x9bff invalid -->
+ 	<reg32 offset="0x9c00" name="PC_2D_EVENT_CMD">
+ 		<!-- special register (but note first 8 bits can be written/read) -->
+@@ -4451,31 +2299,31 @@ to upconvert to 32b float internally?
+ 	<!-- TODO: 0x9e00-0xa000 range incomplete -->
+ 	<reg32 offset="0x9e00" name="PC_DBG_ECO_CNTL"/>
+ 	<reg32 offset="0x9e01" name="PC_ADDR_MODE_CNTL" type="a5xx_address_mode"/>
+-	<reg64 offset="0x9e04" name="PC_DRAW_INDX_BASE"/>
+-	<reg32 offset="0x9e06" name="PC_DRAW_FIRST_INDX" type="uint"/>
+-	<reg32 offset="0x9e07" name="PC_DRAW_MAX_INDICES" type="uint"/>
+-	<reg64 offset="0x9e08" name="PC_TESSFACTOR_ADDR" variants="A6XX" type="waddress" align="32" usage="cmd"/>
+-	<reg64 offset="0x9810" name="PC_TESSFACTOR_ADDR" variants="A7XX-" type="waddress" align="32" usage="cmd"/>
++	<reg64 offset="0x9e04" name="PC_DMA_BASE"/>
++	<reg32 offset="0x9e06" name="PC_DMA_OFFSET" type="uint"/>
++	<reg32 offset="0x9e07" name="PC_DMA_SIZE" type="uint"/>
++	<reg64 offset="0x9e08" name="PC_TESS_BASE" variants="A6XX" type="waddress" align="32" usage="cmd"/>
++	<reg64 offset="0x9810" name="PC_TESS_BASE" variants="A7XX-" type="waddress" align="32" usage="cmd"/>
+ 
+-	<reg32 offset="0x9e0b" name="PC_DRAW_INITIATOR" type="vgt_draw_initiator_a4xx">
++	<reg32 offset="0x9e0b" name="PC_DRAWCALL_CNTL" type="vgt_draw_initiator_a4xx">
+ 		<doc>
+ 			Possibly not really "initiating" the draw but the layout is similar
+ 			to VGT_DRAW_INITIATOR on older gens
+ 		</doc>
+ 	</reg32>
+-	<reg32 offset="0x9e0c" name="PC_DRAW_NUM_INSTANCES" type="uint"/>
+-	<reg32 offset="0x9e0d" name="PC_DRAW_NUM_INDICES" type="uint"/>
++	<reg32 offset="0x9e0c" name="PC_DRAWCALL_INSTANCE_NUM" type="uint"/>
++	<reg32 offset="0x9e0d" name="PC_DRAWCALL_SIZE" type="uint"/>
+ 
+ 	<!-- These match the contents of CP_SET_BIN_DATA (not written directly) -->
+-	<reg32 offset="0x9e11" name="PC_VSTREAM_CONTROL">
++	<reg32 offset="0x9e11" name="PC_VIS_STREAM_CNTL">
+ 		<bitfield name="UNK0" low="0" high="15"/>
+ 		<bitfield name="VSC_SIZE" low="16" high="21" type="uint"/>
+ 		<bitfield name="VSC_N" low="22" high="26" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0x9e12" name="PC_BIN_PRIM_STRM" type="waddress" align="32"/>
+-	<reg64 offset="0x9e14" name="PC_BIN_DRAW_STRM" type="waddress" align="32"/>
++	<reg64 offset="0x9e12" name="PC_PVIS_STREAM_BIN_BASE" type="waddress" align="32"/>
++	<reg64 offset="0x9e14" name="PC_DVIS_STREAM_BIN_BASE" type="waddress" align="32"/>
+ 
+-	<reg32 offset="0x9e1c" name="PC_VISIBILITY_OVERRIDE">
++	<reg32 offset="0x9e1c" name="PC_DRAWCALL_CNTL_OVERRIDE">
+ 		<doc>Written by CP_SET_VISIBILITY_OVERRIDE handler</doc>
+ 		<bitfield name="OVERRIDE" pos="0" type="boolean"/>
+ 	</reg32>
+@@ -4488,18 +2336,18 @@ to upconvert to 32b float internally?
+ 	<!-- always 0x0 -->
+ 	<reg32 offset="0x9e72" name="PC_UNKNOWN_9E72" usage="cmd"/>
+ 
+-	<reg32 offset="0xa000" name="VFD_CONTROL_0" usage="rp_blit">
++	<reg32 offset="0xa000" name="VFD_CNTL_0" usage="rp_blit">
+ 		<bitfield name="FETCH_CNT" low="0" high="5" type="uint"/>
+ 		<bitfield name="DECODE_CNT" low="8" high="13" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa001" name="VFD_CONTROL_1" usage="rp_blit">
++	<reg32 offset="0xa001" name="VFD_CNTL_1" usage="rp_blit">
+ 		<bitfield name="REGID4VTX" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="REGID4INST" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="REGID4PRIMID" low="16" high="23" type="a3xx_regid"/>
+ 		<!-- only used for VS in non-multi-position-output case -->
+ 		<bitfield name="REGID4VIEWID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa002" name="VFD_CONTROL_2" usage="rp_blit">
++	<reg32 offset="0xa002" name="VFD_CNTL_2" usage="rp_blit">
+ 		<bitfield name="REGID_HSRELPATCHID" low="0" high="7" type="a3xx_regid">
+ 			<doc>
+ 				This is the ID of the current patch within the
+@@ -4512,32 +2360,32 @@ to upconvert to 32b float internally?
+ 		</bitfield>
+ 		<bitfield name="REGID_INVOCATIONID" low="8" high="15" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa003" name="VFD_CONTROL_3" usage="rp_blit">
++	<reg32 offset="0xa003" name="VFD_CNTL_3" usage="rp_blit">
+ 		<bitfield name="REGID_DSPRIMID" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="REGID_DSRELPATCHID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="REGID_TESSX" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="REGID_TESSY" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa004" name="VFD_CONTROL_4" usage="rp_blit">
++	<reg32 offset="0xa004" name="VFD_CNTL_4" usage="rp_blit">
+ 		<bitfield name="UNK0" low="0" high="7" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa005" name="VFD_CONTROL_5" usage="rp_blit">
++	<reg32 offset="0xa005" name="VFD_CNTL_5" usage="rp_blit">
+ 		<bitfield name="REGID_GSHEADER" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="UNK8" low="8" high="15" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa006" name="VFD_CONTROL_6" usage="rp_blit">
++	<reg32 offset="0xa006" name="VFD_CNTL_6" usage="rp_blit">
+ 		<!--
+ 			True if gl_PrimitiveID is read via the FS
+ 		-->
+ 		<bitfield name="PRIMID4PSEN" pos="0" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xa007" name="VFD_MODE_CNTL" usage="cmd">
++	<reg32 offset="0xa007" name="VFD_RENDER_MODE" usage="cmd">
+ 		<bitfield name="RENDER_MODE" low="0" high="2" type="a6xx_render_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xa008" name="VFD_MULTIVIEW_CNTL" type="a6xx_multiview_cntl" usage="rp_blit"/>
+-	<reg32 offset="0xa009" name="VFD_ADD_OFFSET" usage="cmd">
++	<reg32 offset="0xa008" name="VFD_STEREO_RENDERING_CNTL" type="a6xx_stereo_rendering_cntl" usage="rp_blit"/>
++	<reg32 offset="0xa009" name="VFD_MODE_CNTL" usage="cmd">
+ 		<!-- add VFD_INDEX_OFFSET to REGID4VTX -->
+ 		<bitfield name="VERTEX" pos="0" type="boolean"/>
+ 		<!-- add VFD_INSTANCE_START_OFFSET to REGID4INST -->
+@@ -4546,14 +2394,14 @@ to upconvert to 32b float internally?
+ 
+ 	<reg32 offset="0xa00e" name="VFD_INDEX_OFFSET" usage="rp_blit"/>
+ 	<reg32 offset="0xa00f" name="VFD_INSTANCE_START_OFFSET" usage="rp_blit"/>
+-	<array offset="0xa010" name="VFD_FETCH" stride="4" length="32" usage="rp_blit">
++	<array offset="0xa010" name="VFD_VERTEX_BUFFER" stride="4" length="32" usage="rp_blit">
+ 		<reg64 offset="0x0" name="BASE" type="address" align="1"/>
+ 		<reg32 offset="0x2" name="SIZE" type="uint"/>
+ 		<reg32 offset="0x3" name="STRIDE" low="0" high="11" type="uint"/>
+ 	</array>
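++	<!--
++		A sketch of binding one vertex buffer slot, assuming the dword layout
++		above (stride-4 slots starting at 0xa010); write_reg()/write_reg64()
++		are hypothetical helpers:
++
++		uint32_t r = 0xa010 + 4 * idx;  /* VFD_VERTEX_BUFFER[idx] */
++		write_reg64(r + 0, bo_iova);    /* BASE */
++		write_reg(r + 2, bo_size);      /* SIZE */
++		write_reg(r + 3, stride);       /* STRIDE, bits 11:0 */
++	-->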
+-	<array offset="0xa090" name="VFD_DECODE" stride="2" length="32" usage="rp_blit">
++	<array offset="0xa090" name="VFD_FETCH_INSTR" stride="2" length="32" usage="rp_blit">
+ 		<reg32 offset="0x0" name="INSTR">
+-			<!-- IDX and byte OFFSET into VFD_FETCH -->
++			<!-- IDX and byte OFFSET into VFD_VERTEX_BUFFER -->
+ 			<bitfield name="IDX" low="0" high="4" type="uint"/>
+ 			<bitfield name="OFFSET" low="5" high="16"/>
+ 			<bitfield name="INSTANCED" pos="17" type="boolean"/>
+@@ -4573,7 +2421,7 @@ to upconvert to 32b float internally?
+ 
+ 	<reg32 offset="0xa0f8" name="VFD_POWER_CNTL" low="0" high="2" usage="rp_blit"/>
+ 
+-	<reg32 offset="0xa600" name="VFD_UNKNOWN_A600" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa600" name="VFD_DBG_ECO_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+ 	<reg32 offset="0xa601" name="VFD_ADDR_MODE_CNTL" type="a5xx_address_mode"/>
+ 	<array offset="0xa610" name="VFD_PERFCTR_VFD_SEL" stride="1" length="8" variants="A6XX"/>
+@@ -4588,7 +2436,7 @@ to upconvert to 32b float internally?
+ 		<value value="1" name="THREAD128"/>
+ 	</enum>
+ 
+-	<bitset name="a6xx_sp_xs_ctrl_reg0" inline="yes">
++	<bitset name="a6xx_sp_xs_cntl_0" inline="yes">
+ 		<!-- if set to SINGLE, only use 1 concurrent wave on each SP -->
+ 		<bitfield name="THREADMODE" pos="0" type="a3xx_threadmode"/>
+ 		<!--
+@@ -4620,7 +2468,7 @@ to upconvert to 32b float internally?
+ 		-->
+ 		<bitfield name="BINDLESS_TEX" pos="0" type="boolean"/>
+ 		<bitfield name="BINDLESS_SAMP" pos="1" type="boolean"/>
+-		<bitfield name="BINDLESS_IBO" pos="2" type="boolean"/>
++		<bitfield name="BINDLESS_UAV" pos="2" type="boolean"/>
+ 		<bitfield name="BINDLESS_UBO" pos="3" type="boolean"/>
+ 
+ 		<bitfield name="ENABLED" pos="8" type="boolean"/>
+@@ -4630,17 +2478,17 @@ to upconvert to 32b float internally?
+ 		 -->
+ 		<bitfield name="NTEX" low="9" high="16" type="uint"/>
+ 		<bitfield name="NSAMP" low="17" high="21" type="uint"/>
+-		<bitfield name="NIBO" low="22" high="28" type="uint"/>
++		<bitfield name="NUAV" low="22" high="28" type="uint"/>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_sp_xs_prim_cntl" inline="yes">
++	<bitset name="a6xx_sp_xs_output_cntl" inline="yes">
+ 		<!-- # of VS outputs including pos/psize -->
+ 		<bitfield name="OUT" low="0" high="5" type="uint"/>
+ 		<!-- FLAGS_REGID only for GS -->
+ 		<bitfield name="FLAGS_REGID" low="6" high="13" type="a3xx_regid"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xa800" name="SP_VS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa800" name="SP_VS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!--
+ 		This field actually controls all geometry stages. TCS, TES, and
+ 		GS must have the same mergedregs setting as VS.
+@@ -4665,10 +2513,10 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ 	<!-- bitmask of true/false conditions for VS brac.N instructions,
+ 	     bit N corresponds to brac.N -->
+-	<reg32 offset="0xa801" name="SP_VS_BRANCH_COND" type="hex"/>
++	<reg32 offset="0xa801" name="SP_VS_BOOLEAN_CF_MASK" type="hex"/>
+ 	<!-- # of VS outputs including pos/psize -->
+-	<reg32 offset="0xa802" name="SP_VS_PRIMITIVE_CNTL" type="a6xx_sp_xs_prim_cntl" usage="rp_blit"/>
+-	<array offset="0xa803" name="SP_VS_OUT" stride="1" length="16" usage="rp_blit">
++	<reg32 offset="0xa802" name="SP_VS_OUTPUT_CNTL" type="a6xx_sp_xs_output_cntl" usage="rp_blit"/>
++	<array offset="0xa803" name="SP_VS_OUTPUT" stride="1" length="16" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="A_REGID" low="0" high="7" type="a3xx_regid"/>
+ 			<bitfield name="A_COMPMASK" low="8" high="11" type="hex"/>
+@@ -4678,12 +2526,12 @@ to upconvert to 32b float internally?
+ 	</array>
+ 	<!--
+ 	Starting with a5xx, position/psize outputs from shader end up in the
+-	SP_VS_OUT map, with highest OUTLOCn position.  (Generally they are
++	SP_VS_OUTPUT map, with highest OUTLOCn position.  (Generally they are
+ 	the last entries too, except when gl_PointCoord is used, blob inserts
+ 	an extra varying after, but with a lower OUTLOC position.  If present,
+ 	psize is last, preceded by position.
+ 	 -->
+-	<array offset="0xa813" name="SP_VS_VPC_DST" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa813" name="SP_VS_VPC_DEST" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="OUTLOC0" low="0" high="7" type="uint"/>
+ 			<bitfield name="OUTLOC1" low="8" high="15" type="uint"/>
+@@ -4752,7 +2600,7 @@ to upconvert to 32b float internally?
+ 		</bitfield>
+ 	</bitset>
+ 
+-	<bitset name="a6xx_sp_xs_pvt_mem_hw_stack_offset" inline="yes">
++	<bitset name="a6xx_sp_xs_pvt_mem_stack_offset" inline="yes">
+ 		<doc>
+ 			This seems to be the equivalent of HWSTACKOFFSET in
+ 			a3xx. The ldp/stp offset formula above isn't affected by
+@@ -4763,18 +2611,18 @@ to upconvert to 32b float internally?
+ 		<bitfield name="OFFSET" low="0" high="18" shr="11"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xa81b" name="SP_VS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa81c" name="SP_VS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa81b" name="SP_VS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa81c" name="SP_VS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa81e" name="SP_VS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa81f" name="SP_VS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa81f" name="SP_VS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa821" name="SP_VS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa822" name="SP_VS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa822" name="SP_VS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa823" name="SP_VS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa824" name="SP_VS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa825" name="SP_VS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa82d" name="SP_VS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa824" name="SP_VS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa825" name="SP_VS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa82d" name="SP_VS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xa830" name="SP_HS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa830" name="SP_HS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!-- There is no mergedregs bit, that comes from the VS. -->
+ 		<bitfield name="EARLYPREAMBLE" pos="20" type="boolean"/>
+ 	</reg32>
+@@ -4782,32 +2630,32 @@ to upconvert to 32b float internally?
+ 	Total size of local storage in dwords divided by the wave size.
+ 	The maximum value is 64. With the wave size being always 64 for HS,
+ 	the maximum size of local storage should be:
+-	 64 (wavesize) * 64 (SP_HS_WAVE_INPUT_SIZE) * 4 = 16k
++	 64 (wavesize) * 64 (SP_HS_CNTL_1) * 4 = 16k
+ 	-->
+-	<reg32 offset="0xa831" name="SP_HS_WAVE_INPUT_SIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa832" name="SP_HS_BRANCH_COND" type="hex" usage="rp_blit"/>
++	<reg32 offset="0xa831" name="SP_HS_CNTL_1" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa832" name="SP_HS_BOOLEAN_CF_MASK" type="hex" usage="rp_blit"/>
+ 
+ 	<!-- TODO: exact same layout as 0xa81b-0xa825 -->
+-	<reg32 offset="0xa833" name="SP_HS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa834" name="SP_HS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa833" name="SP_HS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa834" name="SP_HS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa836" name="SP_HS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa837" name="SP_HS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa837" name="SP_HS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa839" name="SP_HS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa83a" name="SP_HS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa83a" name="SP_HS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa83b" name="SP_HS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa83c" name="SP_HS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa83d" name="SP_HS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa82f" name="SP_HS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa83c" name="SP_HS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa83d" name="SP_HS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa82f" name="SP_HS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xa840" name="SP_DS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa840" name="SP_DS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!-- There is no mergedregs bit, that comes from the VS. -->
+ 		<bitfield name="EARLYPREAMBLE" pos="20" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa841" name="SP_DS_BRANCH_COND" type="hex"/>
++	<reg32 offset="0xa841" name="SP_DS_BOOLEAN_CF_MASK" type="hex"/>
+ 
+ 	<!-- TODO: exact same layout as 0xa802-0xa81a -->
+-	<reg32 offset="0xa842" name="SP_DS_PRIMITIVE_CNTL" type="a6xx_sp_xs_prim_cntl" usage="rp_blit"/>
+-	<array offset="0xa843" name="SP_DS_OUT" stride="1" length="16" usage="rp_blit">
++	<reg32 offset="0xa842" name="SP_DS_OUTPUT_CNTL" type="a6xx_sp_xs_output_cntl" usage="rp_blit"/>
++	<array offset="0xa843" name="SP_DS_OUTPUT" stride="1" length="16" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="A_REGID" low="0" high="7" type="a3xx_regid"/>
+ 			<bitfield name="A_COMPMASK" low="8" high="11" type="hex"/>
+@@ -4815,7 +2663,7 @@ to upconvert to 32b float internally?
+ 			<bitfield name="B_COMPMASK" low="24" high="27" type="hex"/>
+ 		</reg32>
+ 	</array>
+-	<array offset="0xa853" name="SP_DS_VPC_DST" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa853" name="SP_DS_VPC_DEST" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="OUTLOC0" low="0" high="7" type="uint"/>
+ 			<bitfield name="OUTLOC1" low="8" high="15" type="uint"/>
+@@ -4825,22 +2673,22 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!-- TODO: exact same layout as 0xa81b-0xa825 -->
+-	<reg32 offset="0xa85b" name="SP_DS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa85c" name="SP_DS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa85b" name="SP_DS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa85c" name="SP_DS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa85e" name="SP_DS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa85f" name="SP_DS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa85f" name="SP_DS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa861" name="SP_DS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa862" name="SP_DS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa862" name="SP_DS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa863" name="SP_DS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa864" name="SP_DS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa865" name="SP_DS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa868" name="SP_DS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa864" name="SP_DS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa865" name="SP_DS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa868" name="SP_DS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xa870" name="SP_GS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa870" name="SP_GS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<!-- There is no mergedregs bit, that comes from the VS. -->
+ 		<bitfield name="EARLYPREAMBLE" pos="20" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa871" name="SP_GS_PRIM_SIZE" low="0" high="7" type="uint" usage="rp_blit">
++	<reg32 offset="0xa871" name="SP_GS_CNTL_1" low="0" high="7" type="uint" usage="rp_blit">
+ 		<doc>
+ 			Normally the size of the output of the last stage in
+ 			dwords. It should be programmed as follows:
+@@ -4854,11 +2702,11 @@ to upconvert to 32b float internally?
+ 			doesn't matter in practice.
+ 		</doc>
+ 	</reg32>
+-	<reg32 offset="0xa872" name="SP_GS_BRANCH_COND" type="hex" usage="rp_blit"/>
++	<reg32 offset="0xa872" name="SP_GS_BOOLEAN_CF_MASK" type="hex" usage="rp_blit"/>
+ 
+ 	<!-- TODO: exact same layout as 0xa802-0xa81a -->
+-	<reg32 offset="0xa873" name="SP_GS_PRIMITIVE_CNTL" type="a6xx_sp_xs_prim_cntl" usage="rp_blit"/>
+-	<array offset="0xa874" name="SP_GS_OUT" stride="1" length="16" usage="rp_blit">
++	<reg32 offset="0xa873" name="SP_GS_OUTPUT_CNTL" type="a6xx_sp_xs_output_cntl" usage="rp_blit"/>
++	<array offset="0xa874" name="SP_GS_OUTPUT" stride="1" length="16" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="A_REGID" low="0" high="7" type="a3xx_regid"/>
+ 			<bitfield name="A_COMPMASK" low="8" high="11" type="hex"/>
+@@ -4867,7 +2715,7 @@ to upconvert to 32b float internally?
+ 		</reg32>
+ 	</array>
+ 
+-	<array offset="0xa884" name="SP_GS_VPC_DST" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa884" name="SP_GS_VPC_DEST" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="OUTLOC0" low="0" high="7" type="uint"/>
+ 			<bitfield name="OUTLOC1" low="8" high="15" type="uint"/>
+@@ -4877,29 +2725,29 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!-- TODO: exact same layout as 0xa81b-0xa825 -->
+-	<reg32 offset="0xa88c" name="SP_GS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa88d" name="SP_GS_OBJ_START" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa88c" name="SP_GS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa88d" name="SP_GS_BASE" type="address" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa88f" name="SP_GS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa890" name="SP_GS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
++	<reg64 offset="0xa890" name="SP_GS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
+ 	<reg32 offset="0xa892" name="SP_GS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+-	<reg32 offset="0xa893" name="SP_GS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa893" name="SP_GS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa894" name="SP_GS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xa895" name="SP_GS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
+-	<reg32 offset="0xa896" name="SP_GS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
+-	<reg32 offset="0xa899" name="SP_GS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
+-
+-	<reg64 offset="0xa8a0" name="SP_VS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a2" name="SP_HS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a4" name="SP_DS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a6" name="SP_GS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa8a8" name="SP_VS_TEX_CONST" type="address" align="64" usage="cmd"/>
+-	<reg64 offset="0xa8aa" name="SP_HS_TEX_CONST" type="address" align="64" usage="cmd"/>
+-	<reg64 offset="0xa8ac" name="SP_DS_TEX_CONST" type="address" align="64" usage="cmd"/>
+-	<reg64 offset="0xa8ae" name="SP_GS_TEX_CONST" type="address" align="64" usage="cmd"/>
++	<reg32 offset="0xa895" name="SP_GS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa896" name="SP_GS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa899" name="SP_GS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
++
++	<reg64 offset="0xa8a0" name="SP_VS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a2" name="SP_HS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a4" name="SP_DS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a6" name="SP_GS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa8a8" name="SP_VS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa8aa" name="SP_HS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa8ac" name="SP_DS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa8ae" name="SP_GS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
+ 
+ 	<!-- TODO: 4 unknown bool registers 0xa8c0-0xa8c3 -->
+ 
+-	<reg32 offset="0xa980" name="SP_FS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="rp_blit">
++	<reg32 offset="0xa980" name="SP_PS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="rp_blit">
+ 		<bitfield name="THREADSIZE" pos="20" type="a6xx_threadsize"/>
+ 		<bitfield name="UNK21" pos="21" type="boolean"/>
+ 		<bitfield name="VARYING" pos="22" type="boolean"/>
+@@ -4909,8 +2757,7 @@ to upconvert to 32b float internally?
+ 				fine derivatives and quad subgroup ops.
+ 			</doc>
+ 		</bitfield>
+-		<!-- note: vk blob uses bit24 -->
+-		<bitfield name="UNK24" pos="24" type="boolean"/>
++		<bitfield name="INOUTREGOVERLAP" pos="24" type="boolean"/>
+ 		<bitfield name="UNK25" pos="25" type="boolean"/>
+ 		<bitfield name="PIXLODENABLE" pos="26" type="boolean">
+ 			<doc>
+@@ -4923,12 +2770,12 @@ to upconvert to 32b float internally?
+ 		<bitfield name="EARLYPREAMBLE" pos="28" type="boolean"/>
+ 		<bitfield name="MERGEDREGS" pos="31" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa981" name="SP_FS_BRANCH_COND" type="hex"/>
+-	<reg32 offset="0xa982" name="SP_FS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="rp_blit"/>
+-	<reg64 offset="0xa983" name="SP_FS_OBJ_START" type="address" align="32" usage="rp_blit"/>
+-	<reg32 offset="0xa985" name="SP_FS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
+-	<reg64 offset="0xa986" name="SP_FS_PVT_MEM_ADDR" type="waddress" align="32" usage="rp_blit"/>
+-	<reg32 offset="0xa988" name="SP_FS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
++	<reg32 offset="0xa981" name="SP_PS_BOOLEAN_CF_MASK" type="hex"/>
++	<reg32 offset="0xa982" name="SP_PS_PROGRAM_COUNTER_OFFSET" type="uint" usage="rp_blit"/>
++	<reg64 offset="0xa983" name="SP_PS_BASE" type="address" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa985" name="SP_PS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="rp_blit"/>
++	<reg64 offset="0xa986" name="SP_PS_PVT_MEM_BASE" type="waddress" align="32" usage="rp_blit"/>
++	<reg32 offset="0xa988" name="SP_PS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0xa989" name="SP_BLEND_CNTL" usage="rp_blit">
+ 		<!-- per-mrt enable bit -->
+@@ -4948,7 +2795,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="SRGB_MRT6" pos="6" type="boolean"/>
+ 		<bitfield name="SRGB_MRT7" pos="7" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xa98b" name="SP_FS_RENDER_COMPONENTS" usage="rp_blit">
++	<reg32 offset="0xa98b" name="SP_PS_OUTPUT_MASK" usage="rp_blit">
+ 		<bitfield name="RT0" low="0" high="3"/>
+ 		<bitfield name="RT1" low="4" high="7"/>
+ 		<bitfield name="RT2" low="8" high="11"/>
+@@ -4958,17 +2805,17 @@ to upconvert to 32b float internally?
+ 		<bitfield name="RT6" low="24" high="27"/>
+ 		<bitfield name="RT7" low="28" high="31"/>
+ 	</reg32>
+-	<reg32 offset="0xa98c" name="SP_FS_OUTPUT_CNTL0" usage="rp_blit">
++	<reg32 offset="0xa98c" name="SP_PS_OUTPUT_CNTL" usage="rp_blit">
+ 		<bitfield name="DUAL_COLOR_IN_ENABLE" pos="0" type="boolean"/>
+ 		<bitfield name="DEPTH_REGID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="SAMPMASK_REGID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="STENCILREF_REGID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa98d" name="SP_FS_OUTPUT_CNTL1" usage="rp_blit">
++	<reg32 offset="0xa98d" name="SP_PS_MRT_CNTL" usage="rp_blit">
+ 		<bitfield name="MRT" low="0" high="3" type="uint"/>
+ 	</reg32>
+ 
+-	<array offset="0xa98e" name="SP_FS_OUTPUT" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa98e" name="SP_PS_OUTPUT" stride="1" length="8" usage="rp_blit">
+ 		<doc>per MRT</doc>
+ 		<reg32 offset="0x0" name="REG">
+ 			<bitfield name="REGID" low="0" high="7" type="a3xx_regid"/>
+@@ -4976,7 +2823,7 @@ to upconvert to 32b float internally?
+ 		</reg32>
+ 	</array>
+ 
+-	<array offset="0xa996" name="SP_FS_MRT" stride="1" length="8" usage="rp_blit">
++	<array offset="0xa996" name="SP_PS_MRT" stride="1" length="8" usage="rp_blit">
+ 		<reg32 offset="0" name="REG">
+ 			<bitfield name="COLOR_FORMAT" low="0" high="7" type="a6xx_format"/>
+ 			<bitfield name="COLOR_SINT" pos="8" type="boolean"/>
+@@ -4985,7 +2832,7 @@ to upconvert to 32b float internally?
+ 		</reg32>
+ 	</array>
+ 
+-	<reg32 offset="0xa99e" name="SP_FS_PREFETCH_CNTL" usage="rp_blit">
++	<reg32 offset="0xa99e" name="SP_PS_INITIAL_TEX_LOAD_CNTL" usage="rp_blit">
+ 		<bitfield name="COUNT" low="0" high="2" type="uint"/>
+ 		<bitfield name="IJ_WRITE_DISABLE" pos="3" type="boolean"/>
+ 		<doc>
+@@ -5002,7 +2849,7 @@ to upconvert to 32b float internally?
+ 		<!-- Blob never uses it -->
+ 		<bitfield name="CONSTSLOTID4COORD" low="16" high="24" type="uint" variants="A7XX-"/>
+ 	</reg32>
+-	<array offset="0xa99f" name="SP_FS_PREFETCH" stride="1" length="4" variants="A6XX" usage="rp_blit">
++	<array offset="0xa99f" name="SP_PS_INITIAL_TEX_LOAD" stride="1" length="4" variants="A6XX" usage="rp_blit">
+ 		<reg32 offset="0" name="CMD" variants="A6XX">
+ 			<bitfield name="SRC" low="0" high="6" type="uint"/>
+ 			<bitfield name="SAMP_ID" low="7" high="10" type="uint"/>
+@@ -5016,7 +2863,7 @@ to upconvert to 32b float internally?
+ 			<bitfield name="CMD" low="29" high="31" type="a6xx_tex_prefetch_cmd"/>
+ 		</reg32>
+ 	</array>
+-	<array offset="0xa99f" name="SP_FS_PREFETCH" stride="1" length="4" variants="A7XX-" usage="rp_blit">
++	<array offset="0xa99f" name="SP_PS_INITIAL_TEX_LOAD" stride="1" length="4" variants="A7XX-" usage="rp_blit">
+ 		<reg32 offset="0" name="CMD" variants="A7XX-">
+ 			<bitfield name="SRC" low="0" high="6" type="uint"/>
+ 			<bitfield name="SAMP_ID" low="7" high="9" type="uint"/>
+@@ -5028,22 +2875,23 @@ to upconvert to 32b float internally?
+ 			<bitfield name="CMD" low="26" high="29" type="a6xx_tex_prefetch_cmd"/>
+ 		</reg32>
+ 	</array>
+-	<array offset="0xa9a3" name="SP_FS_BINDLESS_PREFETCH" stride="1" length="4" usage="rp_blit">
++	<array offset="0xa9a3" name="SP_PS_INITIAL_TEX_INDEX" stride="1" length="4" usage="rp_blit">
+ 		<reg32 offset="0" name="CMD">
+ 			<bitfield name="SAMP_ID" low="0" high="15" type="uint"/>
+ 			<bitfield name="TEX_ID" low="16" high="31" type="uint"/>
+ 		</reg32>
+ 	</array>
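[The bindless index array above has a simple 16/16 layout. A minimal packing
sketch in C, for illustration only — the helper name is invented, not part of
this patch:

    #include <stdint.h>

    /* Pack one SP_PS_INITIAL_TEX_INDEX entry: SAMP_ID in bits 15:0,
     * TEX_ID in bits 31:16, per the bitfields above. */
    static uint32_t sp_ps_initial_tex_index(uint16_t samp_id, uint16_t tex_id)
    {
        return (uint32_t)samp_id | ((uint32_t)tex_id << 16);
    }
]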
+-	<reg32 offset="0xa9a7" name="SP_FS_TEX_COUNT" low="0" high="7" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xa9a7" name="SP_PS_TSIZE" low="0" high="7" type="uint" usage="rp_blit"/>
+ 	<reg32 offset="0xa9a8" name="SP_UNKNOWN_A9A8" low="0" high="16" usage="cmd"/> <!-- always 0x0 ? -->
+-	<reg32 offset="0xa9a9" name="SP_FS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa9a9" name="SP_PS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="rp_blit"/>
++	<reg32 offset="0xa9ab" name="SP_PS_UNKNOWN_A9AB" variants="A7XX-" usage="cmd"/>
+ 
+ 	<!-- TODO: unknown bool register at 0xa9aa, likely same as 0xa8c0-0xa8c3 but for FS -->
+ 
+ 
+ 
+ 
+-	<reg32 offset="0xa9b0" name="SP_CS_CTRL_REG0" type="a6xx_sp_xs_ctrl_reg0" usage="cmd">
++	<reg32 offset="0xa9b0" name="SP_CS_CNTL_0" type="a6xx_sp_xs_cntl_0" usage="cmd">
+ 		<bitfield name="THREADSIZE" pos="20" type="a6xx_threadsize"/>
+ 		<!-- seems to make SP use fewer concurrent threads when possible? -->
+ 		<bitfield name="UNK21" pos="21" type="boolean"/>
+@@ -5053,8 +2901,15 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MERGEDREGS" pos="31" type="boolean"/>
+ 	</reg32>
+ 
++	<enum name="a6xx_const_ram_mode">
++		<value value="0x0" name="CONSTLEN_128"/>
++		<value value="0x1" name="CONSTLEN_192"/>
++		<value value="0x2" name="CONSTLEN_256"/>
++		<value value="0x3" name="CONSTLEN_512"/> <!-- a7xx only -->
++	</enum>
++
+ 	<!-- set for compute shaders -->
+-	<reg32 offset="0xa9b1" name="SP_CS_UNKNOWN_A9B1" usage="cmd">
++	<reg32 offset="0xa9b1" name="SP_CS_CNTL_1" usage="cmd">
+ 		<bitfield name="SHARED_SIZE" low="0" high="4" type="uint">
+ 			<doc>
+ 				If 0 - all 32k of shared storage is enabled, otherwise
+@@ -5065,32 +2920,36 @@ to upconvert to 32b float internally?
+ 				always return 0)
+ 			</doc>
+ 		</bitfield>
+-		<bitfield name="UNK5" pos="5" type="boolean"/>
+-		<!-- always 1 ? -->
+-		<bitfield name="UNK6" pos="6" type="boolean"/>
++		<bitfield name="CONSTANTRAMMODE" low="5" high="6" type="a6xx_const_ram_mode">
++			<doc>
++				This defines the split between consts and local
++				memory in the Local Buffer. The programmed value
++				must be at least the actual CONSTLEN.
++			</doc>
++		</bitfield>
+ 	</reg32>
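[A minimal sketch of how these two fields might be programmed together,
assuming CONSTLEN is counted in the same units as the enum value names above —
an assumption, as are the helper names:

    #include <stdint.h>

    enum a6xx_const_ram_mode {
        CONSTLEN_128 = 0x0,
        CONSTLEN_192 = 0x1,
        CONSTLEN_256 = 0x2,
        CONSTLEN_512 = 0x3, /* a7xx only */
    };

    /* The doc above says the programmed split must be at least the
     * actual CONSTLEN, so pick the smallest mode that covers it. */
    static enum a6xx_const_ram_mode pick_const_ram_mode(unsigned constlen)
    {
        if (constlen <= 128) return CONSTLEN_128;
        if (constlen <= 192) return CONSTLEN_192;
        if (constlen <= 256) return CONSTLEN_256;
        return CONSTLEN_512; /* a7xx only */
    }

    static uint32_t sp_cs_cntl_1(unsigned shared_size, enum a6xx_const_ram_mode mode)
    {
        return (shared_size & 0x1f)        /* SHARED_SIZE, bits 4:0 */
             | ((uint32_t)mode << 5);      /* CONSTANTRAMMODE, bits 6:5 */
    }
]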
+-	<reg32 offset="0xa9b2" name="SP_CS_BRANCH_COND" type="hex" usage="cmd"/>
+-	<reg32 offset="0xa9b3" name="SP_CS_OBJ_FIRST_EXEC_OFFSET" type="uint" usage="cmd"/>
+-	<reg64 offset="0xa9b4" name="SP_CS_OBJ_START" type="address" align="32" usage="cmd"/>
++	<reg32 offset="0xa9b2" name="SP_CS_BOOLEAN_CF_MASK" type="hex" usage="cmd"/>
++	<reg32 offset="0xa9b3" name="SP_CS_PROGRAM_COUNTER_OFFSET" type="uint" usage="cmd"/>
++	<reg64 offset="0xa9b4" name="SP_CS_BASE" type="address" align="32" usage="cmd"/>
+ 	<reg32 offset="0xa9b6" name="SP_CS_PVT_MEM_PARAM" type="a6xx_sp_xs_pvt_mem_param" usage="cmd"/>
+-	<reg64 offset="0xa9b7" name="SP_CS_PVT_MEM_ADDR" align="32" usage="cmd"/>
++	<reg64 offset="0xa9b7" name="SP_CS_PVT_MEM_BASE" align="32" usage="cmd"/>
+ 	<reg32 offset="0xa9b9" name="SP_CS_PVT_MEM_SIZE" type="a6xx_sp_xs_pvt_mem_size" usage="cmd"/>
+-	<reg32 offset="0xa9ba" name="SP_CS_TEX_COUNT" low="0" high="7" type="uint" usage="cmd"/>
++	<reg32 offset="0xa9ba" name="SP_CS_TSIZE" low="0" high="7" type="uint" usage="cmd"/>
+ 	<reg32 offset="0xa9bb" name="SP_CS_CONFIG" type="a6xx_sp_xs_config" usage="cmd"/>
+-	<reg32 offset="0xa9bc" name="SP_CS_INSTRLEN" low="0" high="27" type="uint" usage="cmd"/>
+-	<reg32 offset="0xa9bd" name="SP_CS_PVT_MEM_HW_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_hw_stack_offset" usage="cmd"/>
++	<reg32 offset="0xa9bc" name="SP_CS_INSTR_SIZE" low="0" high="27" type="uint" usage="cmd"/>
++	<reg32 offset="0xa9bd" name="SP_CS_PVT_MEM_STACK_OFFSET" type="a6xx_sp_xs_pvt_mem_stack_offset" usage="cmd"/>
+ 	<reg32 offset="0xa9be" name="SP_CS_UNKNOWN_A9BE" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xa9c5" name="SP_CS_VGPR_CONFIG" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9c5" name="SP_CS_VGS_CNTL" variants="A7XX-" usage="cmd"/>
+ 
+-	<!-- new in a6xx gen4, matches HLSQ_CS_CNTL_0 -->
+-	<reg32 offset="0xa9c2" name="SP_CS_CNTL_0" usage="cmd">
++	<!-- new in a6xx gen4, matches SP_CS_CONST_CONFIG_0 -->
++	<reg32 offset="0xa9c2" name="SP_CS_WIE_CNTL_0" usage="cmd">
+ 		<bitfield name="WGIDCONSTID" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="WGSIZECONSTID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="WGOFFSETCONSTID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="LOCALIDREGID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<!-- new in a6xx gen4, matches HLSQ_CS_CNTL_1 -->
+-	<reg32 offset="0xa9c3" name="SP_CS_CNTL_1" variants="A6XX" usage="cmd">
++	<!-- new in a6xx gen4, matches SP_CS_WGE_CNTL -->
++	<reg32 offset="0xa9c3" name="SP_CS_WIE_CNTL_1" variants="A6XX" usage="cmd">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- a650 has 6 "SP cores" (but 3 "SP"). this makes it use only
+@@ -5102,7 +2961,18 @@ to upconvert to 32b float internally?
+ 		<bitfield name="THREADSIZE_SCALAR" pos="10" type="boolean"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xa9c3" name="SP_CS_CNTL_1" variants="A7XX-" usage="cmd">
++	<enum name="a7xx_workitem_rast_order">
++		<value value="0x0" name="WORKITEMRASTORDER_LINEAR"/>
++		<doc>
++			This is a fixed tiling, with 4x4 invocation outer tiles
++			containing 2x2 invocation inner tiles. The intent is to
++			improve cache locality with textures and images accessed
++			using gl_LocalInvocationID.
++		</doc>
++		<value value="0x1" name="WORKITEMRASTORDER_TILED"/>
++	</enum>
++
++	<reg32 offset="0xa9c3" name="SP_CS_WIE_CNTL_1" variants="A7XX-" usage="cmd">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- Must match SP_CS_CTRL -->
+@@ -5110,18 +2980,16 @@ to upconvert to 32b float internally?
+ 		<!-- 1 thread per wave (would hang if THREAD128 is also set) -->
+ 		<bitfield name="THREADSIZE_SCALAR" pos="9" type="boolean"/>
+ 
+-		<!-- Affects getone. If enabled, getone sometimes executed 1? less times
+-		     than there are subgroups.
+-		 -->
+-		<bitfield name="UNK15" pos="15" type="boolean"/>
++		<doc>How invocations/fibers within a workgroup are tiled.</doc>
++		<bitfield name="WORKITEMRASTORDER" pos="15" type="a7xx_workitem_rast_order"/>
+ 	</reg32>
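[To make the tiled order concrete, one plausible decode of
WORKITEMRASTORDER_TILED as a C sketch; the exact bit interleave is an
assumption based only on the doc text (4x4 outer tiles of 2x2 inner tiles),
not on verified hardware behaviour:

    /* Map a linear fiber index to (x, y) under the assumed tiling:
     * 2 bits select within a 2x2 inner tile, 2 more bits select the
     * inner tile within a 4x4 outer tile, the rest walk outer tiles
     * linearly across the workgroup. */
    static void tiled_invocation_xy(unsigned idx, unsigned wg_width_in_outer_tiles,
                                    unsigned *x, unsigned *y)
    {
        unsigned inner = idx & 3;
        unsigned outer = (idx >> 2) & 3;
        unsigned tile  = idx >> 4;
        *x = (tile % wg_width_in_outer_tiles) * 4 + (outer & 1) * 2 + (inner & 1);
        *y = (tile / wg_width_in_outer_tiles) * 4 + (outer >> 1) * 2 + (inner >> 1);
    }
]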
+ 
+ 	<!-- TODO: two 64kb aligned addresses at a9d0/a9d2 -->
+ 
+-	<reg64 offset="0xa9e0" name="SP_FS_TEX_SAMP" type="address" align="16" usage="rp_blit"/>
+-	<reg64 offset="0xa9e2" name="SP_CS_TEX_SAMP" type="address" align="16" usage="cmd"/>
+-	<reg64 offset="0xa9e4" name="SP_FS_TEX_CONST" type="address" align="64" usage="rp_blit"/>
+-	<reg64 offset="0xa9e6" name="SP_CS_TEX_CONST" type="address" align="64" usage="cmd"/>
++	<reg64 offset="0xa9e0" name="SP_PS_SAMPLER_BASE" type="address" align="16" usage="rp_blit"/>
++	<reg64 offset="0xa9e2" name="SP_CS_SAMPLER_BASE" type="address" align="16" usage="cmd"/>
++	<reg64 offset="0xa9e4" name="SP_PS_TEXMEMOBJ_BASE" type="address" align="64" usage="rp_blit"/>
++	<reg64 offset="0xa9e6" name="SP_CS_TEXMEMOBJ_BASE" type="address" align="64" usage="cmd"/>
+ 
+ 	<enum name="a6xx_bindless_descriptor_size">
+ 		<doc>
+@@ -5146,18 +3014,19 @@ to upconvert to 32b float internally?
+ 	</array>
+ 
+ 	<!--
+-	IBO state for compute shader:
++	UAV state for compute shader:
+ 	 -->
+-	<reg64 offset="0xa9f2" name="SP_CS_IBO" type="address" align="16"/>
+-	<reg32 offset="0xaa00" name="SP_CS_IBO_COUNT" low="0" high="6" type="uint"/>
++	<reg64 offset="0xa9f2" name="SP_CS_UAV_BASE" type="address" align="16" variants="A6XX"/>
++	<reg64 offset="0xa9f8" name="SP_CS_UAV_BASE" type="address" align="16" variants="A7XX"/>
++	<reg32 offset="0xaa00" name="SP_CS_USIZE" low="0" high="6" type="uint"/>
+ 
+ 	<!-- Correlated with avgs/uvgs usage in FS -->
+-	<reg32 offset="0xaa01" name="SP_FS_VGPR_CONFIG" type="uint" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xaa01" name="SP_PS_VGS_CNTL" type="uint" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xaa02" name="SP_PS_ALIASED_COMPONENTS_CONTROL" variants="A7XX-" usage="cmd">
++	<reg32 offset="0xaa02" name="SP_PS_OUTPUT_CONST_CNTL" variants="A7XX-" usage="cmd">
+ 		<bitfield name="ENABLED" pos="0" type="boolean"/>
+ 	</reg32>
+-	<reg32 offset="0xaa03" name="SP_PS_ALIASED_COMPONENTS" variants="A7XX-" usage="cmd">
++	<reg32 offset="0xaa03" name="SP_PS_OUTPUT_CONST_MASK" variants="A7XX-" usage="cmd">
+ 		<doc>
+ 			Specify for which components the output color should be read
+ 			from alias, e.g. for:
+@@ -5167,7 +3036,7 @@ to upconvert to 32b float internally?
+ 				alias.1.b32.0 r1.x, c4.x
+ 				alias.1.b32.0 r0.x, c0.x
+ 
+-			the SP_PS_ALIASED_COMPONENTS would be 0x00001111
++			the SP_PS_OUTPUT_CONST_MASK would be 0x00001111
+ 		</doc>
+ 
+ 		<bitfield name="RT0" low="0" high="3"/>
+@@ -5193,7 +3062,7 @@ to upconvert to 32b float internally?
+ 		<value value="0x2" name="ISAMMODE_GL"/>
+ 	</enum>
+ 
+-	<reg32 offset="0xab00" name="SP_MODE_CONTROL" usage="rp_blit">
++	<reg32 offset="0xab00" name="SP_MODE_CNTL" usage="rp_blit">
+ 	  <!--
+ 	  When set, half register loads from the constant file will
+ 	  load a 32-bit value (so hc0.y loads the same value as c0.y)
+@@ -5210,16 +3079,16 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0xab01" name="SP_UNKNOWN_AB01" variants="A7XX-" usage="cmd"/>
+ 	<reg32 offset="0xab02" name="SP_UNKNOWN_AB02" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xab04" name="SP_FS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
+-	<reg32 offset="0xab05" name="SP_FS_INSTRLEN" low="0" high="27" type="uint" usage="rp_blit"/>
++	<reg32 offset="0xab04" name="SP_PS_CONFIG" type="a6xx_sp_xs_config" usage="rp_blit"/>
++	<reg32 offset="0xab05" name="SP_PS_INSTR_SIZE" low="0" high="27" type="uint" usage="rp_blit"/>
+ 
+-	<array offset="0xab10" name="SP_BINDLESS_BASE" stride="2" length="5" variants="A6XX" usage="rp_blit">
++	<array offset="0xab10" name="SP_GFX_BINDLESS_BASE" stride="2" length="5" variants="A6XX" usage="rp_blit">
+ 		<reg64 offset="0" name="DESCRIPTOR" variants="A6XX">
+ 			<bitfield name="DESC_SIZE" low="0" high="1" type="a6xx_bindless_descriptor_size"/>
+ 			<bitfield name="ADDR" low="2" high="63" shr="2" type="address"/>
+ 		</reg64>
+ 	</array>
+-	<array offset="0xab0a" name="SP_BINDLESS_BASE" stride="2" length="8" variants="A7XX-" usage="rp_blit">
++	<array offset="0xab0a" name="SP_GFX_BINDLESS_BASE" stride="2" length="8" variants="A7XX-" usage="rp_blit">
+ 		<reg64 offset="0" name="DESCRIPTOR" variants="A7XX-">
+ 			<bitfield name="DESC_SIZE" low="0" high="1" type="a6xx_bindless_descriptor_size"/>
+ 			<bitfield name="ADDR" low="2" high="63" shr="2" type="address"/>
+@@ -5227,15 +3096,15 @@ to upconvert to 32b float internally?
+ 	</array>
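[The descriptor encoding is the same on both variants; a sketch of packing one
entry (helper invented for illustration):

    #include <assert.h>
    #include <stdint.h>

    /* ADDR is stored shifted right by 2 (shr="2") into bits 63:2, so
     * the base must be at least 4-byte aligned; DESC_SIZE sits in the
     * freed-up low bits 1:0. */
    static uint64_t sp_bindless_base_descriptor(uint64_t iova, unsigned desc_size)
    {
        assert((iova & 3) == 0);
        return iova | (desc_size & 3);
    }
]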
+ 
+ 	<!--
+-	Combined IBO state for 3d pipe, used for Image and SSBO write/atomic
+-	instructions VS/HS/DS/GS/FS.  See SP_CS_IBO_* for compute shaders.
++	Combined UAV state for 3d pipe, used for Image and SSBO write/atomic
++	instructions VS/HS/DS/GS/FS.  See SP_CS_UAV_BASE_* for compute shaders.
+ 	 -->
+-	<reg64 offset="0xab1a" name="SP_IBO" type="address" align="16" usage="cmd"/>
+-	<reg32 offset="0xab20" name="SP_IBO_COUNT" low="0" high="6" type="uint" usage="cmd"/>
++	<reg64 offset="0xab1a" name="SP_GFX_UAV_BASE" type="address" align="16" usage="cmd"/>
++	<reg32 offset="0xab20" name="SP_GFX_USIZE" low="0" high="6" type="uint" usage="cmd"/>
+ 
+ 	<reg32 offset="0xab22" name="SP_UNKNOWN_AB22" variants="A7XX-" usage="cmd"/>
+ 
+-	<bitset name="a6xx_sp_2d_dst_format" inline="yes">
++	<bitset name="a6xx_sp_a2d_output_info" inline="yes">
+ 		<bitfield name="NORM" pos="0" type="boolean"/>
+ 		<bitfield name="SINT" pos="1" type="boolean"/>
+ 		<bitfield name="UINT" pos="2" type="boolean"/>
+@@ -5248,8 +3117,8 @@ to upconvert to 32b float internally?
+ 		<bitfield name="MASK" low="12" high="15"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xacc0" name="SP_2D_DST_FORMAT" type="a6xx_sp_2d_dst_format" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xa9bf" name="SP_2D_DST_FORMAT" type="a6xx_sp_2d_dst_format" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xacc0" name="SP_A2D_OUTPUT_INFO" type="a6xx_sp_a2d_output_info" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xa9bf" name="SP_A2D_OUTPUT_INFO" type="a6xx_sp_a2d_output_info" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0xae00" name="SP_DBG_ECO_CNTL" usage="cmd"/>
+ 	<reg32 offset="0xae01" name="SP_ADDR_MODE_CNTL" pos="0" type="a5xx_address_mode"/>
+@@ -5257,16 +3126,16 @@ to upconvert to 32b float internally?
+ 		<!-- TODO: valid bits 0x3c3f, see kernel -->
+ 	</reg32>
+ 	<reg32 offset="0xae03" name="SP_CHICKEN_BITS" usage="cmd"/>
+-	<reg32 offset="0xae04" name="SP_FLOAT_CNTL" usage="cmd">
++	<reg32 offset="0xae04" name="SP_NC_MODE_CNTL_2" usage="cmd">
+ 		<bitfield name="F16_NO_INF" pos="3" type="boolean"/>
+ 	</reg32>
+ 
+ 	<reg32 offset="0xae06" name="SP_UNKNOWN_AE06" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae08" name="SP_UNKNOWN_AE08" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae09" name="SP_UNKNOWN_AE09" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae0a" name="SP_UNKNOWN_AE0A" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae08" name="SP_CHICKEN_BITS_1" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae09" name="SP_CHICKEN_BITS_2" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae0a" name="SP_CHICKEN_BITS_3" variants="A7XX-" usage="cmd"/>
+ 
+-	<reg32 offset="0xae0f" name="SP_PERFCTR_ENABLE" usage="cmd">
++	<reg32 offset="0xae0f" name="SP_PERFCTR_SHADER_MASK" usage="cmd">
+ 		<!-- some perfcntrs are affected by a per-stage enable bit
+ 		     (PERF_SP_ALU_WORKING_CYCLES for example)
+ 		     TODO: verify position of HS/DS/GS bits -->
+@@ -5281,7 +3150,7 @@ to upconvert to 32b float internally?
+ 	<array offset="0xae60" name="SP_PERFCTR_HLSQ_SEL" stride="1" length="6" variants="A7XX-"/>
+ 	<reg32 offset="0xae6a" name="SP_UNKNOWN_AE6A" variants="A7XX-" usage="cmd"/>
+ 	<reg32 offset="0xae6b" name="SP_UNKNOWN_AE6B" variants="A7XX-" usage="cmd"/>
+-	<reg32 offset="0xae6c" name="SP_UNKNOWN_AE6C" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xae6c" name="SP_HLSQ_DBG_ECO_CNTL" variants="A7XX-" usage="cmd"/>
+ 	<reg32 offset="0xae6d" name="SP_READ_SEL" variants="A7XX-">
+ 		<bitfield name="LOCATION" low="18" high="19" type="a7xx_state_location"/>
+ 		<bitfield name="PIPE" low="16" high="17" type="a7xx_pipe"/>
+@@ -5301,33 +3170,44 @@ to upconvert to 32b float internally?
+ 	"a6xx_sp_ps_tp_cluster" but this actually specifies the border
+ 	color base for compute shaders.
+ 	-->
+-	<reg64 offset="0xb180" name="SP_PS_TP_BORDER_COLOR_BASE_ADDR" type="address" align="128" usage="cmd"/>
++	<reg64 offset="0xb180" name="TPL1_CS_BORDER_COLOR_BASE" type="address" align="128" usage="cmd"/>
+ 	<reg32 offset="0xb182" name="SP_UNKNOWN_B182" low="0" high="2" usage="cmd"/>
+ 	<reg32 offset="0xb183" name="SP_UNKNOWN_B183" low="0" high="23" usage="cmd"/>
+ 
+ 	<reg32 offset="0xb190" name="SP_UNKNOWN_B190"/>
+ 	<reg32 offset="0xb191" name="SP_UNKNOWN_B191"/>
+ 
+-	<!-- could be all the stuff below here is actually TPL1?? -->
+-
+-	<reg32 offset="0xb300" name="SP_TP_RAS_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0xb300" name="TPL1_RAS_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="UNK2" low="2" high="3"/>
+ 	</reg32>
+-	<reg32 offset="0xb301" name="SP_TP_DEST_MSAA_CNTL" usage="rp_blit">
++	<reg32 offset="0xb301" name="TPL1_DEST_MSAA_CNTL" usage="rp_blit">
+ 		<bitfield name="SAMPLES" low="0" high="1" type="a3xx_msaa_samples"/>
+ 		<bitfield name="MSAA_DISABLE" pos="2" type="boolean"/>
+ 	</reg32>
+ 
+ 	<!-- looks to work in the same way as a5xx: -->
+-	<reg64 offset="0xb302" name="SP_TP_BORDER_COLOR_BASE_ADDR" type="address" align="128" usage="cmd"/>
+-	<reg32 offset="0xb304" name="SP_TP_SAMPLE_CONFIG" type="a6xx_sample_config" usage="rp_blit"/>
+-	<reg32 offset="0xb305" name="SP_TP_SAMPLE_LOCATION_0" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0xb306" name="SP_TP_SAMPLE_LOCATION_1" type="a6xx_sample_locations" usage="rp_blit"/>
+-	<reg32 offset="0xb307" name="SP_TP_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
+-	<reg32 offset="0xb309" name="SP_TP_MODE_CNTL" usage="cmd">
++	<reg64 offset="0xb302" name="TPL1_GFX_BORDER_COLOR_BASE" type="address" align="128" usage="cmd"/>
++	<reg32 offset="0xb304" name="TPL1_MSAA_SAMPLE_POS_CNTL" type="a6xx_msaa_sample_pos_cntl" usage="rp_blit"/>
++	<reg32 offset="0xb305" name="TPL1_PROGRAMMABLE_MSAA_POS_0" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0xb306" name="TPL1_PROGRAMMABLE_MSAA_POS_1" type="a6xx_programmable_msaa_pos" usage="rp_blit"/>
++	<reg32 offset="0xb307" name="TPL1_WINDOW_OFFSET" type="a6xx_reg_xy" usage="rp_blit"/>
++
++	<enum name="a6xx_coord_round">
++		<value value="0" name="COORD_TRUNCATE"/>
++		<value value="1" name="COORD_ROUND_NEAREST_EVEN"/>
++	</enum>
++
++	<enum name="a6xx_nearest_mode">
++		<value value="0" name="ROUND_CLAMP_TRUNCATE"/>
++		<value value="1" name="CLAMP_ROUND_TRUNCATE"/>
++	</enum>
++
++	<reg32 offset="0xb309" name="TPL1_MODE_CNTL" usage="cmd">
+ 		<bitfield name="ISAMMODE" low="0" high="1" type="a6xx_isam_mode"/>
+-		<bitfield name="UNK3" low="2" high="7"/>
++		<bitfield name="TEXCOORDROUNDMODE" pos="2" type="a6xx_coord_round"/>
++		<bitfield name="NEARESTMIPSNAP" pos="5" type="a6xx_nearest_mode"/>
++		<bitfield name="DESTDATATYPEOVERRIDE" pos="7" type="boolean"/>
+ 	</reg32>
+ 	<reg32 offset="0xb310" name="SP_UNKNOWN_B310" variants="A7XX-" usage="cmd"/>
+ 
+@@ -5336,42 +3216,45 @@ to upconvert to 32b float internally?
+ 	badly named or the functionality moved in a6xx.  But downstream kernel
+ 	calls this "a6xx_sp_ps_tp_2d_cluster"
+ 	 -->
+-	<reg32 offset="0xb4c0" name="SP_PS_2D_SRC_INFO" type="a6xx_2d_src_surf_info" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb4c1" name="SP_PS_2D_SRC_SIZE" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb4c0" name="TPL1_A2D_SRC_TEXTURE_INFO" type="a6xx_a2d_src_texture_info" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb4c1" name="TPL1_A2D_SRC_TEXTURE_SIZE" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
+ 		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0xb4c2" name="SP_PS_2D_SRC" type="address" align="16" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb4c4" name="SP_PS_2D_SRC_PITCH" variants="A6XX" usage="rp_blit">
++	<reg64 offset="0xb4c2" name="TPL1_A2D_SRC_TEXTURE_BASE" type="address" align="16" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb4c4" name="TPL1_A2D_SRC_TEXTURE_PITCH" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="UNK0" low="0" high="8"/>
+ 		<bitfield name="PITCH" low="9" high="23" shr="6" type="uint"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xb2c0" name="SP_PS_2D_SRC_INFO" type="a6xx_2d_src_surf_info" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xb2c1" name="SP_PS_2D_SRC_SIZE" variants="A7XX">
++	<reg32 offset="0xb2c0" name="TPL1_A2D_SRC_TEXTURE_INFO" type="a6xx_a2d_src_texture_info" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2c1" name="TPL1_A2D_SRC_TEXTURE_SIZE" variants="A7XX">
+ 		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
+ 		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
+ 	</reg32>
+-	<reg64 offset="0xb2c2" name="SP_PS_2D_SRC" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xb2c4" name="SP_PS_2D_SRC_PITCH" variants="A7XX">
+-		<bitfield name="UNK0" low="0" high="8"/>
+-		<bitfield name="PITCH" low="9" high="23" shr="6" type="uint"/>
++	<reg64 offset="0xb2c2" name="TPL1_A2D_SRC_TEXTURE_BASE" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2c4" name="TPL1_A2D_SRC_TEXTURE_PITCH" variants="A7XX">
++		<!--
++		Bits from 3..9 must be zero unless 'TPL1_A2D_BLT_CNTL::TYPE'
++		is A6XX_TEX_IMG_BUFFER, which allows for lower alignment.
++		 -->
++		<bitfield name="PITCH" low="3" high="23" type="uint"/>
+ 	</reg32>
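[One reading of that rule, sketched in C: with the pitch occupying bits 23:3
the register value is pitch << 3, so register bits 9:3 being zero means the
byte pitch has bits 6:0 clear, i.e. 128-byte alignment unless the image-buffer
type relaxes it. This is an assumption from the comment above, not a verified
rule:

    #include <stdbool.h>
    #include <stdint.h>

    static bool a2d_src_pitch_valid(uint32_t pitch_bytes, bool is_img_buffer)
    {
        if (!is_img_buffer && (pitch_bytes & 127))
            return false;                  /* needs register bits 9:3 == 0 */
        return pitch_bytes < (1u << 21);   /* must fit in bits 23:3 */
    }
]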
+ 
+ 	<!-- planes for NV12, etc. (TODO: not tested) -->
+-	<reg64 offset="0xb4c5" name="SP_PS_2D_SRC_PLANE1" type="address" align="16" variants="A6XX"/>
+-	<reg32 offset="0xb4c7" name="SP_PS_2D_SRC_PLANE_PITCH" low="0" high="11" shr="6" type="uint" variants="A6XX"/>
+-	<reg64 offset="0xb4c8" name="SP_PS_2D_SRC_PLANE2" type="address" align="16" variants="A6XX"/>
++	<reg64 offset="0xb4c5" name="TPL1_A2D_SRC_TEXTURE_BASE_1" type="address" align="16" variants="A6XX"/>
++	<reg32 offset="0xb4c7" name="TPL1_A2D_SRC_TEXTURE_PITCH_1" low="0" high="11" shr="6" type="uint" variants="A6XX"/>
++	<reg64 offset="0xb4c8" name="TPL1_A2D_SRC_TEXTURE_BASE_2" type="address" align="16" variants="A6XX"/>
+ 
+-	<reg64 offset="0xb2c5" name="SP_PS_2D_SRC_PLANE1" type="address" align="16" variants="A7XX-"/>
+-	<reg32 offset="0xb2c7" name="SP_PS_2D_SRC_PLANE_PITCH" low="0" high="11" shr="6" type="uint" variants="A7XX-"/>
+-	<reg64 offset="0xb2c8" name="SP_PS_2D_SRC_PLANE2" type="address" align="16" variants="A7XX-"/>
++	<reg64 offset="0xb2c5" name="TPL1_A2D_SRC_TEXTURE_BASE_1" type="address" align="16" variants="A7XX-"/>
++	<reg32 offset="0xb2c7" name="TPL1_A2D_SRC_TEXTURE_PITCH_1" low="0" high="11" shr="6" type="uint" variants="A7XX-"/>
++	<reg64 offset="0xb2c8" name="TPL1_A2D_SRC_TEXTURE_BASE_2" type="address" align="16" variants="A7XX-"/>
+ 
+-	<reg64 offset="0xb4ca" name="SP_PS_2D_SRC_FLAGS" type="address" align="16" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb4cc" name="SP_PS_2D_SRC_FLAGS_PITCH" low="0" high="7" shr="6" type="uint" variants="A6XX" usage="rp_blit"/>
++	<reg64 offset="0xb4ca" name="TPL1_A2D_SRC_TEXTURE_FLAG_BASE" type="address" align="16" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb4cc" name="TPL1_A2D_SRC_TEXTURE_FLAG_PITCH" low="0" high="7" shr="6" type="uint" variants="A6XX" usage="rp_blit"/>
+ 
+-	<reg64 offset="0xb2ca" name="SP_PS_2D_SRC_FLAGS" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xb2cc" name="SP_PS_2D_SRC_FLAGS_PITCH" low="0" high="7" shr="6" type="uint" variants="A7XX-" usage="rp_blit"/>
++	<reg64 offset="0xb2ca" name="TPL1_A2D_SRC_TEXTURE_FLAG_BASE" type="address" align="16" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2cc" name="TPL1_A2D_SRC_TEXTURE_FLAG_PITCH" low="0" high="7" shr="6" type="uint" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<reg32 offset="0xb4cd" name="SP_PS_UNKNOWN_B4CD" low="6" high="31" variants="A6XX"/>
+ 	<reg32 offset="0xb4ce" name="SP_PS_UNKNOWN_B4CE" low="0" high="31" variants="A6XX"/>
+@@ -5383,8 +3266,12 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0xb2ce" name="SP_PS_UNKNOWN_B4CE" low="0" high="31" variants="A7XX"/>
+ 	<reg32 offset="0xb2cf" name="SP_PS_UNKNOWN_B4CF" low="0" high="30" variants="A7XX"/>
+ 	<reg32 offset="0xb2d0" name="SP_PS_UNKNOWN_B4D0" low="0" high="29" variants="A7XX"/>
+-	<reg32 offset="0xb2d1" name="SP_PS_2D_WINDOW_OFFSET" type="a6xx_reg_xy" variants="A7XX"/>
+-	<reg32 offset="0xb2d2" name="SP_PS_UNKNOWN_B2D2" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xb2d1" name="TPL1_A2D_WINDOW_OFFSET" type="a6xx_reg_xy" variants="A7XX"/>
++	<reg32 offset="0xb2d2" name="TPL1_A2D_BLT_CNTL" variants="A7XX-" usage="rp_blit">
++		<bitfield name="RAW_COPY" pos="0" type="boolean"/>
++		<bitfield name="START_OFFSET_TEXELS" low="16" high="21"/>
++		<bitfield name="TYPE" low="29" high="31" type="a6xx_tex_type"/>
++	</reg32>
+ 	<reg32 offset="0xab21" name="SP_WINDOW_OFFSET" type="a6xx_reg_xy" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<!-- always 0x100000 or 0x1000000? -->
+@@ -5422,34 +3309,44 @@ to upconvert to 32b float internally?
+ 
+ 	<!-- TODO: 4 more perfcntr sel at 0xb620 ? -->
+ 
+-	<bitset name="a6xx_hlsq_xs_cntl" inline="yes">
++	<bitset name="a6xx_xs_const_config" inline="yes">
+ 		<bitfield name="CONSTLEN" low="0" high="7" shr="2" type="uint"/>
+ 		<bitfield name="ENABLED" pos="8" type="boolean"/>
+ 		<bitfield name="READ_IMM_SHARED_CONSTS" pos="9" type="boolean" variants="A7XX-"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xb800" name="HLSQ_VS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb801" name="HLSQ_HS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb802" name="HLSQ_DS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb803" name="HLSQ_GS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb800" name="SP_VS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb801" name="SP_HS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb802" name="SP_DS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb803" name="SP_GS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
+ 
+-	<reg32 offset="0xa827" name="HLSQ_VS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa83f" name="HLSQ_HS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa867" name="HLSQ_DS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa898" name="HLSQ_GS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa827" name="SP_VS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa83f" name="SP_HS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa867" name="SP_DS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa898" name="SP_GS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
+ 
+-	<reg32 offset="0xa9aa" name="HLSQ_FS_UNKNOWN_A9AA" variants="A7XX-" usage="rp_blit">
+-		<!-- Tentatively named, appears to disable consts being loaded via CP_LOAD_STATE6_FRAG -->
+-		<bitfield name="CONSTS_LOAD_DISABLE" pos="0" type="boolean"/>
++	<reg32 offset="0xa9aa" name="SP_RENDER_CNTL" variants="A7XX-" usage="rp_blit">
++		<bitfield name="FS_DISABLE" pos="0" type="boolean"/>
+ 	</reg32>
+ 
+-	<!-- Always 0 -->
+-	<reg32 offset="0xa9ac" name="HLSQ_UNKNOWN_A9AC" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9ac" name="SP_DITHER_CNTL" variants="A7XX-" usage="cmd">
++		<bitfield name="DITHER_MODE_MRT0" low="0"  high="1"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT1" low="2"  high="3"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT2" low="4"  high="5"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT3" low="6"  high="7"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT4" low="8"  high="9"  type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT5" low="10" high="11" type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT6" low="12" high="13" type="adreno_rb_dither_mode"/>
++		<bitfield name="DITHER_MODE_MRT7" low="14" high="15" type="adreno_rb_dither_mode"/>
++	</reg32>
+ 
+-	<!-- Used in VK_KHR_fragment_shading_rate -->
+-	<reg32 offset="0xa9ad" name="HLSQ_UNKNOWN_A9AD" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9ad" name="SP_VRS_CONFIG" variants="A7XX-" usage="rp_blit">
++		<bitfield name="PIPELINE_FSR_ENABLE" pos="0" type="boolean"/>
++		<bitfield name="ATTACHMENT_FSR_ENABLE" pos="1" type="boolean"/>
++		<bitfield name="PRIMITIVE_FSR_ENABLE" pos="3" type="boolean"/>
++	</reg32>
+ 
+-	<reg32 offset="0xa9ae" name="HLSQ_UNKNOWN_A9AE" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9ae" name="SP_PS_CNTL_1" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="SYSVAL_REGS_COUNT" low="0" high="7" type="uint"/>
+ 		<!-- UNK8 is set on a730/a740 -->
+ 		<bitfield name="UNK8" pos="8" type="boolean"/>
+@@ -5462,94 +3359,94 @@ to upconvert to 32b float internally?
+ 	<reg32 offset="0xb823" name="HLSQ_LOAD_STATE_GEOM_DATA"/>
+ 
+ 
+-	<bitset name="a6xx_hlsq_fs_cntl_0" inline="yes">
++	<bitset name="a6xx_sp_ps_wave_cntl" inline="yes">
+ 		<!-- must match SP_FS_CTRL -->
+ 		<bitfield name="THREADSIZE" pos="0" type="a6xx_threadsize"/>
+ 		<bitfield name="VARYINGS" pos="1" type="boolean"/>
+ 		<bitfield name="UNK2" low="2" high="11"/>
+ 	</bitset>
+-	<bitset name="a6xx_hlsq_control_3_reg" inline="yes">
++	<bitset name="a6xx_sp_reg_prog_id_1" inline="yes">
+ 		<!-- register loaded with position (bary.f) -->
+ 		<bitfield name="IJ_PERSP_PIXEL" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="IJ_LINEAR_PIXEL" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="IJ_PERSP_CENTROID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="IJ_LINEAR_CENTROID" low="24" high="31" type="a3xx_regid"/>
+ 	</bitset>
+-	<bitset name="a6xx_hlsq_control_4_reg" inline="yes">
++	<bitset name="a6xx_sp_reg_prog_id_2" inline="yes">
+ 		<bitfield name="IJ_PERSP_SAMPLE" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="IJ_LINEAR_SAMPLE" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="XYCOORDREGID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="ZWCOORDREGID" low="24" high="31" type="a3xx_regid"/>
+ 	</bitset>
+-	<bitset name="a6xx_hlsq_control_5_reg" inline="yes">
++	<bitset name="a6xx_sp_reg_prog_id_3" inline="yes">
+ 		<bitfield name="LINELENGTHREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<bitfield name="FOVEATIONQUALITYREGID" low="8" high="15" type="a3xx_regid"/>
+ 	</bitset>
+ 
+-	<reg32 offset="0xb980" type="a6xx_hlsq_fs_cntl_0" name="HLSQ_FS_CNTL_0" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb980" type="a6xx_sp_ps_wave_cntl" name="SP_PS_WAVE_CNTL" variants="A6XX" usage="rp_blit"/>
+ 	<reg32 offset="0xb981" name="HLSQ_UNKNOWN_B981" pos="0" type="boolean" variants="A6XX"/> <!-- never used by blob -->
+-	<reg32 offset="0xb982" name="HLSQ_CONTROL_1_REG" low="0" high="2" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb982" name="SP_LB_PARAM_LIMIT" low="0" high="2" variants="A6XX" usage="rp_blit">
+ 		<!-- Sets the maximum number of primitives allowed in one FS wave minus one, similarly to the
+ 				 A3xx field, except that it's not necessary to set it to anything but the maximum, since
+ 				 the hardware will simply emit smaller waves when it runs out of space.	-->
+ 		<bitfield name="PRIMALLOCTHRESHOLD" low="0" high="2" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb983" name="HLSQ_CONTROL_2_REG" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb983" name="SP_REG_PROG_ID_0" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="FACEREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- SAMPLEID is loaded into a half-precision register: -->
+ 		<bitfield name="SAMPLEID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="SAMPLEMASK" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="CENTERRHW" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xb984" type="a6xx_hlsq_control_3_reg" name="HLSQ_CONTROL_3_REG" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb985" type="a6xx_hlsq_control_4_reg" name="HLSQ_CONTROL_4_REG" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb986" type="a6xx_hlsq_control_5_reg" name="HLSQ_CONTROL_5_REG" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb987" name="HLSQ_CS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="cmd"/>
+-	<reg32 offset="0xa9c6" type="a6xx_hlsq_fs_cntl_0" name="HLSQ_FS_CNTL_0" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9c7" name="HLSQ_CONTROL_1_REG" low="0" high="2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xb984" type="a6xx_sp_reg_prog_id_1" name="SP_REG_PROG_ID_1" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb985" type="a6xx_sp_reg_prog_id_2" name="SP_REG_PROG_ID_2" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb986" type="a6xx_sp_reg_prog_id_3" name="SP_REG_PROG_ID_3" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb987" name="SP_CS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="cmd"/>
++	<reg32 offset="0xa9c6" type="a6xx_sp_ps_wave_cntl" name="SP_PS_WAVE_CNTL" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9c7" name="SP_LB_PARAM_LIMIT" low="0" high="2" variants="A7XX-" usage="rp_blit">
+ 			<bitfield name="PRIMALLOCTHRESHOLD" low="0" high="2" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9c8" name="HLSQ_CONTROL_2_REG" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9c8" name="SP_REG_PROG_ID_0" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="FACEREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- SAMPLEID is loaded into a half-precision register: -->
+ 		<bitfield name="SAMPLEID" low="8" high="15" type="a3xx_regid"/>
+ 		<bitfield name="SAMPLEMASK" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="CENTERRHW" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xa9c9" type="a6xx_hlsq_control_3_reg" name="HLSQ_CONTROL_3_REG" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9ca" type="a6xx_hlsq_control_4_reg" name="HLSQ_CONTROL_4_REG" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9cb" type="a6xx_hlsq_control_5_reg" name="HLSQ_CONTROL_5_REG" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9cd" name="HLSQ_CS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="cmd"/>
++	<reg32 offset="0xa9c9" type="a6xx_sp_reg_prog_id_1" name="SP_REG_PROG_ID_1" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9ca" type="a6xx_sp_reg_prog_id_2" name="SP_REG_PROG_ID_2" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9cb" type="a6xx_sp_reg_prog_id_3" name="SP_REG_PROG_ID_3" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9cd" name="SP_CS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="cmd"/>
+ 
+ 	<!-- TODO: what does KERNELDIM do exactly (blob sets it differently from turnip) -->
+-	<reg32 offset="0xb990" name="HLSQ_CS_NDRANGE_0" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb990" name="SP_CS_NDRANGE_0" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="KERNELDIM" low="0" high="1" type="uint"/>
+ 		<!-- localsize is value minus one: -->
+ 		<bitfield name="LOCALSIZEX" low="2" high="11" type="uint"/>
+ 		<bitfield name="LOCALSIZEY" low="12" high="21" type="uint"/>
+ 		<bitfield name="LOCALSIZEZ" low="22" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb991" name="HLSQ_CS_NDRANGE_1" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb991" name="SP_CS_NDRANGE_1" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb992" name="HLSQ_CS_NDRANGE_2" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb992" name="SP_CS_NDRANGE_2" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb993" name="HLSQ_CS_NDRANGE_3" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb993" name="SP_CS_NDRANGE_3" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb994" name="HLSQ_CS_NDRANGE_4" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb994" name="SP_CS_NDRANGE_4" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb995" name="HLSQ_CS_NDRANGE_5" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb995" name="SP_CS_NDRANGE_5" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xb996" name="HLSQ_CS_NDRANGE_6" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb996" name="SP_CS_NDRANGE_6" variants="A6XX" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
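[A sketch of filling these seven registers for a 3D dispatch. The GLOBALSIZE
semantics (total invocations per axis, i.e. workgroup count times local size)
and the zero offsets are assumptions, and the struct is invented for
illustration:

    #include <stdint.h>

    struct cs_ndrange { uint32_t reg[7]; };   /* SP_CS_NDRANGE_0..6 */

    static struct cs_ndrange fill_cs_ndrange(const uint32_t local[3],
                                             const uint32_t groups[3])
    {
        struct cs_ndrange n;
        /* KERNELDIM = 3; LOCALSIZE* hold the size minus one, as noted above. */
        n.reg[0] = 3
                 | ((local[0] - 1) << 2)
                 | ((local[1] - 1) << 12)
                 | ((local[2] - 1) << 22);
        for (int i = 0; i < 3; i++) {
            n.reg[1 + 2 * i] = groups[i] * local[i]; /* GLOBALSIZE_{X,Y,Z} */
            n.reg[2 + 2 * i] = 0;                    /* GLOBALOFF_{X,Y,Z}  */
        }
        return n;
    }
]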
+-	<reg32 offset="0xb997" name="HLSQ_CS_CNTL_0" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb997" name="SP_CS_CONST_CONFIG_0" variants="A6XX" usage="rp_blit">
+ 		<!-- these are all vec3. first 3 need to be high regs
+-		     WGSIZECONSTID is the local size (from HLSQ_CS_NDRANGE_0)
++		     WGSIZECONSTID is the local size (from SP_CS_NDRANGE_0)
+ 		     WGOFFSETCONSTID is WGIDCONSTID*WGSIZECONSTID
+ 		-->
+ 		<bitfield name="WGIDCONSTID" low="0" high="7" type="a3xx_regid"/>
+@@ -5557,7 +3454,7 @@ to upconvert to 32b float internally?
+ 		<bitfield name="WGOFFSETCONSTID" low="16" high="23" type="a3xx_regid"/>
+ 		<bitfield name="LOCALIDREGID" low="24" high="31" type="a3xx_regid"/>
+ 	</reg32>
+-	<reg32 offset="0xb998" name="HLSQ_CS_CNTL_1" variants="A6XX" usage="rp_blit">
++	<reg32 offset="0xb998" name="SP_CS_WGE_CNTL" variants="A6XX" usage="rp_blit">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- a650 has 6 "SP cores" (but 3 "SP"). this makes it use only
+@@ -5569,40 +3466,40 @@ to upconvert to 32b float internally?
+ 		<bitfield name="THREADSIZE_SCALAR" pos="10" type="boolean"/>
+ 	</reg32>
+ 	<!--note: vulkan blob doesn't use these -->
+-	<reg32 offset="0xb999" name="HLSQ_CS_KERNEL_GROUP_X" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb99a" name="HLSQ_CS_KERNEL_GROUP_Y" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xb99b" name="HLSQ_CS_KERNEL_GROUP_Z" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb999" name="SP_CS_KERNEL_GROUP_X" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb99a" name="SP_CS_KERNEL_GROUP_Y" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xb99b" name="SP_CS_KERNEL_GROUP_Z" variants="A6XX" usage="rp_blit"/>
+ 
+ 	<!-- TODO: what does KERNELDIM do exactly (blob sets it differently from turnip) -->
+-	<reg32 offset="0xa9d4" name="HLSQ_CS_NDRANGE_0" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d4" name="SP_CS_NDRANGE_0" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="KERNELDIM" low="0" high="1" type="uint"/>
+ 		<!-- localsize is value minus one: -->
+ 		<bitfield name="LOCALSIZEX" low="2" high="11" type="uint"/>
+ 		<bitfield name="LOCALSIZEY" low="12" high="21" type="uint"/>
+ 		<bitfield name="LOCALSIZEZ" low="22" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d5" name="HLSQ_CS_NDRANGE_1" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d5" name="SP_CS_NDRANGE_1" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d6" name="HLSQ_CS_NDRANGE_2" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d6" name="SP_CS_NDRANGE_2" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_X" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d7" name="HLSQ_CS_NDRANGE_3" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d7" name="SP_CS_NDRANGE_3" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d8" name="HLSQ_CS_NDRANGE_4" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d8" name="SP_CS_NDRANGE_4" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Y" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9d9" name="HLSQ_CS_NDRANGE_5" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9d9" name="SP_CS_NDRANGE_5" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALSIZE_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+-	<reg32 offset="0xa9da" name="HLSQ_CS_NDRANGE_6" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9da" name="SP_CS_NDRANGE_6" variants="A7XX-" usage="rp_blit">
+ 		<bitfield name="GLOBALOFF_Z" low="0" high="31" type="uint"/>
+ 	</reg32>
+ 	<!--note: vulkan blob doesn't use these -->
+-	<reg32 offset="0xa9dc" name="HLSQ_CS_KERNEL_GROUP_X" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9dd" name="HLSQ_CS_KERNEL_GROUP_Y" variants="A7XX-" usage="rp_blit"/>
+-	<reg32 offset="0xa9de" name="HLSQ_CS_KERNEL_GROUP_Z" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9dc" name="SP_CS_KERNEL_GROUP_X" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9dd" name="SP_CS_KERNEL_GROUP_Y" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xa9de" name="SP_CS_KERNEL_GROUP_Z" variants="A7XX-" usage="rp_blit"/>
+ 
+ 	<enum name="a7xx_cs_yalign">
+ 		<value name="CS_YALIGN_1" value="8"/>
+@@ -5611,19 +3508,29 @@ to upconvert to 32b float internally?
+ 		<value name="CS_YALIGN_8" value="1"/>
+ 	</enum>
+ 
+-	<reg32 offset="0xa9db" name="HLSQ_CS_CNTL_1" variants="A7XX-" usage="rp_blit">
++	<reg32 offset="0xa9db" name="SP_CS_WGE_CNTL" variants="A7XX-" usage="rp_blit">
+ 		<!-- gl_LocalInvocationIndex -->
+ 		<bitfield name="LINEARLOCALIDREGID" low="0" high="7" type="a3xx_regid"/>
+ 		<!-- Must match SP_CS_CTRL -->
+ 		<bitfield name="THREADSIZE" pos="9" type="a6xx_threadsize"/>
+-		<bitfield name="UNK11" pos="11" type="boolean"/>
+-		<bitfield name="UNK22" pos="22" type="boolean"/>
+-		<bitfield name="UNK26" pos="26" type="boolean"/>
+-		<bitfield name="YALIGN" low="27" high="30" type="a7xx_cs_yalign"/>
++		<doc>
++			When this bit is enabled, the dispatch order interleaves
++			the z coordinate instead of launching all workgroups
++			with z=0, then all with z=1 and so on.
++		</doc>
++		<bitfield name="WORKGROUPRASTORDERZFIRSTEN" pos="11" type="boolean"/>
++		<doc>
++			When both fields are non-zero, the dispatcher uses
++			these tile sizes to launch workgroups in a tiled manner
++			when the x and y workgroup counts are
++			both more than 1.
++		</doc>
++		<bitfield name="WGTILEWIDTH" low="20" high="25"/>
++		<bitfield name="WGTILEHEIGHT" low="26" high="31"/>
+ 	</reg32>
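[Packing sketch for the a7xx layout above; THREADSIZE is assumed to be a
one-bit wave64/wave128 selector here, and the helper is invented:

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t sp_cs_wge_cntl(unsigned linear_id_regid, bool wave128,
                                   bool z_first, unsigned tile_w, unsigned tile_h)
    {
        return (linear_id_regid & 0xff)          /* LINEARLOCALIDREGID */
             | ((uint32_t)wave128 << 9)          /* THREADSIZE */
             | ((uint32_t)z_first << 11)         /* WORKGROUPRASTORDERZFIRSTEN */
             | ((tile_w & 0x3f) << 20)           /* WGTILEWIDTH */
             | ((tile_h & 0x3f) << 26);          /* WGTILEHEIGHT */
    }
]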
+ 
+-	<reg32 offset="0xa9df" name="HLSQ_CS_LOCAL_SIZE" variants="A7XX-" usage="cmd">
+-		<!-- localsize is value minus one: -->
++	<reg32 offset="0xa9df" name="SP_CS_NDRANGE_7" variants="A7XX-" usage="cmd">
++		<!-- The size of the last workgroup. localsize is value minus one: -->
+ 		<bitfield name="LOCALSIZEX" low="2" high="11" type="uint"/>
+ 		<bitfield name="LOCALSIZEY" low="12" high="21" type="uint"/>
+ 		<bitfield name="LOCALSIZEZ" low="22" high="31" type="uint"/>
+@@ -5641,29 +3548,27 @@ to upconvert to 32b float internally?
+ 		</reg64>
+ 	</array>
+ 
+-	<!-- new in a6xx gen4, mirror of SP_CS_UNKNOWN_A9B1? -->
+-	<reg32 offset="0xb9d0" name="HLSQ_CS_UNKNOWN_B9D0" variants="A6XX" usage="cmd">
++	<!-- new in a6xx gen4, mirror of SP_CS_CNTL_1? -->
++	<reg32 offset="0xb9d0" name="HLSQ_CS_CTRL_REG1" variants="A6XX" usage="cmd">
+ 		<bitfield name="SHARED_SIZE" low="0" high="4" type="uint"/>
+-		<bitfield name="UNK5" pos="5" type="boolean"/>
+-		<!-- always 1 ? -->
+-		<bitfield name="UNK6" pos="6" type="boolean"/>
++		<bitfield name="CONSTANTRAMMODE" low="5" high="6" type="a6xx_const_ram_mode"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb00" name="HLSQ_DRAW_CMD" variants="A6XX">
++	<reg32 offset="0xbb00" name="SP_DRAW_INITIATOR" variants="A6XX">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb01" name="HLSQ_DISPATCH_CMD" variants="A6XX">
++	<reg32 offset="0xbb01" name="SP_KERNEL_INITIATOR" variants="A6XX">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb02" name="HLSQ_EVENT_CMD" variants="A6XX">
++	<reg32 offset="0xbb02" name="SP_EVENT_INITIATOR" variants="A6XX">
+ 		<!-- I think only the low bit is actually used? -->
+ 		<bitfield name="STATE_ID" low="16" high="23"/>
+ 		<bitfield name="EVENT" low="0" high="6" type="vgt_event_type"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xbb08" name="HLSQ_INVALIDATE_CMD" variants="A6XX" usage="cmd">
++	<reg32 offset="0xbb08" name="SP_UPDATE_CNTL" variants="A6XX" usage="cmd">
+ 		<doc>
+ 			This register clears pending loads queued up by
+ 			CP_LOAD_STATE6. Each bit resets a particular kind(s) of
+@@ -5678,8 +3583,8 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FS_STATE" pos="4" type="boolean"/>
+ 		<bitfield name="CS_STATE" pos="5" type="boolean"/>
+ 
+-		<bitfield name="CS_IBO" pos="6" type="boolean"/>
+-		<bitfield name="GFX_IBO" pos="7" type="boolean"/>
++		<bitfield name="CS_UAV" pos="6" type="boolean"/>
++		<bitfield name="GFX_UAV" pos="7" type="boolean"/>
+ 
+ 		<!-- Note: these only do something when HLSQ_SHARED_CONSTS is set to 1 -->
+ 		<bitfield name="CS_SHARED_CONST" pos="19" type="boolean"/>
+@@ -5690,20 +3595,20 @@ to upconvert to 32b float internally?
+ 		<bitfield name="GFX_BINDLESS" low="14" high="18" type="hex"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1c" name="HLSQ_DRAW_CMD" variants="A7XX-">
++	<reg32 offset="0xab1c" name="SP_DRAW_INITIATOR" variants="A7XX-">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1d" name="HLSQ_DISPATCH_CMD" variants="A7XX-">
++	<reg32 offset="0xab1d" name="SP_KERNEL_INITIATOR" variants="A7XX-">
+ 		<bitfield name="STATE_ID" low="0" high="7"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1e" name="HLSQ_EVENT_CMD" variants="A7XX-">
++	<reg32 offset="0xab1e" name="SP_EVENT_INITIATOR" variants="A7XX-">
+ 		<bitfield name="STATE_ID" low="16" high="23"/>
+ 		<bitfield name="EVENT" low="0" high="6" type="vgt_event_type"/>
+ 	</reg32>
+ 
+-	<reg32 offset="0xab1f" name="HLSQ_INVALIDATE_CMD" variants="A7XX-" usage="cmd">
++	<reg32 offset="0xab1f" name="SP_UPDATE_CNTL" variants="A7XX-" usage="cmd">
+ 		<doc>
+ 			This register clears pending loads queued up by
+ 			CP_LOAD_STATE6. Each bit resets a particular kind(s) of
+@@ -5718,18 +3623,18 @@ to upconvert to 32b float internally?
+ 		<bitfield name="FS_STATE" pos="4" type="boolean"/>
+ 		<bitfield name="CS_STATE" pos="5" type="boolean"/>
+ 
+-		<bitfield name="CS_IBO" pos="6" type="boolean"/>
+-		<bitfield name="GFX_IBO" pos="7" type="boolean"/>
++		<bitfield name="CS_UAV" pos="6" type="boolean"/>
++		<bitfield name="GFX_UAV" pos="7" type="boolean"/>
+ 
+ 		<!-- SS6_BINDLESS: one bit per bindless base -->
+ 		<bitfield name="CS_BINDLESS" low="9" high="16" type="hex"/>
+ 		<bitfield name="GFX_BINDLESS" low="17" high="24" type="hex"/>
+ 	</reg32>
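[A sketch building a value from the fields visible in this hunk; the stage
bits below FS are elided above and deliberately not guessed, and the helper is
invented:

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t sp_update_cntl_a7xx(bool fs_state, bool cs_state,
                                        bool cs_uav, bool gfx_uav,
                                        uint8_t cs_bindless, uint8_t gfx_bindless)
    {
        return ((uint32_t)fs_state << 4)
             | ((uint32_t)cs_state << 5)
             | ((uint32_t)cs_uav << 6)
             | ((uint32_t)gfx_uav << 7)
             | ((uint32_t)cs_bindless << 9)      /* one bit per bindless base */
             | ((uint32_t)gfx_bindless << 17);
    }
]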
+ 
+-	<reg32 offset="0xbb10" name="HLSQ_FS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A6XX" usage="rp_blit"/>
+-	<reg32 offset="0xab03" name="HLSQ_FS_CNTL" type="a6xx_hlsq_xs_cntl" variants="A7XX-" usage="rp_blit"/>
++	<reg32 offset="0xbb10" name="SP_PS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A6XX" usage="rp_blit"/>
++	<reg32 offset="0xab03" name="SP_PS_CONST_CONFIG" type="a6xx_xs_const_config" variants="A7XX-" usage="rp_blit"/>
+ 
+-	<array offset="0xab40" name="HLSQ_SHARED_CONSTS_IMM" stride="1" length="64" variants="A7XX-"/>
++	<array offset="0xab40" name="SP_SHARED_CONSTANT_GFX_0" stride="1" length="64" variants="A7XX-"/>
+ 
+ 	<reg32 offset="0xbb11" name="HLSQ_SHARED_CONSTS" variants="A6XX" usage="cmd">
+ 		<doc>
+@@ -5738,7 +3643,7 @@ to upconvert to 32b float internally?
+ 			const pool and 16 in the geometry const pool although
+ 			only 8 are actually used (why?) and they are mapped to
+ 			c504-c511 in each stage. Both VS and FS shared consts
+-			are written using ST6_CONSTANTS/SB6_IBO, so that both
++			are written using ST6_CONSTANTS/SB6_UAV, so that both
+ 			the geometry and FS shared consts can be written at once
+ 			by using CP_LOAD_STATE6 rather than
+ 			CP_LOAD_STATE6_FRAG/CP_LOAD_STATE6_GEOM. In addition
+@@ -5747,13 +3652,13 @@ to upconvert to 32b float internally?
+ 
+ 			There is also a separate shared constant pool for CS,
+ 			which is loaded through CP_LOAD_STATE6_FRAG with
+-			ST6_UBO/ST6_IBO. However the only real difference for CS
++			ST6_UBO/ST6_UAV. However the only real difference for CS
+ 			is the dword units.
+ 		</doc>
+ 		<bitfield name="ENABLE" pos="0" type="boolean"/>
+ 	</reg32>
+ 
+-	<!-- mirror of SP_BINDLESS_BASE -->
++	<!-- mirror of SP_GFX_BINDLESS_BASE -->
+ 	<array offset="0xbb20" name="HLSQ_BINDLESS_BASE" stride="2" length="5" variants="A6XX" usage="cmd">
+ 		<reg64 offset="0" name="DESCRIPTOR">
+ 			<bitfield name="DESC_SIZE" low="0" high="1" type="a6xx_bindless_descriptor_size"/>
+@@ -5788,10 +3693,10 @@ to upconvert to 32b float internally?
+ 		sequence. The sequence used internally for an event looks like:
+ 		- write EVENT_CMD pipe register
+ 		- write CP_EVENT_START
+-		- write HLSQ_EVENT_CMD with event or HLSQ_DRAW_CMD
+-		- write PC_EVENT_CMD with event or PC_DRAW_CMD
+-		- write HLSQ_EVENT_CMD(CONTEXT_DONE)
+-		- write PC_EVENT_CMD(CONTEXT_DONE)
++		- write SP_EVENT_INITIATOR with event or SP_DRAW_INITIATOR
++		- write PC_EVENT_INITIATOR with event or PC_DRAW_INITIATOR
++		- write SP_EVENT_INITIATOR(CONTEXT_DONE)
++		- write PC_EVENT_INITIATOR(CONTEXT_DONE)
+ 		- write CP_EVENT_END
+ 		Writing to CP_EVENT_END seems to actually trigger the context roll
+ 	-->
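[The same sequence as a compilable pseudo-C sketch; write_reg() and the
register tokens are stand-ins for illustration, not a real API:

    #include <stdint.h>
    #include <stdio.h>

    enum pipe_reg { EVENT_CMD, CP_EVENT_START, SP_EVENT_INITIATOR,
                    PC_EVENT_INITIATOR, CP_EVENT_END };

    static void write_reg(enum pipe_reg reg, uint32_t val)
    {
        printf("pipe reg %d <- 0x%x\n", reg, val); /* stand-in */
    }

    static void context_roll(uint32_t event, uint32_t context_done)
    {
        write_reg(EVENT_CMD, event);
        write_reg(CP_EVENT_START, 1);
        write_reg(SP_EVENT_INITIATOR, event);        /* or SP_DRAW_INITIATOR */
        write_reg(PC_EVENT_INITIATOR, event);        /* or PC_DRAW_INITIATOR */
        write_reg(SP_EVENT_INITIATOR, context_done); /* CONTEXT_DONE */
        write_reg(PC_EVENT_INITIATOR, context_done);
        write_reg(CP_EVENT_END, 1);                  /* triggers the roll */
    }
]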
+@@ -5809,193 +3714,6 @@ to upconvert to 32b float internally?
+ 	</reg32>
+ </domain>
+ 
+-<!-- Seems basically the same as a5xx, maybe move to common.xml.. -->
+-<domain name="A6XX_TEX_SAMP" width="32">
+-	<doc>Texture sampler dwords</doc>
+-	<enum name="a6xx_tex_filter"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_NEAREST" value="0"/>
+-		<value name="A6XX_TEX_LINEAR" value="1"/>
+-		<value name="A6XX_TEX_ANISO" value="2"/>
+-		<value name="A6XX_TEX_CUBIC" value="3"/> <!-- a650 only -->
+-	</enum>
+-	<enum name="a6xx_tex_clamp"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_REPEAT" value="0"/>
+-		<value name="A6XX_TEX_CLAMP_TO_EDGE" value="1"/>
+-		<value name="A6XX_TEX_MIRROR_REPEAT" value="2"/>
+-		<value name="A6XX_TEX_CLAMP_TO_BORDER" value="3"/>
+-		<value name="A6XX_TEX_MIRROR_CLAMP" value="4"/>
+-	</enum>
+-	<enum name="a6xx_tex_aniso"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_ANISO_1" value="0"/>
+-		<value name="A6XX_TEX_ANISO_2" value="1"/>
+-		<value name="A6XX_TEX_ANISO_4" value="2"/>
+-		<value name="A6XX_TEX_ANISO_8" value="3"/>
+-		<value name="A6XX_TEX_ANISO_16" value="4"/>
+-	</enum>
+-	<enum name="a6xx_reduction_mode">
+-		<value name="A6XX_REDUCTION_MODE_AVERAGE" value="0"/>
+-		<value name="A6XX_REDUCTION_MODE_MIN" value="1"/>
+-		<value name="A6XX_REDUCTION_MODE_MAX" value="2"/>
+-	</enum>
+-
+-	<reg32 offset="0" name="0">
+-		<bitfield name="MIPFILTER_LINEAR_NEAR" pos="0" type="boolean"/>
+-		<bitfield name="XY_MAG" low="1" high="2" type="a6xx_tex_filter"/>
+-		<bitfield name="XY_MIN" low="3" high="4" type="a6xx_tex_filter"/>
+-		<bitfield name="WRAP_S" low="5" high="7" type="a6xx_tex_clamp"/>
+-		<bitfield name="WRAP_T" low="8" high="10" type="a6xx_tex_clamp"/>
+-		<bitfield name="WRAP_R" low="11" high="13" type="a6xx_tex_clamp"/>
+-		<bitfield name="ANISO" low="14" high="16" type="a6xx_tex_aniso"/>
+-		<bitfield name="LOD_BIAS" low="19" high="31" type="fixed" radix="8"/><!-- no idea how many bits for real -->
+-	</reg32>
+-	<reg32 offset="1" name="1">
+-		<bitfield name="CLAMPENABLE" pos="0" type="boolean">
+-			<doc>
+-				clamp result to [0, 1] if the format is unorm or
+-				[-1, 1] if the format is snorm, *after*
+-				filtering. Has no effect for other formats.
+-			</doc>
+-		</bitfield>
+-		<bitfield name="COMPARE_FUNC" low="1" high="3" type="adreno_compare_func"/>
+-		<bitfield name="CUBEMAPSEAMLESSFILTOFF" pos="4" type="boolean"/>
+-		<bitfield name="UNNORM_COORDS" pos="5" type="boolean"/>
+-		<bitfield name="MIPFILTER_LINEAR_FAR" pos="6" type="boolean"/>
+-		<bitfield name="MAX_LOD" low="8" high="19" type="ufixed" radix="8"/>
+-		<bitfield name="MIN_LOD" low="20" high="31" type="ufixed" radix="8"/>
+-	</reg32>
+-	<reg32 offset="2" name="2">
+-		<bitfield name="REDUCTION_MODE" low="0" high="1" type="a6xx_reduction_mode"/>
+-		<bitfield name="CHROMA_LINEAR" pos="5" type="boolean"/>
+-		<bitfield name="BCOLOR" low="7" high="31"/>
+-	</reg32>
+-	<reg32 offset="3" name="3"/>
+-</domain>
+-
+-<domain name="A6XX_TEX_CONST" width="32" varset="chip">
+-	<doc>Texture constant dwords</doc>
+-	<enum name="a6xx_tex_swiz"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_X" value="0"/>
+-		<value name="A6XX_TEX_Y" value="1"/>
+-		<value name="A6XX_TEX_Z" value="2"/>
+-		<value name="A6XX_TEX_W" value="3"/>
+-		<value name="A6XX_TEX_ZERO" value="4"/>
+-		<value name="A6XX_TEX_ONE" value="5"/>
+-	</enum>
+-	<enum name="a6xx_tex_type"> <!-- same as a4xx? -->
+-		<value name="A6XX_TEX_1D" value="0"/>
+-		<value name="A6XX_TEX_2D" value="1"/>
+-		<value name="A6XX_TEX_CUBE" value="2"/>
+-		<value name="A6XX_TEX_3D" value="3"/>
+-		<value name="A6XX_TEX_BUFFER" value="4"/>
+-	</enum>
+-	<reg32 offset="0" name="0">
+-		<bitfield name="TILE_MODE" low="0" high="1" type="a6xx_tile_mode"/>
+-		<bitfield name="SRGB" pos="2" type="boolean"/>
+-		<bitfield name="SWIZ_X" low="4" high="6" type="a6xx_tex_swiz"/>
+-		<bitfield name="SWIZ_Y" low="7" high="9" type="a6xx_tex_swiz"/>
+-		<bitfield name="SWIZ_Z" low="10" high="12" type="a6xx_tex_swiz"/>
+-		<bitfield name="SWIZ_W" low="13" high="15" type="a6xx_tex_swiz"/>
+-		<bitfield name="MIPLVLS" low="16" high="19" type="uint"/>
+-		<!-- overlaps with MIPLVLS -->
+-		<bitfield name="CHROMA_MIDPOINT_X" pos="16" type="boolean"/>
+-		<bitfield name="CHROMA_MIDPOINT_Y" pos="18" type="boolean"/>
+-		<bitfield name="SAMPLES" low="20" high="21" type="a3xx_msaa_samples"/>
+-		<bitfield name="FMT" low="22" high="29" type="a6xx_format"/>
+-		<!--
+-			Why is the swap needed in addition to SWIZ_*? The swap
+-			is performed before border color replacement, while the
+-			swizzle is applied after after it.
+-		-->
+-		<bitfield name="SWAP" low="30" high="31" type="a3xx_color_swap"/>
+-	</reg32>
+-	<reg32 offset="1" name="1">
+-		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
+-		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
+-		<bitfield name="MUTABLEEN" pos="31" type="boolean" variants="A7XX-"/>
+-	</reg32>
+-	<reg32 offset="2" name="2">
+-		<!--
+-			These fields overlap PITCH, and are used instead of
+-			PITCH/PITCHALIGN when TYPE is A6XX_TEX_BUFFER.
+-		 -->
+-		<doc> probably for D3D structured UAVs, normally set to 1 </doc>
+-		<bitfield name="STRUCTSIZETEXELS" low="4" high="15" type="uint"/>
+-		<bitfield name="STARTOFFSETTEXELS" low="16" high="21" type="uint"/>
+-
+-		<!-- minimum pitch (for mipmap levels): log2(pitchalign / 64) -->
+-		<bitfield name="PITCHALIGN" low="0" high="3" type="uint"/>
+-		<doc>Pitch in bytes (so actually stride)</doc>
+-		<bitfield name="PITCH" low="7" high="28" type="uint"/>
+-		<bitfield name="TYPE" low="29" high="31" type="a6xx_tex_type"/>
+-	</reg32>
+-	<reg32 offset="3" name="3">
+-		<!--
+-		ARRAY_PITCH is basically LAYERSZ for the first mipmap level, and
+-		for 3d textures (laid out mipmap level first) MIN_LAYERSZ is the
+-		layer size at the point that it stops being reduced moving to
+-		higher (smaller) mipmap levels
+-		 -->
+-		<bitfield name="ARRAY_PITCH" low="0" high="22" shr="12" type="uint"/>
+-		<bitfield name="MIN_LAYERSZ" low="23" high="26" shr="12"/>
+-		<!--
+-		by default levels with w < 16 are linear
+-		TILE_ALL makes all levels have tiling
+-		seems required when using UBWC, since all levels have UBWC (can possibly be disabled?)
+-		 -->
+-		<bitfield name="TILE_ALL" pos="27" type="boolean"/>
+-		<bitfield name="FLAG" pos="28" type="boolean"/>
+-	</reg32>
+-	<!-- for 2-3 plane format, BASE is flag buffer address (if enabled)
+-	     the address of the non-flag base buffer is determined automatically,
+-	     and must follow the flag buffer
+-	 -->
+-	<reg32 offset="4" name="4">
+-		<bitfield name="BASE_LO" low="5" high="31" shr="5"/>
+-	</reg32>
+-	<reg32 offset="5" name="5">
+-		<bitfield name="BASE_HI" low="0" high="16"/>
+-		<bitfield name="DEPTH" low="17" high="29" type="uint"/>
+-	</reg32>
+-	<reg32 offset="6" name="6">
+-		<!-- overlaps with PLANE_PITCH -->
+-		<bitfield name="MIN_LOD_CLAMP" low="0" high="11" type="ufixed" radix="8"/>
+-		<!-- pitch for plane 2 / plane 3 -->
+-		<bitfield name="PLANE_PITCH" low="8" high="31" type="uint"/>
+-	</reg32>
+-	<!-- 7/8 is plane 2 address for planar formats -->
+-	<reg32 offset="7" name="7">
+-		<bitfield name="FLAG_LO" low="5" high="31" shr="5"/>
+-	</reg32>
+-	<reg32 offset="8" name="8">
+-		<bitfield name="FLAG_HI" low="0" high="16"/>
+-	</reg32>
+-	<!-- 9/10 is plane 3 address for planar formats -->
+-	<reg32 offset="9" name="9">
+-		<bitfield name="FLAG_BUFFER_ARRAY_PITCH" low="0" high="16" shr="4" type="uint"/>
+-	</reg32>
+-	<reg32 offset="10" name="10">
+-		<bitfield name="FLAG_BUFFER_PITCH" low="0" high="6" shr="6" type="uint"/>
+-		<!-- log2 size of the first level, required for mipmapping -->
+-		<bitfield name="FLAG_BUFFER_LOGW" low="8" high="11" type="uint"/>
+-		<bitfield name="FLAG_BUFFER_LOGH" low="12" high="15" type="uint"/>
+-	</reg32>
+-	<reg32 offset="11" name="11"/>
+-	<reg32 offset="12" name="12"/>
+-	<reg32 offset="13" name="13"/>
+-	<reg32 offset="14" name="14"/>
+-	<reg32 offset="15" name="15"/>
+-</domain>
+-
+-<domain name="A6XX_UBO" width="32">
+-	<reg32 offset="0" name="0">
+-		<bitfield name="BASE_LO" low="0" high="31"/>
+-	</reg32>
+-	<reg32 offset="1" name="1">
+-		<bitfield name="BASE_HI" low="0" high="16"/>
+-		<bitfield name="SIZE" low="17" high="31"/> <!-- size in vec4 (4xDWORD) units -->
+-	</reg32>
+-</domain>
+-
+ <domain name="A6XX_PDC" width="32">
+ 	<reg32 offset="0x1140" name="GPU_ENABLE_PDC"/>
+ 	<reg32 offset="0x1148" name="GPU_SEQ_START_ADDR"/>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_descriptors.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx_descriptors.xml
+new file mode 100644
+index 00000000000000..307d43dda8a254
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_descriptors.xml
+@@ -0,0 +1,198 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++<import file="adreno/a6xx_enums.xml"/>
++
++<domain name="A6XX_TEX_SAMP" width="32">
++	<doc>Texture sampler dwords</doc>
++	<enum name="a6xx_tex_filter"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_NEAREST" value="0"/>
++		<value name="A6XX_TEX_LINEAR" value="1"/>
++		<value name="A6XX_TEX_ANISO" value="2"/>
++		<value name="A6XX_TEX_CUBIC" value="3"/> <!-- a650 only -->
++	</enum>
++	<enum name="a6xx_tex_clamp"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_REPEAT" value="0"/>
++		<value name="A6XX_TEX_CLAMP_TO_EDGE" value="1"/>
++		<value name="A6XX_TEX_MIRROR_REPEAT" value="2"/>
++		<value name="A6XX_TEX_CLAMP_TO_BORDER" value="3"/>
++		<value name="A6XX_TEX_MIRROR_CLAMP" value="4"/>
++	</enum>
++	<enum name="a6xx_tex_aniso"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_ANISO_1" value="0"/>
++		<value name="A6XX_TEX_ANISO_2" value="1"/>
++		<value name="A6XX_TEX_ANISO_4" value="2"/>
++		<value name="A6XX_TEX_ANISO_8" value="3"/>
++		<value name="A6XX_TEX_ANISO_16" value="4"/>
++	</enum>
++	<enum name="a6xx_reduction_mode">
++		<value name="A6XX_REDUCTION_MODE_AVERAGE" value="0"/>
++		<value name="A6XX_REDUCTION_MODE_MIN" value="1"/>
++		<value name="A6XX_REDUCTION_MODE_MAX" value="2"/>
++	</enum>
++	<enum name="a6xx_fast_border_color">
++		<!--                           R B G A -->
++		<value name="A6XX_BORDER_COLOR_0_0_0_0" value="0"/>
++		<value name="A6XX_BORDER_COLOR_0_0_0_1" value="1"/>
++		<value name="A6XX_BORDER_COLOR_1_1_1_0" value="2"/>
++		<value name="A6XX_BORDER_COLOR_1_1_1_1" value="3"/>
++	</enum>
++
++	<reg32 offset="0" name="0">
++		<bitfield name="MIPFILTER_LINEAR_NEAR" pos="0" type="boolean"/>
++		<bitfield name="XY_MAG" low="1" high="2" type="a6xx_tex_filter"/>
++		<bitfield name="XY_MIN" low="3" high="4" type="a6xx_tex_filter"/>
++		<bitfield name="WRAP_S" low="5" high="7" type="a6xx_tex_clamp"/>
++		<bitfield name="WRAP_T" low="8" high="10" type="a6xx_tex_clamp"/>
++		<bitfield name="WRAP_R" low="11" high="13" type="a6xx_tex_clamp"/>
++		<bitfield name="ANISO" low="14" high="16" type="a6xx_tex_aniso"/>
++		<bitfield name="LOD_BIAS" low="19" high="31" type="fixed" radix="8"/><!-- no idea how many bits for real -->
++	</reg32>
++	<reg32 offset="1" name="1">
++		<bitfield name="CLAMPENABLE" pos="0" type="boolean">
++			<doc>
++				clamp result to [0, 1] if the format is unorm or
++				[-1, 1] if the format is snorm, *after*
++				filtering. Has no effect for other formats.
++			</doc>
++		</bitfield>
++		<bitfield name="COMPARE_FUNC" low="1" high="3" type="adreno_compare_func"/>
++		<bitfield name="CUBEMAPSEAMLESSFILTOFF" pos="4" type="boolean"/>
++		<bitfield name="UNNORM_COORDS" pos="5" type="boolean"/>
++		<bitfield name="MIPFILTER_LINEAR_FAR" pos="6" type="boolean"/>
++		<bitfield name="MAX_LOD" low="8" high="19" type="ufixed" radix="8"/>
++		<bitfield name="MIN_LOD" low="20" high="31" type="ufixed" radix="8"/>
++	</reg32>
++	<reg32 offset="2" name="2">
++		<bitfield name="REDUCTION_MODE" low="0" high="1" type="a6xx_reduction_mode"/>
++		<bitfield name="FASTBORDERCOLOR" low="2" high="3" type="a6xx_fast_border_color"/>
++		<bitfield name="FASTBORDERCOLOREN" pos="4" type="boolean"/>
++		<bitfield name="CHROMA_LINEAR" pos="5" type="boolean"/>
++		<bitfield name="BCOLOR" low="7" high="31"/>
++	</reg32>
++	<reg32 offset="3" name="3"/>
++</domain>
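++
++<!--
++	Worked example for the fixed-point LOD fields above (assuming, as
++	elsewhere in these files, that radix="8" means 8 fractional bits):
++	MAX_LOD/MIN_LOD are unsigned 4.8 fixed point, so a clamp of 5.5 is
++	encoded as round(5.5 * 256) = 1408 (0x580); LOD_BIAS is signed, so a
++	bias of -1.0 is encoded as -256 in two's complement within the field.
++-->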
++
++<domain name="A6XX_TEX_CONST" width="32" varset="chip">
++	<doc>Texture constant dwords</doc>
++	<enum name="a6xx_tex_swiz"> <!-- same as a4xx? -->
++		<value name="A6XX_TEX_X" value="0"/>
++		<value name="A6XX_TEX_Y" value="1"/>
++		<value name="A6XX_TEX_Z" value="2"/>
++		<value name="A6XX_TEX_W" value="3"/>
++		<value name="A6XX_TEX_ZERO" value="4"/>
++		<value name="A6XX_TEX_ONE" value="5"/>
++	</enum>
++	<reg32 offset="0" name="0">
++		<bitfield name="TILE_MODE" low="0" high="1" type="a6xx_tile_mode"/>
++		<bitfield name="SRGB" pos="2" type="boolean"/>
++		<bitfield name="SWIZ_X" low="4" high="6" type="a6xx_tex_swiz"/>
++		<bitfield name="SWIZ_Y" low="7" high="9" type="a6xx_tex_swiz"/>
++		<bitfield name="SWIZ_Z" low="10" high="12" type="a6xx_tex_swiz"/>
++		<bitfield name="SWIZ_W" low="13" high="15" type="a6xx_tex_swiz"/>
++		<bitfield name="MIPLVLS" low="16" high="19" type="uint"/>
++		<!-- overlaps with MIPLVLS -->
++		<bitfield name="CHROMA_MIDPOINT_X" pos="16" type="boolean"/>
++		<bitfield name="CHROMA_MIDPOINT_Y" pos="18" type="boolean"/>
++		<bitfield name="SAMPLES" low="20" high="21" type="a3xx_msaa_samples"/>
++		<bitfield name="FMT" low="22" high="29" type="a6xx_format"/>
++		<!--
++			Why is the swap needed in addition to SWIZ_*? The swap
++			is performed before border color replacement, while the
++			swizzle is applied after it.
++		-->
++		<bitfield name="SWAP" low="30" high="31" type="a3xx_color_swap"/>
++	</reg32>
++	<reg32 offset="1" name="1">
++		<bitfield name="WIDTH" low="0" high="14" type="uint"/>
++		<bitfield name="HEIGHT" low="15" high="29" type="uint"/>
++		<bitfield name="MUTABLEEN" pos="31" type="boolean" variants="A7XX-"/>
++	</reg32>
++	<reg32 offset="2" name="2">
++		<!--
++			These fields overlap PITCH, and are used instead of
++			PITCH/PITCHALIGN when TYPE is A6XX_TEX_BUFFER.
++		 -->
++		<doc> probably for D3D structured UAVs, normally set to 1 </doc>
++		<bitfield name="STRUCTSIZETEXELS" low="4" high="15" type="uint"/>
++		<bitfield name="STARTOFFSETTEXELS" low="16" high="21" type="uint"/>
++
++		<!-- minimum pitch (for mipmap levels): log2(pitchalign / 64) -->
++		<bitfield name="PITCHALIGN" low="0" high="3" type="uint"/>
++		<doc>Pitch in bytes (so actually stride)</doc>
++		<bitfield name="PITCH" low="7" high="28" type="uint"/>
++		<bitfield name="TYPE" low="29" high="31" type="a6xx_tex_type"/>
++	</reg32>
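++	<!--
++		Example for the fields above: PITCH is the stride in bytes, and
++		the PITCHALIGN encoding follows the log2 rule in the comment, so
++		a 256-byte minimum pitch alignment is encoded as
++		log2(256 / 64) = 2.
++	-->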
++	<reg32 offset="3" name="3">
++		<!--
++		ARRAY_PITCH is basically LAYERSZ for the first mipmap level. For
++		3d textures (laid out mipmap level first), MIN_LAYERSZ is the
++		layer size at the point where it stops being reduced when moving
++		to higher (smaller) mipmap levels.
++		 -->
++		<bitfield name="ARRAY_PITCH" low="0" high="22" shr="12" type="uint"/>
++		<bitfield name="MIN_LAYERSZ" low="23" high="26" shr="12"/>
++		<!--
++		by default levels with w < 16 are linear
++		TILE_ALL makes all levels have tiling
++		seems required when using UBWC, since all levels have UBWC (can possibly be disabled?)
++		 -->
++		<bitfield name="TILE_ALL" pos="27" type="boolean"/>
++		<bitfield name="FLAG" pos="28" type="boolean"/>
++	</reg32>
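++	<!--
++		The shr="12" encoding above means the field stores the value
++		shifted right by 12, i.e. ARRAY_PITCH and MIN_LAYERSZ must be
++		4 KiB aligned: a layer size of 0x20000 bytes would be encoded
++		as 0x20.
++	-->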
++	<!-- For 2-3 plane formats, BASE is the flag buffer address (if
++	     enabled); the address of the non-flag base buffer is determined
++	     automatically, and must follow the flag buffer.
++	 -->
++	<reg32 offset="4" name="4">
++		<bitfield name="BASE_LO" low="5" high="31" shr="5"/>
++	</reg32>
++	<reg32 offset="5" name="5">
++		<bitfield name="BASE_HI" low="0" high="16"/>
++		<bitfield name="DEPTH" low="17" high="29" type="uint"/>
++	</reg32>
++	<reg32 offset="6" name="6">
++		<!-- overlaps with PLANE_PITCH -->
++		<bitfield name="MIN_LOD_CLAMP" low="0" high="11" type="ufixed" radix="8"/>
++		<!-- pitch for plane 2 / plane 3 -->
++		<bitfield name="PLANE_PITCH" low="8" high="31" type="uint"/>
++	</reg32>
++	<!-- 7/8 is plane 2 address for planar formats -->
++	<reg32 offset="7" name="7">
++		<bitfield name="FLAG_LO" low="5" high="31" shr="5"/>
++	</reg32>
++	<reg32 offset="8" name="8">
++		<bitfield name="FLAG_HI" low="0" high="16"/>
++	</reg32>
++	<!-- 9/10 is plane 3 address for planar formats -->
++	<reg32 offset="9" name="9">
++		<bitfield name="FLAG_BUFFER_ARRAY_PITCH" low="0" high="16" shr="4" type="uint"/>
++	</reg32>
++	<reg32 offset="10" name="10">
++		<bitfield name="FLAG_BUFFER_PITCH" low="0" high="6" shr="6" type="uint"/>
++		<!-- log2 size of the first level, required for mipmapping -->
++		<bitfield name="FLAG_BUFFER_LOGW" low="8" high="11" type="uint"/>
++		<bitfield name="FLAG_BUFFER_LOGH" low="12" high="15" type="uint"/>
++	</reg32>
++	<reg32 offset="11" name="11"/>
++	<reg32 offset="12" name="12"/>
++	<reg32 offset="13" name="13"/>
++	<reg32 offset="14" name="14"/>
++	<reg32 offset="15" name="15"/>
++</domain>
++
++<domain name="A6XX_UBO" width="32">
++	<reg32 offset="0" name="0">
++		<bitfield name="BASE_LO" low="0" high="31"/>
++	</reg32>
++	<reg32 offset="1" name="1">
++		<bitfield name="BASE_HI" low="0" high="16"/>
++		<bitfield name="SIZE" low="17" high="31"/> <!-- size in vec4 (4xDWORD) units -->
++	</reg32>
++</domain>
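++
++<!--
++	Example for A6XX_UBO: BASE_LO/BASE_HI form the buffer address, and
++	SIZE is in vec4 (16-byte) units, so a 64 KiB UBO is described with
++	SIZE = 65536 / 16 = 4096.
++-->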
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_enums.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx_enums.xml
+new file mode 100644
+index 00000000000000..665539b098c632
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_enums.xml
+@@ -0,0 +1,383 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a6xx_tile_mode">
++	<value name="TILE6_LINEAR" value="0"/>
++	<value name="TILE6_2" value="2"/>
++	<value name="TILE6_3" value="3"/>
++</enum>
++
++<enum name="a6xx_format">
++	<value value="0x02" name="FMT6_A8_UNORM"/>
++	<value value="0x03" name="FMT6_8_UNORM"/>
++	<value value="0x04" name="FMT6_8_SNORM"/>
++	<value value="0x05" name="FMT6_8_UINT"/>
++	<value value="0x06" name="FMT6_8_SINT"/>
++
++	<value value="0x08" name="FMT6_4_4_4_4_UNORM"/>
++	<value value="0x0a" name="FMT6_5_5_5_1_UNORM"/>
++	<value value="0x0c" name="FMT6_1_5_5_5_UNORM"/> <!-- read only -->
++	<value value="0x0e" name="FMT6_5_6_5_UNORM"/>
++
++	<value value="0x0f" name="FMT6_8_8_UNORM"/>
++	<value value="0x10" name="FMT6_8_8_SNORM"/>
++	<value value="0x11" name="FMT6_8_8_UINT"/>
++	<value value="0x12" name="FMT6_8_8_SINT"/>
++	<value value="0x13" name="FMT6_L8_A8_UNORM"/>
++
++	<value value="0x15" name="FMT6_16_UNORM"/>
++	<value value="0x16" name="FMT6_16_SNORM"/>
++	<value value="0x17" name="FMT6_16_FLOAT"/>
++	<value value="0x18" name="FMT6_16_UINT"/>
++	<value value="0x19" name="FMT6_16_SINT"/>
++
++	<value value="0x21" name="FMT6_8_8_8_UNORM"/>
++	<value value="0x22" name="FMT6_8_8_8_SNORM"/>
++	<value value="0x23" name="FMT6_8_8_8_UINT"/>
++	<value value="0x24" name="FMT6_8_8_8_SINT"/>
++
++	<value value="0x30" name="FMT6_8_8_8_8_UNORM"/>
++	<value value="0x31" name="FMT6_8_8_8_X8_UNORM"/> <!-- samples 1 for alpha -->
++	<value value="0x32" name="FMT6_8_8_8_8_SNORM"/>
++	<value value="0x33" name="FMT6_8_8_8_8_UINT"/>
++	<value value="0x34" name="FMT6_8_8_8_8_SINT"/>
++
++	<value value="0x35" name="FMT6_9_9_9_E5_FLOAT"/>
++
++	<value value="0x36" name="FMT6_10_10_10_2_UNORM"/>
++	<value value="0x37" name="FMT6_10_10_10_2_UNORM_DEST"/>
++	<value value="0x39" name="FMT6_10_10_10_2_SNORM"/>
++	<value value="0x3a" name="FMT6_10_10_10_2_UINT"/>
++	<value value="0x3b" name="FMT6_10_10_10_2_SINT"/>
++
++	<value value="0x42" name="FMT6_11_11_10_FLOAT"/>
++
++	<value value="0x43" name="FMT6_16_16_UNORM"/>
++	<value value="0x44" name="FMT6_16_16_SNORM"/>
++	<value value="0x45" name="FMT6_16_16_FLOAT"/>
++	<value value="0x46" name="FMT6_16_16_UINT"/>
++	<value value="0x47" name="FMT6_16_16_SINT"/>
++
++	<value value="0x48" name="FMT6_32_UNORM"/>
++	<value value="0x49" name="FMT6_32_SNORM"/>
++	<value value="0x4a" name="FMT6_32_FLOAT"/>
++	<value value="0x4b" name="FMT6_32_UINT"/>
++	<value value="0x4c" name="FMT6_32_SINT"/>
++	<value value="0x4d" name="FMT6_32_FIXED"/>
++
++	<value value="0x58" name="FMT6_16_16_16_UNORM"/>
++	<value value="0x59" name="FMT6_16_16_16_SNORM"/>
++	<value value="0x5a" name="FMT6_16_16_16_FLOAT"/>
++	<value value="0x5b" name="FMT6_16_16_16_UINT"/>
++	<value value="0x5c" name="FMT6_16_16_16_SINT"/>
++
++	<value value="0x60" name="FMT6_16_16_16_16_UNORM"/>
++	<value value="0x61" name="FMT6_16_16_16_16_SNORM"/>
++	<value value="0x62" name="FMT6_16_16_16_16_FLOAT"/>
++	<value value="0x63" name="FMT6_16_16_16_16_UINT"/>
++	<value value="0x64" name="FMT6_16_16_16_16_SINT"/>
++
++	<value value="0x65" name="FMT6_32_32_UNORM"/>
++	<value value="0x66" name="FMT6_32_32_SNORM"/>
++	<value value="0x67" name="FMT6_32_32_FLOAT"/>
++	<value value="0x68" name="FMT6_32_32_UINT"/>
++	<value value="0x69" name="FMT6_32_32_SINT"/>
++	<value value="0x6a" name="FMT6_32_32_FIXED"/>
++
++	<value value="0x70" name="FMT6_32_32_32_UNORM"/>
++	<value value="0x71" name="FMT6_32_32_32_SNORM"/>
++	<value value="0x72" name="FMT6_32_32_32_UINT"/>
++	<value value="0x73" name="FMT6_32_32_32_SINT"/>
++	<value value="0x74" name="FMT6_32_32_32_FLOAT"/>
++	<value value="0x75" name="FMT6_32_32_32_FIXED"/>
++
++	<value value="0x80" name="FMT6_32_32_32_32_UNORM"/>
++	<value value="0x81" name="FMT6_32_32_32_32_SNORM"/>
++	<value value="0x82" name="FMT6_32_32_32_32_FLOAT"/>
++	<value value="0x83" name="FMT6_32_32_32_32_UINT"/>
++	<value value="0x84" name="FMT6_32_32_32_32_SINT"/>
++	<value value="0x85" name="FMT6_32_32_32_32_FIXED"/>
++
++	<value value="0x8c" name="FMT6_G8R8B8R8_422_UNORM"/> <!-- UYVY -->
++	<value value="0x8d" name="FMT6_R8G8R8B8_422_UNORM"/> <!-- YUYV -->
++	<value value="0x8e" name="FMT6_R8_G8B8_2PLANE_420_UNORM"/> <!-- NV12 -->
++	<value value="0x8f" name="FMT6_NV21"/>
++	<value value="0x90" name="FMT6_R8_G8_B8_3PLANE_420_UNORM"/> <!-- YV12 -->
++
++	<value value="0x91" name="FMT6_Z24_UNORM_S8_UINT_AS_R8G8B8A8"/>
++
++	<!-- Note: tiling/UBWC for these may be different from equivalent
++	formats. For example, FMT6_NV12_Y is not compatible with FMT6_8_UNORM.
++	-->
++	<value value="0x94" name="FMT6_NV12_Y"/>
++	<value value="0x95" name="FMT6_NV12_UV"/>
++	<value value="0x96" name="FMT6_NV12_VU"/>
++	<value value="0x97" name="FMT6_NV12_4R"/>
++	<value value="0x98" name="FMT6_NV12_4R_Y"/>
++	<value value="0x99" name="FMT6_NV12_4R_UV"/>
++	<value value="0x9a" name="FMT6_P010"/>
++	<value value="0x9b" name="FMT6_P010_Y"/>
++	<value value="0x9c" name="FMT6_P010_UV"/>
++	<value value="0x9d" name="FMT6_TP10"/>
++	<value value="0x9e" name="FMT6_TP10_Y"/>
++	<value value="0x9f" name="FMT6_TP10_UV"/>
++
++	<value value="0xa0" name="FMT6_Z24_UNORM_S8_UINT"/>
++
++	<value value="0xab" name="FMT6_ETC2_RG11_UNORM"/>
++	<value value="0xac" name="FMT6_ETC2_RG11_SNORM"/>
++	<value value="0xad" name="FMT6_ETC2_R11_UNORM"/>
++	<value value="0xae" name="FMT6_ETC2_R11_SNORM"/>
++	<value value="0xaf" name="FMT6_ETC1"/>
++	<value value="0xb0" name="FMT6_ETC2_RGB8"/>
++	<value value="0xb1" name="FMT6_ETC2_RGBA8"/>
++	<value value="0xb2" name="FMT6_ETC2_RGB8A1"/>
++	<value value="0xb3" name="FMT6_DXT1"/>
++	<value value="0xb4" name="FMT6_DXT3"/>
++	<value value="0xb5" name="FMT6_DXT5"/>
++	<value value="0xb6" name="FMT6_RGTC1_UNORM"/>
++	<value value="0xb7" name="FMT6_RGTC1_UNORM_FAST"/>
++	<value value="0xb8" name="FMT6_RGTC1_SNORM"/>
++	<value value="0xb9" name="FMT6_RGTC1_SNORM_FAST"/>
++	<value value="0xba" name="FMT6_RGTC2_UNORM"/>
++	<value value="0xbb" name="FMT6_RGTC2_UNORM_FAST"/>
++	<value value="0xbc" name="FMT6_RGTC2_SNORM"/>
++	<value value="0xbd" name="FMT6_RGTC2_SNORM_FAST"/>
++	<value value="0xbe" name="FMT6_BPTC_UFLOAT"/>
++	<value value="0xbf" name="FMT6_BPTC_FLOAT"/>
++	<value value="0xc0" name="FMT6_BPTC"/>
++	<value value="0xc1" name="FMT6_ASTC_4x4"/>
++	<value value="0xc2" name="FMT6_ASTC_5x4"/>
++	<value value="0xc3" name="FMT6_ASTC_5x5"/>
++	<value value="0xc4" name="FMT6_ASTC_6x5"/>
++	<value value="0xc5" name="FMT6_ASTC_6x6"/>
++	<value value="0xc6" name="FMT6_ASTC_8x5"/>
++	<value value="0xc7" name="FMT6_ASTC_8x6"/>
++	<value value="0xc8" name="FMT6_ASTC_8x8"/>
++	<value value="0xc9" name="FMT6_ASTC_10x5"/>
++	<value value="0xca" name="FMT6_ASTC_10x6"/>
++	<value value="0xcb" name="FMT6_ASTC_10x8"/>
++	<value value="0xcc" name="FMT6_ASTC_10x10"/>
++	<value value="0xcd" name="FMT6_ASTC_12x10"/>
++	<value value="0xce" name="FMT6_ASTC_12x12"/>
++
++	<!-- for sampling stencil (integer, 2nd channel), not available on a630 -->
++	<value value="0xea" name="FMT6_Z24_UINT_S8_UINT"/>
++
++	<!-- Not a hw enum, used internally in driver -->
++	<value value="0xff" name="FMT6_NONE"/>
++
++</enum>
++
++<!-- probably same as a5xx -->
++<enum name="a6xx_polygon_mode">
++	<value name="POLYMODE6_POINTS" value="1"/>
++	<value name="POLYMODE6_LINES" value="2"/>
++	<value name="POLYMODE6_TRIANGLES" value="3"/>
++</enum>
++
++<enum name="a6xx_depth_format">
++	<value name="DEPTH6_NONE" value="0"/>
++	<value name="DEPTH6_16" value="1"/>
++	<value name="DEPTH6_24_8" value="2"/>
++	<value name="DEPTH6_32" value="4"/>
++</enum>
++
++<bitset name="a6x_cp_protect" inline="yes">
++	<bitfield name="BASE_ADDR" low="0" high="17"/>
++	<bitfield name="MASK_LEN" low="18" high="30"/>
++	<bitfield name="READ" pos="31" type="boolean"/>
++</bitset>
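++
++<!--
++	Example encoding derived directly from the bitfields above: a range
++	of 0x100 registers starting at 0x10000 is described as
++	(read_bit << 31) | (0x100 << 18) | 0x10000, where the READ bit
++	selects the read-protection behaviour.
++-->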
++
++<enum name="a6xx_shader_id">
++	<value value="0x9" name="A6XX_TP0_TMO_DATA"/>
++	<value value="0xa" name="A6XX_TP0_SMO_DATA"/>
++	<value value="0xb" name="A6XX_TP0_MIPMAP_BASE_DATA"/>
++	<value value="0x19" name="A6XX_TP1_TMO_DATA"/>
++	<value value="0x1a" name="A6XX_TP1_SMO_DATA"/>
++	<value value="0x1b" name="A6XX_TP1_MIPMAP_BASE_DATA"/>
++	<value value="0x29" name="A6XX_SP_INST_DATA"/>
++	<value value="0x2a" name="A6XX_SP_LB_0_DATA"/>
++	<value value="0x2b" name="A6XX_SP_LB_1_DATA"/>
++	<value value="0x2c" name="A6XX_SP_LB_2_DATA"/>
++	<value value="0x2d" name="A6XX_SP_LB_3_DATA"/>
++	<value value="0x2e" name="A6XX_SP_LB_4_DATA"/>
++	<value value="0x2f" name="A6XX_SP_LB_5_DATA"/>
++	<value value="0x30" name="A6XX_SP_CB_BINDLESS_DATA"/>
++	<value value="0x31" name="A6XX_SP_CB_LEGACY_DATA"/>
++	<value value="0x32" name="A6XX_SP_GFX_UAV_BASE_DATA"/>
++	<value value="0x33" name="A6XX_SP_INST_TAG"/>
++	<value value="0x34" name="A6XX_SP_CB_BINDLESS_TAG"/>
++	<value value="0x35" name="A6XX_SP_TMO_UMO_TAG"/>
++	<value value="0x36" name="A6XX_SP_SMO_TAG"/>
++	<value value="0x37" name="A6XX_SP_STATE_DATA"/>
++	<value value="0x49" name="A6XX_HLSQ_CHUNK_CVS_RAM"/>
++	<value value="0x4a" name="A6XX_HLSQ_CHUNK_CPS_RAM"/>
++	<value value="0x4b" name="A6XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
++	<value value="0x4c" name="A6XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
++	<value value="0x4d" name="A6XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
++	<value value="0x4e" name="A6XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
++	<value value="0x50" name="A6XX_HLSQ_CVS_MISC_RAM"/>
++	<value value="0x51" name="A6XX_HLSQ_CPS_MISC_RAM"/>
++	<value value="0x52" name="A6XX_HLSQ_INST_RAM"/>
++	<value value="0x53" name="A6XX_HLSQ_GFX_CVS_CONST_RAM"/>
++	<value value="0x54" name="A6XX_HLSQ_GFX_CPS_CONST_RAM"/>
++	<value value="0x55" name="A6XX_HLSQ_CVS_MISC_RAM_TAG"/>
++	<value value="0x56" name="A6XX_HLSQ_CPS_MISC_RAM_TAG"/>
++	<value value="0x57" name="A6XX_HLSQ_INST_RAM_TAG"/>
++	<value value="0x58" name="A6XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
++	<value value="0x59" name="A6XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
++	<value value="0x5a" name="A6XX_HLSQ_PWR_REST_RAM"/>
++	<value value="0x5b" name="A6XX_HLSQ_PWR_REST_TAG"/>
++	<value value="0x60" name="A6XX_HLSQ_DATAPATH_META"/>
++	<value value="0x61" name="A6XX_HLSQ_FRONTEND_META"/>
++	<value value="0x62" name="A6XX_HLSQ_INDIRECT_META"/>
++	<value value="0x63" name="A6XX_HLSQ_BACKEND_META"/>
++	<value value="0x70" name="A6XX_SP_LB_6_DATA"/>
++	<value value="0x71" name="A6XX_SP_LB_7_DATA"/>
++	<value value="0x73" name="A6XX_HLSQ_INST_RAM_1"/>
++</enum>
++
++<enum name="a6xx_debugbus_id">
++	<value value="0x1" name="A6XX_DBGBUS_CP"/>
++	<value value="0x2" name="A6XX_DBGBUS_RBBM"/>
++	<value value="0x3" name="A6XX_DBGBUS_VBIF"/>
++	<value value="0x4" name="A6XX_DBGBUS_HLSQ"/>
++	<value value="0x5" name="A6XX_DBGBUS_UCHE"/>
++	<value value="0x6" name="A6XX_DBGBUS_DPM"/>
++	<value value="0x7" name="A6XX_DBGBUS_TESS"/>
++	<value value="0x8" name="A6XX_DBGBUS_PC"/>
++	<value value="0x9" name="A6XX_DBGBUS_VFDP"/>
++	<value value="0xa" name="A6XX_DBGBUS_VPC"/>
++	<value value="0xb" name="A6XX_DBGBUS_TSE"/>
++	<value value="0xc" name="A6XX_DBGBUS_RAS"/>
++	<value value="0xd" name="A6XX_DBGBUS_VSC"/>
++	<value value="0xe" name="A6XX_DBGBUS_COM"/>
++	<value value="0x10" name="A6XX_DBGBUS_LRZ"/>
++	<value value="0x11" name="A6XX_DBGBUS_A2D"/>
++	<value value="0x12" name="A6XX_DBGBUS_CCUFCHE"/>
++	<value value="0x13" name="A6XX_DBGBUS_GMU_CX"/>
++	<value value="0x14" name="A6XX_DBGBUS_RBP"/>
++	<value value="0x15" name="A6XX_DBGBUS_DCS"/>
++	<value value="0x16" name="A6XX_DBGBUS_DBGC"/>
++	<value value="0x17" name="A6XX_DBGBUS_CX"/>
++	<value value="0x18" name="A6XX_DBGBUS_GMU_GX"/>
++	<value value="0x19" name="A6XX_DBGBUS_TPFCHE"/>
++	<value value="0x1a" name="A6XX_DBGBUS_GBIF_GX"/>
++	<value value="0x1d" name="A6XX_DBGBUS_GPC"/>
++	<value value="0x1e" name="A6XX_DBGBUS_LARC"/>
++	<value value="0x1f" name="A6XX_DBGBUS_HLSQ_SPTP"/>
++	<value value="0x20" name="A6XX_DBGBUS_RB_0"/>
++	<value value="0x21" name="A6XX_DBGBUS_RB_1"/>
++	<value value="0x22" name="A6XX_DBGBUS_RB_2"/>
++	<value value="0x24" name="A6XX_DBGBUS_UCHE_WRAPPER"/>
++	<value value="0x28" name="A6XX_DBGBUS_CCU_0"/>
++	<value value="0x29" name="A6XX_DBGBUS_CCU_1"/>
++	<value value="0x2a" name="A6XX_DBGBUS_CCU_2"/>
++	<value value="0x38" name="A6XX_DBGBUS_VFD_0"/>
++	<value value="0x39" name="A6XX_DBGBUS_VFD_1"/>
++	<value value="0x3a" name="A6XX_DBGBUS_VFD_2"/>
++	<value value="0x3b" name="A6XX_DBGBUS_VFD_3"/>
++	<value value="0x3c" name="A6XX_DBGBUS_VFD_4"/>
++	<value value="0x3d" name="A6XX_DBGBUS_VFD_5"/>
++	<value value="0x40" name="A6XX_DBGBUS_SP_0"/>
++	<value value="0x41" name="A6XX_DBGBUS_SP_1"/>
++	<value value="0x42" name="A6XX_DBGBUS_SP_2"/>
++	<value value="0x48" name="A6XX_DBGBUS_TPL1_0"/>
++	<value value="0x49" name="A6XX_DBGBUS_TPL1_1"/>
++	<value value="0x4a" name="A6XX_DBGBUS_TPL1_2"/>
++	<value value="0x4b" name="A6XX_DBGBUS_TPL1_3"/>
++	<value value="0x4c" name="A6XX_DBGBUS_TPL1_4"/>
++	<value value="0x4d" name="A6XX_DBGBUS_TPL1_5"/>
++	<value value="0x58" name="A6XX_DBGBUS_SPTP_0"/>
++	<value value="0x59" name="A6XX_DBGBUS_SPTP_1"/>
++	<value value="0x5a" name="A6XX_DBGBUS_SPTP_2"/>
++	<value value="0x5b" name="A6XX_DBGBUS_SPTP_3"/>
++	<value value="0x5c" name="A6XX_DBGBUS_SPTP_4"/>
++	<value value="0x5d" name="A6XX_DBGBUS_SPTP_5"/>
++</enum>
++
++<!--
++Used in a6xx_a2d_bit_cntl.. the value mostly seems to correlate with the
++component type/size, so I think it relates to the internal format used for
++blending?  The one exception is that 16b unorm and 32b float use the
++same value... maybe 16b unorm is uncommon enough that it was just easier
++to upconvert to 32b float internally?
++
++ 8b unorm:  10 (sometimes 0, is the high bit part of something else?)
++16b unorm:   4
++
++32b int:     7
++16b int:     6
++ 8b int:     5
++
++32b float:   4
++16b float:   3
++ -->
++<enum name="a6xx_2d_ifmt">
++	<value value="0x10" name="R2D_UNORM8"/>
++	<value value="0x7"  name="R2D_INT32"/>
++	<value value="0x6"  name="R2D_INT16"/>
++	<value value="0x5"  name="R2D_INT8"/>
++	<value value="0x4"  name="R2D_FLOAT32"/>
++	<value value="0x3"  name="R2D_FLOAT16"/>
++	<value value="0x1"  name="R2D_UNORM8_SRGB"/>
++	<value value="0x0"  name="R2D_RAW"/>
++</enum>
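++
++<!--
++	Illustrative mapping implied by the table above (pseudo-C sketch with
++	hypothetical predicate names, not actual driver logic):
++
++	  if (is_srgb)        ifmt = R2D_UNORM8_SRGB;
++	  else if (is_unorm8) ifmt = R2D_UNORM8;
++	  else if (is_int)    ifmt = size == 32 ? R2D_INT32 :
++	                             size == 16 ? R2D_INT16 : R2D_INT8;
++	  else if (is_float)  ifmt = size == 16 ? R2D_FLOAT16 : R2D_FLOAT32;
++	  else                ifmt = R2D_FLOAT32;  /* 16b unorm upconverts */
++-->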
++
++<enum name="a6xx_tex_type">
++	<value name="A6XX_TEX_1D" value="0"/>
++	<value name="A6XX_TEX_2D" value="1"/>
++	<value name="A6XX_TEX_CUBE" value="2"/>
++	<value name="A6XX_TEX_3D" value="3"/>
++	<value name="A6XX_TEX_BUFFER" value="4"/>
++	<doc>
++		A special buffer type for use as the source of buffer-to-image
++		copies, with lower alignment requirements than A6XX_TEX_2D.
++		Available since A7XX.
++	</doc>
++	<value name="A6XX_TEX_IMG_BUFFER" value="5"/>
++</enum>
++
++<enum name="a6xx_ztest_mode">
++	<doc>Allow early z-test and early-lrz (if applicable)</doc>
++	<value value="0x0" name="A6XX_EARLY_Z"/>
++	<doc>Disable early z-test and early-lrz test (if applicable)</doc>
++	<value value="0x1" name="A6XX_LATE_Z"/>
++	<doc>
++		A special mode that allows early-lrz (if applicable) or early-z
++		tests, but also does late-z tests at which point it writes depth.
++
++		This mode is used when fragment can be killed (via discard or
++		sample mask) after early-z tests and it writes depth. In such case
++		depth can be written only at late-z stage, but it's ok to use
++		early-z to discard fragments.
++
++		However this mode is not compatible with:
++		- Lack of D/S attachment
++		- Stencil writes on stencil or depth test failures
++		- Per-sample shading
++	</doc>
++	<value value="0x2" name="A6XX_EARLY_Z_LATE_Z"/>
++	<doc>Not a real hw value, used internally by mesa</doc>
++	<value value="0x3" name="A6XX_INVALID_ZTEST"/>
++</enum>
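++
++<!--
++	Mode selection implied by the docs above (pseudo-C sketch with
++	hypothetical predicate names, not actual driver logic):
++
++	  if (shader_writes_depth)
++	          mode = A6XX_LATE_Z;
++	  else if ((uses_discard || writes_sample_mask) && depth_write_enabled)
++	          mode = A6XX_EARLY_Z_LATE_Z;  /* early test, late depth write */
++	  else
++	          mode = A6XX_EARLY_Z;
++-->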
++
++<enum name="a6xx_tess_spacing">
++	<value value="0x0" name="TESS_EQUAL"/>
++	<value value="0x2" name="TESS_FRACTIONAL_ODD"/>
++	<value value="0x3" name="TESS_FRACTIONAL_EVEN"/>
++</enum>
++<enum name="a6xx_tess_output">
++	<value value="0x0" name="TESS_POINTS"/>
++	<value value="0x1" name="TESS_LINES"/>
++	<value value="0x2" name="TESS_CW_TRIS"/>
++	<value value="0x3" name="TESS_CCW_TRIS"/>
++</enum>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.xml
+new file mode 100644
+index 00000000000000..c446a2eb11202f
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.xml
+@@ -0,0 +1,600 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a6xx_cp_perfcounter_select">
++	<value value="0" name="PERF_CP_ALWAYS_COUNT"/>
++	<value value="1" name="PERF_CP_BUSY_GFX_CORE_IDLE"/>
++	<value value="2" name="PERF_CP_BUSY_CYCLES"/>
++	<value value="3" name="PERF_CP_NUM_PREEMPTIONS"/>
++	<value value="4" name="PERF_CP_PREEMPTION_REACTION_DELAY"/>
++	<value value="5" name="PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
++	<value value="6" name="PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
++	<value value="7" name="PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
++	<value value="8" name="PERF_CP_PREDICATED_DRAWS_KILLED"/>
++	<value value="9" name="PERF_CP_MODE_SWITCH"/>
++	<value value="10" name="PERF_CP_ZPASS_DONE"/>
++	<value value="11" name="PERF_CP_CONTEXT_DONE"/>
++	<value value="12" name="PERF_CP_CACHE_FLUSH"/>
++	<value value="13" name="PERF_CP_LONG_PREEMPTIONS"/>
++	<value value="14" name="PERF_CP_SQE_I_CACHE_STARVE"/>
++	<value value="15" name="PERF_CP_SQE_IDLE"/>
++	<value value="16" name="PERF_CP_SQE_PM4_STARVE_RB_IB"/>
++	<value value="17" name="PERF_CP_SQE_PM4_STARVE_SDS"/>
++	<value value="18" name="PERF_CP_SQE_MRB_STARVE"/>
++	<value value="19" name="PERF_CP_SQE_RRB_STARVE"/>
++	<value value="20" name="PERF_CP_SQE_VSD_STARVE"/>
++	<value value="21" name="PERF_CP_VSD_DECODE_STARVE"/>
++	<value value="22" name="PERF_CP_SQE_PIPE_OUT_STALL"/>
++	<value value="23" name="PERF_CP_SQE_SYNC_STALL"/>
++	<value value="24" name="PERF_CP_SQE_PM4_WFI_STALL"/>
++	<value value="25" name="PERF_CP_SQE_SYS_WFI_STALL"/>
++	<value value="26" name="PERF_CP_SQE_T4_EXEC"/>
++	<value value="27" name="PERF_CP_SQE_LOAD_STATE_EXEC"/>
++	<value value="28" name="PERF_CP_SQE_SAVE_SDS_STATE"/>
++	<value value="29" name="PERF_CP_SQE_DRAW_EXEC"/>
++	<value value="30" name="PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
++	<value value="31" name="PERF_CP_SQE_EXEC_PROFILED"/>
++	<value value="32" name="PERF_CP_MEMORY_POOL_EMPTY"/>
++	<value value="33" name="PERF_CP_MEMORY_POOL_SYNC_STALL"/>
++	<value value="34" name="PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
++	<value value="35" name="PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
++	<value value="36" name="PERF_CP_AHB_STALL_SQE_GMU"/>
++	<value value="37" name="PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
++	<value value="38" name="PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
++	<value value="39" name="PERF_CP_CLUSTER0_EMPTY"/>
++	<value value="40" name="PERF_CP_CLUSTER1_EMPTY"/>
++	<value value="41" name="PERF_CP_CLUSTER2_EMPTY"/>
++	<value value="42" name="PERF_CP_CLUSTER3_EMPTY"/>
++	<value value="43" name="PERF_CP_CLUSTER4_EMPTY"/>
++	<value value="44" name="PERF_CP_CLUSTER5_EMPTY"/>
++	<value value="45" name="PERF_CP_PM4_DATA"/>
++	<value value="46" name="PERF_CP_PM4_HEADERS"/>
++	<value value="47" name="PERF_CP_VBIF_READ_BEATS"/>
++	<value value="48" name="PERF_CP_VBIF_WRITE_BEATS"/>
++	<value value="49" name="PERF_CP_SQE_INSTR_COUNTER"/>
++</enum>
++
++<enum name="a6xx_rbbm_perfcounter_select">
++	<value value="0" name="PERF_RBBM_ALWAYS_COUNT"/>
++	<value value="1" name="PERF_RBBM_ALWAYS_ON"/>
++	<value value="2" name="PERF_RBBM_TSE_BUSY"/>
++	<value value="3" name="PERF_RBBM_RAS_BUSY"/>
++	<value value="4" name="PERF_RBBM_PC_DCALL_BUSY"/>
++	<value value="5" name="PERF_RBBM_PC_VSD_BUSY"/>
++	<value value="6" name="PERF_RBBM_STATUS_MASKED"/>
++	<value value="7" name="PERF_RBBM_COM_BUSY"/>
++	<value value="8" name="PERF_RBBM_DCOM_BUSY"/>
++	<value value="9" name="PERF_RBBM_VBIF_BUSY"/>
++	<value value="10" name="PERF_RBBM_VSC_BUSY"/>
++	<value value="11" name="PERF_RBBM_TESS_BUSY"/>
++	<value value="12" name="PERF_RBBM_UCHE_BUSY"/>
++	<value value="13" name="PERF_RBBM_HLSQ_BUSY"/>
++</enum>
++
++<enum name="a6xx_pc_perfcounter_select">
++	<value value="0" name="PERF_PC_BUSY_CYCLES"/>
++	<value value="1" name="PERF_PC_WORKING_CYCLES"/>
++	<value value="2" name="PERF_PC_STALL_CYCLES_VFD"/>
++	<value value="3" name="PERF_PC_STALL_CYCLES_TSE"/>
++	<value value="4" name="PERF_PC_STALL_CYCLES_VPC"/>
++	<value value="5" name="PERF_PC_STALL_CYCLES_UCHE"/>
++	<value value="6" name="PERF_PC_STALL_CYCLES_TESS"/>
++	<value value="7" name="PERF_PC_STALL_CYCLES_TSE_ONLY"/>
++	<value value="8" name="PERF_PC_STALL_CYCLES_VPC_ONLY"/>
++	<value value="9" name="PERF_PC_PASS1_TF_STALL_CYCLES"/>
++	<value value="10" name="PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
++	<value value="11" name="PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
++	<value value="12" name="PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
++	<value value="13" name="PERF_PC_STARVE_CYCLES_FOR_POSITION"/>
++	<value value="14" name="PERF_PC_STARVE_CYCLES_DI"/>
++	<value value="15" name="PERF_PC_VIS_STREAMS_LOADED"/>
++	<value value="16" name="PERF_PC_INSTANCES"/>
++	<value value="17" name="PERF_PC_VPC_PRIMITIVES"/>
++	<value value="18" name="PERF_PC_DEAD_PRIM"/>
++	<value value="19" name="PERF_PC_LIVE_PRIM"/>
++	<value value="20" name="PERF_PC_VERTEX_HITS"/>
++	<value value="21" name="PERF_PC_IA_VERTICES"/>
++	<value value="22" name="PERF_PC_IA_PRIMITIVES"/>
++	<value value="23" name="PERF_PC_GS_PRIMITIVES"/>
++	<value value="24" name="PERF_PC_HS_INVOCATIONS"/>
++	<value value="25" name="PERF_PC_DS_INVOCATIONS"/>
++	<value value="26" name="PERF_PC_VS_INVOCATIONS"/>
++	<value value="27" name="PERF_PC_GS_INVOCATIONS"/>
++	<value value="28" name="PERF_PC_DS_PRIMITIVES"/>
++	<value value="29" name="PERF_PC_VPC_POS_DATA_TRANSACTION"/>
++	<value value="30" name="PERF_PC_3D_DRAWCALLS"/>
++	<value value="31" name="PERF_PC_2D_DRAWCALLS"/>
++	<value value="32" name="PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
++	<value value="33" name="PERF_TESS_BUSY_CYCLES"/>
++	<value value="34" name="PERF_TESS_WORKING_CYCLES"/>
++	<value value="35" name="PERF_TESS_STALL_CYCLES_PC"/>
++	<value value="36" name="PERF_TESS_STARVE_CYCLES_PC"/>
++	<value value="37" name="PERF_PC_TSE_TRANSACTION"/>
++	<value value="38" name="PERF_PC_TSE_VERTEX"/>
++	<value value="39" name="PERF_PC_TESS_PC_UV_TRANS"/>
++	<value value="40" name="PERF_PC_TESS_PC_UV_PATCHES"/>
++	<value value="41" name="PERF_PC_TESS_FACTOR_TRANS"/>
++</enum>
++
++<enum name="a6xx_vfd_perfcounter_select">
++	<value value="0" name="PERF_VFD_BUSY_CYCLES"/>
++	<value value="1" name="PERF_VFD_STALL_CYCLES_UCHE"/>
++	<value value="2" name="PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
++	<value value="3" name="PERF_VFD_STALL_CYCLES_SP_INFO"/>
++	<value value="4" name="PERF_VFD_STALL_CYCLES_SP_ATTR"/>
++	<value value="5" name="PERF_VFD_STARVE_CYCLES_UCHE"/>
++	<value value="6" name="PERF_VFD_RBUFFER_FULL"/>
++	<value value="7" name="PERF_VFD_ATTR_INFO_FIFO_FULL"/>
++	<value value="8" name="PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
++	<value value="9" name="PERF_VFD_NUM_ATTRIBUTES"/>
++	<value value="10" name="PERF_VFD_UPPER_SHADER_FIBERS"/>
++	<value value="11" name="PERF_VFD_LOWER_SHADER_FIBERS"/>
++	<value value="12" name="PERF_VFD_MODE_0_FIBERS"/>
++	<value value="13" name="PERF_VFD_MODE_1_FIBERS"/>
++	<value value="14" name="PERF_VFD_MODE_2_FIBERS"/>
++	<value value="15" name="PERF_VFD_MODE_3_FIBERS"/>
++	<value value="16" name="PERF_VFD_MODE_4_FIBERS"/>
++	<value value="17" name="PERF_VFD_TOTAL_VERTICES"/>
++	<value value="18" name="PERF_VFDP_STALL_CYCLES_VFD"/>
++	<value value="19" name="PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
++	<value value="20" name="PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
++	<value value="21" name="PERF_VFDP_STARVE_CYCLES_PC"/>
++	<value value="22" name="PERF_VFDP_VS_STAGE_WAVES"/>
++</enum>
++
++<enum name="a6xx_hlsq_perfcounter_select">
++	<value value="0" name="PERF_HLSQ_BUSY_CYCLES"/>
++	<value value="1" name="PERF_HLSQ_STALL_CYCLES_UCHE"/>
++	<value value="2" name="PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
++	<value value="3" name="PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
++	<value value="4" name="PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
++	<value value="5" name="PERF_HLSQ_UCHE_LATENCY_COUNT"/>
++	<value value="6" name="PERF_HLSQ_FS_STAGE_1X_WAVES"/>
++	<value value="7" name="PERF_HLSQ_FS_STAGE_2X_WAVES"/>
++	<value value="8" name="PERF_HLSQ_QUADS"/>
++	<value value="9" name="PERF_HLSQ_CS_INVOCATIONS"/>
++	<value value="10" name="PERF_HLSQ_COMPUTE_DRAWCALLS"/>
++	<value value="11" name="PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
++	<value value="12" name="PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
++	<value value="13" name="PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
++	<value value="14" name="PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
++	<value value="15" name="PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
++	<value value="16" name="PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
++	<value value="17" name="PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
++	<value value="18" name="PERF_HLSQ_STALL_CYCLES_VPC"/>
++	<value value="19" name="PERF_HLSQ_PIXELS"/>
++	<value value="20" name="PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
++</enum>
++
++<enum name="a6xx_vpc_perfcounter_select">
++	<value value="0" name="PERF_VPC_BUSY_CYCLES"/>
++	<value value="1" name="PERF_VPC_WORKING_CYCLES"/>
++	<value value="2" name="PERF_VPC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="PERF_VPC_STALL_CYCLES_VFD_WACK"/>
++	<value value="4" name="PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
++	<value value="5" name="PERF_VPC_STALL_CYCLES_PC"/>
++	<value value="6" name="PERF_VPC_STALL_CYCLES_SP_LM"/>
++	<value value="7" name="PERF_VPC_STARVE_CYCLES_SP"/>
++	<value value="8" name="PERF_VPC_STARVE_CYCLES_LRZ"/>
++	<value value="9" name="PERF_VPC_PC_PRIMITIVES"/>
++	<value value="10" name="PERF_VPC_SP_COMPONENTS"/>
++	<value value="11" name="PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
++	<value value="12" name="PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
++	<value value="13" name="PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
++	<value value="14" name="PERF_VPC_LM_TRANSACTION"/>
++	<value value="15" name="PERF_VPC_STREAMOUT_TRANSACTION"/>
++	<value value="16" name="PERF_VPC_VS_BUSY_CYCLES"/>
++	<value value="17" name="PERF_VPC_PS_BUSY_CYCLES"/>
++	<value value="18" name="PERF_VPC_VS_WORKING_CYCLES"/>
++	<value value="19" name="PERF_VPC_PS_WORKING_CYCLES"/>
++	<value value="20" name="PERF_VPC_STARVE_CYCLES_RB"/>
++	<value value="21" name="PERF_VPC_NUM_VPCRAM_READ_POS"/>
++	<value value="22" name="PERF_VPC_WIT_FULL_CYCLES"/>
++	<value value="23" name="PERF_VPC_VPCRAM_FULL_CYCLES"/>
++	<value value="24" name="PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
++	<value value="25" name="PERF_VPC_NUM_VPCRAM_WRITE"/>
++	<value value="26" name="PERF_VPC_NUM_VPCRAM_READ_SO"/>
++	<value value="27" name="PERF_VPC_NUM_ATTR_REQ_LM"/>
++</enum>
++
++<enum name="a6xx_tse_perfcounter_select">
++	<value value="0" name="PERF_TSE_BUSY_CYCLES"/>
++	<value value="1" name="PERF_TSE_CLIPPING_CYCLES"/>
++	<value value="2" name="PERF_TSE_STALL_CYCLES_RAS"/>
++	<value value="3" name="PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
++	<value value="4" name="PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
++	<value value="5" name="PERF_TSE_STARVE_CYCLES_PC"/>
++	<value value="6" name="PERF_TSE_INPUT_PRIM"/>
++	<value value="7" name="PERF_TSE_INPUT_NULL_PRIM"/>
++	<value value="8" name="PERF_TSE_TRIVAL_REJ_PRIM"/>
++	<value value="9" name="PERF_TSE_CLIPPED_PRIM"/>
++	<value value="10" name="PERF_TSE_ZERO_AREA_PRIM"/>
++	<value value="11" name="PERF_TSE_FACENESS_CULLED_PRIM"/>
++	<value value="12" name="PERF_TSE_ZERO_PIXEL_PRIM"/>
++	<value value="13" name="PERF_TSE_OUTPUT_NULL_PRIM"/>
++	<value value="14" name="PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
++	<value value="15" name="PERF_TSE_CINVOCATION"/>
++	<value value="16" name="PERF_TSE_CPRIMITIVES"/>
++	<value value="17" name="PERF_TSE_2D_INPUT_PRIM"/>
++	<value value="18" name="PERF_TSE_2D_ALIVE_CYCLES"/>
++	<value value="19" name="PERF_TSE_CLIP_PLANES"/>
++</enum>
++
++<enum name="a6xx_ras_perfcounter_select">
++	<value value="0" name="PERF_RAS_BUSY_CYCLES"/>
++	<value value="1" name="PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
++	<value value="2" name="PERF_RAS_STALL_CYCLES_LRZ"/>
++	<value value="3" name="PERF_RAS_STARVE_CYCLES_TSE"/>
++	<value value="4" name="PERF_RAS_SUPER_TILES"/>
++	<value value="5" name="PERF_RAS_8X4_TILES"/>
++	<value value="6" name="PERF_RAS_MASKGEN_ACTIVE"/>
++	<value value="7" name="PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
++	<value value="8" name="PERF_RAS_FULLY_COVERED_8X4_TILES"/>
++	<value value="9" name="PERF_RAS_PRIM_KILLED_INVISILBE"/>
++	<value value="10" name="PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
++	<value value="11" name="PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
++	<value value="12" name="PERF_RAS_BLOCKS"/>
++</enum>
++
++<enum name="a6xx_uche_perfcounter_select">
++	<value value="0" name="PERF_UCHE_BUSY_CYCLES"/>
++	<value value="1" name="PERF_UCHE_STALL_CYCLES_ARBITER"/>
++	<value value="2" name="PERF_UCHE_VBIF_LATENCY_CYCLES"/>
++	<value value="3" name="PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
++	<value value="4" name="PERF_UCHE_VBIF_READ_BEATS_TP"/>
++	<value value="5" name="PERF_UCHE_VBIF_READ_BEATS_VFD"/>
++	<value value="6" name="PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
++	<value value="7" name="PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
++	<value value="8" name="PERF_UCHE_VBIF_READ_BEATS_SP"/>
++	<value value="9" name="PERF_UCHE_READ_REQUESTS_TP"/>
++	<value value="10" name="PERF_UCHE_READ_REQUESTS_VFD"/>
++	<value value="11" name="PERF_UCHE_READ_REQUESTS_HLSQ"/>
++	<value value="12" name="PERF_UCHE_READ_REQUESTS_LRZ"/>
++	<value value="13" name="PERF_UCHE_READ_REQUESTS_SP"/>
++	<value value="14" name="PERF_UCHE_WRITE_REQUESTS_LRZ"/>
++	<value value="15" name="PERF_UCHE_WRITE_REQUESTS_SP"/>
++	<value value="16" name="PERF_UCHE_WRITE_REQUESTS_VPC"/>
++	<value value="17" name="PERF_UCHE_WRITE_REQUESTS_VSC"/>
++	<value value="18" name="PERF_UCHE_EVICTS"/>
++	<value value="19" name="PERF_UCHE_BANK_REQ0"/>
++	<value value="20" name="PERF_UCHE_BANK_REQ1"/>
++	<value value="21" name="PERF_UCHE_BANK_REQ2"/>
++	<value value="22" name="PERF_UCHE_BANK_REQ3"/>
++	<value value="23" name="PERF_UCHE_BANK_REQ4"/>
++	<value value="24" name="PERF_UCHE_BANK_REQ5"/>
++	<value value="25" name="PERF_UCHE_BANK_REQ6"/>
++	<value value="26" name="PERF_UCHE_BANK_REQ7"/>
++	<value value="27" name="PERF_UCHE_VBIF_READ_BEATS_CH0"/>
++	<value value="28" name="PERF_UCHE_VBIF_READ_BEATS_CH1"/>
++	<value value="29" name="PERF_UCHE_GMEM_READ_BEATS"/>
++	<value value="30" name="PERF_UCHE_TPH_REF_FULL"/>
++	<value value="31" name="PERF_UCHE_TPH_VICTIM_FULL"/>
++	<value value="32" name="PERF_UCHE_TPH_EXT_FULL"/>
++	<value value="33" name="PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
++	<value value="34" name="PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
++	<value value="35" name="PERF_UCHE_DCMP_LATENCY_CYCLES"/>
++	<value value="36" name="PERF_UCHE_VBIF_READ_BEATS_PC"/>
++	<value value="37" name="PERF_UCHE_READ_REQUESTS_PC"/>
++	<value value="38" name="PERF_UCHE_RAM_READ_REQ"/>
++	<value value="39" name="PERF_UCHE_RAM_WRITE_REQ"/>
++</enum>
++
++<enum name="a6xx_tp_perfcounter_select">
++	<value value="0" name="PERF_TP_BUSY_CYCLES"/>
++	<value value="1" name="PERF_TP_STALL_CYCLES_UCHE"/>
++	<value value="2" name="PERF_TP_LATENCY_CYCLES"/>
++	<value value="3" name="PERF_TP_LATENCY_TRANS"/>
++	<value value="4" name="PERF_TP_FLAG_CACHE_REQUEST_SAMPLES"/>
++	<value value="5" name="PERF_TP_FLAG_CACHE_REQUEST_LATENCY"/>
++	<value value="6" name="PERF_TP_L1_CACHELINE_REQUESTS"/>
++	<value value="7" name="PERF_TP_L1_CACHELINE_MISSES"/>
++	<value value="8" name="PERF_TP_SP_TP_TRANS"/>
++	<value value="9" name="PERF_TP_TP_SP_TRANS"/>
++	<value value="10" name="PERF_TP_OUTPUT_PIXELS"/>
++	<value value="11" name="PERF_TP_FILTER_WORKLOAD_16BIT"/>
++	<value value="12" name="PERF_TP_FILTER_WORKLOAD_32BIT"/>
++	<value value="13" name="PERF_TP_QUADS_RECEIVED"/>
++	<value value="14" name="PERF_TP_QUADS_OFFSET"/>
++	<value value="15" name="PERF_TP_QUADS_SHADOW"/>
++	<value value="16" name="PERF_TP_QUADS_ARRAY"/>
++	<value value="17" name="PERF_TP_QUADS_GRADIENT"/>
++	<value value="18" name="PERF_TP_QUADS_1D"/>
++	<value value="19" name="PERF_TP_QUADS_2D"/>
++	<value value="20" name="PERF_TP_QUADS_BUFFER"/>
++	<value value="21" name="PERF_TP_QUADS_3D"/>
++	<value value="22" name="PERF_TP_QUADS_CUBE"/>
++	<value value="23" name="PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
++	<value value="24" name="PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
++	<value value="25" name="PERF_TP_OUTPUT_PIXELS_POINT"/>
++	<value value="26" name="PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="27" name="PERF_TP_OUTPUT_PIXELS_MIP"/>
++	<value value="28" name="PERF_TP_OUTPUT_PIXELS_ANISO"/>
++	<value value="29" name="PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
++	<value value="30" name="PERF_TP_FLAG_CACHE_REQUESTS"/>
++	<value value="31" name="PERF_TP_FLAG_CACHE_MISSES"/>
++	<value value="32" name="PERF_TP_L1_5_L2_REQUESTS"/>
++	<value value="33" name="PERF_TP_2D_OUTPUT_PIXELS"/>
++	<value value="34" name="PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
++	<value value="35" name="PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="36" name="PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
++	<value value="37" name="PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
++	<value value="38" name="PERF_TP_TPA2TPC_TRANS"/>
++	<value value="39" name="PERF_TP_L1_MISSES_ASTC_1TILE"/>
++	<value value="40" name="PERF_TP_L1_MISSES_ASTC_2TILE"/>
++	<value value="41" name="PERF_TP_L1_MISSES_ASTC_4TILE"/>
++	<value value="42" name="PERF_TP_L1_5_L2_COMPRESS_REQS"/>
++	<value value="43" name="PERF_TP_L1_5_L2_COMPRESS_MISS"/>
++	<value value="44" name="PERF_TP_L1_BANK_CONFLICT"/>
++	<value value="45" name="PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
++	<value value="46" name="PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
++	<value value="47" name="PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
++	<value value="48" name="PERF_TP_FRONTEND_WORKING_CYCLES"/>
++	<value value="49" name="PERF_TP_L1_TAG_WORKING_CYCLES"/>
++	<value value="50" name="PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
++	<value value="51" name="PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
++	<value value="52" name="PERF_TP_BACKEND_WORKING_CYCLES"/>
++	<value value="53" name="PERF_TP_FLAG_CACHE_WORKING_CYCLES"/>
++	<value value="54" name="PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
++	<value value="55" name="PERF_TP_STARVE_CYCLES_SP"/>
++	<value value="56" name="PERF_TP_STARVE_CYCLES_UCHE"/>
++</enum>
++
++<enum name="a6xx_sp_perfcounter_select">
++	<value value="0" name="PERF_SP_BUSY_CYCLES"/>
++	<value value="1" name="PERF_SP_ALU_WORKING_CYCLES"/>
++	<value value="2" name="PERF_SP_EFU_WORKING_CYCLES"/>
++	<value value="3" name="PERF_SP_STALL_CYCLES_VPC"/>
++	<value value="4" name="PERF_SP_STALL_CYCLES_TP"/>
++	<value value="5" name="PERF_SP_STALL_CYCLES_UCHE"/>
++	<value value="6" name="PERF_SP_STALL_CYCLES_RB"/>
++	<value value="7" name="PERF_SP_NON_EXECUTION_CYCLES"/>
++	<value value="8" name="PERF_SP_WAVE_CONTEXTS"/>
++	<value value="9" name="PERF_SP_WAVE_CONTEXT_CYCLES"/>
++	<value value="10" name="PERF_SP_FS_STAGE_WAVE_CYCLES"/>
++	<value value="11" name="PERF_SP_FS_STAGE_WAVE_SAMPLES"/>
++	<value value="12" name="PERF_SP_VS_STAGE_WAVE_CYCLES"/>
++	<value value="13" name="PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
++	<value value="14" name="PERF_SP_FS_STAGE_DURATION_CYCLES"/>
++	<value value="15" name="PERF_SP_VS_STAGE_DURATION_CYCLES"/>
++	<value value="16" name="PERF_SP_WAVE_CTRL_CYCLES"/>
++	<value value="17" name="PERF_SP_WAVE_LOAD_CYCLES"/>
++	<value value="18" name="PERF_SP_WAVE_EMIT_CYCLES"/>
++	<value value="19" name="PERF_SP_WAVE_NOP_CYCLES"/>
++	<value value="20" name="PERF_SP_WAVE_WAIT_CYCLES"/>
++	<value value="21" name="PERF_SP_WAVE_FETCH_CYCLES"/>
++	<value value="22" name="PERF_SP_WAVE_IDLE_CYCLES"/>
++	<value value="23" name="PERF_SP_WAVE_END_CYCLES"/>
++	<value value="24" name="PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
++	<value value="25" name="PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
++	<value value="26" name="PERF_SP_WAVE_JOIN_CYCLES"/>
++	<value value="27" name="PERF_SP_LM_LOAD_INSTRUCTIONS"/>
++	<value value="28" name="PERF_SP_LM_STORE_INSTRUCTIONS"/>
++	<value value="29" name="PERF_SP_LM_ATOMICS"/>
++	<value value="30" name="PERF_SP_GM_LOAD_INSTRUCTIONS"/>
++	<value value="31" name="PERF_SP_GM_STORE_INSTRUCTIONS"/>
++	<value value="32" name="PERF_SP_GM_ATOMICS"/>
++	<value value="33" name="PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="34" name="PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="35" name="PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="36" name="PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="37" name="PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="38" name="PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
++	<value value="39" name="PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="40" name="PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="41" name="PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="42" name="PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
++	<value value="43" name="PERF_SP_VS_INSTRUCTIONS"/>
++	<value value="44" name="PERF_SP_FS_INSTRUCTIONS"/>
++	<value value="45" name="PERF_SP_ADDR_LOCK_COUNT"/>
++	<value value="46" name="PERF_SP_UCHE_READ_TRANS"/>
++	<value value="47" name="PERF_SP_UCHE_WRITE_TRANS"/>
++	<value value="48" name="PERF_SP_EXPORT_VPC_TRANS"/>
++	<value value="49" name="PERF_SP_EXPORT_RB_TRANS"/>
++	<value value="50" name="PERF_SP_PIXELS_KILLED"/>
++	<value value="51" name="PERF_SP_ICL1_REQUESTS"/>
++	<value value="52" name="PERF_SP_ICL1_MISSES"/>
++	<value value="53" name="PERF_SP_HS_INSTRUCTIONS"/>
++	<value value="54" name="PERF_SP_DS_INSTRUCTIONS"/>
++	<value value="55" name="PERF_SP_GS_INSTRUCTIONS"/>
++	<value value="56" name="PERF_SP_CS_INSTRUCTIONS"/>
++	<value value="57" name="PERF_SP_GPR_READ"/>
++	<value value="58" name="PERF_SP_GPR_WRITE"/>
++	<value value="59" name="PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="60" name="PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="61" name="PERF_SP_LM_BANK_CONFLICTS"/>
++	<value value="62" name="PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
++	<value value="63" name="PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
++	<value value="64" name="PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
++	<value value="65" name="PERF_SP_LM_WORKING_CYCLES"/>
++	<value value="66" name="PERF_SP_DISPATCHER_WORKING_CYCLES"/>
++	<value value="67" name="PERF_SP_SEQUENCER_WORKING_CYCLES"/>
++	<value value="68" name="PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
++	<value value="69" name="PERF_SP_STARVE_CYCLES_HLSQ"/>
++	<value value="70" name="PERF_SP_NON_EXECUTION_LS_CYCLES"/>
++	<value value="71" name="PERF_SP_WORKING_EU"/>
++	<value value="72" name="PERF_SP_ANY_EU_WORKING"/>
++	<value value="73" name="PERF_SP_WORKING_EU_FS_STAGE"/>
++	<value value="74" name="PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
++	<value value="75" name="PERF_SP_WORKING_EU_VS_STAGE"/>
++	<value value="76" name="PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
++	<value value="77" name="PERF_SP_WORKING_EU_CS_STAGE"/>
++	<value value="78" name="PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
++	<value value="79" name="PERF_SP_GPR_READ_PREFETCH"/>
++	<value value="80" name="PERF_SP_GPR_READ_CONFLICT"/>
++	<value value="81" name="PERF_SP_GPR_WRITE_CONFLICT"/>
++	<value value="82" name="PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
++	<value value="83" name="PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
++	<value value="84" name="PERF_SP_EXECUTABLE_WAVES"/>
++</enum>
++
++<enum name="a6xx_rb_perfcounter_select">
++	<value value="0" name="PERF_RB_BUSY_CYCLES"/>
++	<value value="1" name="PERF_RB_STALL_CYCLES_HLSQ"/>
++	<value value="2" name="PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
++	<value value="3" name="PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
++	<value value="4" name="PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
++	<value value="5" name="PERF_RB_STARVE_CYCLES_SP"/>
++	<value value="6" name="PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
++	<value value="7" name="PERF_RB_STARVE_CYCLES_CCU"/>
++	<value value="8" name="PERF_RB_STARVE_CYCLES_Z_PLANE"/>
++	<value value="9" name="PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
++	<value value="10" name="PERF_RB_Z_WORKLOAD"/>
++	<value value="11" name="PERF_RB_HLSQ_ACTIVE"/>
++	<value value="12" name="PERF_RB_Z_READ"/>
++	<value value="13" name="PERF_RB_Z_WRITE"/>
++	<value value="14" name="PERF_RB_C_READ"/>
++	<value value="15" name="PERF_RB_C_WRITE"/>
++	<value value="16" name="PERF_RB_TOTAL_PASS"/>
++	<value value="17" name="PERF_RB_Z_PASS"/>
++	<value value="18" name="PERF_RB_Z_FAIL"/>
++	<value value="19" name="PERF_RB_S_FAIL"/>
++	<value value="20" name="PERF_RB_BLENDED_FXP_COMPONENTS"/>
++	<value value="21" name="PERF_RB_BLENDED_FP16_COMPONENTS"/>
++	<value value="22" name="PERF_RB_PS_INVOCATIONS"/>
++	<value value="23" name="PERF_RB_2D_ALIVE_CYCLES"/>
++	<value value="24" name="PERF_RB_2D_STALL_CYCLES_A2D"/>
++	<value value="25" name="PERF_RB_2D_STARVE_CYCLES_SRC"/>
++	<value value="26" name="PERF_RB_2D_STARVE_CYCLES_SP"/>
++	<value value="27" name="PERF_RB_2D_STARVE_CYCLES_DST"/>
++	<value value="28" name="PERF_RB_2D_VALID_PIXELS"/>
++	<value value="29" name="PERF_RB_3D_PIXELS"/>
++	<value value="30" name="PERF_RB_BLENDER_WORKING_CYCLES"/>
++	<value value="31" name="PERF_RB_ZPROC_WORKING_CYCLES"/>
++	<value value="32" name="PERF_RB_CPROC_WORKING_CYCLES"/>
++	<value value="33" name="PERF_RB_SAMPLER_WORKING_CYCLES"/>
++	<value value="34" name="PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
++	<value value="35" name="PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
++	<value value="36" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
++	<value value="37" name="PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
++	<value value="38" name="PERF_RB_STALL_CYCLES_VPC"/>
++	<value value="39" name="PERF_RB_2D_INPUT_TRANS"/>
++	<value value="40" name="PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
++	<value value="41" name="PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
++	<value value="42" name="PERF_RB_BLENDED_FP32_COMPONENTS"/>
++	<value value="43" name="PERF_RB_COLOR_PIX_TILES"/>
++	<value value="44" name="PERF_RB_STALL_CYCLES_CCU"/>
++	<value value="45" name="PERF_RB_EARLY_Z_ARB3_GRANT"/>
++	<value value="46" name="PERF_RB_LATE_Z_ARB3_GRANT"/>
++	<value value="47" name="PERF_RB_EARLY_Z_SKIP_GRANT"/>
++</enum>
++
++<enum name="a6xx_vsc_perfcounter_select">
++	<value value="0" name="PERF_VSC_BUSY_CYCLES"/>
++	<value value="1" name="PERF_VSC_WORKING_CYCLES"/>
++	<value value="2" name="PERF_VSC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="PERF_VSC_EOT_NUM"/>
++	<value value="4" name="PERF_VSC_INPUT_TILES"/>
++</enum>
++
++<enum name="a6xx_ccu_perfcounter_select">
++	<value value="0" name="PERF_CCU_BUSY_CYCLES"/>
++	<value value="1" name="PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
++	<value value="2" name="PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
++	<value value="3" name="PERF_CCU_STARVE_CYCLES_FLAG_RETURN"/>
++	<value value="4" name="PERF_CCU_DEPTH_BLOCKS"/>
++	<value value="5" name="PERF_CCU_COLOR_BLOCKS"/>
++	<value value="6" name="PERF_CCU_DEPTH_BLOCK_HIT"/>
++	<value value="7" name="PERF_CCU_COLOR_BLOCK_HIT"/>
++	<value value="8" name="PERF_CCU_PARTIAL_BLOCK_READ"/>
++	<value value="9" name="PERF_CCU_GMEM_READ"/>
++	<value value="10" name="PERF_CCU_GMEM_WRITE"/>
++	<value value="11" name="PERF_CCU_DEPTH_READ_FLAG0_COUNT"/>
++	<value value="12" name="PERF_CCU_DEPTH_READ_FLAG1_COUNT"/>
++	<value value="13" name="PERF_CCU_DEPTH_READ_FLAG2_COUNT"/>
++	<value value="14" name="PERF_CCU_DEPTH_READ_FLAG3_COUNT"/>
++	<value value="15" name="PERF_CCU_DEPTH_READ_FLAG4_COUNT"/>
++	<value value="16" name="PERF_CCU_DEPTH_READ_FLAG5_COUNT"/>
++	<value value="17" name="PERF_CCU_DEPTH_READ_FLAG6_COUNT"/>
++	<value value="18" name="PERF_CCU_DEPTH_READ_FLAG8_COUNT"/>
++	<value value="19" name="PERF_CCU_COLOR_READ_FLAG0_COUNT"/>
++	<value value="20" name="PERF_CCU_COLOR_READ_FLAG1_COUNT"/>
++	<value value="21" name="PERF_CCU_COLOR_READ_FLAG2_COUNT"/>
++	<value value="22" name="PERF_CCU_COLOR_READ_FLAG3_COUNT"/>
++	<value value="23" name="PERF_CCU_COLOR_READ_FLAG4_COUNT"/>
++	<value value="24" name="PERF_CCU_COLOR_READ_FLAG5_COUNT"/>
++	<value value="25" name="PERF_CCU_COLOR_READ_FLAG6_COUNT"/>
++	<value value="26" name="PERF_CCU_COLOR_READ_FLAG8_COUNT"/>
++	<value value="27" name="PERF_CCU_2D_RD_REQ"/>
++	<value value="28" name="PERF_CCU_2D_WR_REQ"/>
++</enum>
++
++<enum name="a6xx_lrz_perfcounter_select">
++	<value value="0" name="PERF_LRZ_BUSY_CYCLES"/>
++	<value value="1" name="PERF_LRZ_STARVE_CYCLES_RAS"/>
++	<value value="2" name="PERF_LRZ_STALL_CYCLES_RB"/>
++	<value value="3" name="PERF_LRZ_STALL_CYCLES_VSC"/>
++	<value value="4" name="PERF_LRZ_STALL_CYCLES_VPC"/>
++	<value value="5" name="PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
++	<value value="6" name="PERF_LRZ_STALL_CYCLES_UCHE"/>
++	<value value="7" name="PERF_LRZ_LRZ_READ"/>
++	<value value="8" name="PERF_LRZ_LRZ_WRITE"/>
++	<value value="9" name="PERF_LRZ_READ_LATENCY"/>
++	<value value="10" name="PERF_LRZ_MERGE_CACHE_UPDATING"/>
++	<value value="11" name="PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
++	<value value="12" name="PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
++	<value value="13" name="PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
++	<value value="14" name="PERF_LRZ_FULL_8X8_TILES"/>
++	<value value="15" name="PERF_LRZ_PARTIAL_8X8_TILES"/>
++	<value value="16" name="PERF_LRZ_TILE_KILLED"/>
++	<value value="17" name="PERF_LRZ_TOTAL_PIXEL"/>
++	<value value="18" name="PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
++	<value value="19" name="PERF_LRZ_FULLY_COVERED_TILES"/>
++	<value value="20" name="PERF_LRZ_PARTIAL_COVERED_TILES"/>
++	<value value="21" name="PERF_LRZ_FEEDBACK_ACCEPT"/>
++	<value value="22" name="PERF_LRZ_FEEDBACK_DISCARD"/>
++	<value value="23" name="PERF_LRZ_FEEDBACK_STALL"/>
++	<value value="24" name="PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
++	<value value="25" name="PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
++	<value value="26" name="PERF_LRZ_STALL_CYCLES_VC"/>
++	<value value="27" name="PERF_LRZ_RAS_MASK_TRANS"/>
++</enum>
++
++<enum name="a6xx_cmp_perfcounter_select">
++	<value value="0" name="PERF_CMPDECMP_STALL_CYCLES_ARB"/>
++	<value value="1" name="PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
++	<value value="2" name="PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
++	<value value="3" name="PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
++	<value value="4" name="PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
++	<value value="5" name="PERF_CMPDECMP_VBIF_READ_REQUEST"/>
++	<value value="6" name="PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
++	<value value="7" name="PERF_CMPDECMP_VBIF_READ_DATA"/>
++	<value value="8" name="PERF_CMPDECMP_VBIF_WRITE_DATA"/>
++	<value value="9" name="PERF_CMPDECMP_FLAG_FETCH_CYCLES"/>
++	<value value="10" name="PERF_CMPDECMP_FLAG_FETCH_SAMPLES"/>
++	<value value="11" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
++	<value value="12" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
++	<value value="13" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
++	<value value="14" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
++	<value value="15" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
++	<value value="16" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
++	<value value="17" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
++	<value value="18" name="PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
++	<value value="19" name="PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
++	<value value="20" name="PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
++	<value value="21" name="PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
++	<value value="22" name="PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
++	<value value="23" name="PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
++	<value value="24" name="PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
++	<value value="25" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_REQ"/>
++	<value value="26" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_WR"/>
++	<value value="27" name="PERF_CMPDECMP_2D_STALL_CYCLES_VBIF_RETURN"/>
++	<value value="28" name="PERF_CMPDECMP_2D_RD_DATA"/>
++	<value value="29" name="PERF_CMPDECMP_2D_WR_DATA"/>
++	<value value="30" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
++	<value value="31" name="PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
++	<value value="32" name="PERF_CMPDECMP_2D_OUTPUT_TRANS"/>
++	<value value="33" name="PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
++	<value value="34" name="PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
++	<value value="35" name="PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
++	<value value="36" name="PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
++	<value value="37" name="PERF_CMPDECMP_2D_BUSY_CYCLES"/>
++	<value value="38" name="PERF_CMPDECMP_2D_REORDER_STARVE_CYCLES"/>
++	<value value="39" name="PERF_CMPDECMP_2D_PIXELS"/>
++</enum>
++
++</database>
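
The a6xx counter-select enums above are data, not driver logic: like the
rest of the freedreno register database, these XML files are compiled at
build time by the msm driver's header generator into plain C enums that
the driver and perf tooling include. A minimal sketch of the generated
shape, as an illustrative excerpt (the enum and value names are taken
from the XML above; the generated header's surrounding boilerplate is
omitted):

  /* Sketch of the C output the register-database header generator
   * derives from <enum name="a6xx_rb_perfcounter_select"> above.
   * Only a few values are shown; the generated header carries the
   * full list. */
  enum a6xx_rb_perfcounter_select {
          PERF_RB_BUSY_CYCLES = 0,
          PERF_RB_STALL_CYCLES_HLSQ = 1,
          PERF_RB_Z_PASS = 17,
          PERF_RB_Z_FAIL = 18,
          PERF_RB_S_FAIL = 19,
          /* ... remaining values elided ... */
  };

A profiler programs one of these selector values into the matching
per-block counter-select register and then samples the corresponding
counter, which is why the XML only needs to describe value/name pairs.
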
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a7xx_enums.xml b/drivers/gpu/drm/msm/registers/adreno/a7xx_enums.xml
+new file mode 100644
+index 00000000000000..661b0dd0f675ba
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a7xx_enums.xml
+@@ -0,0 +1,223 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a7xx_statetype_id">
++	<value value="0" name="A7XX_TP0_NCTX_REG"/>
++	<value value="1" name="A7XX_TP0_CTX0_3D_CVS_REG"/>
++	<value value="2" name="A7XX_TP0_CTX0_3D_CPS_REG"/>
++	<value value="3" name="A7XX_TP0_CTX1_3D_CVS_REG"/>
++	<value value="4" name="A7XX_TP0_CTX1_3D_CPS_REG"/>
++	<value value="5" name="A7XX_TP0_CTX2_3D_CPS_REG"/>
++	<value value="6" name="A7XX_TP0_CTX3_3D_CPS_REG"/>
++	<value value="9" name="A7XX_TP0_TMO_DATA"/>
++	<value value="10" name="A7XX_TP0_SMO_DATA"/>
++	<value value="11" name="A7XX_TP0_MIPMAP_BASE_DATA"/>
++	<value value="32" name="A7XX_SP_NCTX_REG"/>
++	<value value="33" name="A7XX_SP_CTX0_3D_CVS_REG"/>
++	<value value="34" name="A7XX_SP_CTX0_3D_CPS_REG"/>
++	<value value="35" name="A7XX_SP_CTX1_3D_CVS_REG"/>
++	<value value="36" name="A7XX_SP_CTX1_3D_CPS_REG"/>
++	<value value="37" name="A7XX_SP_CTX2_3D_CPS_REG"/>
++	<value value="38" name="A7XX_SP_CTX3_3D_CPS_REG"/>
++	<value value="39" name="A7XX_SP_INST_DATA"/>
++	<value value="40" name="A7XX_SP_INST_DATA_1"/>
++	<value value="41" name="A7XX_SP_LB_0_DATA"/>
++	<value value="42" name="A7XX_SP_LB_1_DATA"/>
++	<value value="43" name="A7XX_SP_LB_2_DATA"/>
++	<value value="44" name="A7XX_SP_LB_3_DATA"/>
++	<value value="45" name="A7XX_SP_LB_4_DATA"/>
++	<value value="46" name="A7XX_SP_LB_5_DATA"/>
++	<value value="47" name="A7XX_SP_LB_6_DATA"/>
++	<value value="48" name="A7XX_SP_LB_7_DATA"/>
++	<value value="49" name="A7XX_SP_CB_RAM"/>
++	<value value="50" name="A7XX_SP_LB_13_DATA"/>
++	<value value="51" name="A7XX_SP_LB_14_DATA"/>
++	<value value="52" name="A7XX_SP_INST_TAG"/>
++	<value value="53" name="A7XX_SP_INST_DATA_2"/>
++	<value value="54" name="A7XX_SP_TMO_TAG"/>
++	<value value="55" name="A7XX_SP_SMO_TAG"/>
++	<value value="56" name="A7XX_SP_STATE_DATA"/>
++	<value value="57" name="A7XX_SP_HWAVE_RAM"/>
++	<value value="58" name="A7XX_SP_L0_INST_BUF"/>
++	<value value="59" name="A7XX_SP_LB_8_DATA"/>
++	<value value="60" name="A7XX_SP_LB_9_DATA"/>
++	<value value="61" name="A7XX_SP_LB_10_DATA"/>
++	<value value="62" name="A7XX_SP_LB_11_DATA"/>
++	<value value="63" name="A7XX_SP_LB_12_DATA"/>
++	<value value="64" name="A7XX_HLSQ_DATAPATH_DSTR_META"/>
++	<value value="67" name="A7XX_HLSQ_L2STC_TAG_RAM"/>
++	<value value="68" name="A7XX_HLSQ_L2STC_INFO_CMD"/>
++	<value value="69" name="A7XX_HLSQ_CVS_BE_CTXT_BUF_RAM_TAG"/>
++	<value value="70" name="A7XX_HLSQ_CPS_BE_CTXT_BUF_RAM_TAG"/>
++	<value value="71" name="A7XX_HLSQ_GFX_CVS_BE_CTXT_BUF_RAM"/>
++	<value value="72" name="A7XX_HLSQ_GFX_CPS_BE_CTXT_BUF_RAM"/>
++	<value value="73" name="A7XX_HLSQ_CHUNK_CVS_RAM"/>
++	<value value="74" name="A7XX_HLSQ_CHUNK_CPS_RAM"/>
++	<value value="75" name="A7XX_HLSQ_CHUNK_CVS_RAM_TAG"/>
++	<value value="76" name="A7XX_HLSQ_CHUNK_CPS_RAM_TAG"/>
++	<value value="77" name="A7XX_HLSQ_ICB_CVS_CB_BASE_TAG"/>
++	<value value="78" name="A7XX_HLSQ_ICB_CPS_CB_BASE_TAG"/>
++	<value value="79" name="A7XX_HLSQ_CVS_MISC_RAM"/>
++	<value value="80" name="A7XX_HLSQ_CPS_MISC_RAM"/>
++	<value value="81" name="A7XX_HLSQ_CPS_MISC_RAM_1"/>
++	<value value="82" name="A7XX_HLSQ_INST_RAM"/>
++	<value value="83" name="A7XX_HLSQ_GFX_CVS_CONST_RAM"/>
++	<value value="84" name="A7XX_HLSQ_GFX_CPS_CONST_RAM"/>
++	<value value="85" name="A7XX_HLSQ_CVS_MISC_RAM_TAG"/>
++	<value value="86" name="A7XX_HLSQ_CPS_MISC_RAM_TAG"/>
++	<value value="87" name="A7XX_HLSQ_INST_RAM_TAG"/>
++	<value value="88" name="A7XX_HLSQ_GFX_CVS_CONST_RAM_TAG"/>
++	<value value="89" name="A7XX_HLSQ_GFX_CPS_CONST_RAM_TAG"/>
++	<value value="90" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM"/>
++	<value value="91" name="A7XX_HLSQ_GFX_LOCAL_MISC_RAM_TAG"/>
++	<value value="92" name="A7XX_HLSQ_INST_RAM_1"/>
++	<value value="93" name="A7XX_HLSQ_STPROC_META"/>
++	<value value="94" name="A7XX_HLSQ_BV_BE_META"/>
++	<value value="95" name="A7XX_HLSQ_INST_RAM_2"/>
++	<value value="96" name="A7XX_HLSQ_DATAPATH_META"/>
++	<value value="97" name="A7XX_HLSQ_FRONTEND_META"/>
++	<value value="98" name="A7XX_HLSQ_INDIRECT_META"/>
++	<value value="99" name="A7XX_HLSQ_BACKEND_META"/>
++</enum>
++
++<enum name="a7xx_state_location">
++	<value value="0" name="A7XX_HLSQ_STATE"/>
++	<value value="1" name="A7XX_HLSQ_DP"/>
++	<value value="2" name="A7XX_SP_TOP"/>
++	<value value="3" name="A7XX_USPTP"/>
++	<value value="4" name="A7XX_HLSQ_DP_STR"/>
++</enum>
++
++<enum name="a7xx_pipe">
++	<value value="0" name="A7XX_PIPE_NONE"/>
++	<value value="1" name="A7XX_PIPE_BR"/>
++	<value value="2" name="A7XX_PIPE_BV"/>
++	<value value="3" name="A7XX_PIPE_LPAC"/>
++</enum>
++
++<enum name="a7xx_cluster">
++	<value value="0" name="A7XX_CLUSTER_NONE"/>
++	<value value="1" name="A7XX_CLUSTER_FE"/>
++	<value value="2" name="A7XX_CLUSTER_SP_VS"/>
++	<value value="3" name="A7XX_CLUSTER_PC_VS"/>
++	<value value="4" name="A7XX_CLUSTER_GRAS"/>
++	<value value="5" name="A7XX_CLUSTER_SP_PS"/>
++	<value value="6" name="A7XX_CLUSTER_VPC_PS"/>
++	<value value="7" name="A7XX_CLUSTER_PS"/>
++</enum>
++
++<enum name="a7xx_debugbus_id">
++	<value value="1" name="A7XX_DBGBUS_CP_0_0"/>
++	<value value="2" name="A7XX_DBGBUS_CP_0_1"/>
++	<value value="3" name="A7XX_DBGBUS_RBBM"/>
++	<value value="5" name="A7XX_DBGBUS_GBIF_GX"/>
++	<value value="6" name="A7XX_DBGBUS_GBIF_CX"/>
++	<value value="7" name="A7XX_DBGBUS_HLSQ"/>
++	<value value="9" name="A7XX_DBGBUS_UCHE_0"/>
++	<value value="10" name="A7XX_DBGBUS_UCHE_1"/>
++	<value value="13" name="A7XX_DBGBUS_TESS_BR"/>
++	<value value="14" name="A7XX_DBGBUS_TESS_BV"/>
++	<value value="17" name="A7XX_DBGBUS_PC_BR"/>
++	<value value="18" name="A7XX_DBGBUS_PC_BV"/>
++	<value value="21" name="A7XX_DBGBUS_VFDP_BR"/>
++	<value value="22" name="A7XX_DBGBUS_VFDP_BV"/>
++	<value value="25" name="A7XX_DBGBUS_VPC_BR"/>
++	<value value="26" name="A7XX_DBGBUS_VPC_BV"/>
++	<value value="29" name="A7XX_DBGBUS_TSE_BR"/>
++	<value value="30" name="A7XX_DBGBUS_TSE_BV"/>
++	<value value="33" name="A7XX_DBGBUS_RAS_BR"/>
++	<value value="34" name="A7XX_DBGBUS_RAS_BV"/>
++	<value value="37" name="A7XX_DBGBUS_VSC"/>
++	<value value="39" name="A7XX_DBGBUS_COM_0"/>
++	<value value="43" name="A7XX_DBGBUS_LRZ_BR"/>
++	<value value="44" name="A7XX_DBGBUS_LRZ_BV"/>
++	<value value="47" name="A7XX_DBGBUS_UFC_0"/>
++	<value value="48" name="A7XX_DBGBUS_UFC_1"/>
++	<value value="55" name="A7XX_DBGBUS_GMU_GX"/>
++	<value value="59" name="A7XX_DBGBUS_DBGC"/>
++	<value value="60" name="A7XX_DBGBUS_CX"/>
++	<value value="61" name="A7XX_DBGBUS_GMU_CX"/>
++	<value value="62" name="A7XX_DBGBUS_GPC_BR"/>
++	<value value="63" name="A7XX_DBGBUS_GPC_BV"/>
++	<value value="66" name="A7XX_DBGBUS_LARC"/>
++	<value value="68" name="A7XX_DBGBUS_HLSQ_SPTP"/>
++	<value value="70" name="A7XX_DBGBUS_RB_0"/>
++	<value value="71" name="A7XX_DBGBUS_RB_1"/>
++	<value value="72" name="A7XX_DBGBUS_RB_2"/>
++	<value value="73" name="A7XX_DBGBUS_RB_3"/>
++	<value value="74" name="A7XX_DBGBUS_RB_4"/>
++	<value value="75" name="A7XX_DBGBUS_RB_5"/>
++	<value value="102" name="A7XX_DBGBUS_UCHE_WRAPPER"/>
++	<value value="106" name="A7XX_DBGBUS_CCU_0"/>
++	<value value="107" name="A7XX_DBGBUS_CCU_1"/>
++	<value value="108" name="A7XX_DBGBUS_CCU_2"/>
++	<value value="109" name="A7XX_DBGBUS_CCU_3"/>
++	<value value="110" name="A7XX_DBGBUS_CCU_4"/>
++	<value value="111" name="A7XX_DBGBUS_CCU_5"/>
++	<value value="138" name="A7XX_DBGBUS_VFD_BR_0"/>
++	<value value="139" name="A7XX_DBGBUS_VFD_BR_1"/>
++	<value value="140" name="A7XX_DBGBUS_VFD_BR_2"/>
++	<value value="141" name="A7XX_DBGBUS_VFD_BR_3"/>
++	<value value="142" name="A7XX_DBGBUS_VFD_BR_4"/>
++	<value value="143" name="A7XX_DBGBUS_VFD_BR_5"/>
++	<value value="144" name="A7XX_DBGBUS_VFD_BR_6"/>
++	<value value="145" name="A7XX_DBGBUS_VFD_BR_7"/>
++	<value value="202" name="A7XX_DBGBUS_VFD_BV_0"/>
++	<value value="203" name="A7XX_DBGBUS_VFD_BV_1"/>
++	<value value="204" name="A7XX_DBGBUS_VFD_BV_2"/>
++	<value value="205" name="A7XX_DBGBUS_VFD_BV_3"/>
++	<value value="234" name="A7XX_DBGBUS_USP_0"/>
++	<value value="235" name="A7XX_DBGBUS_USP_1"/>
++	<value value="236" name="A7XX_DBGBUS_USP_2"/>
++	<value value="237" name="A7XX_DBGBUS_USP_3"/>
++	<value value="238" name="A7XX_DBGBUS_USP_4"/>
++	<value value="239" name="A7XX_DBGBUS_USP_5"/>
++	<value value="266" name="A7XX_DBGBUS_TP_0"/>
++	<value value="267" name="A7XX_DBGBUS_TP_1"/>
++	<value value="268" name="A7XX_DBGBUS_TP_2"/>
++	<value value="269" name="A7XX_DBGBUS_TP_3"/>
++	<value value="270" name="A7XX_DBGBUS_TP_4"/>
++	<value value="271" name="A7XX_DBGBUS_TP_5"/>
++	<value value="272" name="A7XX_DBGBUS_TP_6"/>
++	<value value="273" name="A7XX_DBGBUS_TP_7"/>
++	<value value="274" name="A7XX_DBGBUS_TP_8"/>
++	<value value="275" name="A7XX_DBGBUS_TP_9"/>
++	<value value="276" name="A7XX_DBGBUS_TP_10"/>
++	<value value="277" name="A7XX_DBGBUS_TP_11"/>
++	<value value="330" name="A7XX_DBGBUS_USPTP_0"/>
++	<value value="331" name="A7XX_DBGBUS_USPTP_1"/>
++	<value value="332" name="A7XX_DBGBUS_USPTP_2"/>
++	<value value="333" name="A7XX_DBGBUS_USPTP_3"/>
++	<value value="334" name="A7XX_DBGBUS_USPTP_4"/>
++	<value value="335" name="A7XX_DBGBUS_USPTP_5"/>
++	<value value="336" name="A7XX_DBGBUS_USPTP_6"/>
++	<value value="337" name="A7XX_DBGBUS_USPTP_7"/>
++	<value value="338" name="A7XX_DBGBUS_USPTP_8"/>
++	<value value="339" name="A7XX_DBGBUS_USPTP_9"/>
++	<value value="340" name="A7XX_DBGBUS_USPTP_10"/>
++	<value value="341" name="A7XX_DBGBUS_USPTP_11"/>
++	<value value="396" name="A7XX_DBGBUS_CCHE_0"/>
++	<value value="397" name="A7XX_DBGBUS_CCHE_1"/>
++	<value value="398" name="A7XX_DBGBUS_CCHE_2"/>
++	<value value="408" name="A7XX_DBGBUS_VPC_DSTR_0"/>
++	<value value="409" name="A7XX_DBGBUS_VPC_DSTR_1"/>
++	<value value="410" name="A7XX_DBGBUS_VPC_DSTR_2"/>
++	<value value="411" name="A7XX_DBGBUS_HLSQ_DP_STR_0"/>
++	<value value="412" name="A7XX_DBGBUS_HLSQ_DP_STR_1"/>
++	<value value="413" name="A7XX_DBGBUS_HLSQ_DP_STR_2"/>
++	<value value="414" name="A7XX_DBGBUS_HLSQ_DP_STR_3"/>
++	<value value="415" name="A7XX_DBGBUS_HLSQ_DP_STR_4"/>
++	<value value="416" name="A7XX_DBGBUS_HLSQ_DP_STR_5"/>
++	<value value="443" name="A7XX_DBGBUS_UFC_DSTR_0"/>
++	<value value="444" name="A7XX_DBGBUS_UFC_DSTR_1"/>
++	<value value="445" name="A7XX_DBGBUS_UFC_DSTR_2"/>
++	<value value="446" name="A7XX_DBGBUS_CGC_SUBCORE"/>
++	<value value="447" name="A7XX_DBGBUS_CGC_CORE"/>
++</enum>
++
++</database>
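
The a7xx_enums.xml file added above mostly serves the crash-dump
(devcoredump) path: the statetype IDs name internal RAMs to snapshot,
the state locations say which block (HLSQ state machine, SP_TOP, USPTP)
services the dump read, the pipe and cluster IDs scope context
registers, and the debugbus IDs select trace sources. A hedged sketch
of how a snapshot table might pair these values, with hypothetical
struct and table names (only the enum constants come from the XML
above, via its generated header; u32 is the kernel typedef from
<linux/types.h>):

  /* Hypothetical snapshot-table entry: which shader RAM to dump and
   * which block answers the read. The struct and table names are
   * illustrative; the driver's real crash-state tables differ. */
  struct a7xx_shader_block {
          u32 statetype;  /* a7xx_statetype_id, e.g. A7XX_SP_INST_DATA */
          u32 location;   /* a7xx_state_location, e.g. A7XX_USPTP */
  };

  static const struct a7xx_shader_block a7xx_shader_blocks[] = {
          { A7XX_SP_INST_DATA,  A7XX_USPTP },
          { A7XX_SP_INST_TAG,   A7XX_USPTP },
          { A7XX_HLSQ_INST_RAM, A7XX_HLSQ_STATE },
  };
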
+diff --git a/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.xml b/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.xml
+new file mode 100644
+index 00000000000000..9bf78b0a854b12
+--- /dev/null
++++ b/drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.xml
+@@ -0,0 +1,1030 @@
++<?xml version="1.0" encoding="UTF-8"?>
++<database xmlns="http://nouveau.freedesktop.org/"
++xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
++<import file="freedreno_copyright.xml"/>
++<import file="adreno/adreno_common.xml"/>
++<import file="adreno/adreno_pm4.xml"/>
++
++<enum name="a7xx_cp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_CP_ALWAYS_COUNT"/>
++	<value value="1" name="A7XX_PERF_CP_BUSY_GFX_CORE_IDLE"/>
++	<value value="2" name="A7XX_PERF_CP_BUSY_CYCLES"/>
++	<value value="3" name="A7XX_PERF_CP_NUM_PREEMPTIONS"/>
++	<value value="4" name="A7XX_PERF_CP_PREEMPTION_REACTION_DELAY"/>
++	<value value="5" name="A7XX_PERF_CP_PREEMPTION_SWITCH_OUT_TIME"/>
++	<value value="6" name="A7XX_PERF_CP_PREEMPTION_SWITCH_IN_TIME"/>
++	<value value="7" name="A7XX_PERF_CP_DEAD_DRAWS_IN_BIN_RENDER"/>
++	<value value="8" name="A7XX_PERF_CP_PREDICATED_DRAWS_KILLED"/>
++	<value value="9" name="A7XX_PERF_CP_MODE_SWITCH"/>
++	<value value="10" name="A7XX_PERF_CP_ZPASS_DONE"/>
++	<value value="11" name="A7XX_PERF_CP_CONTEXT_DONE"/>
++	<value value="12" name="A7XX_PERF_CP_CACHE_FLUSH"/>
++	<value value="13" name="A7XX_PERF_CP_LONG_PREEMPTIONS"/>
++	<value value="14" name="A7XX_PERF_CP_SQE_I_CACHE_STARVE"/>
++	<value value="15" name="A7XX_PERF_CP_SQE_IDLE"/>
++	<value value="16" name="A7XX_PERF_CP_SQE_PM4_STARVE_RB_IB"/>
++	<value value="17" name="A7XX_PERF_CP_SQE_PM4_STARVE_SDS"/>
++	<value value="18" name="A7XX_PERF_CP_SQE_MRB_STARVE"/>
++	<value value="19" name="A7XX_PERF_CP_SQE_RRB_STARVE"/>
++	<value value="20" name="A7XX_PERF_CP_SQE_VSD_STARVE"/>
++	<value value="21" name="A7XX_PERF_CP_VSD_DECODE_STARVE"/>
++	<value value="22" name="A7XX_PERF_CP_SQE_PIPE_OUT_STALL"/>
++	<value value="23" name="A7XX_PERF_CP_SQE_SYNC_STALL"/>
++	<value value="24" name="A7XX_PERF_CP_SQE_PM4_WFI_STALL"/>
++	<value value="25" name="A7XX_PERF_CP_SQE_SYS_WFI_STALL"/>
++	<value value="26" name="A7XX_PERF_CP_SQE_T4_EXEC"/>
++	<value value="27" name="A7XX_PERF_CP_SQE_LOAD_STATE_EXEC"/>
++	<value value="28" name="A7XX_PERF_CP_SQE_SAVE_SDS_STATE"/>
++	<value value="29" name="A7XX_PERF_CP_SQE_DRAW_EXEC"/>
++	<value value="30" name="A7XX_PERF_CP_SQE_CTXT_REG_BUNCH_EXEC"/>
++	<value value="31" name="A7XX_PERF_CP_SQE_EXEC_PROFILED"/>
++	<value value="32" name="A7XX_PERF_CP_MEMORY_POOL_EMPTY"/>
++	<value value="33" name="A7XX_PERF_CP_MEMORY_POOL_SYNC_STALL"/>
++	<value value="34" name="A7XX_PERF_CP_MEMORY_POOL_ABOVE_THRESH"/>
++	<value value="35" name="A7XX_PERF_CP_AHB_WR_STALL_PRE_DRAWS"/>
++	<value value="36" name="A7XX_PERF_CP_AHB_STALL_SQE_GMU"/>
++	<value value="37" name="A7XX_PERF_CP_AHB_STALL_SQE_WR_OTHER"/>
++	<value value="38" name="A7XX_PERF_CP_AHB_STALL_SQE_RD_OTHER"/>
++	<value value="39" name="A7XX_PERF_CP_CLUSTER0_EMPTY"/>
++	<value value="40" name="A7XX_PERF_CP_CLUSTER1_EMPTY"/>
++	<value value="41" name="A7XX_PERF_CP_CLUSTER2_EMPTY"/>
++	<value value="42" name="A7XX_PERF_CP_CLUSTER3_EMPTY"/>
++	<value value="43" name="A7XX_PERF_CP_CLUSTER4_EMPTY"/>
++	<value value="44" name="A7XX_PERF_CP_CLUSTER5_EMPTY"/>
++	<value value="45" name="A7XX_PERF_CP_PM4_DATA"/>
++	<value value="46" name="A7XX_PERF_CP_PM4_HEADERS"/>
++	<value value="47" name="A7XX_PERF_CP_VBIF_READ_BEATS"/>
++	<value value="48" name="A7XX_PERF_CP_VBIF_WRITE_BEATS"/>
++	<value value="49" name="A7XX_PERF_CP_SQE_INSTR_COUNTER"/>
++	<value value="50" name="A7XX_PERF_CP_RESERVED_50"/>
++	<value value="51" name="A7XX_PERF_CP_RESERVED_51"/>
++	<value value="52" name="A7XX_PERF_CP_RESERVED_52"/>
++	<value value="53" name="A7XX_PERF_CP_RESERVED_53"/>
++	<value value="54" name="A7XX_PERF_CP_RESERVED_54"/>
++	<value value="55" name="A7XX_PERF_CP_RESERVED_55"/>
++	<value value="56" name="A7XX_PERF_CP_RESERVED_56"/>
++	<value value="57" name="A7XX_PERF_CP_RESERVED_57"/>
++	<value value="58" name="A7XX_PERF_CP_RESERVED_58"/>
++	<value value="59" name="A7XX_PERF_CP_RESERVED_59"/>
++	<value value="60" name="A7XX_PERF_CP_CLUSTER0_FULL"/>
++	<value value="61" name="A7XX_PERF_CP_CLUSTER1_FULL"/>
++	<value value="62" name="A7XX_PERF_CP_CLUSTER2_FULL"/>
++	<value value="63" name="A7XX_PERF_CP_CLUSTER3_FULL"/>
++	<value value="64" name="A7XX_PERF_CP_CLUSTER4_FULL"/>
++	<value value="65" name="A7XX_PERF_CP_CLUSTER5_FULL"/>
++	<value value="66" name="A7XX_PERF_CP_CLUSTER6_FULL"/>
++	<value value="67" name="A7XX_PERF_CP_CLUSTER6_EMPTY"/>
++	<value value="68" name="A7XX_PERF_CP_ICACHE_MISSES"/>
++	<value value="69" name="A7XX_PERF_CP_ICACHE_HITS"/>
++	<value value="70" name="A7XX_PERF_CP_ICACHE_STALL"/>
++	<value value="71" name="A7XX_PERF_CP_DCACHE_MISSES"/>
++	<value value="72" name="A7XX_PERF_CP_DCACHE_HITS"/>
++	<value value="73" name="A7XX_PERF_CP_DCACHE_STALLS"/>
++	<value value="74" name="A7XX_PERF_CP_AQE_SQE_STALL"/>
++	<value value="75" name="A7XX_PERF_CP_SQE_AQE_STARVE"/>
++	<value value="76" name="A7XX_PERF_CP_PREEMPT_LATENCY"/>
++	<value value="77" name="A7XX_PERF_CP_SQE_MD8_STALL_CYCLES"/>
++	<value value="78" name="A7XX_PERF_CP_SQE_MESH_EXEC_CYCLES"/>
++	<value value="79" name="A7XX_PERF_CP_AQE_NUM_AS_CHUNKS"/>
++	<value value="80" name="A7XX_PERF_CP_AQE_NUM_MS_CHUNKS"/>
++</enum>
++
++<enum name="a7xx_rbbm_perfcounter_select">
++	<value value="0" name="A7XX_PERF_RBBM_ALWAYS_COUNT"/>
++	<value value="1" name="A7XX_PERF_RBBM_ALWAYS_ON"/>
++	<value value="2" name="A7XX_PERF_RBBM_TSE_BUSY"/>
++	<value value="3" name="A7XX_PERF_RBBM_RAS_BUSY"/>
++	<value value="4" name="A7XX_PERF_RBBM_PC_DCALL_BUSY"/>
++	<value value="5" name="A7XX_PERF_RBBM_PC_VSD_BUSY"/>
++	<value value="6" name="A7XX_PERF_RBBM_STATUS_MASKED"/>
++	<value value="7" name="A7XX_PERF_RBBM_COM_BUSY"/>
++	<value value="8" name="A7XX_PERF_RBBM_DCOM_BUSY"/>
++	<value value="9" name="A7XX_PERF_RBBM_VBIF_BUSY"/>
++	<value value="10" name="A7XX_PERF_RBBM_VSC_BUSY"/>
++	<value value="11" name="A7XX_PERF_RBBM_TESS_BUSY"/>
++	<value value="12" name="A7XX_PERF_RBBM_UCHE_BUSY"/>
++	<value value="13" name="A7XX_PERF_RBBM_HLSQ_BUSY"/>
++</enum>
++
++<enum name="a7xx_pc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_PC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_PC_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_PC_STALL_CYCLES_VFD"/>
++	<value value="3" name="A7XX_PERF_PC_RESERVED"/>
++	<value value="4" name="A7XX_PERF_PC_STALL_CYCLES_VPC"/>
++	<value value="5" name="A7XX_PERF_PC_STALL_CYCLES_UCHE"/>
++	<value value="6" name="A7XX_PERF_PC_STALL_CYCLES_TESS"/>
++	<value value="7" name="A7XX_PERF_PC_STALL_CYCLES_VFD_ONLY"/>
++	<value value="8" name="A7XX_PERF_PC_STALL_CYCLES_VPC_ONLY"/>
++	<value value="9" name="A7XX_PERF_PC_PASS1_TF_STALL_CYCLES"/>
++	<value value="10" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_INDEX"/>
++	<value value="11" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_TESS_FACTOR"/>
++	<value value="12" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_VIZ_STREAM"/>
++	<value value="13" name="A7XX_PERF_PC_STARVE_CYCLES_DI"/>
++	<value value="14" name="A7XX_PERF_PC_VIS_STREAMS_LOADED"/>
++	<value value="15" name="A7XX_PERF_PC_INSTANCES"/>
++	<value value="16" name="A7XX_PERF_PC_VPC_PRIMITIVES"/>
++	<value value="17" name="A7XX_PERF_PC_DEAD_PRIM"/>
++	<value value="18" name="A7XX_PERF_PC_LIVE_PRIM"/>
++	<value value="19" name="A7XX_PERF_PC_VERTEX_HITS"/>
++	<value value="20" name="A7XX_PERF_PC_IA_VERTICES"/>
++	<value value="21" name="A7XX_PERF_PC_IA_PRIMITIVES"/>
++	<value value="22" name="A7XX_PERF_PC_RESERVED_22"/>
++	<value value="23" name="A7XX_PERF_PC_HS_INVOCATIONS"/>
++	<value value="24" name="A7XX_PERF_PC_DS_INVOCATIONS"/>
++	<value value="25" name="A7XX_PERF_PC_VS_INVOCATIONS"/>
++	<value value="26" name="A7XX_PERF_PC_GS_INVOCATIONS"/>
++	<value value="27" name="A7XX_PERF_PC_DS_PRIMITIVES"/>
++	<value value="28" name="A7XX_PERF_PC_3D_DRAWCALLS"/>
++	<value value="29" name="A7XX_PERF_PC_2D_DRAWCALLS"/>
++	<value value="30" name="A7XX_PERF_PC_NON_DRAWCALL_GLOBAL_EVENTS"/>
++	<value value="31" name="A7XX_PERF_PC_TESS_BUSY_CYCLES"/>
++	<value value="32" name="A7XX_PERF_PC_TESS_WORKING_CYCLES"/>
++	<value value="33" name="A7XX_PERF_PC_TESS_STALL_CYCLES_PC"/>
++	<value value="34" name="A7XX_PERF_PC_TESS_STARVE_CYCLES_PC"/>
++	<value value="35" name="A7XX_PERF_PC_TESS_SINGLE_PRIM_CYCLES"/>
++	<value value="36" name="A7XX_PERF_PC_TESS_PC_UV_TRANS"/>
++	<value value="37" name="A7XX_PERF_PC_TESS_PC_UV_PATCHES"/>
++	<value value="38" name="A7XX_PERF_PC_TESS_FACTOR_TRANS"/>
++	<value value="39" name="A7XX_PERF_PC_TAG_CHECKED_VERTICES"/>
++	<value value="40" name="A7XX_PERF_PC_MESH_VS_WAVES"/>
++	<value value="41" name="A7XX_PERF_PC_MESH_DRAWS"/>
++	<value value="42" name="A7XX_PERF_PC_MESH_DEAD_DRAWS"/>
++	<value value="43" name="A7XX_PERF_PC_MESH_MVIS_EN_DRAWS"/>
++	<value value="44" name="A7XX_PERF_PC_MESH_DEAD_PRIM"/>
++	<value value="45" name="A7XX_PERF_PC_MESH_LIVE_PRIM"/>
++	<value value="46" name="A7XX_PERF_PC_MESH_PA_EN_PRIM"/>
++	<value value="47" name="A7XX_PERF_PC_STARVE_CYCLES_FOR_MVIS_STREAM"/>
++	<value value="48" name="A7XX_PERF_PC_STARVE_CYCLES_PREDRAW"/>
++	<value value="49" name="A7XX_PERF_PC_STALL_CYCLES_COMPUTE_GFX"/>
++	<value value="50" name="A7XX_PERF_PC_STALL_CYCLES_GFX_COMPUTE"/>
++	<value value="51" name="A7XX_PERF_PC_TESS_PC_MULTI_PATCH_TRANS"/>
++</enum>
++
++<enum name="a7xx_vfd_perfcounter_select">
++	<value value="0" name="A7XX_PERF_VFD_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_VFD_STALL_CYCLES_UCHE"/>
++	<value value="2" name="A7XX_PERF_VFD_STALL_CYCLES_VPC_ALLOC"/>
++	<value value="3" name="A7XX_PERF_VFD_STALL_CYCLES_SP_INFO"/>
++	<value value="4" name="A7XX_PERF_VFD_STALL_CYCLES_SP_ATTR"/>
++	<value value="5" name="A7XX_PERF_VFD_STARVE_CYCLES_UCHE"/>
++	<value value="6" name="A7XX_PERF_VFD_RBUFFER_FULL"/>
++	<value value="7" name="A7XX_PERF_VFD_ATTR_INFO_FIFO_FULL"/>
++	<value value="8" name="A7XX_PERF_VFD_DECODED_ATTRIBUTE_BYTES"/>
++	<value value="9" name="A7XX_PERF_VFD_NUM_ATTRIBUTES"/>
++	<value value="10" name="A7XX_PERF_VFD_UPPER_SHADER_FIBERS"/>
++	<value value="11" name="A7XX_PERF_VFD_LOWER_SHADER_FIBERS"/>
++	<value value="12" name="A7XX_PERF_VFD_MODE_0_FIBERS"/>
++	<value value="13" name="A7XX_PERF_VFD_MODE_1_FIBERS"/>
++	<value value="14" name="A7XX_PERF_VFD_MODE_2_FIBERS"/>
++	<value value="15" name="A7XX_PERF_VFD_MODE_3_FIBERS"/>
++	<value value="16" name="A7XX_PERF_VFD_MODE_4_FIBERS"/>
++	<value value="17" name="A7XX_PERF_VFD_TOTAL_VERTICES"/>
++	<value value="18" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD"/>
++	<value value="19" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_INDEX"/>
++	<value value="20" name="A7XX_PERF_VFDP_STALL_CYCLES_VFD_PROG"/>
++	<value value="21" name="A7XX_PERF_VFDP_STARVE_CYCLES_PC"/>
++	<value value="22" name="A7XX_PERF_VFDP_VS_STAGE_WAVES"/>
++	<value value="23" name="A7XX_PERF_VFD_STALL_CYCLES_PRG_END_FE"/>
++	<value value="24" name="A7XX_PERF_VFD_STALL_CYCLES_CBSYNC"/>
++</enum>
++
++<enum name="a7xx_hlsq_perfcounter_select">
++	<value value="0" name="A7XX_PERF_HLSQ_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_HLSQ_STALL_CYCLES_UCHE"/>
++	<value value="2" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_STATE"/>
++	<value value="3" name="A7XX_PERF_HLSQ_STALL_CYCLES_SP_FS_STAGE"/>
++	<value value="4" name="A7XX_PERF_HLSQ_UCHE_LATENCY_CYCLES"/>
++	<value value="5" name="A7XX_PERF_HLSQ_UCHE_LATENCY_COUNT"/>
++	<value value="6" name="A7XX_PERF_HLSQ_RESERVED_6"/>
++	<value value="7" name="A7XX_PERF_HLSQ_RESERVED_7"/>
++	<value value="8" name="A7XX_PERF_HLSQ_RESERVED_8"/>
++	<value value="9" name="A7XX_PERF_HLSQ_RESERVED_9"/>
++	<value value="10" name="A7XX_PERF_HLSQ_COMPUTE_DRAWCALLS"/>
++	<value value="11" name="A7XX_PERF_HLSQ_FS_DATA_WAIT_PROGRAMMING"/>
++	<value value="12" name="A7XX_PERF_HLSQ_DUAL_FS_PROG_ACTIVE"/>
++	<value value="13" name="A7XX_PERF_HLSQ_DUAL_VS_PROG_ACTIVE"/>
++	<value value="14" name="A7XX_PERF_HLSQ_FS_BATCH_COUNT_ZERO"/>
++	<value value="15" name="A7XX_PERF_HLSQ_VS_BATCH_COUNT_ZERO"/>
++	<value value="16" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_QUAD"/>
++	<value value="17" name="A7XX_PERF_HLSQ_WAVE_PENDING_NO_PRIM_BASE"/>
++	<value value="18" name="A7XX_PERF_HLSQ_STALL_CYCLES_VPC"/>
++	<value value="19" name="A7XX_PERF_HLSQ_RESERVED_19"/>
++	<value value="20" name="A7XX_PERF_HLSQ_DRAW_MODE_SWITCH_VSFS_SYNC"/>
++	<value value="21" name="A7XX_PERF_HLSQ_VSBR_STALL_CYCLES"/>
++	<value value="22" name="A7XX_PERF_HLSQ_FS_STALL_CYCLES"/>
++	<value value="23" name="A7XX_PERF_HLSQ_LPAC_STALL_CYCLES"/>
++	<value value="24" name="A7XX_PERF_HLSQ_BV_STALL_CYCLES"/>
++	<value value="25" name="A7XX_PERF_HLSQ_VSBR_DEREF_CYCLES"/>
++	<value value="26" name="A7XX_PERF_HLSQ_FS_DEREF_CYCLES"/>
++	<value value="27" name="A7XX_PERF_HLSQ_LPAC_DEREF_CYCLES"/>
++	<value value="28" name="A7XX_PERF_HLSQ_BV_DEREF_CYCLES"/>
++	<value value="29" name="A7XX_PERF_HLSQ_VSBR_S2W_CYCLES"/>
++	<value value="30" name="A7XX_PERF_HLSQ_FS_S2W_CYCLES"/>
++	<value value="31" name="A7XX_PERF_HLSQ_LPAC_S2W_CYCLES"/>
++	<value value="32" name="A7XX_PERF_HLSQ_BV_S2W_CYCLES"/>
++	<value value="33" name="A7XX_PERF_HLSQ_VSBR_WAIT_FS_S2W"/>
++	<value value="34" name="A7XX_PERF_HLSQ_FS_WAIT_VS_S2W"/>
++	<value value="35" name="A7XX_PERF_HLSQ_LPAC_WAIT_VS_S2W"/>
++	<value value="36" name="A7XX_PERF_HLSQ_BV_WAIT_FS_S2W"/>
++	<value value="37" name="A7XX_PERF_HLSQ_VS_WAIT_CONST_RESOURCE"/>
++	<value value="38" name="A7XX_PERF_HLSQ_FS_WAIT_SAME_VS_S2W"/>
++	<value value="39" name="A7XX_PERF_HLSQ_FS_STARVING_SP"/>
++	<value value="40" name="A7XX_PERF_HLSQ_VS_DATA_WAIT_PROGRAMMING"/>
++	<value value="41" name="A7XX_PERF_HLSQ_BV_DATA_WAIT_PROGRAMMING"/>
++	<value value="42" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_VS"/>
++	<value value="43" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_VS"/>
++	<value value="44" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_FS"/>
++	<value value="45" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_FS"/>
++	<value value="46" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_BV"/>
++	<value value="47" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_BV"/>
++	<value value="48" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXTS_LPAC"/>
++	<value value="49" name="A7XX_PERF_HLSQ_STPROC_WAVE_CONTEXT_CYCLES_LPAC"/>
++	<value value="50" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_VS"/>
++	<value value="51" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_FS"/>
++	<value value="52" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_BV"/>
++	<value value="53" name="A7XX_PERF_HLSQ_SPTROC_STCHE_WARMUP_INC_LPAC"/>
++	<value value="54" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_VS"/>
++	<value value="55" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_FS"/>
++	<value value="56" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_BV"/>
++	<value value="57" name="A7XX_PERF_HLSQ_SPTROC_STCHE_MISS_INC_LPAC"/>
++</enum>
++
++<enum name="a7xx_vpc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_VPC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_VPC_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_VPC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="A7XX_PERF_VPC_STALL_CYCLES_VFD_WACK"/>
++	<value value="4" name="A7XX_PERF_VPC_STALL_CYCLES_HLSQ_PRIM_ALLOC"/>
++	<value value="5" name="A7XX_PERF_VPC_RESERVED_5"/>
++	<value value="6" name="A7XX_PERF_VPC_STALL_CYCLES_SP_LM"/>
++	<value value="7" name="A7XX_PERF_VPC_STARVE_CYCLES_SP"/>
++	<value value="8" name="A7XX_PERF_VPC_STARVE_CYCLES_LRZ"/>
++	<value value="9" name="A7XX_PERF_VPC_PC_PRIMITIVES"/>
++	<value value="10" name="A7XX_PERF_VPC_SP_COMPONENTS"/>
++	<value value="11" name="A7XX_PERF_VPC_STALL_CYCLES_VPCRAM_POS"/>
++	<value value="12" name="A7XX_PERF_VPC_LRZ_ASSIGN_PRIMITIVES"/>
++	<value value="13" name="A7XX_PERF_VPC_RB_VISIBLE_PRIMITIVES"/>
++	<value value="14" name="A7XX_PERF_VPC_LM_TRANSACTION"/>
++	<value value="15" name="A7XX_PERF_VPC_STREAMOUT_TRANSACTION"/>
++	<value value="16" name="A7XX_PERF_VPC_VS_BUSY_CYCLES"/>
++	<value value="17" name="A7XX_PERF_VPC_PS_BUSY_CYCLES"/>
++	<value value="18" name="A7XX_PERF_VPC_VS_WORKING_CYCLES"/>
++	<value value="19" name="A7XX_PERF_VPC_PS_WORKING_CYCLES"/>
++	<value value="20" name="A7XX_PERF_VPC_STARVE_CYCLES_RB"/>
++	<value value="21" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_POS"/>
++	<value value="22" name="A7XX_PERF_VPC_WIT_FULL_CYCLES"/>
++	<value value="23" name="A7XX_PERF_VPC_VPCRAM_FULL_CYCLES"/>
++	<value value="24" name="A7XX_PERF_VPC_LM_FULL_WAIT_FOR_INTP_END"/>
++	<value value="25" name="A7XX_PERF_VPC_NUM_VPCRAM_WRITE"/>
++	<value value="26" name="A7XX_PERF_VPC_NUM_VPCRAM_READ_SO"/>
++	<value value="27" name="A7XX_PERF_VPC_NUM_ATTR_REQ_LM"/>
++	<value value="28" name="A7XX_PERF_VPC_STALL_CYCLE_TSE"/>
++	<value value="29" name="A7XX_PERF_VPC_TSE_PRIMITIVES"/>
++	<value value="30" name="A7XX_PERF_VPC_GS_PRIMITIVES"/>
++	<value value="31" name="A7XX_PERF_VPC_TSE_TRANSACTIONS"/>
++	<value value="32" name="A7XX_PERF_VPC_STALL_CYCLES_CCU"/>
++	<value value="33" name="A7XX_PERF_VPC_NUM_WM_HIT"/>
++	<value value="34" name="A7XX_PERF_VPC_STALL_DQ_WACK"/>
++	<value value="35" name="A7XX_PERF_VPC_STALL_CYCLES_CCHE"/>
++	<value value="36" name="A7XX_PERF_VPC_STARVE_CYCLES_CCHE"/>
++	<value value="37" name="A7XX_PERF_VPC_NUM_PA_REQ"/>
++	<value value="38" name="A7XX_PERF_VPC_NUM_LM_REQ_HIT"/>
++	<value value="39" name="A7XX_PERF_VPC_CCHE_REQBUF_FULL"/>
++	<value value="40" name="A7XX_PERF_VPC_STALL_CYCLES_LM_ACK"/>
++	<value value="41" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_FE"/>
++	<value value="42" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_PCVS"/>
++	<value value="43" name="A7XX_PERF_VPC_STALL_CYCLES_PRG_END_VPCPS"/>
++</enum>
++
++<enum name="a7xx_tse_perfcounter_select">
++	<value value="0" name="A7XX_PERF_TSE_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_TSE_CLIPPING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_TSE_STALL_CYCLES_RAS"/>
++	<value value="3" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_BARYPLANE"/>
++	<value value="4" name="A7XX_PERF_TSE_STALL_CYCLES_LRZ_ZPLANE"/>
++	<value value="5" name="A7XX_PERF_TSE_STARVE_CYCLES_PC"/>
++	<value value="6" name="A7XX_PERF_TSE_INPUT_PRIM"/>
++	<value value="7" name="A7XX_PERF_TSE_INPUT_NULL_PRIM"/>
++	<value value="8" name="A7XX_PERF_TSE_TRIVAL_REJ_PRIM"/>
++	<value value="9" name="A7XX_PERF_TSE_CLIPPED_PRIM"/>
++	<value value="10" name="A7XX_PERF_TSE_ZERO_AREA_PRIM"/>
++	<value value="11" name="A7XX_PERF_TSE_FACENESS_CULLED_PRIM"/>
++	<value value="12" name="A7XX_PERF_TSE_ZERO_PIXEL_PRIM"/>
++	<value value="13" name="A7XX_PERF_TSE_OUTPUT_NULL_PRIM"/>
++	<value value="14" name="A7XX_PERF_TSE_OUTPUT_VISIBLE_PRIM"/>
++	<value value="15" name="A7XX_PERF_TSE_CINVOCATION"/>
++	<value value="16" name="A7XX_PERF_TSE_CPRIMITIVES"/>
++	<value value="17" name="A7XX_PERF_TSE_2D_INPUT_PRIM"/>
++	<value value="18" name="A7XX_PERF_TSE_2D_ALIVE_CYCLES"/>
++	<value value="19" name="A7XX_PERF_TSE_CLIP_PLANES"/>
++</enum>
++
++<enum name="a7xx_ras_perfcounter_select">
++	<value value="0" name="A7XX_PERF_RAS_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_RAS_SUPERTILE_ACTIVE_CYCLES"/>
++	<value value="2" name="A7XX_PERF_RAS_STALL_CYCLES_LRZ"/>
++	<value value="3" name="A7XX_PERF_RAS_STARVE_CYCLES_TSE"/>
++	<value value="4" name="A7XX_PERF_RAS_SUPER_TILES"/>
++	<value value="5" name="A7XX_PERF_RAS_8X4_TILES"/>
++	<value value="6" name="A7XX_PERF_RAS_MASKGEN_ACTIVE"/>
++	<value value="7" name="A7XX_PERF_RAS_FULLY_COVERED_SUPER_TILES"/>
++	<value value="8" name="A7XX_PERF_RAS_FULLY_COVERED_8X4_TILES"/>
++	<value value="9" name="A7XX_PERF_RAS_PRIM_KILLED_INVISILBE"/>
++	<value value="10" name="A7XX_PERF_RAS_SUPERTILE_GEN_ACTIVE_CYCLES"/>
++	<value value="11" name="A7XX_PERF_RAS_LRZ_INTF_WORKING_CYCLES"/>
++	<value value="12" name="A7XX_PERF_RAS_BLOCKS"/>
++	<value value="13" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_0_WORKING_CC_l2"/>
++	<value value="14" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_1_WORKING_CC_l2"/>
++	<value value="15" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_2_WORKING_CC_l2"/>
++	<value value="16" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_3_WORKING_CC_l2"/>
++	<value value="17" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_4_WORKING_CC_l2"/>
++	<value value="18" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_5_WORKING_CC_l2"/>
++	<value value="19" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_6_WORKING_CC_l2"/>
++	<value value="20" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_7_WORKING_CC_l2"/>
++	<value value="21" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_8_WORKING_CC_l2"/>
++	<value value="22" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_9_WORKING_CC_l2"/>
++	<value value="23" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_10_WORKING_CC_l2"/>
++	<value value="24" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_11_WORKING_CC_l2"/>
++	<value value="25" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_12_WORKING_CC_l2"/>
++	<value value="26" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_13_WORKING_CC_l2"/>
++	<value value="27" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_14_WORKING_CC_l2"/>
++	<value value="28" name="A7XX_PERF_RAS_SAMPLE_MASK_GEN_LANE_15_WORKING_CC_l2"/>
++	<value value="29" name="A7XX_PERF_RAS_FALSE_PARTIAL_STILE"/>
++
++</enum>
++
++<enum name="a7xx_uche_perfcounter_select">
++	<value value="0" name="A7XX_PERF_UCHE_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_UCHE_STALL_CYCLES_ARBITER"/>
++	<value value="2" name="A7XX_PERF_UCHE_VBIF_LATENCY_CYCLES"/>
++	<value value="3" name="A7XX_PERF_UCHE_VBIF_LATENCY_SAMPLES"/>
++	<value value="4" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_TP"/>
++	<value value="5" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_VFD"/>
++	<value value="6" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_HLSQ"/>
++	<value value="7" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_LRZ"/>
++	<value value="8" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_SP"/>
++	<value value="9" name="A7XX_PERF_UCHE_READ_REQUESTS_TP"/>
++	<value value="10" name="A7XX_PERF_UCHE_READ_REQUESTS_VFD"/>
++	<value value="11" name="A7XX_PERF_UCHE_READ_REQUESTS_HLSQ"/>
++	<value value="12" name="A7XX_PERF_UCHE_READ_REQUESTS_LRZ"/>
++	<value value="13" name="A7XX_PERF_UCHE_READ_REQUESTS_SP"/>
++	<value value="14" name="A7XX_PERF_UCHE_WRITE_REQUESTS_LRZ"/>
++	<value value="15" name="A7XX_PERF_UCHE_WRITE_REQUESTS_SP"/>
++	<value value="16" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VPC"/>
++	<value value="17" name="A7XX_PERF_UCHE_WRITE_REQUESTS_VSC"/>
++	<value value="18" name="A7XX_PERF_UCHE_EVICTS"/>
++	<value value="19" name="A7XX_PERF_UCHE_BANK_REQ0"/>
++	<value value="20" name="A7XX_PERF_UCHE_BANK_REQ1"/>
++	<value value="21" name="A7XX_PERF_UCHE_BANK_REQ2"/>
++	<value value="22" name="A7XX_PERF_UCHE_BANK_REQ3"/>
++	<value value="23" name="A7XX_PERF_UCHE_BANK_REQ4"/>
++	<value value="24" name="A7XX_PERF_UCHE_BANK_REQ5"/>
++	<value value="25" name="A7XX_PERF_UCHE_BANK_REQ6"/>
++	<value value="26" name="A7XX_PERF_UCHE_BANK_REQ7"/>
++	<value value="27" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH0"/>
++	<value value="28" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_CH1"/>
++	<value value="29" name="A7XX_PERF_UCHE_GMEM_READ_BEATS"/>
++	<value value="30" name="A7XX_PERF_UCHE_TPH_REF_FULL"/>
++	<value value="31" name="A7XX_PERF_UCHE_TPH_VICTIM_FULL"/>
++	<value value="32" name="A7XX_PERF_UCHE_TPH_EXT_FULL"/>
++	<value value="33" name="A7XX_PERF_UCHE_VBIF_STALL_WRITE_DATA"/>
++	<value value="34" name="A7XX_PERF_UCHE_DCMP_LATENCY_SAMPLES"/>
++	<value value="35" name="A7XX_PERF_UCHE_DCMP_LATENCY_CYCLES"/>
++	<value value="36" name="A7XX_PERF_UCHE_VBIF_READ_BEATS_PC"/>
++	<value value="37" name="A7XX_PERF_UCHE_READ_REQUESTS_PC"/>
++	<value value="38" name="A7XX_PERF_UCHE_RAM_READ_REQ"/>
++	<value value="39" name="A7XX_PERF_UCHE_RAM_WRITE_REQ"/>
++	<value value="40" name="A7XX_PERF_UCHE_STARVED_CYCLES_VBIF_DECMP"/>
++	<value value="41" name="A7XX_PERF_UCHE_STALL_CYCLES_DECMP"/>
++	<value value="42" name="A7XX_PERF_UCHE_ARBITER_STALL_CYCLES_VBIF"/>
++	<value value="43" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_UBWC"/>
++	<value value="44" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_NONUBWC"/>
++	<value value="45" name="A7XX_PERF_UCHE_READ_REQUESTS_TP_GMEM"/>
++	<value value="46" name="A7XX_PERF_UCHE_LONG_LINE_ALL_EVICTS_KAILUA"/>
++	<value value="47" name="A7XX_PERF_UCHE_LONG_LINE_PARTIAL_EVICTS_KAILUA"/>
++	<value value="48" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_CCHE"/>
++	<value value="49" name="A7XX_PERF_UCHE_TPH_CONFLICT_CL_OTHER_KAILUA"/>
++	<value value="50" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_CCHE"/>
++	<value value="51" name="A7XX_PERF_UCHE_DBANK_CONFLICT_CL_OTHER_CLIENTS"/>
++	<value value="52" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH0"/>
++	<value value="53" name="A7XX_PERF_UCHE_VBIF_WRITE_BEATS_CH1"/>
++	<value value="54" name="A7XX_PERF_UCHE_CCHE_TPH_QUEUE_FULL"/>
++	<value value="55" name="A7XX_PERF_UCHE_CCHE_DPH_QUEUE_FULL"/>
++	<value value="56" name="A7XX_PERF_UCHE_GMEM_WRITE_BEATS"/>
++	<value value="57" name="A7XX_PERF_UCHE_UBWC_READ_BEATS"/>
++	<value value="58" name="A7XX_PERF_UCHE_UBWC_WRITE_BEATS"/>
++</enum>
++
++<enum name="a7xx_tp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_TP_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_TP_STALL_CYCLES_UCHE"/>
++	<value value="2" name="A7XX_PERF_TP_LATENCY_CYCLES"/>
++	<value value="3" name="A7XX_PERF_TP_LATENCY_TRANS"/>
++	<value value="4" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_SAMPLES"/>
++	<value value="5" name="A7XX_PERF_TP_FLAG_FIFO_DELAY_CYCLES"/>
++	<value value="6" name="A7XX_PERF_TP_L1_CACHELINE_REQUESTS"/>
++	<value value="7" name="A7XX_PERF_TP_L1_CACHELINE_MISSES"/>
++	<value value="8" name="A7XX_PERF_TP_SP_TP_TRANS"/>
++	<value value="9" name="A7XX_PERF_TP_TP_SP_TRANS"/>
++	<value value="10" name="A7XX_PERF_TP_OUTPUT_PIXELS"/>
++	<value value="11" name="A7XX_PERF_TP_FILTER_WORKLOAD_16BIT"/>
++	<value value="12" name="A7XX_PERF_TP_FILTER_WORKLOAD_32BIT"/>
++	<value value="13" name="A7XX_PERF_TP_QUADS_RECEIVED"/>
++	<value value="14" name="A7XX_PERF_TP_QUADS_OFFSET"/>
++	<value value="15" name="A7XX_PERF_TP_QUADS_SHADOW"/>
++	<value value="16" name="A7XX_PERF_TP_QUADS_ARRAY"/>
++	<value value="17" name="A7XX_PERF_TP_QUADS_GRADIENT"/>
++	<value value="18" name="A7XX_PERF_TP_QUADS_1D"/>
++	<value value="19" name="A7XX_PERF_TP_QUADS_2D"/>
++	<value value="20" name="A7XX_PERF_TP_QUADS_BUFFER"/>
++	<value value="21" name="A7XX_PERF_TP_QUADS_3D"/>
++	<value value="22" name="A7XX_PERF_TP_QUADS_CUBE"/>
++	<value value="23" name="A7XX_PERF_TP_DIVERGENT_QUADS_RECEIVED"/>
++	<value value="24" name="A7XX_PERF_TP_PRT_NON_RESIDENT_EVENTS"/>
++	<value value="25" name="A7XX_PERF_TP_OUTPUT_PIXELS_POINT"/>
++	<value value="26" name="A7XX_PERF_TP_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="27" name="A7XX_PERF_TP_OUTPUT_PIXELS_MIP"/>
++	<value value="28" name="A7XX_PERF_TP_OUTPUT_PIXELS_ANISO"/>
++	<value value="29" name="A7XX_PERF_TP_OUTPUT_PIXELS_ZERO_LOD"/>
++	<value value="30" name="A7XX_PERF_TP_FLAG_CACHE_REQUESTS"/>
++	<value value="31" name="A7XX_PERF_TP_FLAG_CACHE_MISSES"/>
++	<value value="32" name="A7XX_PERF_TP_L1_5_L2_REQUESTS"/>
++	<value value="33" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS"/>
++	<value value="34" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_POINT"/>
++	<value value="35" name="A7XX_PERF_TP_2D_OUTPUT_PIXELS_BILINEAR"/>
++	<value value="36" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_16BIT"/>
++	<value value="37" name="A7XX_PERF_TP_2D_FILTER_WORKLOAD_32BIT"/>
++	<value value="38" name="A7XX_PERF_TP_TPA2TPC_TRANS"/>
++	<value value="39" name="A7XX_PERF_TP_L1_MISSES_ASTC_1TILE"/>
++	<value value="40" name="A7XX_PERF_TP_L1_MISSES_ASTC_2TILE"/>
++	<value value="41" name="A7XX_PERF_TP_L1_MISSES_ASTC_4TILE"/>
++	<value value="42" name="A7XX_PERF_TP_L1_5_COMPRESS_REQS"/>
++	<value value="43" name="A7XX_PERF_TP_L1_5_L2_COMPRESS_MISS"/>
++	<value value="44" name="A7XX_PERF_TP_L1_BANK_CONFLICT"/>
++	<value value="45" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_CYCLES"/>
++	<value value="46" name="A7XX_PERF_TP_L1_5_MISS_LATENCY_TRANS"/>
++	<value value="47" name="A7XX_PERF_TP_QUADS_CONSTANT_MULTIPLIED"/>
++	<value value="48" name="A7XX_PERF_TP_FRONTEND_WORKING_CYCLES"/>
++	<value value="49" name="A7XX_PERF_TP_L1_TAG_WORKING_CYCLES"/>
++	<value value="50" name="A7XX_PERF_TP_L1_DATA_WRITE_WORKING_CYCLES"/>
++	<value value="51" name="A7XX_PERF_TP_PRE_L1_DECOM_WORKING_CYCLES"/>
++	<value value="52" name="A7XX_PERF_TP_BACKEND_WORKING_CYCLES"/>
++	<value value="53" name="A7XX_PERF_TP_L1_5_CACHE_WORKING_CYCLES"/>
++	<value value="54" name="A7XX_PERF_TP_STARVE_CYCLES_SP"/>
++	<value value="55" name="A7XX_PERF_TP_STARVE_CYCLES_UCHE"/>
++	<value value="56" name="A7XX_PERF_TP_STALL_CYCLES_UFC"/>
++	<value value="57" name="A7XX_PERF_TP_FORMAT_DECOMP"/>
++	<value value="58" name="A7XX_PERF_TP_FILTER_POINT_FP16"/>
++	<value value="59" name="A7XX_PERF_TP_FILTER_POINT_FP32"/>
++	<value value="60" name="A7XX_PERF_TP_LATENCY_FIFO_FULL"/>
++	<value value="61" name="A7XX_PERF_TP_RESERVED_61"/>
++	<value value="62" name="A7XX_PERF_TP_RESERVED_62"/>
++	<value value="63" name="A7XX_PERF_TP_RESERVED_63"/>
++	<value value="64" name="A7XX_PERF_TP_RESERVED_64"/>
++	<value value="65" name="A7XX_PERF_TP_RESERVED_65"/>
++	<value value="66" name="A7XX_PERF_TP_RESERVED_66"/>
++	<value value="67" name="A7XX_PERF_TP_RESERVED_67"/>
++	<value value="68" name="A7XX_PERF_TP_RESERVED_68"/>
++	<value value="69" name="A7XX_PERF_TP_RESERVED_69"/>
++	<value value="70" name="A7XX_PERF_TP_RESERVED_70"/>
++	<value value="71" name="A7XX_PERF_TP_RESERVED_71"/>
++	<value value="72" name="A7XX_PERF_TP_RESERVED_72"/>
++	<value value="73" name="A7XX_PERF_TP_RESERVED_73"/>
++	<value value="74" name="A7XX_PERF_TP_RESERVED_74"/>
++	<value value="75" name="A7XX_PERF_TP_RESERVED_75"/>
++	<value value="76" name="A7XX_PERF_TP_RESERVED_76"/>
++	<value value="77" name="A7XX_PERF_TP_RESERVED_77"/>
++	<value value="78" name="A7XX_PERF_TP_RESERVED_78"/>
++	<value value="79" name="A7XX_PERF_TP_RESERVED_79"/>
++	<value value="80" name="A7XX_PERF_TP_RESERVED_80"/>
++	<value value="81" name="A7XX_PERF_TP_RESERVED_81"/>
++	<value value="82" name="A7XX_PERF_TP_RESERVED_82"/>
++	<value value="83" name="A7XX_PERF_TP_RESERVED_83"/>
++	<value value="84" name="A7XX_PERF_TP_RESERVED_84"/>
++	<value value="85" name="A7XX_PERF_TP_RESERVED_85"/>
++	<value value="86" name="A7XX_PERF_TP_RESERVED_86"/>
++	<value value="87" name="A7XX_PERF_TP_RESERVED_87"/>
++	<value value="88" name="A7XX_PERF_TP_RESERVED_88"/>
++	<value value="89" name="A7XX_PERF_TP_RESERVED_89"/>
++	<value value="90" name="A7XX_PERF_TP_RESERVED_90"/>
++	<value value="91" name="A7XX_PERF_TP_RESERVED_91"/>
++	<value value="92" name="A7XX_PERF_TP_RESERVED_92"/>
++	<value value="93" name="A7XX_PERF_TP_RESERVED_93"/>
++	<value value="94" name="A7XX_PERF_TP_RESERVED_94"/>
++	<value value="95" name="A7XX_PERF_TP_RESERVED_95"/>
++	<value value="96" name="A7XX_PERF_TP_RESERVED_96"/>
++	<value value="97" name="A7XX_PERF_TP_RESERVED_97"/>
++	<value value="98" name="A7XX_PERF_TP_RESERVED_98"/>
++	<value value="99" name="A7XX_PERF_TP_RESERVED_99"/>
++	<value value="100" name="A7XX_PERF_TP_RESERVED_100"/>
++	<value value="101" name="A7XX_PERF_TP_RESERVED_101"/>
++	<value value="102" name="A7XX_PERF_TP_RESERVED_102"/>
++	<value value="103" name="A7XX_PERF_TP_RESERVED_103"/>
++	<value value="104" name="A7XX_PERF_TP_RESERVED_104"/>
++	<value value="105" name="A7XX_PERF_TP_RESERVED_105"/>
++	<value value="106" name="A7XX_PERF_TP_RESERVED_106"/>
++	<value value="107" name="A7XX_PERF_TP_RESERVED_107"/>
++	<value value="108" name="A7XX_PERF_TP_RESERVED_108"/>
++	<value value="109" name="A7XX_PERF_TP_RESERVED_109"/>
++	<value value="110" name="A7XX_PERF_TP_RESERVED_110"/>
++	<value value="111" name="A7XX_PERF_TP_RESERVED_111"/>
++	<value value="112" name="A7XX_PERF_TP_RESERVED_112"/>
++	<value value="113" name="A7XX_PERF_TP_RESERVED_113"/>
++	<value value="114" name="A7XX_PERF_TP_RESERVED_114"/>
++	<value value="115" name="A7XX_PERF_TP_RESERVED_115"/>
++	<value value="116" name="A7XX_PERF_TP_RESERVED_116"/>
++	<value value="117" name="A7XX_PERF_TP_RESERVED_117"/>
++	<value value="118" name="A7XX_PERF_TP_RESERVED_118"/>
++	<value value="119" name="A7XX_PERF_TP_RESERVED_119"/>
++	<value value="120" name="A7XX_PERF_TP_RESERVED_120"/>
++	<value value="121" name="A7XX_PERF_TP_RESERVED_121"/>
++	<value value="122" name="A7XX_PERF_TP_RESERVED_122"/>
++	<value value="123" name="A7XX_PERF_TP_RESERVED_123"/>
++	<value value="124" name="A7XX_PERF_TP_RESERVED_124"/>
++	<value value="125" name="A7XX_PERF_TP_RESERVED_125"/>
++	<value value="126" name="A7XX_PERF_TP_RESERVED_126"/>
++	<value value="127" name="A7XX_PERF_TP_RESERVED_127"/>
++	<value value="128" name="A7XX_PERF_TP_FORMAT_DECOMP_BILINEAR"/>
++	<value value="129" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP16"/>
++	<value value="130" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP16"/>
++	<value value="131" name="A7XX_PERF_TP_PACKED_POINT_BOTH_VALID_FP32"/>
++	<value value="132" name="A7XX_PERF_TP_PACKED_POINT_SINGLE_VALID_FP32"/>
++</enum>
++
++<enum name="a7xx_sp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_SP_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_SP_ALU_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_SP_EFU_WORKING_CYCLES"/>
++	<value value="3" name="A7XX_PERF_SP_STALL_CYCLES_VPC"/>
++	<value value="4" name="A7XX_PERF_SP_STALL_CYCLES_TP"/>
++	<value value="5" name="A7XX_PERF_SP_STALL_CYCLES_UCHE"/>
++	<value value="6" name="A7XX_PERF_SP_STALL_CYCLES_RB"/>
++	<value value="7" name="A7XX_PERF_SP_NON_EXECUTION_CYCLES"/>
++	<value value="8" name="A7XX_PERF_SP_WAVE_CONTEXTS"/>
++	<value value="9" name="A7XX_PERF_SP_WAVE_CONTEXT_CYCLES"/>
++	<value value="10" name="A7XX_PERF_SP_STAGE_WAVE_CYCLES"/>
++	<value value="11" name="A7XX_PERF_SP_STAGE_WAVE_SAMPLES"/>
++	<value value="12" name="A7XX_PERF_SP_VS_STAGE_WAVE_CYCLES"/>
++	<value value="13" name="A7XX_PERF_SP_VS_STAGE_WAVE_SAMPLES"/>
++	<value value="14" name="A7XX_PERF_SP_FS_STAGE_DURATION_CYCLES"/>
++	<value value="15" name="A7XX_PERF_SP_VS_STAGE_DURATION_CYCLES"/>
++	<value value="16" name="A7XX_PERF_SP_WAVE_CTRL_CYCLES"/>
++	<value value="17" name="A7XX_PERF_SP_WAVE_LOAD_CYCLES"/>
++	<value value="18" name="A7XX_PERF_SP_WAVE_EMIT_CYCLES"/>
++	<value value="19" name="A7XX_PERF_SP_WAVE_NOP_CYCLES"/>
++	<value value="20" name="A7XX_PERF_SP_WAVE_WAIT_CYCLES"/>
++	<value value="21" name="A7XX_PERF_SP_WAVE_FETCH_CYCLES"/>
++	<value value="22" name="A7XX_PERF_SP_WAVE_IDLE_CYCLES"/>
++	<value value="23" name="A7XX_PERF_SP_WAVE_END_CYCLES"/>
++	<value value="24" name="A7XX_PERF_SP_WAVE_LONG_SYNC_CYCLES"/>
++	<value value="25" name="A7XX_PERF_SP_WAVE_SHORT_SYNC_CYCLES"/>
++	<value value="26" name="A7XX_PERF_SP_WAVE_JOIN_CYCLES"/>
++	<value value="27" name="A7XX_PERF_SP_LM_LOAD_INSTRUCTIONS"/>
++	<value value="28" name="A7XX_PERF_SP_LM_STORE_INSTRUCTIONS"/>
++	<value value="29" name="A7XX_PERF_SP_LM_ATOMICS"/>
++	<value value="30" name="A7XX_PERF_SP_GM_LOAD_INSTRUCTIONS"/>
++	<value value="31" name="A7XX_PERF_SP_GM_STORE_INSTRUCTIONS"/>
++	<value value="32" name="A7XX_PERF_SP_GM_ATOMICS"/>
++	<value value="33" name="A7XX_PERF_SP_VS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="34" name="A7XX_PERF_SP_VS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="35" name="A7XX_PERF_SP_VS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="36" name="A7XX_PERF_SP_VS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="37" name="A7XX_PERF_SP_FS_STAGE_TEX_INSTRUCTIONS"/>
++	<value value="38" name="A7XX_PERF_SP_FS_STAGE_CFLOW_INSTRUCTIONS"/>
++	<value value="39" name="A7XX_PERF_SP_FS_STAGE_EFU_INSTRUCTIONS"/>
++	<value value="40" name="A7XX_PERF_SP_FS_STAGE_FULL_ALU_INSTRUCTIONS"/>
++	<value value="41" name="A7XX_PERF_SP_FS_STAGE_HALF_ALU_INSTRUCTIONS"/>
++	<value value="42" name="A7XX_PERF_SP_FS_STAGE_BARY_INSTRUCTIONS"/>
++	<value value="43" name="A7XX_PERF_SP_VS_INSTRUCTIONS"/>
++	<value value="44" name="A7XX_PERF_SP_FS_INSTRUCTIONS"/>
++	<value value="45" name="A7XX_PERF_SP_ADDR_LOCK_COUNT"/>
++	<value value="46" name="A7XX_PERF_SP_UCHE_READ_TRANS"/>
++	<value value="47" name="A7XX_PERF_SP_UCHE_WRITE_TRANS"/>
++	<value value="48" name="A7XX_PERF_SP_EXPORT_VPC_TRANS"/>
++	<value value="49" name="A7XX_PERF_SP_EXPORT_RB_TRANS"/>
++	<value value="50" name="A7XX_PERF_SP_PIXELS_KILLED"/>
++	<value value="51" name="A7XX_PERF_SP_ICL1_REQUESTS"/>
++	<value value="52" name="A7XX_PERF_SP_ICL1_MISSES"/>
++	<value value="53" name="A7XX_PERF_SP_HS_INSTRUCTIONS"/>
++	<value value="54" name="A7XX_PERF_SP_DS_INSTRUCTIONS"/>
++	<value value="55" name="A7XX_PERF_SP_GS_INSTRUCTIONS"/>
++	<value value="56" name="A7XX_PERF_SP_CS_INSTRUCTIONS"/>
++	<value value="57" name="A7XX_PERF_SP_GPR_READ"/>
++	<value value="58" name="A7XX_PERF_SP_GPR_WRITE"/>
++	<value value="59" name="A7XX_PERF_SP_FS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="60" name="A7XX_PERF_SP_VS_STAGE_HALF_EFU_INSTRUCTIONS"/>
++	<value value="61" name="A7XX_PERF_SP_LM_BANK_CONFLICTS"/>
++	<value value="62" name="A7XX_PERF_SP_TEX_CONTROL_WORKING_CYCLES"/>
++	<value value="63" name="A7XX_PERF_SP_LOAD_CONTROL_WORKING_CYCLES"/>
++	<value value="64" name="A7XX_PERF_SP_FLOW_CONTROL_WORKING_CYCLES"/>
++	<value value="65" name="A7XX_PERF_SP_LM_WORKING_CYCLES"/>
++	<value value="66" name="A7XX_PERF_SP_DISPATCHER_WORKING_CYCLES"/>
++	<value value="67" name="A7XX_PERF_SP_SEQUENCER_WORKING_CYCLES"/>
++	<value value="68" name="A7XX_PERF_SP_LOW_EFFICIENCY_STARVED_BY_TP"/>
++	<value value="69" name="A7XX_PERF_SP_STARVE_CYCLES_HLSQ"/>
++	<value value="70" name="A7XX_PERF_SP_NON_EXECUTION_LS_CYCLES"/>
++	<value value="71" name="A7XX_PERF_SP_WORKING_EU"/>
++	<value value="72" name="A7XX_PERF_SP_ANY_EU_WORKING"/>
++	<value value="73" name="A7XX_PERF_SP_WORKING_EU_FS_STAGE"/>
++	<value value="74" name="A7XX_PERF_SP_ANY_EU_WORKING_FS_STAGE"/>
++	<value value="75" name="A7XX_PERF_SP_WORKING_EU_VS_STAGE"/>
++	<value value="76" name="A7XX_PERF_SP_ANY_EU_WORKING_VS_STAGE"/>
++	<value value="77" name="A7XX_PERF_SP_WORKING_EU_CS_STAGE"/>
++	<value value="78" name="A7XX_PERF_SP_ANY_EU_WORKING_CS_STAGE"/>
++	<value value="79" name="A7XX_PERF_SP_GPR_READ_PREFETCH"/>
++	<value value="80" name="A7XX_PERF_SP_GPR_READ_CONFLICT"/>
++	<value value="81" name="A7XX_PERF_SP_GPR_WRITE_CONFLICT"/>
++	<value value="82" name="A7XX_PERF_SP_GM_LOAD_LATENCY_CYCLES"/>
++	<value value="83" name="A7XX_PERF_SP_GM_LOAD_LATENCY_SAMPLES"/>
++	<value value="84" name="A7XX_PERF_SP_EXECUTABLE_WAVES"/>
++	<value value="85" name="A7XX_PERF_SP_ICL1_MISS_FETCH_CYCLES"/>
++	<value value="86" name="A7XX_PERF_SP_WORKING_EU_LPAC"/>
++	<value value="87" name="A7XX_PERF_SP_BYPASS_BUSY_CYCLES"/>
++	<value value="88" name="A7XX_PERF_SP_ANY_EU_WORKING_LPAC"/>
++	<value value="89" name="A7XX_PERF_SP_WAVE_ALU_CYCLES"/>
++	<value value="90" name="A7XX_PERF_SP_WAVE_EFU_CYCLES"/>
++	<value value="91" name="A7XX_PERF_SP_WAVE_INT_CYCLES"/>
++	<value value="92" name="A7XX_PERF_SP_WAVE_CSP_CYCLES"/>
++	<value value="93" name="A7XX_PERF_SP_EWAVE_CONTEXTS"/>
++	<value value="94" name="A7XX_PERF_SP_EWAVE_CONTEXT_CYCLES"/>
++	<value value="95" name="A7XX_PERF_SP_LPAC_BUSY_CYCLES"/>
++	<value value="96" name="A7XX_PERF_SP_LPAC_INSTRUCTIONS"/>
++	<value value="97" name="A7XX_PERF_SP_FS_STAGE_1X_WAVES"/>
++	<value value="98" name="A7XX_PERF_SP_FS_STAGE_2X_WAVES"/>
++	<value value="99" name="A7XX_PERF_SP_QUADS"/>
++	<value value="100" name="A7XX_PERF_SP_CS_INVOCATIONS"/>
++	<value value="101" name="A7XX_PERF_SP_PIXELS"/>
++	<value value="102" name="A7XX_PERF_SP_LPAC_DRAWCALLS"/>
++	<value value="103" name="A7XX_PERF_SP_PI_WORKING_CYCLES"/>
++	<value value="104" name="A7XX_PERF_SP_WAVE_INPUT_CYCLES"/>
++	<value value="105" name="A7XX_PERF_SP_WAVE_OUTPUT_CYCLES"/>
++	<value value="106" name="A7XX_PERF_SP_WAVE_HWAVE_WAIT_CYCLES"/>
++	<value value="107" name="A7XX_PERF_SP_WAVE_HWAVE_SYNC"/>
++	<value value="108" name="A7XX_PERF_SP_OUTPUT_3D_PIXELS"/>
++	<value value="109" name="A7XX_PERF_SP_FULL_ALU_MAD_INSTRUCTIONS"/>
++	<value value="110" name="A7XX_PERF_SP_HALF_ALU_MAD_INSTRUCTIONS"/>
++	<value value="111" name="A7XX_PERF_SP_FULL_ALU_MUL_INSTRUCTIONS"/>
++	<value value="112" name="A7XX_PERF_SP_HALF_ALU_MUL_INSTRUCTIONS"/>
++	<value value="113" name="A7XX_PERF_SP_FULL_ALU_ADD_INSTRUCTIONS"/>
++	<value value="114" name="A7XX_PERF_SP_HALF_ALU_ADD_INSTRUCTIONS"/>
++	<value value="115" name="A7XX_PERF_SP_BARY_FP32_INSTRUCTIONS"/>
++	<value value="116" name="A7XX_PERF_SP_ALU_GPR_READ_CYCLES"/>
++	<value value="117" name="A7XX_PERF_SP_ALU_DATA_FORWARDING_CYCLES"/>
++	<value value="118" name="A7XX_PERF_SP_LM_FULL_CYCLES"/>
++	<value value="119" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_CYCLES"/>
++	<value value="120" name="A7XX_PERF_SP_TEXTURE_FETCH_LATENCY_SAMPLES"/>
++	<value value="121" name="A7XX_PERF_SP_FS_STAGE_PI_TEX_INSTRUCTION"/>
++	<value value="122" name="A7XX_PERF_SP_RAY_QUERY_INSTRUCTIONS"/>
++	<value value="123" name="A7XX_PERF_SP_RBRT_KICKOFF_FIBERS"/>
++	<value value="124" name="A7XX_PERF_SP_RBRT_KICKOFF_DQUADS"/>
++	<value value="125" name="A7XX_PERF_SP_RTU_BUSY_CYCLES"/>
++	<value value="126" name="A7XX_PERF_SP_RTU_L0_HITS"/>
++	<value value="127" name="A7XX_PERF_SP_RTU_L0_MISSES"/>
++	<value value="128" name="A7XX_PERF_SP_RTU_L0_HIT_ON_MISS"/>
++	<value value="129" name="A7XX_PERF_SP_RTU_STALL_CYCLES_WAVE_QUEUE"/>
++	<value value="130" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_HIT_QUEUE"/>
++	<value value="131" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0_MISS_QUEUE"/>
++	<value value="132" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0D_IDX_QUEUE"/>
++	<value value="133" name="A7XX_PERF_SP_RTU_STALL_CYCLES_L0DATA"/>
++	<value value="134" name="A7XX_PERF_SP_RTU_STALL_CYCLES_REPLACE_CNT"/>
++	<value value="135" name="A7XX_PERF_SP_RTU_STALL_CYCLES_MRG_CNT"/>
++	<value value="136" name="A7XX_PERF_SP_RTU_STALL_CYCLES_UCHE"/>
++	<value value="137" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_L0"/>
++	<value value="138" name="A7XX_PERF_SP_RTU_OPERAND_FETCH_STALL_CYCLES_INS_FIFO"/>
++	<value value="139" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_CYCLES"/>
++	<value value="140" name="A7XX_PERF_SP_RTU_BVH_FETCH_LATENCY_SAMPLES"/>
++	<value value="141" name="A7XX_PERF_SP_STCHE_MISS_INC_VS"/>
++	<value value="142" name="A7XX_PERF_SP_STCHE_MISS_INC_FS"/>
++	<value value="143" name="A7XX_PERF_SP_STCHE_MISS_INC_BV"/>
++	<value value="144" name="A7XX_PERF_SP_STCHE_MISS_INC_LPAC"/>
++	<value value="145" name="A7XX_PERF_SP_VGPR_ACTIVE_CONTEXTS"/>
++	<value value="146" name="A7XX_PERF_SP_PGPR_ALLOC_CONTEXTS"/>
++	<value value="147" name="A7XX_PERF_SP_VGPR_ALLOC_CONTEXTS"/>
++	<value value="148" name="A7XX_PERF_SP_RTU_RAY_BOX_INTERSECTIONS"/>
++	<value value="149" name="A7XX_PERF_SP_RTU_RAY_TRIANGLE_INTERSECTIONS"/>
++	<value value="150" name="A7XX_PERF_SP_SCH_STALL_CYCLES_RTU"/>
++</enum>
++
++<enum name="a7xx_rb_perfcounter_select">
++	<value value="0" name="A7XX_PERF_RB_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_RB_STALL_CYCLES_HLSQ"/>
++	<value value="2" name="A7XX_PERF_RB_STALL_CYCLES_FIFO0_FULL"/>
++	<value value="3" name="A7XX_PERF_RB_STALL_CYCLES_FIFO1_FULL"/>
++	<value value="4" name="A7XX_PERF_RB_STALL_CYCLES_FIFO2_FULL"/>
++	<value value="5" name="A7XX_PERF_RB_STARVE_CYCLES_SP"/>
++	<value value="6" name="A7XX_PERF_RB_STARVE_CYCLES_LRZ_TILE"/>
++	<value value="7" name="A7XX_PERF_RB_STARVE_CYCLES_CCU"/>
++	<value value="8" name="A7XX_PERF_RB_STARVE_CYCLES_Z_PLANE"/>
++	<value value="9" name="A7XX_PERF_RB_STARVE_CYCLES_BARY_PLANE"/>
++	<value value="10" name="A7XX_PERF_RB_Z_WORKLOAD"/>
++	<value value="11" name="A7XX_PERF_RB_HLSQ_ACTIVE"/>
++	<value value="12" name="A7XX_PERF_RB_Z_READ"/>
++	<value value="13" name="A7XX_PERF_RB_Z_WRITE"/>
++	<value value="14" name="A7XX_PERF_RB_C_READ"/>
++	<value value="15" name="A7XX_PERF_RB_C_WRITE"/>
++	<value value="16" name="A7XX_PERF_RB_TOTAL_PASS"/>
++	<value value="17" name="A7XX_PERF_RB_Z_PASS"/>
++	<value value="18" name="A7XX_PERF_RB_Z_FAIL"/>
++	<value value="19" name="A7XX_PERF_RB_S_FAIL"/>
++	<value value="20" name="A7XX_PERF_RB_BLENDED_FXP_COMPONENTS"/>
++	<value value="21" name="A7XX_PERF_RB_BLENDED_FP16_COMPONENTS"/>
++	<value value="22" name="A7XX_PERF_RB_PS_INVOCATIONS"/>
++	<value value="23" name="A7XX_PERF_RB_2D_ALIVE_CYCLES"/>
++	<value value="24" name="A7XX_PERF_RB_2D_STALL_CYCLES_A2D"/>
++	<value value="25" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SRC"/>
++	<value value="26" name="A7XX_PERF_RB_2D_STARVE_CYCLES_SP"/>
++	<value value="27" name="A7XX_PERF_RB_2D_STARVE_CYCLES_DST"/>
++	<value value="28" name="A7XX_PERF_RB_2D_VALID_PIXELS"/>
++	<value value="29" name="A7XX_PERF_RB_3D_PIXELS"/>
++	<value value="30" name="A7XX_PERF_RB_BLENDER_WORKING_CYCLES"/>
++	<value value="31" name="A7XX_PERF_RB_ZPROC_WORKING_CYCLES"/>
++	<value value="32" name="A7XX_PERF_RB_CPROC_WORKING_CYCLES"/>
++	<value value="33" name="A7XX_PERF_RB_SAMPLER_WORKING_CYCLES"/>
++	<value value="34" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_READ"/>
++	<value value="35" name="A7XX_PERF_RB_STALL_CYCLES_CCU_COLOR_WRITE"/>
++	<value value="36" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_READ"/>
++	<value value="37" name="A7XX_PERF_RB_STALL_CYCLES_CCU_DEPTH_WRITE"/>
++	<value value="38" name="A7XX_PERF_RB_STALL_CYCLES_VPC"/>
++	<value value="39" name="A7XX_PERF_RB_2D_INPUT_TRANS"/>
++	<value value="40" name="A7XX_PERF_RB_2D_OUTPUT_RB_DST_TRANS"/>
++	<value value="41" name="A7XX_PERF_RB_2D_OUTPUT_RB_SRC_TRANS"/>
++	<value value="42" name="A7XX_PERF_RB_BLENDED_FP32_COMPONENTS"/>
++	<value value="43" name="A7XX_PERF_RB_COLOR_PIX_TILES"/>
++	<value value="44" name="A7XX_PERF_RB_STALL_CYCLES_CCU"/>
++	<value value="45" name="A7XX_PERF_RB_EARLY_Z_ARB3_GRANT"/>
++	<value value="46" name="A7XX_PERF_RB_LATE_Z_ARB3_GRANT"/>
++	<value value="47" name="A7XX_PERF_RB_EARLY_Z_SKIP_GRANT"/>
++	<value value="48" name="A7XX_PERF_RB_VRS_1x1_QUADS"/>
++	<value value="49" name="A7XX_PERF_RB_VRS_2x1_QUADS"/>
++	<value value="50" name="A7XX_PERF_RB_VRS_1x2_QUADS"/>
++	<value value="51" name="A7XX_PERF_RB_VRS_2x2_QUADS"/>
++	<value value="52" name="A7XX_PERF_RB_VRS_4x2_QUADS"/>
++	<value value="53" name="A7XX_PERF_RB_VRS_4x4_QUADS"/>
++</enum>
++
++<enum name="a7xx_vsc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_VSC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_VSC_WORKING_CYCLES"/>
++	<value value="2" name="A7XX_PERF_VSC_STALL_CYCLES_UCHE"/>
++	<value value="3" name="A7XX_PERF_VSC_EOT_NUM"/>
++	<value value="4" name="A7XX_PERF_VSC_INPUT_TILES"/>
++</enum>
++
++<enum name="a7xx_ccu_perfcounter_select">
++	<value value="0" name="A7XX_PERF_CCU_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_CCU_STALL_CYCLES_RB_DEPTH_RETURN"/>
++	<value value="2" name="A7XX_PERF_CCU_STALL_CYCLES_RB_COLOR_RETURN"/>
++	<value value="3" name="A7XX_PERF_CCU_DEPTH_BLOCKS"/>
++	<value value="4" name="A7XX_PERF_CCU_COLOR_BLOCKS"/>
++	<value value="5" name="A7XX_PERF_CCU_DEPTH_BLOCK_HIT"/>
++	<value value="6" name="A7XX_PERF_CCU_COLOR_BLOCK_HIT"/>
++	<value value="7" name="A7XX_PERF_CCU_PARTIAL_BLOCK_READ"/>
++	<value value="8" name="A7XX_PERF_CCU_GMEM_READ"/>
++	<value value="9" name="A7XX_PERF_CCU_GMEM_WRITE"/>
++	<value value="10" name="A7XX_PERF_CCU_2D_RD_REQ"/>
++	<value value="11" name="A7XX_PERF_CCU_2D_WR_REQ"/>
++	<value value="12" name="A7XX_PERF_CCU_UBWC_COLOR_BLOCKS_CONCURRENT"/>
++	<value value="13" name="A7XX_PERF_CCU_UBWC_DEPTH_BLOCKS_CONCURRENT"/>
++	<value value="14" name="A7XX_PERF_CCU_COLOR_RESOLVE_DROPPED"/>
++	<value value="15" name="A7XX_PERF_CCU_DEPTH_RESOLVE_DROPPED"/>
++	<value value="16" name="A7XX_PERF_CCU_COLOR_RENDER_CONCURRENT"/>
++	<value value="17" name="A7XX_PERF_CCU_DEPTH_RENDER_CONCURRENT"/>
++	<value value="18" name="A7XX_PERF_CCU_COLOR_RESOLVE_AFTER_RENDER"/>
++	<value value="19" name="A7XX_PERF_CCU_DEPTH_RESOLVE_AFTER_RENDER"/>
++	<value value="20" name="A7XX_PERF_CCU_GMEM_EXTRA_DEPTH_READ"/>
++	<value value="21" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA"/>
++	<value value="22" name="A7XX_PERF_CCU_GMEM_COLOR_READ_4AA_FULL"/>
++</enum>
++
++<enum name="a7xx_lrz_perfcounter_select">
++	<value value="0" name="A7XX_PERF_LRZ_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_LRZ_STARVE_CYCLES_RAS"/>
++	<value value="2" name="A7XX_PERF_LRZ_STALL_CYCLES_RB"/>
++	<value value="3" name="A7XX_PERF_LRZ_STALL_CYCLES_VSC"/>
++	<value value="4" name="A7XX_PERF_LRZ_STALL_CYCLES_VPC"/>
++	<value value="5" name="A7XX_PERF_LRZ_STALL_CYCLES_FLAG_PREFETCH"/>
++	<value value="6" name="A7XX_PERF_LRZ_STALL_CYCLES_UCHE"/>
++	<value value="7" name="A7XX_PERF_LRZ_LRZ_READ"/>
++	<value value="8" name="A7XX_PERF_LRZ_LRZ_WRITE"/>
++	<value value="9" name="A7XX_PERF_LRZ_READ_LATENCY"/>
++	<value value="10" name="A7XX_PERF_LRZ_MERGE_CACHE_UPDATING"/>
++	<value value="11" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_MASKGEN"/>
++	<value value="12" name="A7XX_PERF_LRZ_PRIM_KILLED_BY_LRZ"/>
++	<value value="13" name="A7XX_PERF_LRZ_VISIBLE_PRIM_AFTER_LRZ"/>
++	<value value="14" name="A7XX_PERF_LRZ_FULL_8X8_TILES"/>
++	<value value="15" name="A7XX_PERF_LRZ_PARTIAL_8X8_TILES"/>
++	<value value="16" name="A7XX_PERF_LRZ_TILE_KILLED"/>
++	<value value="17" name="A7XX_PERF_LRZ_TOTAL_PIXEL"/>
++	<value value="18" name="A7XX_PERF_LRZ_VISIBLE_PIXEL_AFTER_LRZ"/>
++	<value value="19" name="A7XX_PERF_LRZ_FEEDBACK_ACCEPT"/>
++	<value value="20" name="A7XX_PERF_LRZ_FEEDBACK_DISCARD"/>
++	<value value="21" name="A7XX_PERF_LRZ_FEEDBACK_STALL"/>
++	<value value="22" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_ZPLANE"/>
++	<value value="23" name="A7XX_PERF_LRZ_STALL_CYCLES_RB_BPLANE"/>
++	<value value="24" name="A7XX_PERF_LRZ_RAS_MASK_TRANS"/>
++	<value value="25" name="A7XX_PERF_LRZ_STALL_CYCLES_MVC"/>
++	<value value="26" name="A7XX_PERF_LRZ_TILE_KILLED_BY_IMAGE_VRS"/>
++	<value value="27" name="A7XX_PERF_LRZ_TILE_KILLED_BY_Z"/>
++</enum>
++
++<enum name="a7xx_cmp_perfcounter_select">
++	<value value="0" name="A7XX_PERF_CMPDECMP_STALL_CYCLES_ARB"/>
++	<value value="1" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_CYCLES"/>
++	<value value="2" name="A7XX_PERF_CMPDECMP_VBIF_LATENCY_SAMPLES"/>
++	<value value="3" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_CCU"/>
++	<value value="4" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_CCU"/>
++	<value value="5" name="A7XX_PERF_CMPDECMP_VBIF_READ_REQUEST"/>
++	<value value="6" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_REQUEST"/>
++	<value value="7" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA"/>
++	<value value="8" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA"/>
++	<value value="9" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG1_COUNT"/>
++	<value value="10" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG2_COUNT"/>
++	<value value="11" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG3_COUNT"/>
++	<value value="12" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG4_COUNT"/>
++	<value value="13" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG5_COUNT"/>
++	<value value="14" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG6_COUNT"/>
++	<value value="15" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG8_COUNT"/>
++	<value value="16" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG1_COUNT"/>
++	<value value="17" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG2_COUNT"/>
++	<value value="18" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG3_COUNT"/>
++	<value value="19" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG4_COUNT"/>
++	<value value="20" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG5_COUNT"/>
++	<value value="21" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG6_COUNT"/>
++	<value value="22" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG8_COUNT"/>
++	<value value="23" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH0"/>
++	<value value="24" name="A7XX_PERF_CMPDECMP_VBIF_READ_DATA_UCHE_CH1"/>
++	<value value="25" name="A7XX_PERF_CMPDECMP_VBIF_WRITE_DATA_UCHE"/>
++	<value value="26" name="A7XX_PERF_CMPDECMP_DEPTH_WRITE_FLAG0_COUNT"/>
++	<value value="27" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAG0_COUNT"/>
++	<value value="28" name="A7XX_PERF_CMPDECMP_COLOR_WRITE_FLAGALPHA_COUNT"/>
++	<value value="29" name="A7XX_PERF_CMPDECMP_RESOLVE_EVENTS"/>
++	<value value="30" name="A7XX_PERF_CMPDECMP_CONCURRENT_RESOLVE_EVENTS"/>
++	<value value="31" name="A7XX_PERF_CMPDECMP_DROPPED_CLEAR_EVENTS"/>
++	<value value="32" name="A7XX_PERF_CMPDECMP_ST_BLOCKS_CONCURRENT"/>
++	<value value="33" name="A7XX_PERF_CMPDECMP_LRZ_ST_BLOCKS_CONCURRENT"/>
++	<value value="34" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG0_COUNT"/>
++	<value value="35" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG1_COUNT"/>
++	<value value="36" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG2_COUNT"/>
++	<value value="37" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG3_COUNT"/>
++	<value value="38" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG4_COUNT"/>
++	<value value="39" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG5_COUNT"/>
++	<value value="40" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG6_COUNT"/>
++	<value value="41" name="A7XX_PERF_CMPDECMP_DEPTH_READ_FLAG8_COUNT"/>
++	<value value="42" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG0_COUNT"/>
++	<value value="43" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG1_COUNT"/>
++	<value value="44" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG2_COUNT"/>
++	<value value="45" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG3_COUNT"/>
++	<value value="46" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG4_COUNT"/>
++	<value value="47" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG5_COUNT"/>
++	<value value="48" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG6_COUNT"/>
++	<value value="49" name="A7XX_PERF_CMPDECMP_COLOR_READ_FLAG8_COUNT"/>
++</enum>
++
++<enum name="a7xx_gbif_perfcounter_select">
++	<value value="0" name="A7XX_PERF_GBIF_RESERVED_0"/>
++	<value value="1" name="A7XX_PERF_GBIF_RESERVED_1"/>
++	<value value="2" name="A7XX_PERF_GBIF_RESERVED_2"/>
++	<value value="3" name="A7XX_PERF_GBIF_RESERVED_3"/>
++	<value value="4" name="A7XX_PERF_GBIF_RESERVED_4"/>
++	<value value="5" name="A7XX_PERF_GBIF_RESERVED_5"/>
++	<value value="6" name="A7XX_PERF_GBIF_RESERVED_6"/>
++	<value value="7" name="A7XX_PERF_GBIF_RESERVED_7"/>
++	<value value="8" name="A7XX_PERF_GBIF_RESERVED_8"/>
++	<value value="9" name="A7XX_PERF_GBIF_RESERVED_9"/>
++	<value value="10" name="A7XX_PERF_GBIF_AXI0_READ_REQUESTS_TOTAL"/>
++	<value value="11" name="A7XX_PERF_GBIF_AXI1_READ_REQUESTS_TOTAL"/>
++	<value value="12" name="A7XX_PERF_GBIF_RESERVED_12"/>
++	<value value="13" name="A7XX_PERF_GBIF_RESERVED_13"/>
++	<value value="14" name="A7XX_PERF_GBIF_RESERVED_14"/>
++	<value value="15" name="A7XX_PERF_GBIF_RESERVED_15"/>
++	<value value="16" name="A7XX_PERF_GBIF_RESERVED_16"/>
++	<value value="17" name="A7XX_PERF_GBIF_RESERVED_17"/>
++	<value value="18" name="A7XX_PERF_GBIF_RESERVED_18"/>
++	<value value="19" name="A7XX_PERF_GBIF_RESERVED_19"/>
++	<value value="20" name="A7XX_PERF_GBIF_RESERVED_20"/>
++	<value value="21" name="A7XX_PERF_GBIF_RESERVED_21"/>
++	<value value="22" name="A7XX_PERF_GBIF_AXI0_WRITE_REQUESTS_TOTAL"/>
++	<value value="23" name="A7XX_PERF_GBIF_AXI1_WRITE_REQUESTS_TOTAL"/>
++	<value value="24" name="A7XX_PERF_GBIF_RESERVED_24"/>
++	<value value="25" name="A7XX_PERF_GBIF_RESERVED_25"/>
++	<value value="26" name="A7XX_PERF_GBIF_RESERVED_26"/>
++	<value value="27" name="A7XX_PERF_GBIF_RESERVED_27"/>
++	<value value="28" name="A7XX_PERF_GBIF_RESERVED_28"/>
++	<value value="29" name="A7XX_PERF_GBIF_RESERVED_29"/>
++	<value value="30" name="A7XX_PERF_GBIF_RESERVED_30"/>
++	<value value="31" name="A7XX_PERF_GBIF_RESERVED_31"/>
++	<value value="32" name="A7XX_PERF_GBIF_RESERVED_32"/>
++	<value value="33" name="A7XX_PERF_GBIF_RESERVED_33"/>
++	<value value="34" name="A7XX_PERF_GBIF_AXI0_READ_DATA_BEATS_TOTAL"/>
++	<value value="35" name="A7XX_PERF_GBIF_AXI1_READ_DATA_BEATS_TOTAL"/>
++	<value value="36" name="A7XX_PERF_GBIF_RESERVED_36"/>
++	<value value="37" name="A7XX_PERF_GBIF_RESERVED_37"/>
++	<value value="38" name="A7XX_PERF_GBIF_RESERVED_38"/>
++	<value value="39" name="A7XX_PERF_GBIF_RESERVED_39"/>
++	<value value="40" name="A7XX_PERF_GBIF_RESERVED_40"/>
++	<value value="41" name="A7XX_PERF_GBIF_RESERVED_41"/>
++	<value value="42" name="A7XX_PERF_GBIF_RESERVED_42"/>
++	<value value="43" name="A7XX_PERF_GBIF_RESERVED_43"/>
++	<value value="44" name="A7XX_PERF_GBIF_RESERVED_44"/>
++	<value value="45" name="A7XX_PERF_GBIF_RESERVED_45"/>
++	<value value="46" name="A7XX_PERF_GBIF_AXI0_WRITE_DATA_BEATS_TOTAL"/>
++	<value value="47" name="A7XX_PERF_GBIF_AXI1_WRITE_DATA_BEATS_TOTAL"/>
++	<value value="48" name="A7XX_PERF_GBIF_RESERVED_48"/>
++	<value value="49" name="A7XX_PERF_GBIF_RESERVED_49"/>
++	<value value="50" name="A7XX_PERF_GBIF_RESERVED_50"/>
++	<value value="51" name="A7XX_PERF_GBIF_RESERVED_51"/>
++	<value value="52" name="A7XX_PERF_GBIF_RESERVED_52"/>
++	<value value="53" name="A7XX_PERF_GBIF_RESERVED_53"/>
++	<value value="54" name="A7XX_PERF_GBIF_RESERVED_54"/>
++	<value value="55" name="A7XX_PERF_GBIF_RESERVED_55"/>
++	<value value="56" name="A7XX_PERF_GBIF_RESERVED_56"/>
++	<value value="57" name="A7XX_PERF_GBIF_RESERVED_57"/>
++	<value value="58" name="A7XX_PERF_GBIF_RESERVED_58"/>
++	<value value="59" name="A7XX_PERF_GBIF_RESERVED_59"/>
++	<value value="60" name="A7XX_PERF_GBIF_RESERVED_60"/>
++	<value value="61" name="A7XX_PERF_GBIF_RESERVED_61"/>
++	<value value="62" name="A7XX_PERF_GBIF_RESERVED_62"/>
++	<value value="63" name="A7XX_PERF_GBIF_RESERVED_63"/>
++	<value value="64" name="A7XX_PERF_GBIF_RESERVED_64"/>
++	<value value="65" name="A7XX_PERF_GBIF_RESERVED_65"/>
++	<value value="66" name="A7XX_PERF_GBIF_RESERVED_66"/>
++	<value value="67" name="A7XX_PERF_GBIF_RESERVED_67"/>
++	<value value="68" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_RD_ALL"/>
++	<value value="69" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_RD_ALL"/>
++	<value value="70" name="A7XX_PERF_GBIF_CYCLES_CH0_HELD_OFF_WR_ALL"/>
++	<value value="71" name="A7XX_PERF_GBIF_CYCLES_CH1_HELD_OFF_WR_ALL"/>
++	<value value="72" name="A7XX_PERF_GBIF_AXI_CH0_REQUEST_HELD_OFF"/>
++	<value value="73" name="A7XX_PERF_GBIF_AXI_CH1_REQUEST_HELD_OFF"/>
++	<value value="74" name="A7XX_PERF_GBIF_AXI_REQUEST_HELD_OFF"/>
++	<value value="75" name="A7XX_PERF_GBIF_AXI_CH0_WRITE_DATA_HELD_OFF"/>
++	<value value="76" name="A7XX_PERF_GBIF_AXI_CH1_WRITE_DATA_HELD_OFF"/>
++	<value value="77" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_DATA_HELD_OFF"/>
++	<value value="78" name="A7XX_PERF_GBIF_AXI_ALL_READ_BEATS"/>
++	<value value="79" name="A7XX_PERF_GBIF_AXI_ALL_WRITE_BEATS"/>
++	<value value="80" name="A7XX_PERF_GBIF_AXI_ALL_BEATS"/>
++</enum>
++
++<enum name="a7xx_ufc_perfcounter_select">
++	<value value="0" name="A7XX_PERF_UFC_BUSY_CYCLES"/>
++	<value value="1" name="A7XX_PERF_UFC_READ_DATA_VBIF"/>
++	<value value="2" name="A7XX_PERF_UFC_WRITE_DATA_VBIF"/>
++	<value value="3" name="A7XX_PERF_UFC_READ_REQUEST_VBIF"/>
++	<value value="4" name="A7XX_PERF_UFC_WRITE_REQUEST_VBIF"/>
++	<value value="5" name="A7XX_PERF_UFC_LRZ_FILTER_HIT"/>
++	<value value="6" name="A7XX_PERF_UFC_LRZ_FILTER_MISS"/>
++	<value value="7" name="A7XX_PERF_UFC_CRE_FILTER_HIT"/>
++	<value value="8" name="A7XX_PERF_UFC_CRE_FILTER_MISS"/>
++	<value value="9" name="A7XX_PERF_UFC_SP_FILTER_HIT"/>
++	<value value="10" name="A7XX_PERF_UFC_SP_FILTER_MISS"/>
++	<value value="11" name="A7XX_PERF_UFC_SP_REQUESTS"/>
++	<value value="12" name="A7XX_PERF_UFC_TP_FILTER_HIT"/>
++	<value value="13" name="A7XX_PERF_UFC_TP_FILTER_MISS"/>
++	<value value="14" name="A7XX_PERF_UFC_TP_REQUESTS"/>
++	<value value="15" name="A7XX_PERF_UFC_MAIN_HIT_LRZ_PREFETCH"/>
++	<value value="16" name="A7XX_PERF_UFC_MAIN_HIT_CRE_PREFETCH"/>
++	<value value="17" name="A7XX_PERF_UFC_MAIN_HIT_SP_PREFETCH"/>
++	<value value="18" name="A7XX_PERF_UFC_MAIN_HIT_TP_PREFETCH"/>
++	<value value="19" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_READ"/>
++	<value value="20" name="A7XX_PERF_UFC_MAIN_HIT_UBWC_WRITE"/>
++	<value value="21" name="A7XX_PERF_UFC_MAIN_MISS_LRZ_PREFETCH"/>
++	<value value="22" name="A7XX_PERF_UFC_MAIN_MISS_CRE_PREFETCH"/>
++	<value value="23" name="A7XX_PERF_UFC_MAIN_MISS_SP_PREFETCH"/>
++	<value value="24" name="A7XX_PERF_UFC_MAIN_MISS_TP_PREFETCH"/>
++	<value value="25" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_READ"/>
++	<value value="26" name="A7XX_PERF_UFC_MAIN_MISS_UBWC_WRITE"/>
++	<value value="27" name="A7XX_PERF_UFC_UBWC_READ_UFC_TRANS"/>
++	<value value="28" name="A7XX_PERF_UFC_UBWC_WRITE_UFC_TRANS"/>
++	<value value="29" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_CMD"/>
++	<value value="30" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_RDATA"/>
++	<value value="31" name="A7XX_PERF_UFC_STALL_CYCLES_GBIF_WDATA"/>
++	<value value="32" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_WR_FLAG"/>
++	<value value="33" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_FLAG_RTN"/>
++	<value value="34" name="A7XX_PERF_UFC_STALL_CYCLES_UBWC_EVENT"/>
++	<value value="35" name="A7XX_PERF_UFC_LRZ_PREFETCH_STALLED_CYCLES"/>
++	<value value="36" name="A7XX_PERF_UFC_CRE_PREFETCH_STALLED_CYCLES"/>
++	<value value="37" name="A7XX_PERF_UFC_SPTP_PREFETCH_STALLED_CYCLES"/>
++	<value value="38" name="A7XX_PERF_UFC_UBWC_RD_STALLED_CYCLES"/>
++	<value value="39" name="A7XX_PERF_UFC_UBWC_WR_STALLED_CYCLES"/>
++	<value value="40" name="A7XX_PERF_UFC_PREFETCH_STALLED_CYCLES"/>
++	<value value="41" name="A7XX_PERF_UFC_EVICTION_STALLED_CYCLES"/>
++	<value value="42" name="A7XX_PERF_UFC_LOCK_STALLED_CYCLES"/>
++	<value value="43" name="A7XX_PERF_UFC_MISS_LATENCY_CYCLES"/>
++	<value value="44" name="A7XX_PERF_UFC_MISS_LATENCY_SAMPLES"/>
++	<value value="45" name="A7XX_PERF_UFC_UBWC_REQ_STALLED_CYCLES"/>
++	<value value="46" name="A7XX_PERF_UFC_TP_HINT_TAG_MISS"/>
++	<value value="47" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_RDY"/>
++	<value value="48" name="A7XX_PERF_UFC_TP_HINT_TAG_HIT_NRDY"/>
++	<value value="49" name="A7XX_PERF_UFC_TP_HINT_IS_FCLEAR"/>
++	<value value="50" name="A7XX_PERF_UFC_TP_HINT_IS_ALPHA0"/>
++	<value value="51" name="A7XX_PERF_UFC_SP_L1_FILTER_HIT"/>
++	<value value="52" name="A7XX_PERF_UFC_SP_L1_FILTER_MISS"/>
++	<value value="53" name="A7XX_PERF_UFC_SP_L1_FILTER_REQUESTS"/>
++	<value value="54" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_RDY"/>
++	<value value="55" name="A7XX_PERF_UFC_TP_L1_TAG_HIT_NRDY"/>
++	<value value="56" name="A7XX_PERF_UFC_TP_L1_TAG_MISS"/>
++	<value value="57" name="A7XX_PERF_UFC_TP_L1_FILTER_REQUESTS"/>
++</enum>
++
++</database>
+diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+index 46271340162280..7abc08635495ce 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+@@ -21,9 +21,9 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="HLSQ_FLUSH" value="7" variants="A3XX-A4XX"/>
+ 	<value name="VIZQUERY_END" value="8" variants="A2XX"/>
+ 	<value name="SC_WAIT_WC" value="9" variants="A2XX"/>
+-	<value name="WRITE_PRIMITIVE_COUNTS" value="9" variants="A6XX"/>
+-	<value name="START_PRIMITIVE_CTRS" value="11" variants="A6XX"/>
+-	<value name="STOP_PRIMITIVE_CTRS" value="12" variants="A6XX"/>
++	<value name="WRITE_PRIMITIVE_COUNTS" value="9" variants="A6XX-"/>
++	<value name="START_PRIMITIVE_CTRS" value="11" variants="A6XX-"/>
++	<value name="STOP_PRIMITIVE_CTRS" value="12" variants="A6XX-"/>
+ 	<!-- Not sure that these 4 events don't have the same meaning as on A5XX+ -->
+ 	<value name="RST_PIX_CNT" value="13" variants="A2XX-A4XX"/>
+ 	<value name="RST_VTX_CNT" value="14" variants="A2XX-A4XX"/>
+@@ -31,8 +31,8 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="STAT_EVENT" value="16" variants="A2XX-A4XX"/>
+ 	<value name="CACHE_FLUSH_AND_INV_TS_EVENT" value="20" variants="A2XX-A4XX"/>
+ 	<doc>
+-		If A6XX_RB_SAMPLE_COUNT_CONTROL.copy is true, writes OQ Z passed
+-		sample counts to RB_SAMPLE_COUNT_ADDR.  This writes to main
++		If A6XX_RB_SAMPLE_COUNTER_CNTL.copy is true, writes OQ Z passed
++		sample counts to RB_SAMPLE_COUNTER_BASE.  This writes to main
+ 		memory, skipping UCHE.
+ 	</doc>
+ 	<value name="ZPASS_DONE" value="21"/>
+@@ -97,6 +97,13 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	</doc>
+ 	<value name="BLIT" value="30" variants="A5XX-"/>
+ 
++	<doc>
++	Flip between the primary and secondary LRZ buffers. This is used
++	for concurrent binning, so that BV can write to one buffer while
++	BR reads from the other.
++	</doc>
++	<value name="LRZ_FLIP_BUFFER" value="36" variants="A7XX"/>
++
+ 	<doc>
+ 		Clears based on GRAS_LRZ_CNTL configuration, could clear
+ 		fast-clear buffer or LRZ direction.
+@@ -114,6 +121,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="BLIT_OP_FILL_2D" value="39" variants="A5XX-"/>
+ 	<value name="BLIT_OP_COPY_2D" value="40" variants="A5XX-A6XX"/>
+ 	<value name="UNK_40" value="40" variants="A7XX"/>
++	<value name="LRZ_Q_CACHE_INVALIDATE" value="41" variants="A7XX"/>
+ 	<value name="BLIT_OP_SCALE_2D" value="42" variants="A5XX-"/>
+ 	<value name="CONTEXT_DONE_2D" value="43" variants="A5XX-"/>
+ 	<value name="UNK_2C" value="44" variants="A5XX-"/>
+@@ -372,7 +380,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="CP_LOAD_STATE" value="0x30" variants="A3XX"/>
+ 	<value name="CP_LOAD_STATE4" value="0x30" variants="A4XX-A5XX"/>
+ 	<doc>Conditionally load an IB based on a flag, prefetch enabled</doc>
+-	<value name="CP_COND_INDIRECT_BUFFER_PFE" value="0x3a"/>
++	<value name="CP_COND_INDIRECT_BUFFER_PFE" value="0x3a" variants="A3XX-A5XX"/>
+ 	<doc>Conditionally load an IB based on a flag, prefetch disabled</doc>
+ 	<value name="CP_COND_INDIRECT_BUFFER_PFD" value="0x32" variants="A3XX"/>
+ 	<doc>Load a buffer with pre-fetch enabled</doc>
+@@ -538,7 +546,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="CP_LOAD_STATE6_GEOM" value="0x32" variants="A6XX-"/>
+ 	<value name="CP_LOAD_STATE6_FRAG" value="0x34" variants="A6XX-"/>
+ 	<!--
+-	Note: For IBO state (Image/SSBOs) which have shared state across
++	Note: For UAV state (Image/SSBOs) which have shared state across
+ 	shader stages, for 3d pipeline CP_LOAD_STATE6 is used.  But for
+ 	compute shaders, CP_LOAD_STATE6_FRAG is used.  Possibly they are
+ 	interchangeable.
+@@ -567,7 +575,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 	<value name="IN_PREEMPT" value="0x0f" variants="A6XX-"/>
+ 
+ 	<!-- TODO do these exist on A5xx? -->
+-	<value name="CP_SCRATCH_WRITE" value="0x4c" variants="A6XX"/>
++	<value name="CP_SCRATCH_WRITE" value="0x4c" variants="A6XX-"/>
+ 	<value name="CP_REG_TO_MEM_OFFSET_MEM" value="0x74" variants="A6XX-"/>
+ 	<value name="CP_REG_TO_MEM_OFFSET_REG" value="0x72" variants="A6XX-"/>
+ 	<value name="CP_WAIT_MEM_GTE" value="0x14" variants="A6XX"/>
+@@ -650,6 +658,11 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 
+ 	<doc>Reset various on-chip state used for synchronization</doc>
+ 	<value name="CP_RESET_CONTEXT_STATE" value="0x1f" variants="A7XX-"/>
++
++	<doc>Invalidates the "CCHE" introduced on a740</doc>
++	<value name="CP_CCHE_INVALIDATE" value="0x3a" variants="A7XX-"/>
++
++	<value name="CP_SCOPE_CNTL" value="0x6c" variants="A7XX-"/>
+ </enum>
+ 
+ 
+@@ -792,14 +805,14 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<value name="SB6_GS_SHADER" value="0xb"/>
+ 		<value name="SB6_FS_SHADER" value="0xc"/>
+ 		<value name="SB6_CS_SHADER" value="0xd"/>
+-		<value name="SB6_IBO"       value="0xe"/>
+-		<value name="SB6_CS_IBO"    value="0xf"/>
++		<value name="SB6_UAV"       value="0xe"/>
++		<value name="SB6_CS_UAV"    value="0xf"/>
+ 	</enum>
+ 	<enum name="a6xx_state_type">
+ 		<value name="ST6_SHADER" value="0"/>
+ 		<value name="ST6_CONSTANTS" value="1"/>
+ 		<value name="ST6_UBO" value="2"/>
+-		<value name="ST6_IBO" value="3"/>
++		<value name="ST6_UAV" value="3"/>
+ 	</enum>
+ 	<enum name="a6xx_state_src">
+ 		<value name="SS6_DIRECT" value="0"/>
+@@ -1121,39 +1134,93 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	</reg32>
+ </domain>
+ 
++<enum name="a7xx_abs_mask_mode">
++	<value name="ABS_MASK" value="0x1"/>
++	<value name="NO_ABS_MASK" value="0x0"/>
++</enum>
++
+ <domain name="CP_SET_BIN_DATA5" width="32">
+ 	<reg32 offset="0" name="0">
++		<bitfield name="VSC_MASK" low="0" high="15" type="hex">
++			<doc>
++				A mask of bins, starting at VSC_N, whose
++				visibility is OR'd together. A value of 0 is
++				interpreted as 1 (i.e. just use VSC_N for
++				visbility) for backwards compatibility. Only
++			visibility) for backwards compatibility. Only
++			</doc>
++		</bitfield>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.SIZE on a3xx/a4xx: -->
+ 		<bitfield name="VSC_SIZE" low="16" high="21" type="uint"/>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.N on a3xx/a4xx: -->
+ 		<bitfield name="VSC_N" low="22" high="26" type="uint"/>
++		<bitfield name="ABS_MASK" pos="28" type="a7xx_abs_mask_mode" addvariant="yes">
++			<doc>
++				If this field is 1, VSC_MASK and VSC_N are
++				ignored and instead a new ordinal immediately
++				after specifies the full 32-bit mask of bins
++				to use. The mask is "absolute" instead of
++				relative to VSC_N.
++			</doc>
++		</bitfield>
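++		<!--
++			Illustrative example (editor's sketch, not from the
++			hardware docs): with ABS_MASK=0, VSC_N=4 and
++			VSC_MASK=0x7, the visibility of bins 4..6 is OR'd
++			together; with ABS_MASK=1, an extra ordinal of
++			0x00000070 would select the same bins as an absolute
++			mask.
++		-->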
+ 	</reg32>
+-	<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
+-	<reg32 offset="1" name="1">
+-		<bitfield name="BIN_DATA_ADDR_LO" low="0" high="31" type="hex"/>
+-	</reg32>
+-	<reg32 offset="2" name="2">
+-		<bitfield name="BIN_DATA_ADDR_HI" low="0" high="31" type="hex"/>
+-	</reg32>
+-	<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
+-	<reg32 offset="3" name="3">
+-		<bitfield name="BIN_SIZE_ADDRESS_LO" low="0" high="31"/>
+-	</reg32>
+-	<reg32 offset="4" name="4">
+-		<bitfield name="BIN_SIZE_ADDRESS_HI" low="0" high="31"/>
+-	</reg32>
+-	<!-- new on a6xx, where BIN_DATA_ADDR is the DRAW_STRM: -->
+-	<reg32 offset="5" name="5">
+-		<bitfield name="BIN_PRIM_STRM_LO" low="0" high="31"/>
+-	</reg32>
+-	<reg32 offset="6" name="6">
+-		<bitfield name="BIN_PRIM_STRM_HI" low="0" high="31"/>
+-	</reg32>
+-	<!--
+-		a7xx adds a few more addresses to the end of the pkt
+-	 -->
+-	<reg64 offset="7" name="7"/>
+-	<reg64 offset="9" name="9"/>
++	<stripe varset="a7xx_abs_mask_mode" variants="NO_ABS_MASK">
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="1" name="1">
++			<bitfield name="BIN_DATA_ADDR_LO" low="0" high="31" type="hex"/>
++		</reg32>
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_DATA_ADDR_HI" low="0" high="31" type="hex"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_SIZE_ADDRESS_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="4" name="4">
++			<bitfield name="BIN_SIZE_ADDRESS_HI" low="0" high="31"/>
++		</reg32>
++		<!-- new on a6xx, where BIN_DATA_ADDR is the DRAW_STRM: -->
++		<reg32 offset="5" name="5">
++			<bitfield name="BIN_PRIM_STRM_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="6" name="6">
++			<bitfield name="BIN_PRIM_STRM_HI" low="0" high="31"/>
++		</reg32>
++		<!--
++			a7xx adds a few more addresses to the end of the pkt
++		 -->
++		<reg64 offset="7" name="7"/>
++		<reg64 offset="9" name="9"/>
++	</stripe>
++	<stripe varset="a7xx_abs_mask_mode" variants="ABS_MASK">
++		<reg32 offset="1" name="ABS_MASK"/>
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_DATA_ADDR_LO" low="0" high="31" type="hex"/>
++		</reg32>
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_DATA_ADDR_HI" low="0" high="31" type="hex"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="4" name="4">
++			<bitfield name="BIN_SIZE_ADDRESS_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="5" name="5">
++			<bitfield name="BIN_SIZE_ADDRESS_HI" low="0" high="31"/>
++		</reg32>
++		<!-- new on a6xx, where BIN_DATA_ADDR is the DRAW_STRM: -->
++		<reg32 offset="6" name="6">
++			<bitfield name="BIN_PRIM_STRM_LO" low="0" high="31"/>
++		</reg32>
++		<reg32 offset="7" name="7">
++			<bitfield name="BIN_PRIM_STRM_HI" low="0" high="31"/>
++		</reg32>
++		<!--
++			a7xx adds a few more addresses to the end of the pkt
++		 -->
++		<reg64 offset="8" name="8"/>
++		<reg64 offset="10" name="10"/>
++	</stripe>
+ </domain>
+ 
+ <domain name="CP_SET_BIN_DATA5_OFFSET" width="32">
+@@ -1164,23 +1231,42 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+                 stream is recorded.
+ 	</doc>
+ 	<reg32 offset="0" name="0">
++		<bitfield name="VSC_MASK" low="0" high="15" type="hex"/>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.SIZE on a3xx/a4xx: -->
+ 		<bitfield name="VSC_SIZE" low="16" high="21" type="uint"/>
+ 		<!-- equiv to PC_VSTREAM_CONTROL.N on a3xx/a4xx: -->
+ 		<bitfield name="VSC_N" low="22" high="26" type="uint"/>
++		<bitfield name="ABS_MASK" pos="28" type="a7xx_abs_mask_mode" addvariant="yes"/>
+ 	</reg32>
+-	<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
+-	<reg32 offset="1" name="1">
+-		<bitfield name="BIN_DATA_OFFSET" low="0" high="31" type="uint"/>
+-	</reg32>
+-	<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
+-	<reg32 offset="2" name="2">
+-		<bitfield name="BIN_SIZE_OFFSET" low="0" high="31" type="uint"/>
+-	</reg32>
+-	<!-- BIN_DATA2_ADDR -> VSC_PIPE[p].DATA2_ADDRESS -->
+-	<reg32 offset="3" name="3">
+-		<bitfield name="BIN_DATA2_OFFSET" low="0" high="31" type="uint"/>
+-	</reg32>
++	<stripe varset="a7xx_abs_mask_mode" variants="NO_ABS_MASK">
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="1" name="1">
++			<bitfield name="BIN_DATA_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_SIZE_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_DATA2_ADDR -> VSC_PIPE[p].DATA2_ADDRESS -->
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_DATA2_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++	</stripe>
++	<stripe varset="a7xx_abs_mask_mode" variants="ABS_MASK">
++		<reg32 offset="1" name="ABS_MASK"/>
++		<!-- BIN_DATA_ADDR -> VSC_PIPE[p].DATA_ADDRESS -->
++		<reg32 offset="2" name="2">
++			<bitfield name="BIN_DATA_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_SIZE_ADDRESS -> VSC_SIZE_ADDRESS + (p * 4)-->
++		<reg32 offset="3" name="3">
++			<bitfield name="BIN_SIZE_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++		<!-- BIN_DATA2_ADDR -> VSC_PIPE[p].DATA2_ADDRESS -->
++		<reg32 offset="4" name="4">
++			<bitfield name="BIN_DATA2_OFFSET" low="0" high="31" type="uint"/>
++		</reg32>
++	</stripe>
+ </domain>
+ 
+ <domain name="CP_REG_RMW" width="32">
+@@ -1198,6 +1284,9 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	</doc>
+ 	<reg32 offset="0" name="0">
+ 		<bitfield name="DST_REG" low="0" high="17" type="hex"/>
++		<bitfield name="DST_SCRATCH" pos="19" type="boolean" varset="chip" variants="A7XX-"/>
++		<!-- skip implied CP_WAIT_FOR_IDLE + CP_WAIT_FOR_ME -->
++		<bitfield name="SKIP_WAIT_FOR_ME" pos="23" type="boolean" varset="chip" variants="A7XX-"/>
+ 		<bitfield name="ROTATE" low="24" high="28" type="uint"/>
+ 		<bitfield name="SRC1_ADD" pos="29" type="boolean"/>
+ 		<bitfield name="SRC1_IS_REG" pos="30" type="boolean"/>
+@@ -1348,6 +1437,8 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<bitfield name="SCRATCH" low="20" high="22" type="uint"/>
+ 		<!-- number of registers/dwords copied is CNT + 1. -->
+ 		<bitfield name="CNT" low="24" high="26" type="uint"/>
++		<!-- skip implied CP_WAIT_FOR_IDLE + CP_WAIT_FOR_ME -->
++		<bitfield name="SKIP_WAIT_FOR_ME" pos="27" type="boolean" varset="chip" variants="A7XX-"/>
+ 	</reg32>
+ </domain>
+ 
+@@ -1655,8 +1746,8 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<bitfield name="WRITE_SAMPLE_COUNT" pos="12" type="boolean"/>
+ 		<!-- Write sample count at (iova + 16) -->
+ 		<bitfield name="SAMPLE_COUNT_END_OFFSET" pos="13" type="boolean"/>
+-		<!-- *(iova + 8) = *(iova + 16) - *iova -->
+-		<bitfield name="WRITE_SAMPLE_COUNT_DIFF" pos="14" type="boolean"/>
++		<!-- *(iova + 8) += *(iova + 16) - *iova -->
++		<bitfield name="WRITE_ACCUM_SAMPLE_COUNT_DIFF" pos="14" type="boolean"/>
+ 
+ 		<!-- Next 4 flags are valid to set only when concurrent binning is enabled -->
+ 		<!-- Increment 16b BV counter. Valid only in BV pipe -->
+@@ -1670,15 +1761,11 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<bitfield name="WRITE_DST" pos="24" type="event_write_dst" addvariant="yes"/>
+ 		<!-- Writes into WRITE_DST from WRITE_SRC. RB_DONE_TS requires WRITE_ENABLED. -->
+ 		<bitfield name="WRITE_ENABLED" pos="27" type="boolean"/>
++		<bitfield name="IRQ" pos="31" type="boolean"/>
+ 	</reg32>
+ 
+ 	<stripe varset="event_write_dst" variants="EV_DST_RAM">
+-		<reg32 offset="1" name="1">
+-			<bitfield name="ADDR_0_LO" low="0" high="31"/>
+-		</reg32>
+-		<reg32 offset="2" name="2">
+-			<bitfield name="ADDR_0_HI" low="0" high="31"/>
+-		</reg32>
++		<reg64 offset="1" name="1" type="waddress"/>
+ 		<reg32 offset="3" name="3">
+ 			<bitfield name="PAYLOAD_0" low="0" high="31"/>
+ 		</reg32>
+@@ -1773,13 +1860,23 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 
+ <domain name="CP_SET_MARKER" width="32" varset="chip" prefix="chip" variants="A6XX-">
+ 	<doc>Tell CP the current operation mode, indicates save and restore procedure</doc>
++	<enum name="set_marker_mode">
++		<value value="0" name="SET_RENDER_MODE"/>
++		<!-- IFPC - inter-frame power collapse -->
++		<value value="1" name="SET_IFPC_MODE"/>
++	</enum>
++	<enum name="a6xx_ifpc_mode">
++		<value value="0" name="IFPC_ENABLE"/>
++		<value value="1" name="IFPC_DISABLE"/>
++	</enum>
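++	<!--
++		Editor's note: the encoding is unchanged; the old
++		RM6_IFPC_ENABLE (0x100) and RM6_IFPC_DISABLE (0x101) values
++		are now modelled as MARKER_MODE=SET_IFPC_MODE plus IFPC_MODE.
++	-->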
+ 	<enum name="a6xx_marker">
+-		<value value="1" name="RM6_BYPASS"/>
+-		<value value="2" name="RM6_BINNING"/>
+-		<value value="4" name="RM6_GMEM"/>
+-		<value value="5" name="RM6_ENDVIS"/>
+-		<value value="6" name="RM6_RESOLVE"/>
+-		<value value="7" name="RM6_YIELD"/>
++		<value value="1" name="RM6_DIRECT_RENDER"/>
++		<value value="2" name="RM6_BIN_VISIBILITY"/>
++		<value value="3" name="RM6_BIN_DIRECT"/>
++		<value value="4" name="RM6_BIN_RENDER_START"/>
++		<value value="5" name="RM6_BIN_END_OF_DRAWS"/>
++		<value value="6" name="RM6_BIN_RESOLVE"/>
++		<value value="7" name="RM6_BIN_RENDER_END"/>
+ 		<value value="8" name="RM6_COMPUTE"/>
+ 		<value value="0xc" name="RM6_BLIT2DSCALE"/>  <!-- no-op (at least on current sqe fw) -->
+ 
+@@ -1789,23 +1886,40 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		-->
+ 		<value value="0xd" name="RM6_IB1LIST_START"/>
+ 		<value value="0xe" name="RM6_IB1LIST_END"/>
+-		<!-- IFPC - inter-frame power collapse -->
+-		<value value="0x100" name="RM6_IFPC_ENABLE"/>
+-		<value value="0x101" name="RM6_IFPC_DISABLE"/>
+ 	</enum>
+ 	<reg32 offset="0" name="0">
++		<!-- if b8 is set, the low bits are interpreted differently (and b4 ignored) -->
++		<bitfield name="MARKER_MODE" pos="8" type="set_marker_mode" addvariant="yes"/>
++
++		<bitfield name="MODE" low="0" high="3" type="a6xx_marker" varset="set_marker_mode" variants="SET_RENDER_MODE"/>
++		<!-- used by preemption to determine if GMEM needs to be saved or not -->
++		<bitfield name="USES_GMEM" pos="4" type="boolean" varset="set_marker_mode" variants="SET_RENDER_MODE"/>
++
++		<bitfield name="IFPC_MODE" pos="0" type="a6xx_ifpc_mode" varset="set_marker_mode" variants="SET_IFPC_MODE"/>
++
+ 		<!--
+-			NOTE: blob driver and some versions of freedreno/turnip set
+-			b4, which is unused (at least by current sqe fw), but interferes
+-			with parsing if we extend the size of the bitfield to include
+-			b8 (only sent by kernel mode driver).  Really, the way the
+-			parsing works in the firmware, only b0-b3 are considered, but
+-			if b8 is set, the low bits are interpreted differently.  To
+-			model this, without getting confused by spurious b4, this is
+-			described as two overlapping bitfields:
+-		 -->
+-		<bitfield name="MODE" low="0" high="8" type="a6xx_marker"/>
+-		<bitfield name="MARKER" low="0" high="3" type="a6xx_marker"/>
++			CP_SET_MARKER is used with these bits to create a
++			critical section around a workaround for ray tracing.
++			The workaround happens after BVH building, and appears
++			to invalidate the RTU's BVH node cache. It makes sure
++			that only one of BR/BV/LPAC is executing the
++			workaround at a time, and no draws using RT on BV/LPAC
++			are executing while the workaround is executed on BR (or
++			vice versa, that no draws on BV/BR using RT are executed
++			while the workaround executes on LPAC), by
++			hooking subsequent CP_EVENT_WRITE/CP_DRAW_*/CP_EXEC_CS.
++			The blob usage is:
++
++			CP_SET_MARKER(RT_WA_START)
++			... workaround here ...
++			CP_SET_MARKER(RT_WA_END)
++			...
++			CP_SET_MARKER(SHADER_USES_RT)
++			CP_DRAW_INDX(...) or CP_EXEC_CS(...)
++		-->
++		<bitfield name="SHADER_USES_RT" pos="9" type="boolean" variants="A7XX-"/>
++		<bitfield name="RT_WA_START" pos="10" type="boolean" variants="A7XX-"/>
++		<bitfield name="RT_WA_END" pos="11" type="boolean" variants="A7XX-"/>
+ 	</reg32>
+ </domain>
+ 
+@@ -1832,9 +1946,9 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 			If concurrent binning is disabled then BR also does binning so it will also
+ 			write the "real" registers in BR.
+ 		-->
+-		<value value="8" name="DRAW_STRM_ADDRESS"/>
+-		<value value="9" name="DRAW_STRM_SIZE_ADDRESS"/>
+-		<value value="10" name="PRIM_STRM_ADDRESS"/>
++		<value value="8" name="VSC_PIPE_DATA_DRAW_BASE"/>
++		<value value="9" name="VSC_SIZE_BASE"/>
++		<value value="10" name="VSC_PIPE_DATA_PRIM_BASE"/>
+ 		<value value="11" name="UNK_STRM_ADDRESS"/>
+ 		<value value="12" name="UNK_STRM_SIZE_ADDRESS"/>
+ 
+@@ -1935,11 +2049,11 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 			a bitmask of which modes pass the test.
+ 		-->
+ 
+-		<!-- RM6_BINNING -->
++		<!-- RM6_BIN_VISIBILITY -->
+ 		<bitfield name="BINNING" pos="25" variants="RENDER_MODE" type="boolean"/>
+ 		<!-- all others -->
+ 		<bitfield name="GMEM" pos="26" variants="RENDER_MODE" type="boolean"/>
+-		<!-- RM6_BYPASS -->
++		<!-- RM6_DIRECT_RENDER -->
+ 		<bitfield name="SYSMEM" pos="27" variants="RENDER_MODE" type="boolean"/>
+ 
+ 		<bitfield name="BV" pos="25" variants="THREAD_MODE" type="boolean"/>
+@@ -2014,10 +2128,10 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 
+ <domain name="CP_SET_AMBLE" width="32">
+ 	<doc>
+-                Used by the userspace and kernel drivers to set various IB's
+-                which are executed during context save/restore for handling
+-                state that isn't restored by the context switch routine itself.
+-  </doc>
++		Used by the userspace and kernel drivers to set various IBs
++		which are executed during context save/restore for handling
++		state that isn't restored by the context switch routine itself.
++	</doc>
+ 	<enum name="amble_type">
+ 		<value name="PREAMBLE_AMBLE_TYPE" value="0">
+ 			<doc>Executed unconditionally when switching back to the context.</doc>
+@@ -2087,12 +2201,12 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 		<value name="UNK_EVENT_WRITE" value="0x4"/>
+ 		<doc>
+ 			Tracks GRAS_LRZ_CNTL::GREATER, GRAS_LRZ_CNTL::DIR, and
+-			GRAS_LRZ_DEPTH_VIEW with previous values, and if one of
++			GRAS_LRZ_VIEW_INFO with previous values, and if one of
+ 			the following is true:
+ 			- GRAS_LRZ_CNTL::GREATER has changed
+ 			- GRAS_LRZ_CNTL::DIR has changed, the old value is not
+ 			  CUR_DIR_GE, and the new value is not CUR_DIR_DISABLED
+-			- GRAS_LRZ_DEPTH_VIEW has changed
++			- GRAS_LRZ_VIEW_INFO has changed
+ 			then it does a LRZ_FLUSH with GRAS_LRZ_CNTL::ENABLE
+ 			forced to 1.
+ 			Only exists in a650_sqe.fw.
+@@ -2207,7 +2321,7 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 
+ <domain name="CP_MEM_TO_SCRATCH_MEM" width="32">
+ 	<doc>
+-		Best guess is that it is a faster way to fetch all the VSC_STATE registers
++		Best guess is that it is a faster way to fetch all the VSC_CHANNEL_VISIBILITY registers
+ 		and keep them in a local scratch memory instead of fetching every time
+ 		when skipping IBs.
+ 	</doc>
+@@ -2260,6 +2374,16 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ 	</reg32>
+ </domain>
+ 
++<domain name="CP_SCOPE_CNTL" width="32">
++	<enum name="cp_scope">
++		<value value="0" name="INTERRUPTS"/>
++	</enum>
++	<reg32 offset="0" name="0">
++		<bitfield name="DISABLE_PREEMPTION" pos="0" type="boolean"/>
++		<bitfield low="28" high="31" name="SCOPE" type="cp_scope"/>
++	</reg32>
++</domain>
++
+ <domain name="CP_INDIRECT_BUFFER" width="32" varset="chip" prefix="chip" variants="A5XX-">
+ 	<reg64 offset="0" name="IB_BASE" type="address"/>
+ 	<reg32 offset="2" name="2">
+diff --git a/drivers/gpu/drm/panel/panel-raydium-rm67200.c b/drivers/gpu/drm/panel/panel-raydium-rm67200.c
+index 64b685dc11f65b..6d4d00d4cd7459 100644
+--- a/drivers/gpu/drm/panel/panel-raydium-rm67200.c
++++ b/drivers/gpu/drm/panel/panel-raydium-rm67200.c
+@@ -318,6 +318,7 @@ static void w552793baa_setup(struct mipi_dsi_multi_context *ctx)
+ static int raydium_rm67200_prepare(struct drm_panel *panel)
+ {
+ 	struct raydium_rm67200 *ctx = to_raydium_rm67200(panel);
++	struct mipi_dsi_multi_context mctx = { .dsi = ctx->dsi };
+ 	int ret;
+ 
+ 	ret = regulator_bulk_enable(ctx->num_supplies, ctx->supplies);
+@@ -328,6 +329,12 @@ static int raydium_rm67200_prepare(struct drm_panel *panel)
+ 
+ 	msleep(60);
+ 
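++	/* Panel init sequence, moved here from the old enable() hook. */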
++	ctx->panel_info->panel_setup(&mctx);
++	mipi_dsi_dcs_exit_sleep_mode_multi(&mctx);
++	mipi_dsi_msleep(&mctx, 120);
++	mipi_dsi_dcs_set_display_on_multi(&mctx);
++	mipi_dsi_msleep(&mctx, 30);
++
+ 	return 0;
+ }
+ 
+@@ -343,20 +350,6 @@ static int raydium_rm67200_unprepare(struct drm_panel *panel)
+ 	return 0;
+ }
+ 
+-static int raydium_rm67200_enable(struct drm_panel *panel)
+-{
+-	struct raydium_rm67200 *rm67200 = to_raydium_rm67200(panel);
+-	struct mipi_dsi_multi_context ctx = { .dsi = rm67200->dsi };
+-
+-	rm67200->panel_info->panel_setup(&ctx);
+-	mipi_dsi_dcs_exit_sleep_mode_multi(&ctx);
+-	mipi_dsi_msleep(&ctx, 120);
+-	mipi_dsi_dcs_set_display_on_multi(&ctx);
+-	mipi_dsi_msleep(&ctx, 30);
+-
+-	return ctx.accum_err;
+-}
+-
+ static int raydium_rm67200_disable(struct drm_panel *panel)
+ {
+ 	struct raydium_rm67200 *rm67200 = to_raydium_rm67200(panel);
+@@ -381,7 +374,6 @@ static const struct drm_panel_funcs raydium_rm67200_funcs = {
+ 	.prepare = raydium_rm67200_prepare,
+ 	.unprepare = raydium_rm67200_unprepare,
+ 	.get_modes = raydium_rm67200_get_modes,
+-	.enable = raydium_rm67200_enable,
+ 	.disable = raydium_rm67200_disable,
+ };
+ 
+diff --git a/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c b/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
+index dc6ab012cdb69f..6aca10920c8e3f 100644
+--- a/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
++++ b/drivers/gpu/drm/renesas/rz-du/rzg2l_mipi_dsi.c
+@@ -585,6 +585,9 @@ rzg2l_mipi_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+ 	if (mode->clock > 148500)
+ 		return MODE_CLOCK_HIGH;
+ 
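++	/* mode->clock is in kHz, so this rejects modes slower than ~5.8 MHz. */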
++	if (mode->clock < 5803)
++		return MODE_CLOCK_LOW;
++
+ 	return MODE_OK;
+ }
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 829579c41c6b5d..3ac5e6acce04bb 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -1348,6 +1348,18 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
+ }
+ EXPORT_SYMBOL(drm_sched_init);
+ 
++static void drm_sched_cancel_remaining_jobs(struct drm_gpu_scheduler *sched)
++{
++	struct drm_sched_job *job, *tmp;
++
++	/* All other accessors are stopped. No locking necessary. */
++	list_for_each_entry_safe_reverse(job, tmp, &sched->pending_list, list) {
++		sched->ops->cancel_job(job);
++		list_del(&job->list);
++		sched->ops->free_job(job);
++	}
++}
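++
++/*
++ * Editor's sketch, not part of this patch: a driver-side cancel_job()
++ * implementation is expected to fail and signal the job's hardware fence,
++ * matching what the scheduler KUnit test added by this series asserts
++ * (hw_fence.error == -ECANCELED), roughly:
++ *
++ *	static void my_cancel_job(struct drm_sched_job *sched_job)
++ *	{
++ *		struct my_job *job = to_my_job(sched_job);
++ *
++ *		dma_fence_set_error(&job->hw_fence, -ECANCELED);
++ *		dma_fence_signal(&job->hw_fence);
++ *	}
++ */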
++
+ /**
+  * drm_sched_fini - Destroy a gpu scheduler
+  *
+@@ -1355,19 +1367,11 @@ EXPORT_SYMBOL(drm_sched_init);
+  *
+  * Tears down and cleans up the scheduler.
+  *
+- * This stops submission of new jobs to the hardware through
+- * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job()
+- * will not be called for all jobs still in drm_gpu_scheduler.pending_list.
+- * There is no solution for this currently. Thus, it is up to the driver to make
+- * sure that:
+- *
+- *  a) drm_sched_fini() is only called after for all submitted jobs
+- *     drm_sched_backend_ops.free_job() has been called or that
+- *  b) the jobs for which drm_sched_backend_ops.free_job() has not been called
+- *     after drm_sched_fini() ran are freed manually.
+- *
+- * FIXME: Take care of the above problem and prevent this function from leaking
+- * the jobs in drm_gpu_scheduler.pending_list under any circumstances.
++ * This stops submission of new jobs to the hardware through &struct
++ * drm_sched_backend_ops.run_job. If &struct drm_sched_backend_ops.cancel_job
++ * is implemented, all jobs will be canceled through it and afterwards cleaned
++ * up through &struct drm_sched_backend_ops.free_job. If cancel_job is not
++ * implemented, memory could leak.
+  */
+ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ {
+@@ -1397,6 +1401,10 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ 	/* Confirm no work left behind accessing device structures */
+ 	cancel_delayed_work_sync(&sched->work_tdr);
+ 
++	/* Avoid memory leaks if supported by the driver. */
++	if (sched->ops->cancel_job)
++		drm_sched_cancel_remaining_jobs(sched);
++
+ 	if (sched->own_submit_wq)
+ 		destroy_workqueue(sched->submit_wq);
+ 	sched->ready = false;
+diff --git a/drivers/gpu/drm/scheduler/tests/tests_basic.c b/drivers/gpu/drm/scheduler/tests/tests_basic.c
+index 7230057e0594c6..b1ae10c6bb3735 100644
+--- a/drivers/gpu/drm/scheduler/tests/tests_basic.c
++++ b/drivers/gpu/drm/scheduler/tests/tests_basic.c
+@@ -204,6 +204,47 @@ static struct kunit_suite drm_sched_basic = {
+ 	.test_cases = drm_sched_basic_tests,
+ };
+ 
++static void drm_sched_basic_cancel(struct kunit *test)
++{
++	struct drm_mock_sched_entity *entity;
++	struct drm_mock_scheduler *sched;
++	struct drm_mock_sched_job *job;
++	bool done;
++
++	/*
++	 * Check that drm_sched_fini() uses the cancel_job() callback to cancel
++	 * jobs that are still pending.
++	 */
++
++	sched = drm_mock_sched_new(test, MAX_SCHEDULE_TIMEOUT);
++	entity = drm_mock_sched_entity_new(test, DRM_SCHED_PRIORITY_NORMAL,
++					   sched);
++
++	job = drm_mock_sched_job_new(test, entity);
++
++	drm_mock_sched_job_submit(job);
++
++	done = drm_mock_sched_job_wait_scheduled(job, HZ);
++	KUNIT_ASSERT_TRUE(test, done);
++
++	drm_mock_sched_entity_free(entity);
++	drm_mock_sched_fini(sched);
++
++	KUNIT_ASSERT_EQ(test, job->hw_fence.error, -ECANCELED);
++}
++
++static struct kunit_case drm_sched_cancel_tests[] = {
++	KUNIT_CASE(drm_sched_basic_cancel),
++	{}
++};
++
++static struct kunit_suite drm_sched_cancel = {
++	.name = "drm_sched_basic_cancel_tests",
++	.init = drm_sched_basic_init,
++	.exit = drm_sched_basic_exit,
++	.test_cases = drm_sched_cancel_tests,
++};
++
+ static void drm_sched_basic_timeout(struct kunit *test)
+ {
+ 	struct drm_mock_scheduler *sched = test->priv;
+@@ -471,6 +512,7 @@ static struct kunit_suite drm_sched_credits = {
+ 
+ kunit_test_suites(&drm_sched_basic,
+ 		  &drm_sched_timeout,
++		  &drm_sched_cancel,
+ 		  &drm_sched_priority,
+ 		  &drm_sched_modify_sched,
+ 		  &drm_sched_credits);
+diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
+index c2ea865be65720..c060c90b89c021 100644
+--- a/drivers/gpu/drm/ttm/ttm_pool.c
++++ b/drivers/gpu/drm/ttm/ttm_pool.c
+@@ -1132,7 +1132,6 @@ void ttm_pool_fini(struct ttm_pool *pool)
+ }
+ EXPORT_SYMBOL(ttm_pool_fini);
+ 
+-/* As long as pages are available make sure to release at least one */
+ static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
+ 					    struct shrink_control *sc)
+ {
+@@ -1140,9 +1139,12 @@ static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
+ 
+ 	do
+ 		num_freed += ttm_pool_shrink();
+-	while (!num_freed && atomic_long_read(&allocated_pages));
++	while (num_freed < sc->nr_to_scan &&
++	       atomic_long_read(&allocated_pages));
+ 
+-	return num_freed;
++	sc->nr_scanned = num_freed;
++
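++	/* SHRINK_STOP tells the core no further progress is possible. */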
++	return num_freed ?: SHRINK_STOP;
+ }
+ 
+ /* Return the number of pages available or SHRINK_EMPTY if we have none */
+diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
+index 769b0ca9be47b9..006202fcd3c20e 100644
+--- a/drivers/gpu/drm/ttm/ttm_resource.c
++++ b/drivers/gpu/drm/ttm/ttm_resource.c
+@@ -557,6 +557,9 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev,
+ 		cond_resched();
+ 	} while (!ret);
+ 
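++	/*
++	 * -ENOENT just means the LRU ran out of resources to evict;
++	 * anything else is a real error worth propagating.
++	 */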
++	if (ret && ret != -ENOENT)
++		return ret;
++
+ 	spin_lock(&man->move_lock);
+ 	fence = dma_fence_get(man->move);
+ 	spin_unlock(&man->move_lock);
+diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+index 4c39f01e4f5286..a3f421e2adc03b 100644
+--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+@@ -20,6 +20,8 @@ struct xe_exec_queue;
+ struct xe_guc_exec_queue {
+ 	/** @q: Backpointer to parent xe_exec_queue */
+ 	struct xe_exec_queue *q;
++	/** @rcu: For safe freeing of exported dma fences */
++	struct rcu_head rcu;
+ 	/** @sched: GPU scheduler for this xe_exec_queue */
+ 	struct xe_gpu_scheduler sched;
+ 	/** @entity: Scheduler entity for this xe_exec_queue */
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 2ac87ff4a057f4..ef3e9e1588f7c6 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1287,7 +1287,11 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+ 	xe_sched_entity_fini(&ge->entity);
+ 	xe_sched_fini(&ge->sched);
+ 
+-	kfree(ge);
++	/*
++	 * RCU free due sched being exported via DRM scheduler fences
++	 * (timeline name).
++	 * RCU free due to the sched being exported via DRM scheduler fences
++	kfree_rcu(ge, rcu);
+ 	xe_exec_queue_fini(q);
+ 	xe_pm_runtime_put(guc_to_xe(guc));
+ }
+@@ -1470,6 +1474,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
+ 
+ 	q->guc = ge;
+ 	ge->q = q;
++	init_rcu_head(&ge->rcu);
+ 	init_waitqueue_head(&ge->suspend_wait);
+ 
+ 	for (i = 0; i < MAX_STATIC_MSG_TYPE; ++i)
+diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
+index 0b4f12be3692ab..6e2221b606885f 100644
+--- a/drivers/gpu/drm/xe/xe_hw_fence.c
++++ b/drivers/gpu/drm/xe/xe_hw_fence.c
+@@ -100,6 +100,9 @@ void xe_hw_fence_irq_finish(struct xe_hw_fence_irq *irq)
+ 		spin_unlock_irqrestore(&irq->lock, flags);
+ 		dma_fence_end_signalling(tmp);
+ 	}
++
++	/* Safe release of the irq->lock used in dma_fence_init. */
++	synchronize_rcu();
+ }
+ 
+ void xe_hw_fence_irq_run(struct xe_hw_fence_irq *irq)
+diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
+index f008e804970011..be2948dfc8b321 100644
+--- a/drivers/gpu/drm/xe/xe_hwmon.c
++++ b/drivers/gpu/drm/xe/xe_hwmon.c
+@@ -329,6 +329,7 @@ static int xe_hwmon_power_max_write(struct xe_hwmon *hwmon, u32 attr, int channe
+ 	int ret = 0;
+ 	u32 reg_val;
+ 	struct xe_reg rapl_limit;
++	u64 max_supp_power_limit = 0;
+ 
+ 	mutex_lock(&hwmon->hwmon_lock);
+ 
+@@ -353,6 +354,20 @@ static int xe_hwmon_power_max_write(struct xe_hwmon *hwmon, u32 attr, int channe
+ 		goto unlock;
+ 	}
+ 
++	/*
++	 * If the sysfs value exceeds the maximum power limit supported by the
++	 * pcode, clamp it to the supported maximum (U12.3 format).
++	 * This avoids truncation in the reg_val calculation below and ensures
++	 * a valid limit is sent to the pcode, which clamps it to the card-supported value.
++	 */
++	max_supp_power_limit = ((PWR_LIM_VAL) >> hwmon->scl_shift_power) * SF_POWER;
++	if (value > max_supp_power_limit) {
++		value = max_supp_power_limit;
++		drm_info(&hwmon->xe->drm,
++			 "Power limit clamped as selected %s exceeds channel %d limit\n",
++			 PWR_ATTR_TO_STR(attr), channel);
++	}
++
+ 	/* Computation in 64-bits to avoid overflow. Round to nearest. */
+ 	reg_val = DIV_ROUND_CLOSEST_ULL((u64)value << hwmon->scl_shift_power, SF_POWER);
+ 	reg_val = PWR_LIM_EN | REG_FIELD_PREP(PWR_LIM_VAL, reg_val);
+@@ -697,9 +712,23 @@ static int xe_hwmon_power_curr_crit_write(struct xe_hwmon *hwmon, int channel,
+ {
+ 	int ret;
+ 	u32 uval;
++	u64 max_crit_power_curr = 0;
+ 
+ 	mutex_lock(&hwmon->hwmon_lock);
+ 
++	/*
++	 * If the sysfs value exceeds the maximum supported by the pcode mailbox
++	 * command POWER_SETUP_SUBCOMMAND_WRITE_I1, clamp it to the command's max (U10.6 format).
++	 * This avoids truncation in the uval calculation below and ensures
++	 * a valid limit is sent to the pcode, which clamps it to the card-supported value.
++	 */
++	max_crit_power_curr = (POWER_SETUP_I1_DATA_MASK >> POWER_SETUP_I1_SHIFT) * scale_factor;
++	if (value > max_crit_power_curr) {
++		value = max_crit_power_curr;
++		drm_info(&hwmon->xe->drm,
++			 "Power limit clamped as selected exceeds channel %d limit\n",
++			 channel);
++	}
+ 	uval = DIV_ROUND_CLOSEST_ULL(value << POWER_SETUP_I1_SHIFT, scale_factor);
+ 	ret = xe_hwmon_pcode_write_i1(hwmon, uval);
+ 
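
A userspace model of the clamp-before-pack logic added in both hunks above. The
field width, shift and scale factor below are illustrative stand-ins for
PWR_LIM_VAL, scl_shift_power and SF_POWER, not the real register layout.

#include <stdio.h>
#include <stdint.h>

#define FIELD_MASK	0x7fffULL	/* 15-bit U12.3 limit field (assumed) */
#define SCL_SHIFT	3		/* fractional bits */
#define SF_POWER	1000000ULL	/* microwatts per register unit */

static uint64_t div_round_closest(uint64_t n, uint64_t d)
{
	return (n + d / 2) / d;
}

int main(void)
{
	uint64_t value = 9000000000ULL;	/* 9 kW request: clearly too big */
	uint64_t max = (FIELD_MASK >> SCL_SHIFT) * SF_POWER;

	if (value > max) {
		/* without this, packing below silently truncates */
		printf("clamping %llu to %llu uW\n",
		       (unsigned long long)value, (unsigned long long)max);
		value = max;
	}

	uint64_t reg = div_round_closest(value << SCL_SHIFT, SF_POWER);

	printf("reg field = 0x%llx (fits in 0x%llx)\n",
	       (unsigned long long)reg, (unsigned long long)FIELD_MASK);
	return 0;
}
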
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 07a5161c7d5b3e..1e3fd139dfcbca 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -1820,15 +1820,19 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
+ 	if (!IS_ALIGNED(len, XE_CACHELINE_BYTES) ||
+ 	    !IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES)) {
+ 		int buf_offset = 0;
++		void *bounce;
++		int err;
++
++		BUILD_BUG_ON(!is_power_of_2(XE_CACHELINE_BYTES));
++		bounce = kmalloc(XE_CACHELINE_BYTES, GFP_KERNEL);
++		if (!bounce)
++			return -ENOMEM;
+ 
+ 		/*
+ 		 * Less than ideal for large unaligned access but this should be
+ 		 * fairly rare, can fixup if this becomes common.
+ 		 */
+ 		do {
+-			u8 bounce[XE_CACHELINE_BYTES];
+-			void *ptr = (void *)bounce;
+-			int err;
+ 			int copy_bytes = min_t(int, bytes_left,
+ 					       XE_CACHELINE_BYTES -
+ 					       (offset & XE_CACHELINE_MASK));
+@@ -1837,22 +1841,22 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
+ 			err = xe_migrate_access_memory(m, bo,
+ 						       offset &
+ 						       ~XE_CACHELINE_MASK,
+-						       (void *)ptr,
+-						       sizeof(bounce), 0);
++						       bounce,
++						       XE_CACHELINE_BYTES, 0);
+ 			if (err)
+-				return err;
++				break;
+ 
+ 			if (write) {
+-				memcpy(ptr + ptr_offset, buf + buf_offset, copy_bytes);
++				memcpy(bounce + ptr_offset, buf + buf_offset, copy_bytes);
+ 
+ 				err = xe_migrate_access_memory(m, bo,
+ 							       offset & ~XE_CACHELINE_MASK,
+-							       (void *)ptr,
+-							       sizeof(bounce), write);
++							       bounce,
++							       XE_CACHELINE_BYTES, write);
+ 				if (err)
+-					return err;
++					break;
+ 			} else {
+-				memcpy(buf + buf_offset, ptr + ptr_offset,
++				memcpy(buf + buf_offset, bounce + ptr_offset,
+ 				       copy_bytes);
+ 			}
+ 
+@@ -1861,7 +1865,8 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
+ 			offset += copy_bytes;
+ 		} while (bytes_left);
+ 
+-		return 0;
++		kfree(bounce);
++		return err;
+ 	}
+ 
+ 	dma_addr = xe_migrate_dma_map(xe, buf, len + page_offset, write);
+@@ -1882,8 +1887,11 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
+ 		else
+ 			current_bytes = min_t(int, bytes_left, cursor.size);
+ 
+-		if (fence)
+-			dma_fence_put(fence);
++		if (current_bytes & ~PAGE_MASK) {
++			int pitch = 4;
++
++			current_bytes = min_t(int, current_bytes, S16_MAX * pitch);
++		}
+ 
+ 		__fence = xe_migrate_vram(m, current_bytes,
+ 					  (unsigned long)buf & ~PAGE_MASK,
+@@ -1892,11 +1900,15 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
+ 					  XE_MIGRATE_COPY_TO_VRAM :
+ 					  XE_MIGRATE_COPY_TO_SRAM);
+ 		if (IS_ERR(__fence)) {
+-			if (fence)
++			if (fence) {
+ 				dma_fence_wait(fence, false);
++				dma_fence_put(fence);
++			}
+ 			fence = __fence;
+ 			goto out_err;
+ 		}
++
++		dma_fence_put(fence);
+ 		fence = __fence;
+ 
+ 		buf += current_bytes;
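
A userspace sketch of the bounce-buffer rework above: allocate the
cacheline-sized scratch buffer once on the heap instead of per loop iteration on
the stack, and break on error rather than return so the single free at the end
always runs. device_rw() is a hypothetical stand-in for the hardware access
path.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CACHELINE 64

static int device_rw(void *buf, size_t len)	/* stand-in for HW access */
{
	memset(buf, 0xab, len);
	return 0;
}

static int access_unaligned(char *dst, size_t len)
{
	char *bounce = malloc(CACHELINE);
	size_t done = 0;
	int err = 0;

	if (!bounce)
		return -1;

	while (done < len) {
		size_t chunk = len - done < CACHELINE ? len - done : CACHELINE;

		err = device_rw(bounce, CACHELINE);	/* full-line transfer */
		if (err)
			break;		/* not return: bounce must be freed */
		memcpy(dst + done, bounce, chunk);
		done += chunk;
	}

	free(bounce);
	return err;
}

int main(void)
{
	char out[100];

	printf("rc=%d first=0x%02x\n", access_unaligned(out, sizeof(out)),
	       (unsigned char)out[0]);
	return 0;
}
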
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 2dbf4066d86ff2..34f7488acc99d2 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -368,6 +368,7 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
+ 	struct drm_xe_query_gt_list __user *query_ptr =
+ 		u64_to_user_ptr(query->data);
+ 	struct drm_xe_query_gt_list *gt_list;
++	int iter = 0;
+ 	u8 id;
+ 
+ 	if (query->size == 0) {
+@@ -385,12 +386,12 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
+ 
+ 	for_each_gt(gt, xe, id) {
+ 		if (xe_gt_is_media_type(gt))
+-			gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
++			gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
+ 		else
+-			gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MAIN;
+-		gt_list->gt_list[id].tile_id = gt_to_tile(gt)->id;
+-		gt_list->gt_list[id].gt_id = gt->info.id;
+-		gt_list->gt_list[id].reference_clock = gt->info.reference_clock;
++			gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MAIN;
++		gt_list->gt_list[iter].tile_id = gt_to_tile(gt)->id;
++		gt_list->gt_list[iter].gt_id = gt->info.id;
++		gt_list->gt_list[iter].reference_clock = gt->info.reference_clock;
+ 		/*
+ 		 * The mem_regions indexes in the mask below need to
+ 		 * directly identify the struct
+@@ -406,19 +407,21 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
+ 		 * assumption.
+ 		 */
+ 		if (!IS_DGFX(xe))
+-			gt_list->gt_list[id].near_mem_regions = 0x1;
++			gt_list->gt_list[iter].near_mem_regions = 0x1;
+ 		else
+-			gt_list->gt_list[id].near_mem_regions =
++			gt_list->gt_list[iter].near_mem_regions =
+ 				BIT(gt_to_tile(gt)->id) << 1;
+-		gt_list->gt_list[id].far_mem_regions = xe->info.mem_region_mask ^
+-			gt_list->gt_list[id].near_mem_regions;
++		gt_list->gt_list[iter].far_mem_regions = xe->info.mem_region_mask ^
++			gt_list->gt_list[iter].near_mem_regions;
+ 
+-		gt_list->gt_list[id].ip_ver_major =
++		gt_list->gt_list[iter].ip_ver_major =
+ 			REG_FIELD_GET(GMD_ID_ARCH_MASK, gt->info.gmdid);
+-		gt_list->gt_list[id].ip_ver_minor =
++		gt_list->gt_list[iter].ip_ver_minor =
+ 			REG_FIELD_GET(GMD_ID_RELEASE_MASK, gt->info.gmdid);
+-		gt_list->gt_list[id].ip_ver_rev =
++		gt_list->gt_list[iter].ip_ver_rev =
+ 			REG_FIELD_GET(GMD_ID_REVID, gt->info.gmdid);
++
++		iter++;
+ 	}
+ 
+ 	if (copy_to_user(query_ptr, gt_list, size)) {
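
A toy model of the indexing fix above: when the source ids are sparse (media GTs
can be skipped), writing output[id] leaves holes and can overrun an array sized
for the populated entries only; a separate output cursor that advances per
emitted entry keeps the list dense. The data below is made up.

#include <stdio.h>

int main(void)
{
	int present[8] = { 1, 0, 1, 0, 0, 1, 0, 0 };	/* ids 0, 2, 5 exist */
	int out[3];					/* room for 3 entries */
	int iter = 0;

	for (int id = 0; id < 8; id++) {
		if (!present[id])
			continue;
		/* index by iter, not by id: out[5] would overrun */
		out[iter++] = id;
	}

	for (int i = 0; i < iter; i++)
		printf("entry %d: gt id %d\n", i, out[i]);
	return 0;
}
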
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index b9748366c6d644..4fd3fff4661c0a 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -663,9 +663,9 @@ static int hid_parser_main(struct hid_parser *parser, struct hid_item *item)
+ 	default:
+ 		if (item->tag >= HID_MAIN_ITEM_TAG_RESERVED_MIN &&
+ 			item->tag <= HID_MAIN_ITEM_TAG_RESERVED_MAX)
+-			hid_warn(parser->device, "reserved main item tag 0x%x\n", item->tag);
++			hid_warn_ratelimited(parser->device, "reserved main item tag 0x%x\n", item->tag);
+ 		else
+-			hid_warn(parser->device, "unknown main item tag 0x%x\n", item->tag);
++			hid_warn_ratelimited(parser->device, "unknown main item tag 0x%x\n", item->tag);
+ 		ret = 0;
+ 	}
+ 
+diff --git a/drivers/hwmon/emc2305.c b/drivers/hwmon/emc2305.c
+index 234c54956a4bdf..1dbe3f26467d38 100644
+--- a/drivers/hwmon/emc2305.c
++++ b/drivers/hwmon/emc2305.c
+@@ -299,6 +299,12 @@ static int emc2305_set_single_tz(struct device *dev, int idx)
+ 		dev_err(dev, "Failed to register cooling device %s\n", emc2305_fan_name[idx]);
+ 		return PTR_ERR(data->cdev_data[cdev_idx].cdev);
+ 	}
++
++	if (data->cdev_data[cdev_idx].cur_state > 0)
++		/* Update pwm when temperature is above trips */
++		pwm = EMC2305_PWM_STATE2DUTY(data->cdev_data[cdev_idx].cur_state,
++					     data->max_state, EMC2305_FAN_MAX);
++
+ 	/* Set minimal PWM speed. */
+ 	if (data->pwm_separate) {
+ 		ret = emc2305_set_pwm(dev, pwm, cdev_idx);
+@@ -312,10 +318,10 @@ static int emc2305_set_single_tz(struct device *dev, int idx)
+ 		}
+ 	}
+ 	data->cdev_data[cdev_idx].cur_state =
+-		EMC2305_PWM_DUTY2STATE(data->pwm_min[cdev_idx], data->max_state,
++		EMC2305_PWM_DUTY2STATE(pwm, data->max_state,
+ 				       EMC2305_FAN_MAX);
+ 	data->cdev_data[cdev_idx].last_hwmon_state =
+-		EMC2305_PWM_DUTY2STATE(data->pwm_min[cdev_idx], data->max_state,
++		EMC2305_PWM_DUTY2STATE(pwm, data->max_state,
+ 				       EMC2305_FAN_MAX);
+ 	return 0;
+ }
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index d2499f302b5083..f43067f6797e94 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -370,6 +370,7 @@ static const struct acpi_device_id i2c_acpi_force_100khz_device_ids[] = {
+ 	 * the device works without issues on Windows at what is expected to be
+ 	 * a 400KHz frequency. The root cause of the issue is not known.
+ 	 */
++	{ "DLL0945", 0 },
+ 	{ "ELAN06FA", 0 },
+ 	{}
+ };
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 2ad2b1838f0f41..0849aa44952da0 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -1066,7 +1066,13 @@ void i2c_unregister_device(struct i2c_client *client)
+ 		of_node_clear_flag(to_of_node(fwnode), OF_POPULATED);
+ 	else if (is_acpi_device_node(fwnode))
+ 		acpi_device_clear_enumerated(to_acpi_device_node(fwnode));
+-	fwnode_handle_put(fwnode);
++
++	/*
++	 * If the primary fwnode is a software node, it is freed by
++	 * device_remove_software_node() below; avoid a double-free.
++	 */
++	if (!is_software_node(fwnode))
++		fwnode_handle_put(fwnode);
+ 
+ 	device_remove_software_node(&client->dev);
+ 	device_unregister(&client->dev);
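
A toy refcount model of the ownership rule behind the fix above: exactly one
cleanup path may drop the reference, so the earlier put has to be skipped when
the later remove owns it. is_swnode below models is_software_node(); the
refcount is simulated, not the real fwnode API.

#include <stdio.h>
#include <stdbool.h>

struct node { int refs; };

static void node_put(struct node *n)
{
	if (--n->refs == 0)
		printf("freed once, as intended\n");
	else if (n->refs < 0)
		printf("double free!\n");
}

static void remove_software_node(struct node *n, bool is_swnode)
{
	if (is_swnode)
		node_put(n);	/* this path owns the swnode reference */
}

int main(void)
{
	struct node n = { .refs = 1 };
	bool is_swnode = true;

	if (!is_swnode)		/* the guard added by the patch */
		node_put(&n);
	remove_software_node(&n, is_swnode);
	return 0;
}
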
+diff --git a/drivers/i3c/internals.h b/drivers/i3c/internals.h
+index 433f6088b7cec8..ce04aa4f269e09 100644
+--- a/drivers/i3c/internals.h
++++ b/drivers/i3c/internals.h
+@@ -9,6 +9,7 @@
+ #define I3C_INTERNALS_H
+ 
+ #include <linux/i3c/master.h>
++#include <linux/io.h>
+ 
+ void i3c_bus_normaluse_lock(struct i3c_bus *bus);
+ void i3c_bus_normaluse_unlock(struct i3c_bus *bus);
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index fd81871609d95b..dfa0bad991cf72 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1439,7 +1439,7 @@ static int i3c_master_retrieve_dev_info(struct i3c_dev_desc *dev)
+ 
+ 	if (dev->info.bcr & I3C_BCR_HDR_CAP) {
+ 		ret = i3c_master_gethdrcap_locked(master, &dev->info);
+-		if (ret)
++		if (ret && ret != -ENOTSUPP)
+ 			return ret;
+ 	}
+ 
+@@ -2467,6 +2467,8 @@ static int i3c_i2c_notifier_call(struct notifier_block *nb, unsigned long action
+ 	case BUS_NOTIFY_DEL_DEVICE:
+ 		ret = i3c_master_i2c_detach(adap, client);
+ 		break;
++	default:
++		ret = -EINVAL;
+ 	}
+ 	i3c_bus_maintenance_unlock(&master->bus);
+ 
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 73747d20df85d5..91a7b7e7c0c8e3 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -1679,7 +1679,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
+ };
+ 
+ static const struct x86_cpu_id intel_mwait_ids[] __initconst = {
+-	X86_MATCH_VENDOR_FAM_FEATURE(INTEL, 6, X86_FEATURE_MWAIT, NULL),
++	X86_MATCH_VENDOR_FAM_FEATURE(INTEL, X86_FAMILY_ANY, X86_FEATURE_MWAIT, NULL),
+ 	{}
+ };
+ 
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 51134023534a5b..8b414a1028644e 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -252,6 +252,24 @@ static const struct regmap_config ad7768_regmap24_config = {
+ 	.max_register = AD7768_REG24_COEFF_DATA,
+ };
+ 
++static int ad7768_send_sync_pulse(struct ad7768_state *st)
++{
++	/*
++	 * The datasheet specifies a minimum SYNC_IN pulse width of 1.5 × Tmclk,
++	 * where Tmclk is the MCLK period. The supported MCLK frequencies range
++	 * from 0.6 MHz to 17 MHz, which corresponds to a minimum SYNC_IN pulse
++	 * width of approximately 2.5 µs in the worst-case scenario (0.6 MHz).
++	 *
++	 * Add a delay to ensure the pulse width is always sufficient to
++	 * trigger synchronization.
++	 */
++	gpiod_set_value_cansleep(st->gpio_sync_in, 1);
++	fsleep(3);
++	gpiod_set_value_cansleep(st->gpio_sync_in, 0);
++
++	return 0;
++}
++
+ static int ad7768_set_mode(struct ad7768_state *st,
+ 			   enum ad7768_conv_mode mode)
+ {
+@@ -339,10 +357,7 @@ static int ad7768_set_dig_fil(struct ad7768_state *st,
+ 		return ret;
+ 
+ 	/* A sync-in pulse is required every time the filter dec rate changes */
+-	gpiod_set_value(st->gpio_sync_in, 1);
+-	gpiod_set_value(st->gpio_sync_in, 0);
+-
+-	return 0;
++	return ad7768_send_sync_pulse(st);
+ }
+ 
+ static int ad7768_set_freq(struct ad7768_state *st,
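
A quick arithmetic check of the worst case quoted in the comment above; pure
arithmetic, no driver assumptions beyond the datasheet figures.

#include <stdio.h>

int main(void)
{
	double mclk_min_hz = 0.6e6;			/* slowest supported MCLK */
	double min_pulse_us = 1.5 / mclk_min_hz * 1e6;	/* 1.5 x Tmclk */

	/* prints 2.50 us, so fsleep(3) always satisfies the minimum */
	printf("min SYNC_IN pulse: %.2f us\n", min_pulse_us);
	return 0;
}
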
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 4c5f8d29a559fe..6b3ef7ef403e00 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -489,7 +489,7 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev)
+ 			return ret;
+ 	}
+ 
+-	samples_buf_size = ALIGN(slot * indio_dev->channels[0].scan_type.storagebits, 8);
++	samples_buf_size = ALIGN(slot * indio_dev->channels[0].scan_type.storagebits / 8, 8);
+ 	samples_buf_size += sizeof(int64_t);
+ 	samples_buf = devm_krealloc(&sigma_delta->spi->dev, sigma_delta->samples_buf,
+ 				    samples_buf_size, GFP_KERNEL);
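
A userspace check of the sizing fix above: scan_type.storagebits counts bits, so
the per-scan size must be divided by 8 before aligning and adding room for the
8-byte timestamp; the old expression over-allocated roughly eightfold. The
channel count and sample width are illustrative.

#include <stdio.h>
#include <stdint.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

int main(void)
{
	size_t slots = 3;		/* enabled channel slots */
	size_t storagebits = 32;	/* bits per sample slot */

	size_t old = ALIGN_UP(slots * storagebits, 8) + sizeof(int64_t);
	size_t fixed = ALIGN_UP(slots * storagebits / 8, 8) + sizeof(int64_t);

	/* 3 samples x 4 bytes = 12, aligned to 8 -> 16, + 8 timestamp = 24 */
	printf("bits-as-bytes: %zu, correct: %zu\n", old, fixed);
	return 0;
}
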
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index be6b2ef0ede4ee..2220a2dfab240e 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1469,10 +1469,11 @@ static const struct nldev_fill_res_entry fill_entries[RDMA_RESTRACK_MAX] = {
+ 
+ };
+ 
+-static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			       struct netlink_ext_ack *extack,
+-			       enum rdma_restrack_type res_type,
+-			       res_fill_func_t fill_func)
++static noinline_for_stack int
++res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
++		    struct netlink_ext_ack *extack,
++		    enum rdma_restrack_type res_type,
++		    res_fill_func_t fill_func)
+ {
+ 	const struct nldev_fill_res_entry *fe = &fill_entries[res_type];
+ 	struct nlattr *tb[RDMA_NLDEV_ATTR_MAX];
+@@ -2263,10 +2264,10 @@ static int nldev_stat_del_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	return ret;
+ }
+ 
+-static int stat_get_doit_default_counter(struct sk_buff *skb,
+-					 struct nlmsghdr *nlh,
+-					 struct netlink_ext_ack *extack,
+-					 struct nlattr *tb[])
++static noinline_for_stack int
++stat_get_doit_default_counter(struct sk_buff *skb, struct nlmsghdr *nlh,
++			      struct netlink_ext_ack *extack,
++			      struct nlattr *tb[])
+ {
+ 	struct rdma_hw_stats *stats;
+ 	struct nlattr *table_attr;
+@@ -2356,8 +2357,9 @@ static int stat_get_doit_default_counter(struct sk_buff *skb,
+ 	return ret;
+ }
+ 
+-static int stat_get_doit_qp(struct sk_buff *skb, struct nlmsghdr *nlh,
+-			    struct netlink_ext_ack *extack, struct nlattr *tb[])
++static noinline_for_stack int
++stat_get_doit_qp(struct sk_buff *skb, struct nlmsghdr *nlh,
++		 struct netlink_ext_ack *extack, struct nlattr *tb[])
+ 
+ {
+ 	static enum rdma_nl_counter_mode mode;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 063801384b2b04..3a627acb82ce13 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -4738,7 +4738,7 @@ static int UVERBS_HANDLER(BNXT_RE_METHOD_GET_TOGGLE_MEM)(struct uverbs_attr_bund
+ 		return err;
+ 
+ 	err = uverbs_copy_to(attrs, BNXT_RE_TOGGLE_MEM_MMAP_OFFSET,
+-			     &offset, sizeof(length));
++			     &offset, sizeof(offset));
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index 7ead8746b79b38..f2c530ab85a563 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -964,31 +964,35 @@ static void find_hw_thread_mask(uint hw_thread_no, cpumask_var_t hw_thread_mask,
+ 				struct hfi1_affinity_node_list *affinity)
+ {
+ 	int possible, curr_cpu, i;
+-	uint num_cores_per_socket = node_affinity.num_online_cpus /
++	uint num_cores_per_socket;
++
++	cpumask_copy(hw_thread_mask, &affinity->proc.mask);
++
++	if (affinity->num_core_siblings == 0)
++		return;
++
++	num_cores_per_socket = node_affinity.num_online_cpus /
+ 					affinity->num_core_siblings /
+ 						node_affinity.num_online_nodes;
+ 
+-	cpumask_copy(hw_thread_mask, &affinity->proc.mask);
+-	if (affinity->num_core_siblings > 0) {
+-		/* Removing other siblings not needed for now */
+-		possible = cpumask_weight(hw_thread_mask);
+-		curr_cpu = cpumask_first(hw_thread_mask);
+-		for (i = 0;
+-		     i < num_cores_per_socket * node_affinity.num_online_nodes;
+-		     i++)
+-			curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+-
+-		for (; i < possible; i++) {
+-			cpumask_clear_cpu(curr_cpu, hw_thread_mask);
+-			curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+-		}
++	/* Removing other siblings not needed for now */
++	possible = cpumask_weight(hw_thread_mask);
++	curr_cpu = cpumask_first(hw_thread_mask);
++	for (i = 0;
++	     i < num_cores_per_socket * node_affinity.num_online_nodes;
++	     i++)
++		curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+ 
+-		/* Identifying correct HW threads within physical cores */
+-		cpumask_shift_left(hw_thread_mask, hw_thread_mask,
+-				   num_cores_per_socket *
+-				   node_affinity.num_online_nodes *
+-				   hw_thread_no);
++	for (; i < possible; i++) {
++		cpumask_clear_cpu(curr_cpu, hw_thread_mask);
++		curr_cpu = cpumask_next(curr_cpu, hw_thread_mask);
+ 	}
++
++	/* Identifying correct HW threads within physical cores */
++	cpumask_shift_left(hw_thread_mask, hw_thread_mask,
++			   num_cores_per_socket *
++			   node_affinity.num_online_nodes *
++			   hw_thread_no);
+ }
+ 
+ int hfi1_get_proc_affinity(int node)
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index 6432bce7d08317..4ed2b1ea296e75 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -332,18 +332,17 @@ static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
+ 		if (!sendpage_ok(page[i]))
+ 			msg.msg_flags &= ~MSG_SPLICE_PAGES;
+ 		bvec_set_page(&bvec, page[i], bytes, offset);
+-		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
++		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);
+ 
+ try_page_again:
+ 		lock_sock(sk);
+-		rv = tcp_sendmsg_locked(sk, &msg, size);
++		rv = tcp_sendmsg_locked(sk, &msg, bytes);
+ 		release_sock(sk);
+ 
+ 		if (rv > 0) {
+ 			size -= rv;
+ 			sent += rv;
+ 			if (rv != bytes) {
+-				offset += rv;
+ 				bytes -= rv;
+ 				goto try_page_again;
+ 			}
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 10cc6dc26b7b7c..dacaa78f69aaa1 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2906,8 +2906,8 @@ int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
+ 
+ 		master_domain = kzalloc(sizeof(*master_domain), GFP_KERNEL);
+ 		if (!master_domain) {
+-			kfree(state->vmaster);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto err_free_vmaster;
+ 		}
+ 		master_domain->domain = new_domain;
+ 		master_domain->master = master;
+@@ -2941,7 +2941,6 @@ int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
+ 		    !arm_smmu_master_canwbs(master)) {
+ 			spin_unlock_irqrestore(&smmu_domain->devices_lock,
+ 					       flags);
+-			kfree(state->vmaster);
+ 			ret = -EINVAL;
+ 			goto err_iopf;
+ 		}
+@@ -2967,6 +2966,8 @@ int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
+ 	arm_smmu_disable_iopf(master, master_domain);
+ err_free_master_domain:
+ 	kfree(master_domain);
++err_free_vmaster:
++	kfree(state->vmaster);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 53d88646476e9f..57c097e8761308 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -380,6 +380,7 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
+ 	{ .compatible = "qcom,sdm670-mdss" },
+ 	{ .compatible = "qcom,sdm845-mdss" },
+ 	{ .compatible = "qcom,sdm845-mss-pil" },
++	{ .compatible = "qcom,sm6115-mdss" },
+ 	{ .compatible = "qcom,sm6350-mdss" },
+ 	{ .compatible = "qcom,sm6375-mdss" },
+ 	{ .compatible = "qcom,sm8150-mdss" },
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c0be0b64e4c773..ab4cd742f0953c 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1795,6 +1795,18 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 					  (pgd_t *)pgd, flags, old);
+ }
+ 
++static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
++				       struct intel_iommu *iommu)
++{
++	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
++		return true;
++
++	if (rwbf_quirk || cap_rwbf(iommu->cap))
++		return true;
++
++	return false;
++}
++
+ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 				     struct device *dev)
+ {
+@@ -1832,6 +1844,8 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 	if (ret)
+ 		goto out_block_translation;
+ 
++	domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain, iommu);
++
+ 	return 0;
+ 
+ out_block_translation:
+@@ -3953,7 +3967,10 @@ static bool risky_device(struct pci_dev *pdev)
+ static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
+ 				      unsigned long iova, size_t size)
+ {
+-	cache_tag_flush_range_np(to_dmar_domain(domain), iova, iova + size - 1);
++	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
++
++	if (dmar_domain->iotlb_sync_map)
++		cache_tag_flush_range_np(dmar_domain, iova, iova + size - 1);
+ 
+ 	return 0;
+ }
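
A userspace model of the attach-time caching above: the capability test runs
once per device attach and the per-map hot path reduces to a flag test. The
capability names are illustrative, not the real VT-d fields.

#include <stdio.h>
#include <stdbool.h>

struct caps { bool caching_mode; bool rwbf; };

struct domain {
	bool use_first_level;
	bool iotlb_sync_map;	/* cached at attach time */
};

static bool need_sync_map(const struct domain *d, const struct caps *c)
{
	if (c->caching_mode && !d->use_first_level)
		return true;
	return c->rwbf;
}

static void attach(struct domain *d, const struct caps *c)
{
	d->iotlb_sync_map |= need_sync_map(d, c);	/* sticky across attaches */
}

static void iotlb_sync_map(const struct domain *d)
{
	if (d->iotlb_sync_map)
		printf("flushing range\n");	/* expensive, now conditional */
}

int main(void)
{
	struct domain d = { .use_first_level = true };
	struct caps c = { .caching_mode = true, .rwbf = false };

	attach(&d, &c);
	iotlb_sync_map(&d);	/* prints nothing: no flush needed */
	return 0;
}
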
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index 2d1afab5eedcc7..61f42802fe9e95 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -614,6 +614,9 @@ struct dmar_domain {
+ 	u8 has_mappings:1;		/* Has mappings configured through
+ 					 * iommu_map() interface.
+ 					 */
++	u8 iotlb_sync_map:1;		/* Need to flush IOTLB cache or write
++					 * buffer when creating mappings.
++					 */
+ 
+ 	spinlock_t lock;		/* Protect device tracking lists */
+ 	struct list_head devices;	/* all devices' list */
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index 8a790e597e1253..c3a316fd1ef972 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -70,36 +70,45 @@ struct iopt_area *iopt_area_contig_next(struct iopt_area_contig_iter *iter)
+ 	return iter->area;
+ }
+ 
+-static bool __alloc_iova_check_hole(struct interval_tree_double_span_iter *span,
+-				    unsigned long length,
+-				    unsigned long iova_alignment,
+-				    unsigned long page_offset)
++static bool __alloc_iova_check_range(unsigned long *start, unsigned long last,
++				     unsigned long length,
++				     unsigned long iova_alignment,
++				     unsigned long page_offset)
+ {
+-	if (span->is_used || span->last_hole - span->start_hole < length - 1)
++	unsigned long aligned_start;
++
++	/* ALIGN_UP() */
++	if (check_add_overflow(*start, iova_alignment - 1, &aligned_start))
+ 		return false;
++	aligned_start &= ~(iova_alignment - 1);
++	aligned_start |= page_offset;
+ 
+-	span->start_hole = ALIGN(span->start_hole, iova_alignment) |
+-			   page_offset;
+-	if (span->start_hole > span->last_hole ||
+-	    span->last_hole - span->start_hole < length - 1)
++	if (aligned_start >= last || last - aligned_start < length - 1)
+ 		return false;
++	*start = aligned_start;
+ 	return true;
+ }
+ 
+-static bool __alloc_iova_check_used(struct interval_tree_span_iter *span,
++static bool __alloc_iova_check_hole(struct interval_tree_double_span_iter *span,
+ 				    unsigned long length,
+ 				    unsigned long iova_alignment,
+ 				    unsigned long page_offset)
+ {
+-	if (span->is_hole || span->last_used - span->start_used < length - 1)
++	if (span->is_used)
+ 		return false;
++	return __alloc_iova_check_range(&span->start_hole, span->last_hole,
++					length, iova_alignment, page_offset);
++}
+ 
+-	span->start_used = ALIGN(span->start_used, iova_alignment) |
+-			   page_offset;
+-	if (span->start_used > span->last_used ||
+-	    span->last_used - span->start_used < length - 1)
++static bool __alloc_iova_check_used(struct interval_tree_span_iter *span,
++				    unsigned long length,
++				    unsigned long iova_alignment,
++				    unsigned long page_offset)
++{
++	if (span->is_hole)
+ 		return false;
+-	return true;
++	return __alloc_iova_check_range(&span->start_used, span->last_used,
++					length, iova_alignment, page_offset);
+ }
+ 
+ /*
+@@ -743,8 +752,10 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
+ 			iommufd_access_notify_unmap(iopt, area_first, length);
+ 			/* Something is not responding to unmap requests. */
+ 			tries++;
+-			if (WARN_ON(tries > 100))
+-				return -EDEADLOCK;
++			if (WARN_ON(tries > 100)) {
++				rc = -EDEADLOCK;
++				goto out_unmapped;
++			}
+ 			goto again;
+ 		}
+ 
+@@ -766,6 +777,7 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
+ out_unlock_iova:
+ 	up_write(&iopt->iova_rwsem);
+ 	up_read(&iopt->domains_rwsem);
++out_unmapped:
+ 	if (unmapped)
+ 		*unmapped = unmapped_bytes;
+ 	return rc;
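
A userspace model of the overflow-guarded ALIGN_UP() introduced above, using the
GCC/Clang builtin that the kernel's check_add_overflow() wraps: the plain
(x + a - 1) & ~(a - 1) idiom wraps around for x near ULONG_MAX and would
"succeed" with a bogus low address.

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

static bool align_up_checked(unsigned long start, unsigned long align,
			     unsigned long *out)
{
	unsigned long tmp;

	if (__builtin_add_overflow(start, align - 1, &tmp))
		return false;			/* would wrap: reject */
	*out = tmp & ~(align - 1);
	return true;
}

int main(void)
{
	unsigned long v;

	/* near the top of the address space: refused instead of wrapping */
	printf("ok=%d\n", align_up_checked(ULONG_MAX - 2, 4096, &v));
	if (align_up_checked(4097, 4096, &v))
		printf("aligned=%lu\n", v);	/* 8192 */
	return 0;
}
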
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index 34e8d09c12a0b2..19a57c5e2b2ea4 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -375,9 +375,13 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
+ 	/*
+ 	 * The GIC specifies that we can only route an interrupt to one VP(E),
+ 	 * ie. CPU in Linux parlance, at a time. Therefore we always route to
+-	 * the first online CPU in the mask.
++	 * the first forced or online CPU in the mask.
+ 	 */
+-	cpu = cpumask_first_and(cpumask, cpu_online_mask);
++	if (force)
++		cpu = cpumask_first(cpumask);
++	else
++		cpu = cpumask_first_and(cpumask, cpu_online_mask);
++
+ 	if (cpu >= NR_CPUS)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
+index d3232d6d8dced5..54833717f8a70f 100644
+--- a/drivers/irqchip/irq-mvebu-gicp.c
++++ b/drivers/irqchip/irq-mvebu-gicp.c
+@@ -177,6 +177,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ 		.ops	= &gicp_domain_ops,
+ 	};
+ 	struct mvebu_gicp *gicp;
++	void __iomem *base;
+ 	int ret, i;
+ 
+ 	gicp = devm_kzalloc(&pdev->dev, sizeof(*gicp), GFP_KERNEL);
+@@ -236,6 +237,15 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
++	base = ioremap(gicp->res->start, resource_size(gicp->res));
++	if (!base) {

++		dev_err(&pdev->dev, "ioremap() failed. Unable to clear pending interrupts.\n");
++	} else {
++		for (i = 0; i < 64; i++)
++			writel(i, base + GICP_CLRSPI_NSR_OFFSET);
++		iounmap(base);
++	}
++
+ 	return msi_create_parent_irq_domain(&info, &gicp_msi_parent_ops) ? 0 : -ENOMEM;
+ }
+ 
+diff --git a/drivers/irqchip/irq-renesas-rzv2h.c b/drivers/irqchip/irq-renesas-rzv2h.c
+index 69b32c19e8ff68..76fb1354e2aa9e 100644
+--- a/drivers/irqchip/irq-renesas-rzv2h.c
++++ b/drivers/irqchip/irq-renesas-rzv2h.c
+@@ -427,7 +427,9 @@ static const struct irq_chip rzv2h_icu_chip = {
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_set_type		= rzv2h_icu_set_type,
+ 	.irq_set_affinity	= irq_chip_set_affinity_parent,
+-	.flags			= IRQCHIP_SET_TYPE_MASKED,
++	.flags			= IRQCHIP_MASK_ON_SUSPEND |
++				  IRQCHIP_SET_TYPE_MASKED |
++				  IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+ static int rzv2h_icu_alloc(struct irq_domain *domain, unsigned int virq, unsigned int nr_irqs,
+diff --git a/drivers/leds/flash/leds-qcom-flash.c b/drivers/leds/flash/leds-qcom-flash.c
+index b4c19be51c4da7..89cf5120f5d55b 100644
+--- a/drivers/leds/flash/leds-qcom-flash.c
++++ b/drivers/leds/flash/leds-qcom-flash.c
+@@ -117,7 +117,7 @@ enum {
+ 	REG_MAX_COUNT,
+ };
+ 
+-static struct reg_field mvflash_3ch_regs[REG_MAX_COUNT] = {
++static const struct reg_field mvflash_3ch_regs[REG_MAX_COUNT] = {
+ 	REG_FIELD(0x08, 0, 7),			/* status1	*/
+ 	REG_FIELD(0x09, 0, 7),                  /* status2	*/
+ 	REG_FIELD(0x0a, 0, 7),                  /* status3	*/
+@@ -132,7 +132,7 @@ static struct reg_field mvflash_3ch_regs[REG_MAX_COUNT] = {
+ 	REG_FIELD(0x58, 0, 2),			/* therm_thrsh3 */
+ };
+ 
+-static struct reg_field mvflash_4ch_regs[REG_MAX_COUNT] = {
++static const struct reg_field mvflash_4ch_regs[REG_MAX_COUNT] = {
+ 	REG_FIELD(0x06, 0, 7),			/* status1	*/
+ 	REG_FIELD(0x07, 0, 6),			/* status2	*/
+ 	REG_FIELD(0x09, 0, 7),			/* status3	*/
+@@ -854,11 +854,17 @@ static int qcom_flash_led_probe(struct platform_device *pdev)
+ 	if (val == FLASH_SUBTYPE_3CH_PM8150_VAL || val == FLASH_SUBTYPE_3CH_PMI8998_VAL) {
+ 		flash_data->hw_type = QCOM_MVFLASH_3CH;
+ 		flash_data->max_channels = 3;
+-		regs = mvflash_3ch_regs;
++		regs = devm_kmemdup(dev, mvflash_3ch_regs, sizeof(mvflash_3ch_regs),
++				    GFP_KERNEL);
++		if (!regs)
++			return -ENOMEM;
+ 	} else if (val == FLASH_SUBTYPE_4CH_VAL) {
+ 		flash_data->hw_type = QCOM_MVFLASH_4CH;
+ 		flash_data->max_channels = 4;
+-		regs = mvflash_4ch_regs;
++		regs = devm_kmemdup(dev, mvflash_4ch_regs, sizeof(mvflash_4ch_regs),
++				    GFP_KERNEL);
++		if (!regs)
++			return -ENOMEM;
+ 
+ 		rc = regmap_read(regmap, reg_base + FLASH_REVISION_REG, &val);
+ 		if (rc < 0) {
+@@ -880,6 +886,7 @@ static int qcom_flash_led_probe(struct platform_device *pdev)
+ 		dev_err(dev, "Failed to allocate regmap field, rc=%d\n", rc);
+ 		return rc;
+ 	}
++	devm_kfree(dev, regs); /* devm_regmap_field_bulk_alloc() makes copies */
+ 
+ 	platform_set_drvdata(pdev, flash_data);
+ 	mutex_init(&flash_data->lock);
+diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
+index 02cb1565a9fb62..94f8ef6b482c91 100644
+--- a/drivers/leds/leds-lp50xx.c
++++ b/drivers/leds/leds-lp50xx.c
+@@ -476,6 +476,7 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 			return -ENOMEM;
+ 
+ 		fwnode_for_each_child_node(child, led_node) {
++			int multi_index;
+ 			ret = fwnode_property_read_u32(led_node, "color",
+ 						       &color_id);
+ 			if (ret) {
+@@ -483,8 +484,16 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 				dev_err(priv->dev, "Cannot read color\n");
+ 				return ret;
+ 			}
++			ret = fwnode_property_read_u32(led_node, "reg", &multi_index);
++			if (ret != 0) {
++				dev_err(priv->dev, "reg must be set\n");
++				return -EINVAL;
++			} else if (multi_index >= LP50XX_LEDS_PER_MODULE) {
++				dev_err(priv->dev, "reg %i out of range\n", multi_index);
++				return -EINVAL;
++			}
+ 
+-			mc_led_info[num_colors].color_index = color_id;
++			mc_led_info[multi_index].color_index = color_id;
+ 			num_colors++;
+ 		}
+ 
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index 4e048e08c4fdec..c15efe3e50780f 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -68,7 +68,6 @@ struct led_netdev_data {
+ 	unsigned int last_activity;
+ 
+ 	unsigned long mode;
+-	unsigned long blink_delay;
+ 	int link_speed;
+ 	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported_link_modes);
+ 	u8 duplex;
+@@ -87,10 +86,6 @@ static void set_baseline_state(struct led_netdev_data *trigger_data)
+ 	/* Already validated, hw control is possible with the requested mode */
+ 	if (trigger_data->hw_control) {
+ 		led_cdev->hw_control_set(led_cdev, trigger_data->mode);
+-		if (led_cdev->blink_set) {
+-			led_cdev->blink_set(led_cdev, &trigger_data->blink_delay,
+-					    &trigger_data->blink_delay);
+-		}
+ 
+ 		return;
+ 	}
+@@ -459,11 +454,10 @@ static ssize_t interval_store(struct device *dev,
+ 			      size_t size)
+ {
+ 	struct led_netdev_data *trigger_data = led_trigger_get_drvdata(dev);
+-	struct led_classdev *led_cdev = trigger_data->led_cdev;
+ 	unsigned long value;
+ 	int ret;
+ 
+-	if (trigger_data->hw_control && !led_cdev->blink_set)
++	if (trigger_data->hw_control)
+ 		return -EINVAL;
+ 
+ 	ret = kstrtoul(buf, 0, &value);
+@@ -472,13 +466,9 @@ static ssize_t interval_store(struct device *dev,
+ 
+ 	/* impose some basic bounds on the timer interval */
+ 	if (value >= 5 && value <= 10000) {
+-		if (trigger_data->hw_control) {
+-			trigger_data->blink_delay = value;
+-		} else {
+-			cancel_delayed_work_sync(&trigger_data->work);
++		cancel_delayed_work_sync(&trigger_data->work);
+ 
+-			atomic_set(&trigger_data->interval, msecs_to_jiffies(value));
+-		}
++		atomic_set(&trigger_data->interval, msecs_to_jiffies(value));
+ 		set_baseline_state(trigger_data);	/* resets timer */
+ 	}
+ 
+diff --git a/drivers/md/dm-ps-historical-service-time.c b/drivers/md/dm-ps-historical-service-time.c
+index b49e10d76d0302..2c8626a83de437 100644
+--- a/drivers/md/dm-ps-historical-service-time.c
++++ b/drivers/md/dm-ps-historical-service-time.c
+@@ -541,8 +541,10 @@ static int __init dm_hst_init(void)
+ {
+ 	int r = dm_register_path_selector(&hst_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " HST_VERSION " loaded");
+ 
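
A compact model of the init-error pattern fixed here and in the three path
selectors below: if registration fails, module init must propagate the error so
the load is aborted, instead of logging and then reporting the selector as
loaded. The registration stub and error value are illustrative.

#include <stdio.h>

static int register_path_selector(void)
{
	return -16;	/* simulate -EBUSY from the real registration */
}

static int mod_init(void)
{
	int r = register_path_selector();

	if (r < 0) {
		fprintf(stderr, "register failed %d\n", r);
		return r;	/* previously fell through to "loaded" */
	}

	printf("version 1.0 loaded\n");
	return 0;
}

int main(void)
{
	return mod_init() ? 1 : 0;
}
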
+diff --git a/drivers/md/dm-ps-queue-length.c b/drivers/md/dm-ps-queue-length.c
+index e305f05ad1e5e8..eb543e6431e038 100644
+--- a/drivers/md/dm-ps-queue-length.c
++++ b/drivers/md/dm-ps-queue-length.c
+@@ -260,8 +260,10 @@ static int __init dm_ql_init(void)
+ {
+ 	int r = dm_register_path_selector(&ql_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " QL_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-ps-round-robin.c b/drivers/md/dm-ps-round-robin.c
+index d1745b123dc19c..66a15ac0c22c8b 100644
+--- a/drivers/md/dm-ps-round-robin.c
++++ b/drivers/md/dm-ps-round-robin.c
+@@ -220,8 +220,10 @@ static int __init dm_rr_init(void)
+ {
+ 	int r = dm_register_path_selector(&rr_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " RR_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-ps-service-time.c b/drivers/md/dm-ps-service-time.c
+index 969d31c40272e2..f8c43aecdb27ad 100644
+--- a/drivers/md/dm-ps-service-time.c
++++ b/drivers/md/dm-ps-service-time.c
+@@ -341,8 +341,10 @@ static int __init dm_st_init(void)
+ {
+ 	int r = dm_register_path_selector(&st_ps);
+ 
+-	if (r < 0)
++	if (r < 0) {
+ 		DMERR("register failed %d", r);
++		return r;
++	}
+ 
+ 	DMINFO("version " ST_VERSION " loaded");
+ 
+diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
+index a7dc04bd55e5cb..5bbbdf8fc1bdef 100644
+--- a/drivers/md/dm-stripe.c
++++ b/drivers/md/dm-stripe.c
+@@ -458,6 +458,7 @@ static void stripe_io_hints(struct dm_target *ti,
+ 	struct stripe_c *sc = ti->private;
+ 	unsigned int chunk_size = sc->chunk_size << SECTOR_SHIFT;
+ 
++	limits->chunk_sectors = sc->chunk_size;
+ 	limits->io_min = chunk_size;
+ 	limits->io_opt = chunk_size * sc->stripes;
+ }
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 24a857ff6d0b1d..79ba4bacd0f9a9 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -899,17 +899,17 @@ static bool dm_table_supports_dax(struct dm_table *t,
+ 	return true;
+ }
+ 
+-static int device_is_rq_stackable(struct dm_target *ti, struct dm_dev *dev,
+-				  sector_t start, sector_t len, void *data)
++static int device_is_not_rq_stackable(struct dm_target *ti, struct dm_dev *dev,
++				      sector_t start, sector_t len, void *data)
+ {
+ 	struct block_device *bdev = dev->bdev;
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 
+ 	/* request-based cannot stack on partitions! */
+ 	if (bdev_is_partition(bdev))
+-		return false;
++		return true;
+ 
+-	return queue_is_mq(q);
++	return !queue_is_mq(q);
+ }
+ 
+ static int dm_table_determine_type(struct dm_table *t)
+@@ -1005,7 +1005,7 @@ static int dm_table_determine_type(struct dm_table *t)
+ 
+ 	/* Non-request-stackable devices can't be used for request-based dm */
+ 	if (!ti->type->iterate_devices ||
+-	    !ti->type->iterate_devices(ti, device_is_rq_stackable, NULL)) {
++	    ti->type->iterate_devices(ti, device_is_not_rq_stackable, NULL)) {
+ 		DMERR("table load rejected: including non-request-stackable devices");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
+index 5da3db06da1030..9da329078ea4a1 100644
+--- a/drivers/md/dm-zoned-target.c
++++ b/drivers/md/dm-zoned-target.c
+@@ -1062,7 +1062,7 @@ static int dmz_iterate_devices(struct dm_target *ti,
+ 	struct dmz_target *dmz = ti->private;
+ 	unsigned int zone_nr_sectors = dmz_zone_nr_sectors(dmz->metadata);
+ 	sector_t capacity;
+-	int i, r;
++	int i, r = 0;
+ 
+ 	for (i = 0; i < dmz->nr_ddevs; i++) {
+ 		capacity = dmz->dev[i].capacity & ~(zone_nr_sectors - 1);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 1726f0f828cc94..9f6d88ea60e67f 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1776,19 +1776,35 @@ static void init_clone_info(struct clone_info *ci, struct dm_io *io,
+ }
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+-static inline bool dm_zone_bio_needs_split(struct mapped_device *md,
+-					   struct bio *bio)
++static inline bool dm_zone_bio_needs_split(struct bio *bio)
+ {
+ 	/*
+-	 * For mapped device that need zone append emulation, we must
+-	 * split any large BIO that straddles zone boundaries.
++	 * Special case the zone operations that cannot or should not be split.
+ 	 */
+-	return dm_emulate_zone_append(md) && bio_straddles_zones(bio) &&
+-		!bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING);
++	switch (bio_op(bio)) {
++	case REQ_OP_ZONE_APPEND:
++	case REQ_OP_ZONE_FINISH:
++	case REQ_OP_ZONE_RESET:
++	case REQ_OP_ZONE_RESET_ALL:
++		return false;
++	default:
++		break;
++	}
++
++	/*
++	 * When mapped devices use the block layer zone write plugging, we must
++	 * split any large BIO to the mapped device limits to not submit BIOs
++	 * that span zone boundaries and to avoid potential deadlocks with
++	 * queue freeze operations.
++	 */
++	return bio_needs_zone_write_plugging(bio) || bio_straddles_zones(bio);
+ }
++
+ static inline bool dm_zone_plug_bio(struct mapped_device *md, struct bio *bio)
+ {
+-	return dm_emulate_zone_append(md) && blk_zone_plug_bio(bio, 0);
++	if (!bio_needs_zone_write_plugging(bio))
++		return false;
++	return blk_zone_plug_bio(bio, 0);
+ }
+ 
+ static blk_status_t __send_zone_reset_all_emulated(struct clone_info *ci,
+@@ -1904,8 +1920,7 @@ static blk_status_t __send_zone_reset_all(struct clone_info *ci)
+ }
+ 
+ #else
+-static inline bool dm_zone_bio_needs_split(struct mapped_device *md,
+-					   struct bio *bio)
++static inline bool dm_zone_bio_needs_split(struct bio *bio)
+ {
+ 	return false;
+ }
+@@ -1932,9 +1947,7 @@ static void dm_split_and_process_bio(struct mapped_device *md,
+ 
+ 	is_abnormal = is_abnormal_io(bio);
+ 	if (static_branch_unlikely(&zoned_enabled)) {
+-		/* Special case REQ_OP_ZONE_RESET_ALL as it cannot be split. */
+-		need_split = (bio_op(bio) != REQ_OP_ZONE_RESET_ALL) &&
+-			(is_abnormal || dm_zone_bio_needs_split(md, bio));
++		need_split = is_abnormal || dm_zone_bio_needs_split(bio);
+ 	} else {
+ 		need_split = is_abnormal;
+ 	}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index d2b237652d7e9f..95dc354a86a081 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4009,6 +4009,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
+ 	md_init_stacking_limits(&lim);
+ 	lim.max_write_zeroes_sectors = 0;
+ 	lim.io_min = mddev->chunk_sectors << 9;
++	lim.chunk_sectors = mddev->chunk_sectors;
+ 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+ 	lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+diff --git a/drivers/media/dvb-frontends/dib7000p.c b/drivers/media/dvb-frontends/dib7000p.c
+index b40daf2420469b..7d3a994b7cc4f0 100644
+--- a/drivers/media/dvb-frontends/dib7000p.c
++++ b/drivers/media/dvb-frontends/dib7000p.c
+@@ -2193,6 +2193,8 @@ static int w7090p_tuner_write_serpar(struct i2c_adapter *i2c_adap, struct i2c_ms
+ 	struct dib7000p_state *state = i2c_get_adapdata(i2c_adap);
+ 	u8 n_overflow = 1;
+ 	u16 i = 1000;
++	if (msg[0].len < 3)
++		return -EOPNOTSUPP;
+ 	u16 serpar_num = msg[0].buf[0];
+ 
+ 	while (n_overflow == 1 && i) {
+@@ -2212,6 +2214,8 @@ static int w7090p_tuner_read_serpar(struct i2c_adapter *i2c_adap, struct i2c_msg
+ 	struct dib7000p_state *state = i2c_get_adapdata(i2c_adap);
+ 	u8 n_overflow = 1, n_empty = 1;
+ 	u16 i = 1000;
++	if (msg[0].len < 1 || msg[1].len < 2)
++		return -EOPNOTSUPP;
+ 	u16 serpar_num = msg[0].buf[0];
+ 	u16 read_word;
+ 
+@@ -2256,8 +2260,12 @@ static int dib7090p_rw_on_apb(struct i2c_adapter *i2c_adap,
+ 	u16 word;
+ 
+ 	if (num == 1) {		/* write */
++		if (msg[0].len < 3)
++			return -EOPNOTSUPP;
+ 		dib7000p_write_word(state, apb_address, ((msg[0].buf[1] << 8) | (msg[0].buf[2])));
+ 	} else {
++		if (msg[1].len < 2)
++			return -EOPNOTSUPP;
+ 		word = dib7000p_read_word(state, apb_address);
+ 		msg[1].buf[0] = (word >> 8) & 0xff;
+ 		msg[1].buf[1] = (word) & 0xff;
+diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c
+index aed258211b8a88..d3cc65b67855c8 100644
+--- a/drivers/media/i2c/hi556.c
++++ b/drivers/media/i2c/hi556.c
+@@ -1321,7 +1321,12 @@ static int hi556_resume(struct device *dev)
+ 		return ret;
+ 	}
+ 
+-	gpiod_set_value_cansleep(hi556->reset_gpio, 0);
++	if (hi556->reset_gpio) {
++		/* Assert reset for at least 2 ms on back-to-back off-on cycles */
++		usleep_range(2000, 2200);
++		gpiod_set_value_cansleep(hi556->reset_gpio, 0);
++	}
++
+ 	usleep_range(5000, 5500);
+ 	return 0;
+ }
+diff --git a/drivers/media/i2c/lt6911uxe.c b/drivers/media/i2c/lt6911uxe.c
+index 24857d683fcfcf..bdefdd157e69ca 100644
+--- a/drivers/media/i2c/lt6911uxe.c
++++ b/drivers/media/i2c/lt6911uxe.c
+@@ -600,7 +600,7 @@ static int lt6911uxe_probe(struct i2c_client *client)
+ 
+ 	v4l2_i2c_subdev_init(&lt6911uxe->sd, client, &lt6911uxe_subdev_ops);
+ 
+-	lt6911uxe->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_IN);
++	lt6911uxe->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
+ 	if (IS_ERR(lt6911uxe->reset_gpio))
+ 		return dev_err_probe(dev, PTR_ERR(lt6911uxe->reset_gpio),
+ 				     "failed to get reset gpio\n");
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 3d6703b75bfa54..1c7546d2ada48d 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -114,7 +114,7 @@ static inline struct tc358743_state *to_state(struct v4l2_subdev *sd)
+ 
+ /* --------------- I2C --------------- */
+ 
+-static void i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
++static int i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ {
+ 	struct tc358743_state *state = to_state(sd);
+ 	struct i2c_client *client = state->i2c_client;
+@@ -140,6 +140,7 @@ static void i2c_rd(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ 		v4l2_err(sd, "%s: reading register 0x%x from 0x%x failed: %d\n",
+ 				__func__, reg, client->addr, err);
+ 	}
++	return err != ARRAY_SIZE(msgs);
+ }
+ 
+ static void i2c_wr(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+@@ -196,15 +197,24 @@ static void i2c_wr(struct v4l2_subdev *sd, u16 reg, u8 *values, u32 n)
+ 	}
+ }
+ 
+-static noinline u32 i2c_rdreg(struct v4l2_subdev *sd, u16 reg, u32 n)
++static noinline u32 i2c_rdreg_err(struct v4l2_subdev *sd, u16 reg, u32 n,
++				  int *err)
+ {
++	int error;
+ 	__le32 val = 0;
+ 
+-	i2c_rd(sd, reg, (u8 __force *)&val, n);
++	error = i2c_rd(sd, reg, (u8 __force *)&val, n);
++	if (err)
++		*err = error;
+ 
+ 	return le32_to_cpu(val);
+ }
+ 
++static inline u32 i2c_rdreg(struct v4l2_subdev *sd, u16 reg, u32 n)
++{
++	return i2c_rdreg_err(sd, reg, n, NULL);
++}
++
+ static noinline void i2c_wrreg(struct v4l2_subdev *sd, u16 reg, u32 val, u32 n)
+ {
+ 	__le32 raw = cpu_to_le32(val);
+@@ -233,6 +243,13 @@ static u16 i2c_rd16(struct v4l2_subdev *sd, u16 reg)
+ 	return i2c_rdreg(sd, reg, 2);
+ }
+ 
++static int i2c_rd16_err(struct v4l2_subdev *sd, u16 reg, u16 *value)
++{
++	int err;
++	*value = i2c_rdreg_err(sd, reg, 2, &err);
++	return err;
++}
++
+ static void i2c_wr16(struct v4l2_subdev *sd, u16 reg, u16 val)
+ {
+ 	i2c_wrreg(sd, reg, val, 2);
+@@ -1691,12 +1708,23 @@ static int tc358743_enum_mbus_code(struct v4l2_subdev *sd,
+ 	return 0;
+ }
+ 
++static u32 tc358743_g_colorspace(u32 code)
++{
++	switch (code) {
++	case MEDIA_BUS_FMT_RGB888_1X24:
++		return V4L2_COLORSPACE_SRGB;
++	case MEDIA_BUS_FMT_UYVY8_1X16:
++		return V4L2_COLORSPACE_SMPTE170M;
++	default:
++		return 0;
++	}
++}
++
+ static int tc358743_get_fmt(struct v4l2_subdev *sd,
+ 		struct v4l2_subdev_state *sd_state,
+ 		struct v4l2_subdev_format *format)
+ {
+ 	struct tc358743_state *state = to_state(sd);
+-	u8 vi_rep = i2c_rd8(sd, VI_REP);
+ 
+ 	if (format->pad != 0)
+ 		return -EINVAL;
+@@ -1706,23 +1734,7 @@ static int tc358743_get_fmt(struct v4l2_subdev *sd,
+ 	format->format.height = state->timings.bt.height;
+ 	format->format.field = V4L2_FIELD_NONE;
+ 
+-	switch (vi_rep & MASK_VOUT_COLOR_SEL) {
+-	case MASK_VOUT_COLOR_RGB_FULL:
+-	case MASK_VOUT_COLOR_RGB_LIMITED:
+-		format->format.colorspace = V4L2_COLORSPACE_SRGB;
+-		break;
+-	case MASK_VOUT_COLOR_601_YCBCR_LIMITED:
+-	case MASK_VOUT_COLOR_601_YCBCR_FULL:
+-		format->format.colorspace = V4L2_COLORSPACE_SMPTE170M;
+-		break;
+-	case MASK_VOUT_COLOR_709_YCBCR_FULL:
+-	case MASK_VOUT_COLOR_709_YCBCR_LIMITED:
+-		format->format.colorspace = V4L2_COLORSPACE_REC709;
+-		break;
+-	default:
+-		format->format.colorspace = 0;
+-		break;
+-	}
++	format->format.colorspace = tc358743_g_colorspace(format->format.code);
+ 
+ 	return 0;
+ }
+@@ -1736,19 +1748,14 @@ static int tc358743_set_fmt(struct v4l2_subdev *sd,
+ 	u32 code = format->format.code; /* is overwritten by get_fmt */
+ 	int ret = tc358743_get_fmt(sd, sd_state, format);
+ 
+-	format->format.code = code;
++	if (code == MEDIA_BUS_FMT_RGB888_1X24 ||
++	    code == MEDIA_BUS_FMT_UYVY8_1X16)
++		format->format.code = code;
++	format->format.colorspace = tc358743_g_colorspace(format->format.code);
+ 
+ 	if (ret)
+ 		return ret;
+ 
+-	switch (code) {
+-	case MEDIA_BUS_FMT_RGB888_1X24:
+-	case MEDIA_BUS_FMT_UYVY8_1X16:
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+ 	if (format->which == V4L2_SUBDEV_FORMAT_TRY)
+ 		return 0;
+ 
+@@ -1972,8 +1979,19 @@ static int tc358743_probe_of(struct tc358743_state *state)
+ 	state->pdata.refclk_hz = clk_get_rate(refclk);
+ 	state->pdata.ddc5v_delay = DDC5V_DELAY_100_MS;
+ 	state->pdata.enable_hdcp = false;
+-	/* A FIFO level of 16 should be enough for 2-lane 720p60 at 594 MHz. */
+-	state->pdata.fifo_level = 16;
++	/*
++	 * Ideally the FIFO trigger level should be set based on the input and
++	 * output data rates, but the calculations required are buried in
++	 * Toshiba's register settings spreadsheet.
++	 * A value of 16 works with a 594Mbps data rate for 720p60 (using 2
++	 * lanes) and 1080p60 (using 4 lanes), but fails when the data rate
++	 * is increased, or a lower pixel clock is used that results in the
++	 * CSI reading out faster than the data is arriving.
++	 *
++	 * A value of 374 works with both those modes at 594Mbps, and with most
++	 * modes on 972Mbps.
++	 */
++	state->pdata.fifo_level = 374;
+ 	/*
+ 	 * The PLL input clock is obtained by dividing refclk by pll_prd.
+ 	 * It must be between 6 MHz and 40 MHz, lower frequency is better.
+@@ -2061,6 +2079,7 @@ static int tc358743_probe(struct i2c_client *client)
+ 	struct tc358743_platform_data *pdata = client->dev.platform_data;
+ 	struct v4l2_subdev *sd;
+ 	u16 irq_mask = MASK_HDMI_MSK | MASK_CSI_MSK;
++	u16 chipid;
+ 	int err;
+ 
+ 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+@@ -2092,7 +2111,8 @@ static int tc358743_probe(struct i2c_client *client)
+ 	sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+ 
+ 	/* i2c access */
+-	if ((i2c_rd16(sd, CHIPID) & MASK_CHIPID) != 0) {
++	if (i2c_rd16_err(sd, CHIPID, &chipid) ||
++	    (chipid & MASK_CHIPID) != 0) {
+ 		v4l2_info(sd, "not a TC358743 on address 0x%x\n",
+ 			  client->addr << 1);
+ 		return -ENODEV;
+diff --git a/drivers/media/i2c/vd55g1.c b/drivers/media/i2c/vd55g1.c
+index 25e2fc88a0367b..dec6e3e231d54a 100644
+--- a/drivers/media/i2c/vd55g1.c
++++ b/drivers/media/i2c/vd55g1.c
+@@ -129,8 +129,8 @@
+ #define VD55G1_FWPATCH_REVISION_MINOR			9
+ #define VD55G1_XCLK_FREQ_MIN				(6 * HZ_PER_MHZ)
+ #define VD55G1_XCLK_FREQ_MAX				(27 * HZ_PER_MHZ)
+-#define VD55G1_MIPI_RATE_MIN				(250 * HZ_PER_MHZ)
+-#define VD55G1_MIPI_RATE_MAX				(1200 * HZ_PER_MHZ)
++#define VD55G1_MIPI_RATE_MIN				(250 * MEGA)
++#define VD55G1_MIPI_RATE_MAX				(1200 * MEGA)
+ 
+ static const u8 patch_array[] = {
+ 	0x44, 0x03, 0x09, 0x02, 0xe6, 0x01, 0x42, 0x00, 0xea, 0x01, 0x42, 0x00,
+@@ -1038,8 +1038,6 @@ static int vd55g1_enable_streams(struct v4l2_subdev *sd,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	vd55g1_write(sensor, VD55G1_REG_EXT_CLOCK, sensor->xclk_freq, &ret);
+-
+ 	/* Configure output */
+ 	vd55g1_write(sensor, VD55G1_REG_MIPI_DATA_RATE,
+ 		     sensor->mipi_rate, &ret);
+@@ -1084,7 +1082,7 @@ static int vd55g1_enable_streams(struct v4l2_subdev *sd,
+ 
+ err_rpm_put:
+ 	pm_runtime_put(sensor->dev);
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ static int vd55g1_disable_streams(struct v4l2_subdev *sd,
+@@ -1613,6 +1611,9 @@ static int vd55g1_power_on(struct device *dev)
+ 		goto disable_clock;
+ 	}
+ 
++	/* Set up the clock now to advance through the system FSM states */
++	vd55g1_write(sensor, VD55G1_REG_EXT_CLOCK, sensor->xclk_freq, &ret);
++
+ 	ret = vd55g1_patch(sensor);
+ 	if (ret) {
+ 		dev_err(dev, "Sensor patch failed %d\n", ret);
+diff --git a/drivers/media/pci/intel/ipu-bridge.c b/drivers/media/pci/intel/ipu-bridge.c
+index 83e682e1a4b77d..73560c2c67c164 100644
+--- a/drivers/media/pci/intel/ipu-bridge.c
++++ b/drivers/media/pci/intel/ipu-bridge.c
+@@ -60,6 +60,8 @@ static const struct ipu_sensor_config ipu_supported_sensors[] = {
+ 	IPU_SENSOR_CONFIG("INT33BE", 1, 419200000),
+ 	/* Omnivision OV2740 */
+ 	IPU_SENSOR_CONFIG("INT3474", 1, 180000000),
++	/* Omnivision OV5670 */
++	IPU_SENSOR_CONFIG("INT3479", 1, 422400000),
+ 	/* Omnivision OV8865 */
+ 	IPU_SENSOR_CONFIG("INT347A", 1, 360000000),
+ 	/* Omnivision OV7251 */
+diff --git a/drivers/media/platform/qcom/iris/iris_buffer.c b/drivers/media/platform/qcom/iris/iris_buffer.c
+index e5c5a564fcb81e..7dd5730a867af7 100644
+--- a/drivers/media/platform/qcom/iris/iris_buffer.c
++++ b/drivers/media/platform/qcom/iris/iris_buffer.c
+@@ -593,10 +593,13 @@ int iris_vb2_buffer_done(struct iris_inst *inst, struct iris_buffer *buf)
+ 
+ 	vb2 = &vbuf->vb2_buf;
+ 
+-	if (buf->flags & V4L2_BUF_FLAG_ERROR)
++	if (buf->flags & V4L2_BUF_FLAG_ERROR) {
+ 		state = VB2_BUF_STATE_ERROR;
+-	else
+-		state = VB2_BUF_STATE_DONE;
++		vb2_set_plane_payload(vb2, 0, 0);
++		vb2->timestamp = 0;
++		v4l2_m2m_buf_done(vbuf, state);
++		return 0;
++	}
+ 
+ 	vbuf->flags |= buf->flags;
+ 
+@@ -616,6 +619,8 @@ int iris_vb2_buffer_done(struct iris_inst *inst, struct iris_buffer *buf)
+ 			v4l2_m2m_mark_stopped(m2m_ctx);
+ 		}
+ 	}
++
++	state = VB2_BUF_STATE_DONE;
+ 	vb2->timestamp = buf->timestamp;
+ 	v4l2_m2m_buf_done(vbuf, state);
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
+index 9f246816a28620..93b5f838c2901c 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
+@@ -117,6 +117,8 @@
+ #define HFI_FRAME_NOTCODED				0x7f002000
+ #define HFI_FRAME_YUV					0x7f004000
+ #define HFI_UNUSED_PICT					0x10000000
++#define HFI_BUFFERFLAG_DATACORRUPT			0x00000008
++#define HFI_BUFFERFLAG_DROP_FRAME			0x20000000
+ 
+ struct hfi_pkt_hdr {
+ 	u32 size;
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
+index b72d503dd74018..91d95eed68aa29 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
+@@ -481,6 +481,12 @@ static void iris_hfi_gen1_session_ftb_done(struct iris_inst *inst, void *packet)
+ 	buf->attr |= BUF_ATTR_DEQUEUED;
+ 	buf->attr |= BUF_ATTR_BUFFER_DONE;
+ 
++	if (hfi_flags & HFI_BUFFERFLAG_DATACORRUPT)
++		flags |= V4L2_BUF_FLAG_ERROR;
++
++	if (hfi_flags & HFI_BUFFERFLAG_DROP_FRAME)
++		flags |= V4L2_BUF_FLAG_ERROR;
++
+ 	buf->flags |= flags;
+ 
+ 	iris_vb2_buffer_done(inst, buf);
+diff --git a/drivers/media/platform/qcom/venus/hfi_msgs.c b/drivers/media/platform/qcom/venus/hfi_msgs.c
+index 0a041b4db9efc5..cf0d97cbc4631f 100644
+--- a/drivers/media/platform/qcom/venus/hfi_msgs.c
++++ b/drivers/media/platform/qcom/venus/hfi_msgs.c
+@@ -33,8 +33,9 @@ static void event_seq_changed(struct venus_core *core, struct venus_inst *inst,
+ 	struct hfi_buffer_requirements *bufreq;
+ 	struct hfi_extradata_input_crop *crop;
+ 	struct hfi_dpb_counts *dpb_count;
++	u32 ptype, rem_bytes;
++	u32 size_read = 0;
+ 	u8 *data_ptr;
+-	u32 ptype;
+ 
+ 	inst->error = HFI_ERR_NONE;
+ 
+@@ -44,86 +45,118 @@ static void event_seq_changed(struct venus_core *core, struct venus_inst *inst,
+ 		break;
+ 	default:
+ 		inst->error = HFI_ERR_SESSION_INVALID_PARAMETER;
+-		goto done;
++		inst->ops->event_notify(inst, EVT_SYS_EVENT_CHANGE, &event);
++		return;
+ 	}
+ 
+ 	event.event_type = pkt->event_data1;
+ 
+ 	num_properties_changed = pkt->event_data2;
+-	if (!num_properties_changed) {
+-		inst->error = HFI_ERR_SESSION_INSUFFICIENT_RESOURCES;
+-		goto done;
+-	}
++	if (!num_properties_changed)
++		goto error;
+ 
+ 	data_ptr = (u8 *)&pkt->ext_event_data[0];
++	rem_bytes = pkt->shdr.hdr.size - sizeof(*pkt);
++
+ 	do {
++		if (rem_bytes < sizeof(u32))
++			goto error;
+ 		ptype = *((u32 *)data_ptr);
++
++		data_ptr += sizeof(u32);
++		rem_bytes -= sizeof(u32);
++
+ 		switch (ptype) {
+ 		case HFI_PROPERTY_PARAM_FRAME_SIZE:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_framesize))
++				goto error;
++
+ 			frame_sz = (struct hfi_framesize *)data_ptr;
+ 			event.width = frame_sz->width;
+ 			event.height = frame_sz->height;
+-			data_ptr += sizeof(*frame_sz);
++			size_read = sizeof(struct hfi_framesize);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_PROFILE_LEVEL_CURRENT:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_profile_level))
++				goto error;
++
+ 			profile_level = (struct hfi_profile_level *)data_ptr;
+ 			event.profile = profile_level->profile;
+ 			event.level = profile_level->level;
+-			data_ptr += sizeof(*profile_level);
++			size_read = sizeof(struct hfi_profile_level);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_PIXEL_BITDEPTH:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_bit_depth))
++				goto error;
++
+ 			pixel_depth = (struct hfi_bit_depth *)data_ptr;
+ 			event.bit_depth = pixel_depth->bit_depth;
+-			data_ptr += sizeof(*pixel_depth);
++			size_read = sizeof(struct hfi_bit_depth);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_PIC_STRUCT:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_pic_struct))
++				goto error;
++
+ 			pic_struct = (struct hfi_pic_struct *)data_ptr;
+ 			event.pic_struct = pic_struct->progressive_only;
+-			data_ptr += sizeof(*pic_struct);
++			size_read = sizeof(struct hfi_pic_struct);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_COLOUR_SPACE:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_colour_space))
++				goto error;
++
+ 			colour_info = (struct hfi_colour_space *)data_ptr;
+ 			event.colour_space = colour_info->colour_space;
+-			data_ptr += sizeof(*colour_info);
++			size_read = sizeof(struct hfi_colour_space);
+ 			break;
+ 		case HFI_PROPERTY_CONFIG_VDEC_ENTROPY:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(u32))
++				goto error;
++
+ 			event.entropy_mode = *(u32 *)data_ptr;
+-			data_ptr += sizeof(u32);
++			size_read = sizeof(u32);
+ 			break;
+ 		case HFI_PROPERTY_CONFIG_BUFFER_REQUIREMENTS:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_buffer_requirements))
++				goto error;
++
+ 			bufreq = (struct hfi_buffer_requirements *)data_ptr;
+ 			event.buf_count = hfi_bufreq_get_count_min(bufreq, ver);
+-			data_ptr += sizeof(*bufreq);
++			size_read = sizeof(struct hfi_buffer_requirements);
+ 			break;
+ 		case HFI_INDEX_EXTRADATA_INPUT_CROP:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_extradata_input_crop))
++				goto error;
++
+ 			crop = (struct hfi_extradata_input_crop *)data_ptr;
+ 			event.input_crop.left = crop->left;
+ 			event.input_crop.top = crop->top;
+ 			event.input_crop.width = crop->width;
+ 			event.input_crop.height = crop->height;
+-			data_ptr += sizeof(*crop);
++			size_read = sizeof(struct hfi_extradata_input_crop);
+ 			break;
+ 		case HFI_PROPERTY_PARAM_VDEC_DPB_COUNTS:
+-			data_ptr += sizeof(u32);
++			if (rem_bytes < sizeof(struct hfi_dpb_counts))
++				goto error;
++
+ 			dpb_count = (struct hfi_dpb_counts *)data_ptr;
+ 			event.buf_count = dpb_count->fw_min_cnt;
+-			data_ptr += sizeof(*dpb_count);
++			size_read = sizeof(struct hfi_dpb_counts);
+ 			break;
+ 		default:
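++			/* unknown property: only its u32 id was consumed */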
++			size_read = 0;
+ 			break;
+ 		}
++		data_ptr += size_read;
++		rem_bytes -= size_read;
+ 		num_properties_changed--;
+ 	} while (num_properties_changed > 0);
+ 
+-done:
++	inst->ops->event_notify(inst, EVT_SYS_EVENT_CHANGE, &event);
++	return;
++
++error:
++	inst->error = HFI_ERR_SESSION_INSUFFICIENT_RESOURCES;
+ 	inst->ops->event_notify(inst, EVT_SYS_EVENT_CHANGE, &event);
+ }
+ 
+diff --git a/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c b/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c
+index fcadb2143c8819..62dca76b468d1b 100644
+--- a/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c
++++ b/drivers/media/platform/raspberrypi/rp1-cfe/cfe.c
+@@ -1024,9 +1024,6 @@ static int cfe_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
+ 	cfe_dbg(cfe, "%s: [%s] type:%u\n", __func__, node_desc[node->id].name,
+ 		node->buffer_queue.type);
+ 
+-	if (vq->max_num_buffers + *nbuffers < 3)
+-		*nbuffers = 3 - vq->max_num_buffers;
+-
+ 	if (*nplanes) {
+ 		if (sizes[0] < size) {
+ 			cfe_err(cfe, "sizes[0] %i < size %u\n", sizes[0], size);
+@@ -1998,6 +1995,7 @@ static int cfe_register_node(struct cfe_device *cfe, int id)
+ 	q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+ 	q->lock = &node->lock;
+ 	q->min_queued_buffers = 1;
++	q->min_reqbufs_allocation = 3;
+ 	q->dev = &cfe->pdev->dev;
+ 
+ 	ret = vb2_queue_init(q);
+diff --git a/drivers/media/usb/hdpvr/hdpvr-i2c.c b/drivers/media/usb/hdpvr/hdpvr-i2c.c
+index 070559b01b01b8..54956a8ff15e86 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-i2c.c
++++ b/drivers/media/usb/hdpvr/hdpvr-i2c.c
+@@ -165,10 +165,16 @@ static const struct i2c_algorithm hdpvr_algo = {
+ 	.functionality = hdpvr_functionality,
+ };
+ 
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks hdpvr_quirks = {
++	.flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ static const struct i2c_adapter hdpvr_i2c_adapter_template = {
+ 	.name   = "Hauppauge HD PVR I2C",
+ 	.owner  = THIS_MODULE,
+ 	.algo   = &hdpvr_algo,
++	.quirks = &hdpvr_quirks,
+ };
+ 
+ static int hdpvr_activate_ir(struct hdpvr_device *dev)
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 44b6513c526421..73923e66152e3b 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1483,14 +1483,28 @@ static u32 uvc_get_ctrl_bitmap(struct uvc_control *ctrl,
+ 	return ~0;
+ }
+ 
++/*
++ * Maximum retry count to avoid spurious errors with controls. Increasing this
++ * value does not seem to produce better results on the tested hardware.
++ */
++#define MAX_QUERY_RETRIES 2
++
+ static int __uvc_queryctrl_boundaries(struct uvc_video_chain *chain,
+ 				      struct uvc_control *ctrl,
+ 				      struct uvc_control_mapping *mapping,
+ 				      struct v4l2_query_ext_ctrl *v4l2_ctrl)
+ {
+ 	if (!ctrl->cached) {
+-		int ret = uvc_ctrl_populate_cache(chain, ctrl);
+-		if (ret < 0)
++		unsigned int retries;
++		int ret;
++
++		for (retries = 0; retries < MAX_QUERY_RETRIES; retries++) {
++			ret = uvc_ctrl_populate_cache(chain, ctrl);
++			if (ret != -EIO)
++				break;
++		}
++
++		if (ret)
+ 			return ret;
+ 	}
+ 
+@@ -1567,6 +1581,7 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ {
+ 	struct uvc_control_mapping *master_map = NULL;
+ 	struct uvc_control *master_ctrl = NULL;
++	int ret;
+ 
+ 	memset(v4l2_ctrl, 0, sizeof(*v4l2_ctrl));
+ 	v4l2_ctrl->id = mapping->id;
+@@ -1587,18 +1602,31 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ 		__uvc_find_control(ctrl->entity, mapping->master_id,
+ 				   &master_map, &master_ctrl, 0, 0);
+ 	if (master_ctrl && (master_ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR)) {
++		unsigned int retries;
+ 		s32 val;
+ 		int ret;
+ 
+ 		if (WARN_ON(uvc_ctrl_mapping_is_compound(master_map)))
+ 			return -EIO;
+ 
+-		ret = __uvc_ctrl_get(chain, master_ctrl, master_map, &val);
+-		if (ret < 0)
+-			return ret;
++		for (retries = 0; retries < MAX_QUERY_RETRIES; retries++) {
++			ret = __uvc_ctrl_get(chain, master_ctrl, master_map,
++					     &val);
++			if (!ret)
++				break;
++			if (ret < 0 && ret != -EIO)
++				return ret;
++		}
+ 
+-		if (val != mapping->master_manual)
+-			v4l2_ctrl->flags |= V4L2_CTRL_FLAG_INACTIVE;
++		if (ret == -EIO) {
++			dev_warn_ratelimited(&chain->dev->udev->dev,
++					     "UVC non compliance: Error %d querying master control %x (%s)\n",
++					     ret, master_map->id,
++					     uvc_map_get_name(master_map));
++		} else {
++			if (val != mapping->master_manual)
++				v4l2_ctrl->flags |= V4L2_CTRL_FLAG_INACTIVE;
++		}
+ 	}
+ 
+ 	v4l2_ctrl->elem_size = uvc_mapping_v4l2_size(mapping);
+@@ -1613,7 +1641,18 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+ 		return 0;
+ 	}
+ 
+-	return __uvc_queryctrl_boundaries(chain, ctrl, mapping, v4l2_ctrl);
++	ret = __uvc_queryctrl_boundaries(chain, ctrl, mapping, v4l2_ctrl);
++	if (ret && !mapping->disabled) {
++		dev_warn(&chain->dev->udev->dev,
++			 "UVC non compliance: permanently disabling control %x (%s), due to error %d\n",
++			 mapping->id, uvc_map_get_name(mapping), ret);
++		mapping->disabled = true;
++	}
++
++	if (mapping->disabled)
++		v4l2_ctrl->flags |= V4L2_CTRL_FLAG_DISABLED;
++
++	return 0;
+ }
+ 
+ int uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
+@@ -2033,18 +2072,24 @@ static int uvc_ctrl_add_event(struct v4l2_subscribed_event *sev, unsigned elems)
+ 		goto done;
+ 	}
+ 
+-	list_add_tail(&sev->node, &mapping->ev_subs);
+ 	if (sev->flags & V4L2_EVENT_SUB_FL_SEND_INITIAL) {
+ 		struct v4l2_event ev;
+ 		u32 changes = V4L2_EVENT_CTRL_CH_FLAGS;
+ 		s32 val = 0;
+ 
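++		/* reading the initial control value needs the device powered up */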
++		ret = uvc_pm_get(handle->chain->dev);
++		if (ret)
++			goto done;
++
+ 		if (uvc_ctrl_mapping_is_compound(mapping) ||
+ 		    __uvc_ctrl_get(handle->chain, ctrl, mapping, &val) == 0)
+ 			changes |= V4L2_EVENT_CTRL_CH_VALUE;
+ 
+ 		uvc_ctrl_fill_event(handle->chain, &ev, ctrl, mapping, val,
+ 				    changes);
++
++		uvc_pm_put(handle->chain->dev);
++
+ 		/*
+ 		 * Mark the queue as active, allowing this initial event to be
+ 		 * accepted.
+@@ -2053,6 +2098,8 @@ static int uvc_ctrl_add_event(struct v4l2_subscribed_event *sev, unsigned elems)
+ 		v4l2_event_queue_fh(sev->fh, &ev);
+ 	}
+ 
++	list_add_tail(&sev->node, &mapping->ev_subs);
++
+ done:
+ 	mutex_unlock(&handle->chain->ctrl_mutex);
+ 	return ret;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index da24a655ab68cc..5c7bf142fb2195 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -344,6 +344,9 @@ static int uvc_parse_format(struct uvc_device *dev,
+ 	u8 ftype;
+ 	int ret;
+ 
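++	/* need at least the descriptor subtype and format index fields */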
++	if (buflen < 4)
++		return -EINVAL;
++
+ 	format->type = buffer[2];
+ 	format->index = buffer[3];
+ 	format->frames = frames;
+@@ -2514,6 +2517,15 @@ static const struct uvc_device_info uvc_quirk_force_y8 = {
+  * Sort these by vendor/product ID.
+  */
+ static const struct usb_device_id uvc_ids[] = {
++	/* HP Webcam HD 2300 */
++	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
++				| USB_DEVICE_ID_MATCH_INT_INFO,
++	  .idVendor		= 0x03f0,
++	  .idProduct		= 0xe207,
++	  .bInterfaceClass	= USB_CLASS_VIDEO,
++	  .bInterfaceSubClass	= 1,
++	  .bInterfaceProtocol	= 0,
++	  .driver_info		= (kernel_ulong_t)&uvc_quirk_stream_no_fid },
+ 	/* Quanta ACER HD User Facing */
+ 	{ .match_flags		= USB_DEVICE_ID_MATCH_DEVICE
+ 				| USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index e3567aeb0007c1..2e377e7b9e8159 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -262,6 +262,15 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ 
+ 		ctrl->dwMaxPayloadTransferSize = bandwidth;
+ 	}
++
++	if (stream->intf->num_altsetting > 1 &&
++	    ctrl->dwMaxPayloadTransferSize > stream->maxpsize) {
++		dev_warn_ratelimited(&stream->intf->dev,
++				     "UVC non compliance: the max payload transmission size (%u) exceeds the size of the ep max packet (%u). Using the max size.\n",
++				     ctrl->dwMaxPayloadTransferSize,
++				     stream->maxpsize);
++		ctrl->dwMaxPayloadTransferSize = stream->maxpsize;
++	}
+ }
+ 
+ static size_t uvc_video_ctrl_size(struct uvc_streaming *stream)
+@@ -1433,12 +1442,6 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ 	if (!meta_buf || length == 2)
+ 		return;
+ 
+-	if (meta_buf->length - meta_buf->bytesused <
+-	    length + sizeof(meta->ns) + sizeof(meta->sof)) {
+-		meta_buf->error = 1;
+-		return;
+-	}
+-
+ 	has_pts = mem[1] & UVC_STREAM_PTS;
+ 	has_scr = mem[1] & UVC_STREAM_SCR;
+ 
+@@ -1459,6 +1462,12 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
+ 				  !memcmp(scr, stream->clock.last_scr, 6)))
+ 		return;
+ 
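++	/* check for buffer space only once the metadata is known to be stored */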
++	if (meta_buf->length - meta_buf->bytesused <
++	    length + sizeof(meta->ns) + sizeof(meta->sof)) {
++		meta_buf->error = 1;
++		return;
++	}
++
+ 	meta = (struct uvc_meta_buf *)((u8 *)meta_buf->mem + meta_buf->bytesused);
+ 	local_irq_save(flags);
+ 	time = uvc_video_get_time();
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index b9f8eb62ba1d82..11d6e3c2ebdfba 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -134,6 +134,8 @@ struct uvc_control_mapping {
+ 	s32 master_manual;
+ 	u32 slave_ids[2];
+ 
++	bool disabled;
++
+ 	const struct uvc_control_mapping *(*filter_mapping)
+ 				(struct uvc_video_chain *chain,
+ 				struct uvc_control *ctrl);
+diff --git a/drivers/media/v4l2-core/v4l2-common.c b/drivers/media/v4l2-core/v4l2-common.c
+index bd160a8c9efedb..01e1b3eff1c1a5 100644
+--- a/drivers/media/v4l2-core/v4l2-common.c
++++ b/drivers/media/v4l2-core/v4l2-common.c
+@@ -323,6 +323,12 @@ const struct v4l2_format_info *v4l2_format_info(u32 format)
+ 		{ .format = V4L2_PIX_FMT_NV61M,   .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 1 },
+ 		{ .format = V4L2_PIX_FMT_P012M,   .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 2, 4, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 2 },
+ 
++		/* Tiled YUV formats, non contiguous variant */
++		{ .format = V4L2_PIX_FMT_NV12MT,        .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 2,
++		  .block_w = { 64, 32, 0, 0 },	.block_h = { 32, 16, 0, 0 }},
++		{ .format = V4L2_PIX_FMT_NV12MT_16X16,  .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 2, .vdiv = 2,
++		  .block_w = { 16,  8, 0, 0 },	.block_h = { 16,  8, 0, 0 }},
++
+ 		/* Bayer RGB formats */
+ 		{ .format = V4L2_PIX_FMT_SBGGR8,	.pixel_enc = V4L2_PIXEL_ENC_BAYER, .mem_planes = 1, .comp_planes = 1, .bpp = { 1, 0, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 1, .vdiv = 1 },
+ 		{ .format = V4L2_PIX_FMT_SGBRG8,	.pixel_enc = V4L2_PIXEL_ENC_BAYER, .mem_planes = 1, .comp_planes = 1, .bpp = { 1, 0, 0, 0 }, .bpp_div = { 1, 1, 1, 1 }, .hdiv = 1, .vdiv = 1 },
+@@ -505,10 +511,10 @@ s64 __v4l2_get_link_freq_ctrl(struct v4l2_ctrl_handler *handler,
+ 
+ 		freq = div_u64(v4l2_ctrl_g_ctrl_int64(ctrl) * mul, div);
+ 
+-		pr_warn("%s: Link frequency estimated using pixel rate: result might be inaccurate\n",
+-			__func__);
+-		pr_warn("%s: Consider implementing support for V4L2_CID_LINK_FREQ in the transmitter driver\n",
+-			__func__);
++		pr_warn_once("%s: Link frequency estimated using pixel rate: result might be inaccurate\n",
++			     __func__);
++		pr_warn_once("%s: Consider implementing support for V4L2_CID_LINK_FREQ in the transmitter driver\n",
++			     __func__);
+ 	}
+ 
+ 	return freq > 0 ? freq : -EINVAL;
+diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
+index e9914e8a29a33c..25c639b348cd69 100644
+--- a/drivers/mfd/axp20x.c
++++ b/drivers/mfd/axp20x.c
+@@ -1053,7 +1053,8 @@ static const struct mfd_cell axp152_cells[] = {
+ };
+ 
+ static struct mfd_cell axp313a_cells[] = {
+-	MFD_CELL_NAME("axp20x-regulator"),
++	/* AXP323 is sometimes paired with AXP717 as sub-PMIC */
++	MFD_CELL_BASIC("axp20x-regulator", NULL, NULL, 0, 1),
+ 	MFD_CELL_RES("axp313a-pek", axp313a_pek_resources),
+ };
+ 
+diff --git a/drivers/mfd/cros_ec_dev.c b/drivers/mfd/cros_ec_dev.c
+index 9f84a52b48d6a8..dc80a272726bb1 100644
+--- a/drivers/mfd/cros_ec_dev.c
++++ b/drivers/mfd/cros_ec_dev.c
+@@ -87,7 +87,6 @@ static const struct mfd_cell cros_ec_sensorhub_cells[] = {
+ };
+ 
+ static const struct mfd_cell cros_usbpd_charger_cells[] = {
+-	{ .name = "cros-charge-control", },
+ 	{ .name = "cros-usbpd-charger", },
+ 	{ .name = "cros-usbpd-logger", },
+ };
+@@ -112,6 +111,10 @@ static const struct mfd_cell cros_ec_ucsi_cells[] = {
+ 	{ .name = "cros_ec_ucsi", },
+ };
+ 
++static const struct mfd_cell cros_ec_charge_control_cells[] = {
++	{ .name = "cros-charge-control", },
++};
++
+ static const struct cros_feature_to_cells cros_subdevices[] = {
+ 	{
+ 		.id		= EC_FEATURE_CEC,
+@@ -148,6 +151,11 @@ static const struct cros_feature_to_cells cros_subdevices[] = {
+ 		.mfd_cells	= cros_ec_keyboard_leds_cells,
+ 		.num_cells	= ARRAY_SIZE(cros_ec_keyboard_leds_cells),
+ 	},
++	{
++		.id		= EC_FEATURE_CHARGER,
++		.mfd_cells	= cros_ec_charge_control_cells,
++		.num_cells	= ARRAY_SIZE(cros_ec_charge_control_cells),
++	},
+ };
+ 
+ static const struct mfd_cell cros_ec_platform_cells[] = {
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index 148107a4547c39..d007a4455ce5ba 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -698,6 +698,12 @@ static void rtsx_usb_disconnect(struct usb_interface *intf)
+ }
+ 
+ #ifdef CONFIG_PM
++static int rtsx_usb_resume_child(struct device *dev, void *data)
++{
++	pm_request_resume(dev);
++	return 0;
++}
++
+ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ 	struct rtsx_ucr *ucr =
+@@ -713,8 +719,10 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ 			mutex_unlock(&ucr->dev_mutex);
+ 
+ 			/* Defer the autosuspend if card exists */
+-			if (val & (SD_CD | MS_CD))
++			if (val & (SD_CD | MS_CD)) {
++				device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child);
+ 				return -EAGAIN;
++			}
+ 		} else {
+ 			/* There is an ongoing operation*/
+ 			return -EAGAIN;
+@@ -724,12 +732,6 @@ static int rtsx_usb_suspend(struct usb_interface *intf, pm_message_t message)
+ 	return 0;
+ }
+ 
+-static int rtsx_usb_resume_child(struct device *dev, void *data)
+-{
+-	pm_request_resume(dev);
+-	return 0;
+-}
+-
+ static int rtsx_usb_resume(struct usb_interface *intf)
+ {
+ 	device_for_each_child(&intf->dev, NULL, rtsx_usb_resume_child);
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 67176caf54163a..1958c043ac142b 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -1301,10 +1301,16 @@ static void mei_dev_bus_put(struct mei_device *bus)
+ static void mei_cl_bus_dev_release(struct device *dev)
+ {
+ 	struct mei_cl_device *cldev = to_mei_cl_device(dev);
++	struct mei_device *mdev = cldev->cl->dev;
++	struct mei_cl *cl;
+ 
+ 	mei_cl_flush_queues(cldev->cl, NULL);
+ 	mei_me_cl_put(cldev->me_cl);
+ 	mei_dev_bus_put(cldev->bus);
++
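++	/* the client must already be unlinked from the device file list */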
++	list_for_each_entry(cl, &mdev->file_list, link)
++		WARN_ON(cl == cldev->cl);
++
+ 	kfree(cldev->cl);
+ 	kfree(cldev);
+ }
+diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
+index d229c2b83ea99e..8c35cb85a9c0e9 100644
+--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
+@@ -1029,9 +1029,7 @@ static int sd_set_power_mode(struct rtsx_usb_sdmmc *host,
+ 		err = sd_power_on(host);
+ 	}
+ 
+-	if (!err)
+-		host->power_mode = power_mode;
+-
++	host->power_mode = power_mode;
+ 	return err;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index ac187a8798b710..05dd2b563c02a7 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -2039,12 +2039,20 @@ static int sdhci_esdhc_suspend(struct device *dev)
+ 		ret = sdhci_enable_irq_wakeups(host);
+ 		if (!ret)
+ 			dev_warn(dev, "Failed to enable irq wakeup\n");
++	} else {
++		/*
++		 * For a device that works as a wakeup source, there is no
++		 * need to change the pinctrl to the sleep state.
++		 * e.g. for an SDIO device the interrupt shares the data pin,
++		 * but the pinctrl sleep state may configure the data pin for
++		 * another function, such as GPIO, to save power in PM,
++		 * which would ultimately block the SDIO wakeup function.
++		 */
++		ret = pinctrl_pm_select_sleep_state(dev);
++		if (ret)
++			return ret;
+ 	}
+ 
+-	ret = pinctrl_pm_select_sleep_state(dev);
+-	if (ret)
+-		return ret;
+-
+ 	ret = mmc_gpio_set_cd_wake(host->mmc, true);
+ 
+ 	/*
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 66c0d1ba2a33a9..bc6ca49652f8e3 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1564,6 +1564,7 @@ static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
++	struct mmc_host *mmc = host->mmc;
+ 	bool done = false;
+ 	u32 val = SWITCHABLE_SIGNALING_VOLTAGE;
+ 	const struct sdhci_msm_offset *msm_offset =
+@@ -1621,6 +1622,12 @@ static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+ 				 "%s: pwr_irq for req: (%d) timed out\n",
+ 				 mmc_hostname(host->mmc), req_type);
+ 	}
++
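++	/* if the card is gone, drop bus power instead of leaving it on */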
++	if ((req_type & REQ_BUS_ON) && mmc->card && !mmc->ops->get_cd(mmc)) {
++		sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
++		host->pwr = 0;
++	}
++
+ 	pr_debug("%s: %s: request %d done\n", mmc_hostname(host->mmc),
+ 			__func__, req_type);
+ }
+@@ -1679,6 +1686,13 @@ static void sdhci_msm_handle_pwr_irq(struct sdhci_host *host, int irq)
+ 		udelay(10);
+ 	}
+ 
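++	/* fail a BUS_ON request when the card has been removed */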
++	if ((irq_status & CORE_PWRCTL_BUS_ON) && mmc->card &&
++	    !mmc->ops->get_cd(mmc)) {
++		msm_host_writel(msm_host, CORE_PWRCTL_BUS_FAIL, host,
++				msm_offset->core_pwrctl_ctl);
++		return;
++	}
++
+ 	/* Handle BUS ON/OFF*/
+ 	if (irq_status & CORE_PWRCTL_BUS_ON) {
+ 		pwr_state = REQ_BUS_ON;
+diff --git a/drivers/net/can/ti_hecc.c b/drivers/net/can/ti_hecc.c
+index 644e8b8eb91e74..e6d6661a908ab1 100644
+--- a/drivers/net/can/ti_hecc.c
++++ b/drivers/net/can/ti_hecc.c
+@@ -383,7 +383,7 @@ static void ti_hecc_start(struct net_device *ndev)
+ 	 * overflows instead of the hardware silently dropping the
+ 	 * messages.
+ 	 */
+-	mbx_mask = ~BIT(HECC_RX_LAST_MBOX);
++	mbx_mask = ~BIT_U32(HECC_RX_LAST_MBOX);
+ 	hecc_write(priv, HECC_CANOPC, mbx_mask);
+ 
+ 	/* Enable interrupts */
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index dc2f4adac9bc96..d15d912690c40e 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -361,18 +361,23 @@ static void b53_set_forwarding(struct b53_device *dev, int enable)
+ 
+ 	b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_MODE, mgmt);
+ 
+-	/* Include IMP port in dumb forwarding mode
+-	 */
+-	b53_read8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, &mgmt);
+-	mgmt |= B53_MII_DUMB_FWDG_EN;
+-	b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
+-
+-	/* Look at B53_UC_FWD_EN and B53_MC_FWD_EN to decide whether
+-	 * frames should be flooded or not.
+-	 */
+-	b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
+-	mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN;
+-	b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++	if (!is5325(dev)) {
++		/* Include IMP port in dumb forwarding mode */
++		b53_read8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, &mgmt);
++		mgmt |= B53_MII_DUMB_FWDG_EN;
++		b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
++
++		/* Look at B53_UC_FWD_EN and B53_MC_FWD_EN to decide whether
++		 * frames should be flooded or not.
++		 */
++		b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
++		mgmt |= B53_UC_FWD_EN | B53_MC_FWD_EN | B53_IPMC_FWD_EN;
++		b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++	} else {
++		b53_read8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, &mgmt);
++		mgmt |= B53_IP_MCAST_25;
++		b53_write8(dev, B53_CTRL_PAGE, B53_IP_MULTICAST_CTRL, mgmt);
++	}
+ }
+ 
+ static void b53_enable_vlan(struct b53_device *dev, int port, bool enable,
+@@ -529,6 +534,10 @@ void b53_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
+ 	unsigned int i;
+ 	u16 pvlan;
+ 
++	/* BCM5325 CPU port is at 8 */
++	if ((is5325(dev) || is5365(dev)) && cpu_port == B53_CPU_PORT_25)
++		cpu_port = B53_CPU_PORT;
++
+ 	/* Enable the IMP port to be in the same VLAN as the other ports
+ 	 * on a per-port basis such that we only have Port i and IMP in
+ 	 * the same VLAN.
+@@ -579,6 +588,9 @@ static void b53_port_set_learning(struct b53_device *dev, int port,
+ {
+ 	u16 reg;
+ 
++	if (is5325(dev))
++		return;
++
+ 	b53_read16(dev, B53_CTRL_PAGE, B53_DIS_LEARNING, &reg);
+ 	if (learning)
+ 		reg &= ~BIT(port);
+@@ -615,6 +627,19 @@ int b53_setup_port(struct dsa_switch *ds, int port)
+ 	if (dsa_is_user_port(ds, port))
+ 		b53_set_eap_mode(dev, port, EAP_MODE_SIMPLIFIED);
+ 
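++	/* BCM5325: power down PHYs of unused ports; bit 0 is the whole switch */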
++	if (is5325(dev) &&
++	    in_range(port, 1, 4)) {
++		u8 reg;
++
++		b53_read8(dev, B53_CTRL_PAGE, B53_PD_MODE_CTRL_25, &reg);
++		reg &= ~PD_MODE_POWER_DOWN_PORT(0);
++		if (dsa_is_unused_port(ds, port))
++			reg |= PD_MODE_POWER_DOWN_PORT(port);
++		else
++			reg &= ~PD_MODE_POWER_DOWN_PORT(port);
++		b53_write8(dev, B53_CTRL_PAGE, B53_PD_MODE_CTRL_25, reg);
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(b53_setup_port);
+@@ -1257,6 +1282,8 @@ static void b53_force_link(struct b53_device *dev, int port, int link)
+ 	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
++	} else if (is5325(dev)) {
++		return;
+ 	} else {
+ 		off = B53_GMII_PORT_OVERRIDE_CTRL(port);
+ 		val = GMII_PO_EN;
+@@ -1281,6 +1308,8 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ 	if (port == dev->imp_port) {
+ 		off = B53_PORT_OVERRIDE_CTRL;
+ 		val = PORT_OVERRIDE_EN;
++	} else if (is5325(dev)) {
++		return;
+ 	} else {
+ 		off = B53_GMII_PORT_OVERRIDE_CTRL(port);
+ 		val = GMII_PO_EN;
+@@ -1311,10 +1340,19 @@ static void b53_force_port_config(struct b53_device *dev, int port,
+ 		return;
+ 	}
+ 
+-	if (rx_pause)
+-		reg |= PORT_OVERRIDE_RX_FLOW;
+-	if (tx_pause)
+-		reg |= PORT_OVERRIDE_TX_FLOW;
++	if (rx_pause) {
++		if (is5325(dev))
++			reg |= PORT_OVERRIDE_LP_FLOW_25;
++		else
++			reg |= PORT_OVERRIDE_RX_FLOW;
++	}
++
++	if (tx_pause) {
++		if (is5325(dev))
++			reg |= PORT_OVERRIDE_LP_FLOW_25;
++		else
++			reg |= PORT_OVERRIDE_TX_FLOW;
++	}
+ 
+ 	b53_write8(dev, B53_CTRL_PAGE, off, reg);
+ }
+@@ -2165,7 +2203,13 @@ int b53_br_flags_pre(struct dsa_switch *ds, int port,
+ 		     struct switchdev_brport_flags flags,
+ 		     struct netlink_ext_ack *extack)
+ {
+-	if (flags.mask & ~(BR_FLOOD | BR_MCAST_FLOOD | BR_LEARNING))
++	struct b53_device *dev = ds->priv;
++	unsigned long mask = (BR_FLOOD | BR_MCAST_FLOOD);
++
++	if (!is5325(dev))
++		mask |= BR_LEARNING;
++
++	if (flags.mask & ~mask)
+ 		return -EINVAL;
+ 
+ 	return 0;
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index 1fbc5a204bc721..f2caf8fe569984 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -95,17 +95,22 @@
+ #define   PORT_OVERRIDE_SPEED_10M	(0 << PORT_OVERRIDE_SPEED_S)
+ #define   PORT_OVERRIDE_SPEED_100M	(1 << PORT_OVERRIDE_SPEED_S)
+ #define   PORT_OVERRIDE_SPEED_1000M	(2 << PORT_OVERRIDE_SPEED_S)
++#define   PORT_OVERRIDE_LP_FLOW_25	BIT(3) /* BCM5325 only */
+ #define   PORT_OVERRIDE_RV_MII_25	BIT(4) /* BCM5325 only */
+ #define   PORT_OVERRIDE_RX_FLOW		BIT(4)
+ #define   PORT_OVERRIDE_TX_FLOW		BIT(5)
+ #define   PORT_OVERRIDE_SPEED_2000M	BIT(6) /* BCM5301X only, requires setting 1000M */
+ #define   PORT_OVERRIDE_EN		BIT(7) /* Use the register contents */
+ 
+-/* Power-down mode control */
++/* Power-down mode control (8 bit) */
+ #define B53_PD_MODE_CTRL_25		0x0f
++#define  PD_MODE_PORT_MASK		0x1f
++/* Bit 0 also powers down the switch. */
++#define  PD_MODE_POWER_DOWN_PORT(i)	BIT(i)
+ 
+ /* IP Multicast control (8 bit) */
+ #define B53_IP_MULTICAST_CTRL		0x21
++#define  B53_IP_MCAST_25		BIT(0)
+ #define  B53_IPMC_FWD_EN		BIT(1)
+ #define  B53_UC_FWD_EN			BIT(6)
+ #define  B53_MC_FWD_EN			BIT(7)
+diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c
+index 678eddb3617294..5c8217638ddafe 100644
+--- a/drivers/net/ethernet/agere/et131x.c
++++ b/drivers/net/ethernet/agere/et131x.c
+@@ -2459,6 +2459,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 							  skb->data,
+ 							  skb_headlen(skb),
+ 							  DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      dma_addr))
++					return -ENOMEM;
++
+ 				desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 				desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 				frag++;
+@@ -2468,6 +2472,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 							  skb->data,
+ 							  skb_headlen(skb) / 2,
+ 							  DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      dma_addr))
++					return -ENOMEM;
++
+ 				desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 				desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 				frag++;
+@@ -2478,6 +2486,10 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 							  skb_headlen(skb) / 2,
+ 							  skb_headlen(skb) / 2,
+ 							  DMA_TO_DEVICE);
++				if (dma_mapping_error(&adapter->pdev->dev,
++						      dma_addr))
++					goto unmap_first_out;
++
+ 				desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 				desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 				frag++;
+@@ -2489,6 +2501,9 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 						    0,
+ 						    desc[frag].len_vlan,
+ 						    DMA_TO_DEVICE);
++			if (dma_mapping_error(&adapter->pdev->dev, dma_addr))
++				goto unmap_out;
++
+ 			desc[frag].addr_lo = lower_32_bits(dma_addr);
+ 			desc[frag].addr_hi = upper_32_bits(dma_addr);
+ 			frag++;
+@@ -2578,6 +2593,27 @@ static int nic_send_packet(struct et131x_adapter *adapter, struct tcb *tcb)
+ 		       &adapter->regs->global.watchdog_timer);
+ 	}
+ 	return 0;
++
++unmap_out:
++	/* Unmap the body of the packet with map_page */
++	while (--i) {
++		frag--;
++		dma_addr = desc[frag].addr_lo;
++		dma_addr |= (u64)desc[frag].addr_hi << 32;
++		dma_unmap_page(&adapter->pdev->dev, dma_addr,
++			       desc[frag].len_vlan, DMA_TO_DEVICE);
++	}
++
++unmap_first_out:
++	/* Unmap the header with map_single */
++	while (frag--) {
++		dma_addr = desc[frag].addr_lo;
++		dma_addr |= (u64)desc[frag].addr_hi << 32;
++		dma_unmap_single(&adapter->pdev->dev, dma_addr,
++				 desc[frag].len_vlan, DMA_TO_DEVICE);
++	}
++
++	return -ENOMEM;
+ }
+ 
+ static int send_packet(struct sk_buff *skb, struct et131x_adapter *adapter)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+index 42c0efc1b45581..4e66fd9b2ab1dc 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h
+@@ -113,6 +113,8 @@ struct aq_stats_s {
+ #define AQ_HW_POWER_STATE_D0   0U
+ #define AQ_HW_POWER_STATE_D3   3U
+ 
++#define	AQ_FW_WAKE_ON_LINK_RTPM BIT(10)
++
+ #define AQ_HW_FLAG_STARTED     0x00000004U
+ #define AQ_HW_FLAG_STOPPING    0x00000008U
+ #define AQ_HW_FLAG_RESETTING   0x00000010U
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+index 52e2070a4a2f0c..7370e3f76b6208 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+@@ -462,6 +462,44 @@ static int aq_a2_fw_get_mac_temp(struct aq_hw_s *self, int *temp)
+ 	return aq_a2_fw_get_phy_temp(self, temp);
+ }
+ 
++static int aq_a2_fw_set_wol_params(struct aq_hw_s *self, const u8 *mac, u32 wol)
++{
++	struct mac_address_aligned_s mac_address;
++	struct link_control_s link_control;
++	struct wake_on_lan_s wake_on_lan;
++
++	memcpy(mac_address.aligned.mac_address, mac, ETH_ALEN);
++	hw_atl2_shared_buffer_write(self, mac_address, mac_address);
++
++	memset(&wake_on_lan, 0, sizeof(wake_on_lan));
++
++	if (wol & WAKE_MAGIC)
++		wake_on_lan.wake_on_magic_packet = 1U;
++
++	if (wol & (WAKE_PHY | AQ_FW_WAKE_ON_LINK_RTPM))
++		wake_on_lan.wake_on_link_up = 1U;
++
++	hw_atl2_shared_buffer_write(self, sleep_proxy, wake_on_lan);
++
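++	/* switch the firmware into sleep-proxy mode so it can wake the host */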
++	hw_atl2_shared_buffer_get(self, link_control, link_control);
++	link_control.mode = AQ_HOST_MODE_SLEEP_PROXY;
++	hw_atl2_shared_buffer_write(self, link_control, link_control);
++
++	return hw_atl2_shared_buffer_finish_ack(self);
++}
++
++static int aq_a2_fw_set_power(struct aq_hw_s *self, unsigned int power_state,
++			      const u8 *mac)
++{
++	u32 wol = self->aq_nic_cfg->wol;
++	int err = 0;
++
++	if (wol)
++		err = aq_a2_fw_set_wol_params(self, mac, wol);
++
++	return err;
++}
++
+ static int aq_a2_fw_set_eee_rate(struct aq_hw_s *self, u32 speed)
+ {
+ 	struct link_options_s link_options;
+@@ -605,6 +643,7 @@ const struct aq_fw_ops aq_a2_fw_ops = {
+ 	.set_state          = aq_a2_fw_set_state,
+ 	.update_link_status = aq_a2_fw_update_link_status,
+ 	.update_stats       = aq_a2_fw_update_stats,
++	.set_power          = aq_a2_fw_set_power,
+ 	.get_mac_temp       = aq_a2_fw_get_mac_temp,
+ 	.get_phy_temp       = aq_a2_fw_get_phy_temp,
+ 	.set_eee_rate       = aq_a2_fw_set_eee_rate,
+diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
+index d8e6f23e143201..cbc730c7cff298 100644
+--- a/drivers/net/ethernet/atheros/ag71xx.c
++++ b/drivers/net/ethernet/atheros/ag71xx.c
+@@ -1213,6 +1213,11 @@ static bool ag71xx_fill_rx_buf(struct ag71xx *ag, struct ag71xx_buf *buf,
+ 	buf->rx.rx_buf = data;
+ 	buf->rx.dma_addr = dma_map_single(&ag->pdev->dev, data, ag->rx_buf_size,
+ 					  DMA_FROM_DEVICE);
++	if (dma_mapping_error(&ag->pdev->dev, buf->rx.dma_addr)) {
++		skb_free_frag(data);
++		buf->rx.rx_buf = NULL;
++		return false;
++	}
+ 	desc->data = (u32)buf->rx.dma_addr + offset;
+ 	return true;
+ }
+@@ -1511,6 +1516,10 @@ static netdev_tx_t ag71xx_hard_start_xmit(struct sk_buff *skb,
+ 
+ 	dma_addr = dma_map_single(&ag->pdev->dev, skb->data, skb->len,
+ 				  DMA_TO_DEVICE);
++	if (dma_mapping_error(&ag->pdev->dev, dma_addr)) {
++		netif_dbg(ag, tx_err, ndev, "DMA mapping error\n");
++		goto err_drop;
++	}
+ 
+ 	i = ring->curr & ring_mask;
+ 	desc = ag71xx_ring_desc(ring, i);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 243cb13cb01c7b..25681c2343fb46 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -921,15 +921,21 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
+ 
+ static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
+ 					 struct bnxt_rx_ring_info *rxr,
++					 unsigned int *offset,
+ 					 gfp_t gfp)
+ {
+ 	netmem_ref netmem;
+ 
+-	netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
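++	/* on large-page systems, carve BNXT_RX_PAGE_SIZE fragments out of each page */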
++	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
++		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset, BNXT_RX_PAGE_SIZE, gfp);
++	} else {
++		netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
++		*offset = 0;
++	}
+ 	if (!netmem)
+ 		return 0;
+ 
+-	*mapping = page_pool_get_dma_addr_netmem(netmem);
++	*mapping = page_pool_get_dma_addr_netmem(netmem) + *offset;
+ 	return netmem;
+ }
+ 
+@@ -1024,7 +1030,7 @@ static int bnxt_alloc_rx_netmem(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ 	dma_addr_t mapping;
+ 	netmem_ref netmem;
+ 
+-	netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, gfp);
++	netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, &offset, gfp);
+ 	if (!netmem)
+ 		return -ENOMEM;
+ 
+@@ -3803,14 +3809,15 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ 				   struct bnxt_rx_ring_info *rxr,
+ 				   int numa_node)
+ {
++	const unsigned int agg_size_fac = PAGE_SIZE / BNXT_RX_PAGE_SIZE;
++	const unsigned int rx_size_fac = PAGE_SIZE / SZ_4K;
+ 	struct page_pool_params pp = { 0 };
+ 	struct page_pool *pool;
+ 
+-	pp.pool_size = bp->rx_agg_ring_size;
++	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
+ 	if (BNXT_RX_PAGE_MODE(bp))
+-		pp.pool_size += bp->rx_ring_size;
++		pp.pool_size += bp->rx_ring_size / rx_size_fac;
+ 	pp.nid = numa_node;
+-	pp.napi = &rxr->bnapi->napi;
+ 	pp.netdev = bp->dev;
+ 	pp.dev = &bp->pdev->dev;
+ 	pp.dma_dir = bp->rx_dir;
+@@ -3826,7 +3833,7 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ 
+ 	rxr->need_head_pool = page_pool_is_unreadable(pool);
+ 	if (bnxt_separate_head_pool(rxr)) {
+-		pp.pool_size = max(bp->rx_ring_size, 1024);
++		pp.pool_size = min(bp->rx_ring_size / rx_size_fac, 1024);
+ 		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+ 		pool = page_pool_create(&pp);
+ 		if (IS_ERR(pool))
+@@ -3842,6 +3849,12 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ 	return PTR_ERR(pool);
+ }
+ 
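++/* bind both page pools to the ring's NAPI for direct (lockless) recycling */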
++static void bnxt_enable_rx_page_pool(struct bnxt_rx_ring_info *rxr)
++{
++	page_pool_enable_direct_recycling(rxr->head_pool, &rxr->bnapi->napi);
++	page_pool_enable_direct_recycling(rxr->page_pool, &rxr->bnapi->napi);
++}
++
+ static int bnxt_alloc_rx_agg_bmap(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+ {
+ 	u16 mem_size;
+@@ -3880,6 +3893,7 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
+ 		rc = bnxt_alloc_rx_page_pool(bp, rxr, cpu_node);
+ 		if (rc)
+ 			return rc;
++		bnxt_enable_rx_page_pool(rxr);
+ 
+ 		rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i, 0);
+ 		if (rc < 0)
+@@ -16042,6 +16056,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ 			goto err_reset;
+ 	}
+ 
++	bnxt_enable_rx_page_pool(rxr);
+ 	napi_enable_locked(&bnapi->napi);
+ 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);
+ 
+diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+index 3b7ad744b2dd63..21495b5dce254d 100644
+--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+@@ -1429,9 +1429,9 @@ static acpi_status bgx_acpi_match_id(acpi_handle handle, u32 lvl,
+ {
+ 	struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	struct bgx *bgx = context;
+-	char bgx_sel[5];
++	char bgx_sel[7];
+ 
+-	snprintf(bgx_sel, 5, "BGX%d", bgx->bgx_id);
++	snprintf(bgx_sel, sizeof(bgx_sel), "BGX%d", bgx->bgx_id);
+ 	if (ACPI_FAILURE(acpi_get_name(handle, ACPI_SINGLE_NAME, &string))) {
+ 		pr_warn("Invalid link device\n");
+ 		return AE_OK;
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 3d2e2159211917..490af665942947 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -1465,10 +1465,10 @@ static void be_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ 						 ntohs(tcphdr->source));
+ 					dev_info(dev, "TCP dest port %d\n",
+ 						 ntohs(tcphdr->dest));
+-					dev_info(dev, "TCP sequence num %d\n",
+-						 ntohs(tcphdr->seq));
+-					dev_info(dev, "TCP ack_seq %d\n",
+-						 ntohs(tcphdr->ack_seq));
++					dev_info(dev, "TCP sequence num %u\n",
++						 ntohl(tcphdr->seq));
++					dev_info(dev, "TCP ack_seq %u\n",
++						 ntohl(tcphdr->ack_seq));
+ 				} else if (ip_hdr(skb)->protocol ==
+ 					   IPPROTO_UDP) {
+ 					udphdr = udp_hdr(skb);
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index a98d5af3f9e3c6..6421d8321bfeac 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1730,16 +1730,17 @@ static int ftgmac100_setup_mdio(struct net_device *netdev)
+ static void ftgmac100_phy_disconnect(struct net_device *netdev)
+ {
+ 	struct ftgmac100 *priv = netdev_priv(netdev);
++	struct phy_device *phydev = netdev->phydev;
+ 
+-	if (!netdev->phydev)
++	if (!phydev)
+ 		return;
+ 
+-	phy_disconnect(netdev->phydev);
++	phy_disconnect(phydev);
+ 	if (of_phy_is_fixed_link(priv->dev->of_node))
+ 		of_phy_deregister_fixed_link(priv->dev->of_node);
+ 
+ 	if (priv->use_ncsi)
+-		fixed_phy_unregister(netdev->phydev);
++		fixed_phy_unregister(phydev);
+ }
+ 
+ static void ftgmac100_destroy_mdio(struct net_device *netdev)
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index 23c23cca262091..3edc8d142dd559 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -28,7 +28,6 @@
+ #include <linux/percpu.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/sort.h>
+-#include <linux/phy_fixed.h>
+ #include <linux/bpf.h>
+ #include <linux/bpf_trace.h>
+ #include <soc/fsl/bman.h>
+@@ -3150,7 +3149,6 @@ static const struct net_device_ops dpaa_ops = {
+ 	.ndo_stop = dpaa_eth_stop,
+ 	.ndo_tx_timeout = dpaa_tx_timeout,
+ 	.ndo_get_stats64 = dpaa_get_stats64,
+-	.ndo_change_carrier = fixed_phy_change_carrier,
+ 	.ndo_set_mac_address = dpaa_set_mac_address,
+ 	.ndo_validate_addr = eth_validate_addr,
+ 	.ndo_set_rx_mode = dpaa_set_rx_mode,
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index 9986f6e1f58774..7fc01baef280c0 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -401,8 +401,10 @@ static int dpaa_get_ts_info(struct net_device *net_dev,
+ 		of_node_put(ptp_node);
+ 	}
+ 
+-	if (ptp_dev)
++	if (ptp_dev) {
+ 		ptp = platform_get_drvdata(ptp_dev);
++		put_device(&ptp_dev->dev);
++	}
+ 
+ 	if (ptp)
+ 		info->phc_index = ptp->phc_index;
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index d38cd36be4a658..c0773dfbfb1890 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -142,7 +142,7 @@ static const struct {
+ static const struct {
+ 	int reg;
+ 	char name[ETH_GSTRING_LEN] __nonstring;
+-} enetc_port_counters[] = {
++} enetc_pm_counters[] = {
+ 	{ ENETC_PM_REOCT(0),	"MAC rx ethernet octets" },
+ 	{ ENETC_PM_RALN(0),	"MAC rx alignment errors" },
+ 	{ ENETC_PM_RXPF(0),	"MAC rx valid pause frames" },
+@@ -194,6 +194,12 @@ static const struct {
+ 	{ ENETC_PM_TSCOL(0),	"MAC tx single collisions" },
+ 	{ ENETC_PM_TLCOL(0),	"MAC tx late collisions" },
+ 	{ ENETC_PM_TECOL(0),	"MAC tx excessive collisions" },
++};
++
++static const struct {
++	int reg;
++	char name[ETH_GSTRING_LEN] __nonstring;
++} enetc_port_counters[] = {
+ 	{ ENETC_UFDMF,		"SI MAC nomatch u-cast discards" },
+ 	{ ENETC_MFDMF,		"SI MAC nomatch m-cast discards" },
+ 	{ ENETC_PBFDSIR,	"SI MAC nomatch b-cast discards" },
+@@ -240,6 +246,7 @@ static int enetc_get_sset_count(struct net_device *ndev, int sset)
+ 		return len;
+ 
+ 	len += ARRAY_SIZE(enetc_port_counters);
++	len += ARRAY_SIZE(enetc_pm_counters);
+ 
+ 	return len;
+ }
+@@ -266,6 +273,9 @@ static void enetc_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
+ 		for (i = 0; i < ARRAY_SIZE(enetc_port_counters); i++)
+ 			ethtool_cpy(&data, enetc_port_counters[i].name);
+ 
++		for (i = 0; i < ARRAY_SIZE(enetc_pm_counters); i++)
++			ethtool_cpy(&data, enetc_pm_counters[i].name);
++
+ 		break;
+ 	}
+ }
+@@ -302,6 +312,9 @@ static void enetc_get_ethtool_stats(struct net_device *ndev,
+ 
+ 	for (i = 0; i < ARRAY_SIZE(enetc_port_counters); i++)
+ 		data[o++] = enetc_port_rd(hw, enetc_port_counters[i].reg);
++
++	for (i = 0; i < ARRAY_SIZE(enetc_pm_counters); i++)
++		data[o++] = enetc_port_rd64(hw, enetc_pm_counters[i].reg);
+ }
+ 
+ static void enetc_pause_stats(struct enetc_hw *hw, int mac,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 53e8d18c7a34a5..c84872ab6c8f3b 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -533,6 +533,7 @@ static inline u64 _enetc_rd_reg64_wa(void __iomem *reg)
+ /* port register accessors - PF only */
+ #define enetc_port_rd(hw, off)		enetc_rd_reg((hw)->port + (off))
+ #define enetc_port_wr(hw, off, val)	enetc_wr_reg((hw)->port + (off), val)
++#define enetc_port_rd64(hw, off)	_enetc_rd_reg64_wa((hw)->port + (off))
+ #define enetc_port_rd_mdio(hw, off)	_enetc_rd_mdio_reg_wa((hw)->port + (off))
+ #define enetc_port_wr_mdio(hw, off, val)	_enetc_wr_mdio_reg_wa(\
+ 							(hw)->port + (off), val)
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+index f63a29e2e0311d..de0fb272c8474c 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+@@ -829,19 +829,29 @@ static int enetc_pf_register_with_ierb(struct pci_dev *pdev)
+ {
+ 	struct platform_device *ierb_pdev;
+ 	struct device_node *ierb_node;
++	int ret;
+ 
+ 	ierb_node = of_find_compatible_node(NULL, NULL,
+ 					    "fsl,ls1028a-enetc-ierb");
+-	if (!ierb_node || !of_device_is_available(ierb_node))
++	if (!ierb_node)
+ 		return -ENODEV;
+ 
++	if (!of_device_is_available(ierb_node)) {
++		of_node_put(ierb_node);
++		return -ENODEV;
++	}
++
+ 	ierb_pdev = of_find_device_by_node(ierb_node);
+ 	of_node_put(ierb_node);
+ 
+ 	if (!ierb_pdev)
+ 		return -EPROBE_DEFER;
+ 
+-	return enetc_ierb_register_pf(ierb_pdev, pdev);
++	ret = enetc_ierb_register_pf(ierb_pdev, pdev);
++
++	put_device(&ierb_pdev->dev);
++
++	return ret;
+ }
+ 
+ static const struct enetc_si_ops enetc_psi_ops = {
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 17e9bddb9ddd58..651b73163b6ee9 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3124,27 +3124,25 @@ static int fec_enet_us_to_itr_clock(struct net_device *ndev, int us)
+ static void fec_enet_itr_coal_set(struct net_device *ndev)
+ {
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+-	int rx_itr, tx_itr;
++	u32 rx_itr = 0, tx_itr = 0;
++	int rx_ictt, tx_ictt;
+ 
+-	/* Must be greater than zero to avoid unpredictable behavior */
+-	if (!fep->rx_time_itr || !fep->rx_pkts_itr ||
+-	    !fep->tx_time_itr || !fep->tx_pkts_itr)
+-		return;
+-
+-	/* Select enet system clock as Interrupt Coalescing
+-	 * timer Clock Source
+-	 */
+-	rx_itr = FEC_ITR_CLK_SEL;
+-	tx_itr = FEC_ITR_CLK_SEL;
++	rx_ictt = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
++	tx_ictt = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
+ 
+-	/* set ICFT and ICTT */
+-	rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
+-	rx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr));
+-	tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
+-	tx_itr |= FEC_ITR_ICTT(fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr));
++	if (rx_ictt > 0 && fep->rx_pkts_itr > 1) {
++		/* Enable with enet system clock as Interrupt Coalescing timer Clock Source */
++		rx_itr = FEC_ITR_EN | FEC_ITR_CLK_SEL;
++		rx_itr |= FEC_ITR_ICFT(fep->rx_pkts_itr);
++		rx_itr |= FEC_ITR_ICTT(rx_ictt);
++	}
+ 
+-	rx_itr |= FEC_ITR_EN;
+-	tx_itr |= FEC_ITR_EN;
++	if (tx_ictt > 0 && fep->tx_pkts_itr > 1) {
++		/* Enable with enet system clock as Interrupt Coalescing timer Clock Source */
++		tx_itr = FEC_ITR_EN | FEC_ITR_CLK_SEL;
++		tx_itr |= FEC_ITR_ICFT(fep->tx_pkts_itr);
++		tx_itr |= FEC_ITR_ICTT(tx_ictt);
++	}
+ 
+ 	writel(tx_itr, fep->hwp + FEC_TXIC0);
+ 	writel(rx_itr, fep->hwp + FEC_RXIC0);
+diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+index 781d92e703cb3d..c9992ed4e30135 100644
+--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
++++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
+@@ -1466,8 +1466,10 @@ static int gfar_get_ts_info(struct net_device *dev,
+ 	if (ptp_node) {
+ 		ptp_dev = of_find_device_by_node(ptp_node);
+ 		of_node_put(ptp_node);
+-		if (ptp_dev)
++		if (ptp_dev) {
+ 			ptp = platform_get_drvdata(ptp_dev);
++			put_device(&ptp_dev->dev);
++		}
+ 	}
+ 
+ 	if (ptp)
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 3e8fc33cc11fdb..7f2334006f9f3b 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -564,6 +564,7 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 		break;
+ 	default:
+ 		dev_err(&priv->pdev->dev, "unknown AQ command opcode %d\n", opcode);
++		return -EINVAL;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
+index ff3295b60a69a8..dee1e86811576d 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_err.c
+@@ -53,9 +53,11 @@ static int hbg_reset_prepare(struct hbg_priv *priv, enum hbg_reset_type type)
+ {
+ 	int ret;
+ 
+-	ASSERT_RTNL();
++	if (test_and_set_bit(HBG_NIC_STATE_RESETTING, &priv->state))
++		return -EBUSY;
+ 
+ 	if (netif_running(priv->netdev)) {
++		clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 		dev_warn(&priv->pdev->dev,
+ 			 "failed to reset because port is up\n");
+ 		return -EBUSY;
+@@ -64,7 +66,6 @@ static int hbg_reset_prepare(struct hbg_priv *priv, enum hbg_reset_type type)
+ 	netif_device_detach(priv->netdev);
+ 
+ 	priv->reset_type = type;
+-	set_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 	clear_bit(HBG_NIC_STATE_RESET_FAIL, &priv->state);
+ 	ret = hbg_hw_event_notify(priv, HBG_HW_EVENT_RESET);
+ 	if (ret) {
+@@ -83,28 +84,25 @@ static int hbg_reset_done(struct hbg_priv *priv, enum hbg_reset_type type)
+ 	    type != priv->reset_type)
+ 		return 0;
+ 
+-	ASSERT_RTNL();
+-
+-	clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 	ret = hbg_rebuild(priv);
+ 	if (ret) {
+ 		set_bit(HBG_NIC_STATE_RESET_FAIL, &priv->state);
++		clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 		dev_err(&priv->pdev->dev, "failed to rebuild after reset\n");
+ 		return ret;
+ 	}
+ 
+ 	netif_device_attach(priv->netdev);
++	clear_bit(HBG_NIC_STATE_RESETTING, &priv->state);
+ 
+ 	dev_info(&priv->pdev->dev, "reset done\n");
+ 	return ret;
+ }
+ 
+-/* must be protected by rtnl lock */
+ int hbg_reset(struct hbg_priv *priv)
+ {
+ 	int ret;
+ 
+-	ASSERT_RTNL();
+ 	ret = hbg_reset_prepare(priv, HBG_RESET_TYPE_FUNCTION);
+ 	if (ret)
+ 		return ret;
+@@ -169,7 +167,6 @@ static void hbg_pci_err_reset_prepare(struct pci_dev *pdev)
+ 	struct net_device *netdev = pci_get_drvdata(pdev);
+ 	struct hbg_priv *priv = netdev_priv(netdev);
+ 
+-	rtnl_lock();
+ 	hbg_reset_prepare(priv, HBG_RESET_TYPE_FLR);
+ }
+ 
+@@ -179,7 +176,6 @@ static void hbg_pci_err_reset_done(struct pci_dev *pdev)
+ 	struct hbg_priv *priv = netdev_priv(netdev);
+ 
+ 	hbg_reset_done(priv, HBG_RESET_TYPE_FLR);
+-	rtnl_unlock();
+ }
+ 
+ static const struct pci_error_handlers hbg_pci_err_handler = {
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c b/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
+index 9b65eef62b3fba..2844124f306dae 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_hw.c
+@@ -12,6 +12,8 @@
+ 
+ #define HBG_HW_EVENT_WAIT_TIMEOUT_US	(2 * 1000 * 1000)
+ #define HBG_HW_EVENT_WAIT_INTERVAL_US	(10 * 1000)
++#define HBG_MAC_LINK_WAIT_TIMEOUT_US	(500 * 1000)
++#define HBG_MAC_LINK_WAIT_INTERVAL_US	(5 * 1000)
+ /* little endian or big endian.
+  * ctrl means packet description, data means skb packet data
+  */
+@@ -213,6 +215,9 @@ void hbg_hw_fill_buffer(struct hbg_priv *priv, u32 buffer_dma_addr)
+ 
+ void hbg_hw_adjust_link(struct hbg_priv *priv, u32 speed, u32 duplex)
+ {
++	u32 link_status;
++	int ret;
++
+ 	hbg_hw_mac_enable(priv, HBG_STATUS_DISABLE);
+ 
+ 	hbg_reg_write_field(priv, HBG_REG_PORT_MODE_ADDR,
+@@ -224,8 +229,14 @@ void hbg_hw_adjust_link(struct hbg_priv *priv, u32 speed, u32 duplex)
+ 
+ 	hbg_hw_mac_enable(priv, HBG_STATUS_ENABLE);
+ 
+-	if (!hbg_reg_read_field(priv, HBG_REG_AN_NEG_STATE_ADDR,
+-				HBG_REG_AN_NEG_STATE_NP_LINK_OK_B))
++	/* wait for the MAC link to come up */
++	ret = readl_poll_timeout(priv->io_base + HBG_REG_AN_NEG_STATE_ADDR,
++				 link_status,
++				 FIELD_GET(HBG_REG_AN_NEG_STATE_NP_LINK_OK_B,
++					   link_status),
++				 HBG_MAC_LINK_WAIT_INTERVAL_US,
++				 HBG_MAC_LINK_WAIT_TIMEOUT_US);
++	if (ret)
+ 		hbg_np_link_fail_task_schedule(priv);
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h b/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h
+index 2883a5899ae29e..8b6110599e10db 100644
+--- a/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h
++++ b/drivers/net/ethernet/hisilicon/hibmcge/hbg_txrx.h
+@@ -29,7 +29,12 @@ static inline bool hbg_fifo_is_full(struct hbg_priv *priv, enum hbg_dir dir)
+ 
+ static inline u32 hbg_get_queue_used_num(struct hbg_ring *ring)
+ {
+-	return (ring->ntu + ring->len - ring->ntc) % ring->len;
++	u32 len = READ_ONCE(ring->len);
++
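++	/* the ring may be torn down concurrently; guard the modulo below */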
++	if (!len)
++		return 0;
++
++	return (READ_ONCE(ring->ntu) + len - READ_ONCE(ring->ntc)) % len;
+ }
+ 
+ netdev_tx_t hbg_net_start_xmit(struct sk_buff *skb, struct net_device *netdev);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
+index 1e812c3f62f9b8..0bb9cab2b3c3af 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -378,10 +378,28 @@ struct idpf_rss_data {
+ 	u32 *cached_lut;
+ };
+ 
++/**
++ * struct idpf_q_coalesce - User defined coalescing configuration values for
++ *			   a single queue.
++ * @tx_intr_mode: Dynamic TX ITR or not
++ * @rx_intr_mode: Dynamic RX ITR or not
++ * @tx_coalesce_usecs: TX interrupt throttling rate
++ * @rx_coalesce_usecs: RX interrupt throttling rate
++ *
++ * Used to restore user coalescing configuration after a reset.
++ */
++struct idpf_q_coalesce {
++	u32 tx_intr_mode;
++	u32 rx_intr_mode;
++	u32 tx_coalesce_usecs;
++	u32 rx_coalesce_usecs;
++};
++
+ /**
+  * struct idpf_vport_user_config_data - User defined configuration values for
+  *					each vport.
+  * @rss_data: See struct idpf_rss_data
++ * @q_coalesce: Array of per queue coalescing data
+  * @num_req_tx_qs: Number of user requested TX queues through ethtool
+  * @num_req_rx_qs: Number of user requested RX queues through ethtool
+  * @num_req_txq_desc: Number of user requested TX queue descriptors through
+@@ -395,6 +413,7 @@ struct idpf_rss_data {
+  */
+ struct idpf_vport_user_config_data {
+ 	struct idpf_rss_data rss_data;
++	struct idpf_q_coalesce *q_coalesce;
+ 	u16 num_req_tx_qs;
+ 	u16 num_req_rx_qs;
+ 	u32 num_req_txq_desc;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+index eaf7a2606faaa0..f7599cd2aabf46 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+@@ -1090,12 +1090,14 @@ static int idpf_get_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ /**
+  * __idpf_set_q_coalesce - set ITR values for specific queue
+  * @ec: ethtool structure from user to update ITR settings
++ * @q_coal: per queue coalesce settings
+  * @qv: queue vector for which itr values has to be set
+  * @is_rxq: is queue type rx
+  *
+  * Returns 0 on success, negative otherwise.
+  */
+ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
++				 struct idpf_q_coalesce *q_coal,
+ 				 struct idpf_q_vector *qv, bool is_rxq)
+ {
+ 	u32 use_adaptive_coalesce, coalesce_usecs;
+@@ -1139,20 +1141,25 @@ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
+ 
+ 	if (is_rxq) {
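++	/* mirror the new settings into q_coal so they survive a reset */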
+ 		qv->rx_itr_value = coalesce_usecs;
++		q_coal->rx_coalesce_usecs = coalesce_usecs;
+ 		if (use_adaptive_coalesce) {
+ 			qv->rx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal->rx_intr_mode = IDPF_ITR_DYNAMIC;
+ 		} else {
+ 			qv->rx_intr_mode = !IDPF_ITR_DYNAMIC;
+-			idpf_vport_intr_write_itr(qv, qv->rx_itr_value,
+-						  false);
++			q_coal->rx_intr_mode = !IDPF_ITR_DYNAMIC;
++			idpf_vport_intr_write_itr(qv, coalesce_usecs, false);
+ 		}
+ 	} else {
+ 		qv->tx_itr_value = coalesce_usecs;
++		q_coal->tx_coalesce_usecs = coalesce_usecs;
+ 		if (use_adaptive_coalesce) {
+ 			qv->tx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal->tx_intr_mode = IDPF_ITR_DYNAMIC;
+ 		} else {
+ 			qv->tx_intr_mode = !IDPF_ITR_DYNAMIC;
+-			idpf_vport_intr_write_itr(qv, qv->tx_itr_value, true);
++			q_coal->tx_intr_mode = !IDPF_ITR_DYNAMIC;
++			idpf_vport_intr_write_itr(qv, coalesce_usecs, true);
+ 		}
+ 	}
+ 
+@@ -1165,6 +1172,7 @@ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
+ /**
+  * idpf_set_q_coalesce - set ITR values for specific queue
+  * @vport: vport associated to the queue that need updating
++ * @q_coal: per queue coalesce settings
+  * @ec: coalesce settings to program the device with
+  * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
+  * @is_rxq: is queue type rx
+@@ -1172,6 +1180,7 @@ static int __idpf_set_q_coalesce(const struct ethtool_coalesce *ec,
+  * Return 0 on success, and negative on failure
+  */
+ static int idpf_set_q_coalesce(const struct idpf_vport *vport,
++			       struct idpf_q_coalesce *q_coal,
+ 			       const struct ethtool_coalesce *ec,
+ 			       int q_num, bool is_rxq)
+ {
+@@ -1180,7 +1189,7 @@ static int idpf_set_q_coalesce(const struct idpf_vport *vport,
+ 	qv = is_rxq ? idpf_find_rxq_vec(vport, q_num) :
+ 		      idpf_find_txq_vec(vport, q_num);
+ 
+-	if (qv && __idpf_set_q_coalesce(ec, qv, is_rxq))
++	if (qv && __idpf_set_q_coalesce(ec, q_coal, qv, is_rxq))
+ 		return -EINVAL;
+ 
+ 	return 0;
+@@ -1201,9 +1210,13 @@ static int idpf_set_coalesce(struct net_device *netdev,
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	struct idpf_netdev_priv *np = netdev_priv(netdev);
++	struct idpf_vport_user_config_data *user_config;
++	struct idpf_q_coalesce *q_coal;
+ 	struct idpf_vport *vport;
+ 	int i, err = 0;
+ 
++	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
++
+ 	idpf_vport_ctrl_lock(netdev);
+ 	vport = idpf_netdev_to_vport(netdev);
+ 
+@@ -1211,13 +1224,15 @@ static int idpf_set_coalesce(struct net_device *netdev,
+ 		goto unlock_mutex;
+ 
+ 	for (i = 0; i < vport->num_txq; i++) {
+-		err = idpf_set_q_coalesce(vport, ec, i, false);
++		q_coal = &user_config->q_coalesce[i];
++		err = idpf_set_q_coalesce(vport, q_coal, ec, i, false);
+ 		if (err)
+ 			goto unlock_mutex;
+ 	}
+ 
+ 	for (i = 0; i < vport->num_rxq; i++) {
+-		err = idpf_set_q_coalesce(vport, ec, i, true);
++		q_coal = &user_config->q_coalesce[i];
++		err = idpf_set_q_coalesce(vport, q_coal, ec, i, true);
+ 		if (err)
+ 			goto unlock_mutex;
+ 	}
+@@ -1239,20 +1254,25 @@ static int idpf_set_coalesce(struct net_device *netdev,
+ static int idpf_set_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ 				   struct ethtool_coalesce *ec)
+ {
++	struct idpf_netdev_priv *np = netdev_priv(netdev);
++	struct idpf_vport_user_config_data *user_config;
++	struct idpf_q_coalesce *q_coal;
+ 	struct idpf_vport *vport;
+ 	int err;
+ 
+ 	idpf_vport_ctrl_lock(netdev);
+ 	vport = idpf_netdev_to_vport(netdev);
++	user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
++	q_coal = &user_config->q_coalesce[q_num];
+ 
+-	err = idpf_set_q_coalesce(vport, ec, q_num, false);
++	err = idpf_set_q_coalesce(vport, q_coal, ec, q_num, false);
+ 	if (err) {
+ 		idpf_vport_ctrl_unlock(netdev);
+ 
+ 		return err;
+ 	}
+ 
+-	err = idpf_set_q_coalesce(vport, ec, q_num, true);
++	err = idpf_set_q_coalesce(vport, q_coal, ec, q_num, true);
+ 
+ 	idpf_vport_ctrl_unlock(netdev);
+ 
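The idpf hunks above thread a persistent per-queue coalesce structure through every ethtool set path, so user ITR settings survive a soft reset instead of living only in the queue vectors that get torn down. A minimal userspace sketch of that shadow-copy pattern (names hypothetical, not the driver's API):

#include <stdbool.h>
#include <stdio.h>

struct q_coal {           /* persistent copy: survives vector teardown */
	unsigned int usecs;
	bool dynamic;
};

struct q_vector {         /* live state: rebuilt on every reset */
	unsigned int usecs;
	bool dynamic;
};

/* Every settings write updates both the live vector and the shadow. */
static void set_q_coalesce(struct q_vector *qv, struct q_coal *shadow,
			   unsigned int usecs, bool dynamic)
{
	qv->usecs = shadow->usecs = usecs;
	qv->dynamic = shadow->dynamic = dynamic;
}

/* After a reset, the new vector is seeded from the shadow copy. */
static void restore_q_vector(struct q_vector *qv, const struct q_coal *shadow)
{
	qv->usecs = shadow->usecs;
	qv->dynamic = shadow->dynamic;
}

int main(void)
{
	struct q_coal shadow = { 50, true };
	struct q_vector qv;

	set_q_coalesce(&qv, &shadow, 84, false);
	restore_q_vector(&qv, &shadow);   /* user settings persist */
	printf("%u %d\n", qv.usecs, qv.dynamic);
	return 0;
}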
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 80382ff4a5fa00..e9f8da9f7979be 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1079,8 +1079,10 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ 	if (!vport)
+ 		return vport;
+ 
++	num_max_q = max(max_q->max_txq, max_q->max_rxq);
+ 	if (!adapter->vport_config[idx]) {
+ 		struct idpf_vport_config *vport_config;
++		struct idpf_q_coalesce *q_coal;
+ 
+ 		vport_config = kzalloc(sizeof(*vport_config), GFP_KERNEL);
+ 		if (!vport_config) {
+@@ -1089,6 +1091,21 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ 			return NULL;
+ 		}
+ 
++		q_coal = kcalloc(num_max_q, sizeof(*q_coal), GFP_KERNEL);
++		if (!q_coal) {
++			kfree(vport_config);
++			kfree(vport);
++
++			return NULL;
++		}
++		for (int i = 0; i < num_max_q; i++) {
++			q_coal[i].tx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal[i].tx_coalesce_usecs = IDPF_ITR_TX_DEF;
++			q_coal[i].rx_intr_mode = IDPF_ITR_DYNAMIC;
++			q_coal[i].rx_coalesce_usecs = IDPF_ITR_RX_DEF;
++		}
++		vport_config->user_config.q_coalesce = q_coal;
++
+ 		adapter->vport_config[idx] = vport_config;
+ 	}
+ 
+@@ -1098,7 +1115,6 @@ static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ 	vport->default_vport = adapter->num_alloc_vports <
+ 			       idpf_get_default_vports(adapter);
+ 
+-	num_max_q = max(max_q->max_txq, max_q->max_rxq);
+ 	vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
+ 	if (!vport->q_vector_idxs)
+ 		goto free_vport;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index 0efd9c0c7a90fe..6e0757ab406ebb 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -62,6 +62,7 @@ static void idpf_remove(struct pci_dev *pdev)
+ 	destroy_workqueue(adapter->vc_event_wq);
+ 
+ 	for (i = 0; i < adapter->max_vports; i++) {
++		kfree(adapter->vport_config[i]->user_config.q_coalesce);
+ 		kfree(adapter->vport_config[i]);
+ 		adapter->vport_config[i] = NULL;
+ 	}
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 5cf440e09d0a67..89185d1b8485ee 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -4349,9 +4349,13 @@ static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
+ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ {
+ 	u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector;
++	struct idpf_vport_user_config_data *user_config;
+ 	struct idpf_q_vector *q_vector;
++	struct idpf_q_coalesce *q_coal;
+ 	u32 complqs_per_vector, v_idx;
++	u16 idx = vport->idx;
+ 
++	user_config = &vport->adapter->vport_config[idx]->user_config;
+ 	vport->q_vectors = kcalloc(vport->num_q_vectors,
+ 				   sizeof(struct idpf_q_vector), GFP_KERNEL);
+ 	if (!vport->q_vectors)
+@@ -4369,14 +4373,15 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport)
+ 
+ 	for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ 		q_vector = &vport->q_vectors[v_idx];
++		q_coal = &user_config->q_coalesce[v_idx];
+ 		q_vector->vport = vport;
+ 
+-		q_vector->tx_itr_value = IDPF_ITR_TX_DEF;
+-		q_vector->tx_intr_mode = IDPF_ITR_DYNAMIC;
++		q_vector->tx_itr_value = q_coal->tx_coalesce_usecs;
++		q_vector->tx_intr_mode = q_coal->tx_intr_mode;
+ 		q_vector->tx_itr_idx = VIRTCHNL2_ITR_IDX_1;
+ 
+-		q_vector->rx_itr_value = IDPF_ITR_RX_DEF;
+-		q_vector->rx_intr_mode = IDPF_ITR_DYNAMIC;
++		q_vector->rx_itr_value = q_coal->rx_coalesce_usecs;
++		q_vector->rx_intr_mode = q_coal->rx_intr_mode;
+ 		q_vector->rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
+ 
+ 		q_vector->tx = kcalloc(txqs_per_vector, sizeof(*q_vector->tx),
+diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c
+index 351dd152f4f36a..4f3014fc389ba6 100644
+--- a/drivers/net/ethernet/mediatek/mtk_wed.c
++++ b/drivers/net/ethernet/mediatek/mtk_wed.c
+@@ -2794,7 +2794,6 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
+ 	if (!pdev)
+ 		goto err_of_node_put;
+ 
+-	get_device(&pdev->dev);
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+ 		goto err_put_device;
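
The one-line mtk_wed removal fixes a reference imbalance: of_find_device_by_node() already returns the platform device with a reference held, so the extra get_device() leaked one on every probe. A sketch of the rule with a hypothetical refcounted object:

#include <stdlib.h>

struct obj { int refs; };

static struct obj *obj_find(void)   /* lookup returns with one ref held */
{
	struct obj *o = calloc(1, sizeof(*o));

	if (o)
		o->refs = 1;
	return o;
}

static void obj_put(struct obj *o)
{
	if (o && --o->refs == 0)
		free(o);
}

static int attach(int setup_fails)
{
	struct obj *o = obj_find();

	if (!o)
		return -1;
	/* No extra get here: the lookup's reference is the one we keep,
	 * and each failure path must drop exactly that one. */
	if (setup_fails)
		goto err_put;
	return 0;     /* success: keep the lookup's reference */
err_put:
	obj_put(o);
	return -1;
}

int main(void)
{
	attach(1);                 /* error path puts the reference */
	return attach(0) ? 1 : 0;  /* success path keeps it */
}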
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+index f0744a45db92c3..4e461cb03b83dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+@@ -374,7 +374,7 @@ void mlx5e_reactivate_qos_sq(struct mlx5e_priv *priv, u16 qid, struct netdev_que
+ void mlx5e_reset_qdisc(struct net_device *dev, u16 qid)
+ {
+ 	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, qid);
+-	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++	struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);
+ 
+ 	if (!qdisc)
+ 		return;
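The mlx5 fix replaces a plain read of qdisc_sleeping with rtnl_dereference(), documenting (and letting sparse/lockdep check) that the pointer is only stable under RTNL. Outside the kernel, the analogous discipline for a pointer published by another thread is an explicit acquire load; a C11 sketch, purely illustrative:

#include <stdatomic.h>

struct qdisc { int handle; };

static _Atomic(struct qdisc *) sleeping;   /* published by a writer thread */

static void writer(struct qdisc *q)
{
	atomic_store_explicit(&sleeping, q, memory_order_release);
}

static int reader(void)
{
	/* Acquire pairs with the writer's release; a plain read of a
	 * shared pointer would be a data race. */
	struct qdisc *q = atomic_load_explicit(&sleeping,
					       memory_order_acquire);

	return q ? q->handle : -1;
}

int main(void)
{
	static struct qdisc q = { .handle = 1 };

	writer(&q);
	return reader() == 1 ? 0 : 1;
}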
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 7707a9e53c439d..48cb5d30b5f6f6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -3526,10 +3526,6 @@ void ionic_lif_free(struct ionic_lif *lif)
+ 	lif->info = NULL;
+ 	lif->info_pa = 0;
+ 
+-	/* unmap doorbell page */
+-	ionic_bus_unmap_dbpage(lif->ionic, lif->kern_dbpage);
+-	lif->kern_dbpage = NULL;
+-
+ 	mutex_destroy(&lif->config_lock);
+ 	mutex_destroy(&lif->queue_lock);
+ 
+@@ -3555,6 +3551,9 @@ void ionic_lif_deinit(struct ionic_lif *lif)
+ 	ionic_lif_qcq_deinit(lif, lif->notifyqcq);
+ 	ionic_lif_qcq_deinit(lif, lif->adminqcq);
+ 
++	ionic_bus_unmap_dbpage(lif->ionic, lif->kern_dbpage);
++	lif->kern_dbpage = NULL;
++
+ 	ionic_lif_reset(lif);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
+index c72ee759aae58b..f2946bea0bc268 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
+@@ -211,6 +211,7 @@ static int thead_dwmac_probe(struct platform_device *pdev)
+ 	struct stmmac_resources stmmac_res;
+ 	struct plat_stmmacenet_data *plat;
+ 	struct thead_dwmac *dwmac;
++	struct clk *apb_clk;
+ 	void __iomem *apb;
+ 	int ret;
+ 
+@@ -224,6 +225,19 @@ static int thead_dwmac_probe(struct platform_device *pdev)
+ 		return dev_err_probe(&pdev->dev, PTR_ERR(plat),
+ 				     "dt configuration failed\n");
+ 
++	/*
++	 * The APB clock is essential for accessing glue registers. However,
++	 * old devicetrees don't describe it correctly. We continue to probe
++	 * and emit a warning if it isn't present.
++	 */
++	apb_clk = devm_clk_get_enabled(&pdev->dev, "apb");
++	if (PTR_ERR(apb_clk) == -ENOENT)
++		dev_warn(&pdev->dev,
++			 "cannot get apb clock, link may break after speed changes\n");
++	else if (IS_ERR(apb_clk))
++		return dev_err_probe(&pdev->dev, PTR_ERR(apb_clk),
++				     "failed to get apb clock\n");
++
+ 	dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
+ 	if (!dwmac)
+ 		return -ENOMEM;
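
The dwmac-thead probe treats a missing "apb" clock as a soft failure, warning and continuing for the sake of old devicetrees, while still failing hard on real errors such as probe deferral. The useful move is the three-way split on the error value; modeled below with plain errnos (the kernel does this with PTR_ERR() on the clk pointer):

#include <errno.h>
#include <stdio.h>

/* 0 on success, -ENOENT when the DT simply lacks the clock,
 * another negative errno on a real failure. */
static int get_clock(int dt_has_clk, int io_broken)
{
	if (!dt_has_clk)
		return -ENOENT;
	return io_broken ? -EIO : 0;
}

static int probe(int dt_has_clk, int io_broken)
{
	int ret = get_clock(dt_has_clk, io_broken);

	if (ret == -ENOENT)
		fprintf(stderr, "no apb clock described, continuing\n");
	else if (ret)
		return ret;      /* real error: abort the probe */
	/* ... rest of probe runs either way ... */
	return 0;
}

int main(void)
{
	return probe(1, 0) || probe(0, 0) || (probe(1, 1) != -EIO);
}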
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index 2a1c43316f462b..d8c9fe1d98c475 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -621,7 +621,8 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on)
+ 
+ static int icss_iep_extts_enable(struct icss_iep *iep, u32 index, int on)
+ {
+-	u32 val, cap, ret = 0;
++	u32 val, cap;
++	int ret = 0;
+ 
+ 	mutex_lock(&iep->ptp_clk_mutex);
+ 
+@@ -685,10 +686,16 @@ struct icss_iep *icss_iep_get_idx(struct device_node *np, int idx)
+ 	struct platform_device *pdev;
+ 	struct device_node *iep_np;
+ 	struct icss_iep *iep;
++	int ret;
+ 
+ 	iep_np = of_parse_phandle(np, "ti,iep", idx);
+-	if (!iep_np || !of_device_is_available(iep_np))
++	if (!iep_np)
++		return ERR_PTR(-ENODEV);
++
++	if (!of_device_is_available(iep_np)) {
++		of_node_put(iep_np);
+ 		return ERR_PTR(-ENODEV);
++	}
+ 
+ 	pdev = of_find_device_by_node(iep_np);
+ 	of_node_put(iep_np);
+@@ -698,21 +705,28 @@ struct icss_iep *icss_iep_get_idx(struct device_node *np, int idx)
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 
+ 	iep = platform_get_drvdata(pdev);
+-	if (!iep)
+-		return ERR_PTR(-EPROBE_DEFER);
++	if (!iep) {
++		ret = -EPROBE_DEFER;
++		goto err_put_pdev;
++	}
+ 
+ 	device_lock(iep->dev);
+ 	if (iep->client_np) {
+ 		device_unlock(iep->dev);
+ 		dev_err(iep->dev, "IEP is already acquired by %s",
+ 			iep->client_np->name);
+-		return ERR_PTR(-EBUSY);
++		ret = -EBUSY;
++		goto err_put_pdev;
+ 	}
+ 	iep->client_np = np;
+ 	device_unlock(iep->dev);
+-	get_device(iep->dev);
+ 
+ 	return iep;
++
++err_put_pdev:
++	put_device(&pdev->dev);
++
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(icss_iep_get_idx);
+ 
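The icss_iep changes are a cluster of small leak fixes: of_node_put() on the early-return path, a single err_put_pdev label so every failure after of_find_device_by_node() drops the device reference, and ret changed from u32 to int so a negative errno actually survives. A compressed sketch of the unwind pattern (hypothetical object, not the driver's types):

#include <errno.h>
#include <stdlib.h>

struct dev { int refs; int busy; };

static void dev_put(struct dev *d)
{
	if (--d->refs == 0)
		free(d);
}

/* The lookup returned d with one reference held; every failure path
 * below must drop exactly that one. */
static int dev_acquire(struct dev *d)
{
	int ret;         /* int, not u32: a negative errno must survive */

	if (!d)
		return -ENODEV;
	if (d->busy) {
		ret = -EBUSY;
		goto err_put;    /* one label, one put */
	}
	d->busy = 1;
	return 0;

err_put:
	dev_put(d);
	return ret;
}

int main(void)
{
	struct dev *d = calloc(1, sizeof(*d));

	if (!d)
		return 1;
	d->refs = 1;     /* as if returned by a lookup */
	return dev_acquire(d) ? 1 : 0;
}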
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index 2f5c4335dec388..008d7772740078 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -50,6 +50,8 @@
+ /* CTRLMMR_ICSSG_RGMII_CTRL register bits */
+ #define ICSSG_CTRL_RGMII_ID_MODE                BIT(24)
+ 
++static void emac_adjust_link(struct net_device *ndev);
++
+ static int emac_get_tx_ts(struct prueth_emac *emac,
+ 			  struct emac_tx_ts_response *rsp)
+ {
+@@ -266,6 +268,10 @@ static int prueth_emac_common_start(struct prueth *prueth)
+ 		ret = icssg_config(prueth, emac, slice);
+ 		if (ret)
+ 			goto disable_class;
++
++		mutex_lock(&emac->ndev->phydev->lock);
++		emac_adjust_link(emac->ndev);
++		mutex_unlock(&emac->ndev->phydev->lock);
+ 	}
+ 
+ 	ret = prueth_emac_start(prueth);
+diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
+index 0e0fe32d2da4e3..045c5177262eaf 100644
+--- a/drivers/net/hamradio/bpqether.c
++++ b/drivers/net/hamradio/bpqether.c
+@@ -138,7 +138,7 @@ static inline struct net_device *bpq_get_ax25_dev(struct net_device *dev)
+ 
+ static inline int dev_is_ethdev(struct net_device *dev)
+ {
+-	return dev->type == ARPHRD_ETHER && strncmp(dev->name, "dummy", 5);
++	return dev->type == ARPHRD_ETHER && !netdev_need_ops_lock(dev);
+ }
+ 
+ /* ------------------------------------------------------------------------ */
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index cb6f5482d203e1..7397c693f984af 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -1061,6 +1061,7 @@ struct net_device_context {
+ 	struct net_device __rcu *vf_netdev;
+ 	struct netvsc_vf_pcpu_stats __percpu *vf_stats;
+ 	struct delayed_work vf_takeover;
++	struct delayed_work vfns_work;
+ 
+ 	/* 1: allocated, serial number is valid. 0: not allocated */
+ 	u32 vf_alloc;
+@@ -1075,6 +1076,8 @@ struct net_device_context {
+ 	struct netvsc_device_info *saved_netvsc_dev_info;
+ };
+ 
++void netvsc_vfns_work(struct work_struct *w);
++
+ /* Azure hosts don't support non-TCP port numbers in hashing for fragmented
+  * packets. We can use ethtool to change UDP hash level when necessary.
+  */
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 8be9bce66a4ed4..519456bdff6f6b 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2530,6 +2530,7 @@ static int netvsc_probe(struct hv_device *dev,
+ 	spin_lock_init(&net_device_ctx->lock);
+ 	INIT_LIST_HEAD(&net_device_ctx->reconfig_events);
+ 	INIT_DELAYED_WORK(&net_device_ctx->vf_takeover, netvsc_vf_setup);
++	INIT_DELAYED_WORK(&net_device_ctx->vfns_work, netvsc_vfns_work);
+ 
+ 	net_device_ctx->vf_stats
+ 		= netdev_alloc_pcpu_stats(struct netvsc_vf_pcpu_stats);
+@@ -2674,6 +2675,8 @@ static void netvsc_remove(struct hv_device *dev)
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+ 	rtnl_lock();
++	cancel_delayed_work_sync(&ndev_ctx->vfns_work);
++
+ 	nvdev = rtnl_dereference(ndev_ctx->nvdev);
+ 	if (nvdev) {
+ 		cancel_work_sync(&nvdev->subchan_work);
+@@ -2715,6 +2718,7 @@ static int netvsc_suspend(struct hv_device *dev)
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+ 	rtnl_lock();
++	cancel_delayed_work_sync(&ndev_ctx->vfns_work);
+ 
+ 	nvdev = rtnl_dereference(ndev_ctx->nvdev);
+ 	if (nvdev == NULL) {
+@@ -2808,6 +2812,27 @@ static void netvsc_event_set_vf_ns(struct net_device *ndev)
+ 	}
+ }
+ 
++void netvsc_vfns_work(struct work_struct *w)
++{
++	struct net_device_context *ndev_ctx =
++		container_of(w, struct net_device_context, vfns_work.work);
++	struct net_device *ndev;
++
++	if (!rtnl_trylock()) {
++		schedule_delayed_work(&ndev_ctx->vfns_work, 1);
++		return;
++	}
++
++	ndev = hv_get_drvdata(ndev_ctx->device_ctx);
++	if (!ndev)
++		goto out;
++
++	netvsc_event_set_vf_ns(ndev);
++
++out:
++	rtnl_unlock();
++}
++
+ /*
+  * On Hyper-V, every VF interface is matched with a corresponding
+  * synthetic interface. The synthetic interface is presented first
+@@ -2818,10 +2843,12 @@ static int netvsc_netdev_event(struct notifier_block *this,
+ 			       unsigned long event, void *ptr)
+ {
+ 	struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
++	struct net_device_context *ndev_ctx;
+ 	int ret = 0;
+ 
+ 	if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) {
+-		netvsc_event_set_vf_ns(event_dev);
++		ndev_ctx = netdev_priv(event_dev);
++		schedule_delayed_work(&ndev_ctx->vfns_work, 0);
+ 		return NOTIFY_DONE;
+ 	}
+ 
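The netvsc change moves the VF namespace fixup out of the netdev notifier into delayed work that grabs RTNL with rtnl_trylock() and reschedules itself when the lock is contended, so the work never blocks or deadlocks on a lock the notifier path may already hold. The trylock-and-requeue shape with pthreads (the stubs stand in for the workqueue):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

static void requeue_work(void)  { /* re-schedule work_fn for a later run */ }
static void do_update(void)     { puts("state updated under big_lock"); }

static void work_fn(void)
{
	/* If someone else holds the lock, don't block the worker:
	 * bail out, requeue, and try again on the next run. */
	if (pthread_mutex_trylock(&big_lock) != 0) {
		requeue_work();
		return;
	}
	do_update();
	pthread_mutex_unlock(&big_lock);
}

int main(void)
{
	work_fn();
	return 0;
}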
+diff --git a/drivers/net/pcs/pcs-xpcs-plat.c b/drivers/net/pcs/pcs-xpcs-plat.c
+index 629315f1e57cb3..9dcaf7a66113ed 100644
+--- a/drivers/net/pcs/pcs-xpcs-plat.c
++++ b/drivers/net/pcs/pcs-xpcs-plat.c
+@@ -66,7 +66,7 @@ static int xpcs_mmio_read_reg_indirect(struct dw_xpcs_plat *pxpcs,
+ 	switch (pxpcs->reg_width) {
+ 	case 4:
+ 		writel(page, pxpcs->reg_base + (DW_VR_CSR_VIEWPORT << 2));
+-		ret = readl(pxpcs->reg_base + (ofs << 2));
++		ret = readl(pxpcs->reg_base + (ofs << 2)) & 0xffff;
+ 		break;
+ 	default:
+ 		writew(page, pxpcs->reg_base + (DW_VR_CSR_VIEWPORT << 1));
+@@ -124,7 +124,7 @@ static int xpcs_mmio_read_reg_direct(struct dw_xpcs_plat *pxpcs,
+ 
+ 	switch (pxpcs->reg_width) {
+ 	case 4:
+-		ret = readl(pxpcs->reg_base + (csr << 2));
++		ret = readl(pxpcs->reg_base + (csr << 2)) & 0xffff;
+ 		break;
+ 	default:
+ 		ret = readw(pxpcs->reg_base + (csr << 1));
+diff --git a/drivers/net/phy/air_en8811h.c b/drivers/net/phy/air_en8811h.c
+index 57fbd8df94388c..badd65f0ccee2e 100644
+--- a/drivers/net/phy/air_en8811h.c
++++ b/drivers/net/phy/air_en8811h.c
+@@ -11,6 +11,7 @@
+  * Copyright (C) 2023 Airoha Technology Corp.
+  */
+ 
++#include <linux/clk.h>
+ #include <linux/clk-provider.h>
+ #include <linux/phy.h>
+ #include <linux/firmware.h>
+@@ -157,6 +158,7 @@ struct en8811h_priv {
+ 	struct led		led[EN8811H_LED_COUNT];
+ 	struct clk_hw		hw;
+ 	struct phy_device	*phydev;
++	unsigned int		cko_is_enabled;
+ };
+ 
+ enum {
+@@ -865,11 +867,30 @@ static int en8811h_clk_is_enabled(struct clk_hw *hw)
+ 	return (pbus_value & EN8811H_CLK_CGM_CKO);
+ }
+ 
++static int en8811h_clk_save_context(struct clk_hw *hw)
++{
++	struct en8811h_priv *priv = clk_hw_to_en8811h_priv(hw);
++
++	priv->cko_is_enabled = en8811h_clk_is_enabled(hw);
++
++	return 0;
++}
++
++static void en8811h_clk_restore_context(struct clk_hw *hw)
++{
++	struct en8811h_priv *priv = clk_hw_to_en8811h_priv(hw);
++
++	if (!priv->cko_is_enabled)
++		en8811h_clk_disable(hw);
++}
++
+ static const struct clk_ops en8811h_clk_ops = {
+-	.recalc_rate	= en8811h_clk_recalc_rate,
+-	.enable		= en8811h_clk_enable,
+-	.disable	= en8811h_clk_disable,
+-	.is_enabled	= en8811h_clk_is_enabled,
++	.recalc_rate		= en8811h_clk_recalc_rate,
++	.enable			= en8811h_clk_enable,
++	.disable		= en8811h_clk_disable,
++	.is_enabled		= en8811h_clk_is_enabled,
++	.save_context		= en8811h_clk_save_context,
++	.restore_context	= en8811h_clk_restore_context,
+ };
+ 
+ static int en8811h_clk_provider_setup(struct device *dev, struct clk_hw *hw)
+@@ -1149,6 +1170,20 @@ static irqreturn_t en8811h_handle_interrupt(struct phy_device *phydev)
+ 	return IRQ_HANDLED;
+ }
+ 
++static int en8811h_resume(struct phy_device *phydev)
++{
++	clk_restore_context();
++
++	return genphy_resume(phydev);
++}
++
++static int en8811h_suspend(struct phy_device *phydev)
++{
++	clk_save_context();
++
++	return genphy_suspend(phydev);
++}
++
+ static struct phy_driver en8811h_driver[] = {
+ {
+ 	PHY_ID_MATCH_MODEL(EN8811H_PHY_ID),
+@@ -1159,6 +1194,8 @@ static struct phy_driver en8811h_driver[] = {
+ 	.get_rate_matching	= en8811h_get_rate_matching,
+ 	.config_aneg		= en8811h_config_aneg,
+ 	.read_status		= en8811h_read_status,
++	.resume			= en8811h_resume,
++	.suspend		= en8811h_suspend,
+ 	.config_intr		= en8811h_clear_intr,
+ 	.handle_interrupt	= en8811h_handle_interrupt,
+ 	.led_hw_is_supported	= en8811h_led_hw_is_supported,
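
The en8811h PHY loses register state across suspend, so the patch records whether the CKO clock was enabled in save_context and re-disables it in restore_context if it had been off before. The same save/restore pair in miniature:

#include <stdbool.h>
#include <stdio.h>

struct clk_state {
	bool hw_enabled;     /* live hardware state (lost in suspend) */
	bool saved_enabled;  /* software copy taken at suspend time */
};

static void clk_save_ctx(struct clk_state *c)
{
	c->saved_enabled = c->hw_enabled;
}

static void clk_restore_ctx(struct clk_state *c)
{
	/* Hardware may come back with the clock on by default; only
	 * re-disable it if it was off before suspend. */
	if (!c->saved_enabled)
		c->hw_enabled = false;
}

int main(void)
{
	struct clk_state c = { .hw_enabled = false };

	clk_save_ctx(&c);
	c.hw_enabled = true;            /* reset default after power loss */
	clk_restore_ctx(&c);
	printf("%d\n", c.hw_enabled);   /* 0: prior state restored */
	return 0;
}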
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 9b1de54fd4835c..f871f11d192130 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -655,7 +655,7 @@ static int bcm5481x_read_abilities(struct phy_device *phydev)
+ {
+ 	struct device_node *np = phydev->mdio.dev.of_node;
+ 	struct bcm54xx_phy_priv *priv = phydev->priv;
+-	int i, val, err;
++	int i, val, err, aneg;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(bcm54811_linkmodes); i++)
+ 		linkmode_clear_bit(bcm54811_linkmodes[i], phydev->supported);
+@@ -676,9 +676,19 @@ static int bcm5481x_read_abilities(struct phy_device *phydev)
+ 		if (val < 0)
+ 			return val;
+ 
++		/* BCM54811 is not capable of LDS but the corresponding bit
++		 * in LRESR is set to 1 and marked "Ignore" in the datasheet.
++		 * So we must read the bcm54811 as unable to auto-negotiate
++		 * in BroadR-Reach mode.
++		 */
++		if (BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54811)
++			aneg = 0;
++		else
++			aneg = val & LRESR_LDSABILITY;
++
+ 		linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ 				 phydev->supported,
+-				 val & LRESR_LDSABILITY);
++				 aneg);
+ 		linkmode_mod_bit(ETHTOOL_LINK_MODE_100baseT1_Full_BIT,
+ 				 phydev->supported,
+ 				 val & LRESR_100_1PAIR);
+@@ -735,8 +745,15 @@ static int bcm54811_config_aneg(struct phy_device *phydev)
+ 
+ 	/* Aneg firstly. */
+ 	if (priv->brr_mode) {
+-		/* BCM54811 is only capable of autonegotiation in IEEE mode */
+-		phydev->autoneg = 0;
++		/* BCM54811 is only capable of autonegotiation in IEEE mode.
++		 * In BroadR-Reach mode, disable Long Distance Signaling (the
++		 * BRR-mode autoneg supported in other Broadcom PHYs). This
++		 * bit is marked as "Reserved" and "Default 1, must be
++		 * written to 0 after every device reset" in the datasheet.
++		 */
++		ret = phy_modify(phydev, MII_BCM54XX_LRECR, LRECR_LDSEN, 0);
++		if (ret < 0)
++			return ret;
+ 		ret = bcm_config_lre_aneg(phydev, false);
+ 	} else {
+ 		ret = genphy_config_aneg(phydev);
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index fda2e27c18105c..cad6ed3aa10b64 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -91,6 +91,7 @@ int mdiobus_unregister_device(struct mdio_device *mdiodev)
+ 	if (mdiodev->bus->mdio_map[mdiodev->addr] != mdiodev)
+ 		return -EINVAL;
+ 
++	gpiod_put(mdiodev->reset_gpio);
+ 	reset_control_put(mdiodev->reset_ctrl);
+ 
+ 	mdiodev->bus->mdio_map[mdiodev->addr] = NULL;
+diff --git a/drivers/net/phy/mdio_bus_provider.c b/drivers/net/phy/mdio_bus_provider.c
+index 65850e36284de3..5401170f14e5a4 100644
+--- a/drivers/net/phy/mdio_bus_provider.c
++++ b/drivers/net/phy/mdio_bus_provider.c
+@@ -444,9 +444,6 @@ void mdiobus_unregister(struct mii_bus *bus)
+ 		if (!mdiodev)
+ 			continue;
+ 
+-		if (mdiodev->reset_gpio)
+-			gpiod_put(mdiodev->reset_gpio);
+-
+ 		mdiodev->device_remove(mdiodev);
+ 		mdiodev->device_free(mdiodev);
+ 	}
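
These two mdio hunks move gpiod_put() from the bus-teardown loop into mdiobus_unregister_device(), so the reset GPIO is released wherever the device itself is unregistered, not only when the whole bus goes away. The unconditional call works because gpiod_put() on a NULL descriptor is a no-op in recent kernels; a sketch of the "release resources where the owner dies" rule:

#include <stdlib.h>

struct gpio { int id; };

/* NULL-safe put, like gpiod_put() in recent kernels. */
static void gpio_put(struct gpio *g)
{
	free(g);   /* free(NULL) is a no-op */
}

struct mdio_dev {
	struct gpio *reset_gpio;   /* may be NULL */
};

/* Device teardown releases everything the device owns, unconditionally. */
static void mdio_dev_unregister(struct mdio_dev *d)
{
	gpio_put(d->reset_gpio);
	d->reset_gpio = NULL;
}

int main(void)
{
	struct mdio_dev d = { .reset_gpio = malloc(sizeof(struct gpio)) };

	mdio_dev_unregister(&d);   /* frees the gpio */
	mdio_dev_unregister(&d);   /* safe: NULL put is a no-op */
	return 0;
}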
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 64aa03aed77009..52c6778116160d 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -472,6 +472,8 @@ static const struct kszphy_type ksz8051_type = {
+ 
+ static const struct kszphy_type ksz8081_type = {
+ 	.led_mode_reg		= MII_KSZPHY_CTRL_2,
++	.cable_diag_reg		= KSZ8081_LMD,
++	.pair_mask		= KSZPHY_WIRE_PAIR_MASK,
+ 	.has_broadcast_disable	= true,
+ 	.has_nand_tree_disable	= true,
+ 	.has_rmii_ref_clk_sel	= true,
+@@ -5403,6 +5405,14 @@ static int lan8841_suspend(struct phy_device *phydev)
+ 	return kszphy_generic_suspend(phydev);
+ }
+ 
++static int ksz9131_resume(struct phy_device *phydev)
++{
++	if (phydev->suspended && phy_interface_is_rgmii(phydev))
++		ksz9131_config_rgmii_delay(phydev);
++
++	return kszphy_resume(phydev);
++}
++
+ static struct phy_driver ksphy_driver[] = {
+ {
+ 	.phy_id		= PHY_ID_KS8737,
+@@ -5649,7 +5659,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.get_strings	= kszphy_get_strings,
+ 	.get_stats	= kszphy_get_stats,
+ 	.suspend	= kszphy_suspend,
+-	.resume		= kszphy_resume,
++	.resume		= ksz9131_resume,
+ 	.cable_test_start	= ksz9x31_cable_test_start,
+ 	.cable_test_get_status	= ksz9x31_cable_test_get_status,
+ 	.get_features	= ksz9477_get_features,
+diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
+index 4c6d905f0a9f75..87adb65080176d 100644
+--- a/drivers/net/phy/nxp-c45-tja11xx.c
++++ b/drivers/net/phy/nxp-c45-tja11xx.c
+@@ -1965,24 +1965,27 @@ static int nxp_c45_macsec_ability(struct phy_device *phydev)
+ 	return macsec_ability;
+ }
+ 
++static bool tja11xx_phy_id_compare(struct phy_device *phydev,
++				   const struct phy_driver *phydrv)
++{
++	u32 id = phydev->is_c45 ? phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] :
++				  phydev->phy_id;
++
++	return phy_id_compare(id, phydrv->phy_id, phydrv->phy_id_mask);
++}
++
+ static int tja11xx_no_macsec_match_phy_device(struct phy_device *phydev,
+ 					      const struct phy_driver *phydrv)
+ {
+-	if (!phy_id_compare(phydev->phy_id, phydrv->phy_id,
+-			    phydrv->phy_id_mask))
+-		return 0;
+-
+-	return !nxp_c45_macsec_ability(phydev);
++	return tja11xx_phy_id_compare(phydev, phydrv) &&
++	       !nxp_c45_macsec_ability(phydev);
+ }
+ 
+ static int tja11xx_macsec_match_phy_device(struct phy_device *phydev,
+ 					   const struct phy_driver *phydrv)
+ {
+-	if (!phy_id_compare(phydev->phy_id, phydrv->phy_id,
+-			    phydrv->phy_id_mask))
+-		return 0;
+-
+-	return nxp_c45_macsec_ability(phydev);
++	return tja11xx_phy_id_compare(phydev, phydrv) &&
++	       nxp_c45_macsec_ability(phydev);
+ }
+ 
+ static const struct nxp_c45_regmap tja1120_regmap = {
+diff --git a/drivers/net/phy/realtek/realtek_main.c b/drivers/net/phy/realtek/realtek_main.c
+index c3dcb625743033..dd0d675149ad7f 100644
+--- a/drivers/net/phy/realtek/realtek_main.c
++++ b/drivers/net/phy/realtek/realtek_main.c
+@@ -436,9 +436,15 @@ static irqreturn_t rtl8211f_handle_interrupt(struct phy_device *phydev)
+ 
+ static void rtl8211f_get_wol(struct phy_device *dev, struct ethtool_wolinfo *wol)
+ {
++	int wol_events;
++
+ 	wol->supported = WAKE_MAGIC;
+-	if (phy_read_paged(dev, RTL8211F_WOL_SETTINGS_PAGE, RTL8211F_WOL_SETTINGS_EVENTS)
+-	    & RTL8211F_WOL_EVENT_MAGIC)
++
++	wol_events = phy_read_paged(dev, RTL8211F_WOL_SETTINGS_PAGE, RTL8211F_WOL_SETTINGS_EVENTS);
++	if (wol_events < 0)
++		return;
++
++	if (wol_events & RTL8211F_WOL_EVENT_MAGIC)
+ 		wol->wolopts = WAKE_MAGIC;
+ }
+ 
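The rtl8211f WOL fix is the classic errno-in-band pattern: phy_read_paged() returns either a non-negative register value or a negative errno in the same int, so the result must be tested for < 0 before any bit is examined. In miniature:

#include <errno.h>

#define WOL_EVENT_MAGIC 0x1000

static int read_reg(int fail)   /* value >= 0, or a negative errno */
{
	return fail ? -EIO : WOL_EVENT_MAGIC;
}

static int wol_enabled(int fail)
{
	int val = read_reg(fail);

	if (val < 0)            /* never bit-test a negative errno */
		return 0;
	return !!(val & WOL_EVENT_MAGIC);
}

int main(void)
{
	return wol_enabled(0) && !wol_enabled(1) ? 0 : 1;
}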
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index b6489da5cfcdfb..48487149c22528 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -785,6 +785,7 @@ static struct phy_driver smsc_phy_driver[] = {
+ 
+ 	/* PHY_BASIC_FEATURES */
+ 
++	.flags		= PHY_RST_AFTER_CLK_EN,
+ 	.probe		= smsc_phy_probe,
+ 
+ 	/* basic functions */
+diff --git a/drivers/net/thunderbolt/main.c b/drivers/net/thunderbolt/main.c
+index 0a53ec293d0408..dcaa62377808c2 100644
+--- a/drivers/net/thunderbolt/main.c
++++ b/drivers/net/thunderbolt/main.c
+@@ -396,9 +396,9 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
+ 
+ 		ret = tb_xdomain_disable_paths(net->xd,
+ 					       net->local_transmit_path,
+-					       net->rx_ring.ring->hop,
++					       net->tx_ring.ring->hop,
+ 					       net->remote_transmit_path,
+-					       net->tx_ring.ring->hop);
++					       net->rx_ring.ring->hop);
+ 		if (ret)
+ 			netdev_warn(net->dev, "failed to disable DMA paths\n");
+ 
+@@ -662,9 +662,9 @@ static void tbnet_connected_work(struct work_struct *work)
+ 		goto err_free_rx_buffers;
+ 
+ 	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
+-				      net->rx_ring.ring->hop,
++				      net->tx_ring.ring->hop,
+ 				      net->remote_transmit_path,
+-				      net->tx_ring.ring->hop);
++				      net->rx_ring.ring->hop);
+ 	if (ret) {
+ 		netdev_err(net->dev, "failed to enable DMA paths\n");
+ 		goto err_free_tx_buffers;
+@@ -924,8 +924,12 @@ static int tbnet_open(struct net_device *dev)
+ 
+ 	netif_carrier_off(dev);
+ 
+-	ring = tb_ring_alloc_tx(xd->tb->nhi, -1, TBNET_RING_SIZE,
+-				RING_FLAG_FRAME);
++	flags = RING_FLAG_FRAME;
++	/* Only enable full E2E if the other end supports it too */
++	if (tbnet_e2e && net->svc->prtcstns & TBNET_E2E)
++		flags |= RING_FLAG_E2E;
++
++	ring = tb_ring_alloc_tx(xd->tb->nhi, -1, TBNET_RING_SIZE, flags);
+ 	if (!ring) {
+ 		netdev_err(dev, "failed to allocate Tx ring\n");
+ 		return -ENOMEM;
+@@ -944,11 +948,6 @@ static int tbnet_open(struct net_device *dev)
+ 	sof_mask = BIT(TBIP_PDF_FRAME_START);
+ 	eof_mask = BIT(TBIP_PDF_FRAME_END);
+ 
+-	flags = RING_FLAG_FRAME;
+-	/* Only enable full E2E if the other end supports it too */
+-	if (tbnet_e2e && net->svc->prtcstns & TBNET_E2E)
+-		flags |= RING_FLAG_E2E;
+-
+ 	ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE, flags,
+ 				net->tx_ring.ring->hop, sof_mask,
+ 				eof_mask, tbnet_start_poll, net);
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 9b0318fb50b55c..d9f5942ccc447b 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -676,6 +676,7 @@ static int ax88772_init_mdio(struct usbnet *dev)
+ 	priv->mdio->read = &asix_mdio_bus_read;
+ 	priv->mdio->write = &asix_mdio_bus_write;
+ 	priv->mdio->name = "Asix MDIO Bus";
++	priv->mdio->phy_mask = ~(BIT(priv->phy_addr) | BIT(AX_EMBD_PHY_ADDR));
+ 	/* mii bus name is usb-<usb bus number>-<usb device number> */
+ 	snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d",
+ 		 dev->udev->bus->busnum, dev->udev->devnum);
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index 34e82f1e37d964..ea0e5e276cd6d1 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -892,6 +892,10 @@ int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_
+ 		}
+ 	}
+ 
++	if (ctx->func_desc)
++		ctx->filtering_supported = !!(ctx->func_desc->bmNetworkCapabilities
++			& USB_CDC_NCM_NCAP_ETH_FILTER);
++
+ 	iface_no = ctx->data->cur_altsetting->desc.bInterfaceNumber;
+ 
+ 	/* Device-specific flags */
+@@ -1898,6 +1902,14 @@ static void cdc_ncm_status(struct usbnet *dev, struct urb *urb)
+ 	}
+ }
+ 
++static void cdc_ncm_update_filter(struct usbnet *dev)
++{
++	struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
++
++	if (ctx->filtering_supported)
++		usbnet_cdc_update_filter(dev);
++}
++
+ static const struct driver_info cdc_ncm_info = {
+ 	.description = "CDC NCM (NO ZLP)",
+ 	.flags = FLAG_POINTTOPOINT | FLAG_NO_SETINT | FLAG_MULTI_PACKET
+@@ -1908,7 +1920,7 @@ static const struct driver_info cdc_ncm_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ /* Same as cdc_ncm_info, but with FLAG_SEND_ZLP  */
+@@ -1922,7 +1934,7 @@ static const struct driver_info cdc_ncm_zlp_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ /* Same as cdc_ncm_info, but with FLAG_SEND_ZLP */
+@@ -1964,7 +1976,7 @@ static const struct driver_info wwan_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ /* Same as wwan_info, but with FLAG_NOARP  */
+@@ -1978,7 +1990,7 @@ static const struct driver_info wwan_noarp_info = {
+ 	.status = cdc_ncm_status,
+ 	.rx_fixup = cdc_ncm_rx_fixup,
+ 	.tx_fixup = cdc_ncm_tx_fixup,
+-	.set_rx_mode = usbnet_cdc_update_filter,
++	.set_rx_mode = cdc_ncm_update_filter,
+ };
+ 
+ static const struct usb_device_id cdc_devs[] = {
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index f5647ee0addec2..e56901bb6ebc43 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1361,6 +1361,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1057, 2)},	/* Telit FN980 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)},	/* Telit LN920 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)},	/* Telit FN990A */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1077, 2)},	/* Telit FN990A w/audio */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990A */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index 995a7207bdf871..f357a7ac70ac47 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -81,7 +81,7 @@ static struct lapbethdev *lapbeth_get_x25_dev(struct net_device *dev)
+ 
+ static __inline__ int dev_is_ethdev(struct net_device *dev)
+ {
+-	return dev->type == ARPHRD_ETHER && strncmp(dev->name, "dummy", 5);
++	return dev->type == ARPHRD_ETHER && !netdev_need_ops_lock(dev);
+ }
+ 
+ /* ------------------------------------------------------------------------ */
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index fe3a8f4a1cc1b7..8b659678856dbd 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -2491,12 +2491,50 @@ static int ath10k_init_hw_params(struct ath10k *ar)
+ 	return 0;
+ }
+ 
++static bool ath10k_core_needs_recovery(struct ath10k *ar)
++{
++	long time_left;
++
++	/* Sometimes a recovery fails, and then every subsequent recovery
++	 * fails as well, so bail out instead of recovering forever.
++	 */
++	if (atomic_read(&ar->fail_cont_count) >= ATH10K_RECOVERY_MAX_FAIL_COUNT) {
++		ath10k_err(ar, "consecutive fail %d times, will shutdown driver!",
++			   atomic_read(&ar->fail_cont_count));
++		ar->state = ATH10K_STATE_WEDGED;
++		return false;
++	}
++
++	ath10k_dbg(ar, ATH10K_DBG_BOOT, "total recovery count: %d", ++ar->recovery_count);
++
++	if (atomic_read(&ar->pending_recovery)) {
++		/* Sometimes another recovery work is scheduled before the
++		 * previous one has completed; the second would then destroy
++		 * the first, so wait here to avoid that.
++		 */
++		time_left = wait_for_completion_timeout(&ar->driver_recovery,
++							ATH10K_RECOVERY_TIMEOUT_HZ);
++		if (time_left) {
++			ath10k_warn(ar, "previous recovery succeeded, skip this!\n");
++			return false;
++		}
++
++		/* The previous recovery failed: bump the consecutive-failure count. */
++		atomic_inc(&ar->fail_cont_count);
++
++		/* Avoid having multiple recoveries at the same time. */
++		return false;
++	}
++
++	atomic_inc(&ar->pending_recovery);
++
++	return true;
++}
++
+ void ath10k_core_start_recovery(struct ath10k *ar)
+ {
+-	if (test_and_set_bit(ATH10K_FLAG_RESTARTING, &ar->dev_flags)) {
+-		ath10k_warn(ar, "already restarting\n");
++	if (!ath10k_core_needs_recovery(ar))
+ 		return;
+-	}
+ 
+ 	queue_work(ar->workqueue, &ar->restart_work);
+ }
+@@ -2532,6 +2570,8 @@ static void ath10k_core_restart(struct work_struct *work)
+ 	struct ath10k *ar = container_of(work, struct ath10k, restart_work);
+ 	int ret;
+ 
++	reinit_completion(&ar->driver_recovery);
++
+ 	set_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
+ 
+ 	/* Place a barrier to make sure the compiler doesn't reorder
+@@ -2596,8 +2636,6 @@ static void ath10k_core_restart(struct work_struct *work)
+ 	if (ret)
+ 		ath10k_warn(ar, "failed to send firmware crash dump via devcoredump: %d",
+ 			    ret);
+-
+-	complete(&ar->driver_recovery);
+ }
+ 
+ static void ath10k_core_set_coverage_class_work(struct work_struct *work)
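The ath10k recovery rework replaces one RESTARTING bit with three pieces of state: an atomic pending flag so only one recovery runs at a time, a completion that late arrivals wait on, and a consecutive-failure counter that wedges the device after ATH10K_RECOVERY_MAX_FAIL_COUNT (4) failed attempts. The gating logic, compressed into a C11 sketch (the completion wait is reduced to a boolean):

#include <stdatomic.h>
#include <stdbool.h>

#define MAX_FAIL 4          /* mirrors ATH10K_RECOVERY_MAX_FAIL_COUNT */

static atomic_int pending;
static atomic_int fail_cont;

/* Returns true when the caller should launch a new recovery. */
static bool needs_recovery(bool prev_succeeded)
{
	if (atomic_load(&fail_cont) >= MAX_FAIL)
		return false;                /* wedged: give up for good */

	if (atomic_exchange(&pending, 1)) {  /* one already in flight */
		if (prev_succeeded)          /* driver: waits on a completion */
			return false;        /* the other attempt did the job */
		atomic_fetch_add(&fail_cont, 1);
		return false;                /* never run two at once */
	}
	return true;
}

/* Called from the reconfig-complete path on success. */
static void recovery_done(void)
{
	atomic_store(&fail_cont, 0);
	atomic_store(&pending, 0);
}

int main(void)
{
	if (!needs_recovery(false))
		return 1;
	recovery_done();
	return 0;
}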
+diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
+index 446dca74f06a63..85e16c945b5c20 100644
+--- a/drivers/net/wireless/ath/ath10k/core.h
++++ b/drivers/net/wireless/ath/ath10k/core.h
+@@ -4,6 +4,7 @@
+  * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
+  * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+  * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #ifndef _CORE_H_
+@@ -87,6 +88,8 @@
+ 				  IEEE80211_IFACE_SKIP_SDATA_NOT_IN_DRIVER)
+ #define ATH10K_ITER_RESUME_FLAGS (IEEE80211_IFACE_ITER_RESUME_ALL |\
+ 				  IEEE80211_IFACE_SKIP_SDATA_NOT_IN_DRIVER)
++#define ATH10K_RECOVERY_TIMEOUT_HZ			(5 * HZ)
++#define ATH10K_RECOVERY_MAX_FAIL_COUNT			4
+ 
+ struct ath10k;
+ 
+@@ -865,9 +868,6 @@ enum ath10k_dev_flags {
+ 	/* Per Station statistics service */
+ 	ATH10K_FLAG_PEER_STATS,
+ 
+-	/* Indicates that ath10k device is during recovery process and not complete */
+-	ATH10K_FLAG_RESTARTING,
+-
+ 	/* protected by conf_mutex */
+ 	ATH10K_FLAG_NAPI_ENABLED,
+ };
+@@ -1211,6 +1211,11 @@ struct ath10k {
+ 	struct work_struct bundle_tx_work;
+ 	struct work_struct tx_complete_work;
+ 
++	atomic_t pending_recovery;
++	unsigned int recovery_count;
++	/* continuous recovery fail count */
++	atomic_t fail_cont_count;
++
+ 	/* cycle count is reported twice for each visited channel during scan.
+ 	 * access protected by data_lock
+ 	 */
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 07fe05384cdfb1..9e6f294cd5b6ec 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -8156,7 +8156,12 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
+ 		ath10k_info(ar, "device successfully recovered\n");
+ 		ar->state = ATH10K_STATE_ON;
+ 		ieee80211_wake_queues(ar->hw);
+-		clear_bit(ATH10K_FLAG_RESTARTING, &ar->dev_flags);
++
++		/* Clear recovery state. */
++		complete(&ar->driver_recovery);
++		atomic_set(&ar->fail_cont_count, 0);
++		atomic_set(&ar->pending_recovery, 0);
++
+ 		if (ar->hw_params.hw_restart_disconnect) {
+ 			list_for_each_entry(arvif, &ar->arvifs, list) {
+ 				if (arvif->is_up && arvif->vdev_type == WMI_VDEV_TYPE_STA)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index df6a24f8f8d5b5..cb8ae751eb3121 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4,6 +4,7 @@
+  * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
+  * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+  * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #include <linux/skbuff.h>
+@@ -1941,6 +1942,11 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
+ 	}
+ 
+ 	wait_event_timeout(ar->wmi.tx_credits_wq, ({
++		if (ar->state == ATH10K_STATE_WEDGED) {
++			ret = -ESHUTDOWN;
++			ath10k_dbg(ar, ATH10K_DBG_WMI,
++				   "drop wmi command %d, hardware is wedged\n", cmd_id);
++		}
+ 		/* try to send pending beacons first. they take priority */
+ 		ath10k_wmi_tx_beacons_nowait(ar);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 6317c6d4c04371..12e35d5a744eef 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -84,6 +84,7 @@ int ath12k_dp_peer_setup(struct ath12k *ar, int vdev_id, const u8 *addr)
+ 	ret = ath12k_dp_rx_peer_frag_setup(ar, addr, vdev_id);
+ 	if (ret) {
+ 		ath12k_warn(ab, "failed to setup rx defrag context\n");
++		tid--;
+ 		goto peer_clean;
+ 	}
+ 
+@@ -101,7 +102,7 @@ int ath12k_dp_peer_setup(struct ath12k *ar, int vdev_id, const u8 *addr)
+ 		return -ENOENT;
+ 	}
+ 
+-	for (; tid >= 0; tid--)
++	for (tid--; tid >= 0; tid--)
+ 		ath12k_dp_rx_peer_tid_delete(ar, peer, tid);
+ 
+ 	spin_unlock_bh(&ab->base_lock);
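
Both ath12k/dp.c hunks fix the same off-by-one: when step tid fails, the rollback must start at tid - 1, the last step that actually completed; starting at tid tears down state that was never set up. The canonical partial-init rollback:

#include <stdio.h>

#define NUM_TIDS 8

static int setup_tid(int tid)     { return tid == 5 ? -1 : 0; } /* step 5 fails */
static void teardown_tid(int tid) { printf("teardown %d\n", tid); }

static int setup_all(void)
{
	int tid;

	for (tid = 0; tid < NUM_TIDS; tid++) {
		if (setup_tid(tid))
			goto rollback;
	}
	return 0;

rollback:
	/* tid itself never completed: unwind only 0..tid-1 */
	for (tid--; tid >= 0; tid--)
		teardown_tid(tid);
	return -1;
}

int main(void)
{
	(void)setup_all();   /* prints teardown 4..0, not 5 */
	return 0;
}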
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index 8254dc10b53bbf..ec77ad498b33a2 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -1478,7 +1478,7 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
+ 		.download_calib = true,
+ 		.supports_suspend = false,
+ 		.tcl_ring_retry = true,
+-		.reoq_lut_support = false,
++		.reoq_lut_support = true,
+ 		.supports_shadow_regs = false,
+ 
+ 		.num_tcl_banks = 48,
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 23469d0cc9b34e..a885dd168a372a 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -602,6 +602,33 @@ ath12k_mac_get_tx_arvif(struct ath12k_link_vif *arvif,
+ 	return NULL;
+ }
+ 
++static const u8 *ath12k_mac_get_tx_bssid(struct ath12k_link_vif *arvif)
++{
++	struct ieee80211_bss_conf *link_conf;
++	struct ath12k_link_vif *tx_arvif;
++	struct ath12k *ar = arvif->ar;
++
++	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
++
++	link_conf = ath12k_mac_get_link_bss_conf(arvif);
++	if (!link_conf) {
++		ath12k_warn(ar->ab,
++			    "unable to access bss link conf for link %u required to retrieve transmitting link conf\n",
++			    arvif->link_id);
++		return NULL;
++	}
++	if (link_conf->vif->type == NL80211_IFTYPE_STATION) {
++		if (link_conf->nontransmitted)
++			return link_conf->transmitter_bssid;
++	} else {
++		tx_arvif = ath12k_mac_get_tx_arvif(arvif, link_conf);
++		if (tx_arvif)
++			return tx_arvif->bssid;
++	}
++
++	return NULL;
++}
++
+ struct ieee80211_bss_conf *
+ ath12k_mac_get_link_bss_conf(struct ath12k_link_vif *arvif)
+ {
+@@ -1691,8 +1718,6 @@ static void ath12k_control_beaconing(struct ath12k_link_vif *arvif,
+ {
+ 	struct ath12k_wmi_vdev_up_params params = {};
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+-	struct ieee80211_bss_conf *link_conf;
+-	struct ath12k_link_vif *tx_arvif;
+ 	struct ath12k *ar = arvif->ar;
+ 	int ret;
+ 
+@@ -1723,18 +1748,8 @@ static void ath12k_control_beaconing(struct ath12k_link_vif *arvif,
+ 	params.vdev_id = arvif->vdev_id;
+ 	params.aid = ahvif->aid;
+ 	params.bssid = arvif->bssid;
+-
+-	link_conf = ath12k_mac_get_link_bss_conf(arvif);
+-	if (!link_conf) {
+-		ath12k_warn(ar->ab,
+-			    "unable to access bss link conf for link %u required to retrieve transmitting link conf\n",
+-			    arvif->link_id);
+-		return;
+-	}
+-
+-	tx_arvif = ath12k_mac_get_tx_arvif(arvif, link_conf);
+-	if (tx_arvif) {
+-		params.tx_bssid = tx_arvif->bssid;
++	params.tx_bssid = ath12k_mac_get_tx_bssid(arvif);
++	if (params.tx_bssid) {
+ 		params.nontx_profile_idx = info->bssid_index;
+ 		params.nontx_profile_cnt = 1 << info->bssid_indicator;
+ 	}
+@@ -3265,6 +3280,11 @@ static void ath12k_bss_assoc(struct ath12k *ar,
+ 	params.vdev_id = arvif->vdev_id;
+ 	params.aid = ahvif->aid;
+ 	params.bssid = arvif->bssid;
++	params.tx_bssid = ath12k_mac_get_tx_bssid(arvif);
++	if (params.tx_bssid) {
++		params.nontx_profile_idx = bss_conf->bssid_index;
++		params.nontx_profile_cnt = 1 << bss_conf->bssid_indicator;
++	}
+ 	ret = ath12k_wmi_vdev_up(ar, &params);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to set vdev %d up: %d\n",
+@@ -10010,7 +10030,7 @@ ath12k_mac_update_vif_chan(struct ath12k *ar,
+ 			   int n_vifs)
+ {
+ 	struct ath12k_wmi_vdev_up_params params = {};
+-	struct ath12k_link_vif *arvif, *tx_arvif;
++	struct ath12k_link_vif *arvif;
+ 	struct ieee80211_bss_conf *link_conf;
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ieee80211_vif *vif;
+@@ -10082,10 +10102,8 @@ ath12k_mac_update_vif_chan(struct ath12k *ar,
+ 		params.vdev_id = arvif->vdev_id;
+ 		params.aid = ahvif->aid;
+ 		params.bssid = arvif->bssid;
+-
+-		tx_arvif = ath12k_mac_get_tx_arvif(arvif, link_conf);
+-		if (tx_arvif) {
+-			params.tx_bssid = tx_arvif->bssid;
++		params.tx_bssid = ath12k_mac_get_tx_bssid(arvif);
++		if (params.tx_bssid) {
+ 			params.nontx_profile_idx = link_conf->bssid_index;
+ 			params.nontx_profile_cnt = 1 << link_conf->bssid_indicator;
+ 		}
+@@ -12504,6 +12522,7 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
+ 
+ 	wiphy->mbssid_max_interfaces = mbssid_max_interfaces;
+ 	wiphy->ema_max_profile_periodicity = TARGET_EMA_MAX_PROFILE_PERIOD;
++	ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
+ 
+ 	if (is_6ghz) {
+ 		wiphy_ext_feature_set(wiphy,
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 745d017c5aa88c..1d0d4a66894641 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -6140,6 +6140,11 @@ static int wmi_process_mgmt_tx_comp(struct ath12k *ar, u32 desc_id,
+ 	dma_unmap_single(ar->ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
+ 
+ 	info = IEEE80211_SKB_CB(msdu);
++	memset(&info->status, 0, sizeof(info->status));
++
++	/* skip tx rate update from ieee80211_status */
++	info->status.rates[0].idx = -1;
++
+ 	if ((!(info->flags & IEEE80211_TX_CTL_NO_ACK)) && !status)
+ 		info->flags |= IEEE80211_TX_STAT_ACK;
+ 
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index 8e58e97a148f8c..e9e007b48645cf 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -1575,8 +1575,11 @@ il4965_tx_cmd_build_rate(struct il_priv *il,
+ 	    || rate_idx > RATE_COUNT_LEGACY)
+ 		rate_idx = rate_lowest_index(&il->bands[info->band], sta);
+ 	/* For 5 GHZ band, remap mac80211 rate indices into driver indices */
+-	if (info->band == NL80211_BAND_5GHZ)
++	if (info->band == NL80211_BAND_5GHZ) {
+ 		rate_idx += IL_FIRST_OFDM_RATE;
++		if (rate_idx > IL_LAST_OFDM_RATE)
++			rate_idx = IL_LAST_OFDM_RATE;
++	}
+ 	/* Get PLCP rate for tx_cmd->rate_n_flags */
+ 	rate_plcp = il_rates[rate_idx].plcp;
+ 	/* Zero out flags for this packet */
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+index 8879e668ef0da0..ed964103281ed5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+@@ -2899,7 +2899,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv,
+ 		/* Repeat initial/next rate.
+ 		 * For legacy IWL_NUMBER_TRY == 1, this loop will not execute.
+ 		 * For HT IWL_HT_NUMBER_TRY == 3, this executes twice. */
+-		while (repeat_rate > 0 && (index < LINK_QUAL_MAX_RETRY_NUM)) {
++		while (repeat_rate > 0 && index < (LINK_QUAL_MAX_RETRY_NUM - 1)) {
+ 			if (is_legacy(tbl_type.lq_type)) {
+ 				if (ant_toggle_cnt < NUM_TRY_BEFORE_ANT_TOGGLE)
+ 					ant_toggle_cnt++;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index ea739ebe7cb0f1..95a732efce45de 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -3008,6 +3008,7 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt,
+ 	struct iwl_fw_dump_desc *desc;
+ 	unsigned int delay = 0;
+ 	bool monitor_only = false;
++	int ret;
+ 
+ 	if (trigger) {
+ 		u16 occurrences = le16_to_cpu(trigger->occurrences) - 1;
+@@ -3038,7 +3039,11 @@ int iwl_fw_dbg_collect(struct iwl_fw_runtime *fwrt,
+ 	desc->trig_desc.type = cpu_to_le32(trig);
+ 	memcpy(desc->trig_desc.data, str, len);
+ 
+-	return iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay);
++	ret = iwl_fw_dbg_collect_desc(fwrt, desc, monitor_only, delay);
++	if (ret)
++		kfree(desc);
++
++	return ret;
+ }
+ IWL_EXPORT_SYMBOL(iwl_fw_dbg_collect);
+ 
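The iwl_fw_dbg_collect() fix is an ownership-transfer leak: iwl_fw_dbg_collect_desc() takes ownership of the descriptor only on success, so on failure the caller still owns it and must free it. The pattern:

#include <stdlib.h>

/* Consumes p on success (returns 0); on failure the caller keeps p. */
static int consume(void *p, int fail)
{
	if (fail)
		return -1;
	free(p);          /* ownership transferred: consumer frees it */
	return 0;
}

static int collect(int fail)
{
	void *desc = malloc(64);
	int ret;

	if (!desc)
		return -1;
	ret = consume(desc, fail);
	if (ret)
		free(desc);   /* not consumed: free it here or it leaks */
	return ret;
}

int main(void)
{
	collect(0);
	collect(1);
	return 0;
}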
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 9504a0cb8b13d4..557a97144f90d3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -298,13 +298,17 @@ static void iwl_get_ucode_api_versions(struct iwl_trans *trans,
+ 	const struct iwl_family_base_params *base = trans->mac_cfg->base;
+ 	const struct iwl_rf_cfg *cfg = trans->cfg;
+ 
+-	if (!base->ucode_api_max) {
++	/* if the MAC doesn't have a range or if its range is higher than the RF's */
++	if (!base->ucode_api_max ||
++	    (cfg->ucode_api_max && base->ucode_api_min > cfg->ucode_api_max)) {
+ 		*api_min = cfg->ucode_api_min;
+ 		*api_max = cfg->ucode_api_max;
+ 		return;
+ 	}
+ 
+-	if (!cfg->ucode_api_max) {
++	/* if the RF doesn't have a range or if its range is higher than the MAC's */
++	if (!cfg->ucode_api_max ||
++	    (base->ucode_api_max && cfg->ucode_api_min > base->ucode_api_max)) {
+ 		*api_min = base->ucode_api_min;
+ 		*api_max = base->ucode_api_max;
+ 		return;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/agg.c b/drivers/net/wireless/intel/iwlwifi/mld/agg.c
+index 6b349270481d45..3bf36f8f687425 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/agg.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/agg.c
+@@ -305,10 +305,15 @@ iwl_mld_reorder(struct iwl_mld *mld, struct napi_struct *napi,
+ 	 * already ahead and it will be dropped.
+ 	 * If the last sub-frame is not on this queue - we will get frame
+ 	 * release notification with up to date NSSN.
++	 * If this is the first frame that is stored in the buffer, the head_sn
++	 * may be outdated. Update it based on the last NSSN to make sure it
++	 * will be released when the frame release notification arrives.
+ 	 */
+ 	if (!amsdu || last_subframe)
+ 		iwl_mld_reorder_release_frames(mld, sta, napi, baid_data,
+ 					       buffer, nssn);
++	else if (buffer->num_stored == 1)
++		buffer->head_sn = nssn;
+ 
+ 	return IWL_MLD_BUFFERED_SKB;
+ }
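
This agg.c change (mirrored in mvm/rxmq.c further down) repairs a stale head pointer in the RX reorder buffer: when the very first frame is stored rather than released, head_sn may predate the current window, so it is advanced to the latest NSSN to guarantee the frame is released when the frame-release notification arrives. A toy model of the rule:

#include <stdio.h>

struct reorder_buf {
	unsigned int head_sn;      /* next sequence number expected out */
	unsigned int num_stored;
};

static void store_frame(struct reorder_buf *b, unsigned int nssn)
{
	b->num_stored++;
	/* First buffered frame: head_sn may be stale from before the
	 * window moved, so sync it to the current NSSN. */
	if (b->num_stored == 1)
		b->head_sn = nssn;
}

int main(void)
{
	struct reorder_buf b = { .head_sn = 0, .num_stored = 0 };

	store_frame(&b, 100);        /* head_sn jumps 0 -> 100 */
	printf("head_sn=%u\n", b.head_sn);
	return 0;
}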
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/iface.h b/drivers/net/wireless/intel/iwlwifi/mld/iface.h
+index 49e2ce65557d6e..f64d0dcbb17894 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/iface.h
++++ b/drivers/net/wireless/intel/iwlwifi/mld/iface.h
+@@ -87,6 +87,8 @@ enum iwl_mld_emlsr_exit {
+  * @last_exit_reason: Reason for the last EMLSR exit
+  * @last_exit_ts: Time of the last EMLSR exit (if @last_exit_reason is non-zero)
+  * @exit_repeat_count: Number of times EMLSR was exited for the same reason
++ * @last_entry_ts: the time of the last EMLSR entry (if iwl_mld_emlsr_active()
++ *	is true)
+  * @unblock_tpt_wk: Unblock EMLSR because the throughput limit was reached
+  * @check_tpt_wk: a worker to check if IWL_MLD_EMLSR_BLOCKED_TPT should be
+  *	added, for example if there is no longer enough traffic.
+@@ -105,6 +107,7 @@ struct iwl_mld_emlsr {
+ 		enum iwl_mld_emlsr_exit last_exit_reason;
+ 		unsigned long last_exit_ts;
+ 		u8 exit_repeat_count;
++		unsigned long last_entry_ts;
+ 	);
+ 
+ 	struct wiphy_work unblock_tpt_wk;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/link.c b/drivers/net/wireless/intel/iwlwifi/mld/link.c
+index d0f56189ad3fd0..2bf4c773ce8aa1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/link.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/link.c
+@@ -864,21 +864,23 @@ void iwl_mld_handle_missed_beacon_notif(struct iwl_mld *mld,
+ {
+ 	const struct iwl_missed_beacons_notif *notif = (const void *)pkt->data;
+ 	union iwl_dbg_tlv_tp_data tp_data = { .fw_pkt = pkt };
+-	u32 link_id = le32_to_cpu(notif->link_id);
++	u32 fw_link_id = le32_to_cpu(notif->link_id);
+ 	u32 missed_bcon = le32_to_cpu(notif->consec_missed_beacons);
+ 	u32 missed_bcon_since_rx =
+ 		le32_to_cpu(notif->consec_missed_beacons_since_last_rx);
+ 	u32 scnd_lnk_bcn_lost =
+ 		le32_to_cpu(notif->consec_missed_beacons_other_link);
+ 	struct ieee80211_bss_conf *link_conf =
+-		iwl_mld_fw_id_to_link_conf(mld, link_id);
++		iwl_mld_fw_id_to_link_conf(mld, fw_link_id);
+ 	u32 bss_param_ch_cnt_link_id;
+ 	struct ieee80211_vif *vif;
++	u8 link_id;
+ 
+ 	if (WARN_ON(!link_conf))
+ 		return;
+ 
+ 	vif = link_conf->vif;
++	link_id = link_conf->link_id;
+ 	bss_param_ch_cnt_link_id = link_conf->bss_param_ch_cnt_link_id;
+ 
+ 	IWL_DEBUG_INFO(mld,
+@@ -890,7 +892,7 @@ void iwl_mld_handle_missed_beacon_notif(struct iwl_mld *mld,
+ 
+ 	mld->trans->dbg.dump_file_name_ext_valid = true;
+ 	snprintf(mld->trans->dbg.dump_file_name_ext, IWL_FW_INI_MAX_NAME,
+-		 "LinkId_%d_MacType_%d", link_id,
++		 "LinkId_%d_MacType_%d", fw_link_id,
+ 		 iwl_mld_mac80211_iftype_to_fw(vif));
+ 
+ 	iwl_dbg_tlv_time_point(&mld->fwrt,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+index 4ba050397632aa..c1e56c4010da16 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+@@ -1002,6 +1002,7 @@ int iwl_mld_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 
+ 		/* Indicate to mac80211 that EML is enabled */
+ 		vif->driver_flags |= IEEE80211_VIF_EML_ACTIVE;
++		mld_vif->emlsr.last_entry_ts = jiffies;
+ 
+ 		if (vif->active_links & BIT(mld_vif->emlsr.selected_links))
+ 			mld_vif->emlsr.primary = mld_vif->emlsr.selected_primary;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mld.c b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+index 1774bb84dd3fad..2a6088e8f4c74f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mld.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+@@ -357,7 +357,7 @@ iwl_mld_configure_trans(struct iwl_op_mode *op_mode)
+ 	trans->conf.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds);
+ 
+ 	trans->conf.rx_mpdu_cmd = REPLY_RX_MPDU_CMD;
+-	trans->conf.rx_mpdu_cmd_hdr_size = sizeof(struct iwl_rx_mpdu_res_start);
++	trans->conf.rx_mpdu_cmd_hdr_size = sizeof(struct iwl_rx_mpdu_desc);
+ 	trans->conf.wide_cmd_header = true;
+ 
+ 	iwl_trans_op_mode_enter(trans, op_mode);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mlo.c b/drivers/net/wireless/intel/iwlwifi/mld/mlo.c
+index dba5379ed0090c..d71259ce1db171 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mlo.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mlo.c
+@@ -530,10 +530,12 @@ void iwl_mld_emlsr_check_tpt(struct wiphy *wiphy, struct wiphy_work *wk)
+ 	/*
+ 	 * TPT is unblocked, need to check if the TPT criteria is still met.
+ 	 *
+-	 * If EMLSR is active, then we also need to check the secondar link
+-	 * requirements.
++	 * If EMLSR is active for at least 5 seconds, then we also
++	 * need to check the secondary link requirements.
+ 	 */
+-	if (iwl_mld_emlsr_active(vif)) {
++	if (iwl_mld_emlsr_active(vif) &&
++	    time_is_before_jiffies(mld_vif->emlsr.last_entry_ts +
++				   IWL_MLD_TPT_COUNT_WINDOW)) {
+ 		sec_link_id = iwl_mld_get_other_link(vif, iwl_mld_get_primary_link(vif));
+ 		sec_link = iwl_mld_link_dereference_check(mld_vif, sec_link_id);
+ 		if (WARN_ON_ONCE(!sec_link))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/scan.c b/drivers/net/wireless/intel/iwlwifi/mld/scan.c
+index 3fce7cd2d512e8..33e7e4d1ab9087 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/scan.c
+@@ -359,7 +359,7 @@ iwl_mld_scan_fits(struct iwl_mld *mld, int n_ssids,
+ 		  struct ieee80211_scan_ies *ies, int n_channels)
+ {
+ 	return ((n_ssids <= PROBE_OPTION_MAX) &&
+-		(n_channels <= mld->fw->ucode_capa.n_scan_channels) &
++		(n_channels <= mld->fw->ucode_capa.n_scan_channels) &&
+ 		(ies->common_ie_len + ies->len[NL80211_BAND_2GHZ] +
+ 		 ies->len[NL80211_BAND_5GHZ] + ies->len[NL80211_BAND_6GHZ] <=
+ 		 iwl_mld_scan_max_template_size()));
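
The scan-fits fix (repeated for mvm/scan.c below) swaps bitwise & for logical && between comparisons. Since each comparison yields 0 or 1, & happens to produce the right answer here, but it does not short-circuit and it silently breaks for any truthy operand that is not exactly 1. A minimal demonstration:

#include <stdio.h>

int main(void)
{
	int n_channels = 4, max = 8, flags = 0x2;

	/* Comparisons yield 0/1, so & works by accident: */
	printf("%d\n", (n_channels <= max) & (4 <= 8));   /* 1 */

	/* A truthy non-1 operand breaks &, but not &&: */
	printf("%d\n", (n_channels <= max) & flags);      /* 0: wrong */
	printf("%d\n", (n_channels <= max) && flags);     /* 1: right */
	return 0;
}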
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/scan.h b/drivers/net/wireless/intel/iwlwifi/mld/scan.h
+index 3ae940d55065c5..4044cac3f086bd 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/mld/scan.h
+@@ -130,7 +130,7 @@ struct iwl_mld_scan {
+ 	void *cmd;
+ 	unsigned long last_6ghz_passive_jiffies;
+ 	unsigned long last_start_time_jiffies;
+-	unsigned long last_mlo_scan_time;
++	u64 last_mlo_scan_time;
+ };
+ 
+ #endif /* __iwl_mld_scan_h__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 507c03198c9296..a521da8f108287 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2385,6 +2385,7 @@ static void iwl_mvm_convert_gtk_v2(struct iwl_wowlan_status_data *status,
+ 
+ 	status->gtk[0].len = data->key_len;
+ 	status->gtk[0].flags = data->key_flags;
++	status->gtk[0].id = status->gtk[0].flags & IWL_WOWLAN_GTK_IDX_MASK;
+ 
+ 	memcpy(status->gtk[0].key, data->key, sizeof(data->key));
+ 
+@@ -2735,6 +2736,7 @@ iwl_mvm_send_wowlan_get_status(struct iwl_mvm *mvm, u8 sta_id)
+ 		 * currently used key.
+ 		 */
+ 		status->gtk[0].flags = v6->gtk.key_index | BIT(7);
++		status->gtk[0].id = v6->gtk.key_index;
+ 	} else if (notif_ver == 7) {
+ 		struct iwl_wowlan_status_v7 *v7 = (void *)cmd.resp_pkt->data;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 077aadbf95db55..e0f87ebefc18af 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -854,10 +854,15 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
+ 	 * already ahead and it will be dropped.
+ 	 * If the last sub-frame is not on this queue - we will get frame
+ 	 * release notification with up to date NSSN.
++	 * If this is the first frame that is stored in the buffer, the head_sn
++	 * may be outdated. Update it based on the last NSSN to make sure it
++	 * will be released when the frame release notification arrives.
+ 	 */
+ 	if (!amsdu || last_subframe)
+ 		iwl_mvm_release_frames(mvm, sta, napi, baid_data,
+ 				       buffer, nssn);
++	else if (buffer->num_stored == 1)
++		buffer->head_sn = nssn;
+ 
+ 	spin_unlock_bh(&buffer->lock);
+ 	return true;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 60bd9c7e5f03d8..3dbda1e4a522b2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -835,7 +835,7 @@ static inline bool iwl_mvm_scan_fits(struct iwl_mvm *mvm, int n_ssids,
+ 				     int n_channels)
+ {
+ 	return ((n_ssids <= PROBE_OPTION_MAX) &&
+-		(n_channels <= mvm->fw->ucode_capa.n_scan_channels) &
++		(n_channels <= mvm->fw->ucode_capa.n_scan_channels) &&
+ 		(ies->common_ie_len +
+ 		 ies->len[NL80211_BAND_2GHZ] + ies->len[NL80211_BAND_5GHZ] +
+ 		 ies->len[NL80211_BAND_6GHZ] <=
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 3b7c12fc4f9e4e..df4eaaa7efd2b5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -1074,6 +1074,7 @@ void iwl_pcie_rx_allocator_work(struct work_struct *data);
+ 
+ /* common trans ops for all generations transports */
+ void iwl_trans_pcie_op_mode_enter(struct iwl_trans *trans);
++int _iwl_trans_pcie_start_hw(struct iwl_trans *trans);
+ int iwl_trans_pcie_start_hw(struct iwl_trans *trans);
+ void iwl_trans_pcie_op_mode_leave(struct iwl_trans *trans);
+ void iwl_trans_pcie_write8(struct iwl_trans *trans, u32 ofs, u8 val);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 5a9c3b7976a109..c7ada07a7075d1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -612,6 +612,11 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
+ 		msleep(10);
+ 		IWL_INFO(trans, "TOP reset successful, reinit now\n");
+ 		/* now load the firmware again properly */
++		ret = _iwl_trans_pcie_start_hw(trans);
++		if (ret) {
++			IWL_ERR(trans, "failed to start HW after TOP reset\n");
++			goto out;
++		}
+ 		trans_pcie->prph_scratch->ctrl_cfg.control.control_flags &=
+ 			~cpu_to_le32(IWL_PRPH_SCRATCH_TOP_RESET);
+ 		top_reset_done = true;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index cc4d289b110dc5..07a9255c986831 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1845,7 +1845,7 @@ static int iwl_pcie_gen2_force_power_gating(struct iwl_trans *trans)
+ 	return iwl_trans_pcie_sw_reset(trans, true);
+ }
+ 
+-static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans)
++int _iwl_trans_pcie_start_hw(struct iwl_trans *trans)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	int err;
+diff --git a/drivers/net/wireless/mediatek/mt76/channel.c b/drivers/net/wireless/mediatek/mt76/channel.c
+index cc2d888e3f17a5..77b75792eb488e 100644
+--- a/drivers/net/wireless/mediatek/mt76/channel.c
++++ b/drivers/net/wireless/mediatek/mt76/channel.c
+@@ -173,13 +173,13 @@ void mt76_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 	if (!mlink)
+ 		goto out;
+ 
+-	if (link_conf != &vif->bss_conf)
++	if (mlink != (struct mt76_vif_link *)vif->drv_priv)
+ 		rcu_assign_pointer(mvif->link[link_id], NULL);
+ 
+ 	dev->drv->vif_link_remove(phy, vif, link_conf, mlink);
+ 	mlink->ctx = NULL;
+ 
+-	if (link_conf != &vif->bss_conf)
++	if (mlink != (struct mt76_vif_link *)vif->drv_priv)
+ 		kfree_rcu(mlink, rcu_head);
+ 
+ out:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 74b75035d36148..0ecf77fcbe3d07 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -1874,6 +1874,9 @@ mt76_vif_link(struct mt76_dev *dev, struct ieee80211_vif *vif, int link_id)
+ 	struct mt76_vif_link *mlink = (struct mt76_vif_link *)vif->drv_priv;
+ 	struct mt76_vif_data *mvif = mlink->mvif;
+ 
++	if (!link_id)
++		return mlink;
++
+ 	return mt76_dereference(mvif->link[link_id], dev);
+ }
+ 
+@@ -1884,7 +1887,7 @@ mt76_vif_conf_link(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ 	struct mt76_vif_link *mlink = (struct mt76_vif_link *)vif->drv_priv;
+ 	struct mt76_vif_data *mvif = mlink->mvif;
+ 
+-	if (link_conf == &vif->bss_conf)
++	if (link_conf == &vif->bss_conf || !link_conf->link_id)
+ 		return mlink;
+ 
+ 	return mt76_dereference(mvif->link[link_conf->link_id], dev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index c6584d2b750928..c1cfdbc2fe8485 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -208,6 +208,9 @@ mt7915_mcu_set_timeout(struct mt76_dev *mdev, int cmd)
+ 	case MCU_EXT_CMD_BSS_INFO_UPDATE:
+ 		mdev->mcu.timeout = 2 * HZ;
+ 		return;
++	case MCU_EXT_CMD_EFUSE_BUFFER_MODE:
++		mdev->mcu.timeout = 10 * HZ;
++		return;
+ 	default:
+ 		break;
+ 	}
+@@ -2110,16 +2113,21 @@ static int mt7915_load_firmware(struct mt7915_dev *dev)
+ {
+ 	int ret;
+ 
+-	/* make sure fw is download state */
+-	if (mt7915_firmware_state(dev, false)) {
+-		/* restart firmware once */
+-		mt76_connac_mcu_restart(&dev->mt76);
+-		ret = mt7915_firmware_state(dev, false);
+-		if (ret) {
+-			dev_err(dev->mt76.dev,
+-				"Firmware is not ready for download\n");
+-			return ret;
+-		}
++	/* Release Semaphore if taken by a previous failed attempt */
++	ret = mt76_connac_mcu_patch_sem_ctrl(&dev->mt76, false);
++	if (ret != PATCH_REL_SEM_SUCCESS) {
++		dev_err(dev->mt76.dev, "Could not release semaphore\n");
++		/* Continue anyway */
++	}
++
++	/* Always restart MCU firmware */
++	mt76_connac_mcu_restart(&dev->mt76);
++
++	/* Check if MCU is ready */
++	ret = mt7915_firmware_state(dev, false);
++	if (ret) {
++		dev_err(dev->mt76.dev, "Firmware did not enter download state\n");
++		return ret;
+ 	}
+ 
+ 	ret = mt76_connac2_load_patch(&dev->mt76, fw_name_var(dev, ROM_PATCH));
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 92148518f6a515..37b21ad828b966 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -1084,9 +1084,9 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 		if (wcid->offchannel)
+ 			mlink = rcu_dereference(mvif->mt76.offchannel_link);
+ 		if (!mlink)
+-			mlink = &mvif->deflink.mt76;
++			mlink = rcu_dereference(mvif->mt76.link[wcid->link_id]);
+ 
+-		txp->fw.bss_idx = mlink->idx;
++		txp->fw.bss_idx = mlink ? mlink->idx : mvif->deflink.mt76.idx;
+ 	}
+ 
+ 	txp->fw.token = cpu_to_le16(id);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 898f597f70a96d..d080469264cf89 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -572,8 +572,11 @@ static int _rtl_pci_init_one_rxdesc(struct ieee80211_hw *hw,
+ 		dma_map_single(&rtlpci->pdev->dev, skb_tail_pointer(skb),
+ 			       rtlpci->rxbuffersize, DMA_FROM_DEVICE);
+ 	bufferaddress = *((dma_addr_t *)skb->cb);
+-	if (dma_mapping_error(&rtlpci->pdev->dev, bufferaddress))
++	if (dma_mapping_error(&rtlpci->pdev->dev, bufferaddress)) {
++		if (!new_skb)
++			kfree_skb(skb);
+ 		return 0;
++	}
+ 	rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;
+ 	if (rtlpriv->use_new_trx_flow) {
+ 		/* skb->cb may be 64 bit address */
+@@ -802,13 +805,19 @@ static void _rtl_pci_rx_interrupt(struct ieee80211_hw *hw)
+ 		skb = new_skb;
+ no_new:
+ 		if (rtlpriv->use_new_trx_flow) {
+-			_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
+-						 rxring_idx,
+-						 rtlpci->rx_ring[rxring_idx].idx);
++			if (!_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
++						      rxring_idx,
++						      rtlpci->rx_ring[rxring_idx].idx)) {
++				if (new_skb)
++					dev_kfree_skb_any(skb);
++			}
+ 		} else {
+-			_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
+-						 rxring_idx,
+-						 rtlpci->rx_ring[rxring_idx].idx);
++			if (!_rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
++						      rxring_idx,
++						      rtlpci->rx_ring[rxring_idx].idx)) {
++				if (new_skb)
++					dev_kfree_skb_any(skb);
++			}
+ 			if (rtlpci->rx_ring[rxring_idx].idx ==
+ 			    rtlpci->rxringcount - 1)
+ 				rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc,
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
+index 806f42429a2902..b18019b53181c7 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.c
++++ b/drivers/net/wireless/realtek/rtw89/chan.c
+@@ -2816,6 +2816,9 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+ 	rtwvif_link->chanctx_assigned = true;
+ 	cfg->ref_count++;
+ 
++	if (rtwdev->scanning)
++		rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
++
+ 	if (list_empty(&rtwvif->mgnt_entry))
+ 		list_add_tail(&rtwvif->mgnt_entry, &mgnt->active_list);
+ 
+@@ -2855,6 +2858,9 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ 	rtwvif_link->chanctx_assigned = false;
+ 	cfg->ref_count--;
+ 
++	if (rtwdev->scanning)
++		rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
++
+ 	if (!rtw89_vif_is_active_role(rtwvif))
+ 		list_del_init(&rtwvif->mgnt_entry);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index 5ccf0cbaed2fa4..ea3664103fbf86 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -3836,13 +3836,13 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
+ 
+ 		switch (policy_type) {
+ 		case BTC_CXP_OFFE_2GBWISOB: /* for normal-case */
+-			_slot_set(btc, CXST_E2G, 0, tbl_w1, SLOT_ISO);
++			_slot_set(btc, CXST_E2G, 5, tbl_w1, SLOT_ISO);
+ 			_slot_set_le(btc, CXST_EBT, s_def[CXST_EBT].dur,
+ 				     s_def[CXST_EBT].cxtbl, s_def[CXST_EBT].cxtype);
+ 			_slot_set_dur(btc, CXST_EBT, dur_2);
+ 			break;
+ 		case BTC_CXP_OFFE_2GISOB: /* for bt no-link */
+-			_slot_set(btc, CXST_E2G, 0, cxtbl[1], SLOT_ISO);
++			_slot_set(btc, CXST_E2G, 5, cxtbl[1], SLOT_ISO);
+ 			_slot_set_le(btc, CXST_EBT, s_def[CXST_EBT].dur,
+ 				     s_def[CXST_EBT].cxtbl, s_def[CXST_EBT].cxtype);
+ 			_slot_set_dur(btc, CXST_EBT, dur_2);
+@@ -3868,15 +3868,15 @@ void rtw89_btc_set_policy_v1(struct rtw89_dev *rtwdev, u16 policy_type)
+ 			break;
+ 		case BTC_CXP_OFFE_2GBWMIXB:
+ 			if (a2dp->exist)
+-				_slot_set(btc, CXST_E2G, 0, cxtbl[2], SLOT_MIX);
++				_slot_set(btc, CXST_E2G, 5, cxtbl[2], SLOT_MIX);
+ 			else
+-				_slot_set(btc, CXST_E2G, 0, tbl_w1, SLOT_MIX);
++				_slot_set(btc, CXST_E2G, 5, tbl_w1, SLOT_MIX);
+ 			_slot_set_le(btc, CXST_EBT, s_def[CXST_EBT].dur,
+ 				     s_def[CXST_EBT].cxtbl, s_def[CXST_EBT].cxtype);
+ 			break;
+ 		case BTC_CXP_OFFE_WL: /* for 4-way */
+-			_slot_set(btc, CXST_E2G, 0, cxtbl[1], SLOT_MIX);
+-			_slot_set(btc, CXST_EBT, 0, cxtbl[1], SLOT_MIX);
++			_slot_set(btc, CXST_E2G, 5, cxtbl[1], SLOT_MIX);
++			_slot_set(btc, CXST_EBT, 5, cxtbl[1], SLOT_MIX);
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index c886dd2a73b412..894ab7ab94ccb6 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -2839,6 +2839,9 @@ static enum rtw89_ps_mode rtw89_update_ps_mode(struct rtw89_dev *rtwdev)
+ {
+ 	const struct rtw89_chip_info *chip = rtwdev->chip;
+ 
++	if (rtwdev->hci.type != RTW89_HCI_TYPE_PCIE)
++		return RTW89_PS_MODE_NONE;
++
+ 	if (rtw89_disable_ps_mode || !chip->ps_mode_supported ||
+ 	    RTW89_CHK_FW_FEATURE(NO_DEEP_PS, &rtwdev->fw))
+ 		return RTW89_PS_MODE_NONE;
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 00b65b2995cffc..68a937710c690a 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -838,6 +838,7 @@ static const struct __fw_feat_cfg fw_feat_tbl[] = {
+ 	__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 40, 0, CRASH_TRIGGER),
+ 	__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 56, 10, BEACON_FILTER),
+ 	__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 80, 0, WOW_REASON_V1),
++	__CFG_FW_FEAT(RTL8852C, ge, 0, 27, 128, 0, BEACON_LOSS_COUNT_V1),
+ 	__CFG_FW_FEAT(RTL8922A, ge, 0, 34, 30, 0, CRASH_TRIGGER),
+ 	__CFG_FW_FEAT(RTL8922A, ge, 0, 34, 11, 0, MACID_PAUSE_SLEEP),
+ 	__CFG_FW_FEAT(RTL8922A, ge, 0, 34, 35, 0, SCAN_OFFLOAD),
+@@ -6506,13 +6507,18 @@ static int rtw89_fw_read_c2h_reg(struct rtw89_dev *rtwdev,
+ 	const struct rtw89_chip_info *chip = rtwdev->chip;
+ 	struct rtw89_fw_info *fw_info = &rtwdev->fw;
+ 	const u32 *c2h_reg = chip->c2h_regs;
+-	u32 ret;
++	u32 ret, timeout;
+ 	u8 i, val;
+ 
+ 	info->id = RTW89_FWCMD_C2HREG_FUNC_NULL;
+ 
++	if (rtwdev->hci.type == RTW89_HCI_TYPE_USB)
++		timeout = RTW89_C2H_TIMEOUT_USB;
++	else
++		timeout = RTW89_C2H_TIMEOUT;
++
+ 	ret = read_poll_timeout_atomic(rtw89_read8, val, val, 1,
+-				       RTW89_C2H_TIMEOUT, false, rtwdev,
++				       timeout, false, rtwdev,
+ 				       chip->c2h_ctrl_reg);
+ 	if (ret) {
+ 		rtw89_warn(rtwdev, "c2h reg timeout\n");
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index 0fcc824e41be6d..a6b5dc6851f11c 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -112,6 +112,8 @@ struct rtw89_h2creg_sch_tx_en {
+ #define RTW89_C2HREG_HDR_LEN 2
+ #define RTW89_H2CREG_HDR_LEN 2
+ #define RTW89_C2H_TIMEOUT 1000000
++#define RTW89_C2H_TIMEOUT_USB 4000
++
+ struct rtw89_mac_c2h_info {
+ 	u8 id;
+ 	u8 content_len;
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 9f0e30e7500927..94bcdf6abca5c9 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -1441,6 +1441,23 @@ void rtw89_mac_notify_wake(struct rtw89_dev *rtwdev)
+ 	rtw89_mac_send_rpwm(rtwdev, state, true);
+ }
+ 
++static void rtw89_mac_power_switch_boot_mode(struct rtw89_dev *rtwdev)
++{
++	u32 boot_mode;
++
++	if (rtwdev->hci.type != RTW89_HCI_TYPE_USB)
++		return;
++
++	boot_mode = rtw89_read32_mask(rtwdev, R_AX_GPIO_MUXCFG, B_AX_BOOT_MODE);
++	if (!boot_mode)
++		return;
++
++	rtw89_write32_clr(rtwdev, R_AX_SYS_PW_CTRL, B_AX_APFN_ONMAC);
++	rtw89_write32_clr(rtwdev, R_AX_SYS_STATUS1, B_AX_AUTO_WLPON);
++	rtw89_write32_clr(rtwdev, R_AX_GPIO_MUXCFG, B_AX_BOOT_MODE);
++	rtw89_write32_clr(rtwdev, R_AX_RSV_CTRL, B_AX_R_DIS_PRST);
++}
++
+ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ {
+ #define PWR_ACT 1
+@@ -1451,6 +1468,8 @@ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ 	int ret;
+ 	u8 val;
+ 
++	rtw89_mac_power_switch_boot_mode(rtwdev);
++
+ 	if (on) {
+ 		cfg_seq = chip->pwr_on_seq;
+ 		cfg_func = chip->ops->pwr_on_func;
+diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
+index f05c81ae586949..9d9e1b02bfc7c9 100644
+--- a/drivers/net/wireless/realtek/rtw89/reg.h
++++ b/drivers/net/wireless/realtek/rtw89/reg.h
+@@ -182,6 +182,7 @@
+ 
+ #define R_AX_SYS_STATUS1 0x00F4
+ #define B_AX_SEL_0XC0_MASK GENMASK(17, 16)
++#define B_AX_AUTO_WLPON BIT(10)
+ #define B_AX_PAD_HCI_SEL_V2_MASK GENMASK(5, 3)
+ #define MAC_AX_HCI_SEL_SDIO_UART 0
+ #define MAC_AX_HCI_SEL_MULTI_USB 1
+diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
+index 34a0ab49bd7a9e..b6df7900fdc873 100644
+--- a/drivers/net/wireless/realtek/rtw89/wow.c
++++ b/drivers/net/wireless/realtek/rtw89/wow.c
+@@ -1412,6 +1412,8 @@ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev,
+ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ 					   struct rtw89_vif_link *rtwvif_link)
+ {
++	static const u8 basic_rate_ie[] = {WLAN_EID_SUPP_RATES, 0x08,
++		 0x0c, 0x12, 0x18, 0x24, 0x30, 0x48, 0x60, 0x6c};
+ 	struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ 	struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+ 	u8 num = nd_config->n_match_sets, i;
+@@ -1423,10 +1425,11 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ 		skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ 					     nd_config->match_sets[i].ssid.ssid,
+ 					     nd_config->match_sets[i].ssid.ssid_len,
+-					     nd_config->ie_len);
++					     nd_config->ie_len + sizeof(basic_rate_ie));
+ 		if (!skb)
+ 			return -ENOMEM;
+ 
++		skb_put_data(skb, basic_rate_ie, sizeof(basic_rate_ie));
+ 		skb_put_data(skb, nd_config->ie, nd_config->ie_len);
+ 
+ 		info = kzalloc(sizeof(*info), GFP_KERNEL);
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 9bac50963477f1..a11a0e94940058 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -638,8 +638,6 @@ static int xennet_xdp_xmit_one(struct net_device *dev,
+ 	tx_stats->packets++;
+ 	u64_stats_update_end(&tx_stats->syncp);
+ 
+-	xennet_tx_buf_gc(queue);
+-
+ 	return 0;
+ }
+ 
+@@ -849,9 +847,6 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ 	tx_stats->packets++;
+ 	u64_stats_update_end(&tx_stats->syncp);
+ 
+-	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
+-	xennet_tx_buf_gc(queue);
+-
+ 	if (!netfront_tx_slot_available(queue))
+ 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 320aaa41ec394c..3ef30c36bf105b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1958,8 +1958,28 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev)
+ 	 * might be pointing at!
+ 	 */
+ 	result = nvme_disable_ctrl(&dev->ctrl, false);
+-	if (result < 0)
+-		return result;
++	if (result < 0) {
++		struct pci_dev *pdev = to_pci_dev(dev->dev);
++
++		/*
++		 * The NVMe Controller Reset method did not get an expected
++		 * CSTS.RDY transition, so something with the device appears to
++		 * be stuck. Use the lower level and bigger hammer PCIe
++		 * Function Level Reset to attempt restoring the device to its
++		 * initial state, and try again.
++		 */
++		result = pcie_reset_flr(pdev, false);
++		if (result < 0)
++			return result;
++
++		pci_restore_state(pdev);
++		result = nvme_disable_ctrl(&dev->ctrl, false);
++		if (result < 0)
++			return result;
++
++		dev_info(dev->ctrl.device,
++			"controller reset completed after pcie flr\n");
++	}
+ 
+ 	result = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);
+ 	if (result)
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index d924008c39491f..9233f088fac885 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1745,9 +1745,14 @@ static int nvme_tcp_start_tls(struct nvme_ctrl *nctrl,
+ 			qid, ret);
+ 		tls_handshake_cancel(queue->sock->sk);
+ 	} else {
+-		dev_dbg(nctrl->device,
+-			"queue %d: TLS handshake complete, error %d\n",
+-			qid, queue->tls_err);
++		if (queue->tls_err) {
++			dev_err(nctrl->device,
++				"queue %d: TLS handshake complete, error %d\n",
++				qid, queue->tls_err);
++		} else {
++			dev_dbg(nctrl->device,
++				"queue %d: TLS handshake complete\n", qid);
++		}
+ 		ret = queue->tls_err;
+ 	}
+ 	return ret;
+diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+index 108d30637920e9..b5f5eee5a50efc 100644
+--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
++++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+@@ -58,6 +58,8 @@
+ 
+ /* Hot Reset Control Register */
+ #define PCIE_CLIENT_HOT_RESET_CTRL	0x180
++#define  PCIE_LTSSM_APP_DLY2_EN		BIT(1)
++#define  PCIE_LTSSM_APP_DLY2_DONE	BIT(3)
+ #define  PCIE_LTSSM_ENABLE_ENHANCE	BIT(4)
+ 
+ /* LTSSM Status Register */
+@@ -475,7 +477,7 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
+ 	struct rockchip_pcie *rockchip = arg;
+ 	struct dw_pcie *pci = &rockchip->pci;
+ 	struct device *dev = pci->dev;
+-	u32 reg;
++	u32 reg, val;
+ 
+ 	reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC);
+ 	rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
+@@ -486,6 +488,10 @@ static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
+ 	if (reg & PCIE_LINK_REQ_RST_NOT_INT) {
+ 		dev_dbg(dev, "hot reset or link-down reset\n");
+ 		dw_pcie_ep_linkdown(&pci->ep);
++		/* Stop delaying link training. */
++		val = HIWORD_UPDATE_BIT(PCIE_LTSSM_APP_DLY2_DONE);
++		rockchip_pcie_writel_apb(rockchip, val,
++					 PCIE_CLIENT_HOT_RESET_CTRL);
+ 	}
+ 
+ 	if (reg & PCIE_RDLH_LINK_UP_CHGED) {
+@@ -567,8 +573,11 @@ static int rockchip_pcie_configure_ep(struct platform_device *pdev,
+ 		return ret;
+ 	}
+ 
+-	/* LTSSM enable control mode */
+-	val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
++	/*
++	 * LTSSM enable control mode, and automatically delay link training on
++	 * hot reset/link-down reset.
++	 */
++	val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE | PCIE_LTSSM_APP_DLY2_EN);
+ 	rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
+ 
+ 	rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_EP_MODE,
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index af370628e58393..99c58ee09fbb0b 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -816,13 +816,11 @@ int pci_acpi_program_hp_params(struct pci_dev *dev)
+ bool pciehp_is_native(struct pci_dev *bridge)
+ {
+ 	const struct pci_host_bridge *host;
+-	u32 slot_cap;
+ 
+ 	if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
+ 		return false;
+ 
+-	pcie_capability_read_dword(bridge, PCI_EXP_SLTCAP, &slot_cap);
+-	if (!(slot_cap & PCI_EXP_SLTCAP_HPC))
++	if (!bridge->is_pciehp)
+ 		return false;
+ 
+ 	if (pcie_ports_native)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 9e42090fb10892..640d339a447d49 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -3030,8 +3030,12 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+  * pci_bridge_d3_possible - Is it possible to put the bridge into D3
+  * @bridge: Bridge to check
+  *
+- * This function checks if it is possible to move the bridge to D3.
+  * Currently we only allow D3 for some PCIe ports and for Thunderbolt.
++ *
++ * Return: Whether it is possible to move the bridge to D3.
++ *
++ * The return value is guaranteed to be constant across the entire lifetime
++ * of the bridge, including its hot-removal.
+  */
+ bool pci_bridge_d3_possible(struct pci_dev *bridge)
+ {
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index e6a34db7782668..509421a7e50150 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1678,7 +1678,7 @@ void set_pcie_hotplug_bridge(struct pci_dev *pdev)
+ 
+ 	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32);
+ 	if (reg32 & PCI_EXP_SLTCAP_HPC)
+-		pdev->is_hotplug_bridge = 1;
++		pdev->is_hotplug_bridge = pdev->is_pciehp = 1;
+ }
+ 
+ static void set_pcie_thunderbolt(struct pci_dev *dev)
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 031d45d0fe3db6..d1df2f3adbc518 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -2655,6 +2655,7 @@ static struct platform_driver arm_cmn_driver = {
+ 		.name = "arm-cmn",
+ 		.of_match_table = of_match_ptr(arm_cmn_of_match),
+ 		.acpi_match_table = ACPI_PTR(arm_cmn_acpi_match),
++		.suppress_bind_attrs = true,
+ 	},
+ 	.probe = arm_cmn_probe,
+ 	.remove = arm_cmn_remove,
+diff --git a/drivers/perf/arm-ni.c b/drivers/perf/arm-ni.c
+index 9396d243415f48..c30a67fe2ae3ce 100644
+--- a/drivers/perf/arm-ni.c
++++ b/drivers/perf/arm-ni.c
+@@ -709,6 +709,7 @@ static struct platform_driver arm_ni_driver = {
+ 		.name = "arm-ni",
+ 		.of_match_table = of_match_ptr(arm_ni_of_match),
+ 		.acpi_match_table = ACPI_PTR(arm_ni_acpi_match),
++		.suppress_bind_attrs = true,
+ 	},
+ 	.probe = arm_ni_probe,
+ 	.remove = arm_ni_remove,
+diff --git a/drivers/perf/cxl_pmu.c b/drivers/perf/cxl_pmu.c
+index d6693519eaee2e..948e7c067dd2f1 100644
+--- a/drivers/perf/cxl_pmu.c
++++ b/drivers/perf/cxl_pmu.c
+@@ -873,7 +873,7 @@ static int cxl_pmu_probe(struct device *dev)
+ 		return rc;
+ 	irq = rc;
+ 
+-	irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow\n", dev_name);
++	irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_overflow", dev_name);
+ 	if (!irq_name)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/phy/rockchip/phy-rockchip-pcie.c b/drivers/phy/rockchip/phy-rockchip-pcie.c
+index bd44af36c67a5a..4e2dfd01adf2ff 100644
+--- a/drivers/phy/rockchip/phy-rockchip-pcie.c
++++ b/drivers/phy/rockchip/phy-rockchip-pcie.c
+@@ -30,9 +30,8 @@
+ #define PHY_CFG_ADDR_SHIFT    1
+ #define PHY_CFG_DATA_MASK     0xf
+ #define PHY_CFG_ADDR_MASK     0x3f
+-#define PHY_CFG_RD_MASK       0x3ff
+ #define PHY_CFG_WR_ENABLE     1
+-#define PHY_CFG_WR_DISABLE    1
++#define PHY_CFG_WR_DISABLE    0
+ #define PHY_CFG_WR_SHIFT      0
+ #define PHY_CFG_WR_MASK       1
+ #define PHY_CFG_PLL_LOCK      0x10
+@@ -160,6 +159,12 @@ static int rockchip_pcie_phy_power_on(struct phy *phy)
+ 
+ 	guard(mutex)(&rk_phy->pcie_mutex);
+ 
++	regmap_write(rk_phy->reg_base,
++		     rk_phy->phy_data->pcie_laneoff,
++		     HIWORD_UPDATE(!PHY_LANE_IDLE_OFF,
++				   PHY_LANE_IDLE_MASK,
++				   PHY_LANE_IDLE_A_SHIFT + inst->index));
++
+ 	if (rk_phy->pwr_cnt++) {
+ 		return 0;
+ 	}
+@@ -176,12 +181,6 @@ static int rockchip_pcie_phy_power_on(struct phy *phy)
+ 				   PHY_CFG_ADDR_MASK,
+ 				   PHY_CFG_ADDR_SHIFT));
+ 
+-	regmap_write(rk_phy->reg_base,
+-		     rk_phy->phy_data->pcie_laneoff,
+-		     HIWORD_UPDATE(!PHY_LANE_IDLE_OFF,
+-				   PHY_LANE_IDLE_MASK,
+-				   PHY_LANE_IDLE_A_SHIFT + inst->index));
+-
+ 	/*
+ 	 * No documented timeout value for phy operation below,
+ 	 * so we make it large enough here. And we use loop-break
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index ba49d48c3a1d1e..e6ad63df82b7c2 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -411,6 +411,7 @@ static struct irq_chip stm32_gpio_irq_chip = {
+ 	.irq_set_wake	= irq_chip_set_wake_parent,
+ 	.irq_request_resources = stm32_gpio_irq_request_resources,
+ 	.irq_release_resources = stm32_gpio_irq_release_resources,
++	.irq_set_affinity = IS_ENABLED(CONFIG_SMP) ? irq_chip_set_affinity_parent : NULL,
+ };
+ 
+ static int stm32_gpio_domain_translate(struct irq_domain *d,
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub.c b/drivers/platform/chrome/cros_ec_sensorhub.c
+index 50cdae67fa3204..9bad8f72680ea8 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/init.h>
+ #include <linux/device.h>
++#include <linux/delay.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+@@ -18,6 +19,7 @@
+ #include <linux/types.h>
+ 
+ #define DRV_NAME		"cros-ec-sensorhub"
++#define CROS_EC_CMD_INFO_RETRIES 50
+ 
+ static void cros_ec_sensorhub_free_sensor(void *arg)
+ {
+@@ -53,7 +55,7 @@ static int cros_ec_sensorhub_register(struct device *dev,
+ 	int sensor_type[MOTIONSENSE_TYPE_MAX] = { 0 };
+ 	struct cros_ec_command *msg = sensorhub->msg;
+ 	struct cros_ec_dev *ec = sensorhub->ec;
+-	int ret, i;
++	int ret, i, retries;
+ 	char *name;
+ 
+ 
+@@ -65,12 +67,25 @@ static int cros_ec_sensorhub_register(struct device *dev,
+ 		sensorhub->params->cmd = MOTIONSENSE_CMD_INFO;
+ 		sensorhub->params->info.sensor_num = i;
+ 
+-		ret = cros_ec_cmd_xfer_status(ec->ec_dev, msg);
++		retries = CROS_EC_CMD_INFO_RETRIES;
++		do {
++			ret = cros_ec_cmd_xfer_status(ec->ec_dev, msg);
++			if (ret == -EBUSY) {
++				/* The EC is still busy initializing sensors. */
++				usleep_range(5000, 6000);
++				retries--;
++			}
++		} while (ret == -EBUSY && retries);
++
+ 		if (ret < 0) {
+-			dev_warn(dev, "no info for EC sensor %d : %d/%d\n",
+-				 i, ret, msg->result);
++			dev_err(dev, "no info for EC sensor %d : %d/%d\n",
++				i, ret, msg->result);
+ 			continue;
+ 		}
++		if (retries < CROS_EC_CMD_INFO_RETRIES) {
++			dev_warn(dev, "%d retries needed to bring up sensor %d\n",
++				 CROS_EC_CMD_INFO_RETRIES - retries, i);
++		}
+ 
+ 		switch (sensorhub->resp->info.type) {
+ 		case MOTIONSENSE_TYPE_ACCEL:
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 7678e3d05fd36f..f437b594055cff 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -1272,8 +1272,8 @@ static int cros_typec_probe(struct platform_device *pdev)
+ 
+ 	typec->ec = dev_get_drvdata(pdev->dev.parent);
+ 	if (!typec->ec) {
+-		dev_err(dev, "couldn't find parent EC device\n");
+-		return -ENODEV;
++		dev_warn(dev, "couldn't find parent EC device\n");
++		return -EPROBE_DEFER;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, typec);
+diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+index 131f10b6830885..ded4c84f5ed149 100644
+--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c
++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+@@ -190,6 +190,15 @@ static const struct dmi_system_id fwbug_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82XQ"),
+ 		}
+ 	},
++	/* https://gitlab.freedesktop.org/drm/amd/-/issues/4434 */
++	{
++		.ident = "Lenovo Yoga 6 13ALC6",
++		.driver_data = &quirk_s2idle_bug,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82ND"),
++		}
++	},
+ 	/* https://gitlab.freedesktop.org/drm/amd/-/issues/2684 */
+ 	{
+ 		.ident = "HP Laptop 15s-eq2xxx",
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index b59b4d90b0c747..afb83b3f4826e6 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -559,12 +559,12 @@ static unsigned long __init tpacpi_check_quirks(
+ 	return 0;
+ }
+ 
+-static inline bool __pure __init tpacpi_is_lenovo(void)
++static __always_inline bool __pure __init tpacpi_is_lenovo(void)
+ {
+ 	return thinkpad_id.vendor == PCI_VENDOR_ID_LENOVO;
+ }
+ 
+-static inline bool __pure __init tpacpi_is_ibm(void)
++static __always_inline bool __pure __init tpacpi_is_ibm(void)
+ {
+ 	return thinkpad_id.vendor == PCI_VENDOR_ID_IBM;
+ }
+diff --git a/drivers/pmdomain/imx/imx8m-blk-ctrl.c b/drivers/pmdomain/imx/imx8m-blk-ctrl.c
+index 912802b5215bd2..5c83e5599f1ea6 100644
+--- a/drivers/pmdomain/imx/imx8m-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8m-blk-ctrl.c
+@@ -665,6 +665,11 @@ static const struct imx8m_blk_ctrl_data imx8mn_disp_blk_ctl_dev_data = {
+ #define  LCDIF_1_RD_HURRY	GENMASK(15, 13)
+ #define  LCDIF_0_RD_HURRY	GENMASK(12, 10)
+ 
++#define ISI_CACHE_CTRL		0x50
++#define  ISI_V_WR_HURRY		GENMASK(28, 26)
++#define  ISI_U_WR_HURRY		GENMASK(25, 23)
++#define  ISI_Y_WR_HURRY		GENMASK(22, 20)
++
+ static int imx8mp_media_power_notifier(struct notifier_block *nb,
+ 				unsigned long action, void *data)
+ {
+@@ -694,6 +699,11 @@ static int imx8mp_media_power_notifier(struct notifier_block *nb,
+ 		regmap_set_bits(bc->regmap, LCDIF_ARCACHE_CTRL,
+ 				FIELD_PREP(LCDIF_1_RD_HURRY, 7) |
+ 				FIELD_PREP(LCDIF_0_RD_HURRY, 7));
++		/* Same here for ISI */
++		regmap_set_bits(bc->regmap, ISI_CACHE_CTRL,
++				FIELD_PREP(ISI_V_WR_HURRY, 7) |
++				FIELD_PREP(ISI_U_WR_HURRY, 7) |
++				FIELD_PREP(ISI_Y_WR_HURRY, 7));
+ 	}
+ 
+ 	return NOTIFY_OK;
+diff --git a/drivers/pmdomain/ti/Kconfig b/drivers/pmdomain/ti/Kconfig
+index 67c608bf7ed026..5386b362a7ab25 100644
+--- a/drivers/pmdomain/ti/Kconfig
++++ b/drivers/pmdomain/ti/Kconfig
+@@ -10,7 +10,7 @@ if SOC_TI
+ config TI_SCI_PM_DOMAINS
+ 	tristate "TI SCI PM Domains Driver"
+ 	depends on TI_SCI_PROTOCOL
+-	depends on PM_GENERIC_DOMAINS
++	select PM_GENERIC_DOMAINS if PM
+ 	help
+ 	  Generic power domain implementation for TI device implementing
+ 	  the TI SCI protocol.
+diff --git a/drivers/power/supply/qcom_battmgr.c b/drivers/power/supply/qcom_battmgr.c
+index fe27676fbc7cd1..2d50830610e9aa 100644
+--- a/drivers/power/supply/qcom_battmgr.c
++++ b/drivers/power/supply/qcom_battmgr.c
+@@ -981,6 +981,8 @@ static unsigned int qcom_battmgr_sc8280xp_parse_technology(const char *chemistry
+ {
+ 	if (!strncmp(chemistry, "LIO", BATTMGR_CHEMISTRY_LEN))
+ 		return POWER_SUPPLY_TECHNOLOGY_LION;
++	if (!strncmp(chemistry, "LIP", BATTMGR_CHEMISTRY_LEN))
++		return POWER_SUPPLY_TECHNOLOGY_LIPO;
+ 
+ 	pr_err("Unknown battery technology '%s'\n", chemistry);
+ 	return POWER_SUPPLY_TECHNOLOGY_UNKNOWN;
+diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c
+index 47d9891de368bd..935da68610c701 100644
+--- a/drivers/pps/clients/pps-gpio.c
++++ b/drivers/pps/clients/pps-gpio.c
+@@ -210,8 +210,8 @@ static int pps_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* register IRQ interrupt handler */
+-	ret = devm_request_irq(dev, data->irq, pps_gpio_irq_handler,
+-			get_irqf_trigger_flags(data), data->info.name, data);
++	ret = request_irq(data->irq, pps_gpio_irq_handler,
++			  get_irqf_trigger_flags(data), data->info.name, data);
+ 	if (ret) {
+ 		pps_unregister_source(data->pps);
+ 		dev_err(dev, "failed to acquire IRQ %d\n", data->irq);
+@@ -228,6 +228,7 @@ static void pps_gpio_remove(struct platform_device *pdev)
+ {
+ 	struct pps_gpio_device_data *data = platform_get_drvdata(pdev);
+ 
++	free_irq(data->irq, data);
+ 	pps_unregister_source(data->pps);
+ 	timer_delete_sync(&data->echo_timer);
+ 	/* reset echo pin in any case */
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 36f57d7b4a6671..1cc06b7cb17ef5 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -96,7 +96,7 @@ static int ptp_clock_settime(struct posix_clock *pc, const struct timespec64 *tp
+ 	struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
+ 
+ 	if (ptp_clock_freerun(ptp)) {
+-		pr_err("ptp: physical clock is free running\n");
++		pr_err_ratelimited("ptp: physical clock is free running\n");
+ 		return -EBUSY;
+ 	}
+ 
+diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
+index a6aad743c282f4..b352df4cd3f972 100644
+--- a/drivers/ptp/ptp_private.h
++++ b/drivers/ptp/ptp_private.h
+@@ -24,6 +24,11 @@
+ #define PTP_DEFAULT_MAX_VCLOCKS 20
+ #define PTP_MAX_CHANNELS 2048
+ 
++enum {
++	PTP_LOCK_PHYSICAL = 0,
++	PTP_LOCK_VIRTUAL,
++};
++
+ struct timestamp_event_queue {
+ 	struct ptp_extts_event buf[PTP_MAX_TIMESTAMPS];
+ 	int head;
+diff --git a/drivers/ptp/ptp_vclock.c b/drivers/ptp/ptp_vclock.c
+index 7febfdcbde8bc6..8ed4b85989242f 100644
+--- a/drivers/ptp/ptp_vclock.c
++++ b/drivers/ptp/ptp_vclock.c
+@@ -154,6 +154,11 @@ static long ptp_vclock_refresh(struct ptp_clock_info *ptp)
+ 	return PTP_VCLOCK_REFRESH_INTERVAL;
+ }
+ 
++static void ptp_vclock_set_subclass(struct ptp_clock *ptp)
++{
++	lockdep_set_subclass(&ptp->clock.rwsem, PTP_LOCK_VIRTUAL);
++}
++
+ static const struct ptp_clock_info ptp_vclock_info = {
+ 	.owner		= THIS_MODULE,
+ 	.name		= "ptp virtual clock",
+@@ -213,6 +218,8 @@ struct ptp_vclock *ptp_vclock_register(struct ptp_clock *pclock)
+ 		return NULL;
+ 	}
+ 
++	ptp_vclock_set_subclass(vclock->clock);
++
+ 	timecounter_init(&vclock->tc, &vclock->cc, 0);
+ 	ptp_schedule_worker(vclock->clock, PTP_VCLOCK_REFRESH_INTERVAL);
+ 
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 74299af1d7f10a..627e57a88db218 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -1029,8 +1029,8 @@ static int imx_rproc_clk_enable(struct imx_rproc *priv)
+ 	struct device *dev = priv->dev;
+ 	int ret;
+ 
+-	/* Remote core is not under control of Linux */
+-	if (dcfg->method == IMX_RPROC_NONE)
++	/* Remote core is not under control of Linux or it is managed by the SCU API */
++	if (dcfg->method == IMX_RPROC_NONE || dcfg->method == IMX_RPROC_SCU_API)
+ 		return 0;
+ 
+ 	priv->clk = devm_clk_get(dev, NULL);
+diff --git a/drivers/reset/Kconfig b/drivers/reset/Kconfig
+index d85be5899da6ab..ec8c953cb73d46 100644
+--- a/drivers/reset/Kconfig
++++ b/drivers/reset/Kconfig
+@@ -51,8 +51,8 @@ config RESET_BERLIN
+ 
+ config RESET_BRCMSTB
+ 	tristate "Broadcom STB reset controller"
+-	depends on ARCH_BRCMSTB || COMPILE_TEST
+-	default ARCH_BRCMSTB
++	depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
++	default ARCH_BRCMSTB || ARCH_BCM2835
+ 	help
+ 	  This enables the reset controller driver for Broadcom STB SoCs using
+ 	  a SUN_TOP_CTRL_SW_INIT style controller.
+@@ -60,11 +60,11 @@ config RESET_BRCMSTB
+ config RESET_BRCMSTB_RESCAL
+ 	tristate "Broadcom STB RESCAL reset controller"
+ 	depends on HAS_IOMEM
+-	depends on ARCH_BRCMSTB || COMPILE_TEST
+-	default ARCH_BRCMSTB
++	depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
++	default ARCH_BRCMSTB || ARCH_BCM2835
+ 	help
+ 	  This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
+-	  BCM7216.
++	  BCM7216 or the BCM2712.
+ 
+ config RESET_EYEQ
+ 	bool "Mobileye EyeQ reset controller"
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index c8a666de9cbe91..1960d1bd851cb0 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -279,6 +279,13 @@ static int ds1307_get_time(struct device *dev, struct rtc_time *t)
+ 		if (tmp & DS1340_BIT_OSF)
+ 			return -EINVAL;
+ 		break;
++	case ds_1341:
++		ret = regmap_read(ds1307->regmap, DS1337_REG_STATUS, &tmp);
++		if (ret)
++			return ret;
++		if (tmp & DS1337_BIT_OSF)
++			return -EINVAL;
++		break;
+ 	case ds_1388:
+ 		ret = regmap_read(ds1307->regmap, DS1388_REG_FLAG, &tmp);
+ 		if (ret)
+@@ -377,6 +384,10 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
+ 		regmap_update_bits(ds1307->regmap, DS1340_REG_FLAG,
+ 				   DS1340_BIT_OSF, 0);
+ 		break;
++	case ds_1341:
++		regmap_update_bits(ds1307->regmap, DS1337_REG_STATUS,
++				   DS1337_BIT_OSF, 0);
++		break;
+ 	case ds_1388:
+ 		regmap_update_bits(ds1307->regmap, DS1388_REG_FLAG,
+ 				   DS1388_BIT_OSF, 0);
+@@ -1813,10 +1824,8 @@ static int ds1307_probe(struct i2c_client *client)
+ 		regmap_write(ds1307->regmap, DS1337_REG_CONTROL,
+ 			     regs[0]);
+ 
+-		/* oscillator fault?  clear flag, and warn */
++		/* oscillator fault? warn */
+ 		if (regs[1] & DS1337_BIT_OSF) {
+-			regmap_write(ds1307->regmap, DS1337_REG_STATUS,
+-				     regs[1] & ~DS1337_BIT_OSF);
+ 			dev_warn(ds1307->dev, "SET TIME!\n");
+ 		}
+ 		break;
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index 840be75e75d419..9a55e2d04e633f 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -719,7 +719,7 @@ sclp_sync_wait(void)
+ 	timeout = 0;
+ 	if (timer_pending(&sclp_request_timer)) {
+ 		/* Get timeout TOD value */
+-		timeout = get_tod_clock_fast() +
++		timeout = get_tod_clock_monotonic() +
+ 			  sclp_tod_from_jiffies(sclp_request_timer.expires -
+ 						jiffies);
+ 	}
+@@ -739,7 +739,7 @@ sclp_sync_wait(void)
+ 	/* Loop until driver state indicates finished request */
+ 	while (sclp_running_state != sclp_running_state_idle) {
+ 		/* Check for expired request timer */
+-		if (get_tod_clock_fast() > timeout && timer_delete(&sclp_request_timer))
++		if (get_tod_clock_monotonic() > timeout && timer_delete(&sclp_request_timer))
+ 			sclp_request_timer.function(&sclp_request_timer);
+ 		cpu_relax();
+ 	}
+diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
+index 28cf18955a088e..726c8531b7d3fb 100644
+--- a/drivers/scsi/aacraid/comminit.c
++++ b/drivers/scsi/aacraid/comminit.c
+@@ -481,8 +481,7 @@ void aac_define_int_mode(struct aac_dev *dev)
+ 	    pci_find_capability(dev->pdev, PCI_CAP_ID_MSIX)) {
+ 		min_msix = 2;
+ 		i = pci_alloc_irq_vectors(dev->pdev,
+-					  min_msix, msi_count,
+-					  PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
++					  min_msix, msi_count, PCI_IRQ_MSIX);
+ 		if (i > 0) {
+ 			dev->msi_enabled = 1;
+ 			msi_count = i;
+diff --git a/drivers/scsi/bfa/bfad_im.c b/drivers/scsi/bfa/bfad_im.c
+index a719a18f0fbcf7..f56e008ee52b1d 100644
+--- a/drivers/scsi/bfa/bfad_im.c
++++ b/drivers/scsi/bfa/bfad_im.c
+@@ -706,6 +706,7 @@ bfad_im_probe(struct bfad_s *bfad)
+ 
+ 	if (bfad_thread_workq(bfad) != BFA_STATUS_OK) {
+ 		kfree(im);
++		bfad->im = NULL;
+ 		return BFA_STATUS_FAILED;
+ 	}
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 392d57e054db33..c9f410c5097834 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -3185,7 +3185,8 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ 		return NULL;
+ 	conn = cls_conn->dd_data;
+ 
+-	conn->dd_data = cls_conn->dd_data + sizeof(*conn);
++	if (dd_size)
++		conn->dd_data = cls_conn->dd_data + sizeof(*conn);
+ 	conn->session = session;
+ 	conn->cls_conn = cls_conn;
+ 	conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 3fd1aa5cc78cc8..1b601e45bc45c1 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -6289,7 +6289,6 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
+ 			}
+ 			phba->nvmeio_trc_on = 1;
+ 			phba->nvmeio_trc_output_idx = 0;
+-			phba->nvmeio_trc = NULL;
+ 		} else {
+ nvmeio_off:
+ 			phba->nvmeio_trc_size = 0;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index b88e54a7e65c65..3962f07c914057 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -183,7 +183,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 
+ 	/* Don't schedule a worker thread event if the vport is going down. */
+ 	if (test_bit(FC_UNLOADING, &vport->load_flag) ||
+-	    !test_bit(HBA_SETUP, &phba->hba_flag)) {
++	    (phba->sli_rev == LPFC_SLI_REV4 &&
++	    !test_bit(HBA_SETUP, &phba->hba_flag))) {
+ 
+ 		spin_lock_irqsave(&ndlp->lock, iflags);
+ 		ndlp->rport = NULL;
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 8acb744febcdb3..31a9f142bcb956 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -390,6 +390,10 @@ lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport)
+ 	if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_FCP))
+ 		return;
+ 
++	/* may be called before queues established if hba_setup fails */
++	if (!phba->sli4_hba.hdwq)
++		return;
++
+ 	spin_lock_irqsave(&phba->hbalock, iflag);
+ 	for (idx = 0; idx < phba->cfg_hdw_queue; idx++) {
+ 		qp = &phba->sli4_hba.hdwq[idx];
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index ce444efd859e34..87983ea4e06e08 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -49,6 +49,13 @@ static void mpi3mr_send_event_ack(struct mpi3mr_ioc *mrioc, u8 event,
+ 
+ #define MPI3_EVENT_WAIT_FOR_DEVICES_TO_REFRESH	(0xFFFE)
+ 
++/*
++ * SAS Log info code for an NCQ collateral abort after an NCQ error:
++ * IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
++ * See: drivers/message/fusion/lsi/mpi_log_sas.h
++ */
++#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR	0x31080000
++
+ /**
+  * mpi3mr_host_tag_for_scmd - Get host tag for a scmd
+  * @mrioc: Adapter instance reference
+@@ -3430,7 +3437,18 @@ void mpi3mr_process_op_reply_desc(struct mpi3mr_ioc *mrioc,
+ 		scmd->result = DID_NO_CONNECT << 16;
+ 		break;
+ 	case MPI3_IOCSTATUS_SCSI_IOC_TERMINATED:
+-		scmd->result = DID_SOFT_ERROR << 16;
++		if (ioc_loginfo == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
++			/*
++			 * This is an ATA NCQ command aborted due to another NCQ
++			 * command failure. We must retry this command
++			 * immediately but without incrementing its retry
++			 * counter.
++			 */
++			WARN_ON_ONCE(xfer_count != 0);
++			scmd->result = DID_IMM_RETRY << 16;
++		} else {
++			scmd->result = DID_SOFT_ERROR << 16;
++		}
+ 		break;
+ 	case MPI3_IOCSTATUS_SCSI_TASK_TERMINATED:
+ 	case MPI3_IOCSTATUS_SCSI_EXT_TERMINATED:
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 0f900ddb3047c7..967af259118e72 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -195,6 +195,14 @@ struct sense_info {
+ #define MPT3SAS_PORT_ENABLE_COMPLETE (0xFFFD)
+ #define MPT3SAS_ABRT_TASK_SET (0xFFFE)
+ #define MPT3SAS_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
++
++/*
++ * SAS Log info code for a NCQ collateral abort after an NCQ error:
++ * IOC_LOGINFO_PREFIX_PL | PL_LOGINFO_CODE_SATA_NCQ_FAIL_ALL_CMDS_AFTR_ERR
++ * See: drivers/message/fusion/lsi/mpi_log_sas.h
++ */
++#define IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR	0x31080000
++
+ /**
+  * struct fw_event_work - firmware event struct
+  * @list: link list framework
+@@ -5814,6 +5822,17 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ 			scmd->result = DID_TRANSPORT_DISRUPTED << 16;
+ 			goto out;
+ 		}
++		if (log_info == IOC_LOGINFO_SATA_NCQ_FAIL_AFTER_ERR) {
++			/*
++			 * This is an ATA NCQ command aborted due to another NCQ
++			 * command failure. We must retry this command
++			 * immediately but without incrementing its retry
++			 * counter.
++			 */
++			WARN_ON_ONCE(xfer_cnt != 0);
++			scmd->result = DID_IMM_RETRY << 16;
++			break;
++		}
+ 		if (log_info == 0x31110630) {
+ 			if (scmd->retries > 2) {
+ 				scmd->result = DID_NO_CONNECT << 16;
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 5b373c53c0369e..c4074f062d9311 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -4677,8 +4677,12 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 		&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
+ 	payload.sas_identify.phy_id = phy_id;
+ 
+-	return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
++	ret = pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
+ 				    sizeof(payload), 0);
++	if (ret < 0)
++		pm8001_tag_free(pm8001_ha, tag);
++
++	return ret;
+ }
+ 
+ /**
+@@ -4704,8 +4708,12 @@ static int pm80xx_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha,
+ 	payload.tag = cpu_to_le32(tag);
+ 	payload.phy_id = cpu_to_le32(phy_id);
+ 
+-	return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
++	ret = pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
+ 				    sizeof(payload), 0);
++	if (ret < 0)
++		pm8001_tag_free(pm8001_ha, tag);
++
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 4833b8fe251b88..396fcf194b6b37 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -1899,7 +1899,7 @@ int scsi_scan_host_selected(struct Scsi_Host *shost, unsigned int channel,
+ 
+ 	return 0;
+ }
+-
++EXPORT_SYMBOL(scsi_scan_host_selected);
+ static void scsi_sysfs_add_devices(struct Scsi_Host *shost)
+ {
+ 	struct scsi_device *sdev;
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index 351b028ef8938e..d69c7c444a3116 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -40,6 +40,8 @@
+ #include <scsi/scsi_transport_sas.h>
+ 
+ #include "scsi_sas_internal.h"
++#include "scsi_priv.h"
++
+ struct sas_host_attrs {
+ 	struct list_head rphy_list;
+ 	struct mutex lock;
+@@ -1683,32 +1685,66 @@ int scsi_is_sas_rphy(const struct device *dev)
+ }
+ EXPORT_SYMBOL(scsi_is_sas_rphy);
+ 
+-
+-/*
+- * SCSI scan helper
+- */
+-
+-static int sas_user_scan(struct Scsi_Host *shost, uint channel,
+-		uint id, u64 lun)
++static void scan_channel_zero(struct Scsi_Host *shost, uint id, u64 lun)
+ {
+ 	struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
+ 	struct sas_rphy *rphy;
+ 
+-	mutex_lock(&sas_host->lock);
+ 	list_for_each_entry(rphy, &sas_host->rphy_list, list) {
+ 		if (rphy->identify.device_type != SAS_END_DEVICE ||
+ 		    rphy->scsi_target_id == -1)
+ 			continue;
+ 
+-		if ((channel == SCAN_WILD_CARD || channel == 0) &&
+-		    (id == SCAN_WILD_CARD || id == rphy->scsi_target_id)) {
++		if (id == SCAN_WILD_CARD || id == rphy->scsi_target_id) {
+ 			scsi_scan_target(&rphy->dev, 0, rphy->scsi_target_id,
+ 					 lun, SCSI_SCAN_MANUAL);
+ 		}
+ 	}
+-	mutex_unlock(&sas_host->lock);
++}
+ 
+-	return 0;
++/*
++ * SCSI scan helper
++ */
++
++static int sas_user_scan(struct Scsi_Host *shost, uint channel,
++		uint id, u64 lun)
++{
++	struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
++	int res = 0;
++	int i;
++
++	switch (channel) {
++	case 0:
++		mutex_lock(&sas_host->lock);
++		scan_channel_zero(shost, id, lun);
++		mutex_unlock(&sas_host->lock);
++		break;
++
++	case SCAN_WILD_CARD:
++		mutex_lock(&sas_host->lock);
++		scan_channel_zero(shost, id, lun);
++		mutex_unlock(&sas_host->lock);
++
++		for (i = 1; i <= shost->max_channel; i++) {
++			res = scsi_scan_host_selected(shost, i, id, lun,
++						      SCSI_SCAN_MANUAL);
++			if (res)
++				goto exit_scan;
++		}
++		break;
++
++	default:
++		if (channel < shost->max_channel) {
++			res = scsi_scan_host_selected(shost, channel, id, lun,
++						      SCSI_SCAN_MANUAL);
++		} else {
++			res = -EINVAL;
++		}
++		break;
++	}
++
++exit_scan:
++	return res;
+ }
+ 
+ 
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index b2c0fb55d4ae67..44589d10b15b50 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -83,7 +83,7 @@ ssize_t qcom_mdt_get_size(const struct firmware *fw)
+ 	int i;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		phdr = &phdrs[i];
+@@ -135,7 +135,7 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len,
+ 	void *data;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	if (ehdr->e_phnum < 2)
+ 		return ERR_PTR(-EINVAL);
+@@ -215,7 +215,7 @@ int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
+ 	int i;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		phdr = &phdrs[i];
+@@ -270,7 +270,7 @@ static bool qcom_mdt_bins_are_split(const struct firmware *fw, const char *fw_na
+ 	int i;
+ 
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		/*
+@@ -312,7 +312,7 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 
+ 	is_split = qcom_mdt_bins_are_split(fw, fw_name);
+ 	ehdr = (struct elf32_hdr *)fw->data;
+-	phdrs = (struct elf32_phdr *)(ehdr + 1);
++	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+ 	for (i = 0; i < ehdr->e_phnum; i++) {
+ 		phdr = &phdrs[i];
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index cb82e887b51d44..fdab2b1067dbb1 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -1072,7 +1072,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
+ 	drv->ver.minor = rsc_id & (MINOR_VER_MASK << MINOR_VER_SHIFT);
+ 	drv->ver.minor >>= MINOR_VER_SHIFT;
+ 
+-	if (drv->ver.major == 3)
++	if (drv->ver.major >= 3)
+ 		drv->regs = rpmh_rsc_reg_offset_ver_3_0;
+ 	else
+ 		drv->regs = rpmh_rsc_reg_offset_ver_2_7;
+diff --git a/drivers/soundwire/amd_manager.c b/drivers/soundwire/amd_manager.c
+index 7a671a7861979c..7ed9c8c0b4c8f1 100644
+--- a/drivers/soundwire/amd_manager.c
++++ b/drivers/soundwire/amd_manager.c
+@@ -1074,6 +1074,7 @@ static void amd_sdw_manager_remove(struct platform_device *pdev)
+ 	int ret;
+ 
+ 	pm_runtime_disable(&pdev->dev);
++	cancel_work_sync(&amd_manager->amd_sdw_work);
+ 	amd_disable_sdw_interrupts(amd_manager);
+ 	sdw_bus_master_delete(&amd_manager->bus);
+ 	ret = amd_disable_sdw_manager(amd_manager);
+@@ -1178,10 +1179,10 @@ static int __maybe_unused amd_pm_prepare(struct device *dev)
+ 	 * device is not in runtime suspend state, observed that device alerts are missing
+ 	 * without pm_prepare on AMD platforms in clockstop mode0.
+ 	 */
+-	if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) {
+-		ret = pm_request_resume(dev);
++	if (amd_manager->power_mode_mask) {
++		ret = pm_runtime_resume(dev);
+ 		if (ret < 0) {
+-			dev_err(bus->dev, "pm_request_resume failed: %d\n", ret);
++			dev_err(bus->dev, "pm_runtime_resume failed: %d\n", ret);
+ 			return 0;
+ 		}
+ 	}
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 68db4b67a86ff9..4fd5cac799c547 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -1753,15 +1753,15 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave)
+ 
+ 		/* Update the Slave driver */
+ 		if (slave_notify) {
++			if (slave->prop.use_domain_irq && slave->irq)
++				handle_nested_irq(slave->irq);
++
+ 			mutex_lock(&slave->sdw_dev_lock);
+ 
+ 			if (slave->probed) {
+ 				struct device *dev = &slave->dev;
+ 				struct sdw_driver *drv = drv_to_sdw_driver(dev->driver);
+ 
+-				if (slave->prop.use_domain_irq && slave->irq)
+-					handle_nested_irq(slave->irq);
+-
+ 				if (drv->ops && drv->ops->interrupt_callback) {
+ 					slave_intr.sdca_cascade = sdca_cascade;
+ 					slave_intr.control_port = clear;
+diff --git a/drivers/staging/gpib/ni_usb/ni_usb_gpib.c b/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
+index 7cf25c95787f35..73ea72f34c0a33 100644
+--- a/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
++++ b/drivers/staging/gpib/ni_usb/ni_usb_gpib.c
+@@ -2079,10 +2079,10 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ 		}
+ 		if (buffer[++j] != 0x0) { // [6]
+ 			ready = 1;
+-			// NI-USB-HS+ sends 0xf here
++			// NI-USB-HS+ sends 0xf or 0x19 here
+ 			if (buffer[j] != 0x2 && buffer[j] != 0xe && buffer[j] != 0xf &&
+-			    buffer[j] != 0x16)	{
+-				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x2, 0xe, 0xf or 0x16\n",
++			    buffer[j] != 0x16 && buffer[j] != 0x19) {
++				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x2, 0xe, 0xf, 0x16 or 0x19\n",
+ 					j, (int)buffer[j]);
+ 				unexpected = 1;
+ 			}
+@@ -2110,11 +2110,11 @@ static int ni_usb_hs_wait_for_ready(struct ni_usb_priv *ni_priv)
+ 				j, (int)buffer[j]);
+ 			unexpected = 1;
+ 		}
+-		if (buffer[++j] != 0x0) {
++		if (buffer[++j] != 0x0) { // [10] MC usb-488 sends 0x7 here, new HS+ sends 0x59
+ 			ready = 1;
+-			if (buffer[j] != 0x96 && buffer[j] != 0x7 && buffer[j] != 0x6e) {
+-// [10] MC usb-488 sends 0x7 here
+-				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x96, 0x07 or 0x6e\n",
++			if (buffer[j] != 0x96 && buffer[j] != 0x7 && buffer[j] != 0x6e &&
++			    buffer[j] != 0x59) {
++				dev_err(&usb_dev->dev, "unexpected data: buffer[%i]=0x%x, expected 0x96, 0x07, 0x6e or 0x59\n",
+ 					j, (int)buffer[j]);
+ 				unexpected = 1;
+ 			}
+diff --git a/drivers/target/target_core_fabric_lib.c b/drivers/target/target_core_fabric_lib.c
+index 43f47e3aa4482c..ec7bc6e3022891 100644
+--- a/drivers/target/target_core_fabric_lib.c
++++ b/drivers/target/target_core_fabric_lib.c
+@@ -257,11 +257,41 @@ static int iscsi_get_pr_transport_id_len(
+ 	return len;
+ }
+ 
+-static char *iscsi_parse_pr_out_transport_id(
++static void sas_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	char hex[17] = {};
++
++	bin2hex(hex, buf + 4, 8);
++	snprintf(i_str, TRANSPORT_IQN_LEN, "naa.%s", hex);
++}
++
++static void srp_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	char hex[33] = {};
++
++	bin2hex(hex, buf + 8, 16);
++	snprintf(i_str, TRANSPORT_IQN_LEN, "0x%s", hex);
++}
++
++static void fcp_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	snprintf(i_str, TRANSPORT_IQN_LEN, "%8phC", buf + 8);
++}
++
++static void sbp_parse_pr_out_transport_id(char *buf, char *i_str)
++{
++	char hex[17] = {};
++
++	bin2hex(hex, buf + 8, 8);
++	snprintf(i_str, TRANSPORT_IQN_LEN, "%s", hex);
++}
++
++static bool iscsi_parse_pr_out_transport_id(
+ 	struct se_portal_group *se_tpg,
+ 	char *buf,
+ 	u32 *out_tid_len,
+-	char **port_nexus_ptr)
++	char **port_nexus_ptr,
++	char *i_str)
+ {
+ 	char *p;
+ 	int i;
+@@ -282,7 +312,7 @@ static char *iscsi_parse_pr_out_transport_id(
+ 	if ((format_code != 0x00) && (format_code != 0x40)) {
+ 		pr_err("Illegal format code: 0x%02x for iSCSI"
+ 			" Initiator Transport ID\n", format_code);
+-		return NULL;
++		return false;
+ 	}
+ 	/*
+ 	 * If the caller wants the TransportID Length, we set that value for the
+@@ -306,7 +336,7 @@ static char *iscsi_parse_pr_out_transport_id(
+ 			pr_err("Unable to locate \",i,0x\" separator"
+ 				" for Initiator port identifier: %s\n",
+ 				&buf[4]);
+-			return NULL;
++			return false;
+ 		}
+ 		*p = '\0'; /* Terminate iSCSI Name */
+ 		p += 5; /* Skip over ",i,0x" separator */
+@@ -339,7 +369,8 @@ static char *iscsi_parse_pr_out_transport_id(
+ 	} else
+ 		*port_nexus_ptr = NULL;
+ 
+-	return &buf[4];
++	strscpy(i_str, &buf[4], TRANSPORT_IQN_LEN);
++	return true;
+ }
+ 
+ int target_get_pr_transport_id_len(struct se_node_acl *nacl,
+@@ -387,33 +418,35 @@ int target_get_pr_transport_id(struct se_node_acl *nacl,
+ 	}
+ }
+ 
+-const char *target_parse_pr_out_transport_id(struct se_portal_group *tpg,
+-		char *buf, u32 *out_tid_len, char **port_nexus_ptr)
++bool target_parse_pr_out_transport_id(struct se_portal_group *tpg,
++		char *buf, u32 *out_tid_len, char **port_nexus_ptr, char *i_str)
+ {
+-	u32 offset;
+-
+ 	switch (tpg->proto_id) {
+ 	case SCSI_PROTOCOL_SAS:
+ 		/*
+ 		 * Assume the FORMAT CODE 00b from spc4r17, 7.5.4.7 TransportID
+ 		 * for initiator ports using SCSI over SAS Serial SCSI Protocol.
+ 		 */
+-		offset = 4;
++		sas_parse_pr_out_transport_id(buf, i_str);
+ 		break;
+-	case SCSI_PROTOCOL_SBP:
+ 	case SCSI_PROTOCOL_SRP:
++		srp_parse_pr_out_transport_id(buf, i_str);
++		break;
+ 	case SCSI_PROTOCOL_FCP:
+-		offset = 8;
++		fcp_parse_pr_out_transport_id(buf, i_str);
++		break;
++	case SCSI_PROTOCOL_SBP:
++		sbp_parse_pr_out_transport_id(buf, i_str);
+ 		break;
+ 	case SCSI_PROTOCOL_ISCSI:
+ 		return iscsi_parse_pr_out_transport_id(tpg, buf, out_tid_len,
+-					port_nexus_ptr);
++					port_nexus_ptr, i_str);
+ 	default:
+ 		pr_err("Unknown proto_id: 0x%02x\n", tpg->proto_id);
+-		return NULL;
++		return false;
+ 	}
+ 
+ 	*port_nexus_ptr = NULL;
+ 	*out_tid_len = 24;
+-	return buf + offset;
++	return true;
+ }
+diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
+index 408be26d2e9b4d..20aab1f505655c 100644
+--- a/drivers/target/target_core_internal.h
++++ b/drivers/target/target_core_internal.h
+@@ -103,8 +103,8 @@ int	target_get_pr_transport_id_len(struct se_node_acl *nacl,
+ int	target_get_pr_transport_id(struct se_node_acl *nacl,
+ 		struct t10_pr_registration *pr_reg, int *format_code,
+ 		unsigned char *buf);
+-const char *target_parse_pr_out_transport_id(struct se_portal_group *tpg,
+-		char *buf, u32 *out_tid_len, char **port_nexus_ptr);
++bool target_parse_pr_out_transport_id(struct se_portal_group *tpg,
++		char *buf, u32 *out_tid_len, char **port_nexus_ptr, char *i_str);
+ 
+ /* target_core_hba.c */
+ struct se_hba *core_alloc_hba(const char *, u32, u32);
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index 70905805cb1756..83e172c92238d9 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -1478,11 +1478,12 @@ core_scsi3_decode_spec_i_port(
+ 	LIST_HEAD(tid_dest_list);
+ 	struct pr_transport_id_holder *tidh_new, *tidh, *tidh_tmp;
+ 	unsigned char *buf, *ptr, proto_ident;
+-	const unsigned char *i_str = NULL;
++	unsigned char i_str[TRANSPORT_IQN_LEN];
+ 	char *iport_ptr = NULL, i_buf[PR_REG_ISID_ID_LEN];
+ 	sense_reason_t ret;
+ 	u32 tpdl, tid_len = 0;
+ 	u32 dest_rtpi = 0;
++	bool tid_found;
+ 
+ 	/*
+ 	 * Allocate a struct pr_transport_id_holder and setup the
+@@ -1571,9 +1572,9 @@ core_scsi3_decode_spec_i_port(
+ 			dest_rtpi = tmp_lun->lun_tpg->tpg_rtpi;
+ 
+ 			iport_ptr = NULL;
+-			i_str = target_parse_pr_out_transport_id(tmp_tpg,
+-					ptr, &tid_len, &iport_ptr);
+-			if (!i_str)
++			tid_found = target_parse_pr_out_transport_id(tmp_tpg,
++					ptr, &tid_len, &iport_ptr, i_str);
++			if (!tid_found)
+ 				continue;
+ 			/*
+ 			 * Determine if this SCSI device server requires that
+@@ -3153,13 +3154,14 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key,
+ 	struct t10_pr_registration *pr_reg, *pr_res_holder, *dest_pr_reg;
+ 	struct t10_reservation *pr_tmpl = &dev->t10_pr;
+ 	unsigned char *buf;
+-	const unsigned char *initiator_str;
++	unsigned char initiator_str[TRANSPORT_IQN_LEN];
+ 	char *iport_ptr = NULL, i_buf[PR_REG_ISID_ID_LEN] = { };
+ 	u32 tid_len, tmp_tid_len;
+ 	int new_reg = 0, type, scope, matching_iname;
+ 	sense_reason_t ret;
+ 	unsigned short rtpi;
+ 	unsigned char proto_ident;
++	bool tid_found;
+ 
+ 	if (!se_sess || !se_lun) {
+ 		pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n");
+@@ -3278,9 +3280,9 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key,
+ 		ret = TCM_INVALID_PARAMETER_LIST;
+ 		goto out;
+ 	}
+-	initiator_str = target_parse_pr_out_transport_id(dest_se_tpg,
+-			&buf[24], &tmp_tid_len, &iport_ptr);
+-	if (!initiator_str) {
++	tid_found = target_parse_pr_out_transport_id(dest_se_tpg,
++			&buf[24], &tmp_tid_len, &iport_ptr, initiator_str);
++	if (!tid_found) {
+ 		pr_err("SPC-3 PR REGISTER_AND_MOVE: Unable to locate"
+ 			" initiator_str from Transport ID\n");
+ 		ret = TCM_INVALID_PARAMETER_LIST;
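The refactor above replaces a parser that returned a pointer into the transport buffer with one that formats the identifier into a fixed-size, caller-owned buffer and reports success as a bool, so the result stays valid after the protocol buffer goes away. A minimal standalone sketch of that pattern, with hypothetical names (ID_LEN stands in for TRANSPORT_IQN_LEN, bin2hex_sketch for the kernel's bin2hex):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define ID_LEN 224	/* stand-in for TRANSPORT_IQN_LEN */

static void bin2hex_sketch(char *dst, const unsigned char *src, size_t n)
{
	static const char hex[] = "0123456789abcdef";

	for (size_t i = 0; i < n; i++) {
		*dst++ = hex[src[i] >> 4];
		*dst++ = hex[src[i] & 0xf];
	}
}

/* mirrors sas_parse_pr_out_transport_id(): 8 bytes at offset 4 -> "naa.<hex>" */
static bool parse_sas_tid(const unsigned char *buf, char *i_str)
{
	char hex[17] = { 0 };

	bin2hex_sketch(hex, buf + 4, 8);
	return snprintf(i_str, ID_LEN, "naa.%s", hex) < ID_LEN;
}

int main(void)
{
	unsigned char tid[24] = { 0 };
	char i_str[ID_LEN];

	memcpy(tid + 4, "\x50\x01\x43\x80\x12\x34\x56\x78", 8);
	if (parse_sas_tid(tid, i_str))
		printf("%s\n", i_str);	/* naa.5001438012345678 */
	return 0;
}

Copying into the caller's buffer is what makes the bool-plus-out-parameter signature safer than the old pointer return: the identifier no longer aliases storage the caller may release.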
+diff --git a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+index a81e7d6e865f91..4b91cc13ce3472 100644
+--- a/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
++++ b/drivers/thermal/qcom/qcom-spmi-temp-alarm.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+  * Copyright (c) 2011-2015, 2017, 2020, The Linux Foundation. All rights reserved.
++ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+ 
+ #include <linux/bitops.h>
+@@ -16,6 +17,7 @@
+ 
+ #include "../thermal_hwmon.h"
+ 
++#define QPNP_TM_REG_DIG_MINOR		0x00
+ #define QPNP_TM_REG_DIG_MAJOR		0x01
+ #define QPNP_TM_REG_TYPE		0x04
+ #define QPNP_TM_REG_SUBTYPE		0x05
+@@ -31,7 +33,7 @@
+ #define STATUS_GEN2_STATE_MASK		GENMASK(6, 4)
+ #define STATUS_GEN2_STATE_SHIFT		4
+ 
+-#define SHUTDOWN_CTRL1_OVERRIDE_S2	BIT(6)
++#define SHUTDOWN_CTRL1_OVERRIDE_STAGE2	BIT(6)
+ #define SHUTDOWN_CTRL1_THRESHOLD_MASK	GENMASK(1, 0)
+ 
+ #define SHUTDOWN_CTRL1_RATE_25HZ	BIT(3)
+@@ -78,6 +80,7 @@ struct qpnp_tm_chip {
+ 	/* protects .thresh, .stage and chip registers */
+ 	struct mutex			lock;
+ 	bool				initialized;
++	bool				require_stage2_shutdown;
+ 
+ 	struct iio_channel		*adc;
+ 	const long			(*temp_map)[THRESH_COUNT][STAGE_COUNT];
+@@ -220,13 +223,13 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
+ {
+ 	long stage2_threshold_min = (*chip->temp_map)[THRESH_MIN][1];
+ 	long stage2_threshold_max = (*chip->temp_map)[THRESH_MAX][1];
+-	bool disable_s2_shutdown = false;
++	bool disable_stage2_shutdown = false;
+ 	u8 reg;
+ 
+ 	WARN_ON(!mutex_is_locked(&chip->lock));
+ 
+ 	/*
+-	 * Default: S2 and S3 shutdown enabled, thresholds at
++	 * Default: Stage 2 and Stage 3 shutdown enabled, thresholds at
+ 	 * lowest threshold set, monitoring at 25Hz
+ 	 */
+ 	reg = SHUTDOWN_CTRL1_RATE_25HZ;
+@@ -241,12 +244,12 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
+ 		chip->thresh = THRESH_MAX -
+ 			((stage2_threshold_max - temp) /
+ 			 TEMP_THRESH_STEP);
+-		disable_s2_shutdown = true;
++		disable_stage2_shutdown = true;
+ 	} else {
+ 		chip->thresh = THRESH_MAX;
+ 
+ 		if (chip->adc)
+-			disable_s2_shutdown = true;
++			disable_stage2_shutdown = true;
+ 		else
+ 			dev_warn(chip->dev,
+ 				 "No ADC is configured and critical temperature %d mC is above the maximum stage 2 threshold of %ld mC! Configuring stage 2 shutdown at %ld mC.\n",
+@@ -255,8 +258,8 @@ static int qpnp_tm_update_critical_trip_temp(struct qpnp_tm_chip *chip,
+ 
+ skip:
+ 	reg |= chip->thresh;
+-	if (disable_s2_shutdown)
+-		reg |= SHUTDOWN_CTRL1_OVERRIDE_S2;
++	if (disable_stage2_shutdown && !chip->require_stage2_shutdown)
++		reg |= SHUTDOWN_CTRL1_OVERRIDE_STAGE2;
+ 
+ 	return qpnp_tm_write(chip, QPNP_TM_REG_SHUTDOWN_CTRL1, reg);
+ }
+@@ -350,8 +353,8 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ {
+ 	struct qpnp_tm_chip *chip;
+ 	struct device_node *node;
+-	u8 type, subtype, dig_major;
+-	u32 res;
++	u8 type, subtype, dig_major, dig_minor;
++	u32 res, dig_revision;
+ 	int ret, irq;
+ 
+ 	node = pdev->dev.of_node;
+@@ -402,6 +405,11 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ 		return dev_err_probe(&pdev->dev, ret,
+ 				     "could not read dig_major\n");
+ 
++	ret = qpnp_tm_read(chip, QPNP_TM_REG_DIG_MINOR, &dig_minor);
++	if (ret < 0)
++		return dev_err_probe(&pdev->dev, ret,
++				     "could not read dig_minor\n");
++
+ 	if (type != QPNP_TM_TYPE || (subtype != QPNP_TM_SUBTYPE_GEN1
+ 				     && subtype != QPNP_TM_SUBTYPE_GEN2)) {
+ 		dev_err(&pdev->dev, "invalid type 0x%02x or subtype 0x%02x\n",
+@@ -415,6 +423,23 @@ static int qpnp_tm_probe(struct platform_device *pdev)
+ 	else
+ 		chip->temp_map = &temp_map_gen1;
+ 
++	if (chip->subtype == QPNP_TM_SUBTYPE_GEN2) {
++		dig_revision = (dig_major << 8) | dig_minor;
++		/*
++		 * Check if stage 2 automatic partial shutdown must remain
++		 * enabled to avoid potential repeated faults upon reaching
++		 * over-temperature stage 3.
++		 */
++		switch (dig_revision) {
++		case 0x0001:
++		case 0x0002:
++		case 0x0100:
++		case 0x0101:
++			chip->require_stage2_shutdown = true;
++			break;
++		}
++	}
++
+ 	/*
+ 	 * Register the sensor before initializing the hardware to be able to
+ 	 * read the trip points. get_temp() returns the default temperature
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 24b9055a0b6c51..d80612506a334a 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -40,10 +40,13 @@ temp_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 
+ 	ret = thermal_zone_get_temp(tz, &temperature);
+ 
+-	if (ret)
+-		return ret;
++	if (!ret)
++		return sprintf(buf, "%d\n", temperature);
+ 
+-	return sprintf(buf, "%d\n", temperature);
++	if (ret == -EAGAIN)
++		return -ENODATA;
++
++	return ret;
+ }
+ 
+ static ssize_t
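The temp_show() change returns the reading on success and remaps a transient -EAGAIN to -ENODATA, so a sysfs reader sees "no reading available yet" instead of an error code that invites immediate retry. A rough userspace sketch of the same remapping, where get_temp_mc() is a hypothetical sensor read:

#include <errno.h>
#include <stdio.h>

static int get_temp_mc(int *mc)
{
	(void)mc;
	return -EAGAIN;	/* pretend the sensor has no sample yet */
}

static int temp_show_sketch(char *buf, size_t len)
{
	int mc, ret = get_temp_mc(&mc);

	if (!ret)
		return snprintf(buf, len, "%d\n", mc);
	return ret == -EAGAIN ? -ENODATA : ret;
}

int main(void)
{
	char buf[32];
	int ret = temp_show_sketch(buf, sizeof(buf));

	if (ret < 0)
		fprintf(stderr, "read failed: %d\n", ret);
	else
		fputs(buf, stdout);
	return 0;
}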
+diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
+index a3a7c8059eeeb7..45239703745e54 100644
+--- a/drivers/thunderbolt/domain.c
++++ b/drivers/thunderbolt/domain.c
+@@ -36,7 +36,7 @@ static bool match_service_id(const struct tb_service_id *id,
+ 			return false;
+ 	}
+ 
+-	if (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {
++	if (id->match_flags & TBSVC_MATCH_PROTOCOL_REVISION) {
+ 		if (id->protocol_revision != svc->prtcrevs)
+ 			return false;
+ 	}
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 1f7708a91fc6d7..8a14821312570b 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1337,28 +1337,28 @@ static void uart_sanitize_serial_rs485_delays(struct uart_port *port,
+ 	if (!port->rs485_supported.delay_rts_before_send) {
+ 		if (rs485->delay_rts_before_send) {
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): RTS delay before sending not supported\n",
++				"%s (%u): RTS delay before sending not supported\n",
+ 				port->name, port->line);
+ 		}
+ 		rs485->delay_rts_before_send = 0;
+ 	} else if (rs485->delay_rts_before_send > RS485_MAX_RTS_DELAY) {
+ 		rs485->delay_rts_before_send = RS485_MAX_RTS_DELAY;
+ 		dev_warn_ratelimited(port->dev,
+-			"%s (%d): RTS delay before sending clamped to %u ms\n",
++			"%s (%u): RTS delay before sending clamped to %u ms\n",
+ 			port->name, port->line, rs485->delay_rts_before_send);
+ 	}
+ 
+ 	if (!port->rs485_supported.delay_rts_after_send) {
+ 		if (rs485->delay_rts_after_send) {
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): RTS delay after sending not supported\n",
++				"%s (%u): RTS delay after sending not supported\n",
+ 				port->name, port->line);
+ 		}
+ 		rs485->delay_rts_after_send = 0;
+ 	} else if (rs485->delay_rts_after_send > RS485_MAX_RTS_DELAY) {
+ 		rs485->delay_rts_after_send = RS485_MAX_RTS_DELAY;
+ 		dev_warn_ratelimited(port->dev,
+-			"%s (%d): RTS delay after sending clamped to %u ms\n",
++			"%s (%u): RTS delay after sending clamped to %u ms\n",
+ 			port->name, port->line, rs485->delay_rts_after_send);
+ 	}
+ }
+@@ -1388,14 +1388,14 @@ static void uart_sanitize_serial_rs485(struct uart_port *port, struct serial_rs4
+ 			rs485->flags &= ~SER_RS485_RTS_AFTER_SEND;
+ 
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): invalid RTS setting, using RTS_ON_SEND instead\n",
++				"%s (%u): invalid RTS setting, using RTS_ON_SEND instead\n",
+ 				port->name, port->line);
+ 		} else {
+ 			rs485->flags |= SER_RS485_RTS_AFTER_SEND;
+ 			rs485->flags &= ~SER_RS485_RTS_ON_SEND;
+ 
+ 			dev_warn_ratelimited(port->dev,
+-				"%s (%d): invalid RTS setting, using RTS_AFTER_SEND instead\n",
++				"%s (%u): invalid RTS setting, using RTS_AFTER_SEND instead\n",
+ 				port->name, port->line);
+ 		}
+ 	}
+@@ -1834,7 +1834,7 @@ static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
+ 
+ 	expire = jiffies + timeout;
+ 
+-	pr_debug("uart_wait_until_sent(%d), jiffies=%lu, expire=%lu...\n",
++	pr_debug("uart_wait_until_sent(%u), jiffies=%lu, expire=%lu...\n",
+ 		port->line, jiffies, expire);
+ 
+ 	/*
+@@ -2028,7 +2028,7 @@ static void uart_line_info(struct seq_file *m, struct uart_state *state)
+ 		return;
+ 
+ 	mmio = uport->iotype >= UPIO_MEM;
+-	seq_printf(m, "%d: uart:%s %s%08llX irq:%d",
++	seq_printf(m, "%u: uart:%s %s%08llX irq:%u",
+ 			uport->line, uart_type(uport),
+ 			mmio ? "mmio:0x" : "port:",
+ 			mmio ? (unsigned long long)uport->mapbase
+@@ -2050,18 +2050,18 @@ static void uart_line_info(struct seq_file *m, struct uart_state *state)
+ 		if (pm_state != UART_PM_STATE_ON)
+ 			uart_change_pm(state, pm_state);
+ 
+-		seq_printf(m, " tx:%d rx:%d",
++		seq_printf(m, " tx:%u rx:%u",
+ 				uport->icount.tx, uport->icount.rx);
+ 		if (uport->icount.frame)
+-			seq_printf(m, " fe:%d",	uport->icount.frame);
++			seq_printf(m, " fe:%u",	uport->icount.frame);
+ 		if (uport->icount.parity)
+-			seq_printf(m, " pe:%d",	uport->icount.parity);
++			seq_printf(m, " pe:%u",	uport->icount.parity);
+ 		if (uport->icount.brk)
+-			seq_printf(m, " brk:%d", uport->icount.brk);
++			seq_printf(m, " brk:%u", uport->icount.brk);
+ 		if (uport->icount.overrun)
+-			seq_printf(m, " oe:%d", uport->icount.overrun);
++			seq_printf(m, " oe:%u", uport->icount.overrun);
+ 		if (uport->icount.buf_overrun)
+-			seq_printf(m, " bo:%d", uport->icount.buf_overrun);
++			seq_printf(m, " bo:%u", uport->icount.buf_overrun);
+ 
+ #define INFOBIT(bit, str) \
+ 	if (uport->mctrl & (bit)) \
+@@ -2553,7 +2553,7 @@ uart_report_port(struct uart_driver *drv, struct uart_port *port)
+ 		break;
+ 	}
+ 
+-	pr_info("%s%s%s at %s (irq = %d, base_baud = %d) is a %s\n",
++	pr_info("%s%s%s at %s (irq = %u, base_baud = %u) is a %s\n",
+ 	       port->dev ? dev_name(port->dev) : "",
+ 	       port->dev ? ": " : "",
+ 	       port->name,
+@@ -2561,7 +2561,7 @@ uart_report_port(struct uart_driver *drv, struct uart_port *port)
+ 
+ 	/* The magic multiplier feature is a bit obscure, so report it too.  */
+ 	if (port->flags & UPF_MAGIC_MULTIPLIER)
+-		pr_info("%s%s%s extra baud rates supported: %d, %d",
++		pr_info("%s%s%s extra baud rates supported: %u, %u",
+ 			port->dev ? dev_name(port->dev) : "",
+ 			port->dev ? ": " : "",
+ 			port->name,
+@@ -2960,7 +2960,7 @@ static ssize_t close_delay_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.close_delay);
++	return sprintf(buf, "%u\n", tmp.close_delay);
+ }
+ 
+ static ssize_t closing_wait_show(struct device *dev,
+@@ -2970,7 +2970,7 @@ static ssize_t closing_wait_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.closing_wait);
++	return sprintf(buf, "%u\n", tmp.closing_wait);
+ }
+ 
+ static ssize_t custom_divisor_show(struct device *dev,
+@@ -2990,7 +2990,7 @@ static ssize_t io_type_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.io_type);
++	return sprintf(buf, "%u\n", tmp.io_type);
+ }
+ 
+ static ssize_t iomem_base_show(struct device *dev,
+@@ -3010,7 +3010,7 @@ static ssize_t iomem_reg_shift_show(struct device *dev,
+ 	struct tty_port *port = dev_get_drvdata(dev);
+ 
+ 	uart_get_info(port, &tmp);
+-	return sprintf(buf, "%d\n", tmp.iomem_reg_shift);
++	return sprintf(buf, "%u\n", tmp.iomem_reg_shift);
+ }
+ 
+ static ssize_t console_show(struct device *dev,
+@@ -3146,7 +3146,7 @@ static int serial_core_add_one_port(struct uart_driver *drv, struct uart_port *u
+ 	state->pm_state = UART_PM_STATE_UNDEFINED;
+ 	uart_port_set_cons(uport, drv->cons);
+ 	uport->minor = drv->tty_driver->minor_start + uport->line;
+-	uport->name = kasprintf(GFP_KERNEL, "%s%d", drv->dev_name,
++	uport->name = kasprintf(GFP_KERNEL, "%s%u", drv->dev_name,
+ 				drv->tty_driver->name_base + uport->line);
+ 	if (!uport->name)
+ 		return -ENOMEM;
+@@ -3185,7 +3185,7 @@ static int serial_core_add_one_port(struct uart_driver *drv, struct uart_port *u
+ 		device_set_wakeup_capable(tty_dev, 1);
+ 	} else {
+ 		uport->flags |= UPF_DEAD;
+-		dev_err(uport->dev, "Cannot register tty device on line %d\n",
++		dev_err(uport->dev, "Cannot register tty device on line %u\n",
+ 		       uport->line);
+ 	}
+ 
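The serial_core hunk is format-specifier hygiene: port->line and the icount fields are unsigned, and %d misprints them once they exceed INT_MAX. A two-line demonstration (the cast shows what %d effectively does on a two's-complement target):

#include <stdio.h>

int main(void)
{
	unsigned int overruns = 3000000000u;	/* > INT_MAX */

	printf("oe:%d\n", (int)overruns);	/* typically -1294967296 */
	printf("oe:%u\n", overruns);		/* 3000000000 */
	return 0;
}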
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index f07878c50f142e..3cc566e8bd1d26 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -7110,14 +7110,19 @@ static irqreturn_t ufshcd_threaded_intr(int irq, void *__hba)
+ static irqreturn_t ufshcd_intr(int irq, void *__hba)
+ {
+ 	struct ufs_hba *hba = __hba;
++	u32 intr_status, enabled_intr_status;
+ 
+ 	/* Move interrupt handling to thread when MCQ & ESI are not enabled */
+ 	if (!hba->mcq_enabled || !hba->mcq_esi_enabled)
+ 		return IRQ_WAKE_THREAD;
+ 
++	intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
++	enabled_intr_status = intr_status & ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
++
++	ufshcd_writel(hba, intr_status, REG_INTERRUPT_STATUS);
++
+ 	/* Directly handle interrupts since MCQ ESI handlers do the hard job */
+-	return ufshcd_sl_intr(hba, ufshcd_readl(hba, REG_INTERRUPT_STATUS) &
+-				   ufshcd_readl(hba, REG_INTERRUPT_ENABLE));
++	return ufshcd_sl_intr(hba, enabled_intr_status);
+ }
+ 
+ static int ufshcd_clear_tm_cmd(struct ufs_hba *hba, int tag)
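The ufshcd_intr() change latches REG_INTERRUPT_STATUS once, masks it with the enable register, and acknowledges exactly the latched bits before handling them, rather than reading the registers inline in the handler call. A sketch of that latch/mask/ack ordering against hypothetical write-1-to-clear registers:

#include <stdint.h>
#include <stdio.h>

static uint32_t regs[2];	/* [0] = status, [1] = enable */

static uint32_t mmio_read(int r)
{
	return regs[r];
}

static void mmio_write(int r, uint32_t v)
{
	regs[r] &= ~v;	/* write-1-to-clear semantics */
}

static void isr_sketch(void)
{
	uint32_t status  = mmio_read(0);
	uint32_t enabled = status & mmio_read(1);

	mmio_write(0, status);	/* ack exactly what was latched */
	printf("handling 0x%x\n", (unsigned)enabled);
}

int main(void)
{
	regs[0] = 0x5;	/* two sources pending */
	regs[1] = 0x4;	/* only one enabled */
	isr_sketch();	/* handling 0x4 */
	return 0;
}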
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index c2ecfa3c83496f..5a334e370f4d66 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1520,6 +1520,12 @@ static int acm_probe(struct usb_interface *intf,
+ 			goto err_remove_files;
+ 	}
+ 
++	if (quirks & CLEAR_HALT_CONDITIONS) {
++		/* errors intentionally ignored */
++		usb_clear_halt(usb_dev, acm->in);
++		usb_clear_halt(usb_dev, acm->out);
++	}
++
+ 	tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
+ 			&control_interface->dev);
+ 	if (IS_ERR(tty_dev)) {
+@@ -1527,11 +1533,6 @@ static int acm_probe(struct usb_interface *intf,
+ 		goto err_release_data_interface;
+ 	}
+ 
+-	if (quirks & CLEAR_HALT_CONDITIONS) {
+-		usb_clear_halt(usb_dev, acm->in);
+-		usb_clear_halt(usb_dev, acm->out);
+-	}
+-
+ 	dev_info(&intf->dev, "ttyACM%d: USB ACM device\n", minor);
+ 
+ 	return 0;
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index fc0cfd94cbab22..42468bbeffd229 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -107,8 +107,14 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
+ 	 */
+ 	desc = (struct usb_ss_ep_comp_descriptor *) buffer;
+ 
+-	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP ||
+-			size < USB_DT_SS_EP_COMP_SIZE) {
++	if (size < USB_DT_SS_EP_COMP_SIZE) {
++		dev_notice(ddev,
++			   "invalid SuperSpeed endpoint companion descriptor "
++			   "of length %d, skipping\n", size);
++		return;
++	}
++
++	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP) {
+ 		dev_notice(ddev, "No SuperSpeed endpoint companion for config %d "
+ 				" interface %d altsetting %d ep %d: "
+ 				"using minimum values\n",
+diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
+index 5e52a35486afbe..120de3c499d226 100644
+--- a/drivers/usb/core/urb.c
++++ b/drivers/usb/core/urb.c
+@@ -500,7 +500,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
+ 
+ 	/* Check that the pipe's type matches the endpoint's type */
+ 	if (usb_pipe_type_check(urb->dev, urb->pipe))
+-		dev_WARN(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
++		dev_warn_once(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
+ 			usb_pipetype(urb->pipe), pipetypes[xfertype]);
+ 
+ 	/* Check against a simple/standard policy */
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index 4ca7f6240d07df..09c3c5c226ab40 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -422,6 +422,7 @@ static const struct dev_pm_ops dwc3_xlnx_dev_pm_ops = {
+ static struct platform_driver dwc3_xlnx_driver = {
+ 	.probe		= dwc3_xlnx_probe,
+ 	.remove		= dwc3_xlnx_remove,
++	.shutdown	= dwc3_xlnx_remove,
+ 	.driver		= {
+ 		.name		= "dwc3-xilinx",
+ 		.of_match_table	= dwc3_xlnx_of_match,
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 6680afa4f5963b..07289333a1e8f1 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1195,6 +1195,8 @@ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *ud
+ 	ep0_ctx->deq = cpu_to_le64(dev->eps[0].ring->first_seg->dma |
+ 				   dev->eps[0].ring->cycle_state);
+ 
++	ep0_ctx->tx_info = cpu_to_le32(EP_AVG_TRB_LENGTH(8));
++
+ 	trace_xhci_setup_addressable_virt_device(dev);
+ 
+ 	/* Steps 7 and 8 were done in xhci_alloc_virt_device() */
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 94c9c9271658ec..ecd757d482c582 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1376,12 +1376,15 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci,
+  */
+ void xhci_hc_died(struct xhci_hcd *xhci)
+ {
++	bool notify;
+ 	int i, j;
+ 
+ 	if (xhci->xhc_state & XHCI_STATE_DYING)
+ 		return;
+ 
+-	xhci_err(xhci, "xHCI host controller not responding, assume dead\n");
++	notify = !(xhci->xhc_state & XHCI_STATE_REMOVING);
++	if (notify)
++		xhci_err(xhci, "xHCI host controller not responding, assume dead\n");
+ 	xhci->xhc_state |= XHCI_STATE_DYING;
+ 
+ 	xhci_cleanup_command_queue(xhci);
+@@ -1395,7 +1398,7 @@ void xhci_hc_died(struct xhci_hcd *xhci)
+ 	}
+ 
+ 	/* inform usb core hc died if PCI remove isn't already handling it */
+-	if (!(xhci->xhc_state & XHCI_STATE_REMOVING))
++	if (notify)
+ 		usb_hc_died(xhci_to_hcd(xhci));
+ }
+ 
+@@ -4372,7 +4375,8 @@ static int queue_command(struct xhci_hcd *xhci, struct xhci_command *cmd,
+ 
+ 	if ((xhci->xhc_state & XHCI_STATE_DYING) ||
+ 		(xhci->xhc_state & XHCI_STATE_HALTED)) {
+-		xhci_dbg(xhci, "xHCI dying or halted, can't queue_command\n");
++		xhci_dbg(xhci, "xHCI dying or halted, can't queue_command. state: 0x%x\n",
++			 xhci->xhc_state);
+ 		return -ESHUTDOWN;
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 8a819e85328859..47151ca527bfaf 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -121,7 +121,8 @@ int xhci_halt(struct xhci_hcd *xhci)
+ 	ret = xhci_handshake(&xhci->op_regs->status,
+ 			STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC);
+ 	if (ret) {
+-		xhci_warn(xhci, "Host halt failed, %d\n", ret);
++		if (!(xhci->xhc_state & XHCI_STATE_DYING))
++			xhci_warn(xhci, "Host halt failed, %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -180,7 +181,8 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ 	state = readl(&xhci->op_regs->status);
+ 
+ 	if (state == ~(u32)0) {
+-		xhci_warn(xhci, "Host not accessible, reset failed.\n");
++		if (!(xhci->xhc_state & XHCI_STATE_DYING))
++			xhci_warn(xhci, "Host not accessible, reset failed.\n");
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
+index 65dda9183e6fb6..1698428654abbe 100644
+--- a/drivers/usb/typec/mux/intel_pmc_mux.c
++++ b/drivers/usb/typec/mux/intel_pmc_mux.c
+@@ -754,7 +754,7 @@ static int pmc_usb_probe(struct platform_device *pdev)
+ 
+ 	pmc->ipc = devm_intel_scu_ipc_dev_get(&pdev->dev);
+ 	if (!pmc->ipc)
+-		return -ENODEV;
++		return -EPROBE_DEFER;
+ 
+ 	pmc->dev = &pdev->dev;
+ 
+diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c
+index f15c63d3a8f441..870a71f953f6cd 100644
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -104,6 +104,7 @@ struct fusb302_chip {
+ 	bool vconn_on;
+ 	bool vbus_on;
+ 	bool charge_on;
++	bool pd_rx_on;
+ 	bool vbus_present;
+ 	enum typec_cc_polarity cc_polarity;
+ 	enum typec_cc_status cc1;
+@@ -841,6 +842,11 @@ static int tcpm_set_pd_rx(struct tcpc_dev *dev, bool on)
+ 	int ret = 0;
+ 
+ 	mutex_lock(&chip->lock);
++	if (chip->pd_rx_on == on) {
++		fusb302_log(chip, "pd is already %s", str_on_off(on));
++		goto done;
++	}
++
+ 	ret = fusb302_pd_rx_flush(chip);
+ 	if (ret < 0) {
+ 		fusb302_log(chip, "cannot flush pd rx buffer, ret=%d", ret);
+@@ -863,6 +869,8 @@ static int tcpm_set_pd_rx(struct tcpc_dev *dev, bool on)
+ 			    str_on_off(on), ret);
+ 		goto done;
+ 	}
++
++	chip->pd_rx_on = on;
+ 	fusb302_log(chip, "pd := %s", str_on_off(on));
+ done:
+ 	mutex_unlock(&chip->lock);
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim_core.c b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+index b5a5ed40faea9c..ff3604be79da73 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim_core.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+@@ -421,21 +421,6 @@ static irqreturn_t max_tcpci_isr(int irq, void *dev_id)
+ 	return IRQ_WAKE_THREAD;
+ }
+ 
+-static int max_tcpci_init_alert(struct max_tcpci_chip *chip, struct i2c_client *client)
+-{
+-	int ret;
+-
+-	ret = devm_request_threaded_irq(chip->dev, client->irq, max_tcpci_isr, max_tcpci_irq,
+-					(IRQF_TRIGGER_LOW | IRQF_ONESHOT), dev_name(chip->dev),
+-					chip);
+-
+-	if (ret < 0)
+-		return ret;
+-
+-	enable_irq_wake(client->irq);
+-	return 0;
+-}
+-
+ static int max_tcpci_start_toggling(struct tcpci *tcpci, struct tcpci_data *tdata,
+ 				    enum typec_cc_status cc)
+ {
+@@ -532,7 +517,9 @@ static int max_tcpci_probe(struct i2c_client *client)
+ 
+ 	chip->port = tcpci_get_tcpm_port(chip->tcpci);
+ 
+-	ret = max_tcpci_init_alert(chip, client);
++	ret = devm_request_threaded_irq(&client->dev, client->irq, max_tcpci_isr, max_tcpci_irq,
++					(IRQF_TRIGGER_LOW | IRQF_ONESHOT), dev_name(chip->dev),
++					chip);
+ 	if (ret < 0)
+ 		return dev_err_probe(&client->dev, ret,
+ 				     "IRQ initialization failed\n");
+@@ -544,6 +531,32 @@ static int max_tcpci_probe(struct i2c_client *client)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM_SLEEP
++static int max_tcpci_resume(struct device *dev)
++{
++	struct i2c_client *client = to_i2c_client(dev);
++	int ret = 0;
++
++	if (client->irq && device_may_wakeup(dev))
++		ret = disable_irq_wake(client->irq);
++
++	return ret;
++}
++
++static int max_tcpci_suspend(struct device *dev)
++{
++	struct i2c_client *client = to_i2c_client(dev);
++	int ret = 0;
++
++	if (client->irq && device_may_wakeup(dev))
++		ret = enable_irq_wake(client->irq);
++
++	return ret;
++}
++#endif /* CONFIG_PM_SLEEP */
++
++static SIMPLE_DEV_PM_OPS(max_tcpci_pm_ops, max_tcpci_suspend, max_tcpci_resume);
++
+ static const struct i2c_device_id max_tcpci_id[] = {
+ 	{ "maxtcpc" },
+ 	{ }
+@@ -562,6 +575,7 @@ static struct i2c_driver max_tcpci_i2c_driver = {
+ 	.driver = {
+ 		.name = "maxtcpc",
+ 		.of_match_table = of_match_ptr(max_tcpci_of_match),
++		.pm = &max_tcpci_pm_ops,
+ 	},
+ 	.probe = max_tcpci_probe,
+ 	.id_table = max_tcpci_id,
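The tcpci_maxim change moves wake handling out of probe: rather than a one-shot enable_irq_wake() that is never undone, the IRQ is armed as a wake source in suspend and disarmed in resume, gated on device_may_wakeup(). A toy model of why those calls must stay balanced (all names illustrative):

#include <stdbool.h>
#include <stdio.h>

static int wake_depth;	/* models the IRQ core's wake-enable refcount */

static void enable_irq_wake_sketch(void)
{
	wake_depth++;
}

static int disable_irq_wake_sketch(void)
{
	if (!wake_depth) {
		puts("unbalanced wake disable!");
		return -1;
	}
	wake_depth--;
	return 0;
}

static void suspend(bool may_wakeup)
{
	if (may_wakeup)
		enable_irq_wake_sketch();
}

static void resume(bool may_wakeup)
{
	if (may_wakeup)
		disable_irq_wake_sketch();
}

int main(void)
{
	suspend(true);
	resume(true);
	printf("wake_depth=%d\n", wake_depth);	/* 0: balanced */
	return 0;
}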
+diff --git a/drivers/usb/typec/ucsi/cros_ec_ucsi.c b/drivers/usb/typec/ucsi/cros_ec_ucsi.c
+index 4ec1c6d2231096..eed2a7d0ebc634 100644
+--- a/drivers/usb/typec/ucsi/cros_ec_ucsi.c
++++ b/drivers/usb/typec/ucsi/cros_ec_ucsi.c
+@@ -137,6 +137,7 @@ static int cros_ucsi_sync_control(struct ucsi *ucsi, u64 cmd, u32 *cci,
+ static const struct ucsi_operations cros_ucsi_ops = {
+ 	.read_version = cros_ucsi_read_version,
+ 	.read_cci = cros_ucsi_read_cci,
++	.poll_cci = cros_ucsi_read_cci,
+ 	.read_message_in = cros_ucsi_read_message_in,
+ 	.async_control = cros_ucsi_async_control,
+ 	.sync_control = cros_ucsi_sync_control,
+diff --git a/drivers/usb/typec/ucsi/psy.c b/drivers/usb/typec/ucsi/psy.c
+index 62ac6973040502..62a9d68bb66d21 100644
+--- a/drivers/usb/typec/ucsi/psy.c
++++ b/drivers/usb/typec/ucsi/psy.c
+@@ -164,7 +164,7 @@ static int ucsi_psy_get_current_max(struct ucsi_connector *con,
+ 	case UCSI_CONSTAT_PWR_OPMODE_DEFAULT:
+ 	/* UCSI can't tell b/w DCP/CDP or USB2/3x1/3x2 SDP chargers */
+ 	default:
+-		val->intval = 0;
++		val->intval = UCSI_TYPEC_DEFAULT_CURRENT * 1000;
+ 		break;
+ 	}
+ 	return 0;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 01ce858a1a2b34..8ff31963970bb3 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -1246,6 +1246,7 @@ static void ucsi_handle_connector_change(struct work_struct *work)
+ 
+ 	if (change & UCSI_CONSTAT_POWER_DIR_CHANGE) {
+ 		typec_set_pwr_role(con->port, role);
++		ucsi_port_psy_changed(con);
+ 
+ 		/* Complete pending power role swap */
+ 		if (!completion_done(&con->complete))
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 5a8f947fcece29..f644bc25186332 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -481,9 +481,10 @@ struct ucsi {
+ #define UCSI_MAX_SVID		5
+ #define UCSI_MAX_ALTMODES	(UCSI_MAX_SVID * 6)
+ 
+-#define UCSI_TYPEC_VSAFE5V	5000
+-#define UCSI_TYPEC_1_5_CURRENT	1500
+-#define UCSI_TYPEC_3_0_CURRENT	3000
++#define UCSI_TYPEC_VSAFE5V		5000
++#define UCSI_TYPEC_DEFAULT_CURRENT	 100
++#define UCSI_TYPEC_1_5_CURRENT		1500
++#define UCSI_TYPEC_3_0_CURRENT		3000
+ 
+ struct ucsi_connector {
+ 	int num;
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 5b919a0b25247c..a92b095b90f6a9 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -1523,8 +1523,8 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
+ 	log_max_msg_size = MLX5_CAP_ADV_VIRTUALIZATION(mdev, pg_track_log_max_msg_size);
+ 	max_msg_size = (1ULL << log_max_msg_size);
+ 	/* The RQ must hold at least 4 WQEs/messages for successful QP creation */
+-	if (rq_size < 4 * max_msg_size)
+-		rq_size = 4 * max_msg_size;
++	if (rq_size < 4ULL * max_msg_size)
++		rq_size = 4ULL * max_msg_size;
+ 
+ 	memset(tracker, 0, sizeof(*tracker));
+ 	tracker->uar = mlx5_get_uars_page(mdev);
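The 4ULL suffix guards the multiplication itself: when the operand is a 32-bit type, 4 * x is computed in 32 bits and can wrap before the 64-bit comparison or assignment ever sees the result. Demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t max_msg_size = 1u << 30;	/* 1 GiB */
	uint64_t wrong = 4 * max_msg_size;	/* wraps to 0 in 32-bit math */
	uint64_t right = 4ULL * max_msg_size;	/* promoted to 64 bits first */

	printf("wrong=%llu right=%llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}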
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 1136d7ac6b597e..f8d68fe77b410a 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -647,6 +647,13 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 
+ 	while (npage) {
+ 		if (!batch->size) {
++			/*
++			 * Large mappings may take a while to repeatedly refill
++			 * the batch, so conditionally relinquish the CPU when
++			 * needed to avoid stalls.
++			 */
++			cond_resched();
++
+ 			/* Empty batch, so refill it. */
+ 			ret = vaddr_get_pfns(mm, vaddr, npage, dma->prot,
+ 					     &pfn, batch);
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 84c9bdf9aeddaf..478eca3cf113bc 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -2983,6 +2983,9 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
+ 	}
+ 	r = __vhost_add_used_n(vq, heads, count);
+ 
++	if (r < 0)
++		return r;
++
+ 	/* Make sure buffer is written before we update index. */
+ 	smp_wmb();
+ 	if (vhost_put_used_idx(vq)) {
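The vhost fix makes the publish step conditional on the fill step: used-ring entries are written first, then a write barrier, then the index store that makes them visible, and a failed fill must return before that store ever happens. A C11-atomics sketch of the ordering, with a release store standing in for smp_wmb() plus the index update:

#include <stdatomic.h>
#include <stdio.h>

static int ring[16];
static atomic_int used_idx;

static int fill_entries(int n)
{
	if (n > 16)
		return -1;	/* fill failed: nothing may be published */
	for (int i = 0; i < n; i++)
		ring[i] = i;
	return 0;
}

static int add_used_n(int n)
{
	int r = fill_entries(n);

	if (r < 0)
		return r;	/* the added early return */
	/* entries become visible before the index that announces them */
	atomic_store_explicit(&used_idx, n, memory_order_release);
	return 0;
}

int main(void)
{
	printf("%d\n", add_used_n(8));	/* 0: eight entries published */
	printf("%d\n", add_used_n(32));	/* -1: index left untouched */
	return 0;
}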
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 55c6686f091e79..c21484d15f0cbd 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -660,7 +660,7 @@ config FB_ATMEL
+ 
+ config FB_NVIDIA
+ 	tristate "nVidia Framebuffer Support"
+-	depends on FB && PCI
++	depends on FB && PCI && HAS_IOPORT
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2b2d36c021ba55..9eb880a11fd8ce 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -825,7 +825,8 @@ static void con2fb_init_display(struct vc_data *vc, struct fb_info *info,
+ 				   fg_vc->vc_rows);
+ 	}
+ 
+-	update_screen(vc_cons[fg_console].d);
++	if (fg_console != unit)
++		update_screen(vc_cons[fg_console].d);
+ }
+ 
+ /**
+@@ -1362,6 +1363,7 @@ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+ 	struct vc_data *svc;
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 	int rows, cols;
++	unsigned long ret = 0;
+ 
+ 	p = &fb_display[unit];
+ 
+@@ -1412,11 +1414,10 @@ static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
+ 	rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	cols /= vc->vc_font.width;
+ 	rows /= vc->vc_font.height;
+-	vc_resize(vc, cols, rows);
++	ret = vc_resize(vc, cols, rows);
+ 
+-	if (con_is_visible(vc)) {
++	if (con_is_visible(vc) && !ret)
+ 		update_screen(vc);
+-	}
+ }
+ 
+ static __inline__ void ywrap_up(struct vc_data *vc, int count)
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index dfcf5e4d1d4ccb..53f1719b1ae158 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -449,6 +449,9 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 		if (!registered_fb[i])
+ 			break;
+ 
++	if (i >= FB_MAX)
++		return -ENXIO;
++
+ 	if (!fb_info->modelist.prev || !fb_info->modelist.next)
+ 		INIT_LIST_HEAD(&fb_info->modelist);
+ 
+diff --git a/drivers/virt/coco/efi_secret/efi_secret.c b/drivers/virt/coco/efi_secret/efi_secret.c
+index 1864f9f80617e0..f2da4819ec3b7d 100644
+--- a/drivers/virt/coco/efi_secret/efi_secret.c
++++ b/drivers/virt/coco/efi_secret/efi_secret.c
+@@ -136,15 +136,7 @@ static int efi_secret_unlink(struct inode *dir, struct dentry *dentry)
+ 		if (s->fs_files[i] == dentry)
+ 			s->fs_files[i] = NULL;
+ 
+-	/*
+-	 * securityfs_remove tries to lock the directory's inode, but we reach
+-	 * the unlink callback when it's already locked
+-	 */
+-	inode_unlock(dir);
+-	securityfs_remove(dentry);
+-	inode_lock(dir);
+-
+-	return 0;
++	return simple_unlink(inode, dentry);
+ }
+ 
+ static const struct inode_operations efi_secret_dir_inode_operations = {
+diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c
+index 26efca9ae0e7d2..c3fbb6068c5201 100644
+--- a/drivers/watchdog/dw_wdt.c
++++ b/drivers/watchdog/dw_wdt.c
+@@ -644,6 +644,8 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
+ 	} else {
+ 		wdd->timeout = DW_WDT_DEFAULT_SECONDS;
+ 		watchdog_init_timeout(wdd, 0, dev);
++		/* Limit timeout value to hardware constraints. */
++		dw_wdt_set_timeout(wdd, wdd->timeout);
+ 	}
+ 
+ 	platform_set_drvdata(pdev, dw_wdt);
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 9ab769aa0244ae..4ab3405ef8e6e1 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -577,7 +577,11 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 	/* Check that the heartbeat value is within it's range;
+ 	   if not reset to the default */
+ 	if (iTCO_wdt_set_timeout(&p->wddev, heartbeat)) {
+-		iTCO_wdt_set_timeout(&p->wddev, WATCHDOG_TIMEOUT);
++		ret = iTCO_wdt_set_timeout(&p->wddev, WATCHDOG_TIMEOUT);
++		if (ret != 0) {
++			dev_err(dev, "Failed to set watchdog timeout (%d)\n", WATCHDOG_TIMEOUT);
++			return ret;
++		}
+ 		dev_info(dev, "timeout value out of range, using %d\n",
+ 			WATCHDOG_TIMEOUT);
+ 		heartbeat = WATCHDOG_TIMEOUT;
+diff --git a/drivers/watchdog/sbsa_gwdt.c b/drivers/watchdog/sbsa_gwdt.c
+index 5f23913ce3b49c..6ce1bfb3906413 100644
+--- a/drivers/watchdog/sbsa_gwdt.c
++++ b/drivers/watchdog/sbsa_gwdt.c
+@@ -75,11 +75,17 @@
+ #define SBSA_GWDT_VERSION_MASK  0xF
+ #define SBSA_GWDT_VERSION_SHIFT 16
+ 
++#define SBSA_GWDT_IMPL_MASK	0x7FF
++#define SBSA_GWDT_IMPL_SHIFT	0
++#define SBSA_GWDT_IMPL_MEDIATEK	0x426
++
+ /**
+  * struct sbsa_gwdt - Internal representation of the SBSA GWDT
+  * @wdd:		kernel watchdog_device structure
+  * @clk:		store the System Counter clock frequency, in Hz.
+  * @version:            store the architecture version
++ * @need_ws0_race_workaround:
++ *			indicate whether to adjust wdd->timeout to avoid a race with WS0
+  * @refresh_base:	Virtual address of the watchdog refresh frame
+  * @control_base:	Virtual address of the watchdog control frame
+  */
+@@ -87,6 +93,7 @@ struct sbsa_gwdt {
+ 	struct watchdog_device	wdd;
+ 	u32			clk;
+ 	int			version;
++	bool			need_ws0_race_workaround;
+ 	void __iomem		*refresh_base;
+ 	void __iomem		*control_base;
+ };
+@@ -161,6 +168,31 @@ static int sbsa_gwdt_set_timeout(struct watchdog_device *wdd,
+ 		 */
+ 		sbsa_gwdt_reg_write(((u64)gwdt->clk / 2) * timeout, gwdt);
+ 
++	/*
++	 * Some watchdog hardware has a race condition where it will ignore
++	 * sbsa_gwdt_keepalive() if it is called at the exact moment that a
++	 * timeout occurs and WS0 is being asserted. Unfortunately, the default
++	 * behavior of the watchdog core is very likely to trigger this race
++	 * when action=0 because it programs WOR to be half of the desired
++	 * timeout, and watchdog_next_keepalive() chooses the exact same time to
++	 * send keepalive pings.
++	 *
++	 * This triggers a race where sbsa_gwdt_keepalive() can be called right
++	 * as WS0 is being asserted, and affected hardware will ignore that
++	 * write and continue to assert WS0. After another (timeout / 2)
++	 * seconds, the same race happens again. If the driver wins then the
++	 * explicit refresh will reset WS0 to false but if the hardware wins,
++	 * then WS1 is asserted and the system resets.
++	 *
++	 * Avoid the problem by scheduling keepalive heartbeats one second later
++	 * than the WOR timeout.
++	 *
++	 * This workaround might not be needed in a future revision of the
++	 * hardware.
++	 */
++	if (gwdt->need_ws0_race_workaround)
++		wdd->min_hw_heartbeat_ms = timeout * 500 + 1000;
++
+ 	return 0;
+ }
+ 
+@@ -202,12 +234,15 @@ static int sbsa_gwdt_keepalive(struct watchdog_device *wdd)
+ static void sbsa_gwdt_get_version(struct watchdog_device *wdd)
+ {
+ 	struct sbsa_gwdt *gwdt = watchdog_get_drvdata(wdd);
+-	int ver;
++	int iidr, ver, impl;
+ 
+-	ver = readl(gwdt->control_base + SBSA_GWDT_W_IIDR);
+-	ver = (ver >> SBSA_GWDT_VERSION_SHIFT) & SBSA_GWDT_VERSION_MASK;
++	iidr = readl(gwdt->control_base + SBSA_GWDT_W_IIDR);
++	ver = (iidr >> SBSA_GWDT_VERSION_SHIFT) & SBSA_GWDT_VERSION_MASK;
++	impl = (iidr >> SBSA_GWDT_IMPL_SHIFT) & SBSA_GWDT_IMPL_MASK;
+ 
+ 	gwdt->version = ver;
++	gwdt->need_ws0_race_workaround =
++		!action && (impl == SBSA_GWDT_IMPL_MEDIATEK);
+ }
+ 
+ static int sbsa_gwdt_start(struct watchdog_device *wdd)
+@@ -299,6 +334,15 @@ static int sbsa_gwdt_probe(struct platform_device *pdev)
+ 	else
+ 		wdd->max_hw_heartbeat_ms = GENMASK_ULL(47, 0) / gwdt->clk * 1000;
+ 
++	if (gwdt->need_ws0_race_workaround) {
++		/*
++		 * A timeout of 3 seconds means that WOR will be set to 1.5
++		 * seconds and the heartbeat will be scheduled every 2.5
++		 * seconds.
++		 */
++		wdd->min_timeout = 3;
++	}
++
+ 	status = readl(cf_base + SBSA_GWDT_WCS);
+ 	if (status & SBSA_GWDT_WCS_WS1) {
+ 		dev_warn(dev, "System reset by WDT.\n");
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 5b0cb04b2b93f2..42b86e0e707e05 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -34,6 +34,19 @@ int btrfs_should_fragment_free_space(const struct btrfs_block_group *block_group
+ }
+ #endif
+ 
++static inline bool has_unwritten_metadata(struct btrfs_block_group *block_group)
++{
++	/* The meta_write_pointer is available only on the zoned setup. */
++	if (!btrfs_is_zoned(block_group->fs_info))
++		return false;
++
++	if (block_group->flags & BTRFS_BLOCK_GROUP_DATA)
++		return false;
++
++	return block_group->start + block_group->alloc_offset >
++		block_group->meta_write_pointer;
++}
++
+ /*
+  * Return target flags in extended format or 0 if restripe for this chunk_type
+  * is not in progress
+@@ -1244,6 +1257,15 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 		goto out;
+ 
+ 	spin_lock(&block_group->lock);
++	/*
++	 * Hitting this WARN means we removed a block group with an unwritten
++	 * region. It will cause "unable to find chunk map for logical" errors.
++	 */
++	if (WARN_ON(has_unwritten_metadata(block_group)))
++		btrfs_warn(fs_info,
++			   "block group %llu is removed before metadata write out",
++			   block_group->start);
++
+ 	set_bit(BLOCK_GROUP_FLAG_REMOVED, &block_group->runtime_flags);
+ 
+ 	/*
+@@ -1586,8 +1608,9 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		 * needing to allocate extents from the block group.
+ 		 */
+ 		used = btrfs_space_info_used(space_info, true);
+-		if (space_info->total_bytes - block_group->length < used &&
+-		    block_group->zone_unusable < block_group->length) {
++		if ((space_info->total_bytes - block_group->length < used &&
++		     block_group->zone_unusable < block_group->length) ||
++		    has_unwritten_metadata(block_group)) {
+ 			/*
+ 			 * Add a reference for the list, compensate for the ref
+ 			 * drop under the "next" label for the
+@@ -1616,8 +1639,10 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		ret = btrfs_zone_finish(block_group);
+ 		if (ret < 0) {
+ 			btrfs_dec_block_group_ro(block_group);
+-			if (ret == -EAGAIN)
++			if (ret == -EAGAIN) {
++				btrfs_link_bg_list(block_group, &retry_list);
+ 				ret = 0;
++			}
+ 			goto next;
+ 		}
+ 
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 648531fe09002c..8abc066ce51fb4 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2872,6 +2872,7 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans,
+ 	if (ret < 0) {
+ 		int ret2;
+ 
++		btrfs_clear_buffer_dirty(trans, c);
+ 		ret2 = btrfs_free_tree_block(trans, btrfs_root_id(root), c, 0, 1);
+ 		if (ret2 < 0)
+ 			btrfs_abort_transaction(trans, ret2);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 0d6ad7512f217c..4cfcd879dc5eb7 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3561,6 +3561,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		goto fail_sysfs;
+ 	}
+ 
++	btrfs_zoned_reserve_data_reloc_bg(fs_info);
+ 	btrfs_free_zone_cache(fs_info);
+ 
+ 	btrfs_check_active_zone_reservation(fs_info);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index cb6128778a8360..5f16e6a79d1088 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3649,6 +3649,21 @@ btrfs_release_block_group(struct btrfs_block_group *cache,
+ 	btrfs_put_block_group(cache);
+ }
+ 
++static bool find_free_extent_check_size_class(const struct find_free_extent_ctl *ffe_ctl,
++					      const struct btrfs_block_group *bg)
++{
++	if (ffe_ctl->policy == BTRFS_EXTENT_ALLOC_ZONED)
++		return true;
++	if (!btrfs_block_group_should_use_size_class(bg))
++		return true;
++	if (ffe_ctl->loop >= LOOP_WRONG_SIZE_CLASS)
++		return true;
++	if (ffe_ctl->loop >= LOOP_UNSET_SIZE_CLASS &&
++	    bg->size_class == BTRFS_BG_SZ_NONE)
++		return true;
++	return ffe_ctl->size_class == bg->size_class;
++}
++
+ /*
+  * Helper function for find_free_extent().
+  *
+@@ -3670,7 +3685,8 @@ static int find_free_extent_clustered(struct btrfs_block_group *bg,
+ 	if (!cluster_bg)
+ 		goto refill_cluster;
+ 	if (cluster_bg != bg && (cluster_bg->ro ||
+-	    !block_group_bits(cluster_bg, ffe_ctl->flags)))
++	    !block_group_bits(cluster_bg, ffe_ctl->flags) ||
++	    !find_free_extent_check_size_class(ffe_ctl, cluster_bg)))
+ 		goto release_cluster;
+ 
+ 	offset = btrfs_alloc_from_cluster(cluster_bg, last_ptr,
+@@ -4227,21 +4243,6 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
+ 	return -ENOSPC;
+ }
+ 
+-static bool find_free_extent_check_size_class(struct find_free_extent_ctl *ffe_ctl,
+-					      struct btrfs_block_group *bg)
+-{
+-	if (ffe_ctl->policy == BTRFS_EXTENT_ALLOC_ZONED)
+-		return true;
+-	if (!btrfs_block_group_should_use_size_class(bg))
+-		return true;
+-	if (ffe_ctl->loop >= LOOP_WRONG_SIZE_CLASS)
+-		return true;
+-	if (ffe_ctl->loop >= LOOP_UNSET_SIZE_CLASS &&
+-	    bg->size_class == BTRFS_BG_SZ_NONE)
+-		return true;
+-	return ffe_ctl->size_class == bg->size_class;
+-}
+-
+ static int prepare_allocation_clustered(struct btrfs_fs_info *fs_info,
+ 					struct find_free_extent_ctl *ffe_ctl,
+ 					struct btrfs_space_info *space_info,
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 8ce6f45f45e0bb..0a468bbfe01524 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1842,6 +1842,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 	loff_t size;
+ 	size_t fsize = folio_size(folio);
+ 	int ret;
++	bool only_release_metadata = false;
+ 	u64 reserved_space;
+ 	u64 page_start;
+ 	u64 page_end;
+@@ -1862,10 +1863,34 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 	 * end up waiting indefinitely to get a lock on the page currently
+ 	 * being processed by btrfs_page_mkwrite() function.
+ 	 */
+-	ret = btrfs_delalloc_reserve_space(BTRFS_I(inode), &data_reserved,
+-					   page_start, reserved_space);
+-	if (ret < 0)
++	ret = btrfs_check_data_free_space(BTRFS_I(inode), &data_reserved,
++					  page_start, reserved_space, false);
++	if (ret < 0) {
++		size_t write_bytes = reserved_space;
++
++		if (btrfs_check_nocow_lock(BTRFS_I(inode), page_start,
++					   &write_bytes, false) <= 0)
++			goto out_noreserve;
++
++		only_release_metadata = true;
++
++		/*
++		 * Can't write the whole range, there may be shared extents or
++		 * holes in the range, bail out with @only_release_metadata set
++		 * to true so that we unlock the nocow lock before returning the
++		 * error.
++		 */
++		if (write_bytes < reserved_space)
++			goto out_noreserve;
++	}
++	ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), reserved_space,
++					      reserved_space, false);
++	if (ret < 0) {
++		if (!only_release_metadata)
++			btrfs_free_reserved_data_space(BTRFS_I(inode), data_reserved,
++						       page_start, reserved_space);
+ 		goto out_noreserve;
++	}
+ 
+ 	ret = file_update_time(vmf->vma->vm_file);
+ 	if (ret < 0)
+@@ -1906,10 +1931,16 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 	if (folio_contains(folio, (size - 1) >> PAGE_SHIFT)) {
+ 		reserved_space = round_up(size - page_start, fs_info->sectorsize);
+ 		if (reserved_space < fsize) {
++			const u64 to_free = fsize - reserved_space;
++
+ 			end = page_start + reserved_space - 1;
+-			btrfs_delalloc_release_space(BTRFS_I(inode),
+-					data_reserved, end + 1,
+-					fsize - reserved_space, true);
++			if (only_release_metadata)
++				btrfs_delalloc_release_metadata(BTRFS_I(inode),
++								to_free, true);
++			else
++				btrfs_delalloc_release_space(BTRFS_I(inode),
++							     data_reserved, end + 1,
++							     to_free, true);
+ 		}
+ 	}
+ 
+@@ -1946,10 +1977,16 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 
+ 	btrfs_set_inode_last_sub_trans(BTRFS_I(inode));
+ 
++	if (only_release_metadata)
++		btrfs_set_extent_bit(io_tree, page_start, end, EXTENT_NORESERVE,
++				     &cached_state);
++
+ 	btrfs_unlock_extent(io_tree, page_start, page_end, &cached_state);
+ 	up_read(&BTRFS_I(inode)->i_mmap_lock);
+ 
+ 	btrfs_delalloc_release_extents(BTRFS_I(inode), fsize);
++	if (only_release_metadata)
++		btrfs_check_nocow_unlock(BTRFS_I(inode));
+ 	sb_end_pagefault(inode->i_sb);
+ 	extent_changeset_free(data_reserved);
+ 	return VM_FAULT_LOCKED;
+@@ -1959,10 +1996,16 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 	up_read(&BTRFS_I(inode)->i_mmap_lock);
+ out:
+ 	btrfs_delalloc_release_extents(BTRFS_I(inode), fsize);
+-	btrfs_delalloc_release_space(BTRFS_I(inode), data_reserved, page_start,
+-				     reserved_space, true);
++	if (only_release_metadata)
++		btrfs_delalloc_release_metadata(BTRFS_I(inode), reserved_space, true);
++	else
++		btrfs_delalloc_release_space(BTRFS_I(inode), data_reserved,
++					     page_start, reserved_space, true);
+ 	extent_changeset_free(data_reserved);
+ out_noreserve:
++	if (only_release_metadata)
++		btrfs_check_nocow_unlock(BTRFS_I(inode));
++
+ 	sb_end_pagefault(inode->i_sb);
+ 
+ 	if (ret < 0)
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index fc66872b4c747f..4ae34c22ff1de1 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2010,7 +2010,7 @@ static int nocow_one_range(struct btrfs_inode *inode, struct folio *locked_folio
+ 	 * cleared by the caller.
+ 	 */
+ 	if (ret < 0)
+-		btrfs_cleanup_ordered_extents(inode, file_pos, end);
++		btrfs_cleanup_ordered_extents(inode, file_pos, len);
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 8a60983a697c65..36e485d845308c 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4829,7 +4829,8 @@ static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue
+ #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT)
+ 		copy_end = offsetofend(struct btrfs_ioctl_encoded_io_args_32, flags);
+ #else
+-		return -ENOTTY;
++		ret = -ENOTTY;
++		goto out_acct;
+ #endif
+ 	} else {
+ 		copy_end = copy_end_kernel;
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index b3176edbde82a0..e1a34e69927dd3 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -630,22 +630,30 @@ bool btrfs_check_quota_leak(const struct btrfs_fs_info *fs_info)
+ 
+ /*
+  * This is called from close_ctree() or open_ctree() or btrfs_quota_disable(),
+- * first two are in single-threaded paths.And for the third one, we have set
+- * quota_root to be null with qgroup_lock held before, so it is safe to clean
+- * up the in-memory structures without qgroup_lock held.
++ * first two are in single-threaded paths.
+  */
+ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
+ {
+ 	struct rb_node *n;
+ 	struct btrfs_qgroup *qgroup;
+ 
++	/*
++	 * btrfs_quota_disable() can be called concurrently with
++	 * btrfs_qgroup_rescan() -> qgroup_rescan_zero_tracking(), so take the
++	 * lock.
++	 */
++	spin_lock(&fs_info->qgroup_lock);
+ 	while ((n = rb_first(&fs_info->qgroup_tree))) {
+ 		qgroup = rb_entry(n, struct btrfs_qgroup, node);
+ 		rb_erase(n, &fs_info->qgroup_tree);
+ 		__del_qgroup_rb(qgroup);
++		spin_unlock(&fs_info->qgroup_lock);
+ 		btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
+ 		kfree(qgroup);
++		spin_lock(&fs_info->qgroup_lock);
+ 	}
++	spin_unlock(&fs_info->qgroup_lock);
++
+ 	/*
+ 	 * We call btrfs_free_qgroup_config() when unmounting
+ 	 * filesystem and disabling quota, so we set qgroup_ulist
+@@ -1354,11 +1362,14 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 
+ 	/*
+ 	 * We have nothing held here and no trans handle, just return the error
+-	 * if there is one.
++	 * if there is one and set back the quota enabled bit since we didn't
++	 * actually disable quotas.
+ 	 */
+ 	ret = flush_reservations(fs_info);
+-	if (ret)
++	if (ret) {
++		set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+ 		return ret;
++	}
+ 
+ 	/*
+ 	 * 1 For the root item
+@@ -1470,7 +1481,6 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 				    struct btrfs_qgroup *src, int sign)
+ {
+ 	struct btrfs_qgroup *qgroup;
+-	struct btrfs_qgroup *cur;
+ 	LIST_HEAD(qgroup_list);
+ 	u64 num_bytes = src->excl;
+ 	int ret = 0;
+@@ -1480,7 +1490,7 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 		goto out;
+ 
+ 	qgroup_iterator_add(&qgroup_list, qgroup);
+-	list_for_each_entry(cur, &qgroup_list, iterator) {
++	list_for_each_entry(qgroup, &qgroup_list, iterator) {
+ 		struct btrfs_qgroup_list *glist;
+ 
+ 		qgroup->rfer += sign * num_bytes;
+@@ -1679,9 +1689,6 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ 	struct btrfs_qgroup *prealloc = NULL;
+ 	int ret = 0;
+ 
+-	if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_DISABLED)
+-		return 0;
+-
+ 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+ 	if (!fs_info->quota_root) {
+ 		ret = -ENOTCONN;
+@@ -4036,12 +4043,21 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info)
+ 	qgroup_rescan_zero_tracking(fs_info);
+ 
+ 	mutex_lock(&fs_info->qgroup_rescan_lock);
+-	fs_info->qgroup_rescan_running = true;
+-	btrfs_queue_work(fs_info->qgroup_rescan_workers,
+-			 &fs_info->qgroup_rescan_work);
++	/*
++	 * The rescan worker is only for full accounting qgroups; check if it's
++	 * enabled, as it is pointless to queue it otherwise. A concurrent quota
++	 * disable may also have just cleared BTRFS_FS_QUOTA_ENABLED.
++	 */
++	if (btrfs_qgroup_full_accounting(fs_info)) {
++		fs_info->qgroup_rescan_running = true;
++		btrfs_queue_work(fs_info->qgroup_rescan_workers,
++				 &fs_info->qgroup_rescan_work);
++	} else {
++		ret = -ENOTCONN;
++	}
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
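btrfs_free_qgroup_config() now walks the rbtree under qgroup_lock but must call sleeping functions for each node, so it unlinks a node while locked, drops the lock around the cleanup, and re-takes it before fetching the next first node. The same drop/reacquire loop in a userspace pthread sketch (illustrative only):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *head;

static void slow_cleanup(struct node *n)
{
	free(n);	/* stands in for the sleeping work */
}

static void free_all(void)
{
	struct node *n;

	pthread_mutex_lock(&lock);
	while ((n = head)) {
		head = n->next;		/* unlink while holding the lock */
		pthread_mutex_unlock(&lock);
		slow_cleanup(n);	/* sleeping call: lock not held */
		pthread_mutex_lock(&lock);
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			return 1;
		n->next = head;
		head = n;
	}
	free_all();
	puts("done");
	return 0;
}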
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 02086191630d01..068c7a1ad73173 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -594,6 +594,25 @@ static struct btrfs_root *create_reloc_root(struct btrfs_trans_handle *trans,
+ 	if (btrfs_root_id(root) == objectid) {
+ 		u64 commit_root_gen;
+ 
++		/*
++		 * Relocation will wait for cleaner thread, and any half-dropped
++		 * subvolume will be fully cleaned up at mount time.
++		 * So here we shouldn't hit a subvolume with non-zero drop_progress.
++		 *
++		 * If this isn't the case, error out since it can make us attempt to
++		 * drop references for extents that were already dropped before.
++		 */
++		if (unlikely(btrfs_disk_key_objectid(&root->root_item.drop_progress))) {
++			struct btrfs_key cpu_key;
++
++			btrfs_disk_key_to_cpu(&cpu_key, &root->root_item.drop_progress);
++			btrfs_err(fs_info,
++	"cannot relocate partially dropped subvolume %llu, drop progress key (%llu %u %llu)",
++				  objectid, cpu_key.objectid, cpu_key.type, cpu_key.offset);
++			ret = -EUCLEAN;
++			goto fail;
++		}
++
+ 		/* called by btrfs_init_reloc_root */
+ 		ret = btrfs_copy_root(trans, root, root->commit_root, &eb,
+ 				      BTRFS_TREE_RELOC_OBJECTID);
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 2891ec4056c65b..8568e7a9500779 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/bsearch.h>
++#include <linux/falloc.h>
+ #include <linux/fs.h>
+ #include <linux/file.h>
+ #include <linux/sort.h>
+@@ -5411,6 +5412,30 @@ static int send_update_extent(struct send_ctx *sctx,
+ 	return ret;
+ }
+ 
++static int send_fallocate(struct send_ctx *sctx, u32 mode, u64 offset, u64 len)
++{
++	struct fs_path *path;
++	int ret;
++
++	path = get_cur_inode_path(sctx);
++	if (IS_ERR(path))
++		return PTR_ERR(path);
++
++	ret = begin_cmd(sctx, BTRFS_SEND_C_FALLOCATE);
++	if (ret < 0)
++		return ret;
++
++	TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, path);
++	TLV_PUT_U32(sctx, BTRFS_SEND_A_FALLOCATE_MODE, mode);
++	TLV_PUT_U64(sctx, BTRFS_SEND_A_FILE_OFFSET, offset);
++	TLV_PUT_U64(sctx, BTRFS_SEND_A_SIZE, len);
++
++	ret = send_cmd(sctx);
++
++tlv_put_failure:
++	return ret;
++}
++
+ static int send_hole(struct send_ctx *sctx, u64 end)
+ {
+ 	struct fs_path *p = NULL;
+@@ -5418,6 +5443,14 @@ static int send_hole(struct send_ctx *sctx, u64 end)
+ 	u64 offset = sctx->cur_inode_last_extent;
+ 	int ret = 0;
+ 
++	/*
++	 * Starting with send stream v2 we have fallocate and can use it to
++	 * punch holes instead of sending writes full of zeroes.
++	 */
++	if (proto_cmd_ok(sctx, BTRFS_SEND_C_FALLOCATE))
++		return send_fallocate(sctx, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
++				      offset, end - offset);
++
+ 	/*
+ 	 * A hole that starts at EOF or beyond it. Since we do not yet support
+ 	 * fallocate (for extent preallocation and hole punching), sending a
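On the receiving end, the new BTRFS_SEND_C_FALLOCATE command presumably reduces to an fallocate() call with FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE. A standalone Linux sketch of that call (not the actual btrfs-progs receive code):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("demo.dat", O_RDWR | O_CREAT, 0644);

	if (fd < 0)
		return 1;
	if (ftruncate(fd, 1 << 20))	/* 1 MiB file */
		return 1;
	/* deallocate 64 KiB at offset 128 KiB without changing the size */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      128 << 10, 64 << 10))
		perror("fallocate");
	close(fd);
	return 0;
}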
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index b96195d6480f14..b9d7538764607f 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1735,8 +1735,10 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
+ 
+ 	ret = btrfs_create_qgroup(trans, objectid);
+ 	if (ret && ret != -EEXIST) {
+-		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		if (ret != -ENOTCONN || btrfs_qgroup_enabled(fs_info)) {
++			btrfs_abort_transaction(trans, ret);
++			goto fail;
++		}
+ 	}
+ 
+ 	/*
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index cea8a7e9d6d3b1..7241229d218b3a 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -321,8 +321,7 @@ struct walk_control {
+ 
+ 	/*
+ 	 * Ignore any items from the inode currently being processed. Needs
+-	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
+-	 * the LOG_WALK_REPLAY_INODES stage.
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY.
+ 	 */
+ 	bool ignore_cur_inode;
+ 
+@@ -1383,6 +1382,8 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 	dir = btrfs_iget_logging(parent_objectid, root);
+ 	if (IS_ERR(dir)) {
+ 		ret = PTR_ERR(dir);
++		if (ret == -ENOENT)
++			ret = 0;
+ 		dir = NULL;
+ 		goto out;
+ 	}
+@@ -1398,6 +1399,8 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 		if (log_ref_ver) {
+ 			ret = extref_get_fields(eb, ref_ptr, &name,
+ 						&ref_index, &parent_objectid);
++			if (ret)
++				goto out;
+ 			/*
+ 			 * parent object can change from one array
+ 			 * item to another.
+@@ -1407,14 +1410,30 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				if (IS_ERR(dir)) {
+ 					ret = PTR_ERR(dir);
+ 					dir = NULL;
++					/*
++					 * A new parent dir may have not been
++					 * logged and not exist in the subvolume
++					 * tree, see the comment above before
++					 * the loop when getting the first
++					 * parent dir.
++					 */
++					if (ret == -ENOENT) {
++						/*
++						 * The next extref may refer to
++						 * another parent dir that
++						 * exists, so continue.
++						 */
++						ret = 0;
++						goto next;
++					}
+ 					goto out;
+ 				}
+ 			}
+ 		} else {
+ 			ret = ref_get_fields(eb, ref_ptr, &name, &ref_index);
++			if (ret)
++				goto out;
+ 		}
+-		if (ret)
+-			goto out;
+ 
+ 		ret = inode_in_dir(root, path, btrfs_ino(dir), btrfs_ino(inode),
+ 				   ref_index, &name);
+@@ -1448,10 +1467,11 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 		}
+ 		/* Else, ret == 1, we already have a perfect match, we're done. */
+ 
++next:
+ 		ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + name.len;
+ 		kfree(name.name);
+ 		name.name = NULL;
+-		if (log_ref_ver) {
++		if (log_ref_ver && dir) {
+ 			iput(&dir->vfs_inode);
+ 			dir = NULL;
+ 		}
+@@ -2413,23 +2433,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 	nritems = btrfs_header_nritems(eb);
+ 	for (i = 0; i < nritems; i++) {
+-		btrfs_item_key_to_cpu(eb, &key, i);
++		struct btrfs_inode_item *inode_item;
+ 
+-		/* inode keys are done during the first stage */
+-		if (key.type == BTRFS_INODE_ITEM_KEY &&
+-		    wc->stage == LOG_WALK_REPLAY_INODES) {
+-			struct btrfs_inode_item *inode_item;
+-			u32 mode;
++		btrfs_item_key_to_cpu(eb, &key, i);
+ 
+-			inode_item = btrfs_item_ptr(eb, i,
+-					    struct btrfs_inode_item);
++		if (key.type == BTRFS_INODE_ITEM_KEY) {
++			inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
+ 			/*
+-			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
+-			 * and never got linked before the fsync, skip it, as
+-			 * replaying it is pointless since it would be deleted
+-			 * later. We skip logging tmpfiles, but it's always
+-			 * possible we are replaying a log created with a kernel
+-			 * that used to log tmpfiles.
++			 * An inode with no links is either:
++			 *
++			 * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
++			 *    got linked before the fsync, skip it, as replaying
++			 *    it is pointless since it would be deleted later.
++			 *    We skip logging tmpfiles, but it's always possible
++			 *    we are replaying a log created with a kernel that
++			 *    used to log tmpfiles;
++			 *
++			 * 2) A non-tmpfile which got its last link deleted
++			 *    while holding an open fd on it and later got
++			 *    fsynced through that fd. We always log the
++			 *    parent inodes when inode->last_unlink_trans is
++			 *    set to the current transaction, so ignore all the
++			 *    inode items for this inode. We will delete the
++			 *    inode when processing the parent directory with
++			 *    replay_dir_deletes().
+ 			 */
+ 			if (btrfs_inode_nlink(eb, inode_item) == 0) {
+ 				wc->ignore_cur_inode = true;
+@@ -2437,8 +2464,14 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 			} else {
+ 				wc->ignore_cur_inode = false;
+ 			}
+-			ret = replay_xattr_deletes(wc->trans, root, log,
+-						   path, key.objectid);
++		}
++
++		/* Inode keys are done during the first stage. */
++		if (key.type == BTRFS_INODE_ITEM_KEY &&
++		    wc->stage == LOG_WALK_REPLAY_INODES) {
++			u32 mode;
++
++			ret = replay_xattr_deletes(wc->trans, root, log, path, key.objectid);
+ 			if (ret)
+ 				break;
+ 			mode = btrfs_inode_mode(eb, inode_item);
+@@ -2519,9 +2552,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 			   key.type == BTRFS_INODE_EXTREF_KEY) {
+ 			ret = add_inode_ref(wc->trans, root, log, path,
+ 					    eb, i, &key);
+-			if (ret && ret != -ENOENT)
++			if (ret)
+ 				break;
+-			ret = 0;
+ 		} else if (key.type == BTRFS_EXTENT_DATA_KEY) {
+ 			ret = replay_one_extent(wc->trans, root, path,
+ 						eb, i, &key);
+@@ -2542,14 +2574,14 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ /*
+  * Correctly adjust the reserved bytes occupied by a log tree extent buffer
+  */
+-static void unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
++static int unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
+ {
+ 	struct btrfs_block_group *cache;
+ 
+ 	cache = btrfs_lookup_block_group(fs_info, start);
+ 	if (!cache) {
+ 		btrfs_err(fs_info, "unable to find block group for %llu", start);
+-		return;
++		return -ENOENT;
+ 	}
+ 
+ 	spin_lock(&cache->space_info->lock);
+@@ -2560,27 +2592,22 @@ static void unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
+ 	spin_unlock(&cache->space_info->lock);
+ 
+ 	btrfs_put_block_group(cache);
++
++	return 0;
+ }
+ 
+ static int clean_log_buffer(struct btrfs_trans_handle *trans,
+ 			    struct extent_buffer *eb)
+ {
+-	int ret;
+-
+ 	btrfs_tree_lock(eb);
+ 	btrfs_clear_buffer_dirty(trans, eb);
+ 	wait_on_extent_buffer_writeback(eb);
+ 	btrfs_tree_unlock(eb);
+ 
+-	if (trans) {
+-		ret = btrfs_pin_reserved_extent(trans, eb);
+-		if (ret)
+-			return ret;
+-	} else {
+-		unaccount_log_buffer(eb->fs_info, eb->start);
+-	}
++	if (trans)
++		return btrfs_pin_reserved_extent(trans, eb);
+ 
+-	return 0;
++	return unaccount_log_buffer(eb->fs_info, eb->start);
+ }
+ 
+ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
+@@ -4211,6 +4238,9 @@ static void fill_inode_item(struct btrfs_trans_handle *trans,
+ 	btrfs_set_token_timespec_nsec(&token, &item->ctime,
+ 				      inode_get_ctime_nsec(inode));
+ 
++	btrfs_set_timespec_sec(leaf, &item->otime, BTRFS_I(inode)->i_otime_sec);
++	btrfs_set_timespec_nsec(leaf, &item->otime, BTRFS_I(inode)->i_otime_nsec);
++
+ 	/*
+ 	 * We do not need to set the nbytes field, in fact during a fast fsync
+ 	 * its value may not even be correct, since a fast fsync does not wait
+@@ -7279,11 +7309,14 @@ int btrfs_recover_log_trees(struct btrfs_root *log_root_tree)
+ 
+ 		wc.replay_dest->log_root = log;
+ 		ret = btrfs_record_root_in_trans(trans, wc.replay_dest);
+-		if (ret)
++		if (ret) {
+ 			/* The loop needs to continue due to the root refs */
+ 			btrfs_abort_transaction(trans, ret);
+-		else
++		} else {
+ 			ret = walk_log_tree(trans, log, &wc);
++			if (ret)
++				btrfs_abort_transaction(trans, ret);
++		}
+ 
+ 		if (!ret && wc.stage == LOG_WALK_REPLAY_ALL) {
+ 			ret = fixup_inode_link_counts(trans, wc.replay_dest,
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 9430b34d3cbb8e..5439d837471617 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -17,6 +17,7 @@
+ #include "fs.h"
+ #include "accessors.h"
+ #include "bio.h"
++#include "transaction.h"
+ 
+ /* Maximum number of zones to report per blkdev_report_zones() call */
+ #define BTRFS_REPORT_NR_ZONES   4096
+@@ -2501,6 +2502,66 @@ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg)
+ 	spin_unlock(&fs_info->relocation_bg_lock);
+ }
+ 
++void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info)
++{
++	struct btrfs_space_info *data_sinfo = fs_info->data_sinfo;
++	struct btrfs_space_info *space_info = data_sinfo->sub_group[0];
++	struct btrfs_trans_handle *trans;
++	struct btrfs_block_group *bg;
++	struct list_head *bg_list;
++	u64 alloc_flags;
++	bool initial = false;
++	bool did_chunk_alloc = false;
++	int index;
++	int ret;
++
++	if (!btrfs_is_zoned(fs_info))
++		return;
++
++	if (fs_info->data_reloc_bg)
++		return;
++
++	if (sb_rdonly(fs_info->sb))
++		return;
++
++	ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);
++	alloc_flags = btrfs_get_alloc_profile(fs_info, space_info->flags);
++	index = btrfs_bg_flags_to_raid_index(alloc_flags);
++
++	bg_list = &data_sinfo->block_groups[index];
++again:
++	list_for_each_entry(bg, bg_list, list) {
++		if (bg->used > 0)
++			continue;
++
++		if (!initial) {
++			initial = true;
++			continue;
++		}
++
++		fs_info->data_reloc_bg = bg->start;
++		set_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &bg->runtime_flags);
++		btrfs_zone_activate(bg);
++
++		return;
++	}
++
++	if (did_chunk_alloc)
++		return;
++
++	trans = btrfs_join_transaction(fs_info->tree_root);
++	if (IS_ERR(trans))
++		return;
++
++	ret = btrfs_chunk_alloc(trans, space_info, alloc_flags, CHUNK_ALLOC_FORCE);
++	btrfs_end_transaction(trans);
++	if (ret == 1) {
++		did_chunk_alloc = true;
++		bg_list = &space_info->block_groups[index];
++		goto again;
++	}
++}
++
+ void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info)
+ {
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
+@@ -2523,8 +2584,8 @@ bool btrfs_zoned_should_reclaim(const struct btrfs_fs_info *fs_info)
+ {
+ 	struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
+ 	struct btrfs_device *device;
++	u64 total = btrfs_super_total_bytes(fs_info->super_copy);
+ 	u64 used = 0;
+-	u64 total = 0;
+ 	u64 factor;
+ 
+ 	ASSERT(btrfs_is_zoned(fs_info));
+@@ -2537,7 +2598,6 @@ bool btrfs_zoned_should_reclaim(const struct btrfs_fs_info *fs_info)
+ 		if (!device->bdev)
+ 			continue;
+ 
+-		total += device->disk_total_bytes;
+ 		used += device->bytes_used;
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+@@ -2591,7 +2651,7 @@ int btrfs_zone_finish_one_bg(struct btrfs_fs_info *fs_info)
+ 
+ 		spin_lock(&block_group->lock);
+ 		if (block_group->reserved || block_group->alloc_offset == 0 ||
+-		    (block_group->flags & BTRFS_BLOCK_GROUP_SYSTEM) ||
++		    !(block_group->flags & BTRFS_BLOCK_GROUP_DATA) ||
+ 		    test_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &block_group->runtime_flags)) {
+ 			spin_unlock(&block_group->lock);
+ 			continue;
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index 9672bf4c333552..6e11533b8e14c2 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -88,6 +88,7 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
+ void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
+ 				   struct extent_buffer *eb);
+ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
++void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info);
+ void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
+ bool btrfs_zoned_should_reclaim(const struct btrfs_fs_info *fs_info);
+ void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,
+@@ -241,6 +242,8 @@ static inline void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
+ 
+ static inline void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) { }
+ 
++static inline void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info) { }
++
+ static inline void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info) { }
+ 
+ static inline bool btrfs_zoned_should_reclaim(const struct btrfs_fs_info *fs_info)
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index c1d92074b65c50..6e7164530a1e2e 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -45,6 +45,23 @@
+  */
+ #undef FSCRYPT_MAX_KEY_SIZE
+ 
++/*
++ * This mask is passed as the third argument to the crypto_alloc_*() functions
++ * to prevent fscrypt from using the Crypto API drivers for non-inline crypto
++ * engines.  Those drivers have been problematic for fscrypt.  fscrypt users
++ * have reported hangs and even incorrect en/decryption with these drivers.
++ * Since going to the driver, off CPU, and back again is really slow, such
++ * drivers can be over 50 times slower than the CPU-based code for fscrypt's
++ * workload.  Even on platforms that lack AES instructions on the CPU, using the
++ * offloads has been shown to be slower, even staying with AES.  (Of course,
++ * Adiantum is faster still, and is the recommended option on such platforms...)
++ *
++ * Note that fscrypt also supports inline crypto engines.  Those don't use the
++ * Crypto API and work much better than the old-style (non-inline) engines.
++ */
++#define FSCRYPT_CRYPTOAPI_MASK \
++	(CRYPTO_ALG_ALLOCATES_MEMORY | CRYPTO_ALG_KERN_DRIVER_ONLY)
++
+ #define FSCRYPT_CONTEXT_V1	1
+ #define FSCRYPT_CONTEXT_V2	2
+ 
+diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
+index 0f3028adc9c72a..5b9c21cfe2b453 100644
+--- a/fs/crypto/hkdf.c
++++ b/fs/crypto/hkdf.c
+@@ -58,7 +58,7 @@ int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
+ 	u8 prk[HKDF_HASHLEN];
+ 	int err;
+ 
+-	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
++	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, FSCRYPT_CRYPTOAPI_MASK);
+ 	if (IS_ERR(hmac_tfm)) {
+ 		fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
+ 			    PTR_ERR(hmac_tfm));
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index 0d71843af94698..d8113a7196979b 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -103,7 +103,8 @@ fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
+ 	struct crypto_skcipher *tfm;
+ 	int err;
+ 
+-	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
++	tfm = crypto_alloc_skcipher(mode->cipher_str, 0,
++				    FSCRYPT_CRYPTOAPI_MASK);
+ 	if (IS_ERR(tfm)) {
+ 		if (PTR_ERR(tfm) == -ENOENT) {
+ 			fscrypt_warn(inode,
+diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
+index b70521c55132b9..158ceae8a5bce4 100644
+--- a/fs/crypto/keysetup_v1.c
++++ b/fs/crypto/keysetup_v1.c
+@@ -52,7 +52,8 @@ static int derive_key_aes(const u8 *master_key,
+ 	struct skcipher_request *req = NULL;
+ 	DECLARE_CRYPTO_WAIT(wait);
+ 	struct scatterlist src_sg, dst_sg;
+-	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
++	struct crypto_skcipher *tfm =
++		crypto_alloc_skcipher("ecb(aes)", 0, FSCRYPT_CRYPTOAPI_MASK);
+ 
+ 	if (IS_ERR(tfm)) {
+ 		res = PTR_ERR(tfm);
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index e1e9f06e8342f3..799fef437aa8e7 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -313,8 +313,8 @@ static int erofs_read_superblock(struct super_block *sb)
+ 	sbi->islotbits = ilog2(sizeof(struct erofs_inode_compact));
+ 	if (erofs_sb_has_48bit(sbi) && dsb->rootnid_8b) {
+ 		sbi->root_nid = le64_to_cpu(dsb->rootnid_8b);
+-		sbi->dif0.blocks = (sbi->dif0.blocks << 32) |
+-				le16_to_cpu(dsb->rb.blocks_hi);
++		sbi->dif0.blocks = sbi->dif0.blocks |
++				((u64)le16_to_cpu(dsb->rb.blocks_hi) << 32);
+ 	} else {
+ 		sbi->root_nid = le16_to_cpu(dsb->rb.rootnid_2b);
+ 	}
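The erofs change is a pure operator-ordering fix: dif0.blocks already holds the low 32 bits of the 48-bit block count and dsb->rb.blocks_hi supplies bits 32-47, so the high half must be widened and shifted up rather than shifting the accumulated value. A standalone demonstration with made-up sample values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t blocks_lo = 0x12345678;	/* sample low 32 bits */
	uint16_t blocks_hi = 0x9abc;		/* sample bits 32..47 */

	/* Old, wrong: shifts the already-accumulated low half up. */
	uint64_t bad  = (blocks_lo << 32) | blocks_hi;
	/* Fixed: widen blocks_hi first, then place it above the low half. */
	uint64_t good = blocks_lo | ((uint64_t)blocks_hi << 32);

	printf("bad=%#llx good=%#llx\n",
	       (unsigned long long)bad, (unsigned long long)good);
	return 0;
}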
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 3103b932b67461..ee060e26f51d2a 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -996,6 +996,7 @@ int exfat_find_dir_entry(struct super_block *sb, struct exfat_inode_info *ei,
+ 	struct exfat_hint_femp candi_empty;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 	int num_entries = exfat_calc_num_entries(p_uniname);
++	unsigned int clu_count = 0;
+ 
+ 	if (num_entries < 0)
+ 		return num_entries;
+@@ -1133,6 +1134,10 @@ int exfat_find_dir_entry(struct super_block *sb, struct exfat_inode_info *ei,
+ 		} else {
+ 			if (exfat_get_next_cluster(sb, &clu.dir))
+ 				return -EIO;
++
++			/* break if the cluster chain includes a loop */
++			if (unlikely(++clu_count > EXFAT_DATA_CLUSTER_COUNT(sbi)))
++				goto not_found;
+ 		}
+ 	}
+ 
+@@ -1195,6 +1200,7 @@ int exfat_count_dir_entries(struct super_block *sb, struct exfat_chain *p_dir)
+ 	int i, count = 0;
+ 	int dentries_per_clu;
+ 	unsigned int entry_type;
++	unsigned int clu_count = 0;
+ 	struct exfat_chain clu;
+ 	struct exfat_dentry *ep;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+@@ -1227,6 +1233,12 @@ int exfat_count_dir_entries(struct super_block *sb, struct exfat_chain *p_dir)
+ 		} else {
+ 			if (exfat_get_next_cluster(sb, &(clu.dir)))
+ 				return -EIO;
++
++			if (unlikely(++clu_count > sbi->used_clusters)) {
++				exfat_fs_error(sb, "FAT or bitmap is corrupted");
++				return -EIO;
++			}
++
+ 		}
+ 	}
+ 
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 23065f948ae752..232cc7f8ab92fc 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -490,5 +490,15 @@ int exfat_count_num_clusters(struct super_block *sb,
+ 	}
+ 
+ 	*ret_count = count;
++
++	/*
++	 * Since exfat_count_used_clusters() is not called, sbi->used_clusters
++	 * cannot be used here.
++	 */
++	if (unlikely(i == sbi->num_clusters && clu != EXFAT_EOF_CLUSTER)) {
++		exfat_fs_error(sb, "The cluster chain has a loop");
++		return -EIO;
++	}
++
+ 	return 0;
+ }
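All of the exfat hunks in this patch apply the same loop-detection idea: a valid cluster chain can never contain more links than there are clusters, so a step counter that exceeds that bound proves the chain loops back on itself. A self-contained sketch of the pattern, with an invented toy FAT:

#include <stdio.h>

#define NUM_CLUSTERS 8
#define EOF_CLUSTER  0xffffffffu

/* Toy FAT: index -> next cluster; 2 -> 3 -> 4 -> 2 is a loop. */
static unsigned int fat[NUM_CLUSTERS] = {
	0, 0, 3, 4, 2, EOF_CLUSTER, 0, 0
};

int main(void)
{
	unsigned int clu = 2, count = 0;

	while (clu != EOF_CLUSTER) {
		/* A valid chain has at most NUM_CLUSTERS links, so
		 * exceeding that bound proves a loop. */
		if (++count > NUM_CLUSTERS) {
			fprintf(stderr, "cluster chain has a loop\n");
			return 1;
		}
		clu = fat[clu];
	}
	printf("chain length %u\n", count);
	return 0;
}

The bound used upstream varies with what is known at each call site (EXFAT_DATA_CLUSTER_COUNT(), sbi->used_clusters, or sbi->num_clusters), but the counter technique is the same in every hunk.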
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index fede0283d6e21f..f5f1c4e8a29fd2 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -890,6 +890,7 @@ static int exfat_check_dir_empty(struct super_block *sb,
+ {
+ 	int i, dentries_per_clu;
+ 	unsigned int type;
++	unsigned int clu_count = 0;
+ 	struct exfat_chain clu;
+ 	struct exfat_dentry *ep;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+@@ -926,6 +927,10 @@ static int exfat_check_dir_empty(struct super_block *sb,
+ 		} else {
+ 			if (exfat_get_next_cluster(sb, &(clu.dir)))
+ 				return -EIO;
++
++			/* break if the cluster chain includes a loop */
++			if (unlikely(++clu_count > EXFAT_DATA_CLUSTER_COUNT(sbi)))
++				break;
+ 		}
+ 	}
+ 
+diff --git a/fs/exfat/super.c b/fs/exfat/super.c
+index 7ed858937d45d2..3a9ec75ab45254 100644
+--- a/fs/exfat/super.c
++++ b/fs/exfat/super.c
+@@ -341,13 +341,12 @@ static void exfat_hash_init(struct super_block *sb)
+ 		INIT_HLIST_HEAD(&sbi->inode_hashtable[i]);
+ }
+ 
+-static int exfat_read_root(struct inode *inode)
++static int exfat_read_root(struct inode *inode, struct exfat_chain *root_clu)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ 	struct exfat_inode_info *ei = EXFAT_I(inode);
+-	struct exfat_chain cdir;
+-	int num_subdirs, num_clu = 0;
++	int num_subdirs;
+ 
+ 	exfat_chain_set(&ei->dir, sbi->root_dir, 0, ALLOC_FAT_CHAIN);
+ 	ei->entry = -1;
+@@ -360,12 +359,9 @@ static int exfat_read_root(struct inode *inode)
+ 	ei->hint_stat.clu = sbi->root_dir;
+ 	ei->hint_femp.eidx = EXFAT_HINT_NONE;
+ 
+-	exfat_chain_set(&cdir, sbi->root_dir, 0, ALLOC_FAT_CHAIN);
+-	if (exfat_count_num_clusters(sb, &cdir, &num_clu))
+-		return -EIO;
+-	i_size_write(inode, num_clu << sbi->cluster_size_bits);
++	i_size_write(inode, EXFAT_CLU_TO_B(root_clu->size, sbi));
+ 
+-	num_subdirs = exfat_count_dir_entries(sb, &cdir);
++	num_subdirs = exfat_count_dir_entries(sb, root_clu);
+ 	if (num_subdirs < 0)
+ 		return -EIO;
+ 	set_nlink(inode, num_subdirs + EXFAT_MIN_SUBDIR);
+@@ -578,7 +574,8 @@ static int exfat_verify_boot_region(struct super_block *sb)
+ }
+ 
+ /* mount the file system volume */
+-static int __exfat_fill_super(struct super_block *sb)
++static int __exfat_fill_super(struct super_block *sb,
++		struct exfat_chain *root_clu)
+ {
+ 	int ret;
+ 	struct exfat_sb_info *sbi = EXFAT_SB(sb);
+@@ -595,6 +592,18 @@ static int __exfat_fill_super(struct super_block *sb)
+ 		goto free_bh;
+ 	}
+ 
++	/*
++	 * Call exfat_count_num_clusters() before searching for up-case and
++	 * bitmap directory entries to avoid infinite loop if they are missing
++	 * and the cluster chain includes a loop.
++	 */
++	exfat_chain_set(root_clu, sbi->root_dir, 0, ALLOC_FAT_CHAIN);
++	ret = exfat_count_num_clusters(sb, root_clu, &root_clu->size);
++	if (ret) {
++		exfat_err(sb, "failed to count the number of clusters in root");
++		goto free_bh;
++	}
++
+ 	ret = exfat_create_upcase_table(sb);
+ 	if (ret) {
+ 		exfat_err(sb, "failed to load upcase table");
+@@ -627,6 +636,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	struct exfat_sb_info *sbi = sb->s_fs_info;
+ 	struct exfat_mount_options *opts = &sbi->options;
+ 	struct inode *root_inode;
++	struct exfat_chain root_clu;
+ 	int err;
+ 
+ 	if (opts->allow_utime == (unsigned short)-1)
+@@ -645,7 +655,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	sb->s_time_min = EXFAT_MIN_TIMESTAMP_SECS;
+ 	sb->s_time_max = EXFAT_MAX_TIMESTAMP_SECS;
+ 
+-	err = __exfat_fill_super(sb);
++	err = __exfat_fill_super(sb, &root_clu);
+ 	if (err) {
+ 		exfat_err(sb, "failed to recognize exfat type");
+ 		goto check_nls_io;
+@@ -680,7 +690,7 @@ static int exfat_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	root_inode->i_ino = EXFAT_ROOT_INO;
+ 	inode_set_iversion(root_inode, 1);
+-	err = exfat_read_root(root_inode);
++	err = exfat_read_root(root_inode, &root_clu);
+ 	if (err) {
+ 		exfat_err(sb, "failed to initialize root inode");
+ 		goto put_inode;
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 30f8201c155f40..177b1f852b63ac 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -895,9 +895,19 @@ int ext2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		u64 start, u64 len)
+ {
+ 	int ret;
++	loff_t i_size;
+ 
+ 	inode_lock(inode);
+-	len = min_t(u64, len, i_size_read(inode));
++	i_size = i_size_read(inode);
++	/*
++	 * iomap_fiemap() returns EINVAL for 0 length. Make sure we don't trim
++	 * length to 0 but still trim the range as much as possible since
++	 * ext2_get_blocks() iterates unmapped space block by block, which is
++	 * slow.
++	 */
++	if (i_size == 0)
++		i_size = 1;
++	len = min_t(u64, len, i_size);
+ 	ret = iomap_fiemap(inode, fieinfo, start, len, &ext2_iomap_ops);
+ 	inode_unlock(inode);
+ 
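The clamp's edge cases are easier to see in isolation: iomap_fiemap() rejects a zero length with -EINVAL, so an empty file must still map at least one byte, while oversized requests on small files are trimmed down to i_size. A minimal stand-alone mirror of the logic, with sample values:

#include <stdint.h>
#include <stdio.h>

static uint64_t clamp_fiemap_len(uint64_t i_size, uint64_t len)
{
	/* Never trim to 0: iomap_fiemap() treats that as -EINVAL. */
	if (i_size == 0)
		i_size = 1;
	return len < i_size ? len : i_size;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)clamp_fiemap_len(0, ~0ULL));    /* 1 */
	printf("%llu\n", (unsigned long long)clamp_fiemap_len(4096, ~0ULL)); /* 4096 */
	printf("%llu\n", (unsigned long long)clamp_fiemap_len(4096, 512));   /* 512 */
	return 0;
}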
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 18373de980f27a..fe3366e98493b8 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3020,7 +3020,7 @@ int ext4_walk_page_buffers(handle_t *handle,
+ 				     struct buffer_head *bh));
+ int do_journal_get_write_access(handle_t *handle, struct inode *inode,
+ 				struct buffer_head *bh);
+-bool ext4_should_enable_large_folio(struct inode *inode);
++void ext4_set_inode_mapping_order(struct inode *inode);
+ #define FALL_BACK_TO_NONDELALLOC 1
+ #define CONVERT_INLINE_DATA	 2
+ 
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 79aa3df8d0197b..df4051613b290a 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -1335,8 +1335,7 @@ struct inode *__ext4_new_inode(struct mnt_idmap *idmap,
+ 		}
+ 	}
+ 
+-	if (ext4_should_enable_large_folio(inode))
+-		mapping_set_large_folios(inode->i_mapping);
++	ext4_set_inode_mapping_order(inode);
+ 
+ 	ext4_update_inode_fsync_trans(handle, inode, 1);
+ 
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 1545846e0e3e3f..05313c8ffb9cc7 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -303,7 +303,11 @@ static int ext4_create_inline_data(handle_t *handle,
+ 	if (error)
+ 		goto out;
+ 
+-	BUG_ON(!is.s.not_found);
++	if (!is.s.not_found) {
++		EXT4_ERROR_INODE(inode, "unexpected inline data xattr");
++		error = -EFSCORRUPTED;
++		goto out;
++	}
+ 
+ 	error = ext4_xattr_ibody_set(handle, inode, &i, &is);
+ 	if (error) {
+@@ -354,7 +358,11 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ 	if (error)
+ 		goto out;
+ 
+-	BUG_ON(is.s.not_found);
++	if (is.s.not_found) {
++		EXT4_ERROR_INODE(inode, "missing inline data xattr");
++		error = -EFSCORRUPTED;
++		goto out;
++	}
+ 
+ 	len -= EXT4_MIN_INLINE_DATA_SIZE;
+ 	value = kzalloc(len, GFP_NOFS);
+@@ -1905,7 +1913,12 @@ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ 			if ((err = ext4_xattr_ibody_find(inode, &i, &is)) != 0)
+ 				goto out_error;
+ 
+-			BUG_ON(is.s.not_found);
++			if (is.s.not_found) {
++				EXT4_ERROR_INODE(inode,
++						 "missing inline data xattr");
++				err = -EFSCORRUPTED;
++				goto out_error;
++			}
+ 
+ 			value_len = le32_to_cpu(is.s.here->e_value_size);
+ 			value = kmalloc(value_len, GFP_NOFS);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index ee4129b5ecce33..0f316632b8dd65 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5107,7 +5107,7 @@ static int check_igot_inode(struct inode *inode, ext4_iget_flags flags,
+ 	return -EFSCORRUPTED;
+ }
+ 
+-bool ext4_should_enable_large_folio(struct inode *inode)
++static bool ext4_should_enable_large_folio(struct inode *inode)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 
+@@ -5124,6 +5124,22 @@ bool ext4_should_enable_large_folio(struct inode *inode)
+ 	return true;
+ }
+ 
++/*
++ * Limit the maximum folio order so that a folio spans at most 2048 blocks,
++ * preventing overestimation of reserve handle credits during folio
++ * writeback in environments where PAGE_SIZE exceeds 4KB.
++ */
++#define EXT4_MAX_PAGECACHE_ORDER(i)		\
++		umin(MAX_PAGECACHE_ORDER, (11 + (i)->i_blkbits - PAGE_SHIFT))
++void ext4_set_inode_mapping_order(struct inode *inode)
++{
++	if (!ext4_should_enable_large_folio(inode))
++		return;
++
++	mapping_set_folio_order_range(inode->i_mapping, 0,
++				      EXT4_MAX_PAGECACHE_ORDER(inode));
++}
++
+ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 			  ext4_iget_flags flags, const char *function,
+ 			  unsigned int line)
+@@ -5441,8 +5457,8 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ 		ret = -EFSCORRUPTED;
+ 		goto bad_inode;
+ 	}
+-	if (ext4_should_enable_large_folio(inode))
+-		mapping_set_large_folios(inode->i_mapping);
++
++	ext4_set_inode_mapping_order(inode);
+ 
+ 	ret = check_igot_inode(inode, flags, function, line);
+ 	/*
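The 2048-block figure in EXT4_MAX_PAGECACHE_ORDER follows from the arithmetic: 11 + i_blkbits - PAGE_SHIFT is the largest order at which a folio still spans at most 2^11 = 2048 blocks. A small check of the formula (the umin() against MAX_PAGECACHE_ORDER is omitted here; the constants are examples):

#include <stdio.h>

static int ext4_max_order(int blkbits, int page_shift)
{
	/* Cap folios at 2048 blocks, as in EXT4_MAX_PAGECACHE_ORDER. */
	return 11 + blkbits - page_shift;
}

int main(void)
{
	/* 4K blocks, 4K pages: order 11, i.e. an 8 MiB folio. */
	printf("order %d\n", ext4_max_order(12, 12));
	/* 4K blocks, 64K pages: order 7, since each page already
	 * covers 16 blocks; 128 * 64K is still 2048 blocks. */
	printf("order %d\n", ext4_max_order(12, 16));
	return 0;
}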
+diff --git a/fs/ext4/mballoc-test.c b/fs/ext4/mballoc-test.c
+index d634c12f198474..f018bc8424c7cb 100644
+--- a/fs/ext4/mballoc-test.c
++++ b/fs/ext4/mballoc-test.c
+@@ -155,6 +155,7 @@ static struct super_block *mbt_ext4_alloc_super_block(void)
+ 	bgl_lock_init(sbi->s_blockgroup_lock);
+ 
+ 	sbi->s_es = &fsb->es;
++	sbi->s_sb = sb;
+ 	sb->s_fs_info = sbi;
+ 
+ 	up_write(&sb->s_umount);
+@@ -802,6 +803,10 @@ static void test_mb_mark_used(struct kunit *test)
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	grp->bb_free = EXT4_CLUSTERS_PER_GROUP(sb);
++	grp->bb_largest_free_order = -1;
++	grp->bb_avg_fragment_size_order = -1;
++	INIT_LIST_HEAD(&grp->bb_largest_free_order_node);
++	INIT_LIST_HEAD(&grp->bb_avg_fragment_size_node);
+ 	mbt_generate_test_ranges(sb, ranges, TEST_RANGE_COUNT);
+ 	for (i = 0; i < TEST_RANGE_COUNT; i++)
+ 		test_mb_mark_used_range(test, &e4b, ranges[i].start,
+@@ -875,6 +880,10 @@ static void test_mb_free_blocks(struct kunit *test)
+ 	ext4_unlock_group(sb, TEST_GOAL_GROUP);
+ 
+ 	grp->bb_free = 0;
++	grp->bb_largest_free_order = -1;
++	grp->bb_avg_fragment_size_order = -1;
++	INIT_LIST_HEAD(&grp->bb_largest_free_order_node);
++	INIT_LIST_HEAD(&grp->bb_avg_fragment_size_node);
+ 	memset(bitmap, 0xff, sb->s_blocksize);
+ 
+ 	mbt_generate_test_ranges(sb, ranges, TEST_RANGE_COUNT);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 1e98c5be4e0ad5..fb7093ea30236f 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -841,30 +841,30 @@ static void
+ mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	int new_order;
++	int new, old;
+ 
+-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_fragments == 0)
++	if (!test_opt2(sb, MB_OPTIMIZE_SCAN))
+ 		return;
+ 
+-	new_order = mb_avg_fragment_size_order(sb,
+-					grp->bb_free / grp->bb_fragments);
+-	if (new_order == grp->bb_avg_fragment_size_order)
++	old = grp->bb_avg_fragment_size_order;
++	new = grp->bb_fragments == 0 ? -1 :
++	      mb_avg_fragment_size_order(sb, grp->bb_free / grp->bb_fragments);
++	if (new == old)
+ 		return;
+ 
+-	if (grp->bb_avg_fragment_size_order != -1) {
+-		write_lock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
++	if (old >= 0) {
++		write_lock(&sbi->s_mb_avg_fragment_size_locks[old]);
+ 		list_del(&grp->bb_avg_fragment_size_node);
+-		write_unlock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
++		write_unlock(&sbi->s_mb_avg_fragment_size_locks[old]);
++	}
++
++	grp->bb_avg_fragment_size_order = new;
++	if (new >= 0) {
++		write_lock(&sbi->s_mb_avg_fragment_size_locks[new]);
++		list_add_tail(&grp->bb_avg_fragment_size_node,
++				&sbi->s_mb_avg_fragment_size[new]);
++		write_unlock(&sbi->s_mb_avg_fragment_size_locks[new]);
+ 	}
+-	grp->bb_avg_fragment_size_order = new_order;
+-	write_lock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
+-	list_add_tail(&grp->bb_avg_fragment_size_node,
+-		&sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]);
+-	write_unlock(&sbi->s_mb_avg_fragment_size_locks[
+-					grp->bb_avg_fragment_size_order]);
+ }
+ 
+ /*
+@@ -1150,33 +1150,28 @@ static void
+ mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	int i;
++	int new, old = grp->bb_largest_free_order;
+ 
+-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
+-		if (grp->bb_counters[i] > 0)
++	for (new = MB_NUM_ORDERS(sb) - 1; new >= 0; new--)
++		if (grp->bb_counters[new] > 0)
+ 			break;
++
+ 	/* No need to move between order lists? */
+-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
+-	    i == grp->bb_largest_free_order) {
+-		grp->bb_largest_free_order = i;
++	if (new == old)
+ 		return;
+-	}
+ 
+-	if (grp->bb_largest_free_order >= 0) {
+-		write_lock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++	if (old >= 0 && !list_empty(&grp->bb_largest_free_order_node)) {
++		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
+ 		list_del_init(&grp->bb_largest_free_order_node);
+-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
+ 	}
+-	grp->bb_largest_free_order = i;
+-	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
+-		write_lock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++
++	grp->bb_largest_free_order = new;
++	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && new >= 0 && grp->bb_free) {
++		write_lock(&sbi->s_mb_largest_free_orders_locks[new]);
+ 		list_add_tail(&grp->bb_largest_free_order_node,
+-		      &sbi->s_mb_largest_free_orders[grp->bb_largest_free_order]);
+-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
+-					      grp->bb_largest_free_order]);
++			      &sbi->s_mb_largest_free_orders[new]);
++		write_unlock(&sbi->s_mb_largest_free_orders_locks[new]);
+ 	}
+ }
+ 
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 696131e655ed38..bb3fd6a8416fd3 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1047,6 +1047,18 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
+ 		return -EIO;
+ 
++	err = setattr_prepare(idmap, dentry, attr);
++	if (err)
++		return err;
++
++	err = fscrypt_prepare_setattr(dentry, attr);
++	if (err)
++		return err;
++
++	err = fsverity_prepare_setattr(dentry, attr);
++	if (err)
++		return err;
++
+ 	if (unlikely(IS_IMMUTABLE(inode)))
+ 		return -EPERM;
+ 
+@@ -1065,18 +1077,6 @@ int f2fs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 			return -EINVAL;
+ 	}
+ 
+-	err = setattr_prepare(idmap, dentry, attr);
+-	if (err)
+-		return err;
+-
+-	err = fscrypt_prepare_setattr(dentry, attr);
+-	if (err)
+-		return err;
+-
+-	err = fsverity_prepare_setattr(dentry, attr);
+-	if (err)
+-		return err;
+-
+ 	if (is_quota_modification(idmap, inode, attr)) {
+ 		err = f2fs_dquot_initialize(inode);
+ 		if (err)
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index bfe104db284ef1..2fd287f2bca4ba 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -555,8 +555,8 @@ int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+ 	struct f2fs_nat_entry ne;
+ 	struct nat_entry *e;
+ 	pgoff_t index;
+-	block_t blkaddr;
+ 	int i;
++	bool need_cache = true;
+ 
+ 	ni->flag = 0;
+ 	ni->nid = nid;
+@@ -569,6 +569,10 @@ int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+ 		ni->blk_addr = nat_get_blkaddr(e);
+ 		ni->version = nat_get_version(e);
+ 		f2fs_up_read(&nm_i->nat_tree_lock);
++		if (IS_ENABLED(CONFIG_F2FS_CHECK_FS)) {
++			need_cache = false;
++			goto sanity_check;
++		}
+ 		return 0;
+ 	}
+ 
+@@ -594,7 +598,7 @@ int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+ 	up_read(&curseg->journal_rwsem);
+ 	if (i >= 0) {
+ 		f2fs_up_read(&nm_i->nat_tree_lock);
+-		goto cache;
++		goto sanity_check;
+ 	}
+ 
+ 	/* Fill node_info from nat page */
+@@ -609,14 +613,23 @@ int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+ 	ne = nat_blk->entries[nid - start_nid];
+ 	node_info_from_raw_nat(ni, &ne);
+ 	f2fs_folio_put(folio, true);
+-cache:
+-	blkaddr = le32_to_cpu(ne.block_addr);
+-	if (__is_valid_data_blkaddr(blkaddr) &&
+-		!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE))
+-		return -EFAULT;
++sanity_check:
++	if (__is_valid_data_blkaddr(ni->blk_addr) &&
++		!f2fs_is_valid_blkaddr(sbi, ni->blk_addr,
++					DATA_GENERIC_ENHANCE)) {
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		f2fs_err_ratelimited(sbi,
++			"f2fs_get_node_info of %pS: inconsistent nat entry, "
++			"ino:%u, nid:%u, blkaddr:%u, ver:%u, flag:%u",
++			__builtin_return_address(0),
++			ni->ino, ni->nid, ni->blk_addr, ni->version, ni->flag);
++		f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
++		return -EFSCORRUPTED;
++	}
+ 
+ 	/* cache nat entry */
+-	cache_nat_entry(sbi, nid, &ne);
++	if (need_cache)
++		cache_nat_entry(sbi, nid, &ne);
+ 	return 0;
+ }
+ 
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 3e092ae6d142ae..66ff60591d17b2 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -88,7 +88,7 @@ static long do_sys_name_to_handle(const struct path *path,
+ 		if (fh_flags & EXPORT_FH_CONNECTABLE) {
+ 			handle->handle_type |= FILEID_IS_CONNECTABLE;
+ 			if (d_is_dir(path->dentry))
+-				fh_flags |= FILEID_IS_DIR;
++				handle->handle_type |= FILEID_IS_DIR;
+ 		}
+ 		retval = 0;
+ 	}
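The fhandle fix is a classic wrong-variable bug: FILEID_IS_DIR was ORed into the local fh_flags instead of the handle type returned to userspace, so connectable directory handles never carried the directory bit. A distilled reproduction of the mistake (the flag values here are placeholders, not the kernel's):

#include <stdio.h>

#define FILEID_IS_CONNECTABLE 0x1	/* placeholder value */
#define FILEID_IS_DIR         0x2	/* placeholder value */

int main(void)
{
	int fh_flags = 0x100;		/* request flags (input only) */
	int handle_type = 0;		/* what userspace gets back */

	handle_type |= FILEID_IS_CONNECTABLE;
	fh_flags |= FILEID_IS_DIR;	/* old bug: wrong variable */
	printf("buggy handle_type=%#x\n", handle_type);	/* 0x1: bit lost */

	handle_type |= FILEID_IS_DIR;	/* the fix */
	printf("fixed handle_type=%#x\n", handle_type);	/* 0x3 */
	return 0;
}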
+diff --git a/fs/file.c b/fs/file.c
+index b6db031545e650..6d2275c3be9c69 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -197,6 +197,21 @@ static struct fdtable *alloc_fdtable(unsigned int slots_wanted)
+ 			return ERR_PTR(-EMFILE);
+ 	}
+ 
++	/*
++	 * Check if the allocation size would exceed INT_MAX. kvmalloc_array()
++	 * and kvmalloc() will warn if the allocation size is greater than
++	 * INT_MAX, as filp_cachep objects are not __GFP_NOWARN.
++	 *
++	 * This can happen when sysctl_nr_open is set to a very high value and
++	 * a process tries to use a file descriptor near that limit. For example,
++	 * if sysctl_nr_open is set to 1073741816 (0x3ffffff8) - which is what
++	 * systemd typically sets it to - then trying to use a file descriptor
++	 * close to that value will require allocating a file descriptor table
++	 * that exceeds 8GB in size.
++	 */
++	if (unlikely(nr > INT_MAX / sizeof(struct file *)))
++		return ERR_PTR(-EMFILE);
++
+ 	fdt = kmalloc(sizeof(struct fdtable), GFP_KERNEL_ACCOUNT);
+ 	if (!fdt)
+ 		goto out;
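The numbers in the comment check out: at nr = 1073741816 with 8-byte pointers the pointer array alone is about 8 GiB, far past INT_MAX, which is why the guard sits before the kvmalloc calls. A standalone mirror of the predicate, using the value from the comment:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned long nr = 1073741816UL;	/* 0x3ffffff8, per the comment */
	size_t ptr_size = sizeof(void *);

	/* Same predicate as the new guard in alloc_fdtable(). */
	if (nr > INT_MAX / ptr_size)
		printf("reject: %lu * %zu bytes = %.1f GiB > INT_MAX\n",
		       nr, ptr_size, (double)nr * ptr_size / (1UL << 30));
	return 0;
}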
+diff --git a/fs/gfs2/dir.c b/fs/gfs2/dir.c
+index dbf1aede744c12..509e2f0d97e787 100644
+--- a/fs/gfs2/dir.c
++++ b/fs/gfs2/dir.c
+@@ -60,6 +60,7 @@
+ #include <linux/crc32.h>
+ #include <linux/vmalloc.h>
+ #include <linux/bio.h>
++#include <linux/log2.h>
+ 
+ #include "gfs2.h"
+ #include "incore.h"
+@@ -912,7 +913,6 @@ static int dir_make_exhash(struct inode *inode)
+ 	struct qstr args;
+ 	struct buffer_head *bh, *dibh;
+ 	struct gfs2_leaf *leaf;
+-	int y;
+ 	u32 x;
+ 	__be64 *lp;
+ 	u64 bn;
+@@ -979,9 +979,7 @@ static int dir_make_exhash(struct inode *inode)
+ 	i_size_write(inode, sdp->sd_sb.sb_bsize / 2);
+ 	gfs2_add_inode_blocks(&dip->i_inode, 1);
+ 	dip->i_diskflags |= GFS2_DIF_EXHASH;
+-
+-	for (x = sdp->sd_hash_ptrs, y = -1; x; x >>= 1, y++) ;
+-	dip->i_depth = y;
++	dip->i_depth = ilog2(sdp->sd_hash_ptrs);
+ 
+ 	gfs2_dinode_out(dip, dibh->b_data);
+ 
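The deleted loop was an open-coded floor(log2(x)): it counts how many right shifts it takes to clear sd_hash_ptrs, which for a power of two is exactly what ilog2() returns. A quick userspace check of the equivalence (my_ilog2() is a stand-in for the kernel helper):

#include <stdio.h>

/* Stand-in for the kernel's ilog2() on a nonzero 32-bit value. */
static int my_ilog2(unsigned int x)
{
	return 31 - __builtin_clz(x);
}

int main(void)
{
	unsigned int hash_ptrs = 512;	/* sample power of two */
	unsigned int x;
	int y;

	/* The removed open-coded loop from dir_make_exhash(). */
	for (x = hash_ptrs, y = -1; x; x >>= 1, y++)
		;
	printf("loop=%d ilog2=%d\n", y, my_ilog2(hash_ptrs));
	return 0;
}

The same ilog2(sdp->sd_hash_ptrs) value is what the new glops.c check below compares i_depth against when validating an exhash dinode.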
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index cebd66b22694c4..fe0faad4892f8b 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -11,6 +11,7 @@
+ #include <linux/bio.h>
+ #include <linux/posix_acl.h>
+ #include <linux/security.h>
++#include <linux/log2.h>
+ 
+ #include "gfs2.h"
+ #include "incore.h"
+@@ -450,6 +451,11 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
+ 		gfs2_consist_inode(ip);
+ 		return -EIO;
+ 	}
++	if ((ip->i_diskflags & GFS2_DIF_EXHASH) &&
++	    depth < ilog2(sdp->sd_hash_ptrs)) {
++		gfs2_consist_inode(ip);
++		return -EIO;
++	}
+ 	ip->i_depth = (u8)depth;
+ 	ip->i_entries = be32_to_cpu(str->di_entries);
+ 
+diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
+index 9dc8885c95d072..66ee10929736f6 100644
+--- a/fs/gfs2/meta_io.c
++++ b/fs/gfs2/meta_io.c
+@@ -103,6 +103,7 @@ const struct address_space_operations gfs2_meta_aops = {
+ 	.invalidate_folio = block_invalidate_folio,
+ 	.writepages = gfs2_aspace_writepages,
+ 	.release_folio = gfs2_release_folio,
++	.migrate_folio = buffer_migrate_folio_norefs,
+ };
+ 
+ const struct address_space_operations gfs2_rgrp_aops = {
+@@ -110,6 +111,7 @@ const struct address_space_operations gfs2_rgrp_aops = {
+ 	.invalidate_folio = block_invalidate_folio,
+ 	.writepages = gfs2_aspace_writepages,
+ 	.release_folio = gfs2_release_folio,
++	.migrate_folio = buffer_migrate_folio_norefs,
+ };
+ 
+ /**
+diff --git a/fs/hfs/bfind.c b/fs/hfs/bfind.c
+index ef9498a6e88acd..34e9804e0f3601 100644
+--- a/fs/hfs/bfind.c
++++ b/fs/hfs/bfind.c
+@@ -16,6 +16,9 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
+ {
+ 	void *ptr;
+ 
++	if (!tree || !fd)
++		return -EINVAL;
++
+ 	fd->tree = tree;
+ 	fd->bnode = NULL;
+ 	ptr = kmalloc(tree->max_key_len * 2 + 4, GFP_KERNEL);
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index cb823a8a6ba960..e8cd1a31f2470c 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -15,6 +15,48 @@
+ 
+ #include "btree.h"
+ 
++static inline
++bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
++{
++	bool is_valid = off < node->tree->node_size;
++
++	if (!is_valid) {
++		pr_err("requested invalid offset: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off);
++	}
++
++	return is_valid;
++}
++
++static inline
++int check_and_correct_requested_length(struct hfs_bnode *node, int off, int len)
++{
++	unsigned int node_size;
++
++	if (!is_bnode_offset_valid(node, off))
++		return 0;
++
++	node_size = node->tree->node_size;
++
++	if ((off + len) > node_size) {
++		int new_len = (int)node_size - off;
++
++		pr_err("requested length has been corrected: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, "
++		       "requested_len %d, corrected_len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len, new_len);
++
++		return new_len;
++	}
++
++	return len;
++}
++
+ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page *page;
+@@ -22,6 +64,20 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ 	int bytes_read;
+ 	int bytes_to_read;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagenum = off >> PAGE_SHIFT;
+ 	off &= ~PAGE_MASK; /* compute page offset for the first page */
+@@ -80,6 +136,20 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page *page;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	page = node->page[0];
+ 
+@@ -104,6 +174,20 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
+ {
+ 	struct page *page;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	page = node->page[0];
+ 
+@@ -119,6 +203,10 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
+ 	hfs_dbg(BNODE_MOD, "copybytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(src_node, src, len);
++	len = check_and_correct_requested_length(dst_node, dst, len);
++
+ 	src += src_node->page_offset;
+ 	dst += dst_node->page_offset;
+ 	src_page = src_node->page[0];
+@@ -136,6 +224,10 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
+ 	hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(node, src, len);
++	len = check_and_correct_requested_length(node, dst, len);
++
+ 	src += node->page_offset;
+ 	dst += node->page_offset;
+ 	page = node->page[0];
+@@ -482,6 +574,7 @@ void hfs_bnode_put(struct hfs_bnode *node)
+ 		if (test_bit(HFS_BNODE_DELETED, &node->flags)) {
+ 			hfs_bnode_unhash(node);
+ 			spin_unlock(&tree->hash_lock);
++			hfs_bnode_clear(node, 0, tree->node_size);
+ 			hfs_bmap_free(node);
+ 			hfs_bnode_free(node);
+ 			return;
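Both new helpers funnel every bnode access through one rule: an offset at or past node_size is rejected outright, and a length is clamped so that off + len never crosses the node boundary. The same pair is duplicated for hfsplus further down. A compact stand-alone model of the clamp (NODE_SIZE is a sample value; the harness is invented):

#include <stdio.h>

#define NODE_SIZE 4096	/* sample tree->node_size */

static int clamp_len(int off, int len)
{
	if (off >= NODE_SIZE)
		return 0;			/* invalid offset */
	if (off + len > NODE_SIZE)
		return NODE_SIZE - off;		/* corrected length */
	return len;
}

int main(void)
{
	printf("%d\n", clamp_len(4000, 200));	/* 96: clamped */
	printf("%d\n", clamp_len(5000, 16));	/* 0: rejected */
	printf("%d\n", clamp_len(100, 16));	/* 16: untouched */
	return 0;
}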
+diff --git a/fs/hfs/btree.c b/fs/hfs/btree.c
+index 2fa4b1f8cc7fb0..e86e1e235658fa 100644
+--- a/fs/hfs/btree.c
++++ b/fs/hfs/btree.c
+@@ -21,8 +21,12 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 	struct hfs_btree *tree;
+ 	struct hfs_btree_header_rec *head;
+ 	struct address_space *mapping;
+-	struct page *page;
++	struct folio *folio;
++	struct buffer_head *bh;
+ 	unsigned int size;
++	u16 dblock;
++	sector_t start_block;
++	loff_t offset;
+ 
+ 	tree = kzalloc(sizeof(*tree), GFP_KERNEL);
+ 	if (!tree)
+@@ -75,12 +79,40 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 	unlock_new_inode(tree->inode);
+ 
+ 	mapping = tree->inode->i_mapping;
+-	page = read_mapping_page(mapping, 0, NULL);
+-	if (IS_ERR(page))
++	folio = filemap_grab_folio(mapping, 0);
++	if (IS_ERR(folio))
+ 		goto free_inode;
+ 
++	folio_zero_range(folio, 0, folio_size(folio));
++
++	dblock = hfs_ext_find_block(HFS_I(tree->inode)->first_extents, 0);
++	start_block = HFS_SB(sb)->fs_start + (dblock * HFS_SB(sb)->fs_div);
++
++	size = folio_size(folio);
++	offset = 0;
++	while (size > 0) {
++		size_t len;
++
++		bh = sb_bread(sb, start_block);
++		if (!bh) {
++			pr_err("unable to read tree header\n");
++			goto put_folio;
++		}
++
++		len = min_t(size_t, folio_size(folio), sb->s_blocksize);
++		memcpy_to_folio(folio, offset, bh->b_data, sb->s_blocksize);
++
++		brelse(bh);
++
++		start_block++;
++		offset += len;
++		size -= len;
++	}
++
++	folio_mark_uptodate(folio);
++
+ 	/* Load the header */
+-	head = (struct hfs_btree_header_rec *)(kmap_local_page(page) +
++	head = (struct hfs_btree_header_rec *)(kmap_local_folio(folio, 0) +
+ 					       sizeof(struct hfs_bnode_desc));
+ 	tree->root = be32_to_cpu(head->root);
+ 	tree->leaf_count = be32_to_cpu(head->leaf_count);
+@@ -95,22 +127,22 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 
+ 	size = tree->node_size;
+ 	if (!is_power_of_2(size))
+-		goto fail_page;
++		goto fail_folio;
+ 	if (!tree->node_count)
+-		goto fail_page;
++		goto fail_folio;
+ 	switch (id) {
+ 	case HFS_EXT_CNID:
+ 		if (tree->max_key_len != HFS_MAX_EXT_KEYLEN) {
+ 			pr_err("invalid extent max_key_len %d\n",
+ 			       tree->max_key_len);
+-			goto fail_page;
++			goto fail_folio;
+ 		}
+ 		break;
+ 	case HFS_CAT_CNID:
+ 		if (tree->max_key_len != HFS_MAX_CAT_KEYLEN) {
+ 			pr_err("invalid catalog max_key_len %d\n",
+ 			       tree->max_key_len);
+-			goto fail_page;
++			goto fail_folio;
+ 		}
+ 		break;
+ 	default:
+@@ -121,12 +153,15 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
+ 	tree->pages_per_bnode = (tree->node_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 
+ 	kunmap_local(head);
+-	put_page(page);
++	folio_unlock(folio);
++	folio_put(folio);
+ 	return tree;
+ 
+-fail_page:
++fail_folio:
+ 	kunmap_local(head);
+-	put_page(page);
++put_folio:
++	folio_unlock(folio);
++	folio_put(folio);
+ free_inode:
+ 	tree->inode->i_mapping->a_ops = &hfs_aops;
+ 	iput(tree->inode);
+diff --git a/fs/hfs/extent.c b/fs/hfs/extent.c
+index 4a0ce131e233fe..580c62981dbd3d 100644
+--- a/fs/hfs/extent.c
++++ b/fs/hfs/extent.c
+@@ -71,7 +71,7 @@ int hfs_ext_keycmp(const btree_key *key1, const btree_key *key2)
+  *
+  * Find a block within an extent record
+  */
+-static u16 hfs_ext_find_block(struct hfs_extent *ext, u16 off)
++u16 hfs_ext_find_block(struct hfs_extent *ext, u16 off)
+ {
+ 	int i;
+ 	u16 count;
+diff --git a/fs/hfs/hfs_fs.h b/fs/hfs/hfs_fs.h
+index a0c7cb0f79fcc9..732c5c4c7545d6 100644
+--- a/fs/hfs/hfs_fs.h
++++ b/fs/hfs/hfs_fs.h
+@@ -190,6 +190,7 @@ extern const struct inode_operations hfs_dir_inode_operations;
+ 
+ /* extent.c */
+ extern int hfs_ext_keycmp(const btree_key *, const btree_key *);
++extern u16 hfs_ext_find_block(struct hfs_extent *ext, u16 off);
+ extern int hfs_free_fork(struct super_block *, struct hfs_cat_file *, int);
+ extern int hfs_ext_write_extent(struct inode *);
+ extern int hfs_extend_file(struct inode *);
+diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
+index 079ea80534f7de..14f4995588ff03 100644
+--- a/fs/hfsplus/bnode.c
++++ b/fs/hfsplus/bnode.c
+@@ -18,12 +18,68 @@
+ #include "hfsplus_fs.h"
+ #include "hfsplus_raw.h"
+ 
++static inline
++bool is_bnode_offset_valid(struct hfs_bnode *node, int off)
++{
++	bool is_valid = off < node->tree->node_size;
++
++	if (!is_valid) {
++		pr_err("requested invalid offset: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off);
++	}
++
++	return is_valid;
++}
++
++static inline
++int check_and_correct_requested_length(struct hfs_bnode *node, int off, int len)
++{
++	unsigned int node_size;
++
++	if (!is_bnode_offset_valid(node, off))
++		return 0;
++
++	node_size = node->tree->node_size;
++
++	if ((off + len) > node_size) {
++		int new_len = (int)node_size - off;
++
++		pr_err("requested length has been corrected: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, "
++		       "requested_len %d, corrected_len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len, new_len);
++
++		return new_len;
++	}
++
++	return len;
++}
++
+ /* Copy a specified range of bytes from the raw data of a node */
+ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page **pagep;
+ 	int l;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagep = node->page + (off >> PAGE_SHIFT);
+ 	off &= ~PAGE_MASK;
+@@ -81,6 +137,20 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
+ 	struct page **pagep;
+ 	int l;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagep = node->page + (off >> PAGE_SHIFT);
+ 	off &= ~PAGE_MASK;
+@@ -109,6 +179,20 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
+ 	struct page **pagep;
+ 	int l;
+ 
++	if (!is_bnode_offset_valid(node, off))
++		return;
++
++	if (len == 0) {
++		pr_err("requested zero length: "
++		       "NODE: id %u, type %#x, height %u, "
++		       "node_size %u, offset %d, len %d\n",
++		       node->this, node->type, node->height,
++		       node->tree->node_size, off, len);
++		return;
++	}
++
++	len = check_and_correct_requested_length(node, off, len);
++
+ 	off += node->page_offset;
+ 	pagep = node->page + (off >> PAGE_SHIFT);
+ 	off &= ~PAGE_MASK;
+@@ -133,6 +217,10 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
+ 	hfs_dbg(BNODE_MOD, "copybytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(src_node, src, len);
++	len = check_and_correct_requested_length(dst_node, dst, len);
++
+ 	src += src_node->page_offset;
+ 	dst += dst_node->page_offset;
+ 	src_page = src_node->page + (src >> PAGE_SHIFT);
+@@ -187,6 +275,10 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
+ 	hfs_dbg(BNODE_MOD, "movebytes: %u,%u,%u\n", dst, src, len);
+ 	if (!len)
+ 		return;
++
++	len = check_and_correct_requested_length(node, src, len);
++	len = check_and_correct_requested_length(node, dst, len);
++
+ 	src += node->page_offset;
+ 	dst += node->page_offset;
+ 	if (dst > src) {
+diff --git a/fs/hfsplus/unicode.c b/fs/hfsplus/unicode.c
+index 73342c925a4b6e..36b6cf2a3abba4 100644
+--- a/fs/hfsplus/unicode.c
++++ b/fs/hfsplus/unicode.c
+@@ -132,7 +132,14 @@ int hfsplus_uni2asc(struct super_block *sb,
+ 
+ 	op = astr;
+ 	ip = ustr->unicode;
++
+ 	ustrlen = be16_to_cpu(ustr->length);
++	if (ustrlen > HFSPLUS_MAX_STRLEN) {
++		ustrlen = HFSPLUS_MAX_STRLEN;
++		pr_err("invalid length %u has been corrected to %d\n",
++			be16_to_cpu(ustr->length), ustrlen);
++	}
++
+ 	len = *len_p;
+ 	ce1 = NULL;
+ 	compose = !test_bit(HFSPLUS_SB_NODECOMPOSE, &HFSPLUS_SB(sb)->flags);
+diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
+index 9a1a93e3888b92..18dc3d254d218c 100644
+--- a/fs/hfsplus/xattr.c
++++ b/fs/hfsplus/xattr.c
+@@ -172,7 +172,11 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
+ 		return PTR_ERR(attr_file);
+ 	}
+ 
+-	BUG_ON(i_size_read(attr_file) != 0);
++	if (i_size_read(attr_file) != 0) {
++		err = -EIO;
++		pr_err("detected inconsistent attributes file, running fsck.hfsplus is recommended.\n");
++		goto end_attr_file_creation;
++	}
+ 
+ 	hip = HFSPLUS_I(attr_file);
+ 
+diff --git a/fs/jfs/file.c b/fs/jfs/file.c
+index 01b6912e60f808..742cadd1f37e84 100644
+--- a/fs/jfs/file.c
++++ b/fs/jfs/file.c
+@@ -44,6 +44,9 @@ static int jfs_open(struct inode *inode, struct file *file)
+ {
+ 	int rc;
+ 
++	if (S_ISREG(inode->i_mode) && inode->i_size < 0)
++		return -EIO;
++
+ 	if ((rc = dquot_file_open(inode, file)))
+ 		return rc;
+ 
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 60fc92dee24d20..81e6b18e81e1b5 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -145,9 +145,9 @@ void jfs_evict_inode(struct inode *inode)
+ 	if (!inode->i_nlink && !is_bad_inode(inode)) {
+ 		dquot_initialize(inode);
+ 
++		truncate_inode_pages_final(&inode->i_data);
+ 		if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
+ 			struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
+-			truncate_inode_pages_final(&inode->i_data);
+ 
+ 			if (test_cflag(COMMIT_Freewmap, inode))
+ 				jfs_free_zero_link(inode);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 5a877261c3fe48..cdfa699cd7c8fa 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1389,6 +1389,12 @@ dbAllocAG(struct bmap * bmp, int agno, s64 nblocks, int l2nb, s64 * results)
+ 	    (1 << (L2LPERCTL - (bmp->db_agheight << 1))) / bmp->db_agwidth;
+ 	ti = bmp->db_agstart + bmp->db_agwidth * (agno & (agperlev - 1));
+ 
++	if (ti < 0 || ti >= le32_to_cpu(dcp->nleafs)) {
++		jfs_error(bmp->db_ipbmap->i_sb, "Corrupt dmapctl page\n");
++		release_metapage(mp);
++		return -EIO;
++	}
++
+ 	/* dmap control page trees fan-out by 4 and a single allocation
+ 	 * group may be described by 1 or 2 subtrees within the ag level
+ 	 * dmap control page, depending upon the ag size. examine the ag's
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 6f487fc6be3430..972b95cc743357 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -613,7 +613,7 @@ void simple_recursive_removal(struct dentry *dentry,
+ 		struct dentry *victim = NULL, *child;
+ 		struct inode *inode = this->d_inode;
+ 
+-		inode_lock(inode);
++		inode_lock_nested(inode, I_MUTEX_CHILD);
+ 		if (d_is_dir(this))
+ 			inode->i_flags |= S_DEAD;
+ 		while ((child = find_next_child(this, victim)) == NULL) {
+@@ -625,7 +625,7 @@ void simple_recursive_removal(struct dentry *dentry,
+ 			victim = this;
+ 			this = this->d_parent;
+ 			inode = this->d_inode;
+-			inode_lock(inode);
++			inode_lock_nested(inode, I_MUTEX_CHILD);
+ 			if (simple_positive(victim)) {
+ 				d_invalidate(victim);	// avoid lost mounts
+ 				if (d_is_dir(victim))
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 47189476b5538b..5d6edafbed202a 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -149,8 +149,8 @@ do_add_page_to_bio(struct bio *bio, int npg, enum req_op op, sector_t isect,
+ 
+ 	/* limit length to what the device mapping allows */
+ 	end = disk_addr + *len;
+-	if (end >= map->start + map->len)
+-		*len = map->start + map->len - disk_addr;
++	if (end >= map->disk_offset + map->len)
++		*len = map->disk_offset + map->len - disk_addr;
+ 
+ retry:
+ 	if (!bio) {
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index cab8809f0e0f48..44306ac22353be 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -257,10 +257,11 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	struct pnfs_block_dev *child;
+ 	u64 chunk;
+ 	u32 chunk_idx;
++	u64 disk_chunk;
+ 	u64 disk_offset;
+ 
+ 	chunk = div_u64(offset, dev->chunk_size);
+-	div_u64_rem(chunk, dev->nr_children, &chunk_idx);
++	disk_chunk = div_u64_rem(chunk, dev->nr_children, &chunk_idx);
+ 
+ 	if (chunk_idx >= dev->nr_children) {
+ 		dprintk("%s: invalid chunk idx %d (%lld/%lld)\n",
+@@ -273,7 +274,7 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	offset = chunk * dev->chunk_size;
+ 
+ 	/* disk offset of the stripe */
+-	disk_offset = div_u64(offset, dev->nr_children);
++	disk_offset = disk_chunk * dev->chunk_size;
+ 
+ 	child = &dev->children[chunk_idx];
+ 	child->map(child, disk_offset, map);
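The stripe fix is easiest to see with numbers. Take a hypothetical chunk_size of 4096 and nr_children of 3: a file offset in chunk 5 belongs to child 2, and that child holds one chunk per full stripe, so the data lives at disk chunk 5 / 3 = 1, i.e. disk offset 4096. The old code divided the chunk-aligned byte offset by the stripe count instead, yielding 20480 / 3 = 6826, which is not even block aligned. A standalone comparison with that sample geometry:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t chunk_size = 4096, nr_children = 3;	/* sample geometry */
	uint64_t offset = 5 * 4096;			/* sample file offset */

	uint64_t chunk = offset / chunk_size;
	uint32_t chunk_idx = chunk % nr_children;	/* which child */
	uint64_t disk_chunk = chunk / nr_children;	/* chunks on that child */

	/* Old, wrong: divide the chunk-aligned byte offset directly. */
	uint64_t old_off = (chunk * chunk_size) / nr_children;
	/* Fixed: scale the per-child chunk count back to bytes. */
	uint64_t new_off = disk_chunk * chunk_size;

	printf("child %u: old=%llu new=%llu\n", chunk_idx,
	       (unsigned long long)old_off, (unsigned long long)new_off);
	return 0;
}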
+diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
+index 8f7cff7a42938e..0add0f329816b0 100644
+--- a/fs/nfs/blocklayout/extent_tree.c
++++ b/fs/nfs/blocklayout/extent_tree.c
+@@ -552,6 +552,15 @@ static int ext_tree_encode_commit(struct pnfs_block_layout *bl, __be32 *p,
+ 	return ret;
+ }
+ 
++/**
++ * ext_tree_prepare_commit - encode extents that need to be committed
++ * @arg: layout commit data
++ *
++ * Return values:
++ *   %0: Success, all required extents are encoded
++ *   %-ENOSPC: Some extents are encoded, but not all, due to RPC size limit
++ *   %-ENOMEM: Out of memory, extents not encoded
++ */
+ int
+ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ {
+@@ -568,12 +577,12 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ 	start_p = page_address(arg->layoutupdate_page);
+ 	arg->layoutupdate_pages = &arg->layoutupdate_page;
+ 
+-retry:
+-	ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size, &count, &arg->lastbytewritten);
++	ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size,
++			&count, &arg->lastbytewritten);
+ 	if (unlikely(ret)) {
+ 		ext_tree_free_commitdata(arg, buffer_size);
+ 
+-		buffer_size = ext_tree_layoutupdate_size(bl, count);
++		buffer_size = NFS_SERVER(arg->inode)->wsize;
+ 		count = 0;
+ 
+ 		arg->layoutupdate_pages =
+@@ -588,7 +597,8 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ 			return -ENOMEM;
+ 		}
+ 
+-		goto retry;
++		ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size,
++				&count, &arg->lastbytewritten);
+ 	}
+ 
+ 	*start_p = cpu_to_be32(count);
+@@ -608,7 +618,7 @@ ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+ 	}
+ 
+ 	dprintk("%s found %zu ranges\n", __func__, count);
+-	return 0;
++	return ret;
+ }
+ 
+ void
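Given the documented return values, a caller has to treat -ENOSPC as "partially encoded, send what fits" rather than as a failure. A hedged sketch of such a caller in kernel style (hypothetical, not part of this patch):

	ret = ext_tree_prepare_commit(arg);
	if (ret == -ENOSPC) {
		/* Some extents were encoded: issue this LAYOUTCOMMIT and
		 * expect to be called again for the remainder. */
	} else if (ret < 0) {
		/* -ENOMEM: nothing was encoded, abort the commit. */
	}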
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index cf35ad3f818adf..3bcf5c204578c1 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -682,6 +682,44 @@ struct nfs_client *nfs_init_client(struct nfs_client *clp,
+ }
+ EXPORT_SYMBOL_GPL(nfs_init_client);
+ 
++static void nfs4_server_set_init_caps(struct nfs_server *server)
++{
++#if IS_ENABLED(CONFIG_NFS_V4)
++	/* Set the basic capabilities */
++	server->caps = server->nfs_client->cl_mvops->init_caps;
++	if (server->flags & NFS_MOUNT_NORDIRPLUS)
++		server->caps &= ~NFS_CAP_READDIRPLUS;
++	if (server->nfs_client->cl_proto == XPRT_TRANSPORT_RDMA)
++		server->caps &= ~NFS_CAP_READ_PLUS;
++
++	/*
++	 * Don't use NFS uid/gid mapping if we're using AUTH_SYS or lower
++	 * authentication.
++	 */
++	if (nfs4_disable_idmapping &&
++	    server->client->cl_auth->au_flavor == RPC_AUTH_UNIX)
++		server->caps |= NFS_CAP_UIDGID_NOMAP;
++#endif
++}
++
++void nfs_server_set_init_caps(struct nfs_server *server)
++{
++	switch (server->nfs_client->rpc_ops->version) {
++	case 2:
++		server->caps = NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
++		break;
++	case 3:
++		server->caps = NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
++		if (!(server->flags & NFS_MOUNT_NORDIRPLUS))
++			server->caps |= NFS_CAP_READDIRPLUS;
++		break;
++	default:
++		nfs4_server_set_init_caps(server);
++		break;
++	}
++}
++EXPORT_SYMBOL_GPL(nfs_server_set_init_caps);
++
+ /*
+  * Create a version 2 or 3 client
+  */
+@@ -726,7 +764,6 @@ static int nfs_init_server(struct nfs_server *server,
+ 	/* Initialise the client representation from the mount data */
+ 	server->flags = ctx->flags;
+ 	server->options = ctx->options;
+-	server->caps |= NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS;
+ 
+ 	switch (clp->rpc_ops->version) {
+ 	case 2:
+@@ -762,6 +799,8 @@ static int nfs_init_server(struct nfs_server *server,
+ 	if (error < 0)
+ 		goto error;
+ 
++	nfs_server_set_init_caps(server);
++
+ 	/* Preserve the values of mount_server-related mount options */
+ 	if (ctx->mount_server.addrlen) {
+ 		memcpy(&server->mountd_address, &ctx->mount_server.address,
+@@ -936,7 +975,6 @@ void nfs_server_copy_userdata(struct nfs_server *target, struct nfs_server *sour
+ 	target->acregmax = source->acregmax;
+ 	target->acdirmin = source->acdirmin;
+ 	target->acdirmax = source->acdirmax;
+-	target->caps = source->caps;
+ 	target->options = source->options;
+ 	target->auth_info = source->auth_info;
+ 	target->port = source->port;
+@@ -1170,6 +1208,8 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source,
+ 	if (error < 0)
+ 		goto out_free_server;
+ 
++	nfs_server_set_init_caps(server);
++
+ 	/* probe the filesystem info for this server filesystem */
+ 	error = nfs_probe_server(server, fh);
+ 	if (error < 0)
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index d8f768254f1665..9dcbc339649221 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -232,7 +232,7 @@ extern struct nfs_client *
+ nfs4_find_client_sessionid(struct net *, const struct sockaddr *,
+ 				struct nfs4_sessionid *, u32);
+ extern struct nfs_server *nfs_create_server(struct fs_context *);
+-extern void nfs4_server_set_init_caps(struct nfs_server *);
++extern void nfs_server_set_init_caps(struct nfs_server *);
+ extern struct nfs_server *nfs4_create_server(struct fs_context *);
+ extern struct nfs_server *nfs4_create_referral_server(struct fs_context *);
+ extern int nfs4_update_server(struct nfs_server *server, const char *hostname,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 162c85a83a14ae..dccf628850a727 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1088,24 +1088,6 @@ static void nfs4_session_limit_xasize(struct nfs_server *server)
+ #endif
+ }
+ 
+-void nfs4_server_set_init_caps(struct nfs_server *server)
+-{
+-	/* Set the basic capabilities */
+-	server->caps |= server->nfs_client->cl_mvops->init_caps;
+-	if (server->flags & NFS_MOUNT_NORDIRPLUS)
+-			server->caps &= ~NFS_CAP_READDIRPLUS;
+-	if (server->nfs_client->cl_proto == XPRT_TRANSPORT_RDMA)
+-		server->caps &= ~NFS_CAP_READ_PLUS;
+-
+-	/*
+-	 * Don't use NFS uid/gid mapping if we're using AUTH_SYS or lower
+-	 * authentication.
+-	 */
+-	if (nfs4_disable_idmapping &&
+-			server->client->cl_auth->au_flavor == RPC_AUTH_UNIX)
+-		server->caps |= NFS_CAP_UIDGID_NOMAP;
+-}
+-
+ static int nfs4_server_common_setup(struct nfs_server *server,
+ 		struct nfs_fh *mntfh, bool auth_probe)
+ {
+@@ -1120,7 +1102,7 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ 	if (error < 0)
+ 		goto out;
+ 
+-	nfs4_server_set_init_caps(server);
++	nfs_server_set_init_caps(server);
+ 
+ 	/* Probe the root fh to retrieve its FSID and filehandle */
+ 	error = nfs4_get_rootfh(server, mntfh, auth_probe);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 811892cdb5a3a3..7e203857f46687 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -4082,7 +4082,7 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ 	};
+ 	int err;
+ 
+-	nfs4_server_set_init_caps(server);
++	nfs_server_set_init_caps(server);
+ 	do {
+ 		err = nfs4_handle_exception(server,
+ 				_nfs4_server_capabilities(server, fhandle),
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 1a7ec68bde1532..3fd0971bf16fcf 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -3340,6 +3340,7 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
+ 	struct nfs_inode *nfsi = NFS_I(inode);
+ 	loff_t end_pos;
+ 	int status;
++	bool mark_as_dirty = false;
+ 
+ 	if (!pnfs_layoutcommit_outstanding(inode))
+ 		return 0;
+@@ -3391,19 +3392,23 @@ pnfs_layoutcommit_inode(struct inode *inode, bool sync)
+ 	if (ld->prepare_layoutcommit) {
+ 		status = ld->prepare_layoutcommit(&data->args);
+ 		if (status) {
+-			put_cred(data->cred);
++			if (status != -ENOSPC)
++				put_cred(data->cred);
+ 			spin_lock(&inode->i_lock);
+ 			set_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags);
+ 			if (end_pos > nfsi->layout->plh_lwb)
+ 				nfsi->layout->plh_lwb = end_pos;
+-			goto out_unlock;
++			if (status != -ENOSPC)
++				goto out_unlock;
++			spin_unlock(&inode->i_lock);
++			mark_as_dirty = true;
+ 		}
+ 	}
+ 
+ 
+ 	status = nfs4_proc_layoutcommit(data, sync);
+ out:
+-	if (status)
++	if (status || mark_as_dirty)
+ 		mark_inode_dirty_sync(inode);
+ 	dprintk("<-- %s status %d\n", __func__, status);
+ 	return status;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d5694987f86fad..89a6f0557d941a 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4697,10 +4697,16 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 	}
+ 	status = nfs_ok;
+ 	if (conf) {
+-		old = unconf;
+-		unhash_client_locked(old);
+-		nfsd4_change_callback(conf, &unconf->cl_cb_conn);
+-	} else {
++		if (get_client_locked(conf) == nfs_ok) {
++			old = unconf;
++			unhash_client_locked(old);
++			nfsd4_change_callback(conf, &unconf->cl_cb_conn);
++		} else {
++			conf = NULL;
++		}
++	}
++
++	if (!conf) {
+ 		old = find_confirmed_client_by_name(&unconf->cl_name, nn);
+ 		if (old) {
+ 			status = nfserr_clid_inuse;
+@@ -4717,10 +4723,14 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 			}
+ 			trace_nfsd_clid_replaced(&old->cl_clientid);
+ 		}
++		status = get_client_locked(unconf);
++		if (status != nfs_ok) {
++			old = NULL;
++			goto out;
++		}
+ 		move_to_confirmed(unconf);
+ 		conf = unconf;
+ 	}
+-	get_client_locked(conf);
+ 	spin_unlock(&nn->client_lock);
+ 	if (conf == unconf)
+ 		fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+@@ -6322,6 +6332,20 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ 		status = nfs4_check_deleg(cl, open, &dp);
+ 		if (status)
+ 			goto out;
++		if (dp && nfsd4_is_deleg_cur(open) &&
++				(dp->dl_stid.sc_file != fp)) {
++			/*
++			 * RFC8881 section 8.2.4 requires the server to return
++			 * NFS4ERR_BAD_STATEID if the selected table entry does
++			 * not match the current filehandle. However, returning
++			 * NFS4ERR_BAD_STATEID in the OPEN can cause the client
++			 * to repeatedly retry the operation with the same
++			 * stateid, since the stateid itself is valid. To avoid
++			 * this situation NFSD returns NFS4ERR_INVAL instead.
++			 */
++			status = nfserr_inval;
++			goto out;
++		}
+ 		stp = nfsd4_find_and_lock_existing_open(fp, open);
+ 	} else {
+ 		open->op_file = NULL;
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index b6da80c69ca634..600e66035c1b70 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -304,6 +304,9 @@ static inline bool ntfs_dir_emit(struct ntfs_sb_info *sbi,
+ 	if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
+ 		return true;
+ 
++	if (fname->name_len + sizeof(struct NTFS_DE) > le16_to_cpu(e->size))
++		return true;
++
+ 	name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name,
+ 				     PATH_MAX);
+ 	if (name_len <= 0) {
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 0f0d27d4644a9b..8214970f4594f1 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -1062,10 +1062,10 @@ int inode_read_data(struct inode *inode, void *data, size_t bytes)
+  * Number of bytes for REPARSE_DATA_BUFFER(IO_REPARSE_TAG_SYMLINK)
+  * for unicode string of @uni_len length.
+  */
+-static inline u32 ntfs_reparse_bytes(u32 uni_len)
++static inline u32 ntfs_reparse_bytes(u32 uni_len, bool is_absolute)
+ {
+ 	/* Header + unicode string + decorated unicode string. */
+-	return sizeof(short) * (2 * uni_len + 4) +
++	return sizeof(short) * (2 * uni_len + (is_absolute ? 4 : 0)) +
+ 	       offsetof(struct REPARSE_DATA_BUFFER,
+ 			SymbolicLinkReparseBuffer.PathBuffer);
+ }
+@@ -1078,8 +1078,11 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 	struct REPARSE_DATA_BUFFER *rp;
+ 	__le16 *rp_name;
+ 	typeof(rp->SymbolicLinkReparseBuffer) *rs;
++	bool is_absolute;
+ 
+-	rp = kzalloc(ntfs_reparse_bytes(2 * size + 2), GFP_NOFS);
++	is_absolute = (strlen(symname) > 1 && symname[1] == ':');
++
++	rp = kzalloc(ntfs_reparse_bytes(2 * size + 2, is_absolute), GFP_NOFS);
+ 	if (!rp)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -1094,7 +1097,7 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 		goto out;
+ 
+ 	/* err = the length of unicode name of symlink. */
+-	*nsize = ntfs_reparse_bytes(err);
++	*nsize = ntfs_reparse_bytes(err, is_absolute);
+ 
+ 	if (*nsize > sbi->reparse.max_size) {
+ 		err = -EFBIG;
+@@ -1114,7 +1117,7 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 
+ 	/* PrintName + SubstituteName. */
+ 	rs->SubstituteNameOffset = cpu_to_le16(sizeof(short) * err);
+-	rs->SubstituteNameLength = cpu_to_le16(sizeof(short) * err + 8);
++	rs->SubstituteNameLength = cpu_to_le16(sizeof(short) * err + (is_absolute ? 8 : 0));
+ 	rs->PrintNameLength = rs->SubstituteNameOffset;
+ 
+ 	/*
+@@ -1122,16 +1125,18 @@ ntfs_create_reparse_buffer(struct ntfs_sb_info *sbi, const char *symname,
+ 	 * parse this path.
+ 	 * 0-absolute path 1- relative path (SYMLINK_FLAG_RELATIVE).
+ 	 */
+-	rs->Flags = 0;
++	rs->Flags = cpu_to_le32(is_absolute ? 0 : SYMLINK_FLAG_RELATIVE);
+ 
+-	memmove(rp_name + err + 4, rp_name, sizeof(short) * err);
++	memmove(rp_name + err + (is_absolute ? 4 : 0), rp_name, sizeof(short) * err);
+ 
+-	/* Decorate SubstituteName. */
+-	rp_name += err;
+-	rp_name[0] = cpu_to_le16('\\');
+-	rp_name[1] = cpu_to_le16('?');
+-	rp_name[2] = cpu_to_le16('?');
+-	rp_name[3] = cpu_to_le16('\\');
++	if (is_absolute) {
++		/* Decorate SubstituteName. */
++		rp_name += err;
++		rp_name[0] = cpu_to_le16('\\');
++		rp_name[1] = cpu_to_le16('?');
++		rp_name[2] = cpu_to_le16('?');
++		rp_name[3] = cpu_to_le16('\\');
++	}
+ 
+ 	return rp;
+ out:
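
The size accounting, worked through standalone (the 20-byte header is an assumed stand-in for the offsetof() on REPARSE_DATA_BUFFER): an absolute target stores the name twice, with the SubstituteName copy decorated by the four wide characters of the "\??\" prefix, so only that case needs the extra 4 * sizeof(short) bytes:

#include <stdbool.h>
#include <stdio.h>

static unsigned int reparse_bytes(unsigned int uni_len, bool is_absolute)
{
	unsigned int header = 20;	/* assumed fixed header size */

	/* PrintName + SubstituteName (+ "\??\" only when absolute) */
	return sizeof(short) * (2 * uni_len + (is_absolute ? 4 : 0)) + header;
}

int main(void)
{
	/* "C:\x" -> 4 UTF-16 code units */
	printf("absolute: %u bytes\n", reparse_bytes(4, true));		/* 44 */
	printf("relative: %u bytes\n", reparse_bytes(4, false));	/* 36 */
	return 0;
}
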
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 40b6bce12951ec..89aadc6cdd8779 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -1071,6 +1071,7 @@ static int ocfs2_grab_folios_for_write(struct address_space *mapping,
+ 			if (IS_ERR(wc->w_folios[i])) {
+ 				ret = PTR_ERR(wc->w_folios[i]);
+ 				mlog_errno(ret);
++				wc->w_folios[i] = NULL;
+ 				goto out;
+ 			}
+ 		}
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index e8e3badbc2ec06..1c375fb650185c 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -396,7 +396,7 @@ static ssize_t orangefs_debug_read(struct file *file,
+ 		goto out;
+ 
+ 	mutex_lock(&orangefs_debug_lock);
+-	sprintf_ret = sprintf(buf, "%s", (char *)file->private_data);
++	sprintf_ret = scnprintf(buf, ORANGEFS_MAX_DEBUG_STRING_LEN, "%s", (char *)file->private_data);
+ 	mutex_unlock(&orangefs_debug_lock);
+ 
+ 	read_ret = simple_read_from_buffer(ubuf, count, ppos, buf, sprintf_ret);
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index 4625e097e3a0c0..4c551bfa8927f1 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -891,6 +891,8 @@ static int pidfs_init_fs_context(struct fs_context *fc)
+ 	if (!ctx)
+ 		return -ENOMEM;
+ 
++	fc->s_iflags |= SB_I_NOEXEC;
++	fc->s_iflags |= SB_I_NODEV;
+ 	ctx->ops = &pidfs_sops;
+ 	ctx->eops = &pidfs_export_operations;
+ 	ctx->dops = &pidfs_dentry_operations;
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 751479eb128f0c..0102ab3aaec162 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1020,10 +1020,13 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
+ {
+ 	struct mem_size_stats *mss = walk->private;
+ 	struct vm_area_struct *vma = walk->vma;
+-	pte_t ptent = huge_ptep_get(walk->mm, addr, pte);
+ 	struct folio *folio = NULL;
+ 	bool present = false;
++	spinlock_t *ptl;
++	pte_t ptent;
+ 
++	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
++	ptent = huge_ptep_get(walk->mm, addr, pte);
+ 	if (pte_present(ptent)) {
+ 		folio = page_folio(pte_page(ptent));
+ 		present = true;
+@@ -1042,6 +1045,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
+ 		else
+ 			mss->private_hugetlb += huge_page_size(hstate_vma(vma));
+ 	}
++	spin_unlock(ptl);
+ 	return 0;
+ }
+ #else
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index 6be850d2a34677..3cc68624690876 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -532,17 +532,67 @@ CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash, struct shash_
+ 	return rc;
+ }
+ 
++/*
++ * Set up NTLMv2 response blob with SPN (cifs/<hostname>) appended to the
++ * existing list of AV pairs.
++ */
++static int set_auth_key_response(struct cifs_ses *ses)
++{
++	size_t baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
++	size_t len, spnlen, tilen = 0, num_avs = 2 /* SPN + EOL */;
++	struct TCP_Server_Info *server = ses->server;
++	char *spn __free(kfree) = NULL;
++	struct ntlmssp2_name *av;
++	char *rsp = NULL;
++	int rc;
++
++	spnlen = strlen(server->hostname);
++	len = sizeof("cifs/") + spnlen;
++	spn = kmalloc(len, GFP_KERNEL);
++	if (!spn) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	spnlen = scnprintf(spn, len, "cifs/%.*s",
++			   (int)spnlen, server->hostname);
++
++	av_for_each_entry(ses, av)
++		tilen += sizeof(*av) + AV_LEN(av);
++
++	len = baselen + tilen + spnlen * sizeof(__le16) + num_avs * sizeof(*av);
++	rsp = kmalloc(len, GFP_KERNEL);
++	if (!rsp) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	memcpy(rsp + baselen, ses->auth_key.response, tilen);
++	av = (void *)(rsp + baselen + tilen);
++	av->type = cpu_to_le16(NTLMSSP_AV_TARGET_NAME);
++	av->length = cpu_to_le16(spnlen * sizeof(__le16));
++	cifs_strtoUTF16((__le16 *)av->data, spn, spnlen, ses->local_nls);
++	av = (void *)((__u8 *)av + sizeof(*av) + AV_LEN(av));
++	av->type = cpu_to_le16(NTLMSSP_AV_EOL);
++	av->length = 0;
++
++	rc = 0;
++	ses->auth_key.len = len;
++out:
++	ses->auth_key.response = rsp;
++	return rc;
++}
++
+ int
+ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ {
+ 	struct shash_desc *hmacmd5 = NULL;
+-	int rc;
+-	int baselen;
+-	unsigned int tilen;
++	unsigned char *tiblob = NULL; /* target info blob */
+ 	struct ntlmv2_resp *ntlmv2;
+ 	char ntlmv2_hash[16];
+-	unsigned char *tiblob = NULL; /* target info blob */
+ 	__le64 rsp_timestamp;
++	__u64 cc;
++	int rc;
+ 
+ 	if (nls_cp == NULL) {
+ 		cifs_dbg(VFS, "%s called with nls_cp==NULL\n", __func__);
+@@ -588,32 +638,25 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
+ 	 * (as Windows 7 does)
+ 	 */
+ 	rsp_timestamp = find_timestamp(ses);
++	get_random_bytes(&cc, sizeof(cc));
+ 
+-	baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
+-	tilen = ses->auth_key.len;
+-	tiblob = ses->auth_key.response;
++	cifs_server_lock(ses->server);
+ 
+-	ses->auth_key.response = kmalloc(baselen + tilen, GFP_KERNEL);
+-	if (!ses->auth_key.response) {
+-		rc = -ENOMEM;
++	tiblob = ses->auth_key.response;
++	rc = set_auth_key_response(ses);
++	if (rc) {
+ 		ses->auth_key.len = 0;
+-		goto setup_ntlmv2_rsp_ret;
++		goto unlock;
+ 	}
+-	ses->auth_key.len += baselen;
+ 
+ 	ntlmv2 = (struct ntlmv2_resp *)
+ 			(ses->auth_key.response + CIFS_SESS_KEY_SIZE);
+ 	ntlmv2->blob_signature = cpu_to_le32(0x00000101);
+ 	ntlmv2->reserved = 0;
+ 	ntlmv2->time = rsp_timestamp;
+-
+-	get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal));
++	ntlmv2->client_chal = cc;
+ 	ntlmv2->reserved2 = 0;
+ 
+-	memcpy(ses->auth_key.response + baselen, tiblob, tilen);
+-
+-	cifs_server_lock(ses->server);
+-
+ 	rc = cifs_alloc_hash("hmac(md5)", &hmacmd5);
+ 	if (rc) {
+ 		cifs_dbg(VFS, "Could not allocate HMAC-MD5, rc=%d\n", rc);
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 75142f49d65dd6..3b6bc53ee1c4d6 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -4020,6 +4020,12 @@ CIFSFindFirst(const unsigned int xid, struct cifs_tcon *tcon,
+ 			pSMB->FileName[name_len] = 0;
+ 			pSMB->FileName[name_len+1] = 0;
+ 			name_len += 2;
++		} else if (!searchName[0]) {
++			pSMB->FileName[0] = CIFS_DIR_SEP(cifs_sb);
++			pSMB->FileName[1] = 0;
++			pSMB->FileName[2] = 0;
++			pSMB->FileName[3] = 0;
++			name_len = 4;
+ 		}
+ 	} else {
+ 		name_len = copy_path_name(pSMB->FileName, searchName);
+@@ -4031,6 +4037,10 @@ CIFSFindFirst(const unsigned int xid, struct cifs_tcon *tcon,
+ 			pSMB->FileName[name_len] = '*';
+ 			pSMB->FileName[name_len+1] = 0;
+ 			name_len += 2;
++		} else if (!searchName[0]) {
++			pSMB->FileName[0] = CIFS_DIR_SEP(cifs_sb);
++			pSMB->FileName[1] = 0;
++			name_len = 2;
+ 		}
+ 	}
+ 
+diff --git a/fs/smb/client/compress.c b/fs/smb/client/compress.c
+index 766b4de13da76a..db709f5cd2e1ff 100644
+--- a/fs/smb/client/compress.c
++++ b/fs/smb/client/compress.c
+@@ -155,58 +155,29 @@ static int cmp_bkt(const void *_a, const void *_b)
+ }
+ 
+ /*
+- * TODO:
+- * Support other iter types, if required.
+- * Only ITER_XARRAY is supported for now.
++ * Collect some 2K samples with 2K gaps between.
+  */
+-static int collect_sample(const struct iov_iter *iter, ssize_t max, u8 *sample)
++static int collect_sample(const struct iov_iter *source, ssize_t max, u8 *sample)
+ {
+-	struct folio *folios[16], *folio;
+-	unsigned int nr, i, j, npages;
+-	loff_t start = iter->xarray_start + iter->iov_offset;
+-	pgoff_t last, index = start / PAGE_SIZE;
+-	size_t len, off, foff;
+-	void *p;
+-	int s = 0;
+-
+-	last = (start + max - 1) / PAGE_SIZE;
+-	do {
+-		nr = xa_extract(iter->xarray, (void **)folios, index, last, ARRAY_SIZE(folios),
+-				XA_PRESENT);
+-		if (nr == 0)
+-			return -EIO;
+-
+-		for (i = 0; i < nr; i++) {
+-			folio = folios[i];
+-			npages = folio_nr_pages(folio);
+-			foff = start - folio_pos(folio);
+-			off = foff % PAGE_SIZE;
+-
+-			for (j = foff / PAGE_SIZE; j < npages; j++) {
+-				size_t len2;
+-
+-				len = min_t(size_t, max, PAGE_SIZE - off);
+-				len2 = min_t(size_t, len, SZ_2K);
+-
+-				p = kmap_local_page(folio_page(folio, j));
+-				memcpy(&sample[s], p, len2);
+-				kunmap_local(p);
+-
+-				s += len2;
+-
+-				if (len2 < SZ_2K || s >= max - SZ_2K)
+-					return s;
+-
+-				max -= len;
+-				if (max <= 0)
+-					return s;
+-
+-				start += len;
+-				off = 0;
+-				index++;
+-			}
+-		}
+-	} while (nr == ARRAY_SIZE(folios));
++	struct iov_iter iter = *source;
++	size_t s = 0;
++
++	while (iov_iter_count(&iter) >= SZ_2K) {
++		size_t part = umin(umin(iov_iter_count(&iter), SZ_2K), max);
++		size_t n;
++
++		n = copy_from_iter(sample + s, part, &iter);
++		if (n != part)
++			return -EFAULT;
++
++		s += n;
++		max -= n;
++
++		if (iov_iter_count(&iter) < PAGE_SIZE - SZ_2K)
++			break;
++
++		iov_iter_advance(&iter, SZ_2K);
++	}
+ 
+ 	return s;
+ }
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 5eec8957f2a996..bd3dd2e163cd8d 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -4204,7 +4204,6 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ 		return 0;
+ 	}
+ 
+-	server->lstrp = jiffies;
+ 	server->tcpStatus = CifsInNegotiate;
+ 	server->neg_start = jiffies;
+ 	spin_unlock(&server->srv_lock);
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index 330bc3d25badd8..0a8c2fcc9dedf1 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -332,6 +332,7 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	struct cifs_server_iface *old_iface = NULL;
+ 	struct cifs_server_iface *last_iface = NULL;
+ 	struct sockaddr_storage ss;
++	int retry = 0;
+ 
+ 	spin_lock(&ses->chan_lock);
+ 	chan_index = cifs_ses_get_chan_index(ses, server);
+@@ -360,6 +361,7 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 		return;
+ 	}
+ 
++try_again:
+ 	last_iface = list_last_entry(&ses->iface_list, struct cifs_server_iface,
+ 				     iface_head);
+ 	iface_min_speed = last_iface->speed;
+@@ -397,6 +399,13 @@ cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)
+ 	}
+ 
+ 	if (list_entry_is_head(iface, &ses->iface_list, iface_head)) {
++		list_for_each_entry(iface, &ses->iface_list, iface_head)
++			iface->weight_fulfilled = 0;
++
++		/* see if it can be satisfied in second attempt */
++		if (!retry++)
++			goto try_again;
++
+ 		iface = NULL;
+ 		cifs_dbg(FYI, "unable to find a suitable iface\n");
+ 	}
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 938a8a7c5d2187..4bb065a6fbaa7d 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -772,6 +772,13 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 			bytes_left -= sizeof(*p);
+ 			break;
+ 		}
++		/* Validate that Next doesn't point beyond the buffer */
++		if (next > bytes_left) {
++			cifs_dbg(VFS, "%s: invalid Next pointer %zu > %zd\n",
++				 __func__, next, bytes_left);
++			rc = -EINVAL;
++			goto out;
++		}
+ 		p = (struct network_interface_info_ioctl_rsp *)((u8 *)p+next);
+ 		bytes_left -= next;
+ 	}
+@@ -783,7 +790,9 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 	}
+ 
+ 	/* Azure rounds the buffer size up 8, to a 16 byte boundary */
+-	if ((bytes_left > 8) || p->Next)
++	if ((bytes_left > 8) ||
++	    (bytes_left >= offsetof(struct network_interface_info_ioctl_rsp, Next)
++	     + sizeof(p->Next) && p->Next))
+ 		cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);
+ 
+ 	ses->iface_last_update = jiffies;
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index c661a8e6c18b85..b9bb531717a651 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -277,18 +277,20 @@ static void send_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	log_rdma_send(INFO, "smbd_request 0x%p completed wc->status=%d\n",
+ 		request, wc->status);
+ 
+-	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
+-		log_rdma_send(ERR, "wc->status=%d wc->opcode=%d\n",
+-			wc->status, wc->opcode);
+-		smbd_disconnect_rdma_connection(request->info);
+-	}
+-
+ 	for (i = 0; i < request->num_sge; i++)
+ 		ib_dma_unmap_single(sc->ib.dev,
+ 			request->sge[i].addr,
+ 			request->sge[i].length,
+ 			DMA_TO_DEVICE);
+ 
++	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
++		log_rdma_send(ERR, "wc->status=%d wc->opcode=%d\n",
++			wc->status, wc->opcode);
++		mempool_free(request, info->request_mempool);
++		smbd_disconnect_rdma_connection(info);
++		return;
++	}
++
+ 	if (atomic_dec_and_test(&request->info->send_pending))
+ 		wake_up(&request->info->wait_send_pending);
+ 
+@@ -1314,10 +1316,6 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	log_rdma_event(INFO, "cancelling idle timer\n");
+ 	cancel_delayed_work_sync(&info->idle_timer_work);
+ 
+-	log_rdma_event(INFO, "wait for all send posted to IB to finish\n");
+-	wait_event(info->wait_send_pending,
+-		atomic_read(&info->send_pending) == 0);
+-
+ 	/* It's not possible for upper layer to get to reassembly */
+ 	log_rdma_event(INFO, "drain the reassembly queue\n");
+ 	do {
+@@ -1691,7 +1689,6 @@ static struct smbd_connection *_smbd_get_connection(
+ 	cancel_delayed_work_sync(&info->idle_timer_work);
+ 	destroy_caches_and_workqueue(info);
+ 	sc->status = SMBDIRECT_SOCKET_NEGOTIATE_FAILED;
+-	init_waitqueue_head(&info->conn_wait);
+ 	rdma_disconnect(sc->rdma.cm_id);
+ 	wait_event(info->conn_wait,
+ 		sc->status == SMBDIRECT_SOCKET_DISCONNECTED);
+@@ -1963,7 +1960,11 @@ int smbd_send(struct TCP_Server_Info *server,
+ 	 */
+ 
+ 	wait_event(info->wait_send_pending,
+-		atomic_read(&info->send_pending) == 0);
++		atomic_read(&info->send_pending) == 0 ||
++		sc->status != SMBDIRECT_SOCKET_CONNECTED);
++
++	if (sc->status != SMBDIRECT_SOCKET_CONNECTED && rc == 0)
++		rc = -EAGAIN;
+ 
+ 	return rc;
+ }
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index a760785aa1ec7e..e5c469a87d64c0 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -6063,7 +6063,6 @@ static int smb2_create_link(struct ksmbd_work *work,
+ {
+ 	char *link_name = NULL, *target_name = NULL, *pathname = NULL;
+ 	struct path path, parent_path;
+-	bool file_present = false;
+ 	int rc;
+ 
+ 	if (buf_len < (u64)sizeof(struct smb2_file_link_info) +
+@@ -6096,11 +6095,8 @@ static int smb2_create_link(struct ksmbd_work *work,
+ 	if (rc) {
+ 		if (rc != -ENOENT)
+ 			goto out;
+-	} else
+-		file_present = true;
+-
+-	if (file_info->ReplaceIfExists) {
+-		if (file_present) {
++	} else {
++		if (file_info->ReplaceIfExists) {
+ 			rc = ksmbd_vfs_remove_file(work, &path);
+ 			if (rc) {
+ 				rc = -EINVAL;
+@@ -6108,21 +6104,17 @@ static int smb2_create_link(struct ksmbd_work *work,
+ 					    link_name);
+ 				goto out;
+ 			}
+-		}
+-	} else {
+-		if (file_present) {
++		} else {
+ 			rc = -EEXIST;
+ 			ksmbd_debug(SMB, "link already exists\n");
+ 			goto out;
+ 		}
++		ksmbd_vfs_kern_path_unlock(&parent_path, &path);
+ 	}
+-
+ 	rc = ksmbd_vfs_link(work, target_name, link_name);
+ 	if (rc)
+ 		rc = -EINVAL;
+ out:
+-	if (file_present)
+-		ksmbd_vfs_kern_path_unlock(&parent_path, &path);
+ 
+ 	if (!IS_ERR(link_name))
+ 		kfree(link_name);
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index a3fd3cc591bd88..43f83fc9594fa1 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -465,9 +465,20 @@ static int tracefs_d_revalidate(struct inode *inode, const struct qstr *name,
+ 	return !(ei && ei->is_freed);
+ }
+ 
++static int tracefs_d_delete(const struct dentry *dentry)
++{
++	/*
++	 * We want to keep eventfs dentries around but not tracefs
++	 * ones. eventfs dentries have content in d_fsdata.
++	 * Use d_fsdata to determine if it's an eventfs dentry or not.
++	 */
++	return dentry->d_fsdata == NULL;
++}
++
+ static const struct dentry_operations tracefs_dentry_operations = {
+ 	.d_revalidate = tracefs_d_revalidate,
+ 	.d_release = tracefs_d_release,
++	.d_delete = tracefs_d_delete,
+ };
+ 
+ static int tracefs_fill_super(struct super_block *sb, struct fs_context *fc)
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 1c8a736b33097e..b2f168b0a0d18e 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1440,7 +1440,7 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 	struct genericPartitionMap *gpm;
+ 	uint16_t ident;
+ 	struct buffer_head *bh;
+-	unsigned int table_len;
++	unsigned int table_len, part_map_count;
+ 	int ret;
+ 
+ 	bh = udf_read_tagged(sb, block, block, &ident);
+@@ -1461,7 +1461,16 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 					   "logical volume");
+ 	if (ret)
+ 		goto out_bh;
+-	ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
++
++	part_map_count = le32_to_cpu(lvd->numPartitionMaps);
++	if (part_map_count > table_len / sizeof(struct genericPartitionMap1)) {
++		udf_err(sb, "error loading logical volume descriptor: "
++			"Too many partition maps (%u > %u)\n", part_map_count,
++			table_len / (unsigned)sizeof(struct genericPartitionMap1));
++		ret = -EIO;
++		goto out_bh;
++	}
++	ret = udf_sb_alloc_partition_maps(sb, part_map_count);
+ 	if (ret)
+ 		goto out_bh;
+ 
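
The guard is the usual "count times element size must fit in the descriptor" check, expressed as a division so that a hostile on-disk count cannot overflow a multiplication. A standalone sketch with assumed sizes:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int table_len = 512;		/* bytes left in the descriptor */
	unsigned int entry_size = 6;		/* assumed partition map entry size */
	uint32_t part_map_count = 0xffffffff;	/* hostile on-disk value */

	if (part_map_count > table_len / entry_size) {
		printf("reject: %u maps > %u possible\n",
		       part_map_count, table_len / entry_size);
		return 1;
	}
	return 0;
}
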
+diff --git a/fs/xfs/scrub/trace.h b/fs/xfs/scrub/trace.h
+index d7c4ced47c1567..4368f08e91c68e 100644
+--- a/fs/xfs/scrub/trace.h
++++ b/fs/xfs/scrub/trace.h
+@@ -479,7 +479,7 @@ DECLARE_EVENT_CLASS(xchk_dqiter_class,
+ 		__field(xfs_exntst_t, state)
+ 	),
+ 	TP_fast_assign(
+-		__entry->dev = cursor->sc->ip->i_mount->m_super->s_dev;
++		__entry->dev = cursor->sc->mp->m_super->s_dev;
+ 		__entry->dqtype = cursor->dqtype;
+ 		__entry->ino = cursor->quota_ip->i_ino;
+ 		__entry->cur_id = cursor->id;
+diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
+index 1a7e377d4cbb4f..1eed74c7f2f7bc 100644
+--- a/include/drm/gpu_scheduler.h
++++ b/include/drm/gpu_scheduler.h
+@@ -508,6 +508,24 @@ struct drm_sched_backend_ops {
+          * and it's time to clean it up.
+ 	 */
+ 	void (*free_job)(struct drm_sched_job *sched_job);
++
++	/**
++	 * @cancel_job: Used by the scheduler to guarantee remaining jobs' fences
++	 * get signaled in drm_sched_fini().
++	 *
++	 * Used by the scheduler to cancel all jobs that have not been executed
++	 * with &struct drm_sched_backend_ops.run_job by the time
++	 * drm_sched_fini() gets invoked.
++	 *
++	 * Drivers need to signal the passed job's hardware fence with an
++	 * appropriate error code (e.g., -ECANCELED) in this callback. They
++	 * must not free the job.
++	 *
++	 * The scheduler will only call this callback once it has stopped calling
++	 * all other callbacks forever, with the exception of &struct
++	 * drm_sched_backend_ops.free_job.
++	 */
++	void (*cancel_job)(struct drm_sched_job *sched_job);
+ };
+ 
+ /**
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index f102c0fe34318f..71e692f952905e 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -1503,7 +1503,7 @@ int acpi_parse_spcr(bool enable_earlycon, bool enable_console);
+ #else
+ static inline int acpi_parse_spcr(bool enable_earlycon, bool enable_console)
+ {
+-	return 0;
++	return -ENODEV;
+ }
+ #endif
+ 
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index 3d1577f07c1c82..930daff207df2c 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -350,11 +350,11 @@ enum req_op {
+ 	/* Close a zone */
+ 	REQ_OP_ZONE_CLOSE	= (__force blk_opf_t)11,
+ 	/* Transition a zone to full */
+-	REQ_OP_ZONE_FINISH	= (__force blk_opf_t)12,
++	REQ_OP_ZONE_FINISH	= (__force blk_opf_t)13,
+ 	/* reset a zone write pointer */
+-	REQ_OP_ZONE_RESET	= (__force blk_opf_t)13,
++	REQ_OP_ZONE_RESET	= (__force blk_opf_t)15,
+ 	/* reset all the zone present on the device */
+-	REQ_OP_ZONE_RESET_ALL	= (__force blk_opf_t)15,
++	REQ_OP_ZONE_RESET_ALL	= (__force blk_opf_t)17,
+ 
+ 	/* Driver private requests */
+ 	REQ_OP_DRV_IN		= (__force blk_opf_t)34,
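
The renumbering is not cosmetic: the block layer encodes "this op writes to the medium" in the least significant bit of the opcode (see op_is_write()), so zone-management ops that modify the device must be odd. A standalone sketch of the convention, with values mirroring the hunk above and op_is_write() modeled on the kernel's bit test:

#include <stdbool.h>
#include <stdio.h>

enum req_op {
	REQ_OP_ZONE_FINISH	= 13,
	REQ_OP_ZONE_RESET	= 15,
	REQ_OP_ZONE_RESET_ALL	= 17,
};

/* the kernel's op_is_write() boils down to this bit test */
static bool op_is_write(unsigned int op)
{
	return op & 1;
}

int main(void)
{
	printf("ZONE_FINISH writes: %d\n", op_is_write(REQ_OP_ZONE_FINISH));
	printf("ZONE_RESET  writes: %d\n", op_is_write(REQ_OP_ZONE_RESET));
	printf("old even value 12 looked like a read: %d\n", op_is_write(12));
	return 0;
}
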
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 181a0deadc9e9b..620345ce3aaad9 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -842,6 +842,55 @@ static inline unsigned int disk_nr_zones(struct gendisk *disk)
+ {
+ 	return disk->nr_zones;
+ }
++
++/**
++ * bio_needs_zone_write_plugging - Check if a BIO needs to be handled with zone
++ *				   write plugging
++ * @bio: The BIO being submitted
++ *
++ * Return true whenever @bio execution needs to be handled through zone
++ * write plugging (using blk_zone_plug_bio()). Return false otherwise.
++ */
++static inline bool bio_needs_zone_write_plugging(struct bio *bio)
++{
++	enum req_op op = bio_op(bio);
++
++	/*
++	 * Only zoned block devices have a zone write plug hash table. But not
++	 * all of them have one (e.g. DM devices may not need one).
++	 */
++	if (!bio->bi_bdev->bd_disk->zone_wplugs_hash)
++		return false;
++
++	/* Only write operations need zone write plugging. */
++	if (!op_is_write(op))
++		return false;
++
++	/* Ignore empty flush */
++	if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++		return false;
++
++	/* Ignore BIOs that have already been handled by zone write plugging. */
++	if (bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING))
++		return false;
++
++	/*
++	 * All zone write operations must be handled through zone write plugging
++	 * using blk_zone_plug_bio().
++	 */
++	switch (op) {
++	case REQ_OP_ZONE_APPEND:
++	case REQ_OP_WRITE:
++	case REQ_OP_WRITE_ZEROES:
++	case REQ_OP_ZONE_FINISH:
++	case REQ_OP_ZONE_RESET:
++	case REQ_OP_ZONE_RESET_ALL:
++		return true;
++	default:
++		return false;
++	}
++}
++
+ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs);
+ 
+ /**
+@@ -871,6 +920,12 @@ static inline unsigned int disk_nr_zones(struct gendisk *disk)
+ {
+ 	return 0;
+ }
++
++static inline bool bio_needs_zone_write_plugging(struct bio *bio)
++{
++	return false;
++}
++
+ static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ {
+ 	return false;
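
A sketch of the intended call pattern from a submission path, per the kernel-doc above; the driver context and the return-value handling are assumptions for illustration, not part of this patch:

	/* in a bio submission path, before issuing the bio */
	if (bio_needs_zone_write_plugging(bio)) {
		if (blk_zone_plug_bio(bio, nr_segs))
			return;	/* the zone write plug now owns the BIO */
	}
	/* otherwise submit the bio normally */
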
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 568a9d8c749bc5..7f260e0e20498d 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -1239,6 +1239,8 @@ void hid_quirks_exit(__u16 bus);
+ 	dev_notice(&(hid)->dev, fmt, ##__VA_ARGS__)
+ #define hid_warn(hid, fmt, ...)				\
+ 	dev_warn(&(hid)->dev, fmt, ##__VA_ARGS__)
++#define hid_warn_ratelimited(hid, fmt, ...)				\
++	dev_warn_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__)
+ #define hid_info(hid, fmt, ...)				\
+ 	dev_info(&(hid)->dev, fmt, ##__VA_ARGS__)
+ #define hid_dbg(hid, fmt, ...)				\
+diff --git a/include/linux/hypervisor.h b/include/linux/hypervisor.h
+index 9efbc54e35e596..be5417303ecf69 100644
+--- a/include/linux/hypervisor.h
++++ b/include/linux/hypervisor.h
+@@ -37,6 +37,9 @@ static inline bool hypervisor_isolated_pci_functions(void)
+ 	if (IS_ENABLED(CONFIG_S390))
+ 		return true;
+ 
++	if (IS_ENABLED(CONFIG_LOONGARCH))
++		return true;
++
+ 	return jailhouse_paravirt();
+ }
+ 
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index 38456b42cdb556..b9f699799cf6e9 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -79,11 +79,6 @@ static inline struct vlan_ethhdr *skb_vlan_eth_hdr(const struct sk_buff *skb)
+ /* found in socket.c */
+ extern void vlan_ioctl_set(int (*hook)(struct net *, void __user *));
+ 
+-static inline bool is_vlan_dev(const struct net_device *dev)
+-{
+-        return dev->priv_flags & IFF_802_1Q_VLAN;
+-}
+-
+ #define skb_vlan_tag_present(__skb)	(!!(__skb)->vlan_all)
+ #define skb_vlan_tag_get(__skb)		((__skb)->vlan_tci)
+ #define skb_vlan_tag_get_id(__skb)	((__skb)->vlan_tci & VLAN_VID_MASK)
+@@ -200,6 +195,11 @@ struct vlan_dev_priv {
+ #endif
+ };
+ 
++static inline bool is_vlan_dev(const struct net_device *dev)
++{
++	return dev->priv_flags & IFF_802_1Q_VLAN;
++}
++
+ static inline struct vlan_dev_priv *vlan_dev_priv(const struct net_device *dev)
+ {
+ 	return netdev_priv(dev);
+@@ -237,6 +237,11 @@ extern void vlan_vids_del_by_dev(struct net_device *dev,
+ extern bool vlan_uses_dev(const struct net_device *dev);
+ 
+ #else
++static inline bool is_vlan_dev(const struct net_device *dev)
++{
++	return false;
++}
++
+ static inline struct net_device *
+ __vlan_find_dev_deep_rcu(struct net_device *real_dev,
+ 		     __be16 vlan_proto, u16 vlan_id)
+@@ -254,19 +259,19 @@ vlan_for_each(struct net_device *dev,
+ 
+ static inline struct net_device *vlan_dev_real_dev(const struct net_device *dev)
+ {
+-	BUG();
++	WARN_ON_ONCE(1);
+ 	return NULL;
+ }
+ 
+ static inline u16 vlan_dev_vlan_id(const struct net_device *dev)
+ {
+-	BUG();
++	WARN_ON_ONCE(1);
+ 	return 0;
+ }
+ 
+ static inline __be16 vlan_dev_vlan_proto(const struct net_device *dev)
+ {
+-	BUG();
++	WARN_ON_ONCE(1);
+ 	return 0;
+ }
+ 
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 1e5aec839041de..e9d87cb1dbb757 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -534,6 +534,7 @@ typedef void (*ata_postreset_fn_t)(struct ata_link *link, unsigned int *classes)
+ 
+ extern struct device_attribute dev_attr_unload_heads;
+ #ifdef CONFIG_SATA_HOST
++extern struct device_attribute dev_attr_link_power_management_supported;
+ extern struct device_attribute dev_attr_link_power_management_policy;
+ extern struct device_attribute dev_attr_ncq_prio_supported;
+ extern struct device_attribute dev_attr_ncq_prio_enable;
+diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
+index 0dc0cf2863e2ad..7a805796fcfd07 100644
+--- a/include/linux/memory-tiers.h
++++ b/include/linux/memory-tiers.h
+@@ -18,7 +18,7 @@
+  * adistance value (slightly faster) than default DRAM adistance to be part of
+  * the same memory tier.
+  */
+-#define MEMTIER_ADISTANCE_DRAM	((4 * MEMTIER_CHUNK_SIZE) + (MEMTIER_CHUNK_SIZE >> 1))
++#define MEMTIER_ADISTANCE_DRAM	((4L * MEMTIER_CHUNK_SIZE) + (MEMTIER_CHUNK_SIZE >> 1))
+ 
+ struct memory_tier;
+ struct memory_dev_type {
+diff --git a/include/linux/packing.h b/include/linux/packing.h
+index 0589d70bbe0434..20ae4d452c7bb4 100644
+--- a/include/linux/packing.h
++++ b/include/linux/packing.h
+@@ -5,8 +5,12 @@
+ #ifndef _LINUX_PACKING_H
+ #define _LINUX_PACKING_H
+ 
+-#include <linux/types.h>
++#include <linux/array_size.h>
+ #include <linux/bitops.h>
++#include <linux/build_bug.h>
++#include <linux/minmax.h>
++#include <linux/stddef.h>
++#include <linux/types.h>
+ 
+ #define GEN_PACKED_FIELD_STRUCT(__type) \
+ 	struct packed_field_ ## __type { \
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 05e68f35f39238..d56d0dd80afb5b 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -328,6 +328,11 @@ struct rcec_ea;
+  *			determined (e.g., for Root Complex Integrated
+  *			Endpoints without the relevant Capability
+  *			Registers).
++ * @is_hotplug_bridge:	Hotplug bridge of any kind (e.g. PCIe Hot-Plug Capable,
++ *			Conventional PCI Hot-Plug, ACPI slot).
++ *			Such bridges are allocated additional MMIO and bus
++ *			number resources to allow for hierarchy expansion.
++ * @is_pciehp:		PCIe Hot-Plug Capable bridge.
+  */
+ struct pci_dev {
+ 	struct list_head bus_list;	/* Node in per-bus list */
+@@ -451,6 +456,7 @@ struct pci_dev {
+ 	unsigned int	is_physfn:1;
+ 	unsigned int	is_virtfn:1;
+ 	unsigned int	is_hotplug_bridge:1;
++	unsigned int	is_pciehp:1;
+ 	unsigned int	shpc_managed:1;		/* SHPC owned by shpchp */
+ 	unsigned int	is_thunderbolt:1;	/* Thunderbolt controller */
+ 	/*
+diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
+index 189140bf11fc40..4adf4b364fcda9 100644
+--- a/include/linux/sbitmap.h
++++ b/include/linux/sbitmap.h
+@@ -213,12 +213,12 @@ int sbitmap_get(struct sbitmap *sb);
+  * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
+  * limiting the depth used from each word.
+  * @sb: Bitmap to allocate from.
+- * @shallow_depth: The maximum number of bits to allocate from a single word.
++ * @shallow_depth: The maximum number of bits to allocate from the bitmap.
+  *
+  * This rather specific operation allows for having multiple users with
+  * different allocation limits. E.g., there can be a high-priority class that
+  * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
+- * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
++ * with a @shallow_depth of (sb->depth >> 1). Then, the low-priority
+  * class can only allocate half of the total bits in the bitmap, preventing it
+  * from starving out the high-priority class.
+  *
+@@ -478,7 +478,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
+  * sbitmap_queue, limiting the depth used from each word, with preemption
+  * already disabled.
+  * @sbq: Bitmap queue to allocate from.
+- * @shallow_depth: The maximum number of bits to allocate from a single word.
++ * @shallow_depth: The maximum number of bits to allocate from the queue.
+  * See sbitmap_get_shallow().
+  *
+  * If you call this, make sure to call sbitmap_queue_min_shallow_depth() after
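
A worked example of the updated semantics, with illustrative numbers: the cap now applies to the bitmap as a whole rather than to each word, so halving is expressed against sb->depth:

#include <stdio.h>

int main(void)
{
	unsigned int depth = 64;			/* total bits in the bitmap */
	unsigned int shallow_depth = depth >> 1;	/* low-priority cap */

	printf("low-priority users may hold at most %u of %u bits\n",
	       shallow_depth, depth);
	return 0;
}
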
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 37f5c6099b1fbb..67a906702830c1 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -3688,7 +3688,13 @@ static inline void *skb_frag_address(const skb_frag_t *frag)
+  */
+ static inline void *skb_frag_address_safe(const skb_frag_t *frag)
+ {
+-	void *ptr = page_address(skb_frag_page(frag));
++	struct page *page = skb_frag_page(frag);
++	void *ptr;
++
++	if (!page)
++		return NULL;
++
++	ptr = page_address(page);
+ 	if (unlikely(!ptr))
+ 		return NULL;
+ 
+diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
+index 2d207cb4837dbf..4ac082a6317381 100644
+--- a/include/linux/usb/cdc_ncm.h
++++ b/include/linux/usb/cdc_ncm.h
+@@ -119,6 +119,7 @@ struct cdc_ncm_ctx {
+ 	u32 timer_interval;
+ 	u32 max_ndp_size;
+ 	u8 is_ndp16;
++	u8 filtering_supported;
+ 	union {
+ 		struct usb_cdc_ncm_ndp16 *delayed_ndp16;
+ 		struct usb_cdc_ncm_ndp32 *delayed_ndp32;
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index 36fb3edfa403d9..6c00687539cf46 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -111,7 +111,12 @@ static inline size_t virtio_vsock_skb_len(struct sk_buff *skb)
+ 	return (size_t)(skb_end_pointer(skb) - skb->head);
+ }
+ 
+-#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
++/* Dimension the RX SKB so that the entire thing fits exactly into
++ * a single 4KiB page. This avoids wasting memory due to alloc_skb()
++ * rounding up to the next page order and also means that we
++ * don't leave higher-order pages sitting around in the RX queue.
++ */
++#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	SKB_WITH_OVERHEAD(1024 * 4)
+ #define VIRTIO_VSOCK_MAX_BUF_SIZE		0xFFFFFFFFUL
+ #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)
+ 
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 5796ca9fe5da38..8fa82987313424 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -2852,6 +2852,12 @@ struct hci_evt_le_big_sync_estabilished {
+ 	__le16  bis[];
+ } __packed;
+ 
++#define HCI_EVT_LE_BIG_SYNC_LOST 0x1e
++struct hci_evt_le_big_sync_lost {
++	__u8    handle;
++	__u8    reason;
++} __packed;
++
+ #define HCI_EVT_LE_BIG_INFO_ADV_REPORT	0x22
+ struct hci_evt_le_big_info_adv_report {
+ 	__le16  sync_handle;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c371dadc6fa3e9..f22881bf1b392a 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1352,7 +1352,8 @@ hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
+ }
+ 
+ static inline struct hci_conn *
+-hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
++hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle, __u16 state,
++			       __u8 role)
+ {
+ 	struct hci_conn_hash *h = &hdev->conn_hash;
+ 	struct hci_conn  *c;
+@@ -1360,7 +1361,7 @@ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != BIS_LINK || c->state != state)
++		if (c->type != BIS_LINK || c->state != state || c->role != role)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 10248d527616aa..1faeb587988fe9 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -633,7 +633,7 @@ ieee80211_get_sband_iftype_data(const struct ieee80211_supported_band *sband,
+ 	const struct ieee80211_sband_iftype_data *data;
+ 	int i;
+ 
+-	if (WARN_ON(iftype >= NL80211_IFTYPE_MAX))
++	if (WARN_ON(iftype >= NUM_NL80211_IFTYPES))
+ 		return NULL;
+ 
+ 	if (iftype == NL80211_IFTYPE_AP_VLAN)
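
The subtlety here is an off-by-one: NL80211_IFTYPE_MAX is defined as NUM_NL80211_IFTYPES - 1, i.e. the last valid type, so the old ">= NL80211_IFTYPE_MAX" check wrongly rejected it. A minimal standalone illustration with assumed enum names:

#include <stdio.h>

enum { IFTYPE_A, IFTYPE_B, IFTYPE_C, NUM_IFTYPES,
       IFTYPE_MAX = NUM_IFTYPES - 1 };

int main(void)
{
	int iftype = IFTYPE_MAX;	/* the last *valid* interface type */

	printf(">= MAX rejects it (old bug): %d\n", iftype >= IFTYPE_MAX);
	printf(">= NUM rejects it (fixed):   %d\n", iftype >= NUM_IFTYPES);
	return 0;
}
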
+diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
+index ff406ef4fd4aab..29a36709e7f35c 100644
+--- a/include/net/ip_vs.h
++++ b/include/net/ip_vs.h
+@@ -1163,6 +1163,14 @@ static inline const struct cpumask *sysctl_est_cpulist(struct netns_ipvs *ipvs)
+ 		return housekeeping_cpumask(HK_TYPE_KTHREAD);
+ }
+ 
++static inline const struct cpumask *sysctl_est_preferred_cpulist(struct netns_ipvs *ipvs)
++{
++	if (ipvs->est_cpulist_valid)
++		return ipvs->sysctl_est_cpulist;
++	else
++		return NULL;
++}
++
+ static inline int sysctl_est_nice(struct netns_ipvs *ipvs)
+ {
+ 	return ipvs->sysctl_est_nice;
+@@ -1270,6 +1278,11 @@ static inline const struct cpumask *sysctl_est_cpulist(struct netns_ipvs *ipvs)
+ 	return housekeeping_cpumask(HK_TYPE_KTHREAD);
+ }
+ 
++static inline const struct cpumask *sysctl_est_preferred_cpulist(struct netns_ipvs *ipvs)
++{
++	return NULL;
++}
++
+ static inline int sysctl_est_nice(struct netns_ipvs *ipvs)
+ {
+ 	return IPVS_EST_NICE;
+diff --git a/include/net/kcm.h b/include/net/kcm.h
+index 441e993be634ce..d9c35e71ecea40 100644
+--- a/include/net/kcm.h
++++ b/include/net/kcm.h
+@@ -71,7 +71,6 @@ struct kcm_sock {
+ 	struct list_head wait_psock_list;
+ 	struct sk_buff *seq_skb;
+ 	struct mutex tx_mutex;
+-	u32 tx_stopped : 1;
+ 
+ 	/* Don't use bit fields here, these are set under different locks */
+ 	bool tx_wait;
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 82617579d91065..2e9d1c0bf5ac17 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -4302,6 +4302,8 @@ struct ieee80211_prep_tx_info {
+  * @mgd_complete_tx: Notify the driver that the response frame for a previously
+  *	transmitted frame announced with @mgd_prepare_tx was received, the data
+  *	is filled similarly to @mgd_prepare_tx though the duration is not used.
++ *	Note that this isn't always called for each mgd_prepare_tx() call; for
++ *	example, for SAE the 'confirm' messages can be on the air in any order.
+  *
+  * @mgd_protect_tdls_discover: Protect a TDLS discovery session. After sending
+  *	a TDLS discovery-request, we expect a reply to arrive on the AP's
+@@ -4466,6 +4468,8 @@ struct ieee80211_prep_tx_info {
+  *	new links bitmaps may be 0 if going from/to a non-MLO situation.
+  *	The @old array contains pointers to the old bss_conf structures
+  *	that were already removed, in case they're needed.
++ *	Note that removal of a link should always succeed, so the return value
++ *	will be ignored in a removal only case.
+  *	This callback can sleep.
+  * @change_sta_links: Change the valid links of a station, similar to
+  *	@change_vif_links. This callback can sleep.
+diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
+index 431b593de70937..1509a536cb855a 100644
+--- a/include/net/page_pool/types.h
++++ b/include/net/page_pool/types.h
+@@ -265,6 +265,8 @@ struct page_pool *page_pool_create_percpu(const struct page_pool_params *params,
+ struct xdp_mem_info;
+ 
+ #ifdef CONFIG_PAGE_POOL
++void page_pool_enable_direct_recycling(struct page_pool *pool,
++				       struct napi_struct *napi);
+ void page_pool_disable_direct_recycling(struct page_pool *pool);
+ void page_pool_destroy(struct page_pool *pool);
+ void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
+diff --git a/include/sound/sdca_function.h b/include/sound/sdca_function.h
+index eaedb54a83227b..b43bda42eeca60 100644
+--- a/include/sound/sdca_function.h
++++ b/include/sound/sdca_function.h
+@@ -16,6 +16,8 @@ struct device;
+ struct sdca_entity;
+ struct sdca_function_desc;
+ 
++#define SDCA_NO_INTERRUPT -1
++
+ /*
+  * The addressing space for SDCA relies on 7 bits for Entities, so a
+  * maximum of 128 Entities per function can be represented.
+diff --git a/include/trace/events/thp.h b/include/trace/events/thp.h
+index f50048af5fcc28..c8fe879d5828bd 100644
+--- a/include/trace/events/thp.h
++++ b/include/trace/events/thp.h
+@@ -8,6 +8,7 @@
+ #include <linux/types.h>
+ #include <linux/tracepoint.h>
+ 
++#ifdef CONFIG_PPC_BOOK3S_64
+ DECLARE_EVENT_CLASS(hugepage_set,
+ 
+ 	    TP_PROTO(unsigned long addr, unsigned long pte),
+@@ -66,6 +67,7 @@ DEFINE_EVENT(hugepage_update, hugepage_update_pud,
+ 	    TP_PROTO(unsigned long addr, unsigned long pud, unsigned long clr, unsigned long set),
+ 	    TP_ARGS(addr, pud, clr, set)
+ );
++#endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ DECLARE_EVENT_CLASS(migration_pmd,
+ 
+diff --git a/include/uapi/linux/in6.h b/include/uapi/linux/in6.h
+index ff8d21f9e95b77..5a47339ef7d768 100644
+--- a/include/uapi/linux/in6.h
++++ b/include/uapi/linux/in6.h
+@@ -152,7 +152,6 @@ struct in6_flowlabel_req {
+ /*
+  *	IPV6 socket options
+  */
+-#if __UAPI_DEF_IPV6_OPTIONS
+ #define IPV6_ADDRFORM		1
+ #define IPV6_2292PKTINFO	2
+ #define IPV6_2292HOPOPTS	3
+@@ -169,8 +168,10 @@ struct in6_flowlabel_req {
+ #define IPV6_MULTICAST_IF	17
+ #define IPV6_MULTICAST_HOPS	18
+ #define IPV6_MULTICAST_LOOP	19
++#if __UAPI_DEF_IPV6_OPTIONS
+ #define IPV6_ADD_MEMBERSHIP	20
+ #define IPV6_DROP_MEMBERSHIP	21
++#endif
+ #define IPV6_ROUTER_ALERT	22
+ #define IPV6_MTU_DISCOVER	23
+ #define IPV6_MTU		24
+@@ -203,7 +204,6 @@ struct in6_flowlabel_req {
+ #define IPV6_IPSEC_POLICY	34
+ #define IPV6_XFRM_POLICY	35
+ #define IPV6_HDRINCL		36
+-#endif
+ 
+ /*
+  * Multicast:
+diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
+index cfd17e382082fa..78be93899699b1 100644
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -50,7 +50,7 @@ struct io_uring_sqe {
+ 	};
+ 	__u32	len;		/* buffer size or number of iovecs */
+ 	union {
+-		__kernel_rwf_t	rw_flags;
++		__u32		rw_flags;
+ 		__u32		fsync_flags;
+ 		__u16		poll_events;	/* compatibility */
+ 		__u32		poll32_events;	/* word-reversed for BE */
+diff --git a/io_uring/memmap.c b/io_uring/memmap.c
+index 725dc0bec24c42..2e99dffddfc5cc 100644
+--- a/io_uring/memmap.c
++++ b/io_uring/memmap.c
+@@ -156,7 +156,7 @@ static int io_region_allocate_pages(struct io_ring_ctx *ctx,
+ 				    unsigned long mmap_offset)
+ {
+ 	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN;
+-	unsigned long size = mr->nr_pages << PAGE_SHIFT;
++	size_t size = (size_t) mr->nr_pages << PAGE_SHIFT;
+ 	unsigned long nr_allocated;
+ 	struct page **pages;
+ 	void *p;
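
Why the cast matters, in a standalone sketch (the 32-bit type for nr_pages is an assumption for illustration): the shift is performed in the type of the left operand, so a large page count wraps before the result is ever widened:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t nr_pages = 1u << 21;	/* 2^21 pages = 8 GiB of 4K pages */

	uint64_t bad  = nr_pages << 12;			/* 32-bit shift wraps to 0 */
	uint64_t good = (uint64_t)nr_pages << 12;	/* widen first: 8 GiB */

	printf("bad=%llu good=%llu\n",
	       (unsigned long long)bad, (unsigned long long)good);
	return 0;
}
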
+diff --git a/io_uring/net.c b/io_uring/net.c
+index bec8c6ed0a93f9..875561d0e71d43 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -477,6 +477,15 @@ static int io_bundle_nbufs(struct io_async_msghdr *kmsg, int ret)
+ 	return nbufs;
+ }
+ 
++static int io_net_kbuf_recyle(struct io_kiocb *req,
++			      struct io_async_msghdr *kmsg, int len)
++{
++	req->flags |= REQ_F_BL_NO_RECYCLE;
++	if (req->flags & REQ_F_BUFFERS_COMMIT)
++		io_kbuf_commit(req, req->buf_list, len, io_bundle_nbufs(kmsg, len));
++	return IOU_RETRY;
++}
++
+ static inline bool io_send_finish(struct io_kiocb *req, int *ret,
+ 				  struct io_async_msghdr *kmsg,
+ 				  unsigned issue_flags)
+@@ -545,8 +554,7 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 			kmsg->msg.msg_controllen = 0;
+ 			kmsg->msg.msg_control = NULL;
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -657,8 +665,7 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
+ 			sr->len -= ret;
+ 			sr->buf += ret;
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1026,8 +1033,7 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
+ 		}
+ 		if (ret > 0 && io_net_retry(sock, flags)) {
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return IOU_RETRY;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1168,8 +1174,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
+ 			sr->len -= ret;
+ 			sr->buf += ret;
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1450,8 +1455,7 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
+ 			zc->len -= ret;
+ 			zc->buf += ret;
+ 			zc->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+@@ -1521,8 +1525,7 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ 		if (ret > 0 && io_net_retry(sock, flags)) {
+ 			sr->done_io += ret;
+-			req->flags |= REQ_F_BL_NO_RECYCLE;
+-			return -EAGAIN;
++			return io_net_kbuf_recyle(req, kmsg, ret);
+ 		}
+ 		if (ret == -ERESTARTSYS)
+ 			ret = -EINTR;
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index f2b31fb6899271..2d5d66e792ecd7 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -55,7 +55,7 @@ int __io_account_mem(struct user_struct *user, unsigned long nr_pages)
+ 	return 0;
+ }
+ 
+-static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ {
+ 	if (ctx->user)
+ 		__io_unaccount_mem(ctx->user, nr_pages);
+@@ -64,7 +64,7 @@ static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ 		atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
+ }
+ 
+-static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
++int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
+ {
+ 	int ret;
+ 
+diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
+index 25e7e998dcfd0a..a3ca6ba66596fb 100644
+--- a/io_uring/rsrc.h
++++ b/io_uring/rsrc.h
+@@ -120,6 +120,8 @@ int io_files_update(struct io_kiocb *req, unsigned int issue_flags);
+ int io_files_update_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
+ 
+ int __io_account_mem(struct user_struct *user, unsigned long nr_pages);
++int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages);
++void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages);
+ 
+ static inline void __io_unaccount_mem(struct user_struct *user,
+ 				      unsigned long nr_pages)
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 710d8cd53ebb74..52a5b950b2e5e9 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -288,7 +288,7 @@ static int __io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 
+ 	rw->addr = READ_ONCE(sqe->addr);
+ 	rw->len = READ_ONCE(sqe->len);
+-	rw->flags = READ_ONCE(sqe->rw_flags);
++	rw->flags = (__force rwf_t) READ_ONCE(sqe->rw_flags);
+ 
+ 	attr_type_mask = READ_ONCE(sqe->attr_type_mask);
+ 	if (attr_type_mask) {
+diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
+index 4a7011c799f013..56b490b27064cc 100644
+--- a/io_uring/zcrx.c
++++ b/io_uring/zcrx.c
+@@ -152,12 +152,29 @@ static int io_zcrx_map_area_dmabuf(struct io_zcrx_ifq *ifq, struct io_zcrx_area
+ 	return niov_idx;
+ }
+ 
++static unsigned long io_count_account_pages(struct page **pages, unsigned nr_pages)
++{
++	struct folio *last_folio = NULL;
++	unsigned long res = 0;
++	int i;
++
++	for (i = 0; i < nr_pages; i++) {
++		struct folio *folio = page_folio(pages[i]);
++
++		if (folio == last_folio)
++			continue;
++		last_folio = folio;
++		res += 1UL << folio_order(folio);
++	}
++	return res;
++}
++
+ static int io_import_umem(struct io_zcrx_ifq *ifq,
+ 			  struct io_zcrx_mem *mem,
+ 			  struct io_uring_zcrx_area_reg *area_reg)
+ {
+ 	struct page **pages;
+-	int nr_pages;
++	int nr_pages, ret;
+ 
+ 	if (area_reg->dmabuf_fd)
+ 		return -EINVAL;
+@@ -168,10 +185,15 @@ static int io_import_umem(struct io_zcrx_ifq *ifq,
+ 	if (IS_ERR(pages))
+ 		return PTR_ERR(pages);
+ 
++	mem->account_pages = io_count_account_pages(pages, nr_pages);
++	ret = io_account_mem(ifq->ctx, mem->account_pages);
++	if (ret < 0)
++		mem->account_pages = 0;
++
+ 	mem->pages = pages;
+ 	mem->nr_folios = nr_pages;
+ 	mem->size = area_reg->len;
+-	return 0;
++	return ret;
+ }
+ 
+ static void io_release_area_mem(struct io_zcrx_mem *mem)
+@@ -370,10 +392,12 @@ static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq)
+ 
+ static void io_zcrx_free_area(struct io_zcrx_area *area)
+ {
+-	if (area->ifq)
+-		io_zcrx_unmap_area(area->ifq, area);
++	io_zcrx_unmap_area(area->ifq, area);
+ 	io_release_area_mem(&area->mem);
+ 
++	if (area->mem.account_pages)
++		io_unaccount_mem(area->ifq->ctx, area->mem.account_pages);
++
+ 	kvfree(area->freelist);
+ 	kvfree(area->nia.niovs);
+ 	kvfree(area->user_refs);
+@@ -401,6 +425,7 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
+ 	area = kzalloc(sizeof(*area), GFP_KERNEL);
+ 	if (!area)
+ 		goto err;
++	area->ifq = ifq;
+ 
+ 	ret = io_import_area(ifq, &area->mem, area_reg);
+ 	if (ret)
+@@ -435,7 +460,6 @@ static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
+ 	}
+ 
+ 	area->free_count = nr_iovs;
+-	area->ifq = ifq;
+ 	/* we're only supporting one area per ifq for now */
+ 	area->area_id = 0;
+ 	area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
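
The new accounting helper charges each distinct folio once, at 2^order pages; it relies on a folio's pages appearing consecutively in the pinned-pages array, so comparing against the previous folio is enough to deduplicate. A standalone analogue with assumed folio ids and orders:

#include <stdio.h>

int main(void)
{
	/* pages 0-3 back one order-2 folio, page 4 its own order-0 folio */
	int folio_id[]    = { 7, 7, 7, 7, 9 };
	int folio_order[] = { 2, 2, 2, 2, 0 };
	int last = -1;
	unsigned long res = 0;

	for (int i = 0; i < 5; i++) {
		if (folio_id[i] == last)
			continue;
		last = folio_id[i];
		res += 1UL << folio_order[i];
	}
	printf("accounted pages: %lu\n", res);	/* 4 + 1 = 5 */
	return 0;
}
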
+diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
+index 2f5e26389f2218..67ed6b179f3d3f 100644
+--- a/io_uring/zcrx.h
++++ b/io_uring/zcrx.h
+@@ -14,6 +14,7 @@ struct io_zcrx_mem {
+ 
+ 	struct page			**pages;
+ 	unsigned long			nr_folios;
++	unsigned long			account_pages;
+ 
+ 	struct dma_buf_attachment	*attach;
+ 	struct dma_buf			*dmabuf;
+diff --git a/kernel/.gitignore b/kernel/.gitignore
+index c6b299a6b7866d..a501bfc8069425 100644
+--- a/kernel/.gitignore
++++ b/kernel/.gitignore
+@@ -1,3 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ /config_data
+ /kheaders.md5
++/kheaders-objlist
++/kheaders-srclist
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 32e80dd626af07..9a9ff405ea89b5 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -158,11 +158,48 @@ filechk_cat = cat $<
+ $(obj)/config_data: $(KCONFIG_CONFIG) FORCE
+ 	$(call filechk,cat)
+ 
++# kheaders_data.tar.xz
+ $(obj)/kheaders.o: $(obj)/kheaders_data.tar.xz
+ 
+-quiet_cmd_genikh = CHK     $(obj)/kheaders_data.tar.xz
+-      cmd_genikh = $(CONFIG_SHELL) $(srctree)/kernel/gen_kheaders.sh $@
+-$(obj)/kheaders_data.tar.xz: FORCE
+-	$(call cmd,genikh)
++quiet_cmd_kheaders_data = GEN     $@
++      cmd_kheaders_data = "$<" "$@" "$(obj)/kheaders-srclist" "$(obj)/kheaders-objlist"
++      cmd_kheaders_data_dep = cat $(depfile) >> $(dot-target).cmd; rm -f $(depfile)
+ 
+-clean-files := kheaders_data.tar.xz kheaders.md5
++define rule_kheaders_data
++	$(call cmd_and_savecmd,kheaders_data)
++	$(call cmd,kheaders_data_dep)
++endef
++
++targets += kheaders_data.tar.xz
++$(obj)/kheaders_data.tar.xz: $(src)/gen_kheaders.sh $(obj)/kheaders-srclist $(obj)/kheaders-objlist $(obj)/kheaders.md5 FORCE
++	$(call if_changed_rule,kheaders_data)
++
++# generated headers in objtree
++#
++# include/generated/utsversion.h is ignored because it is generated
++# after gen_kheaders.sh is executed. (utsversion.h is unneeded for kheaders)
++filechk_kheaders_objlist = \
++	for d in include "arch/$(SRCARCH)/include"; do \
++		find "$${d}/generated" ! -path "include/generated/utsversion.h" -a -name "*.h" -print; \
++	done
++
++$(obj)/kheaders-objlist: FORCE
++	$(call filechk,kheaders_objlist)
++
++# non-generated headers in srctree
++filechk_kheaders_srclist = \
++	for d in include "arch/$(SRCARCH)/include"; do \
++		find "$(srctree)/$${d}" -path "$(srctree)/$${d}/generated" -prune -o -name "*.h" -print; \
++	done
++
++$(obj)/kheaders-srclist: FORCE
++	$(call filechk,kheaders_srclist)
++
++# Some files are symlinks. If symlinks are changed, kheaders_data.tar.xz should
++# be rebuilt.
++filechk_kheaders_md5sum = xargs -r -a $< stat -c %N | md5sum
++
++$(obj)/kheaders.md5: $(obj)/kheaders-srclist FORCE
++	$(call filechk,kheaders_md5sum)
++
++clean-files := kheaders.md5 kheaders-srclist kheaders-objlist
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 97e07eb31fec2f..4fd89659750b25 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -405,7 +405,8 @@ static bool reg_not_null(const struct bpf_reg_state *reg)
+ 		type == PTR_TO_MAP_KEY ||
+ 		type == PTR_TO_SOCK_COMMON ||
+ 		(type == PTR_TO_BTF_ID && is_trusted_reg(reg)) ||
+-		type == PTR_TO_MEM;
++		type == PTR_TO_MEM ||
++		type == CONST_PTR_TO_MAP;
+ }
+ 
+ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+@@ -16028,6 +16029,10 @@ static void regs_refine_cond_op(struct bpf_reg_state *reg1, struct bpf_reg_state
+ 		if (!is_reg_const(reg2, is_jmp32))
+ 			break;
+ 		val = reg_const_value(reg2, is_jmp32);
++		/* Forget the ranges before narrowing tnums, to avoid invariant
++		 * violations if we're on a dead branch.
++		 */
++		__mark_reg_unbounded(reg1);
+ 		if (is_jmp32) {
+ 			t = tnum_and(tnum_subreg(reg1->var_off), tnum_const(~val));
+ 			reg1->var_off = tnum_with_subreg(reg1->var_off, t);
+diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
+index fcd1617212eed0..e3fecc2311aba1 100644
+--- a/kernel/futex/futex.h
++++ b/kernel/futex/futex.h
+@@ -321,13 +321,13 @@ static __always_inline int futex_put_value(u32 val, u32 __user *to)
+ {
+ 	if (can_do_masked_user_access())
+ 		to = masked_user_access_begin(to);
+-	else if (!user_read_access_begin(to, sizeof(*to)))
++	else if (!user_write_access_begin(to, sizeof(*to)))
+ 		return -EFAULT;
+ 	unsafe_put_user(val, to, Efault);
+-	user_read_access_end();
++	user_write_access_end();
+ 	return 0;
+ Efault:
+-	user_read_access_end();
++	user_write_access_end();
+ 	return -EFAULT;
+ }
+ 
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index c9e5dc068e854f..0ff7beabb21a7b 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -4,79 +4,33 @@
+ # This script generates an archive consisting of kernel headers
+ # for CONFIG_IKHEADERS.
+ set -e
+-sfile="$(readlink -f "$0")"
+-outdir="$(pwd)"
+ tarfile=$1
+-tmpdir=$outdir/${tarfile%/*}/.tmp_dir
+-
+-dir_list="
+-include/
+-arch/$SRCARCH/include/
+-"
+-
+-# Support incremental builds by skipping archive generation
+-# if timestamps of files being archived are not changed.
+-
+-# This block is useful for debugging the incremental builds.
+-# Uncomment it for debugging.
+-# if [ ! -f /tmp/iter ]; then iter=1; echo 1 > /tmp/iter;
+-# else iter=$(($(cat /tmp/iter) + 1)); echo $iter > /tmp/iter; fi
+-# find $all_dirs -name "*.h" | xargs ls -l > /tmp/ls-$iter
+-
+-all_dirs=
+-if [ "$building_out_of_srctree" ]; then
+-	for d in $dir_list; do
+-		all_dirs="$all_dirs $srctree/$d"
+-	done
+-fi
+-all_dirs="$all_dirs $dir_list"
+-
+-# include/generated/utsversion.h is ignored because it is generated after this
+-# script is executed. (utsversion.h is unneeded for kheaders)
+-#
+-# When Kconfig regenerates include/generated/autoconf.h, its timestamp is
+-# updated, but the contents might be still the same. When any CONFIG option is
+-# changed, Kconfig touches the corresponding timestamp file include/config/*.
+-# Hence, the md5sum detects the configuration change anyway. We do not need to
+-# check include/generated/autoconf.h explicitly.
+-#
+-# Ignore them for md5 calculation to avoid pointless regeneration.
+-headers_md5="$(find $all_dirs -name "*.h" -a			\
+-		! -path include/generated/utsversion.h -a	\
+-		! -path include/generated/autoconf.h		|
+-		xargs ls -l | md5sum | cut -d ' ' -f1)"
+-
+-# Any changes to this script will also cause a rebuild of the archive.
+-this_file_md5="$(ls -l $sfile | md5sum | cut -d ' ' -f1)"
+-if [ -f $tarfile ]; then tarfile_md5="$(md5sum $tarfile | cut -d ' ' -f1)"; fi
+-if [ -f kernel/kheaders.md5 ] &&
+-	[ "$(head -n 1 kernel/kheaders.md5)" = "$headers_md5" ] &&
+-	[ "$(head -n 2 kernel/kheaders.md5 | tail -n 1)" = "$this_file_md5" ] &&
+-	[ "$(tail -n 1 kernel/kheaders.md5)" = "$tarfile_md5" ]; then
+-		exit
+-fi
+-
+-echo "  GEN     $tarfile"
++srclist=$2
++objlist=$3
++
++dir=$(dirname "${tarfile}")
++tmpdir=${dir}/.tmp_dir
++depfile=${dir}/.$(basename "${tarfile}").d
++
++# generate dependency list.
++{
++	echo
++	echo "deps_${tarfile} := \\"
++	sed 's:\(.*\):  \1 \\:' "${srclist}"
++	sed -n '/^include\/generated\/autoconf\.h$/!s:\(.*\):  \1 \\:p' "${objlist}"
++	echo
++	echo "${tarfile}: \$(deps_${tarfile})"
++	echo
++	echo "\$(deps_${tarfile}):"
++
++} > "${depfile}"
+ 
+ rm -rf "${tmpdir}"
+ mkdir "${tmpdir}"
+ 
+-if [ "$building_out_of_srctree" ]; then
+-	(
+-		cd $srctree
+-		for f in $dir_list
+-			do find "$f" -name "*.h";
+-		done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
+-	)
+-fi
+-
+-for f in $dir_list;
+-	do find "$f" -name "*.h";
+-done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
+-
+-# Always exclude include/generated/utsversion.h
+-# Otherwise, the contents of the tarball may vary depending on the build steps.
+-rm -f "${tmpdir}/include/generated/utsversion.h"
++# shellcheck disable=SC2154 # srctree is passed as an env variable
++sed "s:^${srctree}/::" "${srclist}" | tar -c -f - -C "${srctree}" -T - | tar -xf - -C "${tmpdir}"
++tar -c -f - -T "${objlist}" | tar -xf - -C "${tmpdir}"
+ 
+ # Remove comments except SDPX lines
+ # Use a temporary file to store directory contents to prevent find/xargs from
+@@ -92,8 +46,4 @@ tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
+     --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
+     -I $XZ -cf $tarfile -C "${tmpdir}/" . > /dev/null
+ 
+-echo $headers_md5 > kernel/kheaders.md5
+-echo "$this_file_md5" >> kernel/kheaders.md5
+-echo "$(md5sum $tarfile | cut -d ' ' -f1)" >> kernel/kheaders.md5
+-
+ rm -rf "${tmpdir}"
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 85fc068f0083dd..8d5e87b03d1e4f 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -894,6 +894,7 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(kthread_affine_preferred);
+ 
+ /*
+  * Re-affine kthreads according to their preferences
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 43df45c39f59cb..b62f9ccc50d628 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -751,14 +751,16 @@ SYSCALL_DEFINE2(delete_module, const char __user *, name_user,
+ 	struct module *mod;
+ 	char name[MODULE_NAME_LEN];
+ 	char buf[MODULE_FLAGS_BUF_SIZE];
+-	int ret, forced = 0;
++	int ret, len, forced = 0;
+ 
+ 	if (!capable(CAP_SYS_MODULE) || modules_disabled)
+ 		return -EPERM;
+ 
+-	if (strncpy_from_user(name, name_user, MODULE_NAME_LEN-1) < 0)
+-		return -EFAULT;
+-	name[MODULE_NAME_LEN-1] = '\0';
++	len = strncpy_from_user(name, name_user, MODULE_NAME_LEN);
++	if (len == 0 || len == MODULE_NAME_LEN)
++		return -ENOENT;
++	if (len < 0)
++		return len;
+ 
+ 	audit_log_kern_module(name);
+ 
+diff --git a/kernel/power/console.c b/kernel/power/console.c
+index fcdf0e14a47d47..19c48aa5355d2b 100644
+--- a/kernel/power/console.c
++++ b/kernel/power/console.c
+@@ -16,6 +16,7 @@
+ #define SUSPEND_CONSOLE	(MAX_NR_CONSOLES-1)
+ 
+ static int orig_fgconsole, orig_kmsg;
++static bool vt_switch_done;
+ 
+ static DEFINE_MUTEX(vt_switch_mutex);
+ 
+@@ -136,17 +137,21 @@ void pm_prepare_console(void)
+ 	if (orig_fgconsole < 0)
+ 		return;
+ 
++	vt_switch_done = true;
++
+ 	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
+ 	return;
+ }
+ 
+ void pm_restore_console(void)
+ {
+-	if (!pm_vt_switch())
++	if (!pm_vt_switch() && !vt_switch_done)
+ 		return;
+ 
+ 	if (orig_fgconsole >= 0) {
+ 		vt_move_to_console(orig_fgconsole, 0);
+ 		vt_kmsg_redirect(orig_kmsg);
+ 	}
++
++	vt_switch_done = false;
+ }
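
The vt_switch_done flag added above keeps the prepare/restore paths paired even
when the pm_vt_switch() policy answer changes between suspend and resume. A
minimal userspace model of that pairing, with the policy reduced to a plain
boolean argument (names below are stand-ins, not kernel symbols):

#include <stdbool.h>
#include <stdio.h>

static bool vt_switch_done;	/* "prepare really switched" latch */

static void prepare_console(bool policy_says_switch)
{
	if (!policy_says_switch)
		return;
	vt_switch_done = true;
	printf("switched to suspend console\n");
}

static void restore_console(bool policy_says_switch)
{
	/* restore if policy asks for it OR prepare actually switched */
	if (!policy_says_switch && !vt_switch_done)
		return;
	printf("restored original console\n");
	vt_switch_done = false;
}

int main(void)
{
	/* policy flips between suspend and resume; the latch still
	 * forces the restore instead of leaving the console stuck */
	prepare_console(true);
	restore_console(false);
	return 0;
}
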
+diff --git a/kernel/printk/nbcon.c b/kernel/printk/nbcon.c
+index fd12efcc4aeda8..e7a3af81b17397 100644
+--- a/kernel/printk/nbcon.c
++++ b/kernel/printk/nbcon.c
+@@ -214,8 +214,9 @@ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)
+ 
+ /**
+  * nbcon_context_try_acquire_direct - Try to acquire directly
+- * @ctxt:	The context of the caller
+- * @cur:	The current console state
++ * @ctxt:		The context of the caller
++ * @cur:		The current console state
++ * @is_reacquire:	This acquire is a reacquire
+  *
+  * Acquire the console when it is released. Also acquire the console when
+  * the current owner has a lower priority and the console is in a safe state.
+@@ -225,17 +226,17 @@ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)
+  *
+  * Errors:
+  *
+- *	-EPERM:		A panic is in progress and this is not the panic CPU.
+- *			Or the current owner or waiter has the same or higher
+- *			priority. No acquire method can be successful in
+- *			this case.
++ *	-EPERM:		A panic is in progress and this is neither the panic
++ *			CPU nor is this a reacquire. Or the current owner or
++ *			waiter has the same or higher priority. No acquire
++ *			method can be successful in these cases.
+  *
+  *	-EBUSY:		The current owner has a lower priority but the console
+  *			in an unsafe state. The caller should try using
+  *			the handover acquire method.
+  */
+ static int nbcon_context_try_acquire_direct(struct nbcon_context *ctxt,
+-					    struct nbcon_state *cur)
++					    struct nbcon_state *cur, bool is_reacquire)
+ {
+ 	unsigned int cpu = smp_processor_id();
+ 	struct console *con = ctxt->console;
+@@ -243,14 +244,20 @@ static int nbcon_context_try_acquire_direct(struct nbcon_context *ctxt,
+ 
+ 	do {
+ 		/*
+-		 * Panic does not imply that the console is owned. However, it
+-		 * is critical that non-panic CPUs during panic are unable to
+-		 * acquire ownership in order to satisfy the assumptions of
+-		 * nbcon_waiter_matches(). In particular, the assumption that
+-		 * lower priorities are ignored during panic.
++		 * Panic does not imply that the console is owned. However,
++		 * since all non-panic CPUs are stopped during panic(), it
++		 * is safer to have them avoid gaining console ownership.
++		 *
++		 * If this acquire is a reacquire (and an unsafe takeover
++		 * has not previously occurred) then it is allowed to attempt
++		 * a direct acquire in panic. This gives console drivers an
++		 * opportunity to perform any necessary cleanup if they were
++		 * interrupted by the panic CPU while printing.
+ 		 */
+-		if (other_cpu_in_panic())
++		if (other_cpu_in_panic() &&
++		    (!is_reacquire || cur->unsafe_takeover)) {
+ 			return -EPERM;
++		}
+ 
+ 		if (ctxt->prio <= cur->prio || ctxt->prio <= cur->req_prio)
+ 			return -EPERM;
+@@ -301,8 +308,9 @@ static bool nbcon_waiter_matches(struct nbcon_state *cur, int expected_prio)
+ 	 * Event #1 implies this context is EMERGENCY.
+ 	 * Event #2 implies the new context is PANIC.
+ 	 * Event #3 occurs when panic() has flushed the console.
+-	 * Events #4 and #5 are not possible due to the other_cpu_in_panic()
+-	 * check in nbcon_context_try_acquire_direct().
++	 * Event #4 occurs when a non-panic CPU reacquires.
++	 * Event #5 is not possible due to the other_cpu_in_panic() check
++	 *          in nbcon_context_try_acquire_handover().
+ 	 */
+ 
+ 	return (cur->req_prio == expected_prio);
+@@ -431,6 +439,16 @@ static int nbcon_context_try_acquire_handover(struct nbcon_context *ctxt,
+ 	WARN_ON_ONCE(ctxt->prio <= cur->prio || ctxt->prio <= cur->req_prio);
+ 	WARN_ON_ONCE(!cur->unsafe);
+ 
++	/*
++	 * Panic does not imply that the console is owned. However, it
++	 * is critical that non-panic CPUs during panic are unable to
++	 * wait for a handover in order to satisfy the assumptions of
++	 * nbcon_waiter_matches(). In particular, the assumption that
++	 * lower priorities are ignored during panic.
++	 */
++	if (other_cpu_in_panic())
++		return -EPERM;
++
+ 	/* Handover is not possible on the same CPU. */
+ 	if (cur->cpu == cpu)
+ 		return -EBUSY;
+@@ -558,7 +576,8 @@ static struct printk_buffers panic_nbcon_pbufs;
+ 
+ /**
+  * nbcon_context_try_acquire - Try to acquire nbcon console
+- * @ctxt:	The context of the caller
++ * @ctxt:		The context of the caller
++ * @is_reacquire:	This acquire is a reacquire
+  *
+  * Context:	Under @ctxt->con->device_lock() or local_irq_save().
+  * Return:	True if the console was acquired. False otherwise.
+@@ -568,7 +587,7 @@ static struct printk_buffers panic_nbcon_pbufs;
+  * in an unsafe state. Otherwise, on success the caller may assume
+  * the console is not in an unsafe state.
+  */
+-static bool nbcon_context_try_acquire(struct nbcon_context *ctxt)
++static bool nbcon_context_try_acquire(struct nbcon_context *ctxt, bool is_reacquire)
+ {
+ 	unsigned int cpu = smp_processor_id();
+ 	struct console *con = ctxt->console;
+@@ -577,7 +596,7 @@ static bool nbcon_context_try_acquire(struct nbcon_context *ctxt)
+ 
+ 	nbcon_state_read(con, &cur);
+ try_again:
+-	err = nbcon_context_try_acquire_direct(ctxt, &cur);
++	err = nbcon_context_try_acquire_direct(ctxt, &cur, is_reacquire);
+ 	if (err != -EBUSY)
+ 		goto out;
+ 
+@@ -913,7 +932,7 @@ void nbcon_reacquire_nobuf(struct nbcon_write_context *wctxt)
+ {
+ 	struct nbcon_context *ctxt = &ACCESS_PRIVATE(wctxt, ctxt);
+ 
+-	while (!nbcon_context_try_acquire(ctxt))
++	while (!nbcon_context_try_acquire(ctxt, true))
+ 		cpu_relax();
+ 
+ 	nbcon_write_context_set_buf(wctxt, NULL, 0);
+@@ -1101,7 +1120,7 @@ static bool nbcon_emit_one(struct nbcon_write_context *wctxt, bool use_atomic)
+ 		cant_migrate();
+ 	}
+ 
+-	if (!nbcon_context_try_acquire(ctxt))
++	if (!nbcon_context_try_acquire(ctxt, false))
+ 		goto out;
+ 
+ 	/*
+@@ -1486,7 +1505,7 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
+ 	ctxt->prio			= nbcon_get_default_prio();
+ 	ctxt->allow_unsafe_takeover	= allow_unsafe_takeover;
+ 
+-	if (!nbcon_context_try_acquire(ctxt))
++	if (!nbcon_context_try_acquire(ctxt, false))
+ 		return -EPERM;
+ 
+ 	while (nbcon_seq_read(con) < stop_seq) {
+@@ -1762,7 +1781,7 @@ bool nbcon_device_try_acquire(struct console *con)
+ 	ctxt->console	= con;
+ 	ctxt->prio	= NBCON_PRIO_NORMAL;
+ 
+-	if (!nbcon_context_try_acquire(ctxt))
++	if (!nbcon_context_try_acquire(ctxt, false))
+ 		return false;
+ 
+ 	if (!nbcon_context_enter_unsafe(ctxt))
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 70ec0f21abc3b5..1d4635f8bb43f1 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -464,7 +464,7 @@ rcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)
+ 	    !(torture_random(rrsp) % (nrealreaders * 2000 * longdelay_ms))) {
+ 		started = cur_ops->get_gp_seq();
+ 		ts = rcu_trace_clock_local();
+-		if (preempt_count() & (SOFTIRQ_MASK | HARDIRQ_MASK))
++		if ((preempt_count() & HARDIRQ_MASK) || softirq_count())
+ 			longdelay_ms = 5; /* Avoid triggering BH limits. */
+ 		mdelay(longdelay_ms);
+ 		rtrsp->rt_delay_ms = longdelay_ms;
+@@ -1930,7 +1930,7 @@ static void rcutorture_one_extend_check(char *s, int curstate, int new, int old,
+ 		return;
+ 
+ 	WARN_ONCE((curstate & (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH)) &&
+-		  !(preempt_count() & SOFTIRQ_MASK), ROEC_ARGS);
++		  !softirq_count(), ROEC_ARGS);
+ 	WARN_ONCE((curstate & (RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED)) &&
+ 		  !(preempt_count() & PREEMPT_MASK), ROEC_ARGS);
+ 	WARN_ONCE(cur_ops->readlock_nesting &&
+@@ -1944,7 +1944,7 @@ static void rcutorture_one_extend_check(char *s, int curstate, int new, int old,
+ 
+ 	WARN_ONCE(cur_ops->extendables &&
+ 		  !(curstate & (RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH)) &&
+-		  (preempt_count() & SOFTIRQ_MASK), ROEC_ARGS);
++		  softirq_count(), ROEC_ARGS);
+ 
+ 	/*
+ 	 * non-preemptible RCU in a preemptible kernel uses preempt_disable()
+@@ -1965,6 +1965,9 @@ static void rcutorture_one_extend_check(char *s, int curstate, int new, int old,
+ 	if (!IS_ENABLED(CONFIG_PREEMPT_RCU))
+ 		mask |= RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
+ 
++	if (IS_ENABLED(CONFIG_PREEMPT_RT) && softirq_count())
++		mask |= RCUTORTURE_RDR_BH | RCUTORTURE_RDR_RBH;
++
+ 	WARN_ONCE(cur_ops->readlock_nesting && !(curstate & mask) &&
+ 		  cur_ops->readlock_nesting() > 0, ROEC_ARGS);
+ }
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 14d4499c6fc316..945551eb14b6a3 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4233,6 +4233,8 @@ int rcutree_prepare_cpu(unsigned int cpu)
+ 	rdp->rcu_iw_gp_seq = rdp->gp_seq - 1;
+ 	trace_rcu_grace_period(rcu_state.name, rdp->gp_seq, TPS("cpuonl"));
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
++
++	rcu_preempt_deferred_qs_init(rdp);
+ 	rcu_spawn_rnp_kthreads(rnp);
+ 	rcu_spawn_cpu_nocb_kthread(cpu);
+ 	ASSERT_EXCLUSIVE_WRITER(rcu_state.n_online_cpus);
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index 3830c19cf2f601..b8bbe7960cda7a 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -174,6 +174,17 @@ struct rcu_snap_record {
+ 	unsigned long   jiffies;	/* Track jiffies value */
+ };
+ 
++/*
++ * An IRQ work (deferred_qs_iw) is used by RCU to get the scheduler's attention
++ * to report quiescent states at the soonest possible time.
++ * The request can be in one of the following states:
++ * - DEFER_QS_IDLE: An IRQ work is yet to be scheduled.
++ * - DEFER_QS_PENDING: An IRQ work was scheduled but either not yet run, or it
++ *                     ran and we still haven't reported a quiescent state.
++ */
++#define DEFER_QS_IDLE		0
++#define DEFER_QS_PENDING	1
++
+ /* Per-CPU data for read-copy update. */
+ struct rcu_data {
+ 	/* 1) quiescent-state and grace-period handling : */
+@@ -192,7 +203,7 @@ struct rcu_data {
+ 					/*  during and after the last grace */
+ 					/* period it is aware of. */
+ 	struct irq_work defer_qs_iw;	/* Obtain later scheduler attention. */
+-	bool defer_qs_iw_pending;	/* Scheduler attention pending? */
++	int defer_qs_iw_pending;	/* Scheduler attention pending? */
+ 	struct work_struct strict_work;	/* Schedule readers for strict GPs. */
+ 
+ 	/* 2) batch handling */
+@@ -477,6 +488,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp);
+ static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
+ static void rcu_flavor_sched_clock_irq(int user);
+ static void dump_blkd_tasks(struct rcu_node *rnp, int ncheck);
++static void rcu_preempt_deferred_qs_init(struct rcu_data *rdp);
+ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
+ static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
+ static bool rcu_is_callbacks_kthread(struct rcu_data *rdp);
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 711043e4eb5434..47d1851f23ebb3 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -1146,7 +1146,6 @@ static bool rcu_nocb_rdp_offload_wait_cond(struct rcu_data *rdp)
+ static int rcu_nocb_rdp_offload(struct rcu_data *rdp)
+ {
+ 	int wake_gp;
+-	struct rcu_data *rdp_gp = rdp->nocb_gp_rdp;
+ 
+ 	WARN_ON_ONCE(cpu_online(rdp->cpu));
+ 	/*
+@@ -1156,7 +1155,7 @@ static int rcu_nocb_rdp_offload(struct rcu_data *rdp)
+ 	if (!rdp->nocb_gp_rdp)
+ 		return -EINVAL;
+ 
+-	if (WARN_ON_ONCE(!rdp_gp->nocb_gp_kthread))
++	if (WARN_ON_ONCE(!rdp->nocb_gp_kthread))
+ 		return -EINVAL;
+ 
+ 	pr_info("Offloading %d\n", rdp->cpu);
+@@ -1166,7 +1165,7 @@ static int rcu_nocb_rdp_offload(struct rcu_data *rdp)
+ 
+ 	wake_gp = rcu_nocb_queue_toggle_rdp(rdp);
+ 	if (wake_gp)
+-		wake_up_process(rdp_gp->nocb_gp_kthread);
++		wake_up_process(rdp->nocb_gp_kthread);
+ 
+ 	swait_event_exclusive(rdp->nocb_state_wq,
+ 			      rcu_nocb_rdp_offload_wait_cond(rdp));
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index 0b0f56f6abc853..c99701dfffa9ff 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -486,13 +486,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
+ 	struct rcu_node *rnp;
+ 	union rcu_special special;
+ 
++	rdp = this_cpu_ptr(&rcu_data);
++	if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING)
++		rdp->defer_qs_iw_pending = DEFER_QS_IDLE;
++
+ 	/*
+ 	 * If RCU core is waiting for this CPU to exit its critical section,
+ 	 * report the fact that it has exited.  Because irqs are disabled,
+ 	 * t->rcu_read_unlock_special cannot change.
+ 	 */
+ 	special = t->rcu_read_unlock_special;
+-	rdp = this_cpu_ptr(&rcu_data);
+ 	if (!special.s && !rdp->cpu_no_qs.b.exp) {
+ 		local_irq_restore(flags);
+ 		return;
+@@ -624,10 +627,29 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
+  */
+ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
+ {
++	unsigned long flags;
+ 	struct rcu_data *rdp;
+ 
+ 	rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
+-	rdp->defer_qs_iw_pending = false;
++	local_irq_save(flags);
++
++	/*
++	 * If the IRQ work handler happens to run in the middle of an RCU read-side
++	 * critical section, it could be ineffective in getting the scheduler's
++	 * attention to report a deferred quiescent state (the whole point of the
++	 * IRQ work). For this reason, requeue the IRQ work.
++	 *
++	 * Basically, we want to avoid following situation:
++	 * 1. rcu_read_unlock() queues IRQ work (state -> DEFER_QS_PENDING)
++	 * 2. CPU enters new rcu_read_lock()
++	 * 3. IRQ work runs but cannot report QS due to rcu_preempt_depth() > 0
++	 * 4. rcu_read_unlock() does not re-queue work (state still PENDING)
++	 * 5. Deferred QS reporting does not happen.
++	 */
++	if (rcu_preempt_depth() > 0)
++		WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE);
++
++	local_irq_restore(flags);
+ }
+ 
+ /*
+@@ -673,17 +695,11 @@ static void rcu_read_unlock_special(struct task_struct *t)
+ 			set_tsk_need_resched(current);
+ 			set_preempt_need_resched();
+ 			if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
+-			    expboost && !rdp->defer_qs_iw_pending && cpu_online(rdp->cpu)) {
++			    expboost && rdp->defer_qs_iw_pending != DEFER_QS_PENDING &&
++			    cpu_online(rdp->cpu)) {
+ 				// Get scheduler to re-evaluate and call hooks.
+ 				// If !IRQ_WORK, FQS scan will eventually IPI.
+-				if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
+-				    IS_ENABLED(CONFIG_PREEMPT_RT))
+-					rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(
+-								rcu_preempt_deferred_qs_handler);
+-				else
+-					init_irq_work(&rdp->defer_qs_iw,
+-						      rcu_preempt_deferred_qs_handler);
+-				rdp->defer_qs_iw_pending = true;
++				rdp->defer_qs_iw_pending = DEFER_QS_PENDING;
+ 				irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
+ 			}
+ 		}
+@@ -822,6 +838,10 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
+ 	}
+ }
+ 
++static void rcu_preempt_deferred_qs_init(struct rcu_data *rdp)
++{
++	rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(rcu_preempt_deferred_qs_handler);
++}
+ #else /* #ifdef CONFIG_PREEMPT_RCU */
+ 
+ /*
+@@ -1021,6 +1041,8 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
+ 	WARN_ON_ONCE(!list_empty(&rnp->blkd_tasks));
+ }
+ 
++static void rcu_preempt_deferred_qs_init(struct rcu_data *rdp) { }
++
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
+ 
+ /*
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 65f3b2cc891da6..d86b211f2c141f 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -3249,6 +3249,9 @@ void sched_dl_do_global(void)
+ 	if (global_rt_runtime() != RUNTIME_INF)
+ 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
+ 
++	for_each_possible_cpu(cpu)
++		init_dl_rq_bw_ratio(&cpu_rq(cpu)->dl);
++
+ 	for_each_possible_cpu(cpu) {
+ 		rcu_read_lock_sched();
+ 
+@@ -3264,7 +3267,6 @@ void sched_dl_do_global(void)
+ 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+ 
+ 		rcu_read_unlock_sched();
+-		init_dl_rq_bw_ratio(&cpu_rq(cpu)->dl);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 7a14da5396fb23..042ab0863ccc0f 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -12174,8 +12174,14 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
+ 		/*
+ 		 * Track max cost of a domain to make sure to not delay the
+ 		 * next wakeup on the CPU.
++		 *
++		 * sched_balance_newidle() bumps the cost whenever newidle
++		 * balance fails, and we don't want things to grow out of
++		 * control.  Use the sysctl_sched_migration_cost as the upper
++		 * limit, plus a little extra to avoid off-by-ones.
+ 		 */
+-		sd->max_newidle_lb_cost = cost;
++		sd->max_newidle_lb_cost =
++			min(cost, sysctl_sched_migration_cost + 200);
+ 		sd->last_decay_max_lb_cost = jiffies;
+ 	} else if (time_after(jiffies, sd->last_decay_max_lb_cost + HZ)) {
+ 		/*
+@@ -12867,10 +12873,17 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf)
+ 
+ 			t1 = sched_clock_cpu(this_cpu);
+ 			domain_cost = t1 - t0;
+-			update_newidle_cost(sd, domain_cost);
+-
+ 			curr_cost += domain_cost;
+ 			t0 = t1;
++
++			/*
++			 * Failing newidle means it is not effective;
++			 * bump the cost so we end up doing less of it.
++			 */
++			if (!pulled_task)
++				domain_cost = (3 * sd->max_newidle_lb_cost) / 2;
++
++			update_newidle_cost(sd, domain_cost);
+ 		}
+ 
+ 		/*
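
The clamp above interacts with the failure bump in sched_balance_newidle()
further down: each failed pull reports 3/2 of the current max cost, and
update_newidle_cost() caps what it stores. A standalone numeric sketch of
those dynamics (the 500000 ns figure is the usual sysctl_sched_migration_cost
default, assumed here purely for illustration):

#include <stdio.h>

int main(void)
{
	/* assumed default of sysctl_sched_migration_cost, in ns */
	unsigned long long migration_cost = 500000;
	unsigned long long max_cost = 10000;	/* current domain max */
	int i;

	for (i = 1; i <= 12; i++) {
		/* a failed pull reports 3/2 of the current max ... */
		unsigned long long reported = (3 * max_cost) / 2;

		/* ... and update_newidle_cost() clamps what it stores */
		if (reported > max_cost) {
			unsigned long long cap = migration_cost + 200;

			max_cost = reported < cap ? reported : cap;
		}
		printf("after failure %2d: max_cost = %llu ns\n", i, max_cost);
	}
	return 0;	/* settles at 500200 instead of growing unbounded */
}
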
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index e40422c3703352..507707653312dc 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2951,6 +2951,12 @@ static int sched_rt_handler(const struct ctl_table *table, int write, void *buff
+ 	sched_domains_mutex_unlock();
+ 	mutex_unlock(&mutex);
+ 
++	/*
++	 * After changing the maximum available bandwidth for DEADLINE, we need
++	 * to recompute the per-root-domain and per-CPU variables accordingly.
++	 */
++	rebuild_sched_domains();
++
+ 	return ret;
+ }
+ 
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index ba7ff14f5339b5..f9b3aa9afb1784 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -352,7 +352,7 @@ static void fprobe_return(struct ftrace_graph_ret *trace,
+ 	size_words = SIZE_IN_LONG(size);
+ 	ret_ip = ftrace_regs_get_instruction_pointer(fregs);
+ 
+-	preempt_disable();
++	preempt_disable_notrace();
+ 
+ 	curr = 0;
+ 	while (size_words > curr) {
+@@ -368,7 +368,7 @@ static void fprobe_return(struct ftrace_graph_ret *trace,
+ 		}
+ 		curr += size;
+ 	}
+-	preempt_enable();
++	preempt_enable_notrace();
+ }
+ NOKPROBE_SYMBOL(fprobe_return);
+ 
+diff --git a/kernel/trace/rv/rv_trace.h b/kernel/trace/rv/rv_trace.h
+index 01fa84824bcbe6..f1ea75683fc7b6 100644
+--- a/kernel/trace/rv/rv_trace.h
++++ b/kernel/trace/rv/rv_trace.h
+@@ -129,8 +129,9 @@ DECLARE_EVENT_CLASS(error_da_monitor_id,
+ #endif /* CONFIG_DA_MON_EVENTS_ID */
+ #endif /* _TRACE_RV_H */
+ 
+-/* This part ust be outside protection */
++/* This part must be outside protection */
+ #undef TRACE_INCLUDE_PATH
+ #define TRACE_INCLUDE_PATH .
++#undef TRACE_INCLUDE_FILE
+ #define TRACE_INCLUDE_FILE rv_trace
+ #include <trace/define_trace.h>
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index d3412984170c03..c07e3cd82e29d7 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -208,8 +208,28 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
+ 	return nr;
+ }
+ 
++static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
++					     int index,
++					     unsigned int shallow_depth)
++{
++	u64 shallow_word_depth;
++	unsigned int word_depth, reminder;
++
++	word_depth = __map_depth(sb, index);
++	if (shallow_depth >= sb->depth)
++		return word_depth;
++
++	shallow_word_depth = word_depth * shallow_depth;
++	reminder = do_div(shallow_word_depth, sb->depth);
++
++	if (reminder >= (index + 1) * word_depth)
++		shallow_word_depth++;
++
++	return (unsigned int)shallow_word_depth;
++}
++
+ static int sbitmap_find_bit(struct sbitmap *sb,
+-			    unsigned int depth,
++			    unsigned int shallow_depth,
+ 			    unsigned int index,
+ 			    unsigned int alloc_hint,
+ 			    bool wrap)
+@@ -218,12 +238,12 @@ static int sbitmap_find_bit(struct sbitmap *sb,
+ 	int nr = -1;
+ 
+ 	for (i = 0; i < sb->map_nr; i++) {
+-		nr = sbitmap_find_bit_in_word(&sb->map[index],
+-					      min_t(unsigned int,
+-						    __map_depth(sb, index),
+-						    depth),
+-					      alloc_hint, wrap);
++		unsigned int depth = __map_depth_with_shallow(sb, index,
++							      shallow_depth);
+ 
++		if (depth)
++			nr = sbitmap_find_bit_in_word(&sb->map[index], depth,
++						      alloc_hint, wrap);
+ 		if (nr != -1) {
+ 			nr += index << sb->shift;
+ 			break;
+@@ -406,27 +426,9 @@ EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
+ static unsigned int sbq_calc_wake_batch(struct sbitmap_queue *sbq,
+ 					unsigned int depth)
+ {
+-	unsigned int wake_batch;
+-	unsigned int shallow_depth;
+-
+-	/*
+-	 * Each full word of the bitmap has bits_per_word bits, and there might
+-	 * be a partial word. There are depth / bits_per_word full words and
+-	 * depth % bits_per_word bits left over. In bitwise arithmetic:
+-	 *
+-	 * bits_per_word = 1 << shift
+-	 * depth / bits_per_word = depth >> shift
+-	 * depth % bits_per_word = depth & ((1 << shift) - 1)
+-	 *
+-	 * Each word can be limited to sbq->min_shallow_depth bits.
+-	 */
+-	shallow_depth = min(1U << sbq->sb.shift, sbq->min_shallow_depth);
+-	depth = ((depth >> sbq->sb.shift) * shallow_depth +
+-		 min(depth & ((1U << sbq->sb.shift) - 1), shallow_depth));
+-	wake_batch = clamp_t(unsigned int, depth / SBQ_WAIT_QUEUES, 1,
+-			     SBQ_WAKE_BATCH);
+-
+-	return wake_batch;
++	return clamp_t(unsigned int,
++		       min(depth, sbq->min_shallow_depth) / SBQ_WAIT_QUEUES,
++		       1, SBQ_WAKE_BATCH);
+ }
+ 
+ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
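
The new __map_depth_with_shallow() spreads shallow_depth proportionally over
the bitmap words and hands the division remainder to the leading words, so the
per-word limits add back up to shallow_depth. A userspace sketch of that
arithmetic with invented sizes (do_div() is replaced by plain '%' and '/'):

#include <stdio.h>

/* per-word share of the shallow limit, as in __map_depth_with_shallow() */
static unsigned int shallow_for_word(unsigned int word_depth,
				     unsigned int index,
				     unsigned int total_depth,
				     unsigned int shallow_depth)
{
	unsigned long long scaled;
	unsigned long long remainder;

	if (shallow_depth >= total_depth)
		return word_depth;

	scaled = (unsigned long long)word_depth * shallow_depth;
	remainder = scaled % total_depth;
	scaled /= total_depth;

	/* hand the division remainder to the leading words */
	if (remainder >= (unsigned long long)(index + 1) * word_depth)
		scaled++;

	return (unsigned int)scaled;
}

int main(void)
{
	/* invented geometry: 4 words of 16 bits, shallow limit of 10 */
	unsigned int total = 64, word_depth = 16, shallow = 10;
	unsigned int i, sum = 0;

	for (i = 0; i < 4; i++) {
		unsigned int d = shallow_for_word(word_depth, i, total, shallow);

		printf("word %u: %u bits usable\n", i, d);	/* 3 3 2 2 */
		sum += d;
	}
	printf("sum = %u\n", sum);	/* adds back up to 10 */
	return 0;
}
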
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 339116ea30e30e..d9c4a93b24509c 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -993,6 +993,7 @@ static int damos_commit(struct damos *dst, struct damos *src)
+ 		return err;
+ 
+ 	dst->wmarks = src->wmarks;
++	dst->target_nid = src->target_nid;
+ 
+ 	err = damos_commit_filters(dst, src);
+ 	return err;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index d3e66136e41a31..49b98082c5401b 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1516,10 +1516,9 @@ static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
+ }
+ 
+ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+-		pud_t *pud, pfn_t pfn, bool write)
++		pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+-	pgprot_t prot = vma->vm_page_prot;
+ 	pud_t entry;
+ 
+ 	if (!pud_none(*pud)) {
+@@ -1581,7 +1580,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
+ 	pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
+ 
+ 	ptl = pud_lock(vma->vm_mm, vmf->pud);
+-	insert_pfn_pud(vma, addr, vmf->pud, pfn, write);
++	insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
+ 	spin_unlock(ptl);
+ 
+ 	return VM_FAULT_NOPAGE;
+@@ -1625,7 +1624,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
+ 		add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);
+ 	}
+ 	insert_pfn_pud(vma, addr, vmf->pud, pfn_to_pfn_t(folio_pfn(folio)),
+-		write);
++		       vma->vm_page_prot, write);
+ 	spin_unlock(ptl);
+ 
+ 	return VM_FAULT_NOPAGE;
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 8d588e6853110b..84265983f239c1 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -470,6 +470,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
+ {
+ 	unsigned long flags;
+ 	struct kmemleak_object *object;
++	bool warn = false;
+ 
+ 	/* try the slab allocator first */
+ 	if (object_cache) {
+@@ -488,8 +489,10 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
+ 	else if (mem_pool_free_count)
+ 		object = &mem_pool[--mem_pool_free_count];
+ 	else
+-		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
++		warn = true;
+ 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
++	if (warn)
++		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
+ 
+ 	return object;
+ }
+@@ -2181,6 +2184,7 @@ static const struct file_operations kmemleak_fops = {
+ static void __kmemleak_do_cleanup(void)
+ {
+ 	struct kmemleak_object *object, *tmp;
++	unsigned int cnt = 0;
+ 
+ 	/*
+ 	 * Kmemleak has already been disabled, no need for RCU list traversal
+@@ -2189,6 +2193,10 @@ static void __kmemleak_do_cleanup(void)
+ 	list_for_each_entry_safe(object, tmp, &object_list, object_list) {
+ 		__remove_object(object);
+ 		__delete_object(object);
++
++		/* Call cond_resched() once per 64 iterations to avoid soft lockup */
++		if (!(++cnt & 0x3f))
++			cond_resched();
+ 	}
+ }
+ 
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index 9374f29cdc6f8a..0a6965e2e7fa6b 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -175,6 +175,7 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ {
+ 	const struct ptdump_range *range = st->range;
+ 
++	get_online_mems();
+ 	mmap_write_lock(mm);
+ 	while (range->start != range->end) {
+ 		walk_page_range_novma(mm, range->start, range->end,
+@@ -182,6 +183,7 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
+ 		range++;
+ 	}
+ 	mmap_write_unlock(mm);
++	put_online_mems();
+ 
+ 	/* Flush out the last page */
+ 	st->note_page_flush(st);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index c67dfc17a81920..5758e2e3b1539a 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
+ 				   pgoff_t index, void *expected, gfp_t gfp)
+ {
+ 	XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
+-	long nr = folio_nr_pages(folio);
++	unsigned long nr = folio_nr_pages(folio);
++	swp_entry_t iter, swap;
++	void *entry;
+ 
+ 	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
+ 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+@@ -896,14 +898,25 @@ static int shmem_add_to_page_cache(struct folio *folio,
+ 
+ 	gfp &= GFP_RECLAIM_MASK;
+ 	folio_throttle_swaprate(folio, gfp);
++	swap = radix_to_swp_entry(expected);
+ 
+ 	do {
++		iter = swap;
+ 		xas_lock_irq(&xas);
+-		if (expected != xas_find_conflict(&xas)) {
+-			xas_set_err(&xas, -EEXIST);
+-			goto unlock;
++		xas_for_each_conflict(&xas, entry) {
++			/*
++			 * The range must either be empty, or filled with
++			 * expected swap entries. Shmem swap entries are never
++			 * partially freed without a split of both the entry
++			 * and the folio, so there shouldn't be any holes.
++			 */
++			if (!expected || entry != swp_to_radix_entry(iter)) {
++				xas_set_err(&xas, -EEXIST);
++				goto unlock;
++			}
++			iter.val += 1 << xas_get_order(&xas);
+ 		}
+-		if (expected && xas_find_conflict(&xas)) {
++		if (expected && iter.val - nr != swap.val) {
+ 			xas_set_err(&xas, -EEXIST);
+ 			goto unlock;
+ 		}
+@@ -2327,7 +2340,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+ 			error = -ENOMEM;
+ 			goto failed;
+ 		}
+-	} else if (order != folio_order(folio)) {
++	} else if (order > folio_order(folio)) {
+ 		/*
+ 		 * Swap readahead may swap in order 0 folios into swapcache
+ 		 * asynchronously, while the shmem mapping can still store
+@@ -2352,15 +2365,23 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
+ 
+ 			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+ 		}
++	} else if (order < folio_order(folio)) {
++		swap.val = round_down(swap.val, 1 << folio_order(folio));
++		index = round_down(index, 1 << folio_order(folio));
+ 	}
+ 
+ alloced:
+-	/* We have to do this with folio locked to prevent races */
++	/*
++	 * We have to do this with the folio locked to prevent races.
++	 * The shmem_confirm_swap below only checks if the first swap
++	 * entry matches the folio, that's enough to ensure the folio
++	 * is not used outside of shmem, as shmem swap entries
++	 * and swap cache folios are never partially freed.
++	 */
+ 	folio_lock(folio);
+ 	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
+-	    folio->swap.val != swap.val ||
+ 	    !shmem_confirm_swap(mapping, index, swap) ||
+-	    xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
++	    folio->swap.val != swap.val) {
+ 		error = -EEXIST;
+ 		goto unlock;
+ 	}
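
The xas_for_each_conflict() loop above accepts the range only if every
existing entry carries the expected swap value for its offset and the entries
tile the range with no holes, each advancing the expected value by 2^order. A
userspace model of that validation, with invented orders and values:

#include <stdbool.h>
#include <stdio.h>

struct slot { unsigned long val; unsigned int order; };

static bool range_matches(const struct slot *slots, unsigned int n,
			  unsigned long swap, unsigned long nr)
{
	unsigned long iter = swap;
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (slots[i].val != iter)
			return false;		/* hole or foreign entry */
		iter += 1UL << slots[i].order;	/* skip the entry's span */
	}
	return iter - nr == swap;		/* covered exactly nr slots */
}

int main(void)
{
	/* one order-2 entry (4 slots) followed by an order-0 entry */
	struct slot ok[]  = { { 100, 2 }, { 104, 0 } };
	struct slot bad[] = { { 100, 2 }, { 105, 0 } };

	printf("%d %d\n",
	       range_matches(ok, 2, 100, 5),	/* 1 */
	       range_matches(bad, 2, 100, 5));	/* 0 */
	return 0;
}
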
+diff --git a/mm/slub.c b/mm/slub.c
+index 45a963e363d32b..394646988b1c2a 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4269,7 +4269,12 @@ static void *___kmalloc_large_node(size_t size, gfp_t flags, int node)
+ 		flags = kmalloc_fix_flags(flags);
+ 
+ 	flags |= __GFP_COMP;
+-	folio = (struct folio *)alloc_pages_node_noprof(node, flags, order);
++
++	if (node == NUMA_NO_NODE)
++		folio = (struct folio *)alloc_pages_noprof(flags, order);
++	else
++		folio = (struct folio *)__alloc_pages_noprof(flags, order, node, NULL);
++
+ 	if (folio) {
+ 		ptr = folio_address(folio);
+ 		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 8253978ee0fb11..2f4b98f2c8adf5 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1829,13 +1829,16 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+ 			/* Check if we can move the pmd without splitting it. */
+ 			if (move_splits_huge_pmd(dst_addr, src_addr, src_start + len) ||
+ 			    !pmd_none(dst_pmdval)) {
+-				struct folio *folio = pmd_folio(*src_pmd);
+-
+-				if (!folio || (!is_huge_zero_folio(folio) &&
+-					       !PageAnonExclusive(&folio->page))) {
+-					spin_unlock(ptl);
+-					err = -EBUSY;
+-					break;
++				/* Can be a migration entry */
++				if (pmd_present(*src_pmd)) {
++					struct folio *folio = pmd_folio(*src_pmd);
++
++					if (!is_huge_zero_folio(folio) &&
++					    !PageAnonExclusive(&folio->page)) {
++						spin_unlock(ptl);
++						err = -EBUSY;
++						break;
++					}
+ 				}
+ 
+ 				spin_unlock(ptl);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 4f379184df5b1b..f5cd935490ad97 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -2146,7 +2146,8 @@ struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid,
+ 	struct hci_link *link;
+ 
+ 	/* Look for any BIS that is open for rebinding */
+-	conn = hci_conn_hash_lookup_big_state(hdev, qos->bcast.big, BT_OPEN);
++	conn = hci_conn_hash_lookup_big_state(hdev, qos->bcast.big, BT_OPEN,
++					      HCI_ROLE_MASTER);
+ 	if (conn) {
+ 		memcpy(qos, &conn->iso_qos, sizeof(*qos));
+ 		conn->state = BT_CONNECTED;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index c1dd8d78701fe5..b83995898da098 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6880,7 +6880,8 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 	/* Connect all BISes that are bound to the BIG */
+ 	while ((conn = hci_conn_hash_lookup_big_state(hdev, ev->handle,
+-						      BT_BOUND))) {
++						      BT_BOUND,
++						      HCI_ROLE_MASTER))) {
+ 		if (ev->status) {
+ 			hci_connect_cfm(conn, ev->status);
+ 			hci_conn_del(conn);
+@@ -6996,6 +6997,37 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 	hci_dev_unlock(hdev);
+ }
+ 
++static void hci_le_big_sync_lost_evt(struct hci_dev *hdev, void *data,
++				     struct sk_buff *skb)
++{
++	struct hci_evt_le_big_sync_lost *ev = data;
++	struct hci_conn *bis, *conn;
++
++	bt_dev_dbg(hdev, "big handle 0x%2.2x", ev->handle);
++
++	hci_dev_lock(hdev);
++
++	/* Delete the pa sync connection */
++	bis = hci_conn_hash_lookup_pa_sync_big_handle(hdev, ev->handle);
++	if (bis) {
++		conn = hci_conn_hash_lookup_pa_sync_handle(hdev,
++							   bis->sync_handle);
++		if (conn)
++			hci_conn_del(conn);
++	}
++
++	/* Delete each bis connection */
++	while ((bis = hci_conn_hash_lookup_big_state(hdev, ev->handle,
++						     BT_CONNECTED,
++						     HCI_ROLE_SLAVE))) {
++		clear_bit(HCI_CONN_BIG_SYNC, &bis->flags);
++		hci_disconn_cfm(bis, ev->reason);
++		hci_conn_del(bis);
++	}
++
++	hci_dev_unlock(hdev);
++}
++
+ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
+ 					   struct sk_buff *skb)
+ {
+@@ -7119,6 +7151,11 @@ static const struct hci_le_ev {
+ 		     hci_le_big_sync_established_evt,
+ 		     sizeof(struct hci_evt_le_big_sync_estabilished),
+ 		     HCI_MAX_EVENT_SIZE),
++	/* [0x1e = HCI_EVT_LE_BIG_SYNC_LOST] */
++	HCI_LE_EV_VL(HCI_EVT_LE_BIG_SYNC_LOST,
++		     hci_le_big_sync_lost_evt,
++		     sizeof(struct hci_evt_le_big_sync_lost),
++		     HCI_MAX_EVENT_SIZE),
+ 	/* [0x22 = HCI_EVT_LE_BIG_INFO_ADV_REPORT] */
+ 	HCI_LE_EV_VL(HCI_EVT_LE_BIG_INFO_ADV_REPORT,
+ 		     hci_le_big_info_adv_report_evt,
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 428ee5c7de7ea3..fc866759910d95 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -118,7 +118,7 @@ static void hci_sock_free_cookie(struct sock *sk)
+ 	int id = hci_pi(sk)->cookie;
+ 
+ 	if (id) {
+-		hci_pi(sk)->cookie = 0xffffffff;
++		hci_pi(sk)->cookie = 0;
+ 		ida_free(&sock_cookie_ida, id);
+ 	}
+ }
+diff --git a/net/core/ieee8021q_helpers.c b/net/core/ieee8021q_helpers.c
+index 759a9b9f3f898b..669b357b73b2d7 100644
+--- a/net/core/ieee8021q_helpers.c
++++ b/net/core/ieee8021q_helpers.c
+@@ -7,6 +7,11 @@
+ #include <net/dscp.h>
+ #include <net/ieee8021q.h>
+ 
++/* verify that table covers all 8 traffic types */
++#define TT_MAP_SIZE_OK(tbl)                                 \
++	compiletime_assert(ARRAY_SIZE(tbl) == IEEE8021Q_TT_MAX, \
++			   #tbl " size mismatch")
++
+ /* The following arrays map Traffic Types (TT) to traffic classes (TC) for
+  * different number of queues as shown in the example provided by
+  * IEEE 802.1Q-2022 in Annex I "I.3 Traffic type to traffic class mapping" and
+@@ -101,51 +106,28 @@ int ieee8021q_tt_to_tc(enum ieee8021q_traffic_type tt, unsigned int num_queues)
+ 
+ 	switch (num_queues) {
+ 	case 8:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_8queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_8queue_tt_tc_map != max - 1");
++		TT_MAP_SIZE_OK(ieee8021q_8queue_tt_tc_map);
+ 		return ieee8021q_8queue_tt_tc_map[tt];
+ 	case 7:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_7queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_7queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_7queue_tt_tc_map);
+ 		return ieee8021q_7queue_tt_tc_map[tt];
+ 	case 6:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_6queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_6queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_6queue_tt_tc_map);
+ 		return ieee8021q_6queue_tt_tc_map[tt];
+ 	case 5:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_5queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_5queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_5queue_tt_tc_map);
+ 		return ieee8021q_5queue_tt_tc_map[tt];
+ 	case 4:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_4queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_4queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_4queue_tt_tc_map);
+ 		return ieee8021q_4queue_tt_tc_map[tt];
+ 	case 3:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_3queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_3queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_3queue_tt_tc_map);
+ 		return ieee8021q_3queue_tt_tc_map[tt];
+ 	case 2:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_2queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_2queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_2queue_tt_tc_map);
+ 		return ieee8021q_2queue_tt_tc_map[tt];
+ 	case 1:
+-		compiletime_assert(ARRAY_SIZE(ieee8021q_1queue_tt_tc_map) !=
+-				   IEEE8021Q_TT_MAX - 1,
+-				   "ieee8021q_1queue_tt_tc_map != max - 1");
+-
++		TT_MAP_SIZE_OK(ieee8021q_1queue_tt_tc_map);
+ 		return ieee8021q_1queue_tt_tc_map[tt];
+ 	}
+ 
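
The TT_MAP_SIZE_OK() macro replaces eight hand-rolled compiletime_assert()
calls whose old `!= IEEE8021Q_TT_MAX - 1` conditions only rejected one
particular wrong size; the new macro pins the table size exactly. A portable
C11 analogue of the same check using _Static_assert, with a toy table:

#include <stdio.h>

#define TT_MAX 8

/* pin the table size at compile time, like TT_MAP_SIZE_OK() above */
#define TT_MAP_SIZE_OK(tbl) \
	_Static_assert(sizeof(tbl) / sizeof((tbl)[0]) == TT_MAX, \
		       #tbl " size mismatch")

static const int toy_8queue_map[] = { 7, 6, 5, 4, 3, 2, 1, 0 };
TT_MAP_SIZE_OK(toy_8queue_map);	/* remove an entry and this fails to build */

int main(void)
{
	printf("TT 3 maps to TC %d\n", toy_8queue_map[3]);
	return 0;
}
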
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index ba7cf3e3c32fdc..368412baad2649 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -1201,6 +1201,35 @@ void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
+ 	pool->xdp_mem_id = mem->id;
+ }
+ 
++/**
++ * page_pool_enable_direct_recycling() - mark page pool as owned by NAPI
++ * @pool: page pool to modify
++ * @napi: NAPI instance to associate the page pool with
++ *
++ * Associate a page pool with a NAPI instance for lockless page recycling.
++ * This is useful when a new page pool has to be added to a NAPI instance
++ * without disabling that NAPI instance, to mark the point at which control
++ * path "hands over" the page pool to the NAPI instance. In most cases driver
++ * can simply set the @napi field in struct page_pool_params, and does not
++ * have to call this helper.
++ *
++ * The function is idempotent, but does not implement any refcounting.
++ * A single page_pool_disable_direct_recycling() call will disable recycling,
++ * no matter how many times enable was called.
++ */
++void page_pool_enable_direct_recycling(struct page_pool *pool,
++				       struct napi_struct *napi)
++{
++	if (READ_ONCE(pool->p.napi) == napi)
++		return;
++	WARN_ON(!napi || pool->p.napi);
++
++	mutex_lock(&page_pools_lock);
++	WRITE_ONCE(pool->p.napi, napi);
++	mutex_unlock(&page_pools_lock);
++}
++EXPORT_SYMBOL(page_pool_enable_direct_recycling);
++
+ void page_pool_disable_direct_recycling(struct page_pool *pool)
+ {
+ 	/* Disable direct recycling based on pool->cpuid.
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index bd5d48fdd62ace..7b8e80c4f1d98c 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2586,7 +2586,6 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
+ 	do_cache = true;
+ 	if (type == RTN_BROADCAST) {
+ 		flags |= RTCF_BROADCAST | RTCF_LOCAL;
+-		fi = NULL;
+ 	} else if (type == RTN_MULTICAST) {
+ 		flags |= RTCF_MULTICAST | RTCF_LOCAL;
+ 		if (!ip_check_mc_rcu(in_dev, fl4->daddr, fl4->saddr,
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index e0a6bfa95118ac..eeac86bacdbaaa 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -224,7 +224,7 @@ static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
+ 	remcsum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TUNNEL_REMCSUM);
+ 	skb->remcsum_offload = remcsum;
+ 
+-	need_ipsec = skb_dst(skb) && dst_xfrm(skb_dst(skb));
++	need_ipsec = (skb_dst(skb) && dst_xfrm(skb_dst(skb))) || skb_sec_path(skb);
+ 	/* Try to offload checksum if possible */
+ 	offload_csum = !!(need_csum &&
+ 			  !need_ipsec &&
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 870a0bd6c2bab7..59b8b27272b5bc 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2229,13 +2229,12 @@ void addrconf_dad_failure(struct sk_buff *skb, struct inet6_ifaddr *ifp)
+ 	in6_ifa_put(ifp);
+ }
+ 
+-/* Join to solicited addr multicast group.
+- * caller must hold RTNL */
++/* Join to solicited addr multicast group. */
+ void addrconf_join_solict(struct net_device *dev, const struct in6_addr *addr)
+ {
+ 	struct in6_addr maddr;
+ 
+-	if (dev->flags&(IFF_LOOPBACK|IFF_NOARP))
++	if (READ_ONCE(dev->flags) & (IFF_LOOPBACK | IFF_NOARP))
+ 		return;
+ 
+ 	addrconf_addr_solict_mult(addr, &maddr);
+@@ -3860,7 +3859,7 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ 	 *	   Do not dev_put!
+ 	 */
+ 	if (unregister) {
+-		idev->dead = 1;
++		WRITE_ONCE(idev->dead, 1);
+ 
+ 		/* protected by rtnl_lock */
+ 		RCU_INIT_POINTER(dev->ip6_ptr, NULL);
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 616bf4c0c8fd91..b91538e90c54a7 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -945,23 +945,22 @@ static void inet6_ifmcaddr_notify(struct net_device *dev,
+ static int __ipv6_dev_mc_inc(struct net_device *dev,
+ 			     const struct in6_addr *addr, unsigned int mode)
+ {
+-	struct ifmcaddr6 *mc;
+ 	struct inet6_dev *idev;
+-
+-	ASSERT_RTNL();
++	struct ifmcaddr6 *mc;
+ 
+ 	/* we need to take a reference on idev */
+ 	idev = in6_dev_get(dev);
+-
+ 	if (!idev)
+ 		return -EINVAL;
+ 
+-	if (idev->dead) {
++	mutex_lock(&idev->mc_lock);
++
++	if (READ_ONCE(idev->dead)) {
++		mutex_unlock(&idev->mc_lock);
+ 		in6_dev_put(idev);
+ 		return -ENODEV;
+ 	}
+ 
+-	mutex_lock(&idev->mc_lock);
+ 	for_each_mc_mclock(idev, mc) {
+ 		if (ipv6_addr_equal(&mc->mca_addr, addr)) {
+ 			mc->mca_users++;
+diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
+index 5120a763da0d95..0a0eeaed059101 100644
+--- a/net/ipv6/xfrm6_tunnel.c
++++ b/net/ipv6/xfrm6_tunnel.c
+@@ -334,7 +334,7 @@ static void __net_exit xfrm6_tunnel_net_exit(struct net *net)
+ 	struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
+ 	unsigned int i;
+ 
+-	xfrm_state_flush(net, IPSEC_PROTO_ANY, false);
++	xfrm_state_flush(net, 0, false);
+ 	xfrm_flush_gc();
+ 
+ 	for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index c05047dad62d7e..d0a001ebabfefe 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -430,7 +430,7 @@ static void psock_write_space(struct sock *sk)
+ 
+ 	/* Check if the socket is reserved so someone is waiting for sending. */
+ 	kcm = psock->tx_kcm;
+-	if (kcm && !unlikely(kcm->tx_stopped))
++	if (kcm)
+ 		queue_work(kcm_wq, &kcm->tx_work);
+ 
+ 	spin_unlock_bh(&mux->lock);
+@@ -1694,12 +1694,6 @@ static int kcm_release(struct socket *sock)
+ 	 */
+ 	__skb_queue_purge(&sk->sk_write_queue);
+ 
+-	/* Set tx_stopped. This is checked when psock is bound to a kcm and we
+-	 * get a writespace callback. This prevents further work being queued
+-	 * from the callback (unbinding the psock occurs after canceling work.
+-	 */
+-	kcm->tx_stopped = 1;
+-
+ 	release_sock(sk);
+ 
+ 	spin_lock_bh(&mux->lock);
+@@ -1715,7 +1709,7 @@ static int kcm_release(struct socket *sock)
+ 	/* Cancel work. After this point there should be no outside references
+ 	 * to the kcm socket.
+ 	 */
+-	cancel_work_sync(&kcm->tx_work);
++	disable_work_sync(&kcm->tx_work);
+ 
+ 	lock_sock(sk);
+ 	psock = kcm->tx_psock;
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index 3aaf5abf1acc13..e0fdeaafc4893e 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -1381,6 +1381,7 @@ ieee80211_link_use_reserved_reassign(struct ieee80211_link_data *link)
+ 		goto out;
+ 	}
+ 
++	link->radar_required = link->reserved_radar_required;
+ 	list_move(&link->assigned_chanctx_list, &new_ctx->assigned_links);
+ 	rcu_assign_pointer(link_conf->chanctx_conf, &new_ctx->conf);
+ 
+diff --git a/net/mac80211/ht.c b/net/mac80211/ht.c
+index 32390d8a9d753e..1c82a28b03de4b 100644
+--- a/net/mac80211/ht.c
++++ b/net/mac80211/ht.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright 2017	Intel Deutschland GmbH
+- * Copyright(c) 2020-2024 Intel Corporation
++ * Copyright(c) 2020-2025 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -603,3 +603,41 @@ void ieee80211_request_smps(struct ieee80211_vif *vif, unsigned int link_id,
+ }
+ /* this might change ... don't want non-open drivers using it */
+ EXPORT_SYMBOL_GPL(ieee80211_request_smps);
++
++void ieee80211_ht_handle_chanwidth_notif(struct ieee80211_local *local,
++					 struct ieee80211_sub_if_data *sdata,
++					 struct sta_info *sta,
++					 struct link_sta_info *link_sta,
++					 u8 chanwidth, enum nl80211_band band)
++{
++	enum ieee80211_sta_rx_bandwidth max_bw, new_bw;
++	struct ieee80211_supported_band *sband;
++	struct sta_opmode_info sta_opmode = {};
++
++	lockdep_assert_wiphy(local->hw.wiphy);
++
++	if (chanwidth == IEEE80211_HT_CHANWIDTH_20MHZ)
++		max_bw = IEEE80211_STA_RX_BW_20;
++	else
++		max_bw = ieee80211_sta_cap_rx_bw(link_sta);
++
++	/* set cur_max_bandwidth and recalc sta bw */
++	link_sta->cur_max_bandwidth = max_bw;
++	new_bw = ieee80211_sta_cur_vht_bw(link_sta);
++
++	if (link_sta->pub->bandwidth == new_bw)
++		return;
++
++	link_sta->pub->bandwidth = new_bw;
++	sband = local->hw.wiphy->bands[band];
++	sta_opmode.bw =
++		ieee80211_sta_rx_bw_to_chan_width(link_sta);
++	sta_opmode.changed = STA_OPMODE_MAX_BW_CHANGED;
++
++	rate_control_rate_update(local, sband, link_sta,
++				 IEEE80211_RC_BW_CHANGED);
++	cfg80211_sta_opmode_change_notify(sdata->dev,
++					  sta->addr,
++					  &sta_opmode,
++					  GFP_KERNEL);
++}
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index f71d9eeb8abc80..61439e6efdb74f 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2220,6 +2220,12 @@ u8 ieee80211_mcs_to_chains(const struct ieee80211_mcs_info *mcs);
+ enum nl80211_smps_mode
+ ieee80211_smps_mode_to_smps_mode(enum ieee80211_smps_mode smps);
+ 
++void ieee80211_ht_handle_chanwidth_notif(struct ieee80211_local *local,
++					 struct ieee80211_sub_if_data *sdata,
++					 struct sta_info *sta,
++					 struct link_sta_info *link_sta,
++					 u8 chanwidth, enum nl80211_band band);
++
+ /* VHT */
+ void
+ ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata,
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index c01634fdba789a..851d399fca1350 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1556,6 +1556,35 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
+ 				break;
+ 			}
+ 		}
++	} else if (ieee80211_is_action(mgmt->frame_control) &&
++		   mgmt->u.action.category == WLAN_CATEGORY_HT) {
++		switch (mgmt->u.action.u.ht_smps.action) {
++		case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: {
++			u8 chanwidth = mgmt->u.action.u.ht_notify_cw.chanwidth;
++			struct ieee80211_rx_status *status;
++			struct link_sta_info *link_sta;
++			struct sta_info *sta;
++
++			sta = sta_info_get_bss(sdata, mgmt->sa);
++			if (!sta)
++				break;
++
++			status = IEEE80211_SKB_RXCB(skb);
++			if (!status->link_valid)
++				link_sta = &sta->deflink;
++			else
++				link_sta = rcu_dereference_protected(sta->link[status->link_id],
++							lockdep_is_held(&local->hw.wiphy->mtx));
++			if (link_sta)
++				ieee80211_ht_handle_chanwidth_notif(local, sdata, sta,
++								    link_sta, chanwidth,
++								    status->band);
++			break;
++		}
++		default:
++			WARN_ON(1);
++			break;
++		}
+ 	} else if (ieee80211_is_action(mgmt->frame_control) &&
+ 		   mgmt->u.action.category == WLAN_CATEGORY_VHT) {
+ 		switch (mgmt->u.action.u.vht_group_notif.action_code) {
+diff --git a/net/mac80211/link.c b/net/mac80211/link.c
+index 4f7b7d0f64f24b..d71eabe5abf8d8 100644
+--- a/net/mac80211/link.c
++++ b/net/mac80211/link.c
+@@ -2,7 +2,7 @@
+ /*
+  * MLO link handling
+  *
+- * Copyright (C) 2022-2024 Intel Corporation
++ * Copyright (C) 2022-2025 Intel Corporation
+  */
+ #include <linux/slab.h>
+ #include <linux/kernel.h>
+@@ -368,6 +368,13 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
+ 			ieee80211_update_apvlan_links(sdata);
+ 	}
+ 
++	/*
++	 * Ignore errors if we are only removing links as removal should
++	 * always succeed
++	 */
++	if (!new_links)
++		ret = 0;
++
+ 	if (ret) {
+ 		/* restore config */
+ 		memcpy(sdata->link, old_data, sizeof(old_data));
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 0ed68182f79b5e..006d02dce94923 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1218,18 +1218,36 @@ EXPORT_SYMBOL_IF_MAC80211_KUNIT(ieee80211_determine_chan_mode);
+ 
+ static int ieee80211_config_bw(struct ieee80211_link_data *link,
+ 			       struct ieee802_11_elems *elems,
+-			       bool update, u64 *changed,
+-			       const char *frame)
++			       bool update, u64 *changed, u16 stype)
+ {
+ 	struct ieee80211_channel *channel = link->conf->chanreq.oper.chan;
+ 	struct ieee80211_sub_if_data *sdata = link->sdata;
+ 	struct ieee80211_chan_req chanreq = {};
+ 	struct cfg80211_chan_def ap_chandef;
+ 	enum ieee80211_conn_mode ap_mode;
++	const char *frame;
+ 	u32 vht_cap_info = 0;
+ 	u16 ht_opmode;
+ 	int ret;
+ 
++	switch (stype) {
++	case IEEE80211_STYPE_BEACON:
++		frame = "beacon";
++		break;
++	case IEEE80211_STYPE_ASSOC_RESP:
++		frame = "assoc response";
++		break;
++	case IEEE80211_STYPE_REASSOC_RESP:
++		frame = "reassoc response";
++		break;
++	case IEEE80211_STYPE_ACTION:
++		/* the only action frame that gets here */
++		frame = "ML reconf response";
++		break;
++	default:
++		return -EINVAL;
++	}
++
+ 	/* don't track any bandwidth changes in legacy/S1G modes */
+ 	if (link->u.mgd.conn.mode == IEEE80211_CONN_MODE_LEGACY ||
+ 	    link->u.mgd.conn.mode == IEEE80211_CONN_MODE_S1G)
+@@ -1278,7 +1296,9 @@ static int ieee80211_config_bw(struct ieee80211_link_data *link,
+ 			ieee80211_min_bw_limit_from_chandef(&chanreq.oper))
+ 		ieee80211_chandef_downgrade(&chanreq.oper, NULL);
+ 
+-	if (ap_chandef.chan->band == NL80211_BAND_6GHZ &&
++	/* TPE element is not present in (re)assoc/ML reconfig response */
++	if (stype == IEEE80211_STYPE_BEACON &&
++	    ap_chandef.chan->band == NL80211_BAND_6GHZ &&
+ 	    link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_HE) {
+ 		ieee80211_rearrange_tpe(&elems->tpe, &ap_chandef,
+ 					&chanreq.oper);
+@@ -2515,7 +2535,8 @@ ieee80211_sta_abort_chanswitch(struct ieee80211_link_data *link)
+ 	if (!local->ops->abort_channel_switch)
+ 		return;
+ 
+-	ieee80211_link_unreserve_chanctx(link);
++	if (rcu_access_pointer(link->conf->chanctx_conf))
++		ieee80211_link_unreserve_chanctx(link);
+ 
+ 	ieee80211_vif_unblock_queues_csa(sdata);
+ 
+@@ -4737,6 +4758,7 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 	struct ieee80211_prep_tx_info info = {
+ 		.subtype = IEEE80211_STYPE_AUTH,
+ 	};
++	bool sae_need_confirm = false;
+ 
+ 	lockdep_assert_wiphy(sdata->local->hw.wiphy);
+ 
+@@ -4782,6 +4804,8 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 				jiffies + IEEE80211_AUTH_WAIT_SAE_RETRY;
+ 			ifmgd->auth_data->timeout_started = true;
+ 			run_again(sdata, ifmgd->auth_data->timeout);
++			if (auth_transaction == 1)
++				sae_need_confirm = true;
+ 			goto notify_driver;
+ 		}
+ 
+@@ -4824,6 +4848,9 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 	     ifmgd->auth_data->expected_transaction == 2)) {
+ 		if (!ieee80211_mark_sta_auth(sdata))
+ 			return; /* ignore frame -- wait for timeout */
++	} else if (ifmgd->auth_data->algorithm == WLAN_AUTH_SAE &&
++		   auth_transaction == 1) {
++		sae_need_confirm = true;
+ 	} else if (ifmgd->auth_data->algorithm == WLAN_AUTH_SAE &&
+ 		   auth_transaction == 2) {
+ 		sdata_info(sdata, "SAE peer confirmed\n");
+@@ -4832,7 +4859,8 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
+ 
+ 	cfg80211_rx_mlme_mgmt(sdata->dev, (u8 *)mgmt, len);
+ notify_driver:
+-	drv_mgd_complete_tx(sdata->local, sdata, &info);
++	if (!sae_need_confirm)
++		drv_mgd_complete_tx(sdata->local, sdata, &info);
+ }
+ 
+ #define case_WLAN(type) \
+@@ -5294,7 +5322,9 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ 	/* check/update if AP changed anything in assoc response vs. scan */
+ 	if (ieee80211_config_bw(link, elems,
+ 				link_id == assoc_data->assoc_link_id,
+-				changed, "assoc response")) {
++				changed,
++				le16_to_cpu(mgmt->frame_control) &
++					IEEE80211_FCTL_STYPE)) {
+ 		ret = false;
+ 		goto out;
+ 	}
+@@ -7482,7 +7512,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 
+ 	changed |= ieee80211_recalc_twt_req(sdata, sband, link, link_sta, elems);
+ 
+-	if (ieee80211_config_bw(link, elems, true, &changed, "beacon")) {
++	if (ieee80211_config_bw(link, elems, true, &changed,
++				IEEE80211_STYPE_BEACON)) {
+ 		ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ 				       WLAN_REASON_DEAUTH_LEAVING,
+ 				       true, deauth_buf);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index e73431549ce77e..7b801dd3f569a2 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -3576,41 +3576,18 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx)
+ 			goto handled;
+ 		}
+ 		case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: {
+-			struct ieee80211_supported_band *sband;
+ 			u8 chanwidth = mgmt->u.action.u.ht_notify_cw.chanwidth;
+-			enum ieee80211_sta_rx_bandwidth max_bw, new_bw;
+-			struct sta_opmode_info sta_opmode = {};
++
++			if (chanwidth != IEEE80211_HT_CHANWIDTH_20MHZ &&
++			    chanwidth != IEEE80211_HT_CHANWIDTH_ANY)
++				goto invalid;
+ 
+ 			/* If it doesn't support 40 MHz it can't change ... */
+ 			if (!(rx->link_sta->pub->ht_cap.cap &
+-					IEEE80211_HT_CAP_SUP_WIDTH_20_40))
+-				goto handled;
+-
+-			if (chanwidth == IEEE80211_HT_CHANWIDTH_20MHZ)
+-				max_bw = IEEE80211_STA_RX_BW_20;
+-			else
+-				max_bw = ieee80211_sta_cap_rx_bw(rx->link_sta);
+-
+-			/* set cur_max_bandwidth and recalc sta bw */
+-			rx->link_sta->cur_max_bandwidth = max_bw;
+-			new_bw = ieee80211_sta_cur_vht_bw(rx->link_sta);
+-
+-			if (rx->link_sta->pub->bandwidth == new_bw)
++				IEEE80211_HT_CAP_SUP_WIDTH_20_40))
+ 				goto handled;
+ 
+-			rx->link_sta->pub->bandwidth = new_bw;
+-			sband = rx->local->hw.wiphy->bands[status->band];
+-			sta_opmode.bw =
+-				ieee80211_sta_rx_bw_to_chan_width(rx->link_sta);
+-			sta_opmode.changed = STA_OPMODE_MAX_BW_CHANGED;
+-
+-			rate_control_rate_update(local, sband, rx->link_sta,
+-						 IEEE80211_RC_BW_CHANGED);
+-			cfg80211_sta_opmode_change_notify(sdata->dev,
+-							  rx->sta->addr,
+-							  &sta_opmode,
+-							  GFP_ATOMIC);
+-			goto handled;
++			goto queue;
+ 		}
+ 		default:
+ 			goto invalid;
+@@ -4234,10 +4211,16 @@ static bool ieee80211_rx_data_set_sta(struct ieee80211_rx_data *rx,
+ 		rx->link_sta = NULL;
+ 	}
+ 
+-	if (link_id < 0)
+-		rx->link = &rx->sdata->deflink;
+-	else if (!ieee80211_rx_data_set_link(rx, link_id))
++	if (link_id < 0) {
++		if (ieee80211_vif_is_mld(&rx->sdata->vif) &&
++		    sta && !sta->sta.valid_links)
++			rx->link =
++				rcu_dereference(rx->sdata->link[sta->deflink.link_id]);
++		else
++			rx->link = &rx->sdata->deflink;
++	} else if (!ieee80211_rx_data_set_link(rx, link_id)) {
+ 		return false;
++	}
+ 
+ 	return true;
+ }
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index 9b12ca97f41282..9d5db3feedec57 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -73,7 +73,6 @@ static int mctp_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
+ 
+ 	lock_sock(sk);
+ 
+-	/* TODO: allow rebind */
+ 	if (sk_hashed(sk)) {
+ 		rc = -EADDRINUSE;
+ 		goto out_release;
+@@ -629,15 +628,36 @@ static void mctp_sk_close(struct sock *sk, long timeout)
+ static int mctp_sk_hash(struct sock *sk)
+ {
+ 	struct net *net = sock_net(sk);
++	struct sock *existing;
++	struct mctp_sock *msk;
++	int rc;
++
++	msk = container_of(sk, struct mctp_sock, sk);
+ 
+ 	/* Bind lookup runs under RCU, remain live during that. */
+ 	sock_set_flag(sk, SOCK_RCU_FREE);
+ 
+ 	mutex_lock(&net->mctp.bind_lock);
++
++	/* Prevent duplicate binds. */
++	sk_for_each(existing, &net->mctp.binds) {
++		struct mctp_sock *mex =
++			container_of(existing, struct mctp_sock, sk);
++
++		if (mex->bind_type == msk->bind_type &&
++		    mex->bind_addr == msk->bind_addr &&
++		    mex->bind_net == msk->bind_net) {
++			rc = -EADDRINUSE;
++			goto out;
++		}
++	}
++
+ 	sk_add_node_rcu(sk, &net->mctp.binds);
+-	mutex_unlock(&net->mctp.bind_lock);
++	rc = 0;
+ 
+-	return 0;
++out:
++	mutex_unlock(&net->mctp.bind_lock);
++	return rc;
+ }
+ 
+ static void mctp_sk_unhash(struct sock *sk)
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index e76c6de0c78448..adee6dcabdc3fe 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -110,7 +110,7 @@ struct ncsi_channel_version {
+ 	u8   update;		/* NCSI version update */
+ 	char alpha1;		/* NCSI version alpha1 */
+ 	char alpha2;		/* NCSI version alpha2 */
+-	u8  fw_name[12];	/* Firmware name string                */
++	u8  fw_name[12 + 1];	/* Firmware name string                */
+ 	u32 fw_version;		/* Firmware version                   */
+ 	u16 pci_ids[4];		/* PCI identification                 */
+ 	u32 mf_id;		/* Manufacture ID                     */
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 472cc68ad86f2f..271ec6c3929e85 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -775,6 +775,7 @@ static int ncsi_rsp_handler_gvi(struct ncsi_request *nr)
+ 	ncv->alpha1 = rsp->alpha1;
+ 	ncv->alpha2 = rsp->alpha2;
+ 	memcpy(ncv->fw_name, rsp->fw_name, 12);
++	ncv->fw_name[12] = '\0';
+ 	ncv->fw_version = ntohl(rsp->fw_version);
+ 	for (i = 0; i < ARRAY_SIZE(ncv->pci_ids); i++)
+ 		ncv->pci_ids[i] = ntohs(rsp->pci_ids[i]);
+diff --git a/net/netfilter/ipvs/ip_vs_est.c b/net/netfilter/ipvs/ip_vs_est.c
+index f821ad2e19b35e..15049b82673272 100644
+--- a/net/netfilter/ipvs/ip_vs_est.c
++++ b/net/netfilter/ipvs/ip_vs_est.c
+@@ -265,7 +265,8 @@ int ip_vs_est_kthread_start(struct netns_ipvs *ipvs,
+ 	}
+ 
+ 	set_user_nice(kd->task, sysctl_est_nice(ipvs));
+-	set_cpus_allowed_ptr(kd->task, sysctl_est_cpulist(ipvs));
++	if (sysctl_est_preferred_cpulist(ipvs))
++		kthread_affine_preferred(kd->task, sysctl_est_preferred_cpulist(ipvs));
+ 
+ 	pr_info("starting estimator thread %d...\n", kd->id);
+ 	wake_up_process(kd->task);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 2cc0fde233447e..2273ead8102f83 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -884,8 +884,6 @@ ctnetlink_conntrack_event(unsigned int events, const struct nf_ct_event *item)
+ 
+ static int ctnetlink_done(struct netlink_callback *cb)
+ {
+-	if (cb->args[1])
+-		nf_ct_put((struct nf_conn *)cb->args[1]);
+ 	kfree(cb->data);
+ 	return 0;
+ }
+@@ -1208,19 +1206,26 @@ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ 	return 0;
+ }
+ 
++static unsigned long ctnetlink_get_id(const struct nf_conn *ct)
++{
++	unsigned long id = nf_ct_get_id(ct);
++
++	return id ? id : 1;
++}
++
+ static int
+ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ 	unsigned int flags = cb->data ? NLM_F_DUMP_FILTERED : 0;
+ 	struct net *net = sock_net(skb->sk);
+-	struct nf_conn *ct, *last;
++	unsigned long last_id = cb->args[1];
+ 	struct nf_conntrack_tuple_hash *h;
+ 	struct hlist_nulls_node *n;
+ 	struct nf_conn *nf_ct_evict[8];
++	struct nf_conn *ct;
+ 	int res, i;
+ 	spinlock_t *lockp;
+ 
+-	last = (struct nf_conn *)cb->args[1];
+ 	i = 0;
+ 
+ 	local_bh_disable();
+@@ -1257,7 +1262,7 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 				continue;
+ 
+ 			if (cb->args[1]) {
+-				if (ct != last)
++				if (ctnetlink_get_id(ct) != last_id)
+ 					continue;
+ 				cb->args[1] = 0;
+ 			}
+@@ -1270,8 +1275,7 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 					    NFNL_MSG_TYPE(cb->nlh->nlmsg_type),
+ 					    ct, true, flags);
+ 			if (res < 0) {
+-				nf_conntrack_get(&ct->ct_general);
+-				cb->args[1] = (unsigned long)ct;
++				cb->args[1] = ctnetlink_get_id(ct);
+ 				spin_unlock(lockp);
+ 				goto out;
+ 			}
+@@ -1284,12 +1288,10 @@ ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 	}
+ out:
+ 	local_bh_enable();
+-	if (last) {
++	if (last_id) {
+ 		/* nf ct hash resize happened, now clear the leftover. */
+-		if ((struct nf_conn *)cb->args[1] == last)
++		if (cb->args[1] == last_id)
+ 			cb->args[1] = 0;
+-
+-		nf_ct_put(last);
+ 	}
+ 
+ 	while (i) {
+@@ -3169,23 +3171,27 @@ ctnetlink_expect_event(unsigned int events, const struct nf_exp_event *item)
+ 	return 0;
+ }
+ #endif
+-static int ctnetlink_exp_done(struct netlink_callback *cb)
++
++static unsigned long ctnetlink_exp_id(const struct nf_conntrack_expect *exp)
+ {
+-	if (cb->args[1])
+-		nf_ct_expect_put((struct nf_conntrack_expect *)cb->args[1]);
+-	return 0;
++	unsigned long id = (unsigned long)exp;
++
++	id += nf_ct_get_id(exp->master);
++	id += exp->class;
++
++	return id ? id : 1;
+ }
+ 
+ static int
+ ctnetlink_exp_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+ 	struct net *net = sock_net(skb->sk);
+-	struct nf_conntrack_expect *exp, *last;
+ 	struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh);
+ 	u_int8_t l3proto = nfmsg->nfgen_family;
++	unsigned long last_id = cb->args[1];
++	struct nf_conntrack_expect *exp;
+ 
+ 	rcu_read_lock();
+-	last = (struct nf_conntrack_expect *)cb->args[1];
+ 	for (; cb->args[0] < nf_ct_expect_hsize; cb->args[0]++) {
+ restart:
+ 		hlist_for_each_entry_rcu(exp, &nf_ct_expect_hash[cb->args[0]],
+@@ -3197,7 +3203,7 @@ ctnetlink_exp_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 				continue;
+ 
+ 			if (cb->args[1]) {
+-				if (exp != last)
++				if (ctnetlink_exp_id(exp) != last_id)
+ 					continue;
+ 				cb->args[1] = 0;
+ 			}
+@@ -3206,9 +3212,7 @@ ctnetlink_exp_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 						    cb->nlh->nlmsg_seq,
+ 						    IPCTNL_MSG_EXP_NEW,
+ 						    exp) < 0) {
+-				if (!refcount_inc_not_zero(&exp->use))
+-					continue;
+-				cb->args[1] = (unsigned long)exp;
++				cb->args[1] = ctnetlink_exp_id(exp);
+ 				goto out;
+ 			}
+ 		}
+@@ -3219,32 +3223,30 @@ ctnetlink_exp_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 	}
+ out:
+ 	rcu_read_unlock();
+-	if (last)
+-		nf_ct_expect_put(last);
+-
+ 	return skb->len;
+ }
+ 
+ static int
+ ctnetlink_exp_ct_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ {
+-	struct nf_conntrack_expect *exp, *last;
+ 	struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh);
+ 	struct nf_conn *ct = cb->data;
+ 	struct nf_conn_help *help = nfct_help(ct);
+ 	u_int8_t l3proto = nfmsg->nfgen_family;
++	unsigned long last_id = cb->args[1];
++	struct nf_conntrack_expect *exp;
+ 
+ 	if (cb->args[0])
+ 		return 0;
+ 
+ 	rcu_read_lock();
+-	last = (struct nf_conntrack_expect *)cb->args[1];
++
+ restart:
+ 	hlist_for_each_entry_rcu(exp, &help->expectations, lnode) {
+ 		if (l3proto && exp->tuple.src.l3num != l3proto)
+ 			continue;
+ 		if (cb->args[1]) {
+-			if (exp != last)
++			if (ctnetlink_exp_id(exp) != last_id)
+ 				continue;
+ 			cb->args[1] = 0;
+ 		}
+@@ -3252,9 +3254,7 @@ ctnetlink_exp_ct_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 					    cb->nlh->nlmsg_seq,
+ 					    IPCTNL_MSG_EXP_NEW,
+ 					    exp) < 0) {
+-			if (!refcount_inc_not_zero(&exp->use))
+-				continue;
+-			cb->args[1] = (unsigned long)exp;
++			cb->args[1] = ctnetlink_exp_id(exp);
+ 			goto out;
+ 		}
+ 	}
+@@ -3265,9 +3265,6 @@ ctnetlink_exp_ct_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
+ 	cb->args[0] = 1;
+ out:
+ 	rcu_read_unlock();
+-	if (last)
+-		nf_ct_expect_put(last);
+-
+ 	return skb->len;
+ }
+ 
+@@ -3286,7 +3283,6 @@ static int ctnetlink_dump_exp_ct(struct net *net, struct sock *ctnl,
+ 	struct nf_conntrack_zone zone;
+ 	struct netlink_dump_control c = {
+ 		.dump = ctnetlink_exp_ct_dump_table,
+-		.done = ctnetlink_exp_done,
+ 	};
+ 
+ 	err = ctnetlink_parse_tuple(cda, &tuple, CTA_EXPECT_MASTER,
+@@ -3336,7 +3332,6 @@ static int ctnetlink_get_expect(struct sk_buff *skb,
+ 		else {
+ 			struct netlink_dump_control c = {
+ 				.dump = ctnetlink_exp_dump_table,
+-				.done = ctnetlink_exp_done,
+ 			};
+ 			return netlink_dump_start(info->sk, skb, info->nlh, &c);
+ 		}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 064f18792d9847..46ca725d653819 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2790,6 +2790,7 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 	struct nft_chain *chain = ctx->chain;
+ 	struct nft_chain_hook hook = {};
+ 	struct nft_stats __percpu *stats = NULL;
++	struct nftables_pernet *nft_net;
+ 	struct nft_hook *h, *next;
+ 	struct nf_hook_ops *ops;
+ 	struct nft_trans *trans;
+@@ -2832,6 +2833,20 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ 				if (nft_hook_list_find(&basechain->hook_list, h)) {
+ 					list_del(&h->list);
+ 					nft_netdev_hook_free(h);
++					continue;
++				}
++
++				nft_net = nft_pernet(ctx->net);
++				list_for_each_entry(trans, &nft_net->commit_list, list) {
++					if (trans->msg_type != NFT_MSG_NEWCHAIN ||
++					    trans->table != ctx->table ||
++					    !nft_trans_chain_update(trans))
++						continue;
++
++					if (nft_hook_list_find(&nft_trans_chain_hooks(trans), h)) {
++						nft_chain_release_hook(&hook);
++						return -EEXIST;
++					}
+ 				}
+ 			}
+ 		} else {
+@@ -9033,6 +9048,7 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ {
+ 	const struct nlattr * const *nla = ctx->nla;
+ 	struct nft_flowtable_hook flowtable_hook;
++	struct nftables_pernet *nft_net;
+ 	struct nft_hook *hook, *next;
+ 	struct nf_hook_ops *ops;
+ 	struct nft_trans *trans;
+@@ -9049,6 +9065,20 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+ 		if (nft_hook_list_find(&flowtable->hook_list, hook)) {
+ 			list_del(&hook->list);
+ 			nft_netdev_hook_free(hook);
++			continue;
++		}
++
++		nft_net = nft_pernet(ctx->net);
++		list_for_each_entry(trans, &nft_net->commit_list, list) {
++			if (trans->msg_type != NFT_MSG_NEWFLOWTABLE ||
++			    trans->table != ctx->table ||
++			    !nft_trans_flowtable_update(trans))
++				continue;
++
++			if (nft_hook_list_find(&nft_trans_flowtable_hooks(trans), hook)) {
++				err = -EEXIST;
++				goto err_flowtable_update_hook;
++			}
+ 		}
+ 	}
+ 
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index c5855069bdaba0..9e4e25f2458f99 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1219,7 +1219,7 @@ static void pipapo_free_scratch(const struct nft_pipapo_match *m, unsigned int c
+ 
+ 	mem = s;
+ 	mem -= s->align_off;
+-	kfree(mem);
++	kvfree(mem);
+ }
+ 
+ /**
+@@ -1240,10 +1240,9 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
+ 		void *scratch_aligned;
+ 		u32 align_off;
+ #endif
+-		scratch = kzalloc_node(struct_size(scratch, map,
+-						   bsize_max * 2) +
+-				       NFT_PIPAPO_ALIGN_HEADROOM,
+-				       GFP_KERNEL_ACCOUNT, cpu_to_node(i));
++		scratch = kvzalloc_node(struct_size(scratch, map, bsize_max * 2) +
++					NFT_PIPAPO_ALIGN_HEADROOM,
++					GFP_KERNEL_ACCOUNT, cpu_to_node(i));
+ 		if (!scratch) {
+ 			/* On failure, there's no need to undo previous
+ 			 * allocations: this means that some scratch maps have
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 6332a0e0659675..0fc3f045fb65a3 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1218,7 +1218,7 @@ int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
+ 	nlk = nlk_sk(sk);
+ 	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
+ 
+-	if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) &&
++	if ((rmem == skb->truesize || rmem <= READ_ONCE(sk->sk_rcvbuf)) &&
+ 	    !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
+ 		netlink_skb_set_owner_r(skb, sk);
+ 		return 0;
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 037f764822b965..82635dd2cfa59f 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -651,6 +651,12 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	sch_tree_lock(sch);
+ 
++	for (i = nbands; i < oldbands; i++) {
++		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
++			list_del_init(&q->classes[i].alist);
++		qdisc_purge_queue(q->classes[i].qdisc);
++	}
++
+ 	WRITE_ONCE(q->nbands, nbands);
+ 	for (i = nstrict; i < q->nstrict; i++) {
+ 		if (q->classes[i].qdisc->q.qlen) {
+@@ -658,11 +664,6 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 			q->classes[i].deficit = quanta[i];
+ 		}
+ 	}
+-	for (i = q->nbands; i < oldbands; i++) {
+-		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+-			list_del_init(&q->classes[i].alist);
+-		qdisc_purge_queue(q->classes[i].qdisc);
+-	}
+ 	WRITE_ONCE(q->nstrict, nstrict);
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+ 
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 0c0d2757f6f8df..6fcdcaeed40e97 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -117,7 +117,7 @@ int sctp_rcv(struct sk_buff *skb)
+ 	 * it's better to just linearize it otherwise crc computing
+ 	 * takes longer.
+ 	 */
+-	if ((!is_gso && skb_linearize(skb)) ||
++	if (((!is_gso || skb_cloned(skb)) && skb_linearize(skb)) ||
+ 	    !pskb_may_pull(skb, sizeof(struct sctphdr)))
+ 		goto discard_it;
+ 
+diff --git a/net/tls/tls.h b/net/tls/tls.h
+index 774859b63f0ded..4e077068e6d98a 100644
+--- a/net/tls/tls.h
++++ b/net/tls/tls.h
+@@ -196,7 +196,7 @@ void tls_strp_msg_done(struct tls_strparser *strp);
+ int tls_rx_msg_size(struct tls_strparser *strp, struct sk_buff *skb);
+ void tls_rx_msg_ready(struct tls_strparser *strp);
+ 
+-void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh);
++bool tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh);
+ int tls_strp_msg_cow(struct tls_sw_context_rx *ctx);
+ struct sk_buff *tls_strp_msg_detach(struct tls_sw_context_rx *ctx);
+ int tls_strp_msg_hold(struct tls_strparser *strp, struct sk_buff_head *dst);
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index 095cf31bae0ba9..d71643b494a1ae 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -475,7 +475,7 @@ static void tls_strp_load_anchor_with_queue(struct tls_strparser *strp, int len)
+ 	strp->stm.offset = offset;
+ }
+ 
+-void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
++bool tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
+ {
+ 	struct strp_msg *rxm;
+ 	struct tls_msg *tlm;
+@@ -484,8 +484,11 @@ void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
+ 	DEBUG_NET_WARN_ON_ONCE(!strp->stm.full_len);
+ 
+ 	if (!strp->copy_mode && force_refresh) {
+-		if (WARN_ON(tcp_inq(strp->sk) < strp->stm.full_len))
+-			return;
++		if (unlikely(tcp_inq(strp->sk) < strp->stm.full_len)) {
++			WRITE_ONCE(strp->msg_ready, 0);
++			memset(&strp->stm, 0, sizeof(strp->stm));
++			return false;
++		}
+ 
+ 		tls_strp_load_anchor_with_queue(strp, strp->stm.full_len);
+ 	}
+@@ -495,6 +498,8 @@ void tls_strp_msg_load(struct tls_strparser *strp, bool force_refresh)
+ 	rxm->offset	= strp->stm.offset;
+ 	tlm = tls_msg(strp->anchor);
+ 	tlm->control	= strp->mark;
++
++	return true;
+ }
+ 
+ /* Called with lock held on lower socket */
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 549d1ea01a72a7..51c98a007ddac4 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1384,7 +1384,8 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
+ 			return sock_intr_errno(timeo);
+ 	}
+ 
+-	tls_strp_msg_load(&ctx->strp, released);
++	if (unlikely(!tls_strp_msg_load(&ctx->strp, released)))
++		return tls_rx_rec_wait(sk, psock, nonblock, false);
+ 
+ 	return 1;
+ }
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index f0e48e6911fc46..f01f9e8781061e 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -307,7 +307,7 @@ virtio_transport_cancel_pkt(struct vsock_sock *vsk)
+ 
+ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
+ {
+-	int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE + VIRTIO_VSOCK_SKB_HEADROOM;
++	int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+ 	struct scatterlist pkt, *p;
+ 	struct virtqueue *vq;
+ 	struct sk_buff *skb;
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 05d44a4435189c..fd88a32d43d685 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -850,7 +850,8 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
+ 
+ 	mgmt = (const struct ieee80211_mgmt *)params->buf;
+ 
+-	if (!ieee80211_is_mgmt(mgmt->frame_control))
++	if (!ieee80211_is_mgmt(mgmt->frame_control) ||
++	    ieee80211_has_order(mgmt->frame_control))
+ 		return -EINVAL;
+ 
+ 	stype = le16_to_cpu(mgmt->frame_control) & IEEE80211_FCTL_STYPE;
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index d2819baea414f7..c7a1f080d2de3a 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -155,7 +155,8 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
+ 		return skb;
+ 	}
+ 
+-	if (skb_is_gso(skb) && unlikely(xmit_xfrm_check_overflow(skb))) {
++	if (skb_is_gso(skb) && (unlikely(x->xso.dev != dev) ||
++				unlikely(xmit_xfrm_check_overflow(skb)))) {
+ 		struct sk_buff *segs;
+ 
+ 		/* Packet got rerouted, fixup features and segment it. */
+@@ -415,10 +416,12 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 	struct net_device *dev = x->xso.dev;
+ 	bool check_tunnel_size;
+ 
+-	if (x->xso.type == XFRM_DEV_OFFLOAD_UNSPECIFIED)
++	if (!x->type_offload ||
++	    (x->xso.type == XFRM_DEV_OFFLOAD_UNSPECIFIED && x->encap))
+ 		return false;
+ 
+-	if ((dev == xfrm_dst_path(dst)->dev) && !xdst->child->xfrm) {
++	if ((!dev || dev == xfrm_dst_path(dst)->dev) &&
++	    !xdst->child->xfrm) {
+ 		mtu = xfrm_state_mtu(x, xdst->child_mtu_cached);
+ 		if (skb->len <= mtu)
+ 			goto ok;
+@@ -430,6 +433,9 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 	return false;
+ 
+ ok:
++	if (!dev)
++		return true;
++
+ 	check_tunnel_size = x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+ 			    x->props.mode == XFRM_MODE_TUNNEL;
+ 	switch (x->props.family) {
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 97ff756191bab4..86337453709bad 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -1697,6 +1697,26 @@ struct xfrm_state *xfrm_state_lookup_byspi(struct net *net, __be32 spi,
+ }
+ EXPORT_SYMBOL(xfrm_state_lookup_byspi);
+ 
++static struct xfrm_state *xfrm_state_lookup_spi_proto(struct net *net, __be32 spi, u8 proto)
++{
++	struct xfrm_state *x;
++	unsigned int i;
++
++	rcu_read_lock();
++	for (i = 0; i <= net->xfrm.state_hmask; i++) {
++		hlist_for_each_entry_rcu(x, &net->xfrm.state_byspi[i], byspi) {
++			if (x->id.spi == spi && x->id.proto == proto) {
++				if (!xfrm_state_hold_rcu(x))
++					continue;
++				rcu_read_unlock();
++				return x;
++			}
++		}
++	}
++	rcu_read_unlock();
++	return NULL;
++}
++
+ static void __xfrm_state_insert(struct xfrm_state *x)
+ {
+ 	struct net *net = xs_net(x);
+@@ -2541,10 +2561,8 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 	unsigned int h;
+ 	struct xfrm_state *x0;
+ 	int err = -ENOENT;
+-	__be32 minspi = htonl(low);
+-	__be32 maxspi = htonl(high);
++	u32 range = high - low + 1;
+ 	__be32 newspi = 0;
+-	u32 mark = x->mark.v & x->mark.m;
+ 
+ 	spin_lock_bh(&x->lock);
+ 	if (x->km.state == XFRM_STATE_DEAD) {
+@@ -2558,38 +2576,34 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 
+ 	err = -ENOENT;
+ 
+-	if (minspi == maxspi) {
+-		x0 = xfrm_state_lookup(net, mark, &x->id.daddr, minspi, x->id.proto, x->props.family);
+-		if (x0) {
+-			NL_SET_ERR_MSG(extack, "Requested SPI is already in use");
+-			xfrm_state_put(x0);
++	for (h = 0; h < range; h++) {
++		u32 spi = (low == high) ? low : get_random_u32_inclusive(low, high);
++		newspi = htonl(spi);
++
++		spin_lock_bh(&net->xfrm.xfrm_state_lock);
++		x0 = xfrm_state_lookup_spi_proto(net, newspi, x->id.proto);
++		if (!x0) {
++			x->id.spi = newspi;
++			h = xfrm_spi_hash(net, &x->id.daddr, newspi, x->id.proto, x->props.family);
++			XFRM_STATE_INSERT(byspi, &x->byspi, net->xfrm.state_byspi + h, x->xso.type);
++			spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++			err = 0;
+ 			goto unlock;
+ 		}
+-		newspi = minspi;
+-	} else {
+-		u32 spi = 0;
+-		for (h = 0; h < high-low+1; h++) {
+-			spi = get_random_u32_inclusive(low, high);
+-			x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family);
+-			if (x0 == NULL) {
+-				newspi = htonl(spi);
+-				break;
+-			}
+-			xfrm_state_put(x0);
++		xfrm_state_put(x0);
++		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++
++		if (signal_pending(current)) {
++			err = -ERESTARTSYS;
++			goto unlock;
+ 		}
++
++		if (low == high)
++			break;
+ 	}
+-	if (newspi) {
+-		spin_lock_bh(&net->xfrm.xfrm_state_lock);
+-		x->id.spi = newspi;
+-		h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, x->props.family);
+-		XFRM_STATE_INSERT(byspi, &x->byspi, net->xfrm.state_byspi + h,
+-				  x->xso.type);
+-		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
+-		err = 0;
+-	} else {
++	if (err)
+ 		NL_SET_ERR_MSG(extack, "No SPI available in the requested range");
+-	}
+ 
+ unlock:
+ 	spin_unlock_bh(&x->lock);
+@@ -3278,7 +3292,7 @@ void xfrm_state_fini(struct net *net)
+ 	unsigned int sz;
+ 
+ 	flush_work(&net->xfrm.state_hash_work);
+-	xfrm_state_flush(net, IPSEC_PROTO_ANY, false);
++	xfrm_state_flush(net, 0, false);
+ 	flush_work(&xfrm_state_gc_work);
+ 
+ 	WARN_ON(!list_empty(&net->xfrm.state_all));
+diff --git a/rust/Makefile b/rust/Makefile
+index 115b63b7d1e3f3..dfde685264e1e2 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -62,6 +62,10 @@ core-cfgs = \
+ 
+ core-edition := $(if $(call rustc-min-version,108700),2024,2021)
+ 
+# `rustdoc` does not save the target modifiers, so work around it for
+# the time being (https://github.com/rust-lang/rust/issues/144521).
++rustdoc_modifiers_workaround := $(if $(call rustc-min-version,108800),-Cunsafe-allow-abi-mismatch=fixed-x18)
++
+ # `rustc` recognizes `--remap-path-prefix` since 1.26.0, but `rustdoc` only
+ # since Rust 1.81.0. Moreover, `rustdoc` ICEs on out-of-tree builds since Rust
+ # 1.82.0 (https://github.com/rust-lang/rust/issues/138520). Thus workaround both
+@@ -74,6 +78,7 @@ quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $<
+ 		-Zunstable-options --generate-link-to-definition \
+ 		--output $(rustdoc_output) \
+ 		--crate-name $(subst rustdoc-,,$@) \
++		$(rustdoc_modifiers_workaround) \
+ 		$(if $(rustdoc_host),,--sysroot=/dev/null) \
+ 		@$(objtree)/include/generated/rustc_cfg $<
+ 
+@@ -103,14 +108,14 @@ rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \
+ rustdoc-macros: private rustdoc_host = yes
+ rustdoc-macros: private rustc_target_flags = --crate-type proc-macro \
+     --extern proc_macro
+-rustdoc-macros: $(src)/macros/lib.rs FORCE
++rustdoc-macros: $(src)/macros/lib.rs rustdoc-clean FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+ # Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should
+ # not be needed -- see https://github.com/rust-lang/rust/pull/128307.
+ rustdoc-core: private skip_flags = --edition=2021 -Wrustdoc::unescaped_backticks
+ rustdoc-core: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+-rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
++rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs rustdoc-clean FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+ rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE
+@@ -122,7 +127,8 @@ rustdoc-ffi: $(src)/ffi.rs rustdoc-core FORCE
+ rustdoc-pin_init_internal: private rustdoc_host = yes
+ rustdoc-pin_init_internal: private rustc_target_flags = --cfg kernel \
+     --extern proc_macro --crate-type proc-macro
+-rustdoc-pin_init_internal: $(src)/pin-init/internal/src/lib.rs FORCE
++rustdoc-pin_init_internal: $(src)/pin-init/internal/src/lib.rs \
++    rustdoc-clean FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+ rustdoc-pin_init: private rustdoc_host = yes
+@@ -140,6 +146,9 @@ rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-ffi rustdoc-macros \
+     $(obj)/bindings.o FORCE
+ 	+$(call if_changed,rustdoc)
+ 
++rustdoc-clean: FORCE
++	$(Q)rm -rf $(rustdoc_output)
++
+ quiet_cmd_rustc_test_library = $(RUSTC_OR_CLIPPY_QUIET) TL $<
+       cmd_rustc_test_library = \
+ 	OBJTREE=$(abspath $(objtree)) \
+@@ -212,6 +221,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC TK $<
+ 		--extern bindings --extern uapi \
+ 		--no-run --crate-name kernel -Zunstable-options \
+ 		--sysroot=/dev/null \
++		$(rustdoc_modifiers_workaround) \
+ 		--test-builder $(objtree)/scripts/rustdoc_test_builder \
+ 		$< $(rustdoc_test_kernel_quiet); \
+ 	$(objtree)/scripts/rustdoc_test_gen
+diff --git a/samples/damon/mtier.c b/samples/damon/mtier.c
+index c94254b77fc984..ed6bed8b3d4d99 100644
+--- a/samples/damon/mtier.c
++++ b/samples/damon/mtier.c
+@@ -151,6 +151,8 @@ static void damon_sample_mtier_stop(void)
+ 	damon_destroy_ctx(ctxs[1]);
+ }
+ 
++static bool init_called;
++
+ static int damon_sample_mtier_enable_store(
+ 		const char *val, const struct kernel_param *kp)
+ {
+@@ -176,6 +178,14 @@ static int damon_sample_mtier_enable_store(
+ 
+ static int __init damon_sample_mtier_init(void)
+ {
++	int err = 0;
++
++	init_called = true;
++	if (enable) {
++		err = damon_sample_mtier_start();
++		if (err)
++			enable = false;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
+index e20238a249e7b5..e941958b103249 100644
+--- a/samples/damon/wsse.c
++++ b/samples/damon/wsse.c
+@@ -89,6 +89,8 @@ static void damon_sample_wsse_stop(void)
+ 		put_pid(target_pidp);
+ }
+ 
++static bool init_called;
++
+ static int damon_sample_wsse_enable_store(
+ 		const char *val, const struct kernel_param *kp)
+ {
+@@ -114,7 +116,15 @@ static int damon_sample_wsse_enable_store(
+ 
+ static int __init damon_sample_wsse_init(void)
+ {
+-	return 0;
++	int err = 0;
++
++	init_called = true;
++	if (enable) {
++		err = damon_sample_wsse_start();
++		if (err)
++			enable = false;
++	}
++	return err;
+ }
+ 
+ module_init(damon_sample_wsse_init);
+diff --git a/scripts/kconfig/gconf.c b/scripts/kconfig/gconf.c
+index c0f46f18906073..0caf0ced13df4a 100644
+--- a/scripts/kconfig/gconf.c
++++ b/scripts/kconfig/gconf.c
+@@ -748,7 +748,7 @@ static void renderer_edited(GtkCellRendererText * cell,
+ 	struct symbol *sym;
+ 
+ 	if (!gtk_tree_model_get_iter(model2, &iter, path))
+-		return;
++		goto free;
+ 
+ 	gtk_tree_model_get(model2, &iter, COL_MENU, &menu, -1);
+ 	sym = menu->sym;
+@@ -760,6 +760,7 @@ static void renderer_edited(GtkCellRendererText * cell,
+ 
+ 	update_tree(&rootmenu, NULL);
+ 
++free:
+ 	gtk_tree_path_free(path);
+ }
+ 
+@@ -942,13 +943,14 @@ on_treeview2_key_press_event(GtkWidget * widget,
+ void
+ on_treeview2_cursor_changed(GtkTreeView * treeview, gpointer user_data)
+ {
++	GtkTreeModel *model = gtk_tree_view_get_model(treeview);
+ 	GtkTreeSelection *selection;
+ 	GtkTreeIter iter;
+ 	struct menu *menu;
+ 
+ 	selection = gtk_tree_view_get_selection(treeview);
+-	if (gtk_tree_selection_get_selected(selection, &model2, &iter)) {
+-		gtk_tree_model_get(model2, &iter, COL_MENU, &menu, -1);
++	if (gtk_tree_selection_get_selected(selection, &model, &iter)) {
++		gtk_tree_model_get(model, &iter, COL_MENU, &menu, -1);
+ 		text_insert_help(menu);
+ 	}
+ }
+diff --git a/scripts/kconfig/lxdialog/inputbox.c b/scripts/kconfig/lxdialog/inputbox.c
+index 3c6e24b20f5be6..5e4a131724f288 100644
+--- a/scripts/kconfig/lxdialog/inputbox.c
++++ b/scripts/kconfig/lxdialog/inputbox.c
+@@ -39,8 +39,10 @@ int dialog_inputbox(const char *title, const char *prompt, int height, int width
+ 
+ 	if (!init)
+ 		instr[0] = '\0';
+-	else
+-		strcpy(instr, init);
++	else {
++		strncpy(instr, init, sizeof(dialog_input_result) - 1);
++		instr[sizeof(dialog_input_result) - 1] = '\0';
++	}
+ 
+ do_resize:
+ 	if (getmaxy(stdscr) <= (height - INPUTBOX_HEIGHT_MIN))
+diff --git a/scripts/kconfig/lxdialog/menubox.c b/scripts/kconfig/lxdialog/menubox.c
+index 6e6244df0c56e3..d4c19b7beebbd4 100644
+--- a/scripts/kconfig/lxdialog/menubox.c
++++ b/scripts/kconfig/lxdialog/menubox.c
+@@ -264,7 +264,7 @@ int dialog_menu(const char *title, const char *prompt,
+ 		if (key < 256 && isalpha(key))
+ 			key = tolower(key);
+ 
+-		if (strchr("ynmh", key))
++		if (strchr("ynmh ", key))
+ 			i = max_choice;
+ 		else {
+ 			for (i = choice + 1; i < max_choice; i++) {
+diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
+index c0b2dabf6c894f..ae1fe5f6032703 100644
+--- a/scripts/kconfig/nconf.c
++++ b/scripts/kconfig/nconf.c
+@@ -593,6 +593,8 @@ static void item_add_str(const char *fmt, ...)
+ 		tmp_str,
+ 		sizeof(k_menu_items[index].str));
+ 
++	k_menu_items[index].str[sizeof(k_menu_items[index].str) - 1] = '\0';
++
+ 	free_item(curses_menu_items[index]);
+ 	curses_menu_items[index] = new_item(
+ 			k_menu_items[index].str,
+diff --git a/scripts/kconfig/nconf.gui.c b/scripts/kconfig/nconf.gui.c
+index 4bfdf8ac2a9a34..7206437e784a0a 100644
+--- a/scripts/kconfig/nconf.gui.c
++++ b/scripts/kconfig/nconf.gui.c
+@@ -359,6 +359,7 @@ int dialog_inputbox(WINDOW *main_window,
+ 	x = (columns-win_cols)/2;
+ 
+ 	strncpy(result, init, *result_len);
++	result[*result_len - 1] = '\0';
+ 
+ 	/* create the windows */
+ 	win = newwin(win_lines, win_cols, y, x);
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index 5939bd9a9b9bb0..08ca9057f82b02 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -508,6 +508,7 @@ static const char *next_name(int xtype, const char *name)
+  * @name: returns: name tested to find label (NOT NULL)
+  *
+  * Returns: refcounted label, or NULL on failure (MAYBE NULL)
++ *          @name will always be set with the last name tried
+  */
+ struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ 				const char **name)
+@@ -517,6 +518,7 @@ struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ 	struct aa_label *label = NULL;
+ 	u32 xtype = xindex & AA_X_TYPE_MASK;
+ 	int index = xindex & AA_X_INDEX_MASK;
++	const char *next;
+ 
+ 	AA_BUG(!name);
+ 
+@@ -524,25 +526,27 @@ struct aa_label *x_table_lookup(struct aa_profile *profile, u32 xindex,
+ 	/* TODO: move lookup parsing to unpack time so this is a straight
+ 	 *       index into the resultant label
+ 	 */
+-	for (*name = rules->file->trans.table[index]; !label && *name;
+-	     *name = next_name(xtype, *name)) {
++	for (next = rules->file->trans.table[index]; next;
++	     next = next_name(xtype, next)) {
++		const char *lookup = (*next == '&') ? next + 1 : next;
++		*name = next;
+ 		if (xindex & AA_X_CHILD) {
+-			struct aa_profile *new_profile;
+-			/* release by caller */
+-			new_profile = aa_find_child(profile, *name);
+-			if (new_profile)
+-				label = &new_profile->label;
++			/* TODO: switch to parse to get stack of child */
++			struct aa_profile *new = aa_find_child(profile, lookup);
++
++			if (new)
++				/* release by caller */
++				return &new->label;
+ 			continue;
+ 		}
+-		label = aa_label_parse(&profile->label, *name, GFP_KERNEL,
++		label = aa_label_parse(&profile->label, lookup, GFP_KERNEL,
+ 				       true, false);
+-		if (IS_ERR(label))
+-			label = NULL;
++		if (!IS_ERR_OR_NULL(label))
++			/* release by caller */
++			return label;
+ 	}
+ 
+-	/* released by caller */
+-
+-	return label;
++	return NULL;
+ }
+ 
+ /**
+@@ -567,9 +571,9 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 	struct aa_ruleset *rules = list_first_entry(&profile->rules,
+ 						    typeof(*rules), list);
+ 	struct aa_label *new = NULL;
++	struct aa_label *stack = NULL;
+ 	struct aa_ns *ns = profile->ns;
+ 	u32 xtype = xindex & AA_X_TYPE_MASK;
+-	const char *stack = NULL;
+ 
+ 	switch (xtype) {
+ 	case AA_X_NONE:
+@@ -578,13 +582,14 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 		break;
+ 	case AA_X_TABLE:
+ 		/* TODO: fix when perm mapping done at unload */
+-		stack = rules->file->trans.table[xindex & AA_X_INDEX_MASK];
+-		if (*stack != '&') {
+-			/* released by caller */
+-			new = x_table_lookup(profile, xindex, lookupname);
+-			stack = NULL;
++		/* released by caller
+		 * if null for both stack and direct, want to try fallback
++		 */
++		new = x_table_lookup(profile, xindex, lookupname);
++		if (!new || **lookupname != '&')
+ 			break;
+-		}
++		stack = new;
++		new = NULL;
+ 		fallthrough;	/* to X_NAME */
+ 	case AA_X_NAME:
+ 		if (xindex & AA_X_CHILD)
+@@ -599,6 +604,7 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 		break;
+ 	}
+ 
++	/* fallback transition check */
+ 	if (!new) {
+ 		if (xindex & AA_X_INHERIT) {
+ 			/* (p|c|n)ix - don't change profile but do
+@@ -617,12 +623,12 @@ static struct aa_label *x_to_label(struct aa_profile *profile,
+ 		/* base the stack on post domain transition */
+ 		struct aa_label *base = new;
+ 
+-		new = aa_label_parse(base, stack, GFP_KERNEL, true, false);
+-		if (IS_ERR(new))
+-			new = NULL;
++		new = aa_label_merge(base, stack, GFP_KERNEL);
++		/* null on error */
+ 		aa_put_label(base);
+ 	}
+ 
++	aa_put_label(stack);
+ 	/* released by caller */
+ 	return new;
+ }
+diff --git a/security/apparmor/file.c b/security/apparmor/file.c
+index d52a5b14dad4c7..62bc46e037588a 100644
+--- a/security/apparmor/file.c
++++ b/security/apparmor/file.c
+@@ -423,9 +423,11 @@ int aa_path_link(const struct cred *subj_cred,
+ {
+ 	struct path link = { .mnt = new_dir->mnt, .dentry = new_dentry };
+ 	struct path target = { .mnt = new_dir->mnt, .dentry = old_dentry };
++	struct inode *inode = d_backing_inode(old_dentry);
++	vfsuid_t vfsuid = i_uid_into_vfsuid(mnt_idmap(target.mnt), inode);
+ 	struct path_cond cond = {
+-		d_backing_inode(old_dentry)->i_uid,
+-		d_backing_inode(old_dentry)->i_mode
++		.uid = vfsuid_into_kuid(vfsuid),
++		.mode = inode->i_mode,
+ 	};
+ 	char *buffer = NULL, *buffer2 = NULL;
+ 	struct aa_profile *profile;
+diff --git a/security/apparmor/include/lib.h b/security/apparmor/include/lib.h
+index f11a0db7f51da4..e83f45e936a7d4 100644
+--- a/security/apparmor/include/lib.h
++++ b/security/apparmor/include/lib.h
+@@ -48,7 +48,11 @@ extern struct aa_dfa *stacksplitdfa;
+ #define AA_BUG_FMT(X, fmt, args...)					\
+ 	WARN((X), "AppArmor WARN %s: (" #X "): " fmt, __func__, ##args)
+ #else
+-#define AA_BUG_FMT(X, fmt, args...) no_printk(fmt, ##args)
++#define AA_BUG_FMT(X, fmt, args...)					\
++	do {								\
++		BUILD_BUG_ON_INVALID(X);				\
++		no_printk(fmt, ##args);					\
++	} while (0)
+ #endif
+ 
+ #define AA_ERROR(fmt, args...)						\
+diff --git a/security/inode.c b/security/inode.c
+index 3913501621fa9d..93460eab8216bb 100644
+--- a/security/inode.c
++++ b/security/inode.c
+@@ -159,7 +159,6 @@ static struct dentry *securityfs_create_dentry(const char *name, umode_t mode,
+ 		inode->i_fop = fops;
+ 	}
+ 	d_instantiate(dentry, inode);
+-	dget(dentry);
+ 	inode_unlock(dir);
+ 	return dentry;
+ 
+@@ -306,7 +305,6 @@ void securityfs_remove(struct dentry *dentry)
+ 			simple_rmdir(dir, dentry);
+ 		else
+ 			simple_unlink(dir, dentry);
+-		dput(dentry);
+ 	}
+ 	inode_unlock(dir);
+ 	simple_release_fs(&mount, &mount_count);
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 33eafb71e4f31b..0116e9f93ffe30 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -303,7 +303,6 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
+ 	if ((fd_file(f)->f_op == &ruleset_fops) ||
+ 	    (fd_file(f)->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
+ 	    (fd_file(f)->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
+-	    d_is_negative(fd_file(f)->f_path.dentry) ||
+ 	    IS_PRIVATE(d_backing_inode(fd_file(f)->f_path.dentry)))
+ 		return -EBADFD;
+ 
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 853ac5bb33ff2a..ecb71bf1859d40 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -24,6 +24,7 @@
+ #include <sound/minors.h>
+ #include <linux/uio.h>
+ #include <linux/delay.h>
++#include <linux/bitops.h>
+ 
+ #include "pcm_local.h"
+ 
+@@ -3130,13 +3131,23 @@ struct snd_pcm_sync_ptr32 {
+ static snd_pcm_uframes_t recalculate_boundary(struct snd_pcm_runtime *runtime)
+ {
+ 	snd_pcm_uframes_t boundary;
++	snd_pcm_uframes_t border;
++	int order;
+ 
+ 	if (! runtime->buffer_size)
+ 		return 0;
+-	boundary = runtime->buffer_size;
+-	while (boundary * 2 <= 0x7fffffffUL - runtime->buffer_size)
+-		boundary *= 2;
+-	return boundary;
++
++	border = 0x7fffffffUL - runtime->buffer_size;
++	if (runtime->buffer_size > border)
++		return runtime->buffer_size;
++
++	order = __fls(border) - __fls(runtime->buffer_size);
++	boundary = runtime->buffer_size << order;
++
++	if (boundary <= border)
++		return boundary;
++	else
++		return boundary / 2;
+ }
+ 
+ static int snd_pcm_ioctl_sync_ptr_compat(struct snd_pcm_substream *substream,
+diff --git a/sound/pci/hda/cs35l41_hda.c b/sound/pci/hda/cs35l41_hda.c
+index d5bc81099d0d68..17cdce91fdbfab 100644
+--- a/sound/pci/hda/cs35l41_hda.c
++++ b/sound/pci/hda/cs35l41_hda.c
+@@ -2085,3 +2085,5 @@ MODULE_IMPORT_NS("SND_SOC_CS_AMP_LIB");
+ MODULE_AUTHOR("Lucas Tanure, Cirrus Logic Inc, <tanureal@opensource.cirrus.com>");
+ MODULE_LICENSE("GPL");
+ MODULE_IMPORT_NS("FW_CS_DSP");
++MODULE_FIRMWARE("cirrus/cs35l41-*.wmfw");
++MODULE_FIRMWARE("cirrus/cs35l41-*.bin");
+diff --git a/sound/pci/hda/cs35l56_hda.c b/sound/pci/hda/cs35l56_hda.c
+index 886c53184fec23..f48077f5ca45b3 100644
+--- a/sound/pci/hda/cs35l56_hda.c
++++ b/sound/pci/hda/cs35l56_hda.c
+@@ -1176,3 +1176,7 @@ MODULE_IMPORT_NS("SND_SOC_CS_AMP_LIB");
+ MODULE_AUTHOR("Richard Fitzgerald <rf@opensource.cirrus.com>");
+ MODULE_AUTHOR("Simon Trimmer <simont@opensource.cirrus.com>");
+ MODULE_LICENSE("GPL");
++MODULE_FIRMWARE("cirrus/cs35l54-*.wmfw");
++MODULE_FIRMWARE("cirrus/cs35l54-*.bin");
++MODULE_FIRMWARE("cirrus/cs35l56-*.wmfw");
++MODULE_FIRMWARE("cirrus/cs35l56-*.bin");
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index c018beeecd3d60..0398df0f159a1e 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -639,24 +639,16 @@ static void hda_jackpoll_work(struct work_struct *work)
+ 	struct hda_codec *codec =
+ 		container_of(work, struct hda_codec, jackpoll_work.work);
+ 
+-	/* for non-polling trigger: we need nothing if already powered on */
+-	if (!codec->jackpoll_interval && snd_hdac_is_power_on(&codec->core))
++	if (!codec->jackpoll_interval)
+ 		return;
+ 
+ 	/* the power-up/down sequence triggers the runtime resume */
+-	snd_hda_power_up_pm(codec);
++	snd_hda_power_up(codec);
+ 	/* update jacks manually if polling is required, too */
+-	if (codec->jackpoll_interval) {
+-		snd_hda_jack_set_dirty_all(codec);
+-		snd_hda_jack_poll_all(codec);
+-	}
+-	snd_hda_power_down_pm(codec);
+-
+-	if (!codec->jackpoll_interval)
+-		return;
+-
+-	schedule_delayed_work(&codec->jackpoll_work,
+-			      codec->jackpoll_interval);
++	snd_hda_jack_set_dirty_all(codec);
++	snd_hda_jack_poll_all(codec);
++	schedule_delayed_work(&codec->jackpoll_work, codec->jackpoll_interval);
++	snd_hda_power_down(codec);
+ }
+ 
+ /* release all pincfg lists */
+@@ -2895,12 +2887,12 @@ static void hda_call_codec_resume(struct hda_codec *codec)
+ 		snd_hda_regmap_sync(codec);
+ 	}
+ 
+-	if (codec->jackpoll_interval)
+-		hda_jackpoll_work(&codec->jackpoll_work.work);
+-	else
+-		snd_hda_jack_report_sync(codec);
++	snd_hda_jack_report_sync(codec);
+ 	codec->core.dev.power.power_state = PMSG_ON;
+ 	snd_hdac_leave_pm(&codec->core);
++	if (codec->jackpoll_interval)
++		schedule_delayed_work(&codec->jackpoll_work,
++				      codec->jackpoll_interval);
+ }
+ 
+ static int hda_codec_runtime_suspend(struct device *dev)
+@@ -2912,8 +2904,6 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ 	if (!codec->card)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&codec->jackpoll_work);
+-
+ 	state = hda_call_codec_suspend(codec);
+ 	if (codec->link_down_at_suspend ||
+ 	    (codec_has_clkstop(codec) && codec_has_epss(codec) &&
+@@ -2921,10 +2911,6 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ 		snd_hdac_codec_link_down(&codec->core);
+ 	snd_hda_codec_display_power(codec, false);
+ 
+-	if (codec->bus->jackpoll_in_suspend &&
+-		(dev->power.power_state.event != PM_EVENT_SUSPEND))
+-		schedule_delayed_work(&codec->jackpoll_work,
+-					codec->jackpoll_interval);
+ 	return 0;
+ }
+ 
+@@ -3020,6 +3006,7 @@ void snd_hda_codec_shutdown(struct hda_codec *codec)
+ 	if (!codec->core.registered)
+ 		return;
+ 
++	codec->jackpoll_interval = 0; /* don't poll any longer */
+ 	cancel_delayed_work_sync(&codec->jackpoll_work);
+ 	list_for_each_entry(cpcm, &codec->pcm_list_head, list)
+ 		snd_pcm_suspend_all(cpcm->pcm);
+@@ -3086,10 +3073,11 @@ int snd_hda_codec_build_controls(struct hda_codec *codec)
+ 	if (err < 0)
+ 		return err;
+ 
++	snd_hda_jack_report_sync(codec); /* call at the last init point */
+ 	if (codec->jackpoll_interval)
+-		hda_jackpoll_work(&codec->jackpoll_work.work);
+-	else
+-		snd_hda_jack_report_sync(codec); /* call at the last init point */
++		schedule_delayed_work(&codec->jackpoll_work,
++				      codec->jackpoll_interval);
++
+ 	sync_power_up_states(codec);
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 77432e06f3e32c..a2f57d7424bb84 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -4410,7 +4410,7 @@ static int add_tuning_control(struct hda_codec *codec,
+ 	}
+ 	knew.private_value =
+ 		HDA_COMPOSE_AMP_VAL(nid, 1, 0, type);
+-	sprintf(namestr, "%s %s Volume", name, dirstr[dir]);
++	snprintf(namestr, sizeof(namestr), "%s %s Volume", name, dirstr[dir]);
+ 	return snd_hda_ctl_add(codec, nid, snd_ctl_new1(&knew, codec));
+ }
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4031eeb4357b15..2ab2666d4058d6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11401,6 +11401,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1854, 0x0440, "LG CQ6", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+ 	SND_PCI_QUIRK(0x1854, 0x0441, "LG CQ6 AIO", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+ 	SND_PCI_QUIRK(0x1854, 0x0488, "LG gram 16 (16Z90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
++	SND_PCI_QUIRK(0x1854, 0x0489, "LG gram 16 (16Z90R-A)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ 	SND_PCI_QUIRK(0x1854, 0x048a, "LG gram 17 (17ZD90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -11430,6 +11431,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1ee7, 0x2078, "HONOR BRB-X M1010", ALC2XX_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x2014, 0x800a, "Positivo ARN50", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -11448,6 +11450,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0xf111, 0x000b, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 
+ #if 0
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 51e7f1f1a48e45..b521cec2033361 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -2249,7 +2249,7 @@ static int snd_intel8x0_mixer(struct intel8x0 *chip, int ac97_clock,
+ 			tmp |= chip->ac97_sdin[0] << ICH_DI1L_SHIFT;
+ 			for (i = 1; i < 4; i++) {
+ 				if (pcm->r[0].codec[i]) {
+-					tmp |= chip->ac97_sdin[pcm->r[0].codec[1]->num] << ICH_DI2L_SHIFT;
++					tmp |= chip->ac97_sdin[pcm->r[0].codec[i]->num] << ICH_DI2L_SHIFT;
+ 					break;
+ 				}
+ 			}
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index 1139a2754ca337..056d98154682a7 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -1232,7 +1232,8 @@ static int hdac_hdmi_parse_eld(struct hdac_device *hdev,
+ 						>> DRM_ELD_VER_SHIFT;
+ 
+ 	if (ver != ELD_VER_CEA_861D && ver != ELD_VER_PARTIAL) {
+-		dev_err(&hdev->dev, "HDMI: Unknown ELD version %d\n", ver);
++		dev_err_ratelimited(&hdev->dev,
++				    "HDMI: Unknown ELD version %d\n", ver);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1240,7 +1241,8 @@ static int hdac_hdmi_parse_eld(struct hdac_device *hdev,
+ 		DRM_ELD_MNL_MASK) >> DRM_ELD_MNL_SHIFT;
+ 
+ 	if (mnl > ELD_MAX_MNL) {
+-		dev_err(&hdev->dev, "HDMI: MNL Invalid %d\n", mnl);
++		dev_err_ratelimited(&hdev->dev,
++				    "HDMI: MNL Invalid %d\n", mnl);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -1299,8 +1301,8 @@ static void hdac_hdmi_present_sense(struct hdac_hdmi_pin *pin,
+ 
+ 	if (!port->eld.monitor_present || !port->eld.eld_valid) {
+ 
+-		dev_err(&hdev->dev, "%s: disconnect for pin:port %d:%d\n",
+-						__func__, pin->nid, port->id);
++		dev_dbg(&hdev->dev, "%s: disconnect for pin:port %d:%d\n",
++			__func__, pin->nid, port->id);
+ 
+ 		/*
+ 		 * PCMs are not registered during device probe, so don't
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index 21a18012b4c0db..55881a5669e2b4 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -3013,6 +3013,11 @@ static int rt5640_i2c_probe(struct i2c_client *i2c)
+ 	}
+ 
+ 	regmap_read(rt5640->regmap, RT5640_VENDOR_ID2, &val);
++	if (val != RT5640_DEVICE_ID) {
++		usleep_range(60000, 100000);
++		regmap_read(rt5640->regmap, RT5640_VENDOR_ID2, &val);
++	}
++
+ 	if (val != RT5640_DEVICE_ID) {
+ 		dev_err(&i2c->dev,
+ 			"Device with ID register %#x is not rt5640/39\n", val);
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 50af6b725670d9..10f0633ce68f29 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -809,9 +809,9 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
+ 	 * are running concurrently.
+ 	 */
+ 	/* Software Reset */
+-	regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ 	/* Clear SR bit to finish the reset */
+-	regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR, 0);
+ }
+ 
+ static int fsl_sai_trigger(struct snd_pcm_substream *substream, int cmd,
+@@ -930,11 +930,11 @@ static int fsl_sai_dai_probe(struct snd_soc_dai *cpu_dai)
+ 	unsigned int ofs = sai->soc_data->reg_offset;
+ 
+ 	/* Software Reset for both Tx and Rx */
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ 	/* Clear SR bit to finish the reset */
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), 0);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, 0);
+ 
+ 	regmap_update_bits(sai->regmap, FSL_SAI_TCR1(ofs),
+ 			   FSL_SAI_CR1_RFW_MASK(sai->soc_data->fifo_depth),
+@@ -1824,11 +1824,11 @@ static int fsl_sai_runtime_resume(struct device *dev)
+ 
+ 	regcache_cache_only(sai->regmap, false);
+ 	regcache_mark_dirty(sai->regmap);
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, FSL_SAI_CSR_SR);
+ 	usleep_range(1000, 2000);
+-	regmap_write(sai->regmap, FSL_SAI_TCSR(ofs), 0);
+-	regmap_write(sai->regmap, FSL_SAI_RCSR(ofs), 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_TCSR(ofs), FSL_SAI_CSR_SR, 0);
++	regmap_update_bits(sai->regmap, FSL_SAI_RCSR(ofs), FSL_SAI_CSR_SR, 0);
+ 
+ 	ret = regcache_sync(sai->regmap);
+ 	if (ret)
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index ec1b3f55cb5c9d..d45e9279df27aa 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -446,6 +446,8 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	adev = devm_kzalloc(dev, sizeof(*adev), GFP_KERNEL);
+ 	if (!adev)
+ 		return -ENOMEM;
++	bus = &adev->base.core;
++
+ 	ret = avs_bus_init(adev, pci, id);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to init avs bus: %d\n", ret);
+@@ -456,7 +458,6 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	bus = &adev->base.core;
+ 	bus->addr = pci_resource_start(pci, 0);
+ 	bus->remap_addr = pci_ioremap_bar(pci, 0);
+ 	if (!bus->remap_addr) {
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 504887505e6839..c576ec5527f915 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -741,6 +741,14 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOC_SDW_CODEC_SPKR),
+ 	},
++	{
++		.callback = sof_sdw_quirk_cb,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0CCC")
++		},
++		.driver_data = (void *)(SOC_SDW_CODEC_SPKR),
++	},
+ 	/* Pantherlake devices*/
+ 	{
+ 		.callback = sof_sdw_quirk_cb,
+diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
+index 9946f12254b396..b456e096f138fd 100644
+--- a/sound/soc/qcom/lpass-platform.c
++++ b/sound/soc/qcom/lpass-platform.c
+@@ -202,7 +202,6 @@ static int lpass_platform_pcmops_open(struct snd_soc_component *component,
+ 	struct regmap *map;
+ 	unsigned int dai_id = cpu_dai->driver->id;
+ 
+-	component->id = dai_id;
+ 	data = kzalloc(sizeof(*data), GFP_KERNEL);
+ 	if (!data)
+ 		return -ENOMEM;
+@@ -1190,13 +1189,14 @@ static int lpass_platform_pcmops_suspend(struct snd_soc_component *component)
+ {
+ 	struct lpass_data *drvdata = snd_soc_component_get_drvdata(component);
+ 	struct regmap *map;
+-	unsigned int dai_id = component->id;
+ 
+-	if (dai_id == LPASS_DP_RX)
++	if (drvdata->hdmi_port_enable) {
+ 		map = drvdata->hdmiif_map;
+-	else
+-		map = drvdata->lpaif_map;
++		regcache_cache_only(map, true);
++		regcache_mark_dirty(map);
++	}
+ 
++	map = drvdata->lpaif_map;
+ 	regcache_cache_only(map, true);
+ 	regcache_mark_dirty(map);
+ 
+@@ -1207,14 +1207,19 @@ static int lpass_platform_pcmops_resume(struct snd_soc_component *component)
+ {
+ 	struct lpass_data *drvdata = snd_soc_component_get_drvdata(component);
+ 	struct regmap *map;
+-	unsigned int dai_id = component->id;
++	int ret;
+ 
+-	if (dai_id == LPASS_DP_RX)
++	if (drvdata->hdmi_port_enable) {
+ 		map = drvdata->hdmiif_map;
+-	else
+-		map = drvdata->lpaif_map;
++		regcache_cache_only(map, false);
++		ret = regcache_sync(map);
++		if (ret)
++			return ret;
++	}
+ 
++	map = drvdata->lpaif_map;
+ 	regcache_cache_only(map, false);
++
+ 	return regcache_sync(map);
+ }
+ 
+@@ -1224,7 +1229,9 @@ static int lpass_platform_copy(struct snd_soc_component *component,
+ 			       unsigned long bytes)
+ {
+ 	struct snd_pcm_runtime *rt = substream->runtime;
+-	unsigned int dai_id = component->id;
++	struct snd_soc_pcm_runtime *soc_runtime = snd_soc_substream_to_rtd(substream);
++	struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(soc_runtime, 0);
++	unsigned int dai_id = cpu_dai->driver->id;
+ 	int ret = 0;
+ 
+ 	void __iomem *dma_buf = (void __iomem *) (rt->dma_area + pos +
+diff --git a/sound/soc/sdca/sdca_functions.c b/sound/soc/sdca/sdca_functions.c
+index 28e9e6de6d5dba..050f7338aca95a 100644
+--- a/sound/soc/sdca/sdca_functions.c
++++ b/sound/soc/sdca/sdca_functions.c
+@@ -912,6 +912,8 @@ static int find_sdca_entity_control(struct device *dev, struct sdca_entity *enti
+ 				       &tmp);
+ 	if (!ret)
+ 		control->interrupt_position = tmp;
++	else
++		control->interrupt_position = SDCA_NO_INTERRUPT;
+ 
+ 	control->label = find_sdca_control_label(dev, entity, control);
+ 	if (!control->label)
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 67bebc339148b1..16bbc074dc5f67 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1139,6 +1139,9 @@ static int snd_soc_compensate_channel_connection_map(struct snd_soc_card *card,
+ void snd_soc_remove_pcm_runtime(struct snd_soc_card *card,
+ 				struct snd_soc_pcm_runtime *rtd)
+ {
++	if (!rtd)
++		return;
++
+ 	lockdep_assert_held(&client_mutex);
+ 
+ 	/*
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index f26f9e9d7ce742..7d9c9e8839f6ad 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -739,6 +739,10 @@ static int snd_soc_dapm_set_bias_level(struct snd_soc_dapm_context *dapm,
+ out:
+ 	trace_snd_soc_bias_level_done(dapm, level);
+ 
++	/* success */
++	if (ret == 0)
++		snd_soc_dapm_init_bias_level(dapm, level);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index d612d693efc39a..b6d5c8024f8cfa 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -2378,14 +2378,25 @@ static int sof_dspless_widget_ready(struct snd_soc_component *scomp, int index,
+ 				    struct snd_soc_dapm_widget *w,
+ 				    struct snd_soc_tplg_dapm_widget *tw)
+ {
++	struct snd_soc_tplg_private *priv = &tw->priv;
++	int ret;
++
++	/* for snd_soc_dapm_widget.no_wname_in_kcontrol_name */
++	ret = sof_parse_tokens(scomp, w, dapm_widget_tokens,
++			       ARRAY_SIZE(dapm_widget_tokens),
++			       priv->array, le32_to_cpu(priv->size));
++	if (ret < 0) {
++		dev_err(scomp->dev, "failed to parse dapm widget tokens for %s\n",
++			w->name);
++		return ret;
++	}
++
+ 	if (WIDGET_IS_DAI(w->id)) {
+ 		static const struct sof_topology_token dai_tokens[] = {
+ 			{SOF_TKN_DAI_TYPE, SND_SOC_TPLG_TUPLE_TYPE_STRING, get_token_dai_type, 0}};
+ 		struct snd_sof_dev *sdev = snd_soc_component_get_drvdata(scomp);
+-		struct snd_soc_tplg_private *priv = &tw->priv;
+ 		struct snd_sof_widget *swidget;
+ 		struct snd_sof_dai *sdai;
+-		int ret;
+ 
+ 		swidget = kzalloc(sizeof(*swidget), GFP_KERNEL);
+ 		if (!swidget)
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index aad205df93b263..d0efb3dd8675e4 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -2153,15 +2153,15 @@ static int dell_dock_mixer_init(struct usb_mixer_interface *mixer)
+ #define SND_RME_CLK_FREQMUL_SHIFT		18
+ #define SND_RME_CLK_FREQMUL_MASK		0x7
+ #define SND_RME_CLK_SYSTEM(x) \
+-	((x >> SND_RME_CLK_SYSTEM_SHIFT) & SND_RME_CLK_SYSTEM_MASK)
++	(((x) >> SND_RME_CLK_SYSTEM_SHIFT) & SND_RME_CLK_SYSTEM_MASK)
+ #define SND_RME_CLK_AES(x) \
+-	((x >> SND_RME_CLK_AES_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
++	(((x) >> SND_RME_CLK_AES_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
+ #define SND_RME_CLK_SPDIF(x) \
+-	((x >> SND_RME_CLK_SPDIF_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
++	(((x) >> SND_RME_CLK_SPDIF_SHIFT) & SND_RME_CLK_AES_SPDIF_MASK)
+ #define SND_RME_CLK_SYNC(x) \
+-	((x >> SND_RME_CLK_SYNC_SHIFT) & SND_RME_CLK_SYNC_MASK)
++	(((x) >> SND_RME_CLK_SYNC_SHIFT) & SND_RME_CLK_SYNC_MASK)
+ #define SND_RME_CLK_FREQMUL(x) \
+-	((x >> SND_RME_CLK_FREQMUL_SHIFT) & SND_RME_CLK_FREQMUL_MASK)
++	(((x) >> SND_RME_CLK_FREQMUL_SHIFT) & SND_RME_CLK_FREQMUL_MASK)
+ #define SND_RME_CLK_AES_LOCK			0x1
+ #define SND_RME_CLK_AES_SYNC			0x4
+ #define SND_RME_CLK_SPDIF_LOCK			0x2
+@@ -2170,9 +2170,9 @@ static int dell_dock_mixer_init(struct usb_mixer_interface *mixer)
+ #define SND_RME_SPDIF_FORMAT_SHIFT		5
+ #define SND_RME_BINARY_MASK			0x1
+ #define SND_RME_SPDIF_IF(x) \
+-	((x >> SND_RME_SPDIF_IF_SHIFT) & SND_RME_BINARY_MASK)
++	(((x) >> SND_RME_SPDIF_IF_SHIFT) & SND_RME_BINARY_MASK)
+ #define SND_RME_SPDIF_FORMAT(x) \
+-	((x >> SND_RME_SPDIF_FORMAT_SHIFT) & SND_RME_BINARY_MASK)
++	(((x) >> SND_RME_SPDIF_FORMAT_SHIFT) & SND_RME_BINARY_MASK)
+ 
+ static const u32 snd_rme_rate_table[] = {
+ 	32000, 44100, 48000, 50000,
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index aa91d63749f2ca..1cb52373e70f64 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -341,20 +341,28 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+ 
+ 	len = le16_to_cpu(cluster->wLength);
+ 	c = 0;
+-	p += sizeof(struct uac3_cluster_header_descriptor);
++	p += sizeof(*cluster);
++	len -= sizeof(*cluster);
+ 
+-	while (((p - (void *)cluster) < len) && (c < channels)) {
++	while (len > 0 && (c < channels)) {
+ 		struct uac3_cluster_segment_descriptor *cs_desc = p;
+ 		u16 cs_len;
+ 		u8 cs_type;
+ 
++		if (len < sizeof(*p))
++			break;
+ 		cs_len = le16_to_cpu(cs_desc->wLength);
++		if (len < cs_len)
++			break;
+ 		cs_type = cs_desc->bSegmentType;
+ 
+ 		if (cs_type == UAC3_CHANNEL_INFORMATION) {
+ 			struct uac3_cluster_information_segment_descriptor *is = p;
+ 			unsigned char map;
+ 
++			if (cs_len < sizeof(*is))
++				break;
++
+ 			/*
+ 			 * TODO: this conversion is not complete, update it
+ 			 * after adding UAC3 values to asound.h
+@@ -456,6 +464,7 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+ 			chmap->map[c++] = map;
+ 		}
+ 		p += cs_len;
++		len -= cs_len;
+ 	}
+ 
+ 	if (channels < c)
+@@ -880,7 +889,7 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ 	u64 badd_formats = 0;
+ 	unsigned int num_channels;
+ 	struct audioformat *fp;
+-	u16 cluster_id, wLength;
++	u16 cluster_id, wLength, cluster_wLength;
+ 	int clock = 0;
+ 	int err;
+ 
+@@ -1010,6 +1019,16 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ 		return ERR_PTR(-EIO);
+ 	}
+ 
++	cluster_wLength = le16_to_cpu(cluster->wLength);
++	if (cluster_wLength < sizeof(*cluster) ||
++	    cluster_wLength > wLength) {
++		dev_err(&dev->dev,
++			"%u:%d : invalid Cluster Descriptor size\n",
++			iface_no, altno);
++		kfree(cluster);
++		return ERR_PTR(-EIO);
++	}
++
+ 	num_channels = cluster->bNrChannels;
+ 	chmap = convert_chmap_v3(cluster);
+ 	kfree(cluster);
+diff --git a/sound/usb/validate.c b/sound/usb/validate.c
+index 6fe206f6e91105..4f4e8e87a14cd0 100644
+--- a/sound/usb/validate.c
++++ b/sound/usb/validate.c
+@@ -221,6 +221,17 @@ static bool validate_uac3_feature_unit(const void *p,
+ 	return d->bLength >= sizeof(*d) + 4 + 2;
+ }
+ 
++static bool validate_uac3_power_domain_unit(const void *p,
++					    const struct usb_desc_validator *v)
++{
++	const struct uac3_power_domain_descriptor *d = p;
++
++	if (d->bLength < sizeof(*d))
++		return false;
++	/* baEntities[] + wPDomainDescrStr */
++	return d->bLength >= sizeof(*d) + d->bNrEntities + 2;
++}
++
+ static bool validate_midi_out_jack(const void *p,
+ 				   const struct usb_desc_validator *v)
+ {
+@@ -285,6 +296,7 @@ static const struct usb_desc_validator audio_validators[] = {
+ 	      struct uac3_clock_multiplier_descriptor),
+ 	/* UAC_VERSION_3, UAC3_SAMPLE_RATE_CONVERTER: not implemented yet */
+ 	/* UAC_VERSION_3, UAC3_CONNECTORS: not implemented yet */
++	FUNC(UAC_VERSION_3, UAC3_POWER_DOMAIN, validate_uac3_power_domain_unit),
+ 	{ } /* terminator */
+ };
+ 
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index cd5963cb605873..2b7f2bd3a7dbc7 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -534,9 +534,9 @@ int main(int argc, char **argv)
+ 		usage();
+ 
+ 	if (version_requested)
+-		return do_version(argc, argv);
+-
+-	ret = cmd_select(commands, argc, argv, do_help);
++		ret = do_version(argc, argv);
++	else
++		ret = cmd_select(commands, argc, argv, do_help);
+ 
+ 	if (json_output)
+ 		jsonw_destroy(&json_wtr);
+diff --git a/tools/include/nolibc/std.h b/tools/include/nolibc/std.h
+index adda7333d12e7d..ba950f0e733843 100644
+--- a/tools/include/nolibc/std.h
++++ b/tools/include/nolibc/std.h
+@@ -16,6 +16,8 @@
+ #include "stdint.h"
+ #include "stddef.h"
+ 
++#include <linux/types.h>
++
+ /* those are commonly provided by sys/types.h */
+ typedef unsigned int          dev_t;
+ typedef unsigned long         ino_t;
+@@ -27,6 +29,6 @@ typedef unsigned long       nlink_t;
+ typedef   signed long         off_t;
+ typedef   signed long     blksize_t;
+ typedef   signed long      blkcnt_t;
+-typedef   signed long        time_t;
++typedef __kernel_old_time_t  time_t;
+ 
+ #endif /* _NOLIBC_STD_H */
+diff --git a/tools/include/nolibc/types.h b/tools/include/nolibc/types.h
+index 30904be544ed01..16c6e9ec9451f4 100644
+--- a/tools/include/nolibc/types.h
++++ b/tools/include/nolibc/types.h
+@@ -128,7 +128,7 @@ typedef struct {
+ 		int __fd = (fd);					\
+ 		if (__fd >= 0)						\
+ 			__set->fds[__fd / FD_SETIDXMASK] &=		\
+-				~(1U << (__fd & FX_SETBITMASK));	\
++				~(1U << (__fd & FD_SETBITMASK));	\
+ 	} while (0)
+ 
+ #define FD_SET(fd, set) do {						\
+@@ -145,7 +145,7 @@ typedef struct {
+ 		int __r = 0;						\
+ 		if (__fd >= 0)						\
+ 			__r = !!(__set->fds[__fd / FD_SETIDXMASK] &	\
+-1U << (__fd & FD_SET_BITMASK));						\
++1U << (__fd & FD_SETBITMASK));						\
+ 		__r;							\
+ 	})
+ 
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index d41ee26b944359..8fe427960eee43 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4582,6 +4582,11 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
+ 
+ 	/* arena data relocation */
+ 	if (shdr_idx == obj->efile.arena_data_shndx) {
++		if (obj->arena_map_idx < 0) {
++			pr_warn("prog '%s': bad arena data relocation at insn %u, no arena maps defined\n",
++				prog->name, insn_idx);
++			return -LIBBPF_ERRNO__RELOC;
++		}
+ 		reloc_desc->type = RELO_DATA;
+ 		reloc_desc->insn_idx = insn_idx;
+ 		reloc_desc->map_idx = obj->arena_map_idx;
+@@ -9216,7 +9221,7 @@ int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts)
+ 		return libbpf_err(-EFAULT);
+ 	if (!OPTS_VALID(opts, gen_loader_opts))
+ 		return libbpf_err(-EINVAL);
+-	gen = calloc(sizeof(*gen), 1);
++	gen = calloc(1, sizeof(*gen));
+ 	if (!gen)
+ 		return libbpf_err(-ENOMEM);
+ 	gen->opts = opts;
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index 73b6b10cbdd291..5ae02c3d5b64b7 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -240,9 +240,9 @@ static int mperf_stop(void)
+ 	int cpu;
+ 
+ 	for (cpu = 0; cpu < cpu_count; cpu++) {
+-		mperf_measure_stats(cpu);
+-		mperf_get_tsc(&tsc_at_measure_end[cpu]);
+ 		clock_gettime(CLOCK_REALTIME, &time_end[cpu]);
++		mperf_get_tsc(&tsc_at_measure_end[cpu]);
++		mperf_measure_stats(cpu);
+ 	}
+ 
+ 	return 0;
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 426eabc10d765f..5c747131f1555e 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -67,6 +67,7 @@
+ #include <stdbool.h>
+ #include <assert.h>
+ #include <linux/kernel.h>
++#include <limits.h>
+ 
+ #define UNUSED(x) (void)(x)
+ 
+@@ -6572,8 +6573,16 @@ int check_for_cap_sys_rawio(void)
+ 	int ret = 0;
+ 
+ 	caps = cap_get_proc();
+-	if (caps == NULL)
++	if (caps == NULL) {
++		/*
++		 * CONFIG_MULTIUSER=n kernels have no cap_get_proc()
++		 * Allow them to continue and attempt to access MSRs
++		 */
++		if (errno == ENOSYS)
++			return 0;
++
+ 		return 1;
++	}
+ 
+ 	if (cap_get_flag(caps, CAP_SYS_RAWIO, CAP_EFFECTIVE, &cap_flag_value)) {
+ 		ret = 1;
+@@ -6740,7 +6749,8 @@ static void probe_intel_uncore_frequency_legacy(void)
+ 			sprintf(path_base, "/sys/devices/system/cpu/intel_uncore_frequency/package_%02d_die_%02d", i,
+ 				j);
+ 
+-			if (access(path_base, R_OK))
++			sprintf(path, "%s/current_freq_khz", path_base);
++			if (access(path, R_OK))
+ 				continue;
+ 
+ 			BIC_PRESENT(BIC_UNCORE_MHZ);
+diff --git a/tools/scripts/Makefile.include b/tools/scripts/Makefile.include
+index 5158250988cea8..ded48263dd5e05 100644
+--- a/tools/scripts/Makefile.include
++++ b/tools/scripts/Makefile.include
+@@ -101,7 +101,9 @@ else ifneq ($(CROSS_COMPILE),)
+ # Allow userspace to override CLANG_CROSS_FLAGS to specify their own
+ # sysroots and flags or to avoid the GCC call in pure Clang builds.
+ ifeq ($(CLANG_CROSS_FLAGS),)
+-CLANG_CROSS_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%))
++CLANG_TARGET := $(notdir $(CROSS_COMPILE:%-=%))
++CLANG_TARGET := $(subst s390-linux,s390x-linux,$(CLANG_TARGET))
++CLANG_CROSS_FLAGS := --target=$(CLANG_TARGET)
+ GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)gcc 2>/dev/null))
+ ifneq ($(GCC_TOOLCHAIN_DIR),)
+ CLANG_CROSS_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE))
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index a5f7fdd0c1fbbb..e1d31e2aa948ff 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -1371,7 +1371,10 @@ sub __eval_option {
+ 	# If a variable contains itself, use the default var
+ 	if (($var eq $name) && defined($opt{$var})) {
+ 	    $o = $opt{$var};
+-	    $retval = "$retval$o";
++	    # Only append if the default doesn't contain itself
++	    if ($o !~ m/\$\{$var\}/) {
++		$retval = "$retval$o";
++	    }
+ 	} elsif (defined($opt{$o})) {
+ 	    $o = $opt{$o};
+ 	    $retval = "$retval$o";
+diff --git a/tools/testing/selftests/arm64/fp/sve-ptrace.c b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+index c499d5789dd53f..16320aeaff857b 100644
+--- a/tools/testing/selftests/arm64/fp/sve-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/sve-ptrace.c
+@@ -170,7 +170,7 @@ static void ptrace_set_get_inherit(pid_t child, const struct vec_type *type)
+ 	memset(&sve, 0, sizeof(sve));
+ 	sve.size = sizeof(sve);
+ 	sve.vl = sve_vl_from_vq(SVE_VQ_MIN);
+-	sve.flags = SVE_PT_VL_INHERIT;
++	sve.flags = SVE_PT_VL_INHERIT | SVE_PT_REGS_SVE;
+ 	ret = set_sve(child, type, &sve);
+ 	if (ret != 0) {
+ 		ksft_test_result_fail("Failed to set %s SVE_PT_VL_INHERIT\n",
+@@ -235,6 +235,7 @@ static void ptrace_set_get_vl(pid_t child, const struct vec_type *type,
+ 	/* Set the VL by doing a set with no register payload */
+ 	memset(&sve, 0, sizeof(sve));
+ 	sve.size = sizeof(sve);
++	sve.flags = SVE_PT_REGS_SVE;
+ 	sve.vl = vl;
+ 	ret = set_sve(child, type, &sve);
+ 	if (ret != 0) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+index da430df45aa497..d1e4cb28a72c6b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+@@ -97,7 +97,7 @@ static void ringbuf_write_subtest(void)
+ 	if (!ASSERT_OK_PTR(skel, "skel_open"))
+ 		return;
+ 
+-	skel->maps.ringbuf.max_entries = 0x4000;
++	skel->maps.ringbuf.max_entries = 0x40000;
+ 
+ 	err = test_ringbuf_write_lskel__load(skel);
+ 	if (!ASSERT_OK(err, "skel_load"))
+@@ -108,7 +108,7 @@ static void ringbuf_write_subtest(void)
+ 	mmap_ptr = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, rb_fd, 0);
+ 	if (!ASSERT_OK_PTR(mmap_ptr, "rw_cons_pos"))
+ 		goto cleanup;
+-	*mmap_ptr = 0x3000;
++	*mmap_ptr = 0x30000;
+ 	ASSERT_OK(munmap(mmap_ptr, page_size), "unmap_rw");
+ 
+ 	skel->bss->pid = getpid();
+diff --git a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+index d424e7ecbd12d0..9fd3ae98732102 100644
+--- a/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/user_ringbuf.c
+@@ -21,8 +21,7 @@
+ #include "../progs/test_user_ringbuf.h"
+ 
+ static const long c_sample_size = sizeof(struct sample) + BPF_RINGBUF_HDR_SZ;
+-static const long c_ringbuf_size = 1 << 12; /* 1 small page */
+-static const long c_max_entries = c_ringbuf_size / c_sample_size;
++static long c_ringbuf_size, c_max_entries;
+ 
+ static void drain_current_samples(void)
+ {
+@@ -424,7 +423,9 @@ static void test_user_ringbuf_loop(void)
+ 	uint32_t remaining_samples = total_samples;
+ 	int err;
+ 
+-	BUILD_BUG_ON(total_samples <= c_max_entries);
++	if (!ASSERT_LT(c_max_entries, total_samples, "compare_c_max_entries"))
++		return;
++
+ 	err = load_skel_create_user_ringbuf(&skel, &ringbuf);
+ 	if (err)
+ 		return;
+@@ -686,6 +687,9 @@ void test_user_ringbuf(void)
+ {
+ 	int i;
+ 
++	c_ringbuf_size = getpagesize(); /* 1 page */
++	c_max_entries = c_ringbuf_size / c_sample_size;
++
+ 	for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
+ 		if (!test__start_subtest(success_tests[i].test_name))
+ 			continue;
+diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_write.c b/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
+index 350513c0e4c985..f063a0013f8506 100644
+--- a/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
++++ b/tools/testing/selftests/bpf/progs/test_ringbuf_write.c
+@@ -26,11 +26,11 @@ int test_ringbuf_write(void *ctx)
+ 	if (cur_pid != pid)
+ 		return 0;
+ 
+-	sample1 = bpf_ringbuf_reserve(&ringbuf, 0x3000, 0);
++	sample1 = bpf_ringbuf_reserve(&ringbuf, 0x30000, 0);
+ 	if (!sample1)
+ 		return 0;
+ 	/* first one can pass */
+-	sample2 = bpf_ringbuf_reserve(&ringbuf, 0x3000, 0);
++	sample2 = bpf_ringbuf_reserve(&ringbuf, 0x30000, 0);
+ 	if (!sample2) {
+ 		bpf_ringbuf_discard(sample1, 0);
+ 		__sync_fetch_and_add(&discarded, 1);
+diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv.c b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
+index a4a5e207160404..28200f068ce53c 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_unpriv.c
++++ b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
+@@ -619,7 +619,7 @@ __naked void pass_pointer_to_tail_call(void)
+ 
+ SEC("socket")
+ __description("unpriv: cmp map pointer with zero")
+-__success __failure_unpriv __msg_unpriv("R1 pointer comparison")
++__success __success_unpriv
+ __retval(0)
+ __naked void cmp_map_pointer_with_zero(void)
+ {
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+index 4b994b6df5ac30..ed81eaf2afd6d9 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
+@@ -29,7 +29,7 @@ ftrace_filter_check 'schedule*' '^schedule.*$'
+ ftrace_filter_check '*pin*lock' '.*pin.*lock$'
+ 
+ # filter by start*mid*
+-ftrace_filter_check 'mutex*try*' '^mutex.*try.*'
++ftrace_filter_check 'mutex*unl*' '^mutex.*unl.*'
+ 
+ # Advanced full-glob matching feature is recently supported.
+ # Skip the tests if we are sure the kernel does not support it.
+diff --git a/tools/testing/selftests/futex/include/futextest.h b/tools/testing/selftests/futex/include/futextest.h
+index ddbcfc9b7bac4a..7a5fd1d5355e7e 100644
+--- a/tools/testing/selftests/futex/include/futextest.h
++++ b/tools/testing/selftests/futex/include/futextest.h
+@@ -47,6 +47,17 @@ typedef volatile u_int32_t futex_t;
+ 					 FUTEX_PRIVATE_FLAG)
+ #endif
+ 
++/*
++ * SYS_futex is expected from system C library, in glibc some 32-bit
++ * architectures (e.g. RV32) are using 64-bit time_t, therefore it doesn't have
++ * SYS_futex defined but just SYS_futex_time64. Define SYS_futex as
++ * SYS_futex_time64 in this situation to ensure the compilation and the
++ * compatibility.
++ */
++#if !defined(SYS_futex) && defined(SYS_futex_time64)
++#define SYS_futex SYS_futex_time64
++#endif
++
+ /**
+  * futex() - SYS_futex syscall wrapper
+  * @uaddr:	address of first futex
+diff --git a/tools/testing/selftests/kexec/Makefile b/tools/testing/selftests/kexec/Makefile
+index e3000ccb9a5d9d..874cfdd3b75b3c 100644
+--- a/tools/testing/selftests/kexec/Makefile
++++ b/tools/testing/selftests/kexec/Makefile
+@@ -12,7 +12,7 @@ include ../../../scripts/Makefile.arch
+ 
+ ifeq ($(IS_64_BIT)$(ARCH_PROCESSED),1x86)
+ TEST_PROGS += test_kexec_jump.sh
+-test_kexec_jump.sh: $(OUTPUT)/test_kexec_jump
++TEST_GEN_PROGS := test_kexec_jump
+ endif
+ 
+ include ../lib.mk
+diff --git a/tools/testing/selftests/net/netfilter/config b/tools/testing/selftests/net/netfilter/config
+index 363646f4fefec5..142eb2b036437b 100644
+--- a/tools/testing/selftests/net/netfilter/config
++++ b/tools/testing/selftests/net/netfilter/config
+@@ -92,4 +92,4 @@ CONFIG_XFRM_STATISTICS=y
+ CONFIG_NET_PKTGEN=m
+ CONFIG_TUN=m
+ CONFIG_INET_DIAG=m
+-CONFIG_SCTP_DIAG=m
++CONFIG_INET_SCTP_DIAG=m
+diff --git a/tools/testing/selftests/vDSO/vdso_test_getrandom.c b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
+index 95057f7567db22..ff8d5675da2b0e 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_getrandom.c
++++ b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
+@@ -242,6 +242,7 @@ static void kselftest(void)
+ 	pid_t child;
+ 
+ 	ksft_print_header();
++	vgetrandom_init();
+ 	ksft_set_plan(2);
+ 
+ 	for (size_t i = 0; i < 1000; ++i) {
+@@ -295,8 +296,6 @@ static void usage(const char *argv0)
+ 
+ int main(int argc, char *argv[])
+ {
+-	vgetrandom_init();
+-
+ 	if (argc == 1) {
+ 		kselftest();
+ 		return 0;
+@@ -306,6 +305,9 @@ int main(int argc, char *argv[])
+ 		usage(argv[0]);
+ 		return 1;
+ 	}
++
++	vgetrandom_init();
++
+ 	if (!strcmp(argv[1], "bench-single"))
+ 		bench_single();
+ 	else if (!strcmp(argv[1], "bench-multi"))
+diff --git a/tools/verification/dot2/dot2k.py b/tools/verification/dot2/dot2k.py
+index 745d35a4a37918..dd4b5528a4f23e 100644
+--- a/tools/verification/dot2/dot2k.py
++++ b/tools/verification/dot2/dot2k.py
+@@ -35,6 +35,7 @@ class dot2k(Dot2c):
+             self.states = []
+             self.main_c = self.__read_file(self.monitor_templates_dir + "main_container.c")
+             self.main_h = self.__read_file(self.monitor_templates_dir + "main_container.h")
++            self.kconfig = self.__read_file(self.monitor_templates_dir + "Kconfig_container")
+         else:
+             super().__init__(file_path, extra_params.get("model_name"))
+ 
+@@ -44,7 +45,7 @@ class dot2k(Dot2c):
+             self.monitor_type = MonitorType
+             self.main_c = self.__read_file(self.monitor_templates_dir + "main.c")
+             self.trace_h = self.__read_file(self.monitor_templates_dir + "trace.h")
+-        self.kconfig = self.__read_file(self.monitor_templates_dir + "Kconfig")
++            self.kconfig = self.__read_file(self.monitor_templates_dir + "Kconfig")
+         self.enum_suffix = "_%s" % self.name
+         self.description = extra_params.get("description", self.name) or "auto-generated"
+         self.auto_patch = extra_params.get("auto_patch")
+diff --git a/tools/verification/dot2/dot2k_templates/Kconfig_container b/tools/verification/dot2/dot2k_templates/Kconfig_container
+new file mode 100644
+index 00000000000000..a606111949c27e
+--- /dev/null
++++ b/tools/verification/dot2/dot2k_templates/Kconfig_container
+@@ -0,0 +1,5 @@
++config RV_MON_%%MODEL_NAME_UP%%
++	depends on RV
++	bool "%%MODEL_NAME%% monitor"
++	help
++	  %%DESCRIPTION%%
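
A side note on the sound/usb/mixer_quirks.c hunk above: the RME
field-extraction macros gain parentheses around their argument. A minimal
stand-alone sketch of why that matters follows; the macro names and bit
positions here are simplified stand-ins, not the driver's real register
layout.

    #include <stdio.h>

    #define FIELD_SHIFT 16
    #define FIELD_MASK  0x7

    /* Before the fix: the macro argument is substituted unparenthesized. */
    #define FIELD_BAD(x)  ((x >> FIELD_SHIFT) & FIELD_MASK)
    /* After the fix: (x) guards against operator-precedence surprises. */
    #define FIELD_GOOD(x) (((x) >> FIELD_SHIFT) & FIELD_MASK)

    int main(void)
    {
            unsigned int status = 0x5u << FIELD_SHIFT;  /* field holds 5 */
            unsigned int extra  = 0x1u;

            /*
             * With a compound argument, FIELD_BAD expands to
             * ((status | extra >> 16) & 7); '>>' binds tighter than '|',
             * so only 'extra' is shifted and the field reads back as 0.
             */
            printf("bad:  %u\n", FIELD_BAD(status | extra));   /* prints 0 */
            printf("good: %u\n", FIELD_GOOD(status | extra));  /* prints 5 */
            return 0;
    }

The same reasoning applies to all seven macros touched by that hunk.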


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-21  4:31 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-21  4:31 UTC (permalink / raw
  To: gentoo-commits

commit:     c38997df8c108e6158132f7c5d056fce255b9d02
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 02:22:44 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 02:22:44 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c38997df

Remove 1900_btrfs_fix_log_tree_replay_failure.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                  |   4 -
 1900_btrfs_fix_log_tree_replay_failure.patch | 143 ---------------------------
 2 files changed, 147 deletions(-)

diff --git a/0000_README b/0000_README
index 8d6c88e6..60e64ca1 100644
--- a/0000_README
+++ b/0000_README
@@ -63,10 +63,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1900_btrfs_fix_log_tree_replay_failure.patch
-From:   https://gitlab.com/cki-project/kernel-ark/-/commit/e6c71b29fab08fd0ab55d2f83c4539d68d543895
-Desc:   btrfs: fix log tree replay failure due to file with 0 links and extents
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1900_btrfs_fix_log_tree_replay_failure.patch b/1900_btrfs_fix_log_tree_replay_failure.patch
deleted file mode 100644
index 335bb7f2..00000000
--- a/1900_btrfs_fix_log_tree_replay_failure.patch
+++ /dev/null
@@ -1,143 +0,0 @@
-From e6c71b29fab08fd0ab55d2f83c4539d68d543895 Mon Sep 17 00:00:00 2001
-From: Filipe Manana <fdmanana@suse.com>
-Date: Wed, 30 Jul 2025 19:18:37 +0100
-Subject: [PATCH] btrfs: fix log tree replay failure due to file with 0 links
- and extents
-
-If we log a new inode (not persisted in a past transaction) that has 0
-links and extents, then log another inode with a higher inode number, we
-end up failing to replay the log tree with -EINVAL. The steps for
-this are:
-
-1) create new file A
-2) write some data to file A
-3) open an fd on file A
-4) unlink file A
-5) fsync file A using the previously open fd
-6) create file B (has higher inode number than file A)
-7) fsync file B
-8) power fail before current transaction commits
-
-Now when attempting to mount the fs, the log replay will fail with
--ENOENT at replay_one_extent() when attempting to replay the first
-extent of file A. The failure comes when trying to open the inode for
-file A in the subvolume tree, since it doesn't exist.
-
-Before commit 5f61b961599a ("btrfs: fix inode lookup error handling
-during log replay"), the returned error was -EIO instead of -ENOENT,
-since we converted any errors when attempting to read an inode during
-log replay to -EIO.
-
-The reason for this is that the log replay procedure fails to ignore
-the current inode when we are at the stage LOG_WALK_REPLAY_ALL, the
-current inode has 0 links, and the last inode we processed in the
-previous stage has a non-zero link count. In other words, the issue is
-that at replay_one_buffer() we only update wc->ignore_cur_inode if the
-current replay stage is LOG_WALK_REPLAY_INODES.
-
-Fix this by updating wc->ignore_cur_inode whenever we find an inode item
-regardless of the current replay stage. This is a simple solution and easy
-to backport, but later we can do other alternatives like avoid logging
-extents or inode items other than the inode item for inodes with a link
-count of 0.
-
-The problem with the wc->ignore_cur_inode logic has been around since
-commit f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync
-of a tmpfile") but it only became frequent to hit since the more recent
-commit 5e85262e542d ("btrfs: fix fsync of files with no hard links not
-persisting deletion"), because we stopped skipping inodes with a link
-count of 0 when logging, while before the problem would only be triggered
-if trying to replay a log tree created with an older kernel which has a
-logged inode with 0 links.
-
-A test case for fstests will be submitted soon.
-
-Reported-by: Peter Jung <ptr1337@cachyos.org>
-Link: https://lore.kernel.org/linux-btrfs/fce139db-4458-4788-bb97-c29acf6cb1df@cachyos.org/
-Reported-by: burneddi <burneddi@protonmail.com>
-Link: https://lore.kernel.org/linux-btrfs/lh4W-Lwc0Mbk-QvBhhQyZxf6VbM3E8VtIvU3fPIQgweP_Q1n7wtlUZQc33sYlCKYd-o6rryJQfhHaNAOWWRKxpAXhM8NZPojzsJPyHMf2qY=@protonmail.com/#t
-Reported-by: Russell Haley <yumpusamongus@gmail.com>
-Link: https://lore.kernel.org/linux-btrfs/598ecc75-eb80-41b3-83c2-f2317fbb9864@gmail.com/
-Fixes: f2d72f42d5fa ("Btrfs: fix warning when replaying log after fsync of a tmpfile")
-Reviewed-by: Boris Burkov <boris@bur.io>
-Signed-off-by: Filipe Manana <fdmanana@suse.com>
----
- fs/btrfs/tree-log.c | 45 +++++++++++++++++++++++++++++----------------
- 1 file changed, 29 insertions(+), 16 deletions(-)
-
-diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
-index e05140ce95be9..2fb9e7bfc9077 100644
---- a/fs/btrfs/tree-log.c
-+++ b/fs/btrfs/tree-log.c
-@@ -321,8 +321,7 @@ struct walk_control {
- 
- 	/*
- 	 * Ignore any items from the inode currently being processed. Needs
--	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
--	 * the LOG_WALK_REPLAY_INODES stage.
-+	 * to be set every time we find a BTRFS_INODE_ITEM_KEY.
- 	 */
- 	bool ignore_cur_inode;
- 
-@@ -2410,23 +2409,30 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
- 
- 	nritems = btrfs_header_nritems(eb);
- 	for (i = 0; i < nritems; i++) {
--		btrfs_item_key_to_cpu(eb, &key, i);
-+		struct btrfs_inode_item *inode_item;
- 
--		/* inode keys are done during the first stage */
--		if (key.type == BTRFS_INODE_ITEM_KEY &&
--		    wc->stage == LOG_WALK_REPLAY_INODES) {
--			struct btrfs_inode_item *inode_item;
--			u32 mode;
-+		btrfs_item_key_to_cpu(eb, &key, i);
- 
--			inode_item = btrfs_item_ptr(eb, i,
--					    struct btrfs_inode_item);
-+		if (key.type == BTRFS_INODE_ITEM_KEY) {
-+			inode_item = btrfs_item_ptr(eb, i, struct btrfs_inode_item);
- 			/*
--			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
--			 * and never got linked before the fsync, skip it, as
--			 * replaying it is pointless since it would be deleted
--			 * later. We skip logging tmpfiles, but it's always
--			 * possible we are replaying a log created with a kernel
--			 * that used to log tmpfiles.
-+			 * An inode with no links is either:
-+			 *
-+			 * 1) A tmpfile (O_TMPFILE) that got fsync'ed and never
-+			 *    got linked before the fsync, skip it, as replaying
-+			 *    it is pointless since it would be deleted later.
-+			 *    We skip logging tmpfiles, but it's always possible
-+			 *    we are replaying a log created with a kernel that
-+			 *    used to log tmpfiles;
-+			 *
-+			 * 2) A non-tmpfile which got its last link deleted
-+			 *    while holding an open fd on it and later got
-+			 *    fsynced through that fd. We always log the
-+			 *    parent inodes when inode->last_unlink_trans is
-+			 *    set to the current transaction, so ignore all the
-+			 *    inode items for this inode. We will delete the
-+			 *    inode when processing the parent directory with
-+			 *    replay_dir_deletes().
- 			 */
- 			if (btrfs_inode_nlink(eb, inode_item) == 0) {
- 				wc->ignore_cur_inode = true;
-@@ -2434,6 +2440,13 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
- 			} else {
- 				wc->ignore_cur_inode = false;
- 			}
-+		}
-+
-+		/* Inode keys are done during the first stage. */
-+		if (key.type == BTRFS_INODE_ITEM_KEY &&
-+		    wc->stage == LOG_WALK_REPLAY_INODES) {
-+			 u32 mode;
-+
- 			ret = replay_xattr_deletes(wc->trans, root, log,
- 						   path, key.objectid);
- 			if (ret)
--- 
-GitLab
-
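
As background for the scenario described in the commit message of the
patch removed above, a minimal user-space sketch of the eight
reproduction steps is shown below. It is an illustration only: the mount
point and file names are placeholders, error handling is omitted, and
the power failure in step 8 is what a test harness would script with
something like dm-flakey.

    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    int main(void)
    {
            char buf[4096];
            int fd_a, fd_b;

            memset(buf, 0xaa, sizeof(buf));

            /* 1-2) create file A on the btrfs mount and write some data */
            fd_a = open("/mnt/A", O_CREAT | O_RDWR, 0644);
            write(fd_a, buf, sizeof(buf));

            /* 3-4) keep the fd open, then drop the last link (nlink -> 0) */
            unlink("/mnt/A");

            /* 5) fsync A through the still-open fd: an inode with 0 links
             *    and extents ends up in the log tree
             */
            fsync(fd_a);

            /* 6-7) create and fsync file B, which gets a higher inode number */
            fd_b = open("/mnt/B", O_CREAT | O_RDWR, 0644);
            fsync(fd_b);

            /* 8) power fail here, before the transaction commits; on the
             *    next mount, log replay processes A's items but cannot find
             *    the inode in the subvolume tree, and the mount fails
             */
            return 0;
    }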


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-21  4:31 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-21  4:31 UTC (permalink / raw
  To: gentoo-commits

commit:     ed27bc2128c0fec84ab4bbafd4c783edccecfb97
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 21 02:33:27 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 21 02:33:27 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ed27bc21

Remove 2910_kheaders_override_TAR.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                      |  4 --
 2910_kheaders_override_TAR.patch | 96 ----------------------------------------
 2 files changed, 100 deletions(-)

diff --git a/0000_README b/0000_README
index 60e64ca1..317b6950 100644
--- a/0000_README
+++ b/0000_README
@@ -71,10 +71,6 @@ Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically
 
-Patch:  2910_kheaders_override_TAR.patch
-From:   https://lore.kernel.org/
-Desc:   Add TAR make variable to override the tar executable
-
 Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL

diff --git a/2910_kheaders_override_TAR.patch b/2910_kheaders_override_TAR.patch
deleted file mode 100644
index e8511e1a..00000000
--- a/2910_kheaders_override_TAR.patch
+++ /dev/null
@@ -1,96 +0,0 @@
-Subject: [PATCH v3] kheaders: make it possible to override TAR
-Date: Sat, 19 Jul 2025 16:24:05 +0100
-Message-ID: <277557da458c5fa07eba7d785b4f527cc37a023f.1752938644.git.sam@gentoo.org>
-X-Mailer: git-send-email 2.50.1
-In-Reply-To: <20230412082743.350699-1-mgorny@gentoo.org>
-References: <20230412082743.350699-1-mgorny@gentoo.org>
-Precedence: bulk
-X-Mailing-List: linux-kernel@vger.kernel.org
-List-Id: <linux-kernel.vger.kernel.org>
-List-Subscribe: <mailto:linux-kernel+subscribe@vger.kernel.org>
-List-Unsubscribe: <mailto:linux-kernel+unsubscribe@vger.kernel.org>
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-From: Michał Górny <mgorny@gentoo.org>
-
-Commit 86cdd2fdc4e39c388d39c7ba2396d1a9dfd66226 ("kheaders: make headers
-archive reproducible") introduced a number of options specific to GNU
-tar to the `tar` invocation in `gen_kheaders.sh` script.  This causes
-the script to fail to work on systems where `tar` is not GNU tar.  This
-can occur e.g. on recent Gentoo Linux installations that support using
-bsdtar from libarchive instead.
-
-Add a `TAR` make variable to make it possible to override the tar
-executable used, e.g. by specifying:
-
-  make TAR=gtar
-
-Link: https://bugs.gentoo.org/884061
-Reported-by: Sam James <sam@gentoo.org>
-Tested-by: Sam James <sam@gentoo.org>
-Co-developed-by: Masahiro Yamada <masahiroy@kernel.org>
-Signed-off-by: Michał Górny <mgorny@gentoo.org>
-Signed-off-by: Sam James <sam@gentoo.org>
----
-v3: Rebase, cover more tar instances.
-
- Makefile               | 3 ++-
- kernel/gen_kheaders.sh | 6 +++---
- 2 files changed, 5 insertions(+), 4 deletions(-)
-
-diff --git a/Makefile b/Makefile
-index c09766beb7eff..22d6037d738fe 100644
---- a/Makefile
-+++ b/Makefile
-@@ -543,6 +543,7 @@ LZMA		= lzma
- LZ4		= lz4
- XZ		= xz
- ZSTD		= zstd
-+TAR		= tar
- 
- CHECKFLAGS     := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
- 		  -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF)
-@@ -622,7 +623,7 @@ export RUSTC RUSTDOC RUSTFMT RUSTC_OR_CLIPPY_QUIET RUSTC_OR_CLIPPY BINDGEN
- export HOSTRUSTC KBUILD_HOSTRUSTFLAGS
- export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL
- export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
--export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
-+export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD TAR
- export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS KBUILD_PROCMACROLDFLAGS LDFLAGS_MODULE
- export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
- 
-diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
-index c9e5dc068e854..bb609a9ed72b4 100755
---- a/kernel/gen_kheaders.sh
-+++ b/kernel/gen_kheaders.sh
-@@ -66,13 +66,13 @@ if [ "$building_out_of_srctree" ]; then
- 		cd $srctree
- 		for f in $dir_list
- 			do find "$f" -name "*.h";
--		done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
-+		done | ${TAR:-tar} -c -f - -T - | ${TAR:-tar} -xf - -C "${tmpdir}"
- 	)
- fi
- 
- for f in $dir_list;
- 	do find "$f" -name "*.h";
--done | tar -c -f - -T - | tar -xf - -C "${tmpdir}"
-+done | ${TAR:-tar} -c -f - -T - | ${TAR:-tar} -xf - -C "${tmpdir}"
- 
- # Always exclude include/generated/utsversion.h
- # Otherwise, the contents of the tarball may vary depending on the build steps.
-@@ -88,7 +88,7 @@ xargs -0 -P8 -n1 \
- rm -f "${tmpdir}.contents.txt"
- 
- # Create archive and try to normalize metadata for reproducibility.
--tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
-+${TAR:-tar} "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
-     --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
-     -I $XZ -cf $tarfile -C "${tmpdir}/" . > /dev/null
- 
--- 
-2.50.1
-
-
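
The removed patch leaned on the POSIX ${TAR:-tar} fallback so that an
unset variable still runs plain tar. As a quick shell-expansion refresher
(the values below are examples, not anything the kernel build sets):

    unset TAR
    echo "${TAR:-tar}"    # -> tar   (fallback when TAR is unset or empty)

    TAR=gtar              # what `make TAR=gtar` exports to gen_kheaders.sh
    echo "${TAR:-tar}"    # -> gtar  (the override wins)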


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-24 23:09 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-24 23:09 UTC (permalink / raw
  To: gentoo-commits

commit:     cc303b46d5283acf365e402868184e1906c3636d
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 24 23:09:24 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Aug 24 23:09:24 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cc303b46

Linux patch 6.16.3

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |   4 +
 1002_linux-6.16.3.patch | 792 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 796 insertions(+)

diff --git a/0000_README b/0000_README
index 317b6950..77885109 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-6.16.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.2
 
+Patch:  1002_linux-6.16.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.3
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1002_linux-6.16.3.patch b/1002_linux-6.16.3.patch
new file mode 100644
index 00000000..7c8ab11f
--- /dev/null
+++ b/1002_linux-6.16.3.patch
@@ -0,0 +1,792 @@
+diff --git a/Makefile b/Makefile
+index ed2967dd07d5e2..df121383064380 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index fe3366e98493b8..9ac0a7d4fa0cd7 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3064,9 +3064,9 @@ extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
+ extern void ext4_set_inode_flags(struct inode *, bool init);
+ extern int ext4_alloc_da_blocks(struct inode *inode);
+ extern void ext4_set_aops(struct inode *inode);
+-extern int ext4_writepage_trans_blocks(struct inode *);
+ extern int ext4_normal_submit_inode_data_buffers(struct jbd2_inode *jinode);
+ extern int ext4_chunk_trans_blocks(struct inode *, int nrblocks);
++extern int ext4_chunk_trans_extent(struct inode *inode, int nrblocks);
+ extern int ext4_meta_trans_blocks(struct inode *inode, int lblocks,
+ 				  int pextents);
+ extern int ext4_zero_partial_blocks(handle_t *handle, struct inode *inode,
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index b543a46fc80962..f0f1554586978a 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5171,7 +5171,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
+ 				credits = depth + 2;
+ 			}
+ 
+-			restart_credits = ext4_writepage_trans_blocks(inode);
++			restart_credits = ext4_chunk_trans_extent(inode, 0);
+ 			err = ext4_datasem_ensure_credits(handle, inode, credits,
+ 					restart_credits, 0);
+ 			if (err) {
+@@ -5431,7 +5431,7 @@ static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len)
+ 
+ 	truncate_pagecache(inode, start);
+ 
+-	credits = ext4_writepage_trans_blocks(inode);
++	credits = ext4_chunk_trans_extent(inode, 0);
+ 	handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits);
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+@@ -5527,7 +5527,7 @@ static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
+ 
+ 	truncate_pagecache(inode, start);
+ 
+-	credits = ext4_writepage_trans_blocks(inode);
++	credits = ext4_chunk_trans_extent(inode, 0);
+ 	handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits);
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 05313c8ffb9cc7..38874d62f88c44 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -570,7 +570,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ 		return 0;
+ 	}
+ 
+-	needed_blocks = ext4_writepage_trans_blocks(inode);
++	needed_blocks = ext4_chunk_trans_extent(inode, 1);
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret)
+@@ -1874,7 +1874,7 @@ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ 	};
+ 
+ 
+-	needed_blocks = ext4_writepage_trans_blocks(inode);
++	needed_blocks = ext4_chunk_trans_extent(inode, 1);
+ 	handle = ext4_journal_start(inode, EXT4_HT_INODE, needed_blocks);
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+@@ -1994,7 +1994,7 @@ int ext4_convert_inline_data(struct inode *inode)
+ 			return 0;
+ 	}
+ 
+-	needed_blocks = ext4_writepage_trans_blocks(inode);
++	needed_blocks = ext4_chunk_trans_extent(inode, 1);
+ 
+ 	iloc.bh = NULL;
+ 	error = ext4_get_inode_loc(inode, &iloc);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 0f316632b8dd65..3df0796a30104e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -877,6 +877,26 @@ static void ext4_update_bh_state(struct buffer_head *bh, unsigned long flags)
+ 	} while (unlikely(!try_cmpxchg(&bh->b_state, &old_state, new_state)));
+ }
+ 
++/*
++ * Make sure that the current journal transaction has enough credits to map
++ * one extent. Return -EAGAIN if it cannot extend the current running
++ * transaction.
++ */
++static inline int ext4_journal_ensure_extent_credits(handle_t *handle,
++						     struct inode *inode)
++{
++	int credits;
++	int ret;
++
++	/* Called from ext4_da_write_begin() which has no handle started? */
++	if (!handle)
++		return 0;
++
++	credits = ext4_chunk_trans_blocks(inode, 1);
++	ret = __ext4_journal_ensure_credits(handle, credits, credits, 0);
++	return ret <= 0 ? ret : -EAGAIN;
++}
++
+ static int _ext4_get_block(struct inode *inode, sector_t iblock,
+ 			   struct buffer_head *bh, int flags)
+ {
+@@ -1175,7 +1195,9 @@ int ext4_block_write_begin(handle_t *handle, struct folio *folio,
+ 			clear_buffer_new(bh);
+ 		if (!buffer_mapped(bh)) {
+ 			WARN_ON(bh->b_size != blocksize);
+-			err = get_block(inode, block, bh, 1);
++			err = ext4_journal_ensure_extent_credits(handle, inode);
++			if (!err)
++				err = get_block(inode, block, bh, 1);
+ 			if (err)
+ 				break;
+ 			if (buffer_new(bh)) {
+@@ -1274,7 +1296,8 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
+ 	 * Reserve one block more for addition to orphan list in case
+ 	 * we allocate blocks but write fails for some reason
+ 	 */
+-	needed_blocks = ext4_writepage_trans_blocks(inode) + 1;
++	needed_blocks = ext4_chunk_trans_extent(inode,
++			ext4_journal_blocks_per_folio(inode)) + 1;
+ 	index = pos >> PAGE_SHIFT;
+ 
+ 	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
+@@ -1374,8 +1397,9 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
+ 				ext4_orphan_del(NULL, inode);
+ 		}
+ 
+-		if (ret == -ENOSPC &&
+-		    ext4_should_retry_alloc(inode->i_sb, &retries))
++		if (ret == -EAGAIN ||
++		    (ret == -ENOSPC &&
++		     ext4_should_retry_alloc(inode->i_sb, &retries)))
+ 			goto retry_journal;
+ 		folio_put(folio);
+ 		return ret;
+@@ -1668,11 +1692,12 @@ struct mpage_da_data {
+ 	unsigned int can_map:1;	/* Can writepages call map blocks? */
+ 
+ 	/* These are internal state of ext4_do_writepages() */
+-	pgoff_t first_page;	/* The first page to write */
+-	pgoff_t next_page;	/* Current page to examine */
+-	pgoff_t last_page;	/* Last page to examine */
++	loff_t start_pos;	/* The start pos to write */
++	loff_t next_pos;	/* Current pos to examine */
++	loff_t end_pos;		/* Last pos to examine */
++
+ 	/*
+-	 * Extent to map - this can be after first_page because that can be
++	 * Extent to map - this can be after start_pos because that can be
+ 	 * fully mapped. We somewhat abuse m_flags to store whether the extent
+ 	 * is delalloc or unwritten.
+ 	 */
+@@ -1692,38 +1717,38 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
+ 	struct inode *inode = mpd->inode;
+ 	struct address_space *mapping = inode->i_mapping;
+ 
+-	/* This is necessary when next_page == 0. */
+-	if (mpd->first_page >= mpd->next_page)
++	/* This is necessary when next_pos == 0. */
++	if (mpd->start_pos >= mpd->next_pos)
+ 		return;
+ 
+ 	mpd->scanned_until_end = 0;
+-	index = mpd->first_page;
+-	end   = mpd->next_page - 1;
+ 	if (invalidate) {
+ 		ext4_lblk_t start, last;
+-		start = index << (PAGE_SHIFT - inode->i_blkbits);
+-		last = end << (PAGE_SHIFT - inode->i_blkbits);
++		start = EXT4_B_TO_LBLK(inode, mpd->start_pos);
++		last = mpd->next_pos >> inode->i_blkbits;
+ 
+ 		/*
+ 		 * avoid racing with extent status tree scans made by
+ 		 * ext4_insert_delayed_block()
+ 		 */
+ 		down_write(&EXT4_I(inode)->i_data_sem);
+-		ext4_es_remove_extent(inode, start, last - start + 1);
++		ext4_es_remove_extent(inode, start, last - start);
+ 		up_write(&EXT4_I(inode)->i_data_sem);
+ 	}
+ 
+ 	folio_batch_init(&fbatch);
+-	while (index <= end) {
+-		nr = filemap_get_folios(mapping, &index, end, &fbatch);
++	index = mpd->start_pos >> PAGE_SHIFT;
++	end = mpd->next_pos >> PAGE_SHIFT;
++	while (index < end) {
++		nr = filemap_get_folios(mapping, &index, end - 1, &fbatch);
+ 		if (nr == 0)
+ 			break;
+ 		for (i = 0; i < nr; i++) {
+ 			struct folio *folio = fbatch.folios[i];
+ 
+-			if (folio->index < mpd->first_page)
++			if (folio_pos(folio) < mpd->start_pos)
+ 				continue;
+-			if (folio_next_index(folio) - 1 > end)
++			if (folio_next_index(folio) > end)
+ 				continue;
+ 			BUG_ON(!folio_test_locked(folio));
+ 			BUG_ON(folio_test_writeback(folio));
+@@ -2025,7 +2050,8 @@ int ext4_da_get_block_prep(struct inode *inode, sector_t iblock,
+ 
+ static void mpage_folio_done(struct mpage_da_data *mpd, struct folio *folio)
+ {
+-	mpd->first_page += folio_nr_pages(folio);
++	mpd->start_pos += folio_size(folio);
++	mpd->wbc->nr_to_write -= folio_nr_pages(folio);
+ 	folio_unlock(folio);
+ }
+ 
+@@ -2035,7 +2061,7 @@ static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
+ 	loff_t size;
+ 	int err;
+ 
+-	BUG_ON(folio->index != mpd->first_page);
++	WARN_ON_ONCE(folio_pos(folio) != mpd->start_pos);
+ 	folio_clear_dirty_for_io(folio);
+ 	/*
+ 	 * We have to be very careful here!  Nothing protects writeback path
+@@ -2056,8 +2082,6 @@ static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
+ 	    !ext4_verity_in_progress(mpd->inode))
+ 		len = size & (len - 1);
+ 	err = ext4_bio_write_folio(&mpd->io_submit, folio, len);
+-	if (!err)
+-		mpd->wbc->nr_to_write -= folio_nr_pages(folio);
+ 
+ 	return err;
+ }
+@@ -2324,6 +2348,11 @@ static int mpage_map_one_extent(handle_t *handle, struct mpage_da_data *mpd)
+ 	int get_blocks_flags;
+ 	int err, dioread_nolock;
+ 
++	/* Make sure transaction has enough credits for this extent */
++	err = ext4_journal_ensure_extent_credits(handle, inode);
++	if (err < 0)
++		return err;
++
+ 	trace_ext4_da_write_pages_extent(inode, map);
+ 	/*
+ 	 * Call ext4_map_blocks() to allocate any delayed allocation blocks, or
+@@ -2362,6 +2391,47 @@ static int mpage_map_one_extent(handle_t *handle, struct mpage_da_data *mpd)
+ 	return 0;
+ }
+ 
++/*
++ * This is used to submit mapped buffers in a single folio that is not fully
++ * mapped for various reasons, such as insufficient space or journal credits.
++ */
++static int mpage_submit_partial_folio(struct mpage_da_data *mpd)
++{
++	struct inode *inode = mpd->inode;
++	struct folio *folio;
++	loff_t pos;
++	int ret;
++
++	folio = filemap_get_folio(inode->i_mapping,
++				  mpd->start_pos >> PAGE_SHIFT);
++	if (IS_ERR(folio))
++		return PTR_ERR(folio);
++	/*
++	 * The mapped position should be within the current processing folio
++	 * but must not be the folio start position.
++	 */
++	pos = ((loff_t)mpd->map.m_lblk) << inode->i_blkbits;
++	if (WARN_ON_ONCE((folio_pos(folio) == pos) ||
++			 !folio_contains(folio, pos >> PAGE_SHIFT)))
++		return -EINVAL;
++
++	ret = mpage_submit_folio(mpd, folio);
++	if (ret)
++		goto out;
++	/*
++	 * Update start_pos to prevent this folio from being released in
++	 * mpage_release_unused_pages(), it will be reset to the aligned folio
++	 * pos when this folio is written again in the next round. Additionally,
++	 * do not update wbc->nr_to_write here, as it will be updated once the
++	 * entire folio has finished processing.
++	 */
++	mpd->start_pos = pos;
++out:
++	folio_unlock(folio);
++	folio_put(folio);
++	return ret;
++}
++
+ /*
+  * mpage_map_and_submit_extent - map extent starting at mpd->lblk of length
+  *				 mpd->len and submit pages underlying it for IO
+@@ -2410,10 +2480,18 @@ static int mpage_map_and_submit_extent(handle_t *handle,
+ 			 * In the case of ENOSPC, if ext4_count_free_blocks()
+ 			 * is non-zero, a commit should free up blocks.
+ 			 */
+-			if ((err == -ENOMEM) ||
++			if ((err == -ENOMEM) || (err == -EAGAIN) ||
+ 			    (err == -ENOSPC && ext4_count_free_clusters(sb))) {
+-				if (progress)
++				/*
++				 * We may have already allocated extents for
++				 * some bhs inside the folio, issue the
++				 * corresponding data to prevent stale data.
++				 */
++				if (progress) {
++					if (mpage_submit_partial_folio(mpd))
++						goto invalidate_dirty_pages;
+ 					goto update_disksize;
++				}
+ 				return err;
+ 			}
+ 			ext4_msg(sb, KERN_CRIT,
+@@ -2447,7 +2525,7 @@ static int mpage_map_and_submit_extent(handle_t *handle,
+ 	 * Update on-disk size after IO is submitted.  Races with
+ 	 * truncate are avoided by checking i_size under i_data_sem.
+ 	 */
+-	disksize = ((loff_t)mpd->first_page) << PAGE_SHIFT;
++	disksize = mpd->start_pos;
+ 	if (disksize > READ_ONCE(EXT4_I(inode)->i_disksize)) {
+ 		int err2;
+ 		loff_t i_size;
+@@ -2471,21 +2549,6 @@ static int mpage_map_and_submit_extent(handle_t *handle,
+ 	return err;
+ }
+ 
+-/*
+- * Calculate the total number of credits to reserve for one writepages
+- * iteration. This is called from ext4_writepages(). We map an extent of
+- * up to MAX_WRITEPAGES_EXTENT_LEN blocks and then we go on and finish mapping
+- * the last partial page. So in total we can map MAX_WRITEPAGES_EXTENT_LEN +
+- * bpp - 1 blocks in bpp different extents.
+- */
+-static int ext4_da_writepages_trans_blocks(struct inode *inode)
+-{
+-	int bpp = ext4_journal_blocks_per_folio(inode);
+-
+-	return ext4_meta_trans_blocks(inode,
+-				MAX_WRITEPAGES_EXTENT_LEN + bpp - 1, bpp);
+-}
+-
+ static int ext4_journal_folio_buffers(handle_t *handle, struct folio *folio,
+ 				     size_t len)
+ {
+@@ -2550,8 +2613,8 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ 	struct address_space *mapping = mpd->inode->i_mapping;
+ 	struct folio_batch fbatch;
+ 	unsigned int nr_folios;
+-	pgoff_t index = mpd->first_page;
+-	pgoff_t end = mpd->last_page;
++	pgoff_t index = mpd->start_pos >> PAGE_SHIFT;
++	pgoff_t end = mpd->end_pos >> PAGE_SHIFT;
+ 	xa_mark_t tag;
+ 	int i, err = 0;
+ 	int blkbits = mpd->inode->i_blkbits;
+@@ -2566,7 +2629,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ 		tag = PAGECACHE_TAG_DIRTY;
+ 
+ 	mpd->map.m_len = 0;
+-	mpd->next_page = index;
++	mpd->next_pos = mpd->start_pos;
+ 	if (ext4_should_journal_data(mpd->inode)) {
+ 		handle = ext4_journal_start(mpd->inode, EXT4_HT_WRITE_PAGE,
+ 					    bpp);
+@@ -2597,7 +2660,8 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ 				goto out;
+ 
+ 			/* If we can't merge this page, we are done. */
+-			if (mpd->map.m_len > 0 && mpd->next_page != folio->index)
++			if (mpd->map.m_len > 0 &&
++			    mpd->next_pos != folio_pos(folio))
+ 				goto out;
+ 
+ 			if (handle) {
+@@ -2643,8 +2707,8 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ 			}
+ 
+ 			if (mpd->map.m_len == 0)
+-				mpd->first_page = folio->index;
+-			mpd->next_page = folio_next_index(folio);
++				mpd->start_pos = folio_pos(folio);
++			mpd->next_pos = folio_pos(folio) + folio_size(folio);
+ 			/*
+ 			 * Writeout when we cannot modify metadata is simple.
+ 			 * Just submit the page. For data=journal mode we
+@@ -2772,12 +2836,12 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 	mpd->journalled_more_data = 0;
+ 
+ 	if (ext4_should_dioread_nolock(inode)) {
++		int bpf = ext4_journal_blocks_per_folio(inode);
+ 		/*
+ 		 * We may need to convert up to one extent per block in
+-		 * the page and we may dirty the inode.
++		 * the folio and we may dirty the inode.
+ 		 */
+-		rsv_blocks = 1 + ext4_chunk_trans_blocks(inode,
+-						PAGE_SIZE >> inode->i_blkbits);
++		rsv_blocks = 1 + ext4_ext_index_trans_blocks(inode, bpf);
+ 	}
+ 
+ 	if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
+@@ -2787,18 +2851,18 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 		writeback_index = mapping->writeback_index;
+ 		if (writeback_index)
+ 			cycled = 0;
+-		mpd->first_page = writeback_index;
+-		mpd->last_page = -1;
++		mpd->start_pos = writeback_index << PAGE_SHIFT;
++		mpd->end_pos = LLONG_MAX;
+ 	} else {
+-		mpd->first_page = wbc->range_start >> PAGE_SHIFT;
+-		mpd->last_page = wbc->range_end >> PAGE_SHIFT;
++		mpd->start_pos = wbc->range_start;
++		mpd->end_pos = wbc->range_end;
+ 	}
+ 
+ 	ext4_io_submit_init(&mpd->io_submit, wbc);
+ retry:
+ 	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+-		tag_pages_for_writeback(mapping, mpd->first_page,
+-					mpd->last_page);
++		tag_pages_for_writeback(mapping, mpd->start_pos >> PAGE_SHIFT,
++					mpd->end_pos >> PAGE_SHIFT);
+ 	blk_start_plug(&plug);
+ 
+ 	/*
+@@ -2841,8 +2905,14 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 		 * not supported by delalloc.
+ 		 */
+ 		BUG_ON(ext4_should_journal_data(inode));
+-		needed_blocks = ext4_da_writepages_trans_blocks(inode);
+-
++		/*
++		 * Calculate the number of credits needed to reserve for one
++		 * extent of up to MAX_WRITEPAGES_EXTENT_LEN blocks. It will
++		 * attempt to extend the transaction or start a new iteration
++		 * if the reserved credits are insufficient.
++		 */
++		needed_blocks = ext4_chunk_trans_blocks(inode,
++						MAX_WRITEPAGES_EXTENT_LEN);
+ 		/* start a new transaction */
+ 		handle = ext4_journal_start_with_reserve(inode,
+ 				EXT4_HT_WRITE_PAGE, needed_blocks, rsv_blocks);
+@@ -2858,7 +2928,8 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 		}
+ 		mpd->do_map = 1;
+ 
+-		trace_ext4_da_write_pages(inode, mpd->first_page, wbc);
++		trace_ext4_da_write_folios_start(inode, mpd->start_pos,
++				mpd->next_pos, wbc);
+ 		ret = mpage_prepare_extent_to_map(mpd);
+ 		if (!ret && mpd->map.m_len)
+ 			ret = mpage_map_and_submit_extent(handle, mpd,
+@@ -2896,6 +2967,8 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 		} else
+ 			ext4_put_io_end(mpd->io_submit.io_end);
+ 		mpd->io_submit.io_end = NULL;
++		trace_ext4_da_write_folios_end(inode, mpd->start_pos,
++				mpd->next_pos, wbc, ret);
+ 
+ 		if (ret == -ENOSPC && sbi->s_journal) {
+ 			/*
+@@ -2907,6 +2980,8 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 			ret = 0;
+ 			continue;
+ 		}
++		if (ret == -EAGAIN)
++			ret = 0;
+ 		/* Fatal error - ENOMEM, EIO... */
+ 		if (ret)
+ 			break;
+@@ -2915,8 +2990,8 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 	blk_finish_plug(&plug);
+ 	if (!ret && !cycled && wbc->nr_to_write > 0) {
+ 		cycled = 1;
+-		mpd->last_page = writeback_index - 1;
+-		mpd->first_page = 0;
++		mpd->end_pos = (writeback_index << PAGE_SHIFT) - 1;
++		mpd->start_pos = 0;
+ 		goto retry;
+ 	}
+ 
+@@ -2926,7 +3001,7 @@ static int ext4_do_writepages(struct mpage_da_data *mpd)
+ 		 * Set the writeback_index so that range_cyclic
+ 		 * mode will write it back later
+ 		 */
+-		mapping->writeback_index = mpd->first_page;
++		mapping->writeback_index = mpd->start_pos >> PAGE_SHIFT;
+ 
+ out_writepages:
+ 	trace_ext4_writepages_result(inode, wbc, ret,
+@@ -4390,7 +4465,7 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 		return ret;
+ 
+ 	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+-		credits = ext4_writepage_trans_blocks(inode);
++		credits = ext4_chunk_trans_extent(inode, 2);
+ 	else
+ 		credits = ext4_blocks_for_truncate(inode);
+ 	handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits);
+@@ -4539,7 +4614,7 @@ int ext4_truncate(struct inode *inode)
+ 	}
+ 
+ 	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+-		credits = ext4_writepage_trans_blocks(inode);
++		credits = ext4_chunk_trans_extent(inode, 1);
+ 	else
+ 		credits = ext4_blocks_for_truncate(inode);
+ 
+@@ -6182,25 +6257,19 @@ int ext4_meta_trans_blocks(struct inode *inode, int lblocks, int pextents)
+ }
+ 
+ /*
+- * Calculate the total number of credits to reserve to fit
+- * the modification of a single pages into a single transaction,
+- * which may include multiple chunks of block allocations.
+- *
+- * This could be called via ext4_write_begin()
+- *
+- * We need to consider the worse case, when
+- * one new block per extent.
++ * Calculate the journal credits for modifying the number of blocks
++ * in a single extent within one transaction. 'nrblocks' is used only
++ * for non-extent inodes. For extent type inodes, 'nrblocks' can be
++ * zero if the exact number of blocks is unknown.
+  */
+-int ext4_writepage_trans_blocks(struct inode *inode)
++int ext4_chunk_trans_extent(struct inode *inode, int nrblocks)
+ {
+-	int bpp = ext4_journal_blocks_per_folio(inode);
+ 	int ret;
+ 
+-	ret = ext4_meta_trans_blocks(inode, bpp, bpp);
+-
++	ret = ext4_meta_trans_blocks(inode, nrblocks, 1);
+ 	/* Account for data blocks for journalled mode */
+ 	if (ext4_should_journal_data(inode))
+-		ret += bpp;
++		ret += nrblocks;
+ 	return ret;
+ }
+ 
+@@ -6572,6 +6641,55 @@ static int ext4_bh_unmapped(handle_t *handle, struct inode *inode,
+ 	return !buffer_mapped(bh);
+ }
+ 
++static int ext4_block_page_mkwrite(struct inode *inode, struct folio *folio,
++				   get_block_t get_block)
++{
++	handle_t *handle;
++	loff_t size;
++	unsigned long len;
++	int credits;
++	int ret;
++
++	credits = ext4_chunk_trans_extent(inode,
++			ext4_journal_blocks_per_folio(inode));
++	handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, credits);
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
++
++	folio_lock(folio);
++	size = i_size_read(inode);
++	/* Page got truncated from under us? */
++	if (folio->mapping != inode->i_mapping || folio_pos(folio) > size) {
++		ret = -EFAULT;
++		goto out_error;
++	}
++
++	len = folio_size(folio);
++	if (folio_pos(folio) + len > size)
++		len = size - folio_pos(folio);
++
++	ret = ext4_block_write_begin(handle, folio, 0, len, get_block);
++	if (ret)
++		goto out_error;
++
++	if (!ext4_should_journal_data(inode)) {
++		block_commit_write(folio, 0, len);
++		folio_mark_dirty(folio);
++	} else {
++		ret = ext4_journal_folio_buffers(handle, folio, len);
++		if (ret)
++			goto out_error;
++	}
++	ext4_journal_stop(handle);
++	folio_wait_stable(folio);
++	return ret;
++
++out_error:
++	folio_unlock(folio);
++	ext4_journal_stop(handle);
++	return ret;
++}
++
+ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+@@ -6583,8 +6701,7 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
+ 	struct file *file = vma->vm_file;
+ 	struct inode *inode = file_inode(file);
+ 	struct address_space *mapping = inode->i_mapping;
+-	handle_t *handle;
+-	get_block_t *get_block;
++	get_block_t *get_block = ext4_get_block;
+ 	int retries = 0;
+ 
+ 	if (unlikely(IS_IMMUTABLE(inode)))
+@@ -6652,47 +6769,11 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
+ 	/* OK, we need to fill the hole... */
+ 	if (ext4_should_dioread_nolock(inode))
+ 		get_block = ext4_get_block_unwritten;
+-	else
+-		get_block = ext4_get_block;
+ retry_alloc:
+-	handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE,
+-				    ext4_writepage_trans_blocks(inode));
+-	if (IS_ERR(handle)) {
+-		ret = VM_FAULT_SIGBUS;
+-		goto out;
+-	}
+-	/*
+-	 * Data journalling can't use block_page_mkwrite() because it
+-	 * will set_buffer_dirty() before do_journal_get_write_access()
+-	 * thus might hit warning messages for dirty metadata buffers.
+-	 */
+-	if (!ext4_should_journal_data(inode)) {
+-		err = block_page_mkwrite(vma, vmf, get_block);
+-	} else {
+-		folio_lock(folio);
+-		size = i_size_read(inode);
+-		/* Page got truncated from under us? */
+-		if (folio->mapping != mapping || folio_pos(folio) > size) {
+-			ret = VM_FAULT_NOPAGE;
+-			goto out_error;
+-		}
+-
+-		len = folio_size(folio);
+-		if (folio_pos(folio) + len > size)
+-			len = size - folio_pos(folio);
+-
+-		err = ext4_block_write_begin(handle, folio, 0, len,
+-					     ext4_get_block);
+-		if (!err) {
+-			ret = VM_FAULT_SIGBUS;
+-			if (ext4_journal_folio_buffers(handle, folio, len))
+-				goto out_error;
+-		} else {
+-			folio_unlock(folio);
+-		}
+-	}
+-	ext4_journal_stop(handle);
+-	if (err == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
++	/* Start journal and allocate blocks */
++	err = ext4_block_page_mkwrite(inode, folio, get_block);
++	if (err == -EAGAIN ||
++	    (err == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries)))
+ 		goto retry_alloc;
+ out_ret:
+ 	ret = vmf_fs_error(err);
+@@ -6700,8 +6781,4 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
+ 	filemap_invalidate_unlock_shared(mapping);
+ 	sb_end_pagefault(inode->i_sb);
+ 	return ret;
+-out_error:
+-	folio_unlock(folio);
+-	ext4_journal_stop(handle);
+-	goto out;
+ }
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 1f8493a56e8f6a..adae3caf175a93 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -280,7 +280,8 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
+ 	 */
+ again:
+ 	*err = 0;
+-	jblocks = ext4_writepage_trans_blocks(orig_inode) * 2;
++	jblocks = ext4_meta_trans_blocks(orig_inode, block_len_in_page,
++					 block_len_in_page) * 2;
+ 	handle = ext4_journal_start(orig_inode, EXT4_HT_MOVE_EXTENTS, jblocks);
+ 	if (IS_ERR(handle)) {
+ 		*err = PTR_ERR(handle);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 8d15acbacc2035..3fb93247330d10 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -962,7 +962,7 @@ int __ext4_xattr_set_credits(struct super_block *sb, struct inode *inode,
+ 	 * so we need to reserve credits for this eventuality
+ 	 */
+ 	if (inode && ext4_has_inline_data(inode))
+-		credits += ext4_writepage_trans_blocks(inode) + 1;
++		credits += ext4_chunk_trans_extent(inode, 1) + 1;
+ 
+ 	/* We are done if ea_inode feature is not enabled. */
+ 	if (!ext4_has_feature_ea_inode(sb))
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index 156908641e68f1..845451077c4186 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -482,16 +482,17 @@ TRACE_EVENT(ext4_writepages,
+ 		  (unsigned long) __entry->writeback_index)
+ );
+ 
+-TRACE_EVENT(ext4_da_write_pages,
+-	TP_PROTO(struct inode *inode, pgoff_t first_page,
++TRACE_EVENT(ext4_da_write_folios_start,
++	TP_PROTO(struct inode *inode, loff_t start_pos, loff_t next_pos,
+ 		 struct writeback_control *wbc),
+ 
+-	TP_ARGS(inode, first_page, wbc),
++	TP_ARGS(inode, start_pos, next_pos, wbc),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	dev_t,	dev			)
+ 		__field(	ino_t,	ino			)
+-		__field(      pgoff_t,	first_page		)
++		__field(       loff_t,	start_pos		)
++		__field(       loff_t,	next_pos		)
+ 		__field(	 long,	nr_to_write		)
+ 		__field(	  int,	sync_mode		)
+ 	),
+@@ -499,18 +500,48 @@ TRACE_EVENT(ext4_da_write_pages,
+ 	TP_fast_assign(
+ 		__entry->dev		= inode->i_sb->s_dev;
+ 		__entry->ino		= inode->i_ino;
+-		__entry->first_page	= first_page;
++		__entry->start_pos	= start_pos;
++		__entry->next_pos	= next_pos;
+ 		__entry->nr_to_write	= wbc->nr_to_write;
+ 		__entry->sync_mode	= wbc->sync_mode;
+ 	),
+ 
+-	TP_printk("dev %d,%d ino %lu first_page %lu nr_to_write %ld "
+-		  "sync_mode %d",
++	TP_printk("dev %d,%d ino %lu start_pos 0x%llx next_pos 0x%llx nr_to_write %ld sync_mode %d",
+ 		  MAJOR(__entry->dev), MINOR(__entry->dev),
+-		  (unsigned long) __entry->ino, __entry->first_page,
++		  (unsigned long) __entry->ino, __entry->start_pos, __entry->next_pos,
+ 		  __entry->nr_to_write, __entry->sync_mode)
+ );
+ 
++TRACE_EVENT(ext4_da_write_folios_end,
++	TP_PROTO(struct inode *inode, loff_t start_pos, loff_t next_pos,
++		 struct writeback_control *wbc, int ret),
++
++	TP_ARGS(inode, start_pos, next_pos, wbc, ret),
++
++	TP_STRUCT__entry(
++		__field(	dev_t,	dev			)
++		__field(	ino_t,	ino			)
++		__field(       loff_t,	start_pos		)
++		__field(       loff_t,	next_pos		)
++		__field(	 long,	nr_to_write		)
++		__field(	  int,	ret			)
++	),
++
++	TP_fast_assign(
++		__entry->dev		= inode->i_sb->s_dev;
++		__entry->ino		= inode->i_ino;
++		__entry->start_pos	= start_pos;
++		__entry->next_pos	= next_pos;
++		__entry->nr_to_write	= wbc->nr_to_write;
++		__entry->ret		= ret;
++	),
++
++	TP_printk("dev %d,%d ino %lu start_pos 0x%llx next_pos 0x%llx nr_to_write %ld ret %d",
++		  MAJOR(__entry->dev), MINOR(__entry->dev),
++		  (unsigned long) __entry->ino, __entry->start_pos, __entry->next_pos,
++		  __entry->nr_to_write, __entry->ret)
++);
++
+ TRACE_EVENT(ext4_da_write_pages_extent,
+ 	TP_PROTO(struct inode *inode, struct ext4_map_blocks *map),
+ 



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-25  0:00 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-25  0:00 UTC (permalink / raw
  To: gentoo-commits

commit:     bab5f2bfedc8c60092ef700568659aece80676af
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 24 23:22:11 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Aug 24 23:22:11 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bab5f2bf

Add Update CPU Optimization patch for 6.16+

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                   |   4 +
 5010_enable-cpu-optimizations-universal.patch | 846 ++++++++++++++++++++++++++
 2 files changed, 850 insertions(+)

diff --git a/0000_README b/0000_README
index 77885109..0bac5f02 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch:  5010_enable-cpu-optimizations-universal.patch
+From:   https://github.com/graysky2/kernel_compiler_patch
+Desc:   More ISA levels and uarches for kernel 6.16+
+
 Patch:  5020_BMQ-and-PDS-io-scheduler-v6.16-r0.patch
 From:   https://gitlab.com/alfredchen/projectc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
new file mode 100644
index 00000000..962a82a6
--- /dev/null
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -0,0 +1,846 @@
+From 6b1d270f55e3143bcb3ad914adf920774351a6b9 Mon Sep 17 00:00:00 2001
+From: graysky <therealgraysky AT proton DOT me>
+Date: Mon, 18 Aug 2025 04:14:48 -0400
+
+1. New generic x86-64 ISA levels
+
+These are selectable under:
+	Processor type and features ---> x86-64 compiler ISA level
+
+• x86-64     A value of (1) is the default
+• x86-64-v2  A value of (2) brings support for vector
+             instructions up to Streaming SIMD Extensions 4.2 (SSE4.2)
+	     and Supplemental Streaming SIMD Extensions 3 (SSSE3), the
+	     POPCNT instruction, and CMPXCHG16B.
+• x86-64-v3  A value of (3) adds vector instructions up to AVX2, MOVBE,
+             and additional bit-manipulation instructions.
+
+There is also x86-64-v4 but including this makes little sense as
+the kernel does not use any of the AVX512 instructions anyway.
+
+Users of glibc 2.33 and above can see which level is supported by running:
+	/lib/ld-linux-x86-64.so.2 --help | grep supported
+Or
+	/lib64/ld-linux-x86-64.so.2 --help | grep supported
+
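+As a hedged illustration (not part of this patch), the same check can be
+done from C with GCC's __builtin_cpu_supports(), which accepts these level
+names since gcc 11:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		__builtin_cpu_init();
+		if (__builtin_cpu_supports("x86-64-v4"))
+			puts("x86-64-v4");
+		else if (__builtin_cpu_supports("x86-64-v3"))
+			puts("x86-64-v3");
+		else if (__builtin_cpu_supports("x86-64-v2"))
+			puts("x86-64-v2");
+		else
+			puts("x86-64 (baseline)");
+		return 0;
+	}
+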
+2. New micro-architectures
+
+These are selectable under:
+	Processor type and features ---> Processor family
+
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)**
+• AMD Family 19h (Zen 4)‡
+• AMD Family 1Ah (Zen 5)§
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 4th Gen 10nm++ Xeon (Sapphire Rapids)†
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)†
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)†
+• Intel 13th Gen i3/i5/i7/i9-family (Raptor Lake)‡
+• Intel 14th Gen i3/i5/i7/i9-family (Meteor Lake)‡
+• Intel 5th Gen 10nm++ Xeon (Emerald Rapids)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+       *Requires gcc >=10.1 or clang >=10.0
+      **Requires gcc >=10.3 or clang >=12.0
+       †Requires gcc >=11.1 or clang >=12.0
+       ‡Requires gcc >=13.0 or clang >=15.0.5
+       §Requires gcc >14.0 or clang >=19.0
+
+3. Auto-detected micro-architecture levels
+
+Compile by passing the '-march=native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[1]
+
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
+
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream is using the deprecated -march=atom flag when I
+believe it should use the newer -march=bonnell flag for Atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[3] The
+recommendation is to use the 'atom' option instead.
+
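+As a hedged aside (illustrative only), gcc can also identify a Bonnell core
+at runtime via __builtin_cpu_is():
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		__builtin_cpu_init();
+		puts(__builtin_cpu_is("bonnell")
+			? "bonnell (original Atom)" : "not bonnell");
+		return 0;
+	}
+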
+BENEFITS
+Small but real speed increases are measurable with a kernel-build (make)
+benchmark comparing a generic kernel to one built with one of the respective
+microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_compiler_patch?tab=readme-ov-file#benchmarks
+
+REQUIREMENTS
+linux version 6.16+
+gcc version >=9.0 or clang version >=9.0
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[4]
+
+REFERENCES
+1.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+2.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+4.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+---
+ arch/x86/Kconfig.cpu | 427 ++++++++++++++++++++++++++++++++++++++++++-
+ arch/x86/Makefile    | 213 ++++++++++++++++++++-
+ 2 files changed, 631 insertions(+), 9 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index f928cf6e3252..2802936f2e62 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -31,6 +31,7 @@ choice
+ 	  - "Pentium-4" for the Intel Pentium 4 or P4-based Celeron.
+ 	  - "K6" for the AMD K6, K6-II and K6-III (aka K6-3D).
+ 	  - "Athlon" for the AMD K7 family (Athlon/Duron/Thunderbird).
++	  - "Opteron/Athlon64/Hammer/K8" for all K8 and newer AMD CPUs.
+ 	  - "Crusoe" for the Transmeta Crusoe series.
+ 	  - "Efficeon" for the Transmeta Efficeon series.
+ 	  - "Winchip-C6" for original IDT Winchip.
+@@ -41,7 +42,10 @@ choice
+ 	  - "CyrixIII/VIA C3" for VIA Cyrix III or VIA C3.
+ 	  - "VIA C3-2" for VIA C3-2 "Nehemiah" (model 9 and above).
+ 	  - "VIA C7" for VIA C7.
++	  - "Intel P4" for the Pentium 4/Netburst microarchitecture.
++	  - "Core 2/newer Xeon" for all core2 and newer Intel CPUs.
+ 	  - "Intel Atom" for the Atom-microarchitecture CPUs.
++	  - "Generic-x86-64" for a kernel which runs on any x86-64 CPU.
+ 
+ 	  See each option's help text for additional details. If you don't know
+ 	  what to do, choose "Pentium-Pro".
+@@ -135,10 +139,21 @@ config MPENTIUM4
+ 		-Mobile Pentium 4
+ 		-Mobile Pentium 4 M
+ 		-Extreme Edition (Gallatin)
++		-Prescott
++		-Prescott 2M
++		-Cedar Mill
++		-Presler
++		-Smithfield
+ 	    Xeons (Intel Xeon, Xeon MP, Xeon LV, Xeon MV) corename:
+ 		-Foster
+ 		-Prestonia
+ 		-Gallatin
++		-Nocona
++		-Irwindale
++		-Cranford
++		-Potomac
++		-Paxville
++		-Dempsey
+ 
+ config MK6
+ 	bool "K6/K6-II/K6-III"
+@@ -281,6 +296,402 @@ config X86_GENERIC
+ 	  This is really intended for distributors who need more
+ 	  generic optimizations.
+ 
++choice
++	prompt "x86_64 Compiler Build Optimization"
++	depends on !X86_NATIVE_CPU
++	default GENERIC_CPU
++
++config GENERIC_CPU
++	bool "Generic-x86-64"
++	depends on X86_64
++	help
++	  Generic x86-64 CPU.
++	  Runs equally well on all x86-64 CPUs.
++
++config MK8
++	bool "AMD Opteron/Athlon64/Hammer/K8"
++	help
++	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	help
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	help
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	help
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	help
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	help
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	help
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	help
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	help
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	help
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Ryzen"
++	help
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
++config MZEN2
++	bool "AMD Ryzen 2"
++	help
++	  Select this for AMD Family 17h Zen 2 processors.
++
++	  Enables -march=znver2
++
++config MZEN3
++	bool "AMD Ryzen 3"
++	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++	  Select this for AMD Family 19h Zen 3 processors.
++
++	  Enables -march=znver3
++
++config MZEN4
++	bool "AMD Ryzen 4"
++	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 160000)
++	help
++	  Select this for AMD Family 19h Zen 4 processors.
++
++	  Enables -march=znver4
++
++config MZEN5
++	bool "AMD Ryzen 5"
++	depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 190100)
++	help
++	  Select this for AMD Family 19h Zen 5 processors.
++
++	  Enables -march=znver5
++
++config MPSC
++	bool "Intel P4 / older Netburst based Xeon"
++	depends on X86_64
++	help
++	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
++	  Xeon CPUs with Intel 64bit which is compatible with x86-64.
++	  Note that the latest Xeons (Xeon 51xx and 53xx) are not based on the
++	  Netburst core and shouldn't use this option. You can distinguish them
++	  using the cpu family field
++	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
++
++config MCORE2
++	bool "Intel Core 2"
++	depends on X86_64
++	help
++
++	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
++	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
++	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
++	  (not a typo)
++
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
++	depends on X86_64
++	help
++
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MGOLDMONT
++	bool "Intel Goldmont"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++	  Enables -march=goldmont
++
++config MGOLDMONTPLUS
++	bool "Intel Goldmont Plus"
++	depends on X86_64
++	help
++
++	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++	  Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	depends on X86_64
++	help
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	depends on X86_64
++	help
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	depends on X86_64
++	help
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	depends on X86_64
++	help
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	depends on X86_64
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake-X (7th Gen Core i7/i9)"
++	depends on X86_64
++	help
++
++	  Select this for 7th Gen Core i7/i9 processors in the Skylake-X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Coffee Lake/Kaby Lake Refresh (8th Gen Core i3/i5/i7)"
++	depends on X86_64
++	help
++
++	  Select this for 8th Gen Core i3/i5/i7 processors in the Coffee Lake or Kaby Lake Refresh families.
++
++	  Enables -march=cannonlake
++
++config MICELAKE_CLIENT
++	bool "Intel Ice Lake"
++	depends on X86_64
++	help
++
++	  Select this for 10th Gen Core client processors in the Ice Lake family.
++
++	  Enables -march=icelake-client
++
++config MICELAKE_SERVER
++	bool "Intel Ice Lake-SP (3rd Gen Xeon Scalable)"
++	depends on X86_64
++	help
++
++	  Select this for 3rd Gen Xeon Scalable processors in the Ice Lake-SP family.
++
++	  Enables -march=icelake-server
++
++config MCOOPERLAKE
++	bool "Intel Cooper Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	help
++
++	  Select this for Xeon processors in the Cooper Lake family.
++
++	  Enables -march=cooperlake
++
++config MCASCADELAKE
++	bool "Intel Cascade Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	help
++
++	  Select this for Xeon processors in the Cascade Lake family.
++
++	  Enables -march=cascadelake
++
++config MTIGERLAKE
++	bool "Intel Tiger Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++	help
++
++	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++	  Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++	bool "Intel Sapphire Rapids"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++
++	  Select this for fourth-generation 10 nm process processors in the Sapphire Rapids family.
++
++	  Enables -march=sapphirerapids
++
++config MROCKETLAKE
++	bool "Intel Rocket Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++
++	  Select this for eleventh-generation processors in the Rocket Lake family.
++
++	  Enables -march=rocketlake
++
++config MALDERLAKE
++	bool "Intel Alder Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	help
++
++	  Select this for twelfth-generation processors in the Alder Lake family.
++
++	  Enables -march=alderlake
++
++config MRAPTORLAKE
++	bool "Intel Raptor Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++	help
++
++	  Select this for thirteenth-generation processors in the Raptor Lake family.
++
++	  Enables -march=raptorlake
++
++config MMETEORLAKE
++	bool "Intel Meteor Lake"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION >= 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++	help
++
++	  Select this for fourteenth-generation processors in the Meteor Lake family.
++
++	  Enables -march=meteorlake
++
++config MEMERALDRAPIDS
++	bool "Intel Emerald Rapids"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 130000) || (CC_IS_CLANG && CLANG_VERSION >= 150500)
++	help
++
++	  Select this for fifth-generation Xeon Scalable processors in the Emerald Rapids family.
++
++	  Enables -march=emeraldrapids
++
++config MDIAMONDRAPIDS
++	bool "Intel Diamond Rapids (7th Gen Xeon Scalable)"
++	depends on X86_64
++	depends on (CC_IS_GCC && GCC_VERSION > 150000) || (CC_IS_CLANG && CLANG_VERSION >= 200000)
++	help
++
++	  Select this for seventh-generation Xeon Scalable processors in the Diamond Rapids family.
++
++	  Enables -march=diamondrapids
++
++endchoice
++
++config X86_64_VERSION
++	int "x86-64 compiler ISA level"
++	range 1 3
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++	depends on X86_64 && GENERIC_CPU
++	help
++	  Specify a specific x86-64 compiler ISA level.
++
++	  Two x86-64 ISA levels are selectable on top of the x86-64
++	  baseline, namely x86-64-v2 and x86-64-v3.
++
++	  x86-64-v2 brings support for vector instructions up to Streaming SIMD
++	  Extensions 4.2 (SSE4.2) and Supplemental Streaming SIMD Extensions 3
++	  (SSSE3), the POPCNT instruction, and CMPXCHG16B.
++
++	  x86-64-v3 adds vector instructions up to AVX2, MOVBE, and additional
++	  bit-manipulation instructions.
++
++	  x86-64-v4 is not included since the kernel does not use AVX512 instructions.
++
++	  You can find the best version for your CPU by running one of the following:
++	  /lib/ld-linux-x86-64.so.2 --help | grep supported
++	  /lib64/ld-linux-x86-64.so.2 --help | grep supported
++
+ #
+ # Define implied options from the CPU selection here
+ config X86_INTERNODE_CACHE_SHIFT
+@@ -290,8 +701,8 @@ config X86_INTERNODE_CACHE_SHIFT
+ 
+ config X86_L1_CACHE_SHIFT
+ 	int
+-	default "7" if MPENTIUM4
+-	default "6" if MK7 || MPENTIUMM || MATOM || MVIAC7 || X86_GENERIC || X86_64
++	default "7" if MPENTIUM4 || MPSC
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS || X86_NATIVE_CPU
+ 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -309,19 +720,19 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK7 || MEFFICEON
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MDIAMONDRAPIDS
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
+ 
+ config X86_HAVE_PAE
+ 	def_bool y
+-	depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC7 || MATOM || X86_64
++	depends on MCRUSOE || MEFFICEON || MCYRIXIII || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC7 || MCORE2 || MATOM || X86_64
+ 
+ config X86_CX8
+ 	def_bool y
+@@ -331,12 +742,12 @@ config X86_CX8
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK7 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || MATOM || MGEODE_LX || X86_64)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MK7)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCORE2 || MK7 || MK8)
+ 	default "5" if X86_32 && X86_CX8
+ 	default "4"
+ 
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 1913d342969b..6c165daccb3d 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -177,10 +177,221 @@ ifdef CONFIG_X86_NATIVE_CPU
+         KBUILD_CFLAGS += -march=native
+         KBUILD_RUSTFLAGS += -Ctarget-cpu=native
+ else
++ifdef CONFIG_MK8
++        KBUILD_CFLAGS += -march=k8
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=k8
++endif
++
++ifdef CONFIG_MK8SSE3
++        KBUILD_CFLAGS += -march=k8-sse3
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=k8-sse3
++endif
++
++ifdef CONFIG_MK10
++        KBUILD_CFLAGS += -march=amdfam10
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=amdfam10
++endif
++
++ifdef CONFIG_MBARCELONA
++        KBUILD_CFLAGS += -march=barcelona
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=barcelona
++endif
++
++ifdef CONFIG_MBOBCAT
++        KBUILD_CFLAGS += -march=btver1
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=btver1
++endif
++
++ifdef CONFIG_MJAGUAR
++        KBUILD_CFLAGS += -march=btver2
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=btver2
++endif
++
++ifdef CONFIG_MBULLDOZER
++        KBUILD_CFLAGS += -march=bdver1
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver1
++endif
++
++ifdef CONFIG_MPILEDRIVER
++        KBUILD_CFLAGS += -march=bdver2 -mno-tbm
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver2 -mno-tbm
++endif
++
++ifdef CONFIG_MSTEAMROLLER
++        KBUILD_CFLAGS += -march=bdver3 -mno-tbm
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver3 -mno-tbm
++endif
++
++ifdef CONFIG_MEXCAVATOR
++        KBUILD_CFLAGS += -march=bdver4 -mno-tbm
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=bdver4 -mno-tbm
++endif
++
++ifdef CONFIG_MZEN
++        KBUILD_CFLAGS += -march=znver1
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver1
++endif
++
++ifdef CONFIG_MZEN2
++        KBUILD_CFLAGS += -march=znver2
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver2
++endif
++
++ifdef CONFIG_MZEN3
++        KBUILD_CFLAGS += -march=znver3
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver3
++endif
++
++ifdef CONFIG_MZEN4
++        KBUILD_CFLAGS += -march=znver4
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver4
++endif
++
++ifdef CONFIG_MZEN5
++        KBUILD_CFLAGS += -march=znver5
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=znver5
++endif
++
++ifdef CONFIG_MPSC
++        KBUILD_CFLAGS += -march=nocona
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=nocona
++endif
++
++ifdef CONFIG_MCORE2
++        KBUILD_CFLAGS += -march=core2
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=core2
++endif
++
++ifdef CONFIG_MNEHALEM
++        KBUILD_CFLAGS += -march=nehalem
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=nehalem
++endif
++
++ifdef CONFIG_MWESTMERE
++        KBUILD_CFLAGS += -march=westmere
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=westmere
++endif
++
++ifdef CONFIG_MSILVERMONT
++        KBUILD_CFLAGS += -march=silvermont
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=silvermont
++endif
++
++ifdef CONFIG_MGOLDMONT
++        KBUILD_CFLAGS += -march=goldmont
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=goldmont
++endif
++
++ifdef CONFIG_MGOLDMONTPLUS
++        KBUILD_CFLAGS += -march=goldmont-plus
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=goldmont-plus
++endif
++
++ifdef CONFIG_MSANDYBRIDGE
++        KBUILD_CFLAGS += -march=sandybridge
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=sandybridge
++endif
++
++ifdef CONFIG_MIVYBRIDGE
++        KBUILD_CFLAGS += -march=ivybridge
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=ivybridge
++endif
++
++ifdef CONFIG_MHASWELL
++        KBUILD_CFLAGS += -march=haswell
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=haswell
++endif
++
++ifdef CONFIG_MBROADWELL
++        KBUILD_CFLAGS += -march=broadwell
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=broadwell
++endif
++
++ifdef CONFIG_MSKYLAKE
++        KBUILD_CFLAGS += -march=skylake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=skylake
++endif
++
++ifdef CONFIG_MSKYLAKEX
++        KBUILD_CFLAGS += -march=skylake-avx512
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=skylake-avx512
++endif
++
++ifdef CONFIG_MCANNONLAKE
++        KBUILD_CFLAGS += -march=cannonlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=cannonlake
++endif
++
++ifdef CONFIG_MICELAKE_CLIENT
++        KBUILD_CFLAGS += -march=icelake-client
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=icelake-client
++endif
++
++ifdef CONFIG_MICELAKE_SERVER
++        KBUILD_CFLAGS += -march=icelake-server
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=icelake-server
++endif
++
++ifdef CONFIG_MCOOPERLAKE
++        KBUILD_CFLAGS += -march=cooperlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=cooperlake
++endif
++
++ifdef CONFIG_MCASCADELAKE
++        KBUILD_CFLAGS += -march=cascadelake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=cascadelake
++endif
++
++ifdef CONFIG_MTIGERLAKE
++        KBUILD_CFLAGS += -march=tigerlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=tigerlake
++endif
++
++ifdef CONFIG_MSAPPHIRERAPIDS
++        KBUILD_CFLAGS += -march=sapphirerapids
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=sapphirerapids
++endif
++
++ifdef CONFIG_MROCKETLAKE
++        KBUILD_CFLAGS += -march=rocketlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=rocketlake
++endif
++
++ifdef CONFIG_MALDERLAKE
++        KBUILD_CFLAGS += -march=alderlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=alderlake
++endif
++
++ifdef CONFIG_MRAPTORLAKE
++        KBUILD_CFLAGS += -march=raptorlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=raptorlake
++endif
++
++ifdef CONFIG_MMETEORLAKE
++        KBUILD_CFLAGS += -march=meteorlake
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=meteorlake
++endif
++
++ifdef CONFIG_MEMERALDRAPIDS
++        KBUILD_CFLAGS += -march=emeraldrapids
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=emeraldrapids
++endif
++
++ifdef CONFIG_MDIAMONDRAPIDS
++        KBUILD_CFLAGS += -march=diamondrapids
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=diamondrapids
++endif
++
++ifdef CONFIG_GENERIC_CPU
++ifeq ($(CONFIG_X86_64_VERSION),1)
+         KBUILD_CFLAGS += -march=x86-64 -mtune=generic
+         KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64 -Ztune-cpu=generic
++else
++        KBUILD_CFLAGS += -march=x86-64-v$(CONFIG_X86_64_VERSION)
++        KBUILD_RUSTFLAGS += -Ctarget-cpu=x86-64-v$(CONFIG_X86_64_VERSION)
++endif
++endif
+ endif
+-
+         KBUILD_CFLAGS += -mno-red-zone
+         KBUILD_CFLAGS += -mcmodel=kernel
+         KBUILD_RUSTFLAGS += -Cno-redzone=y
+-- 
+2.50.1
+



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-28 15:14 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-28 15:14 UTC (permalink / raw
  To: gentoo-commits

commit:     f9fb3f8988f07024269de6c05cd3e56481ab028c
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 28 15:08:20 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 28 15:08:20 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f9fb3f89

Add net: ipv4: fix regression in local-broadcast routes

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |   4 +
 ..._fix_regression_in_local-broadcast_routes.patch | 134 +++++++++++++++++++++
 2 files changed, 138 insertions(+)

diff --git a/0000_README b/0000_README
index 0bac5f02..7698b47d 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2010_ipv4_fix_regression_in_local-broadcast_routes.patch
+From:   https://lore.kernel.org/regressions/20250826121750.8451-1-oscmaes92@gmail.com/
+Desc:   net: ipv4: fix regression in local-broadcast routes
+
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2010_ipv4_fix_regression_in_local-broadcast_routes.patch b/2010_ipv4_fix_regression_in_local-broadcast_routes.patch
new file mode 100644
index 00000000..a306132d
--- /dev/null
+++ b/2010_ipv4_fix_regression_in_local-broadcast_routes.patch
@@ -0,0 +1,134 @@
+From: Oscar Maes <oscmaes92@gmail.com>
+To: bacs@librecast.net,
+	brett@librecast.net,
+	kuba@kernel.org
+Cc: davem@davemloft.net,
+	dsahern@kernel.org,
+	netdev@vger.kernel.org,
+	regressions@lists.linux.dev,
+	stable@vger.kernel.org,
+	Oscar Maes <oscmaes92@gmail.com>
+Subject: [PATCH net v2 1/2] net: ipv4: fix regression in local-broadcast routes
+Date: Tue, 26 Aug 2025 14:17:49 +0200
+Message-Id: <20250826121750.8451-1-oscmaes92@gmail.com>
+X-Mailer: git-send-email 2.39.5
+In-Reply-To: <20250826121126-oscmaes92@gmail.com>
+References: <20250826121126-oscmaes92@gmail.com>
+MIME-Version: 1.0
+Content-Transfer-Encoding: 8bit
+
+Commit 9e30ecf23b1b ("net: ipv4: fix incorrect MTU in broadcast routes")
+introduced a regression where local-broadcast packets would have their
+gateway set in __mkroute_output, which was caused by fi = NULL being
+removed.
+
+Fix this by resetting the fib_info for local-broadcast packets. This
+preserves the intended changes for directed-broadcast packets.
+
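+As a hedged illustration (not part of the fix; the port and payload below
+are arbitrary), the affected local-broadcast path can be exercised from
+userspace with a minimal UDP sender:
+
+	#include <arpa/inet.h>
+	#include <netinet/in.h>
+	#include <stdio.h>
+	#include <sys/socket.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		int fd = socket(AF_INET, SOCK_DGRAM, 0);
+		int on = 1;
+		struct sockaddr_in dst = { .sin_family = AF_INET,
+					   .sin_port = htons(9999) };
+
+		if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_BROADCAST,
+					 &on, sizeof(on)) < 0) {
+			perror("socket/SO_BROADCAST");
+			return 1;
+		}
+		/* 255.255.255.255: must not resolve a gateway */
+		dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);
+		if (sendto(fd, "ping", 4, 0,
+			   (struct sockaddr *)&dst, sizeof(dst)) < 0)
+			perror("sendto");
+		close(fd);
+		return 0;
+	}
+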
+Cc: stable@vger.kernel.org
+Fixes: 9e30ecf23b1b ("net: ipv4: fix incorrect MTU in broadcast routes")
+Reported-by: Brett A C Sheffield <bacs@librecast.net>
+Closes: https://lore.kernel.org/regressions/20250822165231.4353-4-bacs@librecast.net
+Signed-off-by: Oscar Maes <oscmaes92@gmail.com>
+---
+
+Thanks to Brett Sheffield for finding the regression and writing
+the initial fix!
+---
+ net/ipv4/route.c | 10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 1f212b2ce4c6..24c898b7654f 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2575,12 +2575,16 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
+ 		    !netif_is_l3_master(dev_out))
+ 			return ERR_PTR(-EINVAL);
+ 
+-	if (ipv4_is_lbcast(fl4->daddr))
++	if (ipv4_is_lbcast(fl4->daddr)) {
+ 		type = RTN_BROADCAST;
+-	else if (ipv4_is_multicast(fl4->daddr))
++
++		/* reset fi to prevent gateway resolution */
++		fi = NULL;
++	} else if (ipv4_is_multicast(fl4->daddr)) {
+ 		type = RTN_MULTICAST;
+-	else if (ipv4_is_zeronet(fl4->daddr))
++	} else if (ipv4_is_zeronet(fl4->daddr)) {
+ 		return ERR_PTR(-EINVAL);
++	}
+ 
+ 	if (dev_out->flags & IFF_LOOPBACK)
+ 		flags |= RTCF_LOCAL;
+-- 
+2.39.5
+
+


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-28 15:19 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-28 15:19 UTC (permalink / raw
  To: gentoo-commits

commit:     35516718b31b904369ac05645cf883b707a77a51
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 28 15:18:55 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 28 15:18:55 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=35516718

(add) proc: fix missing pde_set_flags() for net proc files

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |   4 +
 ..._missing_pde_set_flags_for_net_proc_files.patch | 129 +++++++++++++++++++++
 2 files changed, 133 insertions(+)

diff --git a/0000_README b/0000_README
index 3b747ed9..eda565a9 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
+Patch:  1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
+From:   https://lore.kernel.org/all/20250821105806.1453833-1-wangzijie1@honor.com/
+Desc:   proc: fix missing pde_set_flags() for net proc files
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch b/1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
new file mode 100644
index 00000000..8632f53b
--- /dev/null
+++ b/1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
@@ -0,0 +1,129 @@
+Subject: [PATCH v3] proc: fix missing pde_set_flags() for net proc files
+Date: Thu, 21 Aug 2025 18:58:06 +0800
+Message-ID: <20250821105806.1453833-1-wangzijie1@honor.com>
+X-Mailer: git-send-email 2.25.1
+MIME-Version: 1.0
+Content-Transfer-Encoding: 8bit
+Content-Type: text/plain
+
+To avoid potential UAF issues during module removal races, we use pde_set_flags()
+to save proc_ops flags in PDE itself before proc_register(), and then use
+pde_has_proc_*() helpers instead of directly dereferencing pde->proc_ops->*.
+
+However, the pde_set_flags() call was missing when creating net-related proc
+files. This omission caused incorrect behavior in which FMODE_LSEEK was being
+cleared inappropriately in proc_reg_open() for net proc files, as Lars
+reported in [1].
+
+Fix this by ensuring pde_set_flags() is called when registering a proc entry,
+and add a NULL check for proc_ops in pde_set_flags().
+
+[1]: https://lore.kernel.org/all/20250815195616.64497967@chagall.paradoxon.rec/
+
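+As a hedged standalone sketch of the underlying pattern (the names are
+illustrative, not the kernel's): snapshot everything the long-lived entry
+needs out of the module-owned ops table at registration time, so later
+fast paths never dereference ops that a module unload may have freed.
+
+	#include <stdbool.h>
+
+	struct ops { bool has_lseek; };
+	struct entry { unsigned int flags; const struct ops *ops; };
+	#define ENTRY_LSEEK 0x1u
+
+	static void entry_set_flags(struct entry *e)
+	{
+		if (e->ops && e->ops->has_lseek)  /* ops still alive here */
+			e->flags |= ENTRY_LSEEK;
+	}
+
+	static bool entry_supports_lseek(const struct entry *e)
+	{
+		return e->flags & ENTRY_LSEEK;    /* no ops dereference */
+	}
+
+	int main(void)
+	{
+		struct ops mod_ops = { .has_lseek = true };
+		struct entry e = { .flags = 0, .ops = &mod_ops };
+
+		entry_set_flags(&e);    /* before the entry is published */
+		e.ops = NULL;           /* simulate module unload */
+		return entry_supports_lseek(&e) ? 0 : 1;
+	}
+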
+Fixes: ff7ec8dc1b64 ("proc: use the same treatment to check proc_lseek as ones for proc_read_iter et.al")
+Cc: stable@vger.kernel.org
+Reported-by: Lars Wendler <polynomial-c@gmx.de>
+Signed-off-by: wangzijie <wangzijie1@honor.com>
+---
+v3:
+- follow Christian's suggestion to stash pde->proc_ops in a local const variable
+v2:
+- follow Jiri's suggestion to refactor code and reformat the commit message
+---
+ fs/proc/generic.c | 38 +++++++++++++++++++++-----------------
+ 1 file changed, 21 insertions(+), 17 deletions(-)
+
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 76e800e38..bd0c099cf 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -367,6 +367,25 @@ static const struct inode_operations proc_dir_inode_operations = {
+ 	.setattr	= proc_notify_change,
+ };
+ 
++static void pde_set_flags(struct proc_dir_entry *pde)
++{
++	const struct proc_ops *proc_ops = pde->proc_ops;
++
++	if (!proc_ops)
++		return;
++
++	if (proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
++		pde->flags |= PROC_ENTRY_PERMANENT;
++	if (proc_ops->proc_read_iter)
++		pde->flags |= PROC_ENTRY_proc_read_iter;
++#ifdef CONFIG_COMPAT
++	if (proc_ops->proc_compat_ioctl)
++		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
++#endif
++	if (proc_ops->proc_lseek)
++		pde->flags |= PROC_ENTRY_proc_lseek;
++}
++
+ /* returns the registered entry, or frees dp and returns NULL on failure */
+ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 		struct proc_dir_entry *dp)
+@@ -374,6 +393,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 	if (proc_alloc_inum(&dp->low_ino))
+ 		goto out_free_entry;
+ 
++	pde_set_flags(dp);
++
+ 	write_lock(&proc_subdir_lock);
+ 	dp->parent = dir;
+ 	if (pde_subdir_insert(dir, dp) == false) {
+@@ -561,20 +582,6 @@ struct proc_dir_entry *proc_create_reg(const char *name, umode_t mode,
+ 	return p;
+ }
+ 
+-static void pde_set_flags(struct proc_dir_entry *pde)
+-{
+-	if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
+-		pde->flags |= PROC_ENTRY_PERMANENT;
+-	if (pde->proc_ops->proc_read_iter)
+-		pde->flags |= PROC_ENTRY_proc_read_iter;
+-#ifdef CONFIG_COMPAT
+-	if (pde->proc_ops->proc_compat_ioctl)
+-		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
+-#endif
+-	if (pde->proc_ops->proc_lseek)
+-		pde->flags |= PROC_ENTRY_proc_lseek;
+-}
+-
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+ 		struct proc_dir_entry *parent,
+ 		const struct proc_ops *proc_ops, void *data)
+@@ -585,7 +592,6 @@ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+ 	if (!p)
+ 		return NULL;
+ 	p->proc_ops = proc_ops;
+-	pde_set_flags(p);
+ 	return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_data);
+@@ -636,7 +642,6 @@ struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode,
+ 	p->proc_ops = &proc_seq_ops;
+ 	p->seq_ops = ops;
+ 	p->state_size = state_size;
+-	pde_set_flags(p);
+ 	return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_seq_private);
+@@ -667,7 +672,6 @@ struct proc_dir_entry *proc_create_single_data(const char *name, umode_t mode,
+ 		return NULL;
+ 	p->proc_ops = &proc_single_ops;
+ 	p->single_show = show;
+-	pde_set_flags(p);
+ 	return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_single_data);
+-- 
+2.25.1
+
+
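The patch above works by taking the capability snapshot inside proc_register()
itself, so every creation path -- including the net-specific helpers that never
go through proc_create_data() -- caches the proc_ops bits in pde->flags before
the entry becomes visible, and open-time code such as proc_reg_open() can test
the cached bits instead of chasing a proc_ops pointer that a module unload may
have freed. A minimal standalone C sketch of that flag-caching pattern follows;
the my_ops/my_entry types and helper names are illustrative stand-ins, not the
kernel's real proc structures:

#include <stdbool.h>
#include <stdio.h>

#define ENTRY_HAS_LSEEK (1U << 0)
#define ENTRY_HAS_READ  (1U << 1)

struct my_ops {
	int (*lseek)(void);
	int (*read)(void);
};

struct my_entry {
	const struct my_ops *ops;
	unsigned int flags;	/* cached capability bits */
};

/* Like pde_set_flags(): snapshot the ops capabilities at registration. */
static void entry_set_flags(struct my_entry *e)
{
	const struct my_ops *ops = e->ops;

	if (!ops)	/* some entries register with no ops at all */
		return;
	if (ops->lseek)
		e->flags |= ENTRY_HAS_LSEEK;
	if (ops->read)
		e->flags |= ENTRY_HAS_READ;
}

/* Like pde_has_proc_lseek(): consult the cached bit, not ops itself. */
static bool entry_has_lseek(const struct my_entry *e)
{
	return e->flags & ENTRY_HAS_LSEEK;
}

static int my_lseek(void) { return 0; }

int main(void)
{
	static const struct my_ops ops = { .lseek = my_lseek };
	struct my_entry e = { .ops = &ops, .flags = 0 };

	entry_set_flags(&e);	/* done once, before the entry is published */
	printf("seekable: %s\n", entry_has_lseek(&e) ? "yes" : "no");
	return 0;
}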


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-28 15:31 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-28 15:31 UTC (permalink / raw
  To: gentoo-commits

commit:     8e74cc0d197a9fac87cabebab148db4f96ffeb96
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 28 15:22:31 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 28 15:22:31 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e74cc0d

Linux patch 6.16.4

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |     4 +
 1003_linux-6.16.4.patch | 20047 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 20051 insertions(+)

diff --git a/0000_README b/0000_README
index eda565a9..32d61205 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-6.16.3.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.3
 
+Patch:  1003_linux-6.16.4.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.4
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1003_linux-6.16.4.patch b/1003_linux-6.16.4.patch
new file mode 100644
index 00000000..00c38fe4
--- /dev/null
+++ b/1003_linux-6.16.4.patch
@@ -0,0 +1,20047 @@
+diff --git a/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml b/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml
+index f546d481b7e5f4..93da1fb9adc47b 100644
+--- a/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml
++++ b/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml
+@@ -64,10 +64,10 @@ properties:
+       - description: Pixel clock for video port 0.
+       - description: Pixel clock for video port 1.
+       - description: Pixel clock for video port 2.
+-      - description: Pixel clock for video port 3.
+-      - description: Peripheral(vop grf/dsi) clock.
+-      - description: Alternative pixel clock provided by HDMI0 PHY PLL.
+-      - description: Alternative pixel clock provided by HDMI1 PHY PLL.
++      - {}
++      - {}
++      - {}
++      - {}
+ 
+   clock-names:
+     minItems: 5
+@@ -77,10 +77,10 @@ properties:
+       - const: dclk_vp0
+       - const: dclk_vp1
+       - const: dclk_vp2
+-      - const: dclk_vp3
+-      - const: pclk_vop
+-      - const: pll_hdmiphy0
+-      - const: pll_hdmiphy1
++      - {}
++      - {}
++      - {}
++      - {}
+ 
+   rockchip,grf:
+     $ref: /schemas/types.yaml#/definitions/phandle
+@@ -175,10 +175,24 @@ allOf:
+     then:
+       properties:
+         clocks:
+-          maxItems: 5
++          minItems: 5
++          items:
++            - {}
++            - {}
++            - {}
++            - {}
++            - {}
++            - description: Alternative pixel clock provided by HDMI PHY PLL.
+ 
+         clock-names:
+-          maxItems: 5
++          minItems: 5
++          items:
++            - {}
++            - {}
++            - {}
++            - {}
++            - {}
++            - const: pll_hdmiphy0
+ 
+         interrupts:
+           minItems: 4
+@@ -208,11 +222,29 @@ allOf:
+       properties:
+         clocks:
+           minItems: 7
+-          maxItems: 9
++          items:
++            - {}
++            - {}
++            - {}
++            - {}
++            - {}
++            - description: Pixel clock for video port 3.
++            - description: Peripheral(vop grf/dsi) clock.
++            - description: Alternative pixel clock provided by HDMI0 PHY PLL.
++            - description: Alternative pixel clock provided by HDMI1 PHY PLL.
+ 
+         clock-names:
+           minItems: 7
+-          maxItems: 9
++          items:
++            - {}
++            - {}
++            - {}
++            - {}
++            - {}
++            - const: dclk_vp3
++            - const: pclk_vop
++            - const: pll_hdmiphy0
++            - const: pll_hdmiphy1
+ 
+         interrupts:
+           maxItems: 1
+diff --git a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml
+index 4ebea60b8c5ba5..8c52fa0ea5f8ee 100644
+--- a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml
++++ b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml
+@@ -25,7 +25,7 @@ properties:
+     maxItems: 1
+ 
+   clocks:
+-    minItems: 2
++    maxItems: 2
+ 
+   clock-names:
+     items:
+diff --git a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml
+index bc5594d1864301..300bf2252c3e8e 100644
+--- a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml
++++ b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml
+@@ -20,7 +20,7 @@ properties:
+     maxItems: 2
+ 
+   clocks:
+-    minItems: 1
++    maxItems: 1
+ 
+   clock-names:
+     items:
+diff --git a/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml b/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml
+index 32fd535a514ad1..20f341d25ebc3f 100644
+--- a/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml
++++ b/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml
+@@ -33,6 +33,10 @@ properties:
+ 
+   vcc-supply: true
+ 
++  mediatek,ufs-disable-mcq:
++    $ref: /schemas/types.yaml#/definitions/flag
++    description: The mask to disable MCQ (Multi-Circular Queue) for UFS host.
++
+ required:
+   - compatible
+   - clocks
+diff --git a/Documentation/networking/mptcp-sysctl.rst b/Documentation/networking/mptcp-sysctl.rst
+index 5bfab01eff5a9d..1683c139821e3b 100644
+--- a/Documentation/networking/mptcp-sysctl.rst
++++ b/Documentation/networking/mptcp-sysctl.rst
+@@ -12,6 +12,8 @@ add_addr_timeout - INTEGER (seconds)
+ 	resent to an MPTCP peer that has not acknowledged a previous
+ 	ADD_ADDR message.
+ 
++	Do not retransmit if set to 0.
++
+ 	The default value matches TCP_RTO_MAX. This is a per-namespace
+ 	sysctl.
+ 
+diff --git a/Makefile b/Makefile
+index df121383064380..e5509045fe3f3a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+@@ -1134,7 +1134,7 @@ KBUILD_USERCFLAGS  += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD
+ KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+ 
+ # userspace programs are linked via the compiler, use the correct linker
+-ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy)
++ifdef CONFIG_CC_IS_CLANG
+ KBUILD_USERLDFLAGS += --ld-path=$(LD)
+ endif
+ 
+diff --git a/arch/arm/lib/crypto/poly1305-glue.c b/arch/arm/lib/crypto/poly1305-glue.c
+index 2603b0771f2c4c..ca6dc553370546 100644
+--- a/arch/arm/lib/crypto/poly1305-glue.c
++++ b/arch/arm/lib/crypto/poly1305-glue.c
+@@ -7,6 +7,7 @@
+ 
+ #include <asm/hwcap.h>
+ #include <asm/neon.h>
++#include <asm/simd.h>
+ #include <crypto/internal/poly1305.h>
+ #include <linux/cpufeature.h>
+ #include <linux/jump_label.h>
+@@ -39,7 +40,7 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src,
+ {
+ 	len = round_down(len, POLY1305_BLOCK_SIZE);
+ 	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
+-	    static_branch_likely(&have_neon)) {
++	    static_branch_likely(&have_neon) && likely(may_use_simd())) {
+ 		do {
+ 			unsigned int todo = min_t(unsigned int, len, SZ_4K);
+ 
+diff --git a/arch/arm64/boot/dts/apple/t8012-j132.dts b/arch/arm64/boot/dts/apple/t8012-j132.dts
+index 778a69be18dd81..7dcac51703ff60 100644
+--- a/arch/arm64/boot/dts/apple/t8012-j132.dts
++++ b/arch/arm64/boot/dts/apple/t8012-j132.dts
+@@ -7,6 +7,7 @@
+ /dts-v1/;
+ 
+ #include "t8012-jxxx.dtsi"
++#include "t8012-touchbar.dtsi"
+ 
+ / {
+ 	model = "Apple T2 MacBookPro15,2 (j132)";
+diff --git a/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts b/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts
+index 61eec1aff32ef3..b8ce433b93b1b4 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts
+@@ -89,7 +89,7 @@ key-volup {
+ 	memory@40000000 {
+ 		device_type = "memory";
+ 		reg = <0x0 0x40000000 0x3d800000>,
+-		      <0x0 0x80000000 0x7d800000>;
++		      <0x0 0x80000000 0x40000000>;
+ 	};
+ 
+ 	pwrseq_mmc1: pwrseq-mmc1 {
+diff --git a/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts b/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts
+index eb97dcc415423f..b1d9eff5a82702 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts
++++ b/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts
+@@ -78,7 +78,7 @@ key-volup {
+ 	memory@40000000 {
+ 		device_type = "memory";
+ 		reg = <0x0 0x40000000 0x3e400000>,
+-		      <0x0 0x80000000 0xbe400000>;
++		      <0x0 0x80000000 0x80000000>;
+ 	};
+ 
+ 	pwrseq_mmc1: pwrseq-mmc1 {
+diff --git a/arch/arm64/boot/dts/exynos/exynos7870.dtsi b/arch/arm64/boot/dts/exynos/exynos7870.dtsi
+index 5cba8c9bb40340..d5d347623b9038 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7870.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos7870.dtsi
+@@ -327,6 +327,7 @@ usb@0 {
+ 				phys = <&usbdrd_phy 0>;
+ 
+ 				usb-role-switch;
++				snps,usb2-gadget-lpm-disable;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/exynos/google/gs101.dtsi b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+index 94aa0ffb9a9760..0f6592658b982d 100644
+--- a/arch/arm64/boot/dts/exynos/google/gs101.dtsi
++++ b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+@@ -1371,6 +1371,7 @@ ufs_0: ufs@14700000 {
+ 				 <&cmu_hsi2 CLK_GOUT_HSI2_SYSREG_HSI2_PCLK>;
+ 			clock-names = "core_clk", "sclk_unipro_main", "fmp",
+ 				      "aclk", "pclk", "sysreg";
++			dma-coherent;
+ 			freq-table-hz = <0 0>, <0 0>, <0 0>, <0 0>, <0 0>, <0 0>;
+ 			pinctrl-0 = <&ufs_rst_n &ufs_refclk_out>;
+ 			pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3576.dtsi b/arch/arm64/boot/dts/rockchip/rk3576.dtsi
+index 64812e3bcb613c..036b50936cb25f 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3576.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3576.dtsi
+@@ -1155,12 +1155,14 @@ vop: vop@27d00000 {
+ 				 <&cru HCLK_VOP>,
+ 				 <&cru DCLK_VP0>,
+ 				 <&cru DCLK_VP1>,
+-				 <&cru DCLK_VP2>;
++				 <&cru DCLK_VP2>,
++				 <&hdptxphy>;
+ 			clock-names = "aclk",
+ 				      "hclk",
+ 				      "dclk_vp0",
+ 				      "dclk_vp1",
+-				      "dclk_vp2";
++				      "dclk_vp2",
++				      "pll_hdmiphy0";
+ 			iommus = <&vop_mmu>;
+ 			power-domains = <&power RK3576_PD_VOP>;
+ 			rockchip,grf = <&sys_grf>;
+@@ -2391,6 +2393,7 @@ hdptxphy: hdmiphy@2b000000 {
+ 			reg = <0x0 0x2b000000 0x0 0x2000>;
+ 			clocks = <&cru CLK_PHY_REF_SRC>, <&cru PCLK_HDPTX_APB>;
+ 			clock-names = "ref", "apb";
++			#clock-cells = <0>;
+ 			resets = <&cru SRST_P_HDPTX_APB>, <&cru SRST_HDPTX_INIT>,
+ 				 <&cru SRST_HDPTX_CMN>, <&cru SRST_HDPTX_LANE>;
+ 			reset-names = "apb", "init", "cmn", "lane";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
+index 60ad272982ad51..6daea8961fdd65 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi
+@@ -398,17 +398,6 @@ rk806_dvs3_null: dvs3-null-pins {
+ 
+ 		regulators {
+ 			vdd_gpu_s0: vdd_gpu_mem_s0: dcdc-reg1 {
+-				/*
+-				 * RK3588's GPU power domain cannot be enabled
+-				 * without this regulator active, but it
+-				 * doesn't have to be on when the GPU PD is
+-				 * disabled.  Because the PD binding does not
+-				 * currently allow us to express this
+-				 * relationship, we have no choice but to do
+-				 * this instead:
+-				 */
+-				regulator-always-on;
+-
+ 				regulator-boot-on;
+ 				regulator-min-microvolt = <550000>;
+ 				regulator-max-microvolt = <950000>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts b/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts
+index aafdb90c0eb700..4609f366006e4c 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts
+@@ -74,6 +74,22 @@ vddshv_sdio: regulator-4 {
+ };
+ 
+ &main_pmx0 {
++	main_mmc0_pins_default: main-mmc0-default-pins {
++		bootph-all;
++		pinctrl-single,pins = <
++			AM62X_IOPAD(0x220, PIN_INPUT, 0) /* (V3) MMC0_CMD */
++			AM62X_IOPAD(0x218, PIN_INPUT, 0) /* (Y1) MMC0_CLK */
++			AM62X_IOPAD(0x214, PIN_INPUT, 0) /* (V2) MMC0_DAT0 */
++			AM62X_IOPAD(0x210, PIN_INPUT, 0) /* (V1) MMC0_DAT1 */
++			AM62X_IOPAD(0x20c, PIN_INPUT, 0) /* (W2) MMC0_DAT2 */
++			AM62X_IOPAD(0x208, PIN_INPUT, 0) /* (W1) MMC0_DAT3 */
++			AM62X_IOPAD(0x204, PIN_INPUT, 0) /* (Y2) MMC0_DAT4 */
++			AM62X_IOPAD(0x200, PIN_INPUT, 0) /* (W3) MMC0_DAT5 */
++			AM62X_IOPAD(0x1fc, PIN_INPUT, 0) /* (W4) MMC0_DAT6 */
++			AM62X_IOPAD(0x1f8, PIN_INPUT, 0) /* (V4) MMC0_DAT7 */
++		>;
++	};
++
+ 	vddshv_sdio_pins_default: vddshv-sdio-default-pins {
+ 		pinctrl-single,pins = <
+ 			AM62X_IOPAD(0x07c, PIN_OUTPUT, 7) /* (M19) GPMC0_CLK.GPIO0_31 */
+@@ -144,6 +160,14 @@ exp2: gpio@23 {
+ 	};
+ };
+ 
++&sdhci0 {
++	bootph-all;
++	non-removable;
++	pinctrl-names = "default";
++	pinctrl-0 = <&main_mmc0_pins_default>;
++	status = "okay";
++};
++
+ &sdhci1 {
+ 	vmmc-supply = <&vdd_mmc1>;
+ 	vqmmc-supply = <&vddshv_sdio>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 9e0b6eee9ac77d..120ba8f9dd0e7e 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -553,7 +553,6 @@ sdhci0: mmc@fa10000 {
+ 		clocks = <&k3_clks 57 5>, <&k3_clks 57 6>;
+ 		clock-names = "clk_ahb", "clk_xin";
+ 		bus-width = <8>;
+-		mmc-ddr-1_8v;
+ 		mmc-hs200-1_8v;
+ 		ti,clkbuf-sel = <0x7>;
+ 		ti,otap-del-sel-legacy = <0x0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+index 1ea8f64b1b3bd3..bc2289d7477457 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+@@ -507,16 +507,16 @@ AM62X_IOPAD(0x01ec, PIN_INPUT_PULLUP, 0) /* (A17) I2C1_SDA */ /* SODIMM 12 */
+ 	/* Verdin I2C_2_DSI */
+ 	pinctrl_i2c2: main-i2c2-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62X_IOPAD(0x00b0, PIN_INPUT, 1) /* (K22) GPMC0_CSn2.I2C2_SCL */ /* SODIMM 55 */
+-			AM62X_IOPAD(0x00b4, PIN_INPUT, 1) /* (K24) GPMC0_CSn3.I2C2_SDA */ /* SODIMM 53 */
++			AM62X_IOPAD(0x00b0, PIN_INPUT_PULLUP, 1) /* (K22) GPMC0_CSn2.I2C2_SCL */ /* SODIMM 55 */
++			AM62X_IOPAD(0x00b4, PIN_INPUT_PULLUP, 1) /* (K24) GPMC0_CSn3.I2C2_SDA */ /* SODIMM 53 */
+ 		>;
+ 	};
+ 
+ 	/* Verdin I2C_4_CSI */
+ 	pinctrl_i2c3: main-i2c3-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62X_IOPAD(0x01d0, PIN_INPUT, 2) /* (A15) UART0_CTSn.I2C3_SCL */ /* SODIMM 95 */
+-			AM62X_IOPAD(0x01d4, PIN_INPUT, 2) /* (B15) UART0_RTSn.I2C3_SDA */ /* SODIMM 93 */
++			AM62X_IOPAD(0x01d0, PIN_INPUT_PULLUP, 2) /* (A15) UART0_CTSn.I2C3_SCL */ /* SODIMM 95 */
++			AM62X_IOPAD(0x01d4, PIN_INPUT_PULLUP, 2) /* (B15) UART0_RTSn.I2C3_SDA */ /* SODIMM 93 */
+ 		>;
+ 	};
+ 
+@@ -786,8 +786,8 @@ AM62X_MCU_IOPAD(0x0010, PIN_INPUT, 7) /* (C9) MCU_SPI0_D1.MCU_GPIO0_4 */ /* SODI
+ 	/* Verdin I2C_3_HDMI */
+ 	pinctrl_mcu_i2c0: mcu-i2c0-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62X_MCU_IOPAD(0x0044, PIN_INPUT, 0) /*  (A8) MCU_I2C0_SCL */ /* SODIMM 59 */
+-			AM62X_MCU_IOPAD(0x0048, PIN_INPUT, 0) /* (D10) MCU_I2C0_SDA */ /* SODIMM 57 */
++			AM62X_MCU_IOPAD(0x0044, PIN_INPUT_PULLUP, 0) /*  (A8) MCU_I2C0_SCL */ /* SODIMM 59 */
++			AM62X_MCU_IOPAD(0x0048, PIN_INPUT_PULLUP, 0) /* (D10) MCU_I2C0_SDA */ /* SODIMM 57 */
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am625-sk.dts b/arch/arm64/boot/dts/ti/k3-am625-sk.dts
+index 2fbfa371934575..d240165bda9c57 100644
+--- a/arch/arm64/boot/dts/ti/k3-am625-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am625-sk.dts
+@@ -106,6 +106,22 @@ vcc_1v8: regulator-5 {
+ };
+ 
+ &main_pmx0 {
++	main_mmc0_pins_default: main-mmc0-default-pins {
++		bootph-all;
++		pinctrl-single,pins = <
++			AM62X_IOPAD(0x220, PIN_INPUT, 0) /* (Y3) MMC0_CMD */
++			AM62X_IOPAD(0x218, PIN_INPUT, 0) /* (AB1) MMC0_CLK */
++			AM62X_IOPAD(0x214, PIN_INPUT, 0) /* (AA2) MMC0_DAT0 */
++			AM62X_IOPAD(0x210, PIN_INPUT_PULLUP, 0) /* (AA1) MMC0_DAT1 */
++			AM62X_IOPAD(0x20c, PIN_INPUT_PULLUP, 0) /* (AA3) MMC0_DAT2 */
++			AM62X_IOPAD(0x208, PIN_INPUT_PULLUP, 0) /* (Y4) MMC0_DAT3 */
++			AM62X_IOPAD(0x204, PIN_INPUT_PULLUP, 0) /* (AB2) MMC0_DAT4 */
++			AM62X_IOPAD(0x200, PIN_INPUT_PULLUP, 0) /* (AC1) MMC0_DAT5 */
++			AM62X_IOPAD(0x1fc, PIN_INPUT_PULLUP, 0) /* (AD2) MMC0_DAT6 */
++			AM62X_IOPAD(0x1f8, PIN_INPUT_PULLUP, 0) /* (AC2) MMC0_DAT7 */
++		>;
++	};
++
+ 	main_rgmii2_pins_default: main-rgmii2-default-pins {
+ 		bootph-all;
+ 		pinctrl-single,pins = <
+@@ -195,6 +211,14 @@ exp1: gpio@22 {
+ 	};
+ };
+ 
++&sdhci0 {
++	bootph-all;
++	non-removable;
++	pinctrl-names = "default";
++	pinctrl-0 = <&main_mmc0_pins_default>;
++	status = "okay";
++};
++
+ &sdhci1 {
+ 	vmmc-supply = <&vdd_mmc1>;
+ 	vqmmc-supply = <&vdd_sd_dv>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
+index b2775902601495..2129da4d7185b4 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts
+@@ -301,8 +301,8 @@ AM62AX_IOPAD(0x1cc, PIN_OUTPUT, 0) /* (D15) UART0_TXD */
+ 
+ 	main_uart1_pins_default: main-uart1-default-pins {
+ 		pinctrl-single,pins = <
+-			AM62AX_IOPAD(0x01e8, PIN_INPUT, 1) /* (C17) I2C1_SCL.UART1_RXD */
+-			AM62AX_IOPAD(0x01ec, PIN_OUTPUT, 1) /* (E17) I2C1_SDA.UART1_TXD */
++			AM62AX_IOPAD(0x01ac, PIN_INPUT, 2) /* (B21) MCASP0_AFSR.UART1_RXD */
++			AM62AX_IOPAD(0x01b0, PIN_OUTPUT, 2) /* (A21) MCASP0_ACLKR.UART1_TXD */
+ 			AM62AX_IOPAD(0x0194, PIN_INPUT, 2) /* (C19) MCASP0_AXR3.UART1_CTSn */
+ 			AM62AX_IOPAD(0x0198, PIN_OUTPUT, 2) /* (B19) MCASP0_AXR2.UART1_RTSn */
+ 		>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
+index ee8337bfbbfd3a..13e1d36123d51f 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
+@@ -203,22 +203,6 @@ AM62X_IOPAD(0x0b4, PIN_INPUT_PULLUP, 1) /* (K24/H19) GPMC0_CSn3.I2C2_SDA */
+ 		>;
+ 	};
+ 
+-	main_mmc0_pins_default: main-mmc0-default-pins {
+-		bootph-all;
+-		pinctrl-single,pins = <
+-			AM62X_IOPAD(0x220, PIN_INPUT, 0) /* (Y3/V3) MMC0_CMD */
+-			AM62X_IOPAD(0x218, PIN_INPUT, 0) /* (AB1/Y1) MMC0_CLK */
+-			AM62X_IOPAD(0x214, PIN_INPUT, 0) /* (AA2/V2) MMC0_DAT0 */
+-			AM62X_IOPAD(0x210, PIN_INPUT, 0) /* (AA1/V1) MMC0_DAT1 */
+-			AM62X_IOPAD(0x20c, PIN_INPUT, 0) /* (AA3/W2) MMC0_DAT2 */
+-			AM62X_IOPAD(0x208, PIN_INPUT, 0) /* (Y4/W1) MMC0_DAT3 */
+-			AM62X_IOPAD(0x204, PIN_INPUT, 0) /* (AB2/Y2) MMC0_DAT4 */
+-			AM62X_IOPAD(0x200, PIN_INPUT, 0) /* (AC1/W3) MMC0_DAT5 */
+-			AM62X_IOPAD(0x1fc, PIN_INPUT, 0) /* (AD2/W4) MMC0_DAT6 */
+-			AM62X_IOPAD(0x1f8, PIN_INPUT, 0) /* (AC2/V4) MMC0_DAT7 */
+-		>;
+-	};
+-
+ 	main_mmc1_pins_default: main-mmc1-default-pins {
+ 		bootph-all;
+ 		pinctrl-single,pins = <
+@@ -457,14 +441,6 @@ &main_i2c2 {
+ 	clock-frequency = <400000>;
+ };
+ 
+-&sdhci0 {
+-	bootph-all;
+-	status = "okay";
+-	non-removable;
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&main_mmc0_pins_default>;
+-};
+-
+ &sdhci1 {
+ 	/* SD/MMC */
+ 	bootph-all;
+diff --git a/arch/arm64/boot/dts/ti/k3-pinctrl.h b/arch/arm64/boot/dts/ti/k3-pinctrl.h
+index cac7cccc111212..38590188dd51ca 100644
+--- a/arch/arm64/boot/dts/ti/k3-pinctrl.h
++++ b/arch/arm64/boot/dts/ti/k3-pinctrl.h
+@@ -8,6 +8,7 @@
+ #ifndef DTS_ARM64_TI_K3_PINCTRL_H
+ #define DTS_ARM64_TI_K3_PINCTRL_H
+ 
++#define ST_EN_SHIFT		(14)
+ #define PULLUDEN_SHIFT		(16)
+ #define PULLTYPESEL_SHIFT	(17)
+ #define RXACTIVE_SHIFT		(18)
+@@ -19,6 +20,10 @@
+ #define DS_PULLUD_EN_SHIFT	(27)
+ #define DS_PULLTYPE_SEL_SHIFT	(28)
+ 
++/* Schmitt trigger configuration */
++#define ST_DISABLE		(0 << ST_EN_SHIFT)
++#define ST_ENABLE		(1 << ST_EN_SHIFT)
++
+ #define PULL_DISABLE		(1 << PULLUDEN_SHIFT)
+ #define PULL_ENABLE		(0 << PULLUDEN_SHIFT)
+ 
+@@ -32,9 +37,13 @@
+ #define PIN_OUTPUT		(INPUT_DISABLE | PULL_DISABLE)
+ #define PIN_OUTPUT_PULLUP	(INPUT_DISABLE | PULL_UP)
+ #define PIN_OUTPUT_PULLDOWN	(INPUT_DISABLE | PULL_DOWN)
+-#define PIN_INPUT		(INPUT_EN | PULL_DISABLE)
+-#define PIN_INPUT_PULLUP	(INPUT_EN | PULL_UP)
+-#define PIN_INPUT_PULLDOWN	(INPUT_EN | PULL_DOWN)
++#define PIN_INPUT		(INPUT_EN | ST_ENABLE | PULL_DISABLE)
++#define PIN_INPUT_PULLUP	(INPUT_EN | ST_ENABLE | PULL_UP)
++#define PIN_INPUT_PULLDOWN	(INPUT_EN | ST_ENABLE | PULL_DOWN)
++/* Input configurations with Schmitt Trigger disabled */
++#define PIN_INPUT_NOST		(INPUT_EN | PULL_DISABLE)
++#define PIN_INPUT_PULLUP_NOST	(INPUT_EN | PULL_UP)
++#define PIN_INPUT_PULLDOWN_NOST	(INPUT_EN | PULL_DOWN)
+ 
+ #define PIN_DEBOUNCE_DISABLE	(0 << DEBOUNCE_SHIFT)
+ #define PIN_DEBOUNCE_CONF1	(1 << DEBOUNCE_SHIFT)
+diff --git a/arch/arm64/lib/crypto/poly1305-glue.c b/arch/arm64/lib/crypto/poly1305-glue.c
+index c9a74766785bd7..31aea21ce42f79 100644
+--- a/arch/arm64/lib/crypto/poly1305-glue.c
++++ b/arch/arm64/lib/crypto/poly1305-glue.c
+@@ -7,6 +7,7 @@
+ 
+ #include <asm/hwcap.h>
+ #include <asm/neon.h>
++#include <asm/simd.h>
+ #include <crypto/internal/poly1305.h>
+ #include <linux/cpufeature.h>
+ #include <linux/jump_label.h>
+@@ -33,7 +34,7 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src,
+ 			  unsigned int len, u32 padbit)
+ {
+ 	len = round_down(len, POLY1305_BLOCK_SIZE);
+-	if (static_branch_likely(&have_neon)) {
++	if (static_branch_likely(&have_neon) && likely(may_use_simd())) {
+ 		do {
+ 			unsigned int todo = min_t(unsigned int, len, SZ_4K);
+ 
+diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
+index b0703a4e02a253..a3a9759414f40f 100644
+--- a/arch/loongarch/Makefile
++++ b/arch/loongarch/Makefile
+@@ -102,7 +102,13 @@ KBUILD_CFLAGS			+= $(call cc-option,-mthin-add-sub) $(call cc-option,-Wa$(comma)
+ 
+ ifdef CONFIG_OBJTOOL
+ ifdef CONFIG_CC_HAS_ANNOTATE_TABLEJUMP
++# The annotate-tablejump option cannot be passed to the LLVM backend when LTO is
++# enabled. To make the linker aware of it under LTO, '--loongarch-annotate-tablejump'
++# also needs to be passed via '-mllvm' to ld.lld.
+ KBUILD_CFLAGS			+= -mannotate-tablejump
++ifdef CONFIG_LTO_CLANG
++KBUILD_LDFLAGS			+= -mllvm --loongarch-annotate-tablejump
++endif
+ else
+ KBUILD_CFLAGS			+= -fno-jump-tables # keep compatibility with older compilers
+ endif
+diff --git a/arch/loongarch/kernel/module-sections.c b/arch/loongarch/kernel/module-sections.c
+index e2f30ff9afde82..a43ba7f9f9872a 100644
+--- a/arch/loongarch/kernel/module-sections.c
++++ b/arch/loongarch/kernel/module-sections.c
+@@ -8,6 +8,7 @@
+ #include <linux/module.h>
+ #include <linux/moduleloader.h>
+ #include <linux/ftrace.h>
++#include <linux/sort.h>
+ 
+ Elf_Addr module_emit_got_entry(struct module *mod, Elf_Shdr *sechdrs, Elf_Addr val)
+ {
+@@ -61,39 +62,38 @@ Elf_Addr module_emit_plt_entry(struct module *mod, Elf_Shdr *sechdrs, Elf_Addr v
+ 	return (Elf_Addr)&plt[nr];
+ }
+ 
+-static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y)
+-{
+-	return x->r_info == y->r_info && x->r_addend == y->r_addend;
+-}
++#define cmp_3way(a, b)  ((a) < (b) ? -1 : (a) > (b))
+ 
+-static bool duplicate_rela(const Elf_Rela *rela, int idx)
++static int compare_rela(const void *x, const void *y)
+ {
+-	int i;
++	int ret;
++	const Elf_Rela *rela_x = x, *rela_y = y;
+ 
+-	for (i = 0; i < idx; i++) {
+-		if (is_rela_equal(&rela[i], &rela[idx]))
+-			return true;
+-	}
++	ret = cmp_3way(rela_x->r_info, rela_y->r_info);
++	if (ret == 0)
++		ret = cmp_3way(rela_x->r_addend, rela_y->r_addend);
+ 
+-	return false;
++	return ret;
+ }
+ 
+ static void count_max_entries(Elf_Rela *relas, int num,
+ 			      unsigned int *plts, unsigned int *gots)
+ {
+-	unsigned int i, type;
++	unsigned int i;
++
++	sort(relas, num, sizeof(Elf_Rela), compare_rela, NULL);
+ 
+ 	for (i = 0; i < num; i++) {
+-		type = ELF_R_TYPE(relas[i].r_info);
+-		switch (type) {
++		if (i && !compare_rela(&relas[i-1], &relas[i]))
++			continue;
++
++		switch (ELF_R_TYPE(relas[i].r_info)) {
+ 		case R_LARCH_SOP_PUSH_PLT_PCREL:
+ 		case R_LARCH_B26:
+-			if (!duplicate_rela(relas, i))
+-				(*plts)++;
++			(*plts)++;
+ 			break;
+ 		case R_LARCH_GOT_PC_HI20:
+-			if (!duplicate_rela(relas, i))
+-				(*gots)++;
++			(*gots)++;
+ 			break;
+ 		default:
+ 			break; /* Do nothing. */
+diff --git a/arch/loongarch/kvm/intc/eiointc.c b/arch/loongarch/kvm/intc/eiointc.c
+index a75f865d6fb96c..0207cfe1dbd6c7 100644
+--- a/arch/loongarch/kvm/intc/eiointc.c
++++ b/arch/loongarch/kvm/intc/eiointc.c
+@@ -9,7 +9,7 @@
+ 
+ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
+ {
+-	int ipnum, cpu, cpuid, irq_index, irq_mask, irq;
++	int ipnum, cpu, cpuid, irq;
+ 	struct kvm_vcpu *vcpu;
+ 
+ 	for (irq = 0; irq < EIOINTC_IRQS; irq++) {
+@@ -18,8 +18,6 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
+ 			ipnum = count_trailing_zeros(ipnum);
+ 			ipnum = (ipnum >= 0 && ipnum < 4) ? ipnum : 0;
+ 		}
+-		irq_index = irq / 32;
+-		irq_mask = BIT(irq & 0x1f);
+ 
+ 		cpuid = s->coremap.reg_u8[irq];
+ 		vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid);
+@@ -27,16 +25,16 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s)
+ 			continue;
+ 
+ 		cpu = vcpu->vcpu_id;
+-		if (!!(s->coreisr.reg_u32[cpu][irq_index] & irq_mask))
+-			set_bit(irq, s->sw_coreisr[cpu][ipnum]);
++		if (test_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]))
++			__set_bit(irq, s->sw_coreisr[cpu][ipnum]);
+ 		else
+-			clear_bit(irq, s->sw_coreisr[cpu][ipnum]);
++			__clear_bit(irq, s->sw_coreisr[cpu][ipnum]);
+ 	}
+ }
+ 
+ static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level)
+ {
+-	int ipnum, cpu, found, irq_index, irq_mask;
++	int ipnum, cpu, found;
+ 	struct kvm_vcpu *vcpu;
+ 	struct kvm_interrupt vcpu_irq;
+ 
+@@ -47,20 +45,22 @@ static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level)
+ 	}
+ 
+ 	cpu = s->sw_coremap[irq];
+-	vcpu = kvm_get_vcpu(s->kvm, cpu);
+-	irq_index = irq / 32;
+-	irq_mask = BIT(irq & 0x1f);
++	vcpu = kvm_get_vcpu_by_id(s->kvm, cpu);
++	if (unlikely(vcpu == NULL)) {
++		kvm_err("%s: invalid target cpu: %d\n", __func__, cpu);
++		return;
++	}
+ 
+ 	if (level) {
+ 		/* if not enable return false */
+-		if (((s->enable.reg_u32[irq_index]) & irq_mask) == 0)
++		if (!test_bit(irq, (unsigned long *)s->enable.reg_u32))
+ 			return;
+-		s->coreisr.reg_u32[cpu][irq_index] |= irq_mask;
++		__set_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]);
+ 		found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS);
+-		set_bit(irq, s->sw_coreisr[cpu][ipnum]);
++		__set_bit(irq, s->sw_coreisr[cpu][ipnum]);
+ 	} else {
+-		s->coreisr.reg_u32[cpu][irq_index] &= ~irq_mask;
+-		clear_bit(irq, s->sw_coreisr[cpu][ipnum]);
++		__clear_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]);
++		__clear_bit(irq, s->sw_coreisr[cpu][ipnum]);
+ 		found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS);
+ 	}
+ 
+@@ -110,8 +110,8 @@ void eiointc_set_irq(struct loongarch_eiointc *s, int irq, int level)
+ 	unsigned long flags;
+ 	unsigned long *isr = (unsigned long *)s->isr.reg_u8;
+ 
+-	level ? set_bit(irq, isr) : clear_bit(irq, isr);
+ 	spin_lock_irqsave(&s->lock, flags);
++	level ? __set_bit(irq, isr) : __clear_bit(irq, isr);
+ 	eiointc_update_irq(s, irq, level);
+ 	spin_unlock_irqrestore(&s->lock, flags);
+ }
+diff --git a/arch/loongarch/kvm/intc/ipi.c b/arch/loongarch/kvm/intc/ipi.c
+index fe734dc062ed47..4859e320e3a166 100644
+--- a/arch/loongarch/kvm/intc/ipi.c
++++ b/arch/loongarch/kvm/intc/ipi.c
+@@ -99,7 +99,7 @@ static void write_mailbox(struct kvm_vcpu *vcpu, int offset, uint64_t data, int
+ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
+ {
+ 	int i, idx, ret;
+-	uint32_t val = 0, mask = 0;
++	uint64_t val = 0, mask = 0;
+ 
+ 	/*
+ 	 * Bit 27-30 is mask for byte writing.
+@@ -108,7 +108,7 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
+ 	if ((data >> 27) & 0xf) {
+ 		/* Read the old val */
+ 		idx = srcu_read_lock(&vcpu->kvm->srcu);
+-		ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
++		ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, 4, &val);
+ 		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ 		if (unlikely(ret)) {
+ 			kvm_err("%s: : read data from addr %llx failed\n", __func__, addr);
+@@ -124,7 +124,7 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
+ 	}
+ 	val |= ((uint32_t)(data >> 32) & ~mask);
+ 	idx = srcu_read_lock(&vcpu->kvm->srcu);
+-	ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
++	ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, 4, &val);
+ 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ 	if (unlikely(ret))
+ 		kvm_err("%s: : write data to addr %llx failed\n", __func__, addr);
+@@ -318,7 +318,7 @@ static int kvm_ipi_regs_access(struct kvm_device *dev,
+ 	cpu = (attr->attr >> 16) & 0x3ff;
+ 	addr = attr->attr & 0xff;
+ 
+-	vcpu = kvm_get_vcpu(dev->kvm, cpu);
++	vcpu = kvm_get_vcpu_by_id(dev->kvm, cpu);
+ 	if (unlikely(vcpu == NULL)) {
+ 		kvm_err("%s: invalid target cpu: %d\n", __func__, cpu);
+ 		return -EINVAL;
+diff --git a/arch/loongarch/kvm/intc/pch_pic.c b/arch/loongarch/kvm/intc/pch_pic.c
+index 08fce845f66803..ef5044796b7a6e 100644
+--- a/arch/loongarch/kvm/intc/pch_pic.c
++++ b/arch/loongarch/kvm/intc/pch_pic.c
+@@ -195,6 +195,11 @@ static int kvm_pch_pic_read(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 	}
+ 
++	if (addr & (len - 1)) {
++		kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len);
++		return -EINVAL;
++	}
++
+ 	/* statistics of pch pic reading */
+ 	vcpu->kvm->stat.pch_pic_read_exits++;
+ 	ret = loongarch_pch_pic_read(s, addr, len, val);
+@@ -302,6 +307,11 @@ static int kvm_pch_pic_write(struct kvm_vcpu *vcpu,
+ 		return -EINVAL;
+ 	}
+ 
++	if (addr & (len - 1)) {
++		kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len);
++		return -EINVAL;
++	}
++
+ 	/* statistics of pch pic writing */
+ 	vcpu->kvm->stat.pch_pic_write_exits++;
+ 	ret = loongarch_pch_pic_write(s, addr, len, val);
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 5af32ec62cb16a..ca35c01fa36382 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -1277,9 +1277,11 @@ int kvm_own_lbt(struct kvm_vcpu *vcpu)
+ 		return -EINVAL;
+ 
+ 	preempt_disable();
+-	set_csr_euen(CSR_EUEN_LBTEN);
+-	_restore_lbt(&vcpu->arch.lbt);
+-	vcpu->arch.aux_inuse |= KVM_LARCH_LBT;
++	if (!(vcpu->arch.aux_inuse & KVM_LARCH_LBT)) {
++		set_csr_euen(CSR_EUEN_LBTEN);
++		_restore_lbt(&vcpu->arch.lbt);
++		vcpu->arch.aux_inuse |= KVM_LARCH_LBT;
++	}
+ 	preempt_enable();
+ 
+ 	return 0;
+diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
+index ba22bc2f3d6d86..d96685489aac98 100644
+--- a/arch/m68k/kernel/head.S
++++ b/arch/m68k/kernel/head.S
+@@ -3400,6 +3400,7 @@ L(console_clear_loop):
+ 
+ 	movel	%d4,%d1				/* screen height in pixels */
+ 	divul	%a0@(FONT_DESC_HEIGHT),%d1	/* d1 = max num rows */
++	subql	#1,%d1				/* row range is 0 to num - 1 */
+ 
+ 	movel	%d0,%a2@(Lconsole_struct_num_columns)
+ 	movel	%d1,%a2@(Lconsole_struct_num_rows)
+@@ -3546,15 +3547,14 @@ func_start	console_putc,%a0/%a1/%d0-%d7
+ 	cmpib	#10,%d7
+ 	jne	L(console_not_lf)
+ 	movel	%a0@(Lconsole_struct_cur_row),%d0
+-	addil	#1,%d0
+-	movel	%d0,%a0@(Lconsole_struct_cur_row)
+ 	movel	%a0@(Lconsole_struct_num_rows),%d1
+ 	cmpl	%d1,%d0
+ 	jcs	1f
+-	subil	#1,%d0
+-	movel	%d0,%a0@(Lconsole_struct_cur_row)
+ 	console_scroll
++	jra	L(console_exit)
+ 1:
++	addql	#1,%d0
++	movel	%d0,%a0@(Lconsole_struct_cur_row)
+ 	jra	L(console_exit)
+ 
+ L(console_not_lf):
+@@ -3581,12 +3581,6 @@ L(console_not_cr):
+  */
+ L(console_not_home):
+ 	movel	%a0@(Lconsole_struct_cur_column),%d0
+-	addql	#1,%a0@(Lconsole_struct_cur_column)
+-	movel	%a0@(Lconsole_struct_num_columns),%d1
+-	cmpl	%d1,%d0
+-	jcs	1f
+-	console_putc	#'\n'	/* recursion is OK! */
+-1:
+ 	movel	%a0@(Lconsole_struct_cur_row),%d1
+ 
+ 	/*
+@@ -3633,6 +3627,23 @@ L(console_do_font_scanline):
+ 	addq	#1,%d1
+ 	dbra	%d7,L(console_read_char_scanline)
+ 
++	/*
++	 *	Register usage in the code below:
++	 *	a0 = pointer to console globals
++	 *	d0 = cursor column
++	 *	d1 = cursor column limit
++	 */
++
++	lea	%pc@(L(console_globals)),%a0
++
++	movel	%a0@(Lconsole_struct_cur_column),%d0
++	addql	#1,%d0
++	movel	%d0,%a0@(Lconsole_struct_cur_column)	/* Update cursor pos */
++	movel	%a0@(Lconsole_struct_num_columns),%d1
++	cmpl	%d1,%d0
++	jcs	L(console_exit)
++	console_putc	#'\n'		/* Line wrap using tail recursion */
++
+ L(console_exit):
+ func_return	console_putc
+ 
+diff --git a/arch/mips/lib/crypto/chacha-core.S b/arch/mips/lib/crypto/chacha-core.S
+index 5755f69cfe0074..706aeb850fb0d6 100644
+--- a/arch/mips/lib/crypto/chacha-core.S
++++ b/arch/mips/lib/crypto/chacha-core.S
+@@ -55,17 +55,13 @@
+ #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+ #define MSB 0
+ #define LSB 3
+-#define ROTx rotl
+-#define ROTR(n) rotr n, 24
+ #define	CPU_TO_LE32(n) \
+-	wsbh	n; \
++	wsbh	n, n; \
+ 	rotr	n, 16;
+ #else
+ #define MSB 3
+ #define LSB 0
+-#define ROTx rotr
+ #define CPU_TO_LE32(n)
+-#define ROTR(n)
+ #endif
+ 
+ #define FOR_EACH_WORD(x) \
+@@ -192,10 +188,10 @@ CONCAT3(.Lchacha_mips_xor_aligned_, PLUS_ONE(x), _b: ;) \
+ 	xor	X(W), X(B); \
+ 	xor	X(Y), X(C); \
+ 	xor	X(Z), X(D); \
+-	rotl	X(V), S;    \
+-	rotl	X(W), S;    \
+-	rotl	X(Y), S;    \
+-	rotl	X(Z), S;
++	rotr	X(V), 32 - S; \
++	rotr	X(W), 32 - S; \
++	rotr	X(Y), 32 - S; \
++	rotr	X(Z), 32 - S;
+ 
+ .text
+ .set	reorder
+@@ -372,21 +368,19 @@ chacha_crypt_arch:
+ 	/* First byte */
+ 	lbu	T1, 0(IN)
+ 	addiu	$at, BYTES, 1
+-	CPU_TO_LE32(SAVED_X)
+-	ROTR(SAVED_X)
+ 	xor	T1, SAVED_X
+ 	sb	T1, 0(OUT)
+ 	beqz	$at, .Lchacha_mips_xor_done
+ 	/* Second byte */
+ 	lbu	T1, 1(IN)
+ 	addiu	$at, BYTES, 2
+-	ROTx	SAVED_X, 8
++	rotr	SAVED_X, 8
+ 	xor	T1, SAVED_X
+ 	sb	T1, 1(OUT)
+ 	beqz	$at, .Lchacha_mips_xor_done
+ 	/* Third byte */
+ 	lbu	T1, 2(IN)
+-	ROTx	SAVED_X, 8
++	rotr	SAVED_X, 8
+ 	xor	T1, SAVED_X
+ 	sb	T1, 2(OUT)
+ 	b	.Lchacha_mips_xor_done
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index 9cd9aa3d16f29a..48ae3c79557a51 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -39,7 +39,9 @@ endif
+ 
+ export LD_BFD
+ 
+-# Set default 32 bits cross compilers for vdso
++# Set default 32-bit cross compilers for vdso.
++# This means that for 64BIT, both the 64-bit tools and the 32-bit tools
++# need to be in the path.
+ CC_ARCHES_32 = hppa hppa2.0 hppa1.1
+ CC_SUFFIXES  = linux linux-gnu unknown-linux-gnu suse-linux
+ CROSS32_COMPILE := $(call cc-cross-prefix, \
+diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
+index 1a86a4370b298a..2c139a4dbf4b86 100644
+--- a/arch/parisc/include/asm/pgtable.h
++++ b/arch/parisc/include/asm/pgtable.h
+@@ -276,7 +276,7 @@ extern unsigned long *empty_zero_page;
+ #define pte_none(x)     (pte_val(x) == 0)
+ #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
+ #define pte_user(x)	(pte_val(x) & _PAGE_USER)
+-#define pte_clear(mm, addr, xp)  set_pte(xp, __pte(0))
++#define pte_clear(mm, addr, xp) set_pte_at((mm), (addr), (xp), __pte(0))
+ 
+ #define pmd_flag(x)	(pmd_val(x) & PxD_FLAG_MASK)
+ #define pmd_address(x)	((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT)
+@@ -392,6 +392,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ 	}
+ }
+ #define set_ptes set_ptes
++#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+ 
+ /* Used for deferring calls to flush_dcache_page() */
+ 
+@@ -456,7 +457,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned
+ 	if (!pte_young(pte)) {
+ 		return 0;
+ 	}
+-	set_pte(ptep, pte_mkold(pte));
++	set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte));
+ 	return 1;
+ }
+ 
+@@ -466,7 +467,7 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *pt
+ struct mm_struct;
+ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+-	set_pte(ptep, pte_wrprotect(*ptep));
++	set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep));
+ }
+ 
+ #define pte_same(A,B)	(pte_val(A) == pte_val(B))
+diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
+index 51f40eaf778065..1013eeba31e5bb 100644
+--- a/arch/parisc/include/asm/special_insns.h
++++ b/arch/parisc/include/asm/special_insns.h
+@@ -32,6 +32,34 @@
+ 	pa;						\
+ })
+ 
++/**
++ * prober_user() - Probe user read access
++ * @sr:		Space register.
++ * @va:		Virtual address.
++ *
++ * Return: Non-zero if address is accessible.
++ *
++ * Due to the way _PAGE_READ is handled in TLB entries, we need
++ * a special check to determine whether a user address is accessible.
++ * The ldb instruction does the initial access check. If it is
++ * successful, the probe instruction checks user access rights.
++ */
++#define prober_user(sr, va)	({			\
++	unsigned long read_allowed;			\
++	__asm__ __volatile__(				\
++		"copy %%r0,%0\n"			\
++		"8:\tldb 0(%%sr%1,%2),%%r0\n"		\
++		"\tproberi (%%sr%1,%2),%3,%0\n"		\
++		"9:\n"					\
++		ASM_EXCEPTIONTABLE_ENTRY(8b, 9b,	\
++				"or %%r0,%%r0,%%r0")	\
++		: "=&r" (read_allowed)			\
++		: "i" (sr), "r" (va), "i" (PRIV_USER)	\
++		: "memory"				\
++	);						\
++	read_allowed;					\
++})
++
+ #define CR_EIEM 15	/* External Interrupt Enable Mask */
+ #define CR_CR16 16	/* CR16 Interval Timer */
+ #define CR_EIRR 23	/* External Interrupt Request Register */
+diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
+index 88d0ae5769dde5..6c531d2c847eb1 100644
+--- a/arch/parisc/include/asm/uaccess.h
++++ b/arch/parisc/include/asm/uaccess.h
+@@ -42,9 +42,24 @@
+ 	__gu_err;					\
+ })
+ 
+-#define __get_user(val, ptr)				\
+-({							\
+-	__get_user_internal(SR_USER, val, ptr);	\
++#define __probe_user_internal(sr, error, ptr)			\
++({								\
++	__asm__("\tproberi (%%sr%1,%2),%3,%0\n"			\
++		"\tcmpiclr,= 1,%0,%0\n"				\
++		"\tldi %4,%0\n"					\
++		: "=r"(error)					\
++		: "i"(sr), "r"(ptr), "i"(PRIV_USER),		\
++		  "i"(-EFAULT));				\
++})
++
++#define __get_user(val, ptr)					\
++({								\
++	register long __gu_err;					\
++								\
++	__gu_err = __get_user_internal(SR_USER, val, ptr);	\
++	if (likely(!__gu_err))					\
++		__probe_user_internal(SR_USER, __gu_err, ptr);	\
++	__gu_err;						\
+ })
+ 
+ #define __get_user_asm(sr, val, ldx, ptr)		\
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index db531e58d70ef0..37ca484cc49511 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -429,7 +429,7 @@ static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr)
+ 	return ptep;
+ }
+ 
+-static inline bool pte_needs_flush(pte_t pte)
++static inline bool pte_needs_cache_flush(pte_t pte)
+ {
+ 	return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_NO_CACHE))
+ 		== (_PAGE_PRESENT | _PAGE_ACCESSED);
+@@ -630,7 +630,7 @@ static void flush_cache_page_if_present(struct vm_area_struct *vma,
+ 	ptep = get_ptep(vma->vm_mm, vmaddr);
+ 	if (ptep) {
+ 		pte = ptep_get(ptep);
+-		needs_flush = pte_needs_flush(pte);
++		needs_flush = pte_needs_cache_flush(pte);
+ 		pte_unmap(ptep);
+ 	}
+ 	if (needs_flush)
+@@ -841,7 +841,7 @@ void flush_cache_vmap(unsigned long start, unsigned long end)
+ 	}
+ 
+ 	vm = find_vm_area((void *)start);
+-	if (WARN_ON_ONCE(!vm)) {
++	if (!vm) {
+ 		flush_cache_all();
+ 		return;
+ 	}
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index ea57bcc21dc5fe..f4bf61a34701e5 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -499,6 +499,12 @@
+ 	 * this happens is quite subtle, read below */
+ 	.macro		make_insert_tlb	spc,pte,prot,tmp
+ 	space_to_prot   \spc \prot        /* create prot id from space */
++
++#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT
++	/* need to drop DMB bit, as it's used as SPECIAL flag */
++	depi		0,_PAGE_SPECIAL_BIT,1,\pte
++#endif
++
+ 	/* The following is the real subtlety.  This is depositing
+ 	 * T <-> _PAGE_REFTRAP
+ 	 * D <-> _PAGE_DIRTY
+@@ -511,17 +517,18 @@
+ 	 * Finally, _PAGE_READ goes in the top bit of PL1 (so we
+ 	 * trigger an access rights trap in user space if the user
+ 	 * tries to read an unreadable page */
+-#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT
+-	/* need to drop DMB bit, as it's used as SPECIAL flag */
+-	depi		0,_PAGE_SPECIAL_BIT,1,\pte
+-#endif
+ 	depd            \pte,8,7,\prot
+ 
+ 	/* PAGE_USER indicates the page can be read with user privileges,
+ 	 * so deposit X1|11 to PL1|PL2 (remember the upper bit of PL1
+-	 * contains _PAGE_READ) */
++	 * contains _PAGE_READ). While the kernel can't directly write
++	 * user pages which have _PAGE_WRITE zero, it can read pages
++	 * which have _PAGE_READ zero (PL <= PL1). Thus, the kernel
++	 * exception fault handler doesn't trigger when reading pages
++	 * that aren't user read accessible */
+ 	extrd,u,*=      \pte,_PAGE_USER_BIT+32,1,%r0
+ 	depdi		7,11,3,\prot
++
+ 	/* If we're a gateway page, drop PL2 back to zero for promotion
+ 	 * to kernel privilege (so we can execute the page as kernel).
+ 	 * Any privilege promotion page always denys read and write */
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 0fa81bf1466b15..f58c4bccfbce0e 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -613,6 +613,9 @@ lws_compare_and_swap32:
+ lws_compare_and_swap:
+ 	/* Trigger memory reference interruptions without writing to memory */
+ 1:	ldw	0(%r26), %r28
++	proberi	(%r26), PRIV_USER, %r28
++	comb,=,n	%r28, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 2:	stbys,e	%r0, 0(%r26)
+ 
+ 	/* Calculate 8-bit hash index from virtual address */
+@@ -767,6 +770,9 @@ cas2_lock_start:
+ 	copy	%r26, %r28
+ 	depi_safe	0, 31, 2, %r28
+ 10:	ldw	0(%r28), %r1
++	proberi	(%r28), PRIV_USER, %r1
++	comb,=,n	%r1, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 11:	stbys,e	%r0, 0(%r28)
+ 
+ 	/* Calculate 8-bit hash index from virtual address */
+@@ -951,41 +957,47 @@ atomic_xchg_begin:
+ 
+ 	/* 8-bit exchange */
+ 1:	ldb	0(%r24), %r20
++	proberi	(%r24), PRIV_USER, %r20
++	comb,=,n	%r20, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 	copy	%r23, %r20
+ 	depi_safe	0, 31, 2, %r20
+ 	b	atomic_xchg_start
+ 2:	stbys,e	%r0, 0(%r20)
+-	nop
+-	nop
+-	nop
+ 
+ 	/* 16-bit exchange */
+ 3:	ldh	0(%r24), %r20
++	proberi	(%r24), PRIV_USER, %r20
++	comb,=,n	%r20, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 	copy	%r23, %r20
+ 	depi_safe	0, 31, 2, %r20
+ 	b	atomic_xchg_start
+ 4:	stbys,e	%r0, 0(%r20)
+-	nop
+-	nop
+-	nop
+ 
+ 	/* 32-bit exchange */
+ 5:	ldw	0(%r24), %r20
++	proberi	(%r24), PRIV_USER, %r20
++	comb,=,n	%r20, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 	b	atomic_xchg_start
+ 6:	stbys,e	%r0, 0(%r23)
+ 	nop
+ 	nop
+-	nop
+-	nop
+-	nop
+ 
+ 	/* 64-bit exchange */
+ #ifdef CONFIG_64BIT
+ 7:	ldd	0(%r24), %r20
++	proberi	(%r24), PRIV_USER, %r20
++	comb,=,n	%r20, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 8:	stdby,e	%r0, 0(%r23)
+ #else
+ 7:	ldw	0(%r24), %r20
+ 8:	ldw	4(%r24), %r20
++	proberi	(%r24), PRIV_USER, %r20
++	comb,=,n	%r20, %r0, lws_fault /* backwards, likely not taken */
++	nop
+ 	copy	%r23, %r20
+ 	depi_safe	0, 31, 2, %r20
+ 9:	stbys,e	%r0, 0(%r20)
+diff --git a/arch/parisc/lib/memcpy.c b/arch/parisc/lib/memcpy.c
+index 5fc0c852c84c8d..69d65ffab31263 100644
+--- a/arch/parisc/lib/memcpy.c
++++ b/arch/parisc/lib/memcpy.c
+@@ -12,6 +12,7 @@
+ #include <linux/module.h>
+ #include <linux/compiler.h>
+ #include <linux/uaccess.h>
++#include <linux/mm.h>
+ 
+ #define get_user_space()	mfsp(SR_USER)
+ #define get_kernel_space()	SR_KERNEL
+@@ -32,9 +33,25 @@ EXPORT_SYMBOL(raw_copy_to_user);
+ unsigned long raw_copy_from_user(void *dst, const void __user *src,
+ 			       unsigned long len)
+ {
++	unsigned long start = (unsigned long) src;
++	unsigned long end = start + len;
++	unsigned long newlen = len;
++
+ 	mtsp(get_user_space(), SR_TEMP1);
+ 	mtsp(get_kernel_space(), SR_TEMP2);
+-	return pa_memcpy(dst, (void __force *)src, len);
++
++	/* Check region is user accessible */
++	if (start)
++	while (start < end) {
++		if (!prober_user(SR_TEMP1, start)) {
++			newlen = (start - (unsigned long) src);
++			break;
++		}
++		start += PAGE_SIZE;
++		/* align to page boundary, which may have different permissions */
++		start = PAGE_ALIGN_DOWN(start);
++	}
++	return len - newlen + pa_memcpy(dst, (void __force *)src, newlen);
+ }
+ EXPORT_SYMBOL(raw_copy_from_user);
+ 
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index c39de84e98b051..f1785640b049b5 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -363,6 +363,10 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ 	mmap_read_unlock(mm);
+ 
+ bad_area_nosemaphore:
++	if (!user_mode(regs) && fixup_exception(regs)) {
++		return;
++	}
++
+ 	if (user_mode(regs)) {
+ 		int signo, si_code;
+ 
+diff --git a/arch/s390/boot/vmem.c b/arch/s390/boot/vmem.c
+index 1d073acd05a7b8..cea3de4dce8c32 100644
+--- a/arch/s390/boot/vmem.c
++++ b/arch/s390/boot/vmem.c
+@@ -530,6 +530,9 @@ void setup_vmem(unsigned long kernel_start, unsigned long kernel_end, unsigned l
+ 			 lowcore_address + sizeof(struct lowcore),
+ 			 POPULATE_LOWCORE);
+ 	for_each_physmem_usable_range(i, &start, &end) {
++		/* Do not map lowcore with identity mapping */
++		if (!start)
++			start = sizeof(struct lowcore);
+ 		pgtable_populate((unsigned long)__identity_va(start),
+ 				 (unsigned long)__identity_va(end),
+ 				 POPULATE_IDENTITY);
+diff --git a/arch/s390/hypfs/hypfs_dbfs.c b/arch/s390/hypfs/hypfs_dbfs.c
+index 5d9effb0867cde..41a0d2066fa002 100644
+--- a/arch/s390/hypfs/hypfs_dbfs.c
++++ b/arch/s390/hypfs/hypfs_dbfs.c
+@@ -6,6 +6,7 @@
+  * Author(s): Michael Holzheu <holzheu@linux.vnet.ibm.com>
+  */
+ 
++#include <linux/security.h>
+ #include <linux/slab.h>
+ #include "hypfs.h"
+ 
+@@ -66,23 +67,27 @@ static long dbfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	long rc;
+ 
+ 	mutex_lock(&df->lock);
+-	if (df->unlocked_ioctl)
+-		rc = df->unlocked_ioctl(file, cmd, arg);
+-	else
+-		rc = -ENOTTY;
++	rc = df->unlocked_ioctl(file, cmd, arg);
+ 	mutex_unlock(&df->lock);
+ 	return rc;
+ }
+ 
+-static const struct file_operations dbfs_ops = {
++static const struct file_operations dbfs_ops_ioctl = {
+ 	.read		= dbfs_read,
+ 	.unlocked_ioctl = dbfs_ioctl,
+ };
+ 
++static const struct file_operations dbfs_ops = {
++	.read		= dbfs_read,
++};
++
+ void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df)
+ {
+-	df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df,
+-					 &dbfs_ops);
++	const struct file_operations *fops = &dbfs_ops;
++
++	if (df->unlocked_ioctl && !security_locked_down(LOCKDOWN_DEBUGFS))
++		fops = &dbfs_ops_ioctl;
++	df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, fops);
+ 	mutex_init(&df->lock);
+ }
+ 
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index f1b6d40154e352..f1adfba1a76ea9 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -104,10 +104,12 @@ static void crypto_aegis128_aesni_process_ad(
+ 	}
+ }
+ 
+-static __always_inline void
++static __always_inline int
+ crypto_aegis128_aesni_process_crypt(struct aegis_state *state,
+ 				    struct skcipher_walk *walk, bool enc)
+ {
++	int err = 0;
++
+ 	while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
+ 		if (enc)
+ 			aegis128_aesni_enc(state, walk->src.virt.addr,
+@@ -119,7 +121,10 @@ crypto_aegis128_aesni_process_crypt(struct aegis_state *state,
+ 					   walk->dst.virt.addr,
+ 					   round_down(walk->nbytes,
+ 						      AEGIS128_BLOCK_SIZE));
+-		skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
++		kernel_fpu_end();
++		err = skcipher_walk_done(walk,
++					 walk->nbytes % AEGIS128_BLOCK_SIZE);
++		kernel_fpu_begin();
+ 	}
+ 
+ 	if (walk->nbytes) {
+@@ -131,8 +136,11 @@ crypto_aegis128_aesni_process_crypt(struct aegis_state *state,
+ 			aegis128_aesni_dec_tail(state, walk->src.virt.addr,
+ 						walk->dst.virt.addr,
+ 						walk->nbytes);
+-		skcipher_walk_done(walk, 0);
++		kernel_fpu_end();
++		err = skcipher_walk_done(walk, 0);
++		kernel_fpu_begin();
+ 	}
++	return err;
+ }
+ 
+ static struct aegis_ctx *crypto_aegis128_aesni_ctx(struct crypto_aead *aead)
+@@ -165,7 +173,7 @@ static int crypto_aegis128_aesni_setauthsize(struct crypto_aead *tfm,
+ 	return 0;
+ }
+ 
+-static __always_inline void
++static __always_inline int
+ crypto_aegis128_aesni_crypt(struct aead_request *req,
+ 			    struct aegis_block *tag_xor,
+ 			    unsigned int cryptlen, bool enc)
+@@ -174,20 +182,24 @@ crypto_aegis128_aesni_crypt(struct aead_request *req,
+ 	struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
+ 	struct skcipher_walk walk;
+ 	struct aegis_state state;
++	int err;
+ 
+ 	if (enc)
+-		skcipher_walk_aead_encrypt(&walk, req, true);
++		err = skcipher_walk_aead_encrypt(&walk, req, false);
+ 	else
+-		skcipher_walk_aead_decrypt(&walk, req, true);
++		err = skcipher_walk_aead_decrypt(&walk, req, false);
++	if (err)
++		return err;
+ 
+ 	kernel_fpu_begin();
+ 
+ 	aegis128_aesni_init(&state, &ctx->key, req->iv);
+ 	crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis128_aesni_process_crypt(&state, &walk, enc);
+-	aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+-
++	err = crypto_aegis128_aesni_process_crypt(&state, &walk, enc);
++	if (err == 0)
++		aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 	kernel_fpu_end();
++	return err;
+ }
+ 
+ static int crypto_aegis128_aesni_encrypt(struct aead_request *req)
+@@ -196,8 +208,11 @@ static int crypto_aegis128_aesni_encrypt(struct aead_request *req)
+ 	struct aegis_block tag = {};
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
+ 	unsigned int cryptlen = req->cryptlen;
++	int err;
+ 
+-	crypto_aegis128_aesni_crypt(req, &tag, cryptlen, true);
++	err = crypto_aegis128_aesni_crypt(req, &tag, cryptlen, true);
++	if (err)
++		return err;
+ 
+ 	scatterwalk_map_and_copy(tag.bytes, req->dst,
+ 				 req->assoclen + cryptlen, authsize, 1);
+@@ -212,11 +227,14 @@ static int crypto_aegis128_aesni_decrypt(struct aead_request *req)
+ 	struct aegis_block tag;
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
+ 	unsigned int cryptlen = req->cryptlen - authsize;
++	int err;
+ 
+ 	scatterwalk_map_and_copy(tag.bytes, req->src,
+ 				 req->assoclen + cryptlen, authsize, 0);
+ 
+-	crypto_aegis128_aesni_crypt(req, &tag, cryptlen, false);
++	err = crypto_aegis128_aesni_crypt(req, &tag, cryptlen, false);
++	if (err)
++		return err;
+ 
+ 	return crypto_memneq(tag.bytes, zeros.bytes, authsize) ? -EBADMSG : 0;
+ }
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index 59a62c3780a2f2..a16d4631547ce1 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -94,12 +94,13 @@ DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func);
+ #ifdef MODULE
+ #define __ADDRESSABLE_xen_hypercall
+ #else
+-#define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall)
++#define __ADDRESSABLE_xen_hypercall \
++	__stringify(.global STATIC_CALL_KEY(xen_hypercall);)
+ #endif
+ 
+ #define __HYPERCALL					\
+ 	__ADDRESSABLE_xen_hypercall			\
+-	"call __SCT__xen_hypercall"
++	__stringify(call STATIC_CALL_TRAMP(xen_hypercall))
+ 
+ #define __HYPERCALL_ENTRY(x)	"a" (x)
+ 
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 329ee185d8ccaf..9cc1738f59cf56 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1324,8 +1324,8 @@ static const char * const s5_reset_reason_txt[] = {
+ 
+ static __init int print_s5_reset_status_mmio(void)
+ {
+-	unsigned long value;
+ 	void __iomem *addr;
++	u32 value;
+ 	int i;
+ 
+ 	if (!cpu_feature_enabled(X86_FEATURE_ZEN))
+@@ -1338,12 +1338,16 @@ static __init int print_s5_reset_status_mmio(void)
+ 	value = ioread32(addr);
+ 	iounmap(addr);
+ 
++	/* Value with "all bits set" is an error response and should be ignored. */
++	if (value == U32_MAX)
++		return 0;
++
+ 	for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) {
+ 		if (!(value & BIT(i)))
+ 			continue;
+ 
+ 		if (s5_reset_reason_txt[i]) {
+-			pr_info("x86/amd: Previous system reset reason [0x%08lx]: %s\n",
++			pr_info("x86/amd: Previous system reset reason [0x%08x]: %s\n",
+ 				value, s5_reset_reason_txt[i]);
+ 		}
+ 	}
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index 2154f12766fb5a..1fda6c3a2b65a7 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -16,6 +16,7 @@
+ #include <asm/spec-ctrl.h>
+ #include <asm/delay.h>
+ #include <asm/msr.h>
++#include <asm/resctrl.h>
+ 
+ #include "cpu.h"
+ 
+@@ -117,6 +118,8 @@ static void bsp_init_hygon(struct cpuinfo_x86 *c)
+ 			x86_amd_ls_cfg_ssbd_mask = 1ULL << 10;
+ 		}
+ 	}
++
++	resctrl_cpu_detect(c);
+ }
+ 
+ static void early_init_hygon(struct cpuinfo_x86 *c)
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index d68da9e92e1eee..9483c529c95823 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -7229,22 +7229,16 @@ static void bfq_init_root_group(struct bfq_group *root_group,
+ 	root_group->sched_data.bfq_class_idle_last_service = jiffies;
+ }
+ 
+-static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
++static int bfq_init_queue(struct request_queue *q, struct elevator_queue *eq)
+ {
+ 	struct bfq_data *bfqd;
+-	struct elevator_queue *eq;
+ 	unsigned int i;
+ 	struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges;
+ 
+-	eq = elevator_alloc(q, e);
+-	if (!eq)
+-		return -ENOMEM;
+-
+ 	bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node);
+-	if (!bfqd) {
+-		kobject_put(&eq->kobj);
++	if (!bfqd)
+ 		return -ENOMEM;
+-	}
++
+ 	eq->elevator_data = bfqd;
+ 
+ 	spin_lock_irq(&q->queue_lock);
+@@ -7402,7 +7396,6 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ 
+ out_free:
+ 	kfree(bfqd);
+-	kobject_put(&eq->kobj);
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index 29b3540dd1804d..bdcb27ab560682 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -95,6 +95,7 @@ static const char *const blk_queue_flag_name[] = {
+ 	QUEUE_FLAG_NAME(SQ_SCHED),
+ 	QUEUE_FLAG_NAME(DISABLE_WBT_DEF),
+ 	QUEUE_FLAG_NAME(NO_ELV_SWITCH),
++	QUEUE_FLAG_NAME(QOS_ENABLED),
+ };
+ #undef QUEUE_FLAG_NAME
+ 
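
The QUEUE_FLAG_NAME() entry added above comes from a token-pasting plus stringification table pattern, so the flag list cannot drift from its printed names. A self-contained sketch of that pattern (the enum values here are illustrative, not the kernel's):

#include <stdio.h>

enum { QUEUE_FLAG_SQ_SCHED, QUEUE_FLAG_NO_ELV_SWITCH, QUEUE_FLAG_QOS_ENABLED };

#define QUEUE_FLAG_NAME(name) [QUEUE_FLAG_##name] = #name
static const char *const flag_name[] = {
	QUEUE_FLAG_NAME(SQ_SCHED),
	QUEUE_FLAG_NAME(NO_ELV_SWITCH),
	QUEUE_FLAG_NAME(QOS_ENABLED),	/* the entry this hunk adds */
};
#undef QUEUE_FLAG_NAME

int main(void)
{
	printf("%s\n", flag_name[QUEUE_FLAG_QOS_ENABLED]);
	return 0;
}
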
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 55a0fd10514796..e2ce4a28e6c9e0 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -374,64 +374,17 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
+ 
+-static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
+-					  struct blk_mq_hw_ctx *hctx,
+-					  unsigned int hctx_idx)
+-{
+-	if (blk_mq_is_shared_tags(q->tag_set->flags)) {
+-		hctx->sched_tags = q->sched_shared_tags;
+-		return 0;
+-	}
+-
+-	hctx->sched_tags = blk_mq_alloc_map_and_rqs(q->tag_set, hctx_idx,
+-						    q->nr_requests);
+-
+-	if (!hctx->sched_tags)
+-		return -ENOMEM;
+-	return 0;
+-}
+-
+-static void blk_mq_exit_sched_shared_tags(struct request_queue *queue)
+-{
+-	blk_mq_free_rq_map(queue->sched_shared_tags);
+-	queue->sched_shared_tags = NULL;
+-}
+-
+ /* called in queue's release handler, tagset has gone away */
+ static void blk_mq_sched_tags_teardown(struct request_queue *q, unsigned int flags)
+ {
+ 	struct blk_mq_hw_ctx *hctx;
+ 	unsigned long i;
+ 
+-	queue_for_each_hw_ctx(q, hctx, i) {
+-		if (hctx->sched_tags) {
+-			if (!blk_mq_is_shared_tags(flags))
+-				blk_mq_free_rq_map(hctx->sched_tags);
+-			hctx->sched_tags = NULL;
+-		}
+-	}
++	queue_for_each_hw_ctx(q, hctx, i)
++		hctx->sched_tags = NULL;
+ 
+ 	if (blk_mq_is_shared_tags(flags))
+-		blk_mq_exit_sched_shared_tags(q);
+-}
+-
+-static int blk_mq_init_sched_shared_tags(struct request_queue *queue)
+-{
+-	struct blk_mq_tag_set *set = queue->tag_set;
+-
+-	/*
+-	 * Set initial depth at max so that we don't need to reallocate for
+-	 * updating nr_requests.
+-	 */
+-	queue->sched_shared_tags = blk_mq_alloc_map_and_rqs(set,
+-						BLK_MQ_NO_HCTX_IDX,
+-						MAX_SCHED_RQ);
+-	if (!queue->sched_shared_tags)
+-		return -ENOMEM;
+-
+-	blk_mq_tag_update_sched_shared_tags(queue);
+-
+-	return 0;
++		q->sched_shared_tags = NULL;
+ }
+ 
+ void blk_mq_sched_reg_debugfs(struct request_queue *q)
+@@ -458,8 +411,140 @@ void blk_mq_sched_unreg_debugfs(struct request_queue *q)
+ 	mutex_unlock(&q->debugfs_mutex);
+ }
+ 
++void blk_mq_free_sched_tags(struct elevator_tags *et,
++		struct blk_mq_tag_set *set)
++{
++	unsigned long i;
++
++	/* Shared tags are stored at index 0 in @tags. */
++	if (blk_mq_is_shared_tags(set->flags))
++		blk_mq_free_map_and_rqs(set, et->tags[0], BLK_MQ_NO_HCTX_IDX);
++	else {
++		for (i = 0; i < et->nr_hw_queues; i++)
++			blk_mq_free_map_and_rqs(set, et->tags[i], i);
++	}
++
++	kfree(et);
++}
++
++void blk_mq_free_sched_tags_batch(struct xarray *et_table,
++		struct blk_mq_tag_set *set)
++{
++	struct request_queue *q;
++	struct elevator_tags *et;
++
++	lockdep_assert_held_write(&set->update_nr_hwq_lock);
++
++	list_for_each_entry(q, &set->tag_list, tag_set_list) {
++		/*
++		 * Accessing q->elevator without holding q->elevator_lock is
++		 * safe because we hold set->update_nr_hwq_lock here in
++		 * the writer context. So, scheduler update/switch code (which
++		 * acquires the same lock but in the reader context) can't run
++		 * concurrently.
++		 */
++		if (q->elevator) {
++			et = xa_load(et_table, q->id);
++			if (unlikely(!et))
++				WARN_ON_ONCE(1);
++			else
++				blk_mq_free_sched_tags(et, set);
++		}
++	}
++}
++
++struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set,
++		unsigned int nr_hw_queues)
++{
++	unsigned int nr_tags;
++	int i;
++	struct elevator_tags *et;
++	gfp_t gfp = GFP_NOIO | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY;
++
++	if (blk_mq_is_shared_tags(set->flags))
++		nr_tags = 1;
++	else
++		nr_tags = nr_hw_queues;
++
++	et = kmalloc(sizeof(struct elevator_tags) +
++			nr_tags * sizeof(struct blk_mq_tags *), gfp);
++	if (!et)
++		return NULL;
++	/*
++	 * Default to twice the smaller of the hw queue_depth and 128,
++	 * since we don't split into sync/async like the old code did.
++	 * Additionally, this is a per-hw queue depth.
++	 */
++	et->nr_requests = 2 * min_t(unsigned int, set->queue_depth,
++			BLKDEV_DEFAULT_RQ);
++	et->nr_hw_queues = nr_hw_queues;
++
++	if (blk_mq_is_shared_tags(set->flags)) {
++		/* Shared tags are stored at index 0 in @tags. */
++		et->tags[0] = blk_mq_alloc_map_and_rqs(set, BLK_MQ_NO_HCTX_IDX,
++					MAX_SCHED_RQ);
++		if (!et->tags[0])
++			goto out;
++	} else {
++		for (i = 0; i < et->nr_hw_queues; i++) {
++			et->tags[i] = blk_mq_alloc_map_and_rqs(set, i,
++					et->nr_requests);
++			if (!et->tags[i])
++				goto out_unwind;
++		}
++	}
++
++	return et;
++out_unwind:
++	while (--i >= 0)
++		blk_mq_free_map_and_rqs(set, et->tags[i], i);
++out:
++	kfree(et);
++	return NULL;
++}
++
++int blk_mq_alloc_sched_tags_batch(struct xarray *et_table,
++		struct blk_mq_tag_set *set, unsigned int nr_hw_queues)
++{
++	struct request_queue *q;
++	struct elevator_tags *et;
++	gfp_t gfp = GFP_NOIO | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY;
++
++	lockdep_assert_held_write(&set->update_nr_hwq_lock);
++
++	list_for_each_entry(q, &set->tag_list, tag_set_list) {
++		/*
++		 * Accessing q->elevator without holding q->elevator_lock is
++		 * safe because we hold set->update_nr_hwq_lock here in
++		 * the writer context. So, scheduler update/switch code (which
++		 * acquires the same lock but in the reader context) can't run
++		 * concurrently.
++		 */
++		if (q->elevator) {
++			et = blk_mq_alloc_sched_tags(set, nr_hw_queues);
++			if (!et)
++				goto out_unwind;
++			if (xa_insert(et_table, q->id, et, gfp))
++				goto out_free_tags;
++		}
++	}
++	return 0;
++out_free_tags:
++	blk_mq_free_sched_tags(et, set);
++out_unwind:
++	list_for_each_entry_continue_reverse(q, &set->tag_list, tag_set_list) {
++		if (q->elevator) {
++			et = xa_load(et_table, q->id);
++			if (et)
++				blk_mq_free_sched_tags(et, set);
++		}
++	}
++	return -ENOMEM;
++}
++
+ /* caller must have a reference to @e, will grab another one if successful */
+-int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
++int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e,
++		struct elevator_tags *et)
+ {
+ 	unsigned int flags = q->tag_set->flags;
+ 	struct blk_mq_hw_ctx *hctx;
+@@ -467,36 +552,33 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
+ 	unsigned long i;
+ 	int ret;
+ 
+-	/*
+-	 * Default to double of smaller one between hw queue_depth and 128,
+-	 * since we don't split into sync/async like the old code did.
+-	 * Additionally, this is a per-hw queue depth.
+-	 */
+-	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
+-				   BLKDEV_DEFAULT_RQ);
++	eq = elevator_alloc(q, e, et);
++	if (!eq)
++		return -ENOMEM;
++
++	q->nr_requests = et->nr_requests;
+ 
+ 	if (blk_mq_is_shared_tags(flags)) {
+-		ret = blk_mq_init_sched_shared_tags(q);
+-		if (ret)
+-			return ret;
++		/* Shared tags are stored at index 0 in @et->tags. */
++		q->sched_shared_tags = et->tags[0];
++		blk_mq_tag_update_sched_shared_tags(q);
+ 	}
+ 
+ 	queue_for_each_hw_ctx(q, hctx, i) {
+-		ret = blk_mq_sched_alloc_map_and_rqs(q, hctx, i);
+-		if (ret)
+-			goto err_free_map_and_rqs;
++		if (blk_mq_is_shared_tags(flags))
++			hctx->sched_tags = q->sched_shared_tags;
++		else
++			hctx->sched_tags = et->tags[i];
+ 	}
+ 
+-	ret = e->ops.init_sched(q, e);
++	ret = e->ops.init_sched(q, eq);
+ 	if (ret)
+-		goto err_free_map_and_rqs;
++		goto out;
+ 
+ 	queue_for_each_hw_ctx(q, hctx, i) {
+ 		if (e->ops.init_hctx) {
+ 			ret = e->ops.init_hctx(hctx, i);
+ 			if (ret) {
+-				eq = q->elevator;
+-				blk_mq_sched_free_rqs(q);
+ 				blk_mq_exit_sched(q, eq);
+ 				kobject_put(&eq->kobj);
+ 				return ret;
+@@ -505,10 +587,9 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
+ 	}
+ 	return 0;
+ 
+-err_free_map_and_rqs:
+-	blk_mq_sched_free_rqs(q);
++out:
+ 	blk_mq_sched_tags_teardown(q, flags);
+-
++	kobject_put(&eq->kobj);
+ 	q->elevator = NULL;
+ 	return ret;
+ }
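
struct elevator_tags, defined later in this patch in block/elevator.h, ends in a flexible array member, which is why blk_mq_alloc_sched_tags() above sizes the header and the per-queue pointer table in a single allocation. A userspace sketch of that layout with stand-in types:

#include <stdio.h>
#include <stdlib.h>

struct tags;				/* stand-in for struct blk_mq_tags */

struct elevator_tags_demo {
	unsigned int nr_hw_queues;	/* queues tags were allocated for */
	unsigned int nr_requests;	/* depth used at allocation time */
	struct tags *tags[];		/* shared tags live at index 0 */
};

static struct elevator_tags_demo *alloc_sched_tags(unsigned int nr_hw_queues,
						   int shared)
{
	unsigned int nr_tags = shared ? 1 : nr_hw_queues;
	struct elevator_tags_demo *et;

	/* One allocation covers header plus trailing pointer array. */
	et = calloc(1, sizeof(*et) + nr_tags * sizeof(et->tags[0]));
	if (!et)
		return NULL;
	et->nr_hw_queues = nr_hw_queues;
	return et;
}

int main(void)
{
	struct elevator_tags_demo *et = alloc_sched_tags(8, 0);

	if (et)
		printf("tag table for %u hw queues\n", et->nr_hw_queues);
	free(et);
	return 0;
}
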
+diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
+index 1326526bb7338c..b554e1d559508c 100644
+--- a/block/blk-mq-sched.h
++++ b/block/blk-mq-sched.h
+@@ -18,10 +18,20 @@ void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
+ 
+ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
+ 
+-int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
++int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e,
++		struct elevator_tags *et);
+ void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e);
+ void blk_mq_sched_free_rqs(struct request_queue *q);
+ 
++struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set,
++		unsigned int nr_hw_queues);
++int blk_mq_alloc_sched_tags_batch(struct xarray *et_table,
++		struct blk_mq_tag_set *set, unsigned int nr_hw_queues);
++void blk_mq_free_sched_tags(struct elevator_tags *et,
++		struct blk_mq_tag_set *set);
++void blk_mq_free_sched_tags_batch(struct xarray *et_table,
++		struct blk_mq_tag_set *set);
++
+ static inline void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
+ {
+ 	if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 32d11305d51bb2..355db0abe44b86 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4972,12 +4972,13 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
+  * Switch back to the elevator type stored in the xarray.
+  */
+ static void blk_mq_elv_switch_back(struct request_queue *q,
+-		struct xarray *elv_tbl)
++		struct xarray *elv_tbl, struct xarray *et_tbl)
+ {
+ 	struct elevator_type *e = xa_load(elv_tbl, q->id);
++	struct elevator_tags *t = xa_load(et_tbl, q->id);
+ 
+ 	/* The elv_update_nr_hw_queues unfreezes the queue. */
+-	elv_update_nr_hw_queues(q, e);
++	elv_update_nr_hw_queues(q, e, t);
+ 
+ 	/* Drop the reference acquired in blk_mq_elv_switch_none. */
+ 	if (e)
+@@ -5029,7 +5030,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 	int prev_nr_hw_queues = set->nr_hw_queues;
+ 	unsigned int memflags;
+ 	int i;
+-	struct xarray elv_tbl;
++	struct xarray elv_tbl, et_tbl;
++	bool queues_frozen = false;
+ 
+ 	lockdep_assert_held(&set->tag_list_lock);
+ 
+@@ -5042,6 +5044,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 
+ 	memflags = memalloc_noio_save();
+ 
++	xa_init(&et_tbl);
++	if (blk_mq_alloc_sched_tags_batch(&et_tbl, set, nr_hw_queues) < 0)
++		goto out_memalloc_restore;
++
+ 	xa_init(&elv_tbl);
+ 
+ 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+@@ -5049,9 +5055,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 		blk_mq_sysfs_unregister_hctxs(q);
+ 	}
+ 
+-	list_for_each_entry(q, &set->tag_list, tag_set_list)
+-		blk_mq_freeze_queue_nomemsave(q);
+-
+ 	/*
+ 	 * Switch IO scheduler to 'none', cleaning up the data associated
+ 	 * with the previous scheduler. We will switch back once we are done
+@@ -5061,6 +5064,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 		if (blk_mq_elv_switch_none(q, &elv_tbl))
+ 			goto switch_back;
+ 
++	list_for_each_entry(q, &set->tag_list, tag_set_list)
++		blk_mq_freeze_queue_nomemsave(q);
++	queues_frozen = true;
+ 	if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)
+ 		goto switch_back;
+ 
+@@ -5084,8 +5090,12 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 	}
+ switch_back:
+ 	/* The blk_mq_elv_switch_back unfreezes queue for us. */
+-	list_for_each_entry(q, &set->tag_list, tag_set_list)
+-		blk_mq_elv_switch_back(q, &elv_tbl);
++	list_for_each_entry(q, &set->tag_list, tag_set_list) {
++		/* switch_back expects queue to be frozen */
++		if (!queues_frozen)
++			blk_mq_freeze_queue_nomemsave(q);
++		blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl);
++	}
+ 
+ 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+ 		blk_mq_sysfs_register_hctxs(q);
+@@ -5096,7 +5106,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ 	}
+ 
+ 	xa_destroy(&elv_tbl);
+-
++	xa_destroy(&et_tbl);
++out_memalloc_restore:
+ 	memalloc_noio_restore(memflags);
+ 
+ 	/* Free the excess tags when nr_hw_queues shrink. */
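
The out_unwind paths above follow the usual batch-allocate-then-roll-back shape: on failure, free exactly the entries that already succeeded. An index-based standalone C analogue of the same shape (the kernel code walks a list with list_for_each_entry_continue_reverse() instead of an index):

#include <stdlib.h>

static int alloc_batch(void **res, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		res[i] = malloc(64);		/* per-queue allocation */
		if (!res[i])
			goto out_unwind;
	}
	return 0;

out_unwind:
	while (--i >= 0) {			/* free what succeeded */
		free(res[i]);
		res[i] = NULL;
	}
	return -1;				/* -ENOMEM equivalent */
}

int main(void)
{
	void *res[4];

	if (alloc_batch(res, 4))
		return 1;
	for (int i = 0; i < 4; i++)
		free(res[i]);
	return 0;
}
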
+diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
+index 848591fb3c57b6..654478dfbc2044 100644
+--- a/block/blk-rq-qos.c
++++ b/block/blk-rq-qos.c
+@@ -2,8 +2,6 @@
+ 
+ #include "blk-rq-qos.h"
+ 
+-__read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos);
+-
+ /*
+  * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
+  * false if 'v' + 1 would be bigger than 'below'.
+@@ -319,8 +317,8 @@ void rq_qos_exit(struct request_queue *q)
+ 		struct rq_qos *rqos = q->rq_qos;
+ 		q->rq_qos = rqos->next;
+ 		rqos->ops->exit(rqos);
+-		static_branch_dec(&block_rq_qos);
+ 	}
++	blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
+ 	mutex_unlock(&q->rq_qos_mutex);
+ }
+ 
+@@ -346,7 +344,7 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
+ 		goto ebusy;
+ 	rqos->next = q->rq_qos;
+ 	q->rq_qos = rqos;
+-	static_branch_inc(&block_rq_qos);
++	blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
+ 
+ 	blk_mq_unfreeze_queue(q, memflags);
+ 
+@@ -377,6 +375,8 @@ void rq_qos_del(struct rq_qos *rqos)
+ 			break;
+ 		}
+ 	}
++	if (!q->rq_qos)
++		blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
+ 	blk_mq_unfreeze_queue(q, memflags);
+ 
+ 	mutex_lock(&q->debugfs_mutex);
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 39749f4066fb10..1fe22000a3790e 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -12,7 +12,6 @@
+ #include "blk-mq-debugfs.h"
+ 
+ struct blk_mq_debugfs_attr;
+-extern struct static_key_false block_rq_qos;
+ 
+ enum rq_qos_id {
+ 	RQ_QOS_WBT,
+@@ -113,43 +112,55 @@ void __rq_qos_queue_depth_changed(struct rq_qos *rqos);
+ 
+ static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos)
+ 		__rq_qos_cleanup(q->rq_qos, bio);
+ }
+ 
+ static inline void rq_qos_done(struct request_queue *q, struct request *rq)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos &&
+-	    !blk_rq_is_passthrough(rq))
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos && !blk_rq_is_passthrough(rq))
+ 		__rq_qos_done(q->rq_qos, rq);
+ }
+ 
+ static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos)
+ 		__rq_qos_issue(q->rq_qos, rq);
+ }
+ 
+ static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos)
+ 		__rq_qos_requeue(q->rq_qos, rq);
+ }
+ 
+ static inline void rq_qos_done_bio(struct bio *bio)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) &&
+-	    bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) ||
+-			     bio_flagged(bio, BIO_QOS_MERGED))) {
+-		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+-		if (q->rq_qos)
+-			__rq_qos_done_bio(q->rq_qos, bio);
+-	}
++	struct request_queue *q;
++
++	if (!bio->bi_bdev || (!bio_flagged(bio, BIO_QOS_THROTTLED) &&
++			     !bio_flagged(bio, BIO_QOS_MERGED)))
++		return;
++
++	q = bdev_get_queue(bio->bi_bdev);
++
++	/*
++	 * If a bio has a BIO_QOS_xxx flag set, it implies that
++	 * q->rq_qos is present. So, we skip re-checking q->rq_qos
++	 * here as an extra optimization and directly call
++	 * __rq_qos_done_bio().
++	 */
++	__rq_qos_done_bio(q->rq_qos, bio);
+ }
+ 
+ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) {
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos) {
+ 		bio_set_flag(bio, BIO_QOS_THROTTLED);
+ 		__rq_qos_throttle(q->rq_qos, bio);
+ 	}
+@@ -158,14 +169,16 @@ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
+ static inline void rq_qos_track(struct request_queue *q, struct request *rq,
+ 				struct bio *bio)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos)
+ 		__rq_qos_track(q->rq_qos, rq, bio);
+ }
+ 
+ static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
+ 				struct bio *bio)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) {
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos) {
+ 		bio_set_flag(bio, BIO_QOS_MERGED);
+ 		__rq_qos_merge(q->rq_qos, rq, bio);
+ 	}
+@@ -173,7 +186,8 @@ static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
+ 
+ static inline void rq_qos_queue_depth_changed(struct request_queue *q)
+ {
+-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
++	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
++			q->rq_qos)
+ 		__rq_qos_queue_depth_changed(q->rq_qos);
+ }
+ 
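
All of the helpers above now gate on a per-queue flag instead of the global block_rq_qos static key, so only queues that actually enabled QoS pay for the pointer chase and indirect call. A plain C sketch of that fast-path gating with stand-in types (the kernel uses test_bit() on queue_flags):

#include <stdio.h>

struct rq_qos_demo { void (*issue)(void); };

struct queue_demo {
	unsigned long flags;
#define QOS_ENABLED_BIT 0
	struct rq_qos_demo *rq_qos;
};

static void rq_qos_issue_demo(struct queue_demo *q)
{
	/* Fast path: one bit test before touching q->rq_qos. */
	if ((q->flags & (1ul << QOS_ENABLED_BIT)) && q->rq_qos)
		q->rq_qos->issue();
}

static void noisy(void) { puts("qos hook ran"); }

int main(void)
{
	struct rq_qos_demo qos = { .issue = noisy };
	struct queue_demo q = {
		.flags = 1ul << QOS_ENABLED_BIT,
		.rq_qos = &qos,
	};

	rq_qos_issue_demo(&q);
	return 0;
}

A per-queue flag also fixes the sharing problem the static key had: enabling QoS on one queue no longer turns the hooks on for every queue in the system.
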
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index de39746de18b48..7d1a03c36502f8 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -876,9 +876,9 @@ int blk_register_queue(struct gendisk *disk)
+ 
+ 	if (queue_is_mq(q))
+ 		elevator_set_default(q);
+-	wbt_enable_default(disk);
+ 
+ 	blk_queue_flag_set(QUEUE_FLAG_REGISTERED, q);
++	wbt_enable_default(disk);
+ 
+ 	/* Now everything is ready and send out KOBJ_ADD uevent */
+ 	kobject_uevent(&disk->queue_kobj, KOBJ_ADD);
+diff --git a/block/blk.h b/block/blk.h
+index 4746a7704856af..5d9ca8c951932e 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -12,6 +12,7 @@
+ #include "blk-crypto-internal.h"
+ 
+ struct elevator_type;
++struct elevator_tags;
+ 
+ #define	BLK_DEV_MAX_SECTORS	(LLONG_MAX >> 9)
+ #define	BLK_MIN_SEGMENT_SIZE	4096
+@@ -322,7 +323,8 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
+ 
+ bool blk_insert_flush(struct request *rq);
+ 
+-void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e);
++void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e,
++		struct elevator_tags *t);
+ void elevator_set_default(struct request_queue *q);
+ void elevator_set_none(struct request_queue *q);
+ 
+diff --git a/block/elevator.c b/block/elevator.c
+index 88f8f36bed9818..fe96c6f4753ca2 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -54,6 +54,8 @@ struct elv_change_ctx {
+ 	struct elevator_queue *old;
+ 	/* for registering new elevator */
+ 	struct elevator_queue *new;
++	/* holds sched tags data */
++	struct elevator_tags *et;
+ };
+ 
+ static DEFINE_SPINLOCK(elv_list_lock);
+@@ -132,7 +134,7 @@ static struct elevator_type *elevator_find_get(const char *name)
+ static const struct kobj_type elv_ktype;
+ 
+ struct elevator_queue *elevator_alloc(struct request_queue *q,
+-				  struct elevator_type *e)
++		struct elevator_type *e, struct elevator_tags *et)
+ {
+ 	struct elevator_queue *eq;
+ 
+@@ -145,10 +147,10 @@ struct elevator_queue *elevator_alloc(struct request_queue *q,
+ 	kobject_init(&eq->kobj, &elv_ktype);
+ 	mutex_init(&eq->sysfs_lock);
+ 	hash_init(eq->hash);
++	eq->et = et;
+ 
+ 	return eq;
+ }
+-EXPORT_SYMBOL(elevator_alloc);
+ 
+ static void elevator_release(struct kobject *kobj)
+ {
+@@ -166,7 +168,6 @@ static void elevator_exit(struct request_queue *q)
+ 	lockdep_assert_held(&q->elevator_lock);
+ 
+ 	ioc_clear_queue(q);
+-	blk_mq_sched_free_rqs(q);
+ 
+ 	mutex_lock(&e->sysfs_lock);
+ 	blk_mq_exit_sched(q, e);
+@@ -592,7 +593,7 @@ static int elevator_switch(struct request_queue *q, struct elv_change_ctx *ctx)
+ 	}
+ 
+ 	if (new_e) {
+-		ret = blk_mq_init_sched(q, new_e);
++		ret = blk_mq_init_sched(q, new_e, ctx->et);
+ 		if (ret)
+ 			goto out_unfreeze;
+ 		ctx->new = q->elevator;
+@@ -627,8 +628,10 @@ static void elv_exit_and_release(struct request_queue *q)
+ 	elevator_exit(q);
+ 	mutex_unlock(&q->elevator_lock);
+ 	blk_mq_unfreeze_queue(q, memflags);
+-	if (e)
++	if (e) {
++		blk_mq_free_sched_tags(e->et, q->tag_set);
+ 		kobject_put(&e->kobj);
++	}
+ }
+ 
+ static int elevator_change_done(struct request_queue *q,
+@@ -641,6 +644,7 @@ static int elevator_change_done(struct request_queue *q,
+ 				&ctx->old->flags);
+ 
+ 		elv_unregister_queue(q, ctx->old);
++		blk_mq_free_sched_tags(ctx->old->et, q->tag_set);
+ 		kobject_put(&ctx->old->kobj);
+ 		if (enable_wbt)
+ 			wbt_enable_default(q->disk);
+@@ -659,9 +663,16 @@ static int elevator_change_done(struct request_queue *q,
+ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx)
+ {
+ 	unsigned int memflags;
++	struct blk_mq_tag_set *set = q->tag_set;
+ 	int ret = 0;
+ 
+-	lockdep_assert_held(&q->tag_set->update_nr_hwq_lock);
++	lockdep_assert_held(&set->update_nr_hwq_lock);
++
++	if (strncmp(ctx->name, "none", 4)) {
++		ctx->et = blk_mq_alloc_sched_tags(set, set->nr_hw_queues);
++		if (!ctx->et)
++			return -ENOMEM;
++	}
+ 
+ 	memflags = blk_mq_freeze_queue(q);
+ 	/*
+@@ -681,6 +692,11 @@ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx)
+ 	blk_mq_unfreeze_queue(q, memflags);
+ 	if (!ret)
+ 		ret = elevator_change_done(q, ctx);
++	/*
++	 * Free the sched tags if allocated but the elevator switch failed.
++	 */
++	if (ctx->et && !ctx->new)
++		blk_mq_free_sched_tags(ctx->et, set);
+ 
+ 	return ret;
+ }
+@@ -689,8 +705,10 @@ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx)
+  * The I/O scheduler depends on the number of hardware queues, this forces a
+  * reattachment when nr_hw_queues changes.
+  */
+-void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e)
++void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e,
++		struct elevator_tags *t)
+ {
++	struct blk_mq_tag_set *set = q->tag_set;
+ 	struct elv_change_ctx ctx = {};
+ 	int ret = -ENODEV;
+ 
+@@ -698,6 +716,7 @@ void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e)
+ 
+ 	if (e && !blk_queue_dying(q) && blk_queue_registered(q)) {
+ 		ctx.name = e->elevator_name;
++		ctx.et = t;
+ 
+ 		mutex_lock(&q->elevator_lock);
+ 		/* force to reattach elevator after nr_hw_queue is updated */
+@@ -707,6 +726,11 @@ void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e)
+ 	blk_mq_unfreeze_queue_nomemrestore(q);
+ 	if (!ret)
+ 		WARN_ON_ONCE(elevator_change_done(q, &ctx));
++	/*
++	 * Free the sched tags if allocated but the elevator switch failed.
++	 */
++	if (t && !ctx.new)
++		blk_mq_free_sched_tags(t, set);
+ }
+ 
+ /*
+diff --git a/block/elevator.h b/block/elevator.h
+index a07ce773a38f79..adc5c157e17e51 100644
+--- a/block/elevator.h
++++ b/block/elevator.h
+@@ -23,8 +23,17 @@ enum elv_merge {
+ struct blk_mq_alloc_data;
+ struct blk_mq_hw_ctx;
+ 
++struct elevator_tags {
++	/* num. of hardware queues for which tags are allocated */
++	unsigned int nr_hw_queues;
++	/* depth used while allocating tags */
++	unsigned int nr_requests;
++	/* shared tag is stored at index 0 */
++	struct blk_mq_tags *tags[];
++};
++
+ struct elevator_mq_ops {
+-	int (*init_sched)(struct request_queue *, struct elevator_type *);
++	int (*init_sched)(struct request_queue *, struct elevator_queue *);
+ 	void (*exit_sched)(struct elevator_queue *);
+ 	int (*init_hctx)(struct blk_mq_hw_ctx *, unsigned int);
+ 	void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int);
+@@ -113,6 +122,7 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset);
+ struct elevator_queue
+ {
+ 	struct elevator_type *type;
++	struct elevator_tags *et;
+ 	void *elevator_data;
+ 	struct kobject kobj;
+ 	struct mutex sysfs_lock;
+@@ -152,8 +162,8 @@ ssize_t elv_iosched_show(struct gendisk *disk, char *page);
+ ssize_t elv_iosched_store(struct gendisk *disk, const char *page, size_t count);
+ 
+ extern bool elv_bio_merge_ok(struct request *, struct bio *);
+-extern struct elevator_queue *elevator_alloc(struct request_queue *,
+-					struct elevator_type *);
++struct elevator_queue *elevator_alloc(struct request_queue *,
++		struct elevator_type *, struct elevator_tags *);
+ 
+ /*
+  * Helper functions.
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index bfd9a40bb33d44..70cbc7b2deb40b 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -399,20 +399,13 @@ static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
+ 	return ERR_PTR(ret);
+ }
+ 
+-static int kyber_init_sched(struct request_queue *q, struct elevator_type *e)
++static int kyber_init_sched(struct request_queue *q, struct elevator_queue *eq)
+ {
+ 	struct kyber_queue_data *kqd;
+-	struct elevator_queue *eq;
+-
+-	eq = elevator_alloc(q, e);
+-	if (!eq)
+-		return -ENOMEM;
+ 
+ 	kqd = kyber_queue_data_alloc(q);
+-	if (IS_ERR(kqd)) {
+-		kobject_put(&eq->kobj);
++	if (IS_ERR(kqd))
+ 		return PTR_ERR(kqd);
+-	}
+ 
+ 	blk_stat_enable_accounting(q);
+ 
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 9ab6c62566952b..b9b7cdf1d3c980 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -554,20 +554,14 @@ static void dd_exit_sched(struct elevator_queue *e)
+ /*
+  * initialize elevator private data (deadline_data).
+  */
+-static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
++static int dd_init_sched(struct request_queue *q, struct elevator_queue *eq)
+ {
+ 	struct deadline_data *dd;
+-	struct elevator_queue *eq;
+ 	enum dd_prio prio;
+-	int ret = -ENOMEM;
+-
+-	eq = elevator_alloc(q, e);
+-	if (!eq)
+-		return ret;
+ 
+ 	dd = kzalloc_node(sizeof(*dd), GFP_KERNEL, q->node);
+ 	if (!dd)
+-		goto put_eq;
++		return -ENOMEM;
+ 
+ 	eq->elevator_data = dd;
+ 
+@@ -594,10 +588,6 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
+ 
+ 	q->elevator = eq;
+ 	return 0;
+-
+-put_eq:
+-	kobject_put(&eq->kobj);
+-	return ret;
+ }
+ 
+ /*
+diff --git a/crypto/deflate.c b/crypto/deflate.c
+index fe8e4ad0fee106..21404515dc77ec 100644
+--- a/crypto/deflate.c
++++ b/crypto/deflate.c
+@@ -48,9 +48,14 @@ static void *deflate_alloc_stream(void)
+ 	return ctx;
+ }
+ 
++static void deflate_free_stream(void *ctx)
++{
++	kvfree(ctx);
++}
++
+ static struct crypto_acomp_streams deflate_streams = {
+ 	.alloc_ctx = deflate_alloc_stream,
+-	.cfree_ctx = kvfree,
++	.free_ctx = deflate_free_stream,
+ };
+ 
+ static int deflate_compress_one(struct acomp_req *req,
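
The hunk above replaces a direct kvfree assignment with a small wrapper whose prototype exactly matches the callback slot; kvfree() takes a const void *, and calling through a mismatched function-pointer type is undefined behaviour (and trips kernels built with control-flow integrity). A userspace sketch of the same wrapper idiom:

#include <stdlib.h>

struct streams_ops {
	void *(*alloc_ctx)(void);
	void (*free_ctx)(void *ctx);	/* exact prototype required */
};

static void *demo_alloc_stream(void)
{
	return malloc(128);
}

static void demo_free_stream(void *ctx)
{
	free(ctx);			/* thin, correctly typed wrapper */
}

static const struct streams_ops demo_ops = {
	.alloc_ctx = demo_alloc_stream,
	.free_ctx  = demo_free_stream,
};

int main(void)
{
	void *ctx = demo_ops.alloc_ctx();

	demo_ops.free_ctx(ctx);
	return 0;
}
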
+diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c
+index a38b88baadf2ba..5722e4128d3cee 100644
+--- a/drivers/accel/habanalabs/gaudi2/gaudi2.c
++++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c
+@@ -10437,7 +10437,7 @@ static int gaudi2_memset_device_memory(struct hl_device *hdev, u64 addr, u64 siz
+ 				(u64 *)(lin_dma_pkts_arr), DEBUGFS_WRITE64);
+ 	WREG32(sob_addr, 0);
+ 
+-	kfree(lin_dma_pkts_arr);
++	kvfree(lin_dma_pkts_arr);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/acpi/apei/einj-core.c b/drivers/acpi/apei/einj-core.c
+index 9b041415a9d018..aded7d8f8cafdd 100644
+--- a/drivers/acpi/apei/einj-core.c
++++ b/drivers/acpi/apei/einj-core.c
+@@ -842,7 +842,7 @@ static int __init einj_probe(struct faux_device *fdev)
+ 	return rc;
+ }
+ 
+-static void __exit einj_remove(struct faux_device *fdev)
++static void einj_remove(struct faux_device *fdev)
+ {
+ 	struct apei_exec_context ctx;
+ 
+@@ -864,15 +864,9 @@ static void __exit einj_remove(struct faux_device *fdev)
+ }
+ 
+ static struct faux_device *einj_dev;
+-/*
+- * einj_remove() lives in .exit.text. For drivers registered via
+- * platform_driver_probe() this is ok because they cannot get unbound at
+- * runtime. So mark the driver struct with __refdata to prevent modpost
+- * triggering a section mismatch warning.
+- */
+-static struct faux_device_ops einj_device_ops __refdata = {
++static struct faux_device_ops einj_device_ops = {
+ 	.probe = einj_probe,
+-	.remove = __exit_p(einj_remove),
++	.remove = einj_remove,
+ };
+ 
+ static int __init einj_init(void)
+diff --git a/drivers/acpi/pfr_update.c b/drivers/acpi/pfr_update.c
+index 031d1ba81b866d..08b9b2bc2d9779 100644
+--- a/drivers/acpi/pfr_update.c
++++ b/drivers/acpi/pfr_update.c
+@@ -310,7 +310,7 @@ static bool applicable_image(const void *data, struct pfru_update_cap_info *cap,
+ 	if (type == PFRU_CODE_INJECT_TYPE)
+ 		return payload_hdr->rt_ver >= cap->code_rt_version;
+ 
+-	return payload_hdr->rt_ver >= cap->drv_rt_version;
++	return payload_hdr->svn_ver >= cap->drv_svn;
+ }
+ 
+ static void print_update_debug_info(struct pfru_updated_result *result,
+diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
+index e00536b495529b..120a2b7067fc7b 100644
+--- a/drivers/ata/Kconfig
++++ b/drivers/ata/Kconfig
+@@ -117,23 +117,39 @@ config SATA_AHCI
+ 
+ config SATA_MOBILE_LPM_POLICY
+ 	int "Default SATA Link Power Management policy"
+-	range 0 4
++	range 0 5
+ 	default 3
+ 	depends on SATA_AHCI
+ 	help
+ 	  Select the Default SATA Link Power Management (LPM) policy to use
+ 	  for chipsets / "South Bridges" supporting low-power modes. Such
+ 	  chipsets are ubiquitous across laptops, desktops and servers.
+-
+-	  The value set has the following meanings:
++	  Each policy combines power saving states and features:
++	   - Partial: The Phy logic is powered but is in a reduced power
++                      state. The exit latency from this state is no longer
++                      than 10us.
++	   - Slumber: The Phy logic is powered but is in an even lower power
++                      state. The exit latency from this state is potentially
++		      longer, but no longer than 10ms.
++	   - DevSleep: The Phy logic may be powered down. The exit latency from
++	               this state is no longer than 20 ms, unless otherwise
++		       specified by DETO in the device Identify Device Data log.
++	   - HIPM: Host Initiated Power Management (host automatically
++		   transitions to partial and slumber).
++	   - DIPM: Device Initiated Power Management (device automatically
++		   transitions to partial and slumber).
++
++	  The possible values for the default SATA link power management
++	  policy are:
+ 		0 => Keep firmware settings
+-		1 => Maximum performance
+-		2 => Medium power
+-		3 => Medium power with Device Initiated PM enabled
+-		4 => Minimum power
+-
+-	  Note "Minimum power" is known to cause issues, including disk
+-	  corruption, with some disks and should not be used.
++		1 => No power savings (maximum performance)
++		2 => HIPM (Partial)
++		3 => HIPM (Partial) and DIPM (Partial and Slumber)
++		4 => HIPM (Partial and DevSleep) and DIPM (Partial and Slumber)
++		5 => HIPM (Slumber and DevSleep) and DIPM (Partial and Slumber)
++
++	  Excluding the value 0, higher values represent policies with higher
++	  power savings.
+ 
+ config SATA_AHCI_PLATFORM
+ 	tristate "Platform AHCI SATA support"
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index a21c9895408dc5..0573d9b6363a52 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -859,18 +859,14 @@ static void ata_to_sense_error(u8 drv_stat, u8 drv_err, u8 *sk, u8 *asc,
+ 		{0xFF, 0xFF, 0xFF, 0xFF}, // END mark
+ 	};
+ 	static const unsigned char stat_table[][4] = {
+-		/* Must be first because BUSY means no other bits valid */
+-		{0x80,		ABORTED_COMMAND, 0x47, 0x00},
+-		// Busy, fake parity for now
+-		{0x40,		ILLEGAL_REQUEST, 0x21, 0x04},
+-		// Device ready, unaligned write command
+-		{0x20,		HARDWARE_ERROR,  0x44, 0x00},
+-		// Device fault, internal target failure
+-		{0x08,		ABORTED_COMMAND, 0x47, 0x00},
+-		// Timed out in xfer, fake parity for now
+-		{0x04,		RECOVERED_ERROR, 0x11, 0x00},
+-		// Recovered ECC error	  Medium error, recovered
+-		{0xFF, 0xFF, 0xFF, 0xFF}, // END mark
++		/* Busy: must be first because BUSY means no other bits valid */
++		{ ATA_BUSY,	ABORTED_COMMAND, 0x00, 0x00 },
++		/* Device fault: INTERNAL TARGET FAILURE */
++		{ ATA_DF,	HARDWARE_ERROR,  0x44, 0x00 },
++		/* Corrected data error */
++		{ ATA_CORR,	RECOVERED_ERROR, 0x00, 0x00 },
++
++		{ 0xFF, 0xFF, 0xFF, 0xFF }, /* END mark */
+ 	};
+ 
+ 	/*
+@@ -942,6 +938,8 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
+ 	if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) {
+ 		ata_dev_dbg(dev,
+ 			    "missing result TF: can't generate ATA PT sense data\n");
++		if (qc->err_mask)
++			ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0);
+ 		return;
+ 	}
+ 
+@@ -996,8 +994,8 @@ static void ata_gen_ata_sense(struct ata_queued_cmd *qc)
+ 
+ 	if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) {
+ 		ata_dev_dbg(dev,
+-			    "missing result TF: can't generate sense data\n");
+-		return;
++			    "Missing result TF: reporting aborted command\n");
++		goto aborted;
+ 	}
+ 
+ 	/* Use ata_to_sense_error() to map status register bits
+@@ -1008,13 +1006,15 @@ static void ata_gen_ata_sense(struct ata_queued_cmd *qc)
+ 		ata_to_sense_error(tf->status, tf->error,
+ 				   &sense_key, &asc, &ascq);
+ 		ata_scsi_set_sense(dev, cmd, sense_key, asc, ascq);
+-	} else {
+-		/* Could not decode error */
+-		ata_dev_warn(dev, "could not decode error status 0x%x err_mask 0x%x\n",
+-			     tf->status, qc->err_mask);
+-		ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0);
+ 		return;
+ 	}
++
++	/* Could not decode error */
++	ata_dev_warn(dev,
++		"Could not decode error 0x%x, status 0x%x (err_mask=0x%x)\n",
++		tf->error, tf->status, qc->err_mask);
++aborted:
++	ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0);
+ }
+ 
+ void ata_scsi_sdev_config(struct scsi_device *sdev)
+@@ -3905,21 +3905,16 @@ static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc,
+ 	/* Check cdl_ctrl */
+ 	switch (buf[0] & 0x03) {
+ 	case 0:
+-		/* Disable CDL if it is enabled */
+-		if (!(dev->flags & ATA_DFLAG_CDL_ENABLED))
+-			return 0;
++		/* Disable CDL */
+ 		ata_dev_dbg(dev, "Disabling CDL\n");
+ 		cdl_action = 0;
+ 		dev->flags &= ~ATA_DFLAG_CDL_ENABLED;
+ 		break;
+ 	case 0x02:
+ 		/*
+-		 * Enable CDL if not already enabled. Since this is mutually
+-		 * exclusive with NCQ priority, allow this only if NCQ priority
+-		 * is disabled.
++		 * Enable CDL. Since CDL is mutually exclusive with NCQ
++		 * priority, allow this only if NCQ priority is disabled.
+ 		 */
+-		if (dev->flags & ATA_DFLAG_CDL_ENABLED)
+-			return 0;
+ 		if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) {
+ 			ata_dev_err(dev,
+ 				"NCQ priority must be disabled to enable CDL\n");
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 1ef26216f9718a..5f9491ab6a43cc 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1191,10 +1191,12 @@ EXPORT_SYMBOL_GPL(__pm_runtime_resume);
+  *
+  * Return -EINVAL if runtime PM is disabled for @dev.
+  *
+- * Otherwise, if the runtime PM status of @dev is %RPM_ACTIVE and either
+- * @ign_usage_count is %true or the runtime PM usage counter of @dev is not
+- * zero, increment the usage counter of @dev and return 1. Otherwise, return 0
+- * without changing the usage counter.
++ * Otherwise, if its runtime PM status is %RPM_ACTIVE and (1) @ign_usage_count
++ * is set, or (2) @dev is not ignoring children and its active child count is
++ * nonzero, or (3) the runtime PM usage counter of @dev is not zero, increment
++ * the usage counter of @dev and return 1.
++ *
++ * Otherwise, return 0 without changing the usage counter.
+  *
+  * If @ign_usage_count is %true, this function can be used to prevent suspending
+  * the device when its runtime PM status is %RPM_ACTIVE.
+@@ -1216,7 +1218,8 @@ static int pm_runtime_get_conditional(struct device *dev, bool ign_usage_count)
+ 		retval = -EINVAL;
+ 	} else if (dev->power.runtime_status != RPM_ACTIVE) {
+ 		retval = 0;
+-	} else if (ign_usage_count) {
++	} else if (ign_usage_count || (!dev->power.ignore_children &&
++		   atomic_read(&dev->power.child_count) > 0)) {
+ 		retval = 1;
+ 		atomic_inc(&dev->power.usage_count);
+ 	} else {
+@@ -1249,10 +1252,16 @@ EXPORT_SYMBOL_GPL(pm_runtime_get_if_active);
+  * @dev: Target device.
+  *
+  * Increment the runtime PM usage counter of @dev if its runtime PM status is
+- * %RPM_ACTIVE and its runtime PM usage counter is greater than 0, in which case
+- * it returns 1. If the device is in a different state or its usage_count is 0,
+- * 0 is returned. -EINVAL is returned if runtime PM is disabled for the device,
+- * in which case also the usage_count will remain unmodified.
++ * %RPM_ACTIVE and its runtime PM usage counter is greater than 0 or it is not
++ * ignoring children and its active child count is nonzero.  1 is returned in
++ * this case.
++ *
++ * If @dev is in a different state or it is not in use (that is, its usage
++ * counter is 0, or it is ignoring children, or its active child count is 0),
++ * 0 is returned.
++ *
++ * -EINVAL is returned if runtime PM is disabled for the device, in which case
++ * also the usage counter of @dev is not updated.
+  */
+ int pm_runtime_get_if_in_use(struct device *dev)
+ {
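
A typical consumer of the semantics documented above is an IRQ or fast-path handler that must not touch a suspended device. A hedged usage fragment for a hypothetical driver, not part of this patch; it uses only documented runtime-PM calls and compiles only in kernel context:

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/pm_runtime.h>

static irqreturn_t demo_irq_handler(int irq, void *data)
{
	struct device *dev = data;

	/* Takes a usage reference only on a return value of 1, so only
	 * that path needs the matching put below. */
	if (pm_runtime_get_if_in_use(dev) <= 0)
		return IRQ_NONE;	/* suspended, unused, or PM disabled */

	/* ... device registers may be accessed safely here ... */

	pm_runtime_put_autosuspend(dev);
	return IRQ_HANDLED;
}
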
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 4390fd571dbd15..a8c520dc09e19c 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -642,12 +642,7 @@ static int btmtk_usb_hci_wmt_sync(struct hci_dev *hdev,
+ 	 * WMT command.
+ 	 */
+ 	err = wait_on_bit_timeout(&data->flags, BTMTK_TX_WAIT_VND_EVT,
+-				  TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT);
+-	if (err == -EINTR) {
+-		bt_dev_err(hdev, "Execution of wmt command interrupted");
+-		clear_bit(BTMTK_TX_WAIT_VND_EVT, &data->flags);
+-		goto err_free_wc;
+-	}
++				  TASK_UNINTERRUPTIBLE, HCI_INIT_TIMEOUT);
+ 
+ 	if (err) {
+ 		bt_dev_err(hdev, "Execution of wmt command timed out");
+diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
+index efa3b6dddf4d2f..205d83ac069f15 100644
+--- a/drivers/bus/mhi/host/boot.c
++++ b/drivers/bus/mhi/host/boot.c
+@@ -31,8 +31,8 @@ int mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+ 	int ret;
+ 
+ 	for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
+-		bhi_vec->dma_addr = mhi_buf->dma_addr;
+-		bhi_vec->size = mhi_buf->len;
++		bhi_vec->dma_addr = cpu_to_le64(mhi_buf->dma_addr);
++		bhi_vec->size = cpu_to_le64(mhi_buf->len);
+ 	}
+ 
+ 	dev_dbg(dev, "BHIe programming for RDDM\n");
+@@ -431,8 +431,8 @@ static void mhi_firmware_copy_bhie(struct mhi_controller *mhi_cntrl,
+ 	while (remainder) {
+ 		to_cpy = min(remainder, mhi_buf->len);
+ 		memcpy(mhi_buf->buf, buf, to_cpy);
+-		bhi_vec->dma_addr = mhi_buf->dma_addr;
+-		bhi_vec->size = to_cpy;
++		bhi_vec->dma_addr = cpu_to_le64(mhi_buf->dma_addr);
++		bhi_vec->size = cpu_to_le64(to_cpy);
+ 
+ 		buf += to_cpy;
+ 		remainder -= to_cpy;
+diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
+index ce566f7d2e9240..1dbc3f736161d6 100644
+--- a/drivers/bus/mhi/host/internal.h
++++ b/drivers/bus/mhi/host/internal.h
+@@ -25,8 +25,8 @@ struct mhi_ctxt {
+ };
+ 
+ struct bhi_vec_entry {
+-	u64 dma_addr;
+-	u64 size;
++	__le64 dma_addr;
++	__le64 size;
+ };
+ 
+ enum mhi_fw_load_type {
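
The __le64 annotations above make explicit that bhi_vec_entry is consumed by the device in little-endian byte order, which is why boot.c now stores through cpu_to_le64(). A userspace analogue using glibc's htole64():

#define _DEFAULT_SOURCE
#include <endian.h>
#include <stdint.h>
#include <stdio.h>

struct bhi_vec_demo {
	uint64_t dma_addr;	/* held in little-endian byte order */
	uint64_t size;
};

int main(void)
{
	struct bhi_vec_demo v;

	v.dma_addr = htole64(0x1122334455667788ull);
	v.size = htole64(4096);

	/* Prints 0x88 on every host, regardless of native endianness;
	 * without the conversion a big-endian host would hand the
	 * device byte-swapped addresses. */
	printf("first byte: 0x%02x\n", ((unsigned char *)&v.dma_addr)[0]);
	return 0;
}
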
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+index 9bb0df43ceef1e..d7b266bafb1087 100644
+--- a/drivers/bus/mhi/host/main.c
++++ b/drivers/bus/mhi/host/main.c
+@@ -602,7 +602,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 	{
+ 		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
+ 		struct mhi_ring_element *local_rp, *ev_tre;
+-		void *dev_rp;
++		void *dev_rp, *next_rp;
+ 		struct mhi_buf_info *buf_info;
+ 		u16 xfer_len;
+ 
+@@ -621,6 +621,16 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ 		result.dir = mhi_chan->dir;
+ 
+ 		local_rp = tre_ring->rp;
++
++		next_rp = local_rp + 1;
++		if (next_rp >= tre_ring->base + tre_ring->len)
++			next_rp = tre_ring->base;
++		if (dev_rp != next_rp && !MHI_TRE_DATA_GET_CHAIN(local_rp)) {
++			dev_err(&mhi_cntrl->mhi_dev->dev,
++				"Event element points to an unexpected TRE\n");
++			break;
++		}
++
+ 		while (local_rp != dev_rp) {
+ 			buf_info = buf_ring->rp;
+ 			/* If it's the last TRE, get length from the event */
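
The check added above computes the element following the local read pointer, wrapping at the end of the ring, and rejects a device-supplied pointer that is neither that element nor part of a chained transfer. A standalone C sketch of the wrap computation, with an illustrative ring and made-up sizes:

#include <stddef.h>
#include <stdio.h>

struct elem { int payload; };

struct ring {
	struct elem *base;
	size_t len;		/* ring size in bytes */
	struct elem *rp;	/* local read pointer */
};

static int dev_rp_is_sane(struct ring *r, struct elem *dev_rp, int chained)
{
	struct elem *next_rp = r->rp + 1;

	if ((char *)next_rp >= (char *)r->base + r->len)
		next_rp = r->base;	/* wrap to the start of the ring */

	return dev_rp == next_rp || chained;
}

int main(void)
{
	struct elem buf[4];
	struct ring r = { .base = buf, .len = sizeof(buf), .rp = &buf[3] };

	printf("%d\n", dev_rp_is_sane(&r, &buf[0], 0));	/* 1: wrapped */
	printf("%d\n", dev_rp_is_sane(&r, &buf[2], 0));	/* 0: bogus */
	return 0;
}
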
+diff --git a/drivers/cdx/controller/cdx_rpmsg.c b/drivers/cdx/controller/cdx_rpmsg.c
+index 04b578a0be17c2..61f1a290ff0890 100644
+--- a/drivers/cdx/controller/cdx_rpmsg.c
++++ b/drivers/cdx/controller/cdx_rpmsg.c
+@@ -129,8 +129,7 @@ static int cdx_rpmsg_probe(struct rpmsg_device *rpdev)
+ 
+ 	chinfo.src = RPMSG_ADDR_ANY;
+ 	chinfo.dst = rpdev->dst;
+-	strscpy(chinfo.name, cdx_rpmsg_id_table[0].name,
+-		strlen(cdx_rpmsg_id_table[0].name));
++	strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, sizeof(chinfo.name));
+ 
+ 	cdx_mcdi->ept = rpmsg_create_ept(rpdev, cdx_rpmsg_cb, NULL, chinfo);
+ 	if (!cdx_mcdi->ept) {
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index 23b7178522ae03..7e2f2b1a1c362e 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -1587,6 +1587,9 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
+ 				memset(&data[n], 0, (MIN_SAMPLES - n) *
+ 						    sizeof(unsigned int));
+ 			}
++		} else {
++			memset(data, 0, max_t(unsigned int, n, MIN_SAMPLES) *
++					sizeof(unsigned int));
+ 		}
+ 		ret = parse_insn(dev, insns + i, data, file);
+ 		if (ret < 0)
+@@ -1670,6 +1673,8 @@ static int do_insn_ioctl(struct comedi_device *dev,
+ 			memset(&data[insn->n], 0,
+ 			       (MIN_SAMPLES - insn->n) * sizeof(unsigned int));
+ 		}
++	} else {
++		memset(data, 0, n_data * sizeof(unsigned int));
+ 	}
+ 	ret = parse_insn(dev, insn, data, file);
+ 	if (ret < 0)
+diff --git a/drivers/comedi/drivers.c b/drivers/comedi/drivers.c
+index f1dc854928c176..c9ebaadc5e82af 100644
+--- a/drivers/comedi/drivers.c
++++ b/drivers/comedi/drivers.c
+@@ -620,11 +620,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
+ 	unsigned int chan = CR_CHAN(insn->chanspec);
+ 	unsigned int base_chan = (chan < 32) ? 0 : chan;
+ 	unsigned int _data[2];
++	unsigned int i;
+ 	int ret;
+ 
+-	if (insn->n == 0)
+-		return 0;
+-
+ 	memset(_data, 0, sizeof(_data));
+ 	memset(&_insn, 0, sizeof(_insn));
+ 	_insn.insn = INSN_BITS;
+@@ -635,18 +633,21 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
+ 	if (insn->insn == INSN_WRITE) {
+ 		if (!(s->subdev_flags & SDF_WRITABLE))
+ 			return -EINVAL;
+-		_data[0] = 1U << (chan - base_chan);		     /* mask */
+-		_data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
++		_data[0] = 1U << (chan - base_chan);		/* mask */
+ 	}
++	for (i = 0; i < insn->n; i++) {
++		if (insn->insn == INSN_WRITE)
++			_data[1] = data[i] ? _data[0] : 0;	/* bits */
+ 
+-	ret = s->insn_bits(dev, s, &_insn, _data);
+-	if (ret < 0)
+-		return ret;
++		ret = s->insn_bits(dev, s, &_insn, _data);
++		if (ret < 0)
++			return ret;
+ 
+-	if (insn->insn == INSN_READ)
+-		data[0] = (_data[1] >> (chan - base_chan)) & 1;
++		if (insn->insn == INSN_READ)
++			data[i] = (_data[1] >> (chan - base_chan)) & 1;
++	}
+ 
+-	return 1;
++	return insn->n;
+ }
+ 
+ static int __comedi_device_postconfig_async(struct comedi_device *dev,
+diff --git a/drivers/comedi/drivers/pcl726.c b/drivers/comedi/drivers/pcl726.c
+index 0430630e6ebb90..b542896fa0e427 100644
+--- a/drivers/comedi/drivers/pcl726.c
++++ b/drivers/comedi/drivers/pcl726.c
+@@ -328,7 +328,8 @@ static int pcl726_attach(struct comedi_device *dev,
+ 	 * Hook up the external trigger source interrupt only if the
+ 	 * user config option is valid and the board supports interrupts.
+ 	 */
+-	if (it->options[1] && (board->irq_mask & (1 << it->options[1]))) {
++	if (it->options[1] > 0 && it->options[1] < 16 &&
++	    (board->irq_mask & (1U << it->options[1]))) {
+ 		ret = request_irq(it->options[1], pcl726_interrupt, 0,
+ 				  dev->board_name, dev);
+ 		if (ret == 0) {
+diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c
+index 006f4c554dd7e9..d96c1718f7f87c 100644
+--- a/drivers/cpufreq/armada-8k-cpufreq.c
++++ b/drivers/cpufreq/armada-8k-cpufreq.c
+@@ -103,7 +103,7 @@ static void armada_8k_cpufreq_free_table(struct freq_table *freq_tables)
+ {
+ 	int opps_index, nb_cpus = num_possible_cpus();
+ 
+-	for (opps_index = 0 ; opps_index <= nb_cpus; opps_index++) {
++	for (opps_index = 0 ; opps_index < nb_cpus; opps_index++) {
+ 		int i;
+ 
+ 		/* If cpu_dev is NULL then we reached the end of the array */
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 81306612a5c67d..b2e3d0b0a116dc 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -287,20 +287,15 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		return 0;
+ 	}
+ 
+-	if (tick_nohz_tick_stopped()) {
+-		/*
+-		 * If the tick is already stopped, the cost of possible short
+-		 * idle duration misprediction is much higher, because the CPU
+-		 * may be stuck in a shallow idle state for a long time as a
+-		 * result of it.  In that case say we might mispredict and use
+-		 * the known time till the closest timer event for the idle
+-		 * state selection.
+-		 */
+-		if (predicted_ns < TICK_NSEC)
+-			predicted_ns = data->next_timer_ns;
+-	} else if (latency_req > predicted_ns) {
+-		latency_req = predicted_ns;
+-	}
++	/*
++	 * If the tick is already stopped, the cost of possible short idle
++	 * duration misprediction is much higher, because the CPU may be stuck
++	 * in a shallow idle state for a long time as a result of it.  In that
++	 * case, say we might mispredict and use the known time till the closest
++	 * timer event for the idle state selection.
++	 */
++	if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
++		predicted_ns = data->next_timer_ns;
+ 
+ 	/*
+ 	 * Find the idle state with the lowest power while satisfying
+@@ -316,13 +311,15 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		if (idx == -1)
+ 			idx = i; /* first enabled state */
+ 
++		if (s->exit_latency_ns > latency_req)
++			break;
++
+ 		if (s->target_residency_ns > predicted_ns) {
+ 			/*
+ 			 * Use a physical idle state, not busy polling, unless
+ 			 * a timer is going to trigger soon enough.
+ 			 */
+ 			if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) &&
+-			    s->exit_latency_ns <= latency_req &&
+ 			    s->target_residency_ns <= data->next_timer_ns) {
+ 				predicted_ns = s->target_residency_ns;
+ 				idx = i;
+@@ -354,8 +351,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 
+ 			return idx;
+ 		}
+-		if (s->exit_latency_ns > latency_req)
+-			break;
+ 
+ 		idx = i;
+ 	}
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 9cd5e3d54d9d0e..ce7b99019537ef 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -831,7 +831,7 @@ static int caam_ctrl_suspend(struct device *dev)
+ {
+ 	const struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev);
+ 
+-	if (ctrlpriv->caam_off_during_pm && !ctrlpriv->optee_en)
++	if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0)
+ 		caam_state_save(dev);
+ 
+ 	return 0;
+@@ -842,7 +842,7 @@ static int caam_ctrl_resume(struct device *dev)
+ 	struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev);
+ 	int ret = 0;
+ 
+-	if (ctrlpriv->caam_off_during_pm && !ctrlpriv->optee_en) {
++	if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0) {
+ 		caam_state_restore(dev);
+ 
+ 		/* HW and rng will be reset so deinstantiation can be removed */
+@@ -908,6 +908,7 @@ static int caam_probe(struct platform_device *pdev)
+ 
+ 		imx_soc_data = imx_soc_match->data;
+ 		reg_access = reg_access && imx_soc_data->page0_access;
++		ctrlpriv->no_page0 = !reg_access;
+ 		/*
+ 		 * CAAM clocks cannot be controlled from kernel.
+ 		 */
+diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
+index e5132015087209..51c90d17a40d23 100644
+--- a/drivers/crypto/caam/intern.h
++++ b/drivers/crypto/caam/intern.h
+@@ -115,6 +115,7 @@ struct caam_drv_private {
+ 	u8 blob_present;	/* Nonzero if BLOB support present in device */
+ 	u8 mc_en;		/* Nonzero if MC f/w is active */
+ 	u8 optee_en;		/* Nonzero if OP-TEE f/w is active */
++	u8 no_page0;		/* Nonzero if register page 0 is not controlled by Linux */
+ 	bool pr_support;        /* RNG prediction resistance available */
+ 	int secvio_irq;		/* Security violation interrupt number */
+ 	int virt_en;		/* Virtualization enabled in CAAM */
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 539c303beb3a28..e058ba02779296 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -1787,8 +1787,14 @@ static int __sev_snp_shutdown_locked(int *error, bool panic)
+ 	sev->snp_initialized = false;
+ 	dev_dbg(sev->dev, "SEV-SNP firmware shutdown\n");
+ 
+-	atomic_notifier_chain_unregister(&panic_notifier_list,
+-					 &snp_panic_notifier);
++	/*
++	 * __sev_snp_shutdown_locked() deadlocks when it tries to unregister
++	 * itself during panic as the panic notifier is called with RCU read
++	 * lock held and notifier unregistration does RCU synchronization.
++	 */
++	if (!panic)
++		atomic_notifier_chain_unregister(&panic_notifier_list,
++						 &snp_panic_notifier);
+ 
+ 	/* Reset TMR size back to default */
+ 	sev_es_tmr_size = SEV_TMR_SIZE;
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_common_drv.h b/drivers/crypto/intel/qat/qat_common/adf_common_drv.h
+index eaa6388a6678b0..7a022bd4ae07ca 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_common_drv.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_common_drv.h
+@@ -189,6 +189,7 @@ void adf_exit_misc_wq(void);
+ bool adf_misc_wq_queue_work(struct work_struct *work);
+ bool adf_misc_wq_queue_delayed_work(struct delayed_work *work,
+ 				    unsigned long delay);
++void adf_misc_wq_flush(void);
+ #if defined(CONFIG_PCI_IOV)
+ int adf_sriov_configure(struct pci_dev *pdev, int numvfs);
+ void adf_disable_sriov(struct adf_accel_dev *accel_dev);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_init.c b/drivers/crypto/intel/qat/qat_common/adf_init.c
+index f189cce7d15358..46491048e0bb42 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_init.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_init.c
+@@ -404,6 +404,7 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
+ 		hw_data->exit_admin_comms(accel_dev);
+ 
+ 	adf_cleanup_etr_data(accel_dev);
++	adf_misc_wq_flush();
+ 	adf_dev_restore(accel_dev);
+ }
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_isr.c
+index cae1aee5479aff..12e5656136610c 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_isr.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_isr.c
+@@ -407,3 +407,8 @@ bool adf_misc_wq_queue_delayed_work(struct delayed_work *work,
+ {
+ 	return queue_delayed_work(adf_misc_wq, work, delay);
+ }
++
++void adf_misc_wq_flush(void)
++{
++	flush_workqueue(adf_misc_wq);
++}
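
Flushing the misc workqueue during shutdown ensures no deferred work still references device state that is about to be torn down. A userspace analogue of the same teardown ordering, with a single pthread standing in for a queued work item:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct dev_state {
	int *stats;
};

static void *misc_work(void *arg)
{
	struct dev_state *dev = arg;

	dev->stats[0]++;	/* deferred work touching device state */
	return NULL;
}

int main(void)
{
	struct dev_state dev = { .stats = calloc(1, sizeof(int)) };
	pthread_t worker;

	if (!dev.stats || pthread_create(&worker, NULL, misc_work, &dev))
		return 1;

	/* The "flush": wait for outstanding work before freeing the state
	 * it references; freeing first would be the use-after-free the
	 * patch avoids. */
	pthread_join(worker, NULL);

	printf("work ran %d time(s)\n", dev.stats[0]);
	free(dev.stats);
	return 0;
}
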
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_algs.c b/drivers/crypto/intel/qat/qat_common/qat_algs.c
+index c03a698511142e..43e6dd9b77b7d4 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_algs.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_algs.c
+@@ -1277,7 +1277,7 @@ static struct aead_alg qat_aeads[] = { {
+ 	.base = {
+ 		.cra_name = "authenc(hmac(sha1),cbc(aes))",
+ 		.cra_driver_name = "qat_aes_cbc_hmac_sha1",
+-		.cra_priority = 4001,
++		.cra_priority = 100,
+ 		.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
+ 		.cra_blocksize = AES_BLOCK_SIZE,
+ 		.cra_ctxsize = sizeof(struct qat_alg_aead_ctx),
+@@ -1294,7 +1294,7 @@ static struct aead_alg qat_aeads[] = { {
+ 	.base = {
+ 		.cra_name = "authenc(hmac(sha256),cbc(aes))",
+ 		.cra_driver_name = "qat_aes_cbc_hmac_sha256",
+-		.cra_priority = 4001,
++		.cra_priority = 100,
+ 		.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
+ 		.cra_blocksize = AES_BLOCK_SIZE,
+ 		.cra_ctxsize = sizeof(struct qat_alg_aead_ctx),
+@@ -1311,7 +1311,7 @@ static struct aead_alg qat_aeads[] = { {
+ 	.base = {
+ 		.cra_name = "authenc(hmac(sha512),cbc(aes))",
+ 		.cra_driver_name = "qat_aes_cbc_hmac_sha512",
+-		.cra_priority = 4001,
++		.cra_priority = 100,
+ 		.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
+ 		.cra_blocksize = AES_BLOCK_SIZE,
+ 		.cra_ctxsize = sizeof(struct qat_alg_aead_ctx),
+@@ -1329,7 +1329,7 @@ static struct aead_alg qat_aeads[] = { {
+ static struct skcipher_alg qat_skciphers[] = { {
+ 	.base.cra_name = "cbc(aes)",
+ 	.base.cra_driver_name = "qat_aes_cbc",
+-	.base.cra_priority = 4001,
++	.base.cra_priority = 100,
+ 	.base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
+ 	.base.cra_blocksize = AES_BLOCK_SIZE,
+ 	.base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx),
+@@ -1347,7 +1347,7 @@ static struct skcipher_alg qat_skciphers[] = { {
+ }, {
+ 	.base.cra_name = "ctr(aes)",
+ 	.base.cra_driver_name = "qat_aes_ctr",
+-	.base.cra_priority = 4001,
++	.base.cra_priority = 100,
+ 	.base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
+ 	.base.cra_blocksize = 1,
+ 	.base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx),
+@@ -1365,7 +1365,7 @@ static struct skcipher_alg qat_skciphers[] = { {
+ }, {
+ 	.base.cra_name = "xts(aes)",
+ 	.base.cra_driver_name = "qat_aes_xts",
+-	.base.cra_priority = 4001,
++	.base.cra_priority = 100,
+ 	.base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK |
+ 			  CRYPTO_ALG_ALLOCATES_MEMORY,
+ 	.base.cra_blocksize = AES_BLOCK_SIZE,
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h
+index e27e849b01dfc0..90a031421aacbf 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h
++++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h
+@@ -34,6 +34,9 @@
+ #define SG_COMP_2    2
+ #define SG_COMP_1    1
+ 
++#define OTX2_CPT_DPTR_RPTR_ALIGN	8
++#define OTX2_CPT_RES_ADDR_ALIGN		32
++
+ union otx2_cpt_opcode {
+ 	u16 flags;
+ 	struct {
+@@ -347,22 +350,48 @@ static inline struct otx2_cpt_inst_info *
+ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 		       gfp_t gfp)
+ {
+-	u32 dlen = 0, g_len, sg_len, info_len;
+-	int align = OTX2_CPT_DMA_MINALIGN;
++	u32 dlen = 0, g_len, s_len, sg_len, info_len;
+ 	struct otx2_cpt_inst_info *info;
+-	u16 g_sz_bytes, s_sz_bytes;
+ 	u32 total_mem_len;
+ 	int i;
+ 
+-	g_sz_bytes = ((req->in_cnt + 2) / 3) *
+-		      sizeof(struct cn10kb_cpt_sglist_component);
+-	s_sz_bytes = ((req->out_cnt + 2) / 3) *
+-		      sizeof(struct cn10kb_cpt_sglist_component);
++	/* Allocate memory to meet the alignment requirements below:
++	 *  ------------------------------------
++	 * |    struct otx2_cpt_inst_info       |
++	 * |    (No alignment required)         |
++	 * |    --------------------------------|
++	 * |   | padding for ARCH_DMA_MINALIGN  |
++	 * |   | alignment                      |
++	 * |------------------------------------|
++	 * |    SG List Gather/Input memory     |
++	 * |    Length = multiple of 32Bytes    |
++	 * |    Alignment = 8Byte               |
++	 * |------------------------------------|
++	 * |    SG List Scatter/Output memory   |
++	 * |    Length = multiple of 32Bytes    |
++	 * |    Alignment = 8Byte               |
++	 * |     -------------------------------|
++	 * |    | padding for 32B alignment     |
++	 * |------------------------------------|
++	 * |    Result response memory          |
++	 * |    Alignment = 32Byte              |
++	 *  ------------------------------------
++	 */
++
++	info_len = sizeof(*info);
++
++	g_len = ((req->in_cnt + 2) / 3) *
++		 sizeof(struct cn10kb_cpt_sglist_component);
++	s_len = ((req->out_cnt + 2) / 3) *
++		 sizeof(struct cn10kb_cpt_sglist_component);
++	sg_len = g_len + s_len;
+ 
+-	g_len = ALIGN(g_sz_bytes, align);
+-	sg_len = ALIGN(g_len + s_sz_bytes, align);
+-	info_len = ALIGN(sizeof(*info), align);
+-	total_mem_len = sg_len + info_len + sizeof(union otx2_cpt_res_s);
++	/* Allocate extra memory for SG and response address alignment */
++	total_mem_len = ALIGN(info_len, OTX2_CPT_DPTR_RPTR_ALIGN);
++	total_mem_len += (ARCH_DMA_MINALIGN - 1) &
++			  ~(OTX2_CPT_DPTR_RPTR_ALIGN - 1);
++	total_mem_len += ALIGN(sg_len, OTX2_CPT_RES_ADDR_ALIGN);
++	total_mem_len += sizeof(union otx2_cpt_res_s);
+ 
+ 	info = kzalloc(total_mem_len, gfp);
+ 	if (unlikely(!info))
+@@ -372,7 +401,8 @@ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 		dlen += req->in[i].size;
+ 
+ 	info->dlen = dlen;
+-	info->in_buffer = (u8 *)info + info_len;
++	info->in_buffer = PTR_ALIGN((u8 *)info + info_len, ARCH_DMA_MINALIGN);
++	info->out_buffer = info->in_buffer + g_len;
+ 	info->gthr_sz = req->in_cnt;
+ 	info->sctr_sz = req->out_cnt;
+ 
+@@ -384,7 +414,7 @@ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 	}
+ 
+ 	if (sgv2io_components_setup(pdev, req->out, req->out_cnt,
+-				    &info->in_buffer[g_len])) {
++				    info->out_buffer)) {
+ 		dev_err(&pdev->dev, "Failed to setup scatter list\n");
+ 		goto destroy_info;
+ 	}
+@@ -401,8 +431,10 @@ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 	 * Get buffer for union otx2_cpt_res_s response
+ 	 * structure and its physical address
+ 	 */
+-	info->completion_addr = info->in_buffer + sg_len;
+-	info->comp_baddr = info->dptr_baddr + sg_len;
++	info->completion_addr = PTR_ALIGN((info->in_buffer + sg_len),
++					  OTX2_CPT_RES_ADDR_ALIGN);
++	info->comp_baddr = ALIGN((info->dptr_baddr + sg_len),
++				 OTX2_CPT_RES_ADDR_ALIGN);
+ 
+ 	return info;
+ 
+@@ -417,10 +449,9 @@ static inline struct otx2_cpt_inst_info *
+ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 		    gfp_t gfp)
+ {
+-	int align = OTX2_CPT_DMA_MINALIGN;
+ 	struct otx2_cpt_inst_info *info;
+-	u32 dlen, align_dlen, info_len;
+-	u16 g_sz_bytes, s_sz_bytes;
++	u32 dlen, info_len;
++	u16 g_len, s_len;
+ 	u32 total_mem_len;
+ 
+ 	if (unlikely(req->in_cnt > OTX2_CPT_MAX_SG_IN_CNT ||
+@@ -429,22 +460,54 @@ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 		return NULL;
+ 	}
+ 
+-	g_sz_bytes = ((req->in_cnt + 3) / 4) *
+-		      sizeof(struct otx2_cpt_sglist_component);
+-	s_sz_bytes = ((req->out_cnt + 3) / 4) *
+-		      sizeof(struct otx2_cpt_sglist_component);
++	/* Allocate memory to meet the alignment requirement below:
++	 *  ------------------------------------
++	 * |    struct otx2_cpt_inst_info       |
++	 * |    (No alignment required)         |
++	 * |    --------------------------------|
++	 * |   | padding for ARCH_DMA_MINALIGN  |
++	 * |   | alignment                      |
++	 * |------------------------------------|
++	 * |    SG List Header of 8 Byte        |
++	 * |------------------------------------|
++	 * |    SG List Gather/Input memory     |
++	 * |    Length = multiple of 32Bytes    |
++	 * |    Alignment = 8Byte               |
++	 * |------------------------------------|
++	 * |    SG List Scatter/Output memory   |
++	 * |    Length = multiple of 32Bytes    |
++	 * |    Alignment = 8Byte               |
++	 * |     -------------------------------|
++	 * |    | padding for 32B alignment     |
++	 * |------------------------------------|
++	 * |    Result response memory          |
++	 * |    Alignment = 32Byte              |
++	 *  ------------------------------------
++	 */
++
++	info_len = sizeof(*info);
++
++	g_len = ((req->in_cnt + 3) / 4) *
++		 sizeof(struct otx2_cpt_sglist_component);
++	s_len = ((req->out_cnt + 3) / 4) *
++		 sizeof(struct otx2_cpt_sglist_component);
++
++	dlen = g_len + s_len + SG_LIST_HDR_SIZE;
+ 
+-	dlen = g_sz_bytes + s_sz_bytes + SG_LIST_HDR_SIZE;
+-	align_dlen = ALIGN(dlen, align);
+-	info_len = ALIGN(sizeof(*info), align);
+-	total_mem_len = align_dlen + info_len + sizeof(union otx2_cpt_res_s);
++	/* Allocate extra memory for SG and response address alignment */
++	total_mem_len = ALIGN(info_len, OTX2_CPT_DPTR_RPTR_ALIGN);
++	total_mem_len += (ARCH_DMA_MINALIGN - 1) &
++			  ~(OTX2_CPT_DPTR_RPTR_ALIGN - 1);
++	total_mem_len += ALIGN(dlen, OTX2_CPT_RES_ADDR_ALIGN);
++	total_mem_len += sizeof(union otx2_cpt_res_s);
+ 
+ 	info = kzalloc(total_mem_len, gfp);
+ 	if (unlikely(!info))
+ 		return NULL;
+ 
+ 	info->dlen = dlen;
+-	info->in_buffer = (u8 *)info + info_len;
++	info->in_buffer = PTR_ALIGN((u8 *)info + info_len, ARCH_DMA_MINALIGN);
++	info->out_buffer = info->in_buffer + SG_LIST_HDR_SIZE + g_len;
+ 
+ 	((u16 *)info->in_buffer)[0] = req->out_cnt;
+ 	((u16 *)info->in_buffer)[1] = req->in_cnt;
+@@ -460,7 +523,7 @@ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 	}
+ 
+ 	if (setup_sgio_components(pdev, req->out, req->out_cnt,
+-				  &info->in_buffer[8 + g_sz_bytes])) {
++				  info->out_buffer)) {
+ 		dev_err(&pdev->dev, "Failed to setup scatter list\n");
+ 		goto destroy_info;
+ 	}
+@@ -476,8 +539,10 @@ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req,
+ 	 * Get buffer for union otx2_cpt_res_s response
+ 	 * structure and its physical address
+ 	 */
+-	info->completion_addr = info->in_buffer + align_dlen;
+-	info->comp_baddr = info->dptr_baddr + align_dlen;
++	info->completion_addr = PTR_ALIGN((info->in_buffer + dlen),
++					  OTX2_CPT_RES_ADDR_ALIGN);
++	info->comp_baddr = ALIGN((info->dptr_baddr + dlen),
++				 OTX2_CPT_RES_ADDR_ALIGN);
+ 
+ 	return info;
+ 
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+index 9095dea2748d5e..56645b3eb71757 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+@@ -1491,12 +1491,13 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 	union otx2_cpt_opcode opcode;
+ 	union otx2_cpt_res_s *result;
+ 	union otx2_cpt_inst_s inst;
++	dma_addr_t result_baddr;
+ 	dma_addr_t rptr_baddr;
+ 	struct pci_dev *pdev;
+-	u32 len, compl_rlen;
+ 	int timeout = 10000;
++	void *base, *rptr;
+ 	int ret, etype;
+-	void *rptr;
++	u32 len;
+ 
+ 	/*
+ 	 * We don't get capabilities if it was already done
+@@ -1519,22 +1520,28 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 	if (ret)
+ 		goto delete_grps;
+ 
+-	compl_rlen = ALIGN(sizeof(union otx2_cpt_res_s), OTX2_CPT_DMA_MINALIGN);
+-	len = compl_rlen + LOADFVC_RLEN;
++	/* Allocate extra memory for "rptr" and "result" pointer alignment */
++	len = LOADFVC_RLEN + ARCH_DMA_MINALIGN +
++	       sizeof(union otx2_cpt_res_s) + OTX2_CPT_RES_ADDR_ALIGN;
+ 
+-	result = kzalloc(len, GFP_KERNEL);
+-	if (!result) {
++	base = kzalloc(len, GFP_KERNEL);
++	if (!base) {
+ 		ret = -ENOMEM;
+ 		goto lf_cleanup;
+ 	}
+-	rptr_baddr = dma_map_single(&pdev->dev, (void *)result, len,
+-				    DMA_BIDIRECTIONAL);
++
++	rptr = PTR_ALIGN(base, ARCH_DMA_MINALIGN);
++	rptr_baddr = dma_map_single(&pdev->dev, rptr, len, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(&pdev->dev, rptr_baddr)) {
+ 		dev_err(&pdev->dev, "DMA mapping failed\n");
+ 		ret = -EFAULT;
+-		goto free_result;
++		goto free_rptr;
+ 	}
+-	rptr = (u8 *)result + compl_rlen;
++
++	result = (union otx2_cpt_res_s *)PTR_ALIGN(rptr + LOADFVC_RLEN,
++						   OTX2_CPT_RES_ADDR_ALIGN);
++	result_baddr = ALIGN(rptr_baddr + LOADFVC_RLEN,
++			     OTX2_CPT_RES_ADDR_ALIGN);
+ 
+ 	/* Fill in the command */
+ 	opcode.s.major = LOADFVC_MAJOR_OP;
+@@ -1546,14 +1553,14 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 	/* 64-bit swap for microcode data reads, not needed for addresses */
+ 	cpu_to_be64s(&iq_cmd.cmd.u);
+ 	iq_cmd.dptr = 0;
+-	iq_cmd.rptr = rptr_baddr + compl_rlen;
++	iq_cmd.rptr = rptr_baddr;
+ 	iq_cmd.cptr.u = 0;
+ 
+ 	for (etype = 1; etype < OTX2_CPT_MAX_ENG_TYPES; etype++) {
+ 		result->s.compcode = OTX2_CPT_COMPLETION_CODE_INIT;
+ 		iq_cmd.cptr.s.grp = otx2_cpt_get_eng_grp(&cptpf->eng_grps,
+ 							 etype);
+-		otx2_cpt_fill_inst(&inst, &iq_cmd, rptr_baddr);
++		otx2_cpt_fill_inst(&inst, &iq_cmd, result_baddr);
+ 		lfs->ops->send_cmd(&inst, 1, &cptpf->lfs.lf[0]);
+ 		timeout = 10000;
+ 
+@@ -1576,8 +1583,8 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf)
+ 
+ error_no_response:
+ 	dma_unmap_single(&pdev->dev, rptr_baddr, len, DMA_BIDIRECTIONAL);
+-free_result:
+-	kfree(result);
++free_rptr:
++	kfree(base);
+ lf_cleanup:
+ 	otx2_cptlf_shutdown(lfs);
+ delete_grps:
+diff --git a/drivers/fpga/zynq-fpga.c b/drivers/fpga/zynq-fpga.c
+index f7e08f7ea9ef3c..b7629a0e481340 100644
+--- a/drivers/fpga/zynq-fpga.c
++++ b/drivers/fpga/zynq-fpga.c
+@@ -405,12 +405,12 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
+ 		}
+ 	}
+ 
+-	priv->dma_nelms =
+-	    dma_map_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
+-	if (priv->dma_nelms == 0) {
++	err = dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
++	if (err) {
+ 		dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n");
+-		return -ENOMEM;
++		return err;
+ 	}
++	priv->dma_nelms = sgt->nents;
+ 
+ 	/* enable clock */
+ 	err = clk_enable(priv->clk);
+@@ -478,7 +478,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt)
+ 	clk_disable(priv->clk);
+ 
+ out_free:
+-	dma_unmap_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
++	dma_unmap_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0);
+ 	return err;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index a5ccd0ada16ab0..e1d79f48304992 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -886,6 +886,7 @@ struct amdgpu_mqd_prop {
+ 	uint64_t csa_addr;
+ 	uint64_t fence_address;
+ 	bool tmz_queue;
++	bool kernel_queue;
+ };
+ 
+ struct amdgpu_mqd {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
+index 15dde1f5032842..25252231a68a94 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c
+@@ -459,7 +459,7 @@ static u32 amdgpu_cper_ring_get_ent_sz(struct amdgpu_ring *ring, u64 pos)
+ 
+ void amdgpu_cper_ring_write(struct amdgpu_ring *ring, void *src, int count)
+ {
+-	u64 pos, wptr_old, rptr = *ring->rptr_cpu_addr & ring->ptr_mask;
++	u64 pos, wptr_old, rptr;
+ 	int rec_cnt_dw = count >> 2;
+ 	u32 chunk, ent_sz;
+ 	u8 *s = (u8 *)src;
+@@ -472,9 +472,11 @@ void amdgpu_cper_ring_write(struct amdgpu_ring *ring, void *src, int count)
+ 		return;
+ 	}
+ 
++	mutex_lock(&ring->adev->cper.ring_lock);
++
+ 	wptr_old = ring->wptr;
++	rptr = *ring->rptr_cpu_addr & ring->ptr_mask;
+ 
+-	mutex_lock(&ring->adev->cper.ring_lock);
+ 	while (count) {
+ 		ent_sz = amdgpu_cper_ring_get_ent_sz(ring, ring->wptr);
+ 		chunk = umin(ent_sz, count);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 9ea0d9b71f48db..82cb68114ae991 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1138,6 +1138,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ 		}
+ 	}
+ 
++	if (!amdgpu_vm_ready(vm))
++		return -EINVAL;
++
+ 	r = amdgpu_vm_clear_freed(adev, vm, NULL);
+ 	if (r)
+ 		return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 54ea8e8d781215..a57e8c5474bb00 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2561,9 +2561,6 @@ static int amdgpu_device_parse_gpu_info_fw(struct amdgpu_device *adev)
+ 
+ 	adev->firmware.gpu_info_fw = NULL;
+ 
+-	if (adev->mman.discovery_bin)
+-		return 0;
+-
+ 	switch (adev->asic_type) {
+ 	default:
+ 		return 0;
+@@ -2585,6 +2582,8 @@ static int amdgpu_device_parse_gpu_info_fw(struct amdgpu_device *adev)
+ 		chip_name = "arcturus";
+ 		break;
+ 	case CHIP_NAVI12:
++		if (adev->mman.discovery_bin)
++			return 0;
+ 		chip_name = "navi12";
+ 		break;
+ 	}
+@@ -3235,6 +3234,7 @@ static bool amdgpu_device_check_vram_lost(struct amdgpu_device *adev)
+ 	 * always assumed to be lost.
+ 	 */
+ 	switch (amdgpu_asic_reset_method(adev)) {
++	case AMD_RESET_METHOD_LEGACY:
+ 	case AMD_RESET_METHOD_LINK:
+ 	case AMD_RESET_METHOD_BACO:
+ 	case AMD_RESET_METHOD_MODE1:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 81b3443c8d7f48..efe0058b48ca85 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -276,7 +276,7 @@ static int amdgpu_discovery_read_binary_from_mem(struct amdgpu_device *adev,
+ 	u32 msg;
+ 
+ 	if (!amdgpu_sriov_vf(adev)) {
+-		/* It can take up to a second for IFWI init to complete on some dGPUs,
++		/* It can take up to two seconds for IFWI init to complete on some dGPUs,
+ 		 * but generally it should be in the 60-100ms range.  Normally this starts
+ 		 * as soon as the device gets power so by the time the OS loads this has long
+ 		 * completed.  However, when a card is hotplugged via e.g., USB4, we need to
+@@ -284,7 +284,7 @@ static int amdgpu_discovery_read_binary_from_mem(struct amdgpu_device *adev,
+ 		 * continue.
+ 		 */
+ 
+-		for (i = 0; i < 1000; i++) {
++		for (i = 0; i < 2000; i++) {
+ 			msg = RREG32(mmMP0_SMN_C2PMSG_33);
+ 			if (msg & 0x80000000)
+ 				break;
+@@ -2555,40 +2555,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 
+ 	switch (adev->asic_type) {
+ 	case CHIP_VEGA10:
+-	case CHIP_VEGA12:
+-	case CHIP_RAVEN:
+-	case CHIP_VEGA20:
+-	case CHIP_ARCTURUS:
+-	case CHIP_ALDEBARAN:
+-		/* this is not fatal.  We have a fallback below
+-		 * if the new firmwares are not present. some of
+-		 * this will be overridden below to keep things
+-		 * consistent with the current behavior.
++		/* This is not fatal.  We only need the discovery
++		 * binary for sysfs.  We don't need it for a
++		 * functional system.
+ 		 */
+-		r = amdgpu_discovery_reg_base_init(adev);
+-		if (!r) {
+-			amdgpu_discovery_harvest_ip(adev);
+-			amdgpu_discovery_get_gfx_info(adev);
+-			amdgpu_discovery_get_mall_info(adev);
+-			amdgpu_discovery_get_vcn_info(adev);
+-		}
+-		break;
+-	default:
+-		r = amdgpu_discovery_reg_base_init(adev);
+-		if (r) {
+-			drm_err(&adev->ddev, "discovery failed: %d\n", r);
+-			return r;
+-		}
+-
+-		amdgpu_discovery_harvest_ip(adev);
+-		amdgpu_discovery_get_gfx_info(adev);
+-		amdgpu_discovery_get_mall_info(adev);
+-		amdgpu_discovery_get_vcn_info(adev);
+-		break;
+-	}
+-
+-	switch (adev->asic_type) {
+-	case CHIP_VEGA10:
++		amdgpu_discovery_init(adev);
+ 		vega10_reg_base_init(adev);
+ 		adev->sdma.num_instances = 2;
+ 		adev->gmc.num_umc = 4;
+@@ -2611,6 +2582,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->ip_versions[DCI_HWIP][0] = IP_VERSION(12, 0, 0);
+ 		break;
+ 	case CHIP_VEGA12:
++		/* This is not fatal.  We only need the discovery
++		 * binary for sysfs.  We don't need it for a
++		 * functional system.
++		 */
++		amdgpu_discovery_init(adev);
+ 		vega10_reg_base_init(adev);
+ 		adev->sdma.num_instances = 2;
+ 		adev->gmc.num_umc = 4;
+@@ -2633,6 +2609,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->ip_versions[DCI_HWIP][0] = IP_VERSION(12, 0, 1);
+ 		break;
+ 	case CHIP_RAVEN:
++		/* This is not fatal.  We only need the discovery
++		 * binary for sysfs.  We don't need it for a
++		 * functional system.
++		 */
++		amdgpu_discovery_init(adev);
+ 		vega10_reg_base_init(adev);
+ 		adev->sdma.num_instances = 1;
+ 		adev->vcn.num_vcn_inst = 1;
+@@ -2674,6 +2655,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		}
+ 		break;
+ 	case CHIP_VEGA20:
++		/* This is not fatal.  We only need the discovery
++		 * binary for sysfs.  We don't need it for a
++		 * functional system.
++		 */
++		amdgpu_discovery_init(adev);
+ 		vega20_reg_base_init(adev);
+ 		adev->sdma.num_instances = 2;
+ 		adev->gmc.num_umc = 8;
+@@ -2697,6 +2683,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->ip_versions[DCI_HWIP][0] = IP_VERSION(12, 1, 0);
+ 		break;
+ 	case CHIP_ARCTURUS:
++		/* This is not fatal.  We only need the discovery
++		 * binary for sysfs.  We don't need it for a
++		 * functional system.
++		 */
++		amdgpu_discovery_init(adev);
+ 		arct_reg_base_init(adev);
+ 		adev->sdma.num_instances = 8;
+ 		adev->vcn.num_vcn_inst = 2;
+@@ -2725,6 +2716,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->ip_versions[UVD_HWIP][1] = IP_VERSION(2, 5, 0);
+ 		break;
+ 	case CHIP_ALDEBARAN:
++		/* This is not fatal.  We only need the discovery
++		 * binary for sysfs.  We don't need it for a
++		 * functional system.
++		 */
++		amdgpu_discovery_init(adev);
+ 		aldebaran_reg_base_init(adev);
+ 		adev->sdma.num_instances = 5;
+ 		adev->vcn.num_vcn_inst = 2;
+@@ -2751,6 +2747,16 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->ip_versions[XGMI_HWIP][0] = IP_VERSION(6, 1, 0);
+ 		break;
+ 	default:
++		r = amdgpu_discovery_reg_base_init(adev);
++		if (r) {
++			drm_err(&adev->ddev, "discovery failed: %d\n", r);
++			return r;
++		}
++
++		amdgpu_discovery_harvest_ip(adev);
++		amdgpu_discovery_get_gfx_info(adev);
++		amdgpu_discovery_get_mall_info(adev);
++		amdgpu_discovery_get_vcn_info(adev);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 3528a27c7c1ddd..bee6609eab2043 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -387,13 +387,6 @@ amdgpu_job_prepare_job(struct drm_sched_job *sched_job,
+ 			dev_err(ring->adev->dev, "Error getting VM ID (%d)\n", r);
+ 			goto error;
+ 		}
+-		/*
+-		 * The VM structure might be released after the VMID is
+-		 * assigned, we had multiple problems with people trying to use
+-		 * the VM pointer so better set it to NULL.
+-		 */
+-		if (!fence)
+-			job->vm = NULL;
+ 		return fence;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 6ac0ce361a2d8c..7c5584742471e9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -687,6 +687,7 @@ static void amdgpu_ring_to_mqd_prop(struct amdgpu_ring *ring,
+ 	prop->eop_gpu_addr = ring->eop_gpu_addr;
+ 	prop->use_doorbell = ring->use_doorbell;
+ 	prop->doorbell_index = ring->doorbell_index;
++	prop->kernel_queue = true;
+ 
+ 	/* map_queues packet doesn't need activate the queue,
+ 	 * so only kiq need set this field.
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index eaddc441c51ab5..024f2121a0997b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -32,6 +32,7 @@
+ 
+ static const struct kicker_device kicker_device_list[] = {
+ 	{0x744B, 0x00},
++	{0x7551, 0xC8}
+ };
+ 
+ static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 3911c78f828279..9b364502de44f9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -654,11 +654,10 @@ int amdgpu_vm_validate(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+  * Check if all VM PDs/PTs are ready for updates
+  *
+  * Returns:
+- * True if VM is not evicting.
++ * True if VM is not evicting and all VM entities are not stopped
+  */
+ bool amdgpu_vm_ready(struct amdgpu_vm *vm)
+ {
+-	bool empty;
+ 	bool ret;
+ 
+ 	amdgpu_vm_eviction_lock(vm);
+@@ -666,10 +665,18 @@ bool amdgpu_vm_ready(struct amdgpu_vm *vm)
+ 	amdgpu_vm_eviction_unlock(vm);
+ 
+ 	spin_lock(&vm->status_lock);
+-	empty = list_empty(&vm->evicted);
++	ret &= list_empty(&vm->evicted);
+ 	spin_unlock(&vm->status_lock);
+ 
+-	return ret && empty;
++	spin_lock(&vm->immediate.lock);
++	ret &= !vm->immediate.stopped;
++	spin_unlock(&vm->immediate.lock);
++
++	spin_lock(&vm->delayed.lock);
++	ret &= !vm->delayed.stopped;
++	spin_unlock(&vm->delayed.lock);
++
++	return ret;
+ }
+ 
+ /**
+@@ -2409,13 +2416,11 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
+  */
+ long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
+ {
+-	timeout = dma_resv_wait_timeout(vm->root.bo->tbo.base.resv,
+-					DMA_RESV_USAGE_BOOKKEEP,
+-					true, timeout);
++	timeout = drm_sched_entity_flush(&vm->immediate, timeout);
+ 	if (timeout <= 0)
+ 		return timeout;
+ 
+-	return dma_fence_wait_timeout(vm->last_unlocked, true, timeout);
++	return drm_sched_entity_flush(&vm->delayed, timeout);
+ }
+ 
+ static void amdgpu_vm_destroy_task_info(struct kref *kref)
+diff --git a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+index 1c083304ae7767..c67c4705d662e6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
++++ b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c
+@@ -453,6 +453,7 @@ static int __aqua_vanjaram_get_px_mode_info(struct amdgpu_xcp_mgr *xcp_mgr,
+ 					    uint16_t *nps_modes)
+ {
+ 	struct amdgpu_device *adev = xcp_mgr->adev;
++	uint32_t gc_ver = amdgpu_ip_version(adev, GC_HWIP, 0);
+ 
+ 	if (!num_xcp || !nps_modes || !(xcp_mgr->supp_xcp_modes & BIT(px_mode)))
+ 		return -EINVAL;
+@@ -476,12 +477,14 @@ static int __aqua_vanjaram_get_px_mode_info(struct amdgpu_xcp_mgr *xcp_mgr,
+ 		*num_xcp = 4;
+ 		*nps_modes = BIT(AMDGPU_NPS1_PARTITION_MODE) |
+ 			     BIT(AMDGPU_NPS4_PARTITION_MODE);
++		if (gc_ver == IP_VERSION(9, 5, 0))
++			*nps_modes |= BIT(AMDGPU_NPS2_PARTITION_MODE);
+ 		break;
+ 	case AMDGPU_CPX_PARTITION_MODE:
+ 		*num_xcp = NUM_XCC(adev->gfx.xcc_mask);
+ 		*nps_modes = BIT(AMDGPU_NPS1_PARTITION_MODE) |
+ 			     BIT(AMDGPU_NPS4_PARTITION_MODE);
+-		if (amdgpu_sriov_vf(adev))
++		if (gc_ver == IP_VERSION(9, 5, 0))
+ 			*nps_modes |= BIT(AMDGPU_NPS2_PARTITION_MODE);
+ 		break;
+ 	default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 50f04c2c0b8c0c..097ec7d99c5abb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -79,6 +79,7 @@ MODULE_FIRMWARE("amdgpu/gc_12_0_1_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_1_me.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_1_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_1_rlc.bin");
++MODULE_FIRMWARE("amdgpu/gc_12_0_1_rlc_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_1_toc.bin");
+ 
+ static const struct amdgpu_hwip_reg_entry gc_reg_list_12_0[] = {
+@@ -586,7 +587,7 @@ static int gfx_v12_0_init_toc_microcode(struct amdgpu_device *adev, const char *
+ 
+ static int gfx_v12_0_init_microcode(struct amdgpu_device *adev)
+ {
+-	char ucode_prefix[15];
++	char ucode_prefix[30];
+ 	int err;
+ 	const struct rlc_firmware_header_v2_0 *rlc_hdr;
+ 	uint16_t version_major;
+@@ -613,9 +614,14 @@ static int gfx_v12_0_init_microcode(struct amdgpu_device *adev)
+ 	amdgpu_gfx_cp_init_microcode(adev, AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK);
+ 
+ 	if (!amdgpu_sriov_vf(adev)) {
+-		err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
+-					   AMDGPU_UCODE_REQUIRED,
+-					   "amdgpu/%s_rlc.bin", ucode_prefix);
++		if (amdgpu_is_kicker_fw(adev))
++			err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
++						   AMDGPU_UCODE_REQUIRED,
++						   "amdgpu/%s_rlc_kicker.bin", ucode_prefix);
++		else
++			err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
++						   AMDGPU_UCODE_REQUIRED,
++						   "amdgpu/%s_rlc.bin", ucode_prefix);
+ 		if (err)
+ 			goto out;
+ 		rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+diff --git a/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c b/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c
+index df898dbb746e3f..58cd87db80619f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c
+@@ -34,12 +34,13 @@
+ 
+ MODULE_FIRMWARE("amdgpu/gc_12_0_0_imu.bin");
+ MODULE_FIRMWARE("amdgpu/gc_12_0_1_imu.bin");
++MODULE_FIRMWARE("amdgpu/gc_12_0_1_imu_kicker.bin");
+ 
+ #define TRANSFER_RAM_MASK	0x001c0000
+ 
+ static int imu_v12_0_init_microcode(struct amdgpu_device *adev)
+ {
+-	char ucode_prefix[15];
++	char ucode_prefix[30];
+ 	int err;
+ 	const struct imu_firmware_header_v1_0 *imu_hdr;
+ 	struct amdgpu_firmware_info *info = NULL;
+@@ -47,8 +48,12 @@ static int imu_v12_0_init_microcode(struct amdgpu_device *adev)
+ 	DRM_DEBUG("\n");
+ 
+ 	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+-	err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
+-				   "amdgpu/%s_imu.bin", ucode_prefix);
++	if (amdgpu_is_kicker_fw(adev))
++		err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_imu_kicker.bin", ucode_prefix);
++	else
++		err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_imu.bin", ucode_prefix);
+ 	if (err)
+ 		goto out;
+ 
+@@ -362,7 +367,7 @@ static void program_imu_rlc_ram(struct amdgpu_device *adev,
+ static void imu_v12_0_program_rlc_ram(struct amdgpu_device *adev)
+ {
+ 	u32 reg_data, size = 0;
+-	const u32 *data;
++	const u32 *data = NULL;
+ 	int r = -EINVAL;
+ 
+ 	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_INDEX, 0x2);
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
+index 134c4ec1088785..910337dc28d105 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
+@@ -36,40 +36,47 @@
+ 
+ static const char *mmhub_client_ids_v3_0_1[][2] = {
+ 	[0][0] = "VMC",
++	[1][0] = "ISPXT",
++	[2][0] = "ISPIXT",
+ 	[4][0] = "DCEDMC",
+ 	[5][0] = "DCEVGA",
+ 	[6][0] = "MP0",
+ 	[7][0] = "MP1",
+-	[8][0] = "MPIO",
+-	[16][0] = "HDP",
+-	[17][0] = "LSDMA",
+-	[18][0] = "JPEG",
+-	[19][0] = "VCNU0",
+-	[21][0] = "VSCH",
+-	[22][0] = "VCNU1",
+-	[23][0] = "VCN1",
+-	[32+20][0] = "VCN0",
+-	[2][1] = "DBGUNBIO",
++	[8][0] = "MPM",
++	[12][0] = "ISPTNR",
++	[14][0] = "ISPCRD0",
++	[15][0] = "ISPCRD1",
++	[16][0] = "ISPCRD2",
++	[22][0] = "HDP",
++	[23][0] = "LSDMA",
++	[24][0] = "JPEG",
++	[27][0] = "VSCH",
++	[28][0] = "VCNU",
++	[29][0] = "VCN",
++	[1][1] = "ISPXT",
++	[2][1] = "ISPIXT",
+ 	[3][1] = "DCEDWB",
+ 	[4][1] = "DCEDMC",
+ 	[5][1] = "DCEVGA",
+ 	[6][1] = "MP0",
+ 	[7][1] = "MP1",
+-	[8][1] = "MPIO",
+-	[10][1] = "DBGU0",
+-	[11][1] = "DBGU1",
+-	[12][1] = "DBGU2",
+-	[13][1] = "DBGU3",
+-	[14][1] = "XDP",
+-	[15][1] = "OSSSYS",
+-	[16][1] = "HDP",
+-	[17][1] = "LSDMA",
+-	[18][1] = "JPEG",
+-	[19][1] = "VCNU0",
+-	[20][1] = "VCN0",
+-	[21][1] = "VSCH",
+-	[22][1] = "VCNU1",
+-	[23][1] = "VCN1",
++	[8][1] = "MPM",
++	[10][1] = "ISPMWR0",
++	[11][1] = "ISPMWR1",
++	[12][1] = "ISPTNR",
++	[13][1] = "ISPSWR",
++	[14][1] = "ISPCWR0",
++	[15][1] = "ISPCWR1",
++	[16][1] = "ISPCWR2",
++	[17][1] = "ISPCWR3",
++	[18][1] = "XDP",
++	[21][1] = "OSSSYS",
++	[22][1] = "HDP",
++	[23][1] = "LSDMA",
++	[24][1] = "JPEG",
++	[27][1] = "VSCH",
++	[28][1] = "VCNU",
++	[29][1] = "VCN",
+ };
+ 
+ static uint32_t mmhub_v3_0_1_get_invalidate_req(unsigned int vmid,
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
+index bc3d6c2fc87a42..f6fc9778bc3059 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c
+@@ -40,30 +40,129 @@
+ 
+ static const char *mmhub_client_ids_v3_3[][2] = {
+ 	[0][0] = "VMC",
++	[1][0] = "ISPXT",
++	[2][0] = "ISPIXT",
+ 	[4][0] = "DCEDMC",
+ 	[6][0] = "MP0",
+ 	[7][0] = "MP1",
+ 	[8][0] = "MPM",
++	[9][0] = "ISPPDPRD",
++	[10][0] = "ISPCSTATRD",
++	[11][0] = "ISPBYRPRD",
++	[12][0] = "ISPRGBPRD",
++	[13][0] = "ISPMCFPRD",
++	[14][0] = "ISPMCFPRD1",
++	[15][0] = "ISPYUVPRD",
++	[16][0] = "ISPMCSCRD",
++	[17][0] = "ISPGDCRD",
++	[18][0] = "ISPLMERD",
++	[22][0] = "ISPXT1",
++	[23][0] = "ISPIXT1",
+ 	[24][0] = "HDP",
+ 	[25][0] = "LSDMA",
+ 	[26][0] = "JPEG",
+ 	[27][0] = "VPE",
++	[28][0] = "VSCH",
+ 	[29][0] = "VCNU",
+ 	[30][0] = "VCN",
++	[1][1] = "ISPXT",
++	[2][1] = "ISPIXT",
+ 	[3][1] = "DCEDWB",
+ 	[4][1] = "DCEDMC",
++	[5][1] = "ISPCSISWR",
+ 	[6][1] = "MP0",
+ 	[7][1] = "MP1",
+ 	[8][1] = "MPM",
++	[9][1] = "ISPPDPWR",
++	[10][1] = "ISPCSTATWR",
++	[11][1] = "ISPBYRPWR",
++	[12][1] = "ISPRGBPWR",
++	[13][1] = "ISPMCFPWR",
++	[14][1] = "ISPMWR0",
++	[15][1] = "ISPYUVPWR",
++	[16][1] = "ISPMCSCWR",
++	[17][1] = "ISPGDCWR",
++	[18][1] = "ISPLMEWR",
++	[20][1] = "ISPMWR2",
+ 	[21][1] = "OSSSYS",
++	[22][1] = "ISPXT1",
++	[23][1] = "ISPIXT1",
+ 	[24][1] = "HDP",
+ 	[25][1] = "LSDMA",
+ 	[26][1] = "JPEG",
+ 	[27][1] = "VPE",
++	[28][1] = "VSCH",
+ 	[29][1] = "VCNU",
+ 	[30][1] = "VCN",
+ };
+ 
++static const char *mmhub_client_ids_v3_3_1[][2] = {
++	[0][0] = "VMC",
++	[4][0] = "DCEDMC",
++	[6][0] = "MP0",
++	[7][0] = "MP1",
++	[8][0] = "MPM",
++	[24][0] = "HDP",
++	[25][0] = "LSDMA",
++	[26][0] = "JPEG0",
++	[27][0] = "VPE0",
++	[28][0] = "VSCH",
++	[29][0] = "VCNU0",
++	[30][0] = "VCN0",
++	[32+1][0] = "ISPXT",
++	[32+2][0] = "ISPIXT",
++	[32+9][0] = "ISPPDPRD",
++	[32+10][0] = "ISPCSTATRD",
++	[32+11][0] = "ISPBYRPRD",
++	[32+12][0] = "ISPRGBPRD",
++	[32+13][0] = "ISPMCFPRD",
++	[32+14][0] = "ISPMCFPRD1",
++	[32+15][0] = "ISPYUVPRD",
++	[32+16][0] = "ISPMCSCRD",
++	[32+17][0] = "ISPGDCRD",
++	[32+18][0] = "ISPLMERD",
++	[32+22][0] = "ISPXT1",
++	[32+23][0] = "ISPIXT1",
++	[32+26][0] = "JPEG1",
++	[32+27][0] = "VPE1",
++	[32+29][0] = "VCNU1",
++	[32+30][0] = "VCN1",
++	[3][1] = "DCEDWB",
++	[4][1] = "DCEDMC",
++	[6][1] = "MP0",
++	[7][1] = "MP1",
++	[8][1] = "MPM",
++	[21][1] = "OSSSYS",
++	[24][1] = "HDP",
++	[25][1] = "LSDMA",
++	[26][1] = "JPEG0",
++	[27][1] = "VPE0",
++	[28][1] = "VSCH",
++	[29][1] = "VCNU0",
++	[30][1] = "VCN0",
++	[32+1][1] = "ISPXT",
++	[32+2][1] = "ISPIXT",
++	[32+5][1] = "ISPCSISWR",
++	[32+9][1] = "ISPPDPWR",
++	[32+10][1] = "ISPCSTATWR",
++	[32+11][1] = "ISPBYRPWR",
++	[32+12][1] = "ISPRGBPWR",
++	[32+13][1] = "ISPMCFPWR",
++	[32+14][1] = "ISPMWR0",
++	[32+15][1] = "ISPYUVPWR",
++	[32+16][1] = "ISPMCSCWR",
++	[32+17][1] = "ISPGDCWR",
++	[32+18][1] = "ISPLMEWR",
++	[32+19][1] = "ISPMWR1",
++	[32+20][1] = "ISPMWR2",
++	[32+22][1] = "ISPXT1",
++	[32+23][1] = "ISPIXT1",
++	[32+26][1] = "JPEG1",
++	[32+27][1] = "VPE1",
++	[32+29][1] = "VCNU1",
++	[32+30][1] = "VCN1",
++};
++
+ static uint32_t mmhub_v3_3_get_invalidate_req(unsigned int vmid,
+ 						uint32_t flush_type)
+ {
+@@ -102,12 +201,16 @@ mmhub_v3_3_print_l2_protection_fault_status(struct amdgpu_device *adev,
+ 
+ 	switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
+ 	case IP_VERSION(3, 3, 0):
+-	case IP_VERSION(3, 3, 1):
+ 	case IP_VERSION(3, 3, 2):
+ 		mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_3) ?
+ 			    mmhub_client_ids_v3_3[cid][rw] :
+ 			    cid == 0x140 ? "UMSCH" : NULL;
+ 		break;
++	case IP_VERSION(3, 3, 1):
++		mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_3_1) ?
++			    mmhub_client_ids_v3_3_1[cid][rw] :
++			    cid == 0x140 ? "UMSCH" : NULL;
++		break;
+ 	default:
+ 		mmhub_cid = NULL;
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+index f2ab5001b49249..951998454b2572 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+@@ -37,39 +37,31 @@
+ static const char *mmhub_client_ids_v4_1_0[][2] = {
+ 	[0][0] = "VMC",
+ 	[4][0] = "DCEDMC",
+-	[5][0] = "DCEVGA",
+ 	[6][0] = "MP0",
+ 	[7][0] = "MP1",
+ 	[8][0] = "MPIO",
+-	[16][0] = "HDP",
+-	[17][0] = "LSDMA",
+-	[18][0] = "JPEG",
+-	[19][0] = "VCNU0",
+-	[21][0] = "VSCH",
+-	[22][0] = "VCNU1",
+-	[23][0] = "VCN1",
+-	[32+20][0] = "VCN0",
+-	[2][1] = "DBGUNBIO",
++	[16][0] = "LSDMA",
++	[17][0] = "JPEG",
++	[19][0] = "VCNU",
++	[22][0] = "VSCH",
++	[23][0] = "HDP",
++	[32+23][0] = "VCNRD",
+ 	[3][1] = "DCEDWB",
+ 	[4][1] = "DCEDMC",
+-	[5][1] = "DCEVGA",
+ 	[6][1] = "MP0",
+ 	[7][1] = "MP1",
+ 	[8][1] = "MPIO",
+ 	[10][1] = "DBGU0",
+ 	[11][1] = "DBGU1",
+-	[12][1] = "DBGU2",
+-	[13][1] = "DBGU3",
++	[12][1] = "DBGUNBIO",
+ 	[14][1] = "XDP",
+ 	[15][1] = "OSSSYS",
+-	[16][1] = "HDP",
+-	[17][1] = "LSDMA",
+-	[18][1] = "JPEG",
+-	[19][1] = "VCNU0",
+-	[20][1] = "VCN0",
+-	[21][1] = "VSCH",
+-	[22][1] = "VCNU1",
+-	[23][1] = "VCN1",
++	[16][1] = "LSDMA",
++	[17][1] = "JPEG",
++	[18][1] = "VCNWR",
++	[19][1] = "VCNU",
++	[22][1] = "VSCH",
++	[23][1] = "HDP",
+ };
+ 
+ static uint32_t mmhub_v4_1_0_get_invalidate_req(unsigned int vmid,
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+index ffa47c7d24c919..b029f301aaccaf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+@@ -34,7 +34,9 @@
+ MODULE_FIRMWARE("amdgpu/psp_14_0_2_sos.bin");
+ MODULE_FIRMWARE("amdgpu/psp_14_0_2_ta.bin");
+ MODULE_FIRMWARE("amdgpu/psp_14_0_3_sos.bin");
++MODULE_FIRMWARE("amdgpu/psp_14_0_3_sos_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/psp_14_0_3_ta.bin");
++MODULE_FIRMWARE("amdgpu/psp_14_0_3_ta_kicker.bin");
+ MODULE_FIRMWARE("amdgpu/psp_14_0_5_toc.bin");
+ MODULE_FIRMWARE("amdgpu/psp_14_0_5_ta.bin");
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index c457be3a3c56f5..9e74c9822e622a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1218,6 +1218,8 @@ static int soc15_common_early_init(struct amdgpu_ip_block *ip_block)
+ 			AMD_PG_SUPPORT_JPEG;
+ 		/*TODO: need a new external_rev_id for GC 9.4.4? */
+ 		adev->external_rev_id = adev->rev_id + 0x46;
++		if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 5, 0))
++			adev->external_rev_id = adev->rev_id + 0x50;
+ 		break;
+ 	default:
+ 		/* FIXME: not supported yet */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 76359c6a3f3a44..91ce313ac43abe 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -2716,7 +2716,7 @@ static void get_queue_checkpoint_info(struct device_queue_manager *dqm,
+ 
+ 	dqm_lock(dqm);
+ 	mqd_mgr = dqm->mqd_mgrs[mqd_type];
+-	*mqd_size = mqd_mgr->mqd_size;
++	*mqd_size = mqd_mgr->mqd_size * NUM_XCC(mqd_mgr->dev->xcc_mask);
+ 	*ctl_stack_size = 0;
+ 
+ 	if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE && mqd_mgr->get_checkpoint_info)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_module.c b/drivers/gpu/drm/amd/amdkfd/kfd_module.c
+index aee2212e52f69a..33aa23450b3f72 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_module.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_module.c
+@@ -78,8 +78,8 @@ static int kfd_init(void)
+ static void kfd_exit(void)
+ {
+ 	kfd_cleanup_processes();
+-	kfd_debugfs_fini();
+ 	kfd_process_destroy_wq();
++	kfd_debugfs_fini();
+ 	kfd_procfs_shutdown();
+ 	kfd_topology_shutdown();
+ 	kfd_chardev_exit();
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 97933d2a380323..f2dee320fada42 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -373,7 +373,7 @@ static void get_checkpoint_info(struct mqd_manager *mm, void *mqd, u32 *ctl_stac
+ {
+ 	struct v9_mqd *m = get_mqd(mqd);
+ 
+-	*ctl_stack_size = m->cp_hqd_cntl_stack_size;
++	*ctl_stack_size = m->cp_hqd_cntl_stack_size * NUM_XCC(mm->dev->xcc_mask);
+ }
+ 
+ static void checkpoint_mqd(struct mqd_manager *mm, void *mqd, void *mqd_dst, void *ctl_stack_dst)
+@@ -388,6 +388,24 @@ static void checkpoint_mqd(struct mqd_manager *mm, void *mqd, void *mqd_dst, voi
+ 	memcpy(ctl_stack_dst, ctl_stack, m->cp_hqd_cntl_stack_size);
+ }
+ 
++static void checkpoint_mqd_v9_4_3(struct mqd_manager *mm,
++								  void *mqd,
++								  void *mqd_dst,
++								  void *ctl_stack_dst)
++{
++	struct v9_mqd *m;
++	int xcc;
++	uint64_t size = get_mqd(mqd)->cp_mqd_stride_size;
++
++	for (xcc = 0; xcc < NUM_XCC(mm->dev->xcc_mask); xcc++) {
++		m = get_mqd(mqd + size * xcc);
++
++		checkpoint_mqd(mm, m,
++				(uint8_t *)mqd_dst + sizeof(*m) * xcc,
++				(uint8_t *)ctl_stack_dst + m->cp_hqd_cntl_stack_size * xcc);
++	}
++}
++
+ static void restore_mqd(struct mqd_manager *mm, void **mqd,
+ 			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
+ 			struct queue_properties *qp,
+@@ -764,13 +782,35 @@ static void restore_mqd_v9_4_3(struct mqd_manager *mm, void **mqd,
+ 			const void *mqd_src,
+ 			const void *ctl_stack_src, u32 ctl_stack_size)
+ {
+-	restore_mqd(mm, mqd, mqd_mem_obj, gart_addr, qp, mqd_src, ctl_stack_src, ctl_stack_size);
+-	if (amdgpu_sriov_multi_vf_mode(mm->dev->adev)) {
+-		struct v9_mqd *m;
++	struct kfd_mem_obj xcc_mqd_mem_obj;
++	u32 mqd_ctl_stack_size;
++	struct v9_mqd *m;
++	u32 num_xcc;
++	int xcc;
+ 
+-		m = (struct v9_mqd *) mqd_mem_obj->cpu_ptr;
+-		m->cp_hqd_pq_doorbell_control |= 1 <<
+-				CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_MODE__SHIFT;
++	uint64_t offset = mm->mqd_stride(mm, qp);
++
++	mm->dev->dqm->current_logical_xcc_start++;
++
++	num_xcc = NUM_XCC(mm->dev->xcc_mask);
++	mqd_ctl_stack_size = ctl_stack_size / num_xcc;
++
++	memset(&xcc_mqd_mem_obj, 0x0, sizeof(struct kfd_mem_obj));
++
++	/* Set the MQD pointer and gart address to XCC0 MQD */
++	*mqd = mqd_mem_obj->cpu_ptr;
++	if (gart_addr)
++		*gart_addr = mqd_mem_obj->gpu_addr;
++
++	for (xcc = 0; xcc < num_xcc; xcc++) {
++		get_xcc_mqd(mqd_mem_obj, &xcc_mqd_mem_obj, offset * xcc);
++		restore_mqd(mm, (void **)&m,
++					&xcc_mqd_mem_obj,
++					NULL,
++					qp,
++					(uint8_t *)mqd_src + xcc * sizeof(*m),
++					(uint8_t *)ctl_stack_src + xcc * mqd_ctl_stack_size,
++					mqd_ctl_stack_size);
+ 	}
+ }
+ static int destroy_mqd_v9_4_3(struct mqd_manager *mm, void *mqd,
+@@ -906,7 +946,6 @@ struct mqd_manager *mqd_manager_init_v9(enum KFD_MQD_TYPE type,
+ 		mqd->free_mqd = kfd_free_mqd_cp;
+ 		mqd->is_occupied = kfd_is_occupied_cp;
+ 		mqd->get_checkpoint_info = get_checkpoint_info;
+-		mqd->checkpoint_mqd = checkpoint_mqd;
+ 		mqd->mqd_size = sizeof(struct v9_mqd);
+ 		mqd->mqd_stride = mqd_stride_v9;
+ #if defined(CONFIG_DEBUG_FS)
+@@ -918,16 +957,18 @@ struct mqd_manager *mqd_manager_init_v9(enum KFD_MQD_TYPE type,
+ 			mqd->init_mqd = init_mqd_v9_4_3;
+ 			mqd->load_mqd = load_mqd_v9_4_3;
+ 			mqd->update_mqd = update_mqd_v9_4_3;
+-			mqd->restore_mqd = restore_mqd_v9_4_3;
+ 			mqd->destroy_mqd = destroy_mqd_v9_4_3;
+ 			mqd->get_wave_state = get_wave_state_v9_4_3;
++			mqd->checkpoint_mqd = checkpoint_mqd_v9_4_3;
++			mqd->restore_mqd = restore_mqd_v9_4_3;
+ 		} else {
+ 			mqd->init_mqd = init_mqd;
+ 			mqd->load_mqd = load_mqd;
+ 			mqd->update_mqd = update_mqd;
+-			mqd->restore_mqd = restore_mqd;
+ 			mqd->destroy_mqd = kfd_destroy_mqd_cp;
+ 			mqd->get_wave_state = get_wave_state;
++			mqd->checkpoint_mqd = checkpoint_mqd;
++			mqd->restore_mqd = restore_mqd;
+ 		}
+ 		break;
+ 	case KFD_MQD_TYPE_HIQ:
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index c643e0ccec52b0..7fbb5c274ccc42 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -914,7 +914,10 @@ static int criu_checkpoint_queues_device(struct kfd_process_device *pdd,
+ 
+ 		q_data = (struct kfd_criu_queue_priv_data *)q_private_data;
+ 
+-		/* data stored in this order: priv_data, mqd, ctl_stack */
++		/*
++		 * data stored in this order:
++		 * priv_data, mqd[xcc0], mqd[xcc1],..., ctl_stack[xcc0], ctl_stack[xcc1]...
++		 */
+ 		q_data->mqd_size = mqd_size;
+ 		q_data->ctl_stack_size = ctl_stack_size;
+ 
+@@ -963,7 +966,7 @@ int kfd_criu_checkpoint_queues(struct kfd_process *p,
+ }
+ 
+ static void set_queue_properties_from_criu(struct queue_properties *qp,
+-					  struct kfd_criu_queue_priv_data *q_data)
++					  struct kfd_criu_queue_priv_data *q_data, uint32_t num_xcc)
+ {
+ 	qp->is_interop = false;
+ 	qp->queue_percent = q_data->q_percent;
+@@ -976,7 +979,11 @@ static void set_queue_properties_from_criu(struct queue_properties *qp,
+ 	qp->eop_ring_buffer_size = q_data->eop_ring_buffer_size;
+ 	qp->ctx_save_restore_area_address = q_data->ctx_save_restore_area_address;
+ 	qp->ctx_save_restore_area_size = q_data->ctx_save_restore_area_size;
+-	qp->ctl_stack_size = q_data->ctl_stack_size;
++	if (q_data->type == KFD_QUEUE_TYPE_COMPUTE)
++		qp->ctl_stack_size = q_data->ctl_stack_size / num_xcc;
++	else
++		qp->ctl_stack_size = q_data->ctl_stack_size;
++
+ 	qp->type = q_data->type;
+ 	qp->format = q_data->format;
+ }
+@@ -1036,12 +1043,15 @@ int kfd_criu_restore_queue(struct kfd_process *p,
+ 		goto exit;
+ 	}
+ 
+-	/* data stored in this order: mqd, ctl_stack */
++	/*
++	 * data stored in this order:
++	 * mqd[xcc0], mqd[xcc1],..., ctl_stack[xcc0], ctl_stack[xcc1]...
++	 */
+ 	mqd = q_extra_data;
+ 	ctl_stack = mqd + q_data->mqd_size;
+ 
+ 	memset(&qp, 0, sizeof(qp));
+-	set_queue_properties_from_criu(&qp, q_data);
++	set_queue_properties_from_criu(&qp, q_data, NUM_XCC(pdd->dev->adev->gfx.xcc_mask));
+ 
+ 	print_queue_properties(&qp);
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7b5440bdad2f35..2d94fec5b545d7 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3343,8 +3343,10 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 		link_enc_cfg_copy(adev->dm.dc->current_state, dc_state);
+ 
+ 		r = dm_dmub_hw_init(adev);
+-		if (r)
++		if (r) {
+ 			drm_err(adev_to_drm(adev), "DMUB interface failed to initialize: status=%d\n", r);
++			return r;
++		}
+ 
+ 		dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0);
+ 		dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
+@@ -4731,16 +4733,16 @@ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
+ 	return 1;
+ }
+ 
+-/* Rescale from [min..max] to [0..MAX_BACKLIGHT_LEVEL] */
++/* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */
+ static inline u32 scale_input_to_fw(int min, int max, u64 input)
+ {
+-	return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min);
++	return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min);
+ }
+ 
+-/* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */
++/* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */
+ static inline u32 scale_fw_to_input(int min, int max, u64 input)
+ {
+-	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL);
++	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL);
+ }
+ 
+ static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
+@@ -4952,9 +4954,9 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector)
+ 	caps = &dm->backlight_caps[aconnector->bl_idx];
+ 	if (get_brightness_range(caps, &min, &max)) {
+ 		if (power_supply_is_system_supplied() > 0)
+-			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->ac_level, 100);
++			props.brightness = DIV_ROUND_CLOSEST((max - min) * caps->ac_level, 100);
+ 		else
+-			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->dc_level, 100);
++			props.brightness = DIV_ROUND_CLOSEST((max - min) * caps->dc_level, 100);
+ 		/* min is zero, so max needs to be adjusted */
+ 		props.max_brightness = max - min;
+ 		drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max,
+@@ -7759,6 +7761,9 @@ amdgpu_dm_connector_atomic_check(struct drm_connector *conn,
+ 	struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn);
+ 	int ret;
+ 
++	if (WARN_ON(unlikely(!old_con_state || !new_con_state)))
++		return -EINVAL;
++
+ 	trace_amdgpu_dm_connector_atomic_check(new_con_state);
+ 
+ 	if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 2551823382f8b6..45feb404b0979e 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -299,6 +299,25 @@ static inline int amdgpu_dm_crtc_set_vblank(struct drm_crtc *crtc, bool enable)
+ 	irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id);
+ 
+ 	if (enable) {
++		struct dc *dc = adev->dm.dc;
++		struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
++		struct psr_settings *psr = &acrtc_state->stream->link->psr_settings;
++		struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
++		bool sr_supported = (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED) ||
++								pr->config.replay_supported;
++
++		/*
++		 * IPS & the self-refresh feature can cause vblank counter resets
++		 * between vblank disable and enable.
++		 * This may cause the system to get stuck waiting on the vblank counter.
++		 * Call this function to estimate the missed vblanks using timestamps and
++		 * update the vblank counter in DRM.
++		 */
++		if (dc->caps.ips_support &&
++			dc->config.disable_ips != DMUB_IPS_DISABLE_ALL &&
++			sr_supported && vblank->config.disable_immediate)
++			drm_crtc_vblank_restore(crtc);
++
+ 		/* vblank irq on -> Only need vupdate irq in vrr mode */
+ 		if (amdgpu_dm_crtc_vrr_active(acrtc_state))
+ 			rc = amdgpu_dm_crtc_set_vupdate_irq(crtc, true);
+@@ -661,6 +680,15 @@ static int amdgpu_dm_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ 		return -EINVAL;
+ 	}
+ 
++	if (!state->legacy_cursor_update && amdgpu_dm_crtc_vrr_active(dm_crtc_state)) {
++		struct drm_plane_state *primary_state;
++
++		/* Pull in primary plane for correct VRR handling */
++		primary_state = drm_atomic_get_plane_state(state, crtc->primary);
++		if (IS_ERR(primary_state))
++			return PTR_ERR(primary_state);
++	}
++
+ 	/* In some use cases, like reset, no stream is attached */
+ 	if (!dm_crtc_state->stream)
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index c7d13e743e6c8c..b726bcd18e2982 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -3988,7 +3988,7 @@ static int capabilities_show(struct seq_file *m, void *unused)
+ 
+ 	struct hubbub *hubbub = dc->res_pool->hubbub;
+ 
+-	if (hubbub->funcs->get_mall_en)
++	if (hubbub && hubbub->funcs->get_mall_en)
+ 		hubbub->funcs->get_mall_en(hubbub, &mall_in_use);
+ 
+ 	if (dc->cap_funcs.get_subvp_en)
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+index 67f08495b7e6e2..154fd2c18e8848 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+@@ -174,11 +174,8 @@ static struct graphics_object_id bios_parser_get_connector_id(
+ 		return object_id;
+ 	}
+ 
+-	if (tbl->ucNumberOfObjects <= i) {
+-		dm_error("Can't find connector id %d in connector table of size %d.\n",
+-			 i, tbl->ucNumberOfObjects);
++	if (tbl->ucNumberOfObjects <= i)
+ 		return object_id;
+-	}
+ 
+ 	id = le16_to_cpu(tbl->asObjects[i].usObjectID);
+ 	object_id = object_id_from_bios_object_id(id);
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+index 2bcae0643e61db..58e88778da7ffd 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+@@ -993,7 +993,7 @@ static enum bp_result set_pixel_clock_v3(
+ 	allocation.sPCLKInput.usFbDiv =
+ 			cpu_to_le16((uint16_t)bp_params->feedback_divider);
+ 	allocation.sPCLKInput.ucFracFbDiv =
+-			(uint8_t)bp_params->fractional_feedback_divider;
++			(uint8_t)(bp_params->fractional_feedback_divider / 100000);
+ 	allocation.sPCLKInput.ucPostDiv =
+ 			(uint8_t)bp_params->pixel_clock_post_divider;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+index 4c3e58c730b11c..a0c1072c59a236 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+@@ -158,7 +158,6 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
+ 			return NULL;
+ 		}
+ 		dce60_clk_mgr_construct(ctx, clk_mgr);
+-		dce_clk_mgr_construct(ctx, clk_mgr);
+ 		return &clk_mgr->base;
+ 	}
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
+index 26feefbb8990ae..dbd6ef1b60a0b7 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c
+@@ -72,9 +72,9 @@ static const struct state_dependent_clocks dce80_max_clks_by_state[] = {
+ /* ClocksStateLow */
+ { .display_clk_khz = 352000, .pixel_clk_khz = 330000},
+ /* ClocksStateNominal */
+-{ .display_clk_khz = 600000, .pixel_clk_khz = 400000 },
++{ .display_clk_khz = 625000, .pixel_clk_khz = 400000 },
+ /* ClocksStatePerformance */
+-{ .display_clk_khz = 600000, .pixel_clk_khz = 400000 } };
++{ .display_clk_khz = 625000, .pixel_clk_khz = 400000 } };
+ 
+ int dentist_get_divider_from_did(int did)
+ {
+@@ -245,6 +245,11 @@ int dce_set_clock(
+ 	pxl_clk_params.target_pixel_clock_100hz = requested_clk_khz * 10;
+ 	pxl_clk_params.pll_id = CLOCK_SOURCE_ID_DFS;
+ 
++	/* DCE 6.0, DCE 6.4: engine clock is the same as PLL0 */
++	if (clk_mgr_base->ctx->dce_version == DCE_VERSION_6_0 ||
++	    clk_mgr_base->ctx->dce_version == DCE_VERSION_6_4)
++		pxl_clk_params.pll_id = CLOCK_SOURCE_ID_PLL0;
++
+ 	if (clk_mgr_dce->dfs_bypass_active)
+ 		pxl_clk_params.flags.SET_DISPCLK_DFS_BYPASS = true;
+ 
+@@ -386,8 +391,6 @@ static void dce_pplib_apply_display_requirements(
+ {
+ 	struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
+ 
+-	pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
+-
+ 	dce110_fill_display_configs(context, pp_display_cfg);
+ 
+ 	if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) !=  0)
+@@ -400,11 +403,9 @@ static void dce_update_clocks(struct clk_mgr *clk_mgr_base,
+ {
+ 	struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ 	struct dm_pp_power_level_change_request level_change_req;
+-	int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz;
+-
+-	/*TODO: W/A for dal3 linux, investigate why this works */
+-	if (!clk_mgr_dce->dfs_bypass_active)
+-		patched_disp_clk = patched_disp_clk * 115 / 100;
++	const int max_disp_clk =
++		clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz;
++	int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz);
+ 
+ 	level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context);
+ 	/* get max clock state from PPLIB */
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+index f8409453434c1c..13cf415e38e501 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+@@ -120,9 +120,15 @@ void dce110_fill_display_configs(
+ 	const struct dc_state *context,
+ 	struct dm_pp_display_configuration *pp_display_cfg)
+ {
++	struct dc *dc = context->clk_mgr->ctx->dc;
+ 	int j;
+ 	int num_cfgs = 0;
+ 
++	pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
++	pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
++	pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
++	pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator;
++
+ 	for (j = 0; j < context->stream_count; j++) {
+ 		int k;
+ 
+@@ -164,6 +170,23 @@ void dce110_fill_display_configs(
+ 		cfg->v_refresh /= stream->timing.h_total;
+ 		cfg->v_refresh = (cfg->v_refresh + stream->timing.v_total / 2)
+ 							/ stream->timing.v_total;
++
++		/* Find first CRTC index and calculate its line time.
++		 * This is necessary for DPM on SI GPUs.
++		 */
++		if (cfg->pipe_idx < pp_display_cfg->crtc_index) {
++			const struct dc_crtc_timing *timing =
++				&context->streams[0]->timing;
++
++			pp_display_cfg->crtc_index = cfg->pipe_idx;
++			pp_display_cfg->line_time_in_us =
++				timing->h_total * 10000 / timing->pix_clk_100hz;
++		}
++	}
++
++	if (!num_cfgs) {
++		pp_display_cfg->crtc_index = 0;
++		pp_display_cfg->line_time_in_us = 0;
+ 	}
+ 
+ 	pp_display_cfg->display_count = num_cfgs;
+@@ -223,25 +246,8 @@ void dce11_pplib_apply_display_requirements(
+ 	pp_display_cfg->min_engine_clock_deep_sleep_khz
+ 			= context->bw_ctx.bw.dce.sclk_deep_sleep_khz;
+ 
+-	pp_display_cfg->avail_mclk_switch_time_us =
+-						dce110_get_min_vblank_time_us(context);
+-	/* TODO: dce11.2*/
+-	pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
+-
+-	pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
+-
+ 	dce110_fill_display_configs(context, pp_display_cfg);
+ 
+-	/* TODO: is this still applicable?*/
+-	if (pp_display_cfg->display_count == 1) {
+-		const struct dc_crtc_timing *timing =
+-			&context->streams[0]->timing;
+-
+-		pp_display_cfg->crtc_index =
+-			pp_display_cfg->disp_configs[0].pipe_idx;
+-		pp_display_cfg->line_time_in_us = timing->h_total * 10000 / timing->pix_clk_100hz;
+-	}
+-
+ 	if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) !=  0)
+ 		dm_pp_apply_display_requirements(dc->ctx, pp_display_cfg);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
+index 0267644717b27a..a39641a0ff09ef 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c
+@@ -83,22 +83,13 @@ static const struct state_dependent_clocks dce60_max_clks_by_state[] = {
+ static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base)
+ {
+ 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+-	int dprefclk_wdivider;
+-	int dp_ref_clk_khz;
+-	int target_div;
++	struct dc_context *ctx = clk_mgr_base->ctx;
++	int dp_ref_clk_khz = 0;
+ 
+-	/* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */
+-
+-	/* Read the mmDENTIST_DISPCLK_CNTL to get the currently
+-	 * programmed DID DENTIST_DPREFCLK_WDIVIDER*/
+-	REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider);
+-
+-	/* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/
+-	target_div = dentist_get_divider_from_did(dprefclk_wdivider);
+-
+-	/* Calculate the current DFS clock, in kHz.*/
+-	dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR
+-		* clk_mgr->base.dentist_vco_freq_khz) / target_div;
++	if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev))
++		dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency;
++	else
++		dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz;
+ 
+ 	return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz);
+ }
+@@ -109,8 +100,6 @@ static void dce60_pplib_apply_display_requirements(
+ {
+ 	struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
+ 
+-	pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
+-
+ 	dce110_fill_display_configs(context, pp_display_cfg);
+ 
+ 	if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) !=  0)
+@@ -123,11 +112,9 @@ static void dce60_update_clocks(struct clk_mgr *clk_mgr_base,
+ {
+ 	struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ 	struct dm_pp_power_level_change_request level_change_req;
+-	int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz;
+-
+-	/*TODO: W/A for dal3 linux, investigate why this works */
+-	if (!clk_mgr_dce->dfs_bypass_active)
+-		patched_disp_clk = patched_disp_clk * 115 / 100;
++	const int max_disp_clk =
++		clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz;
++	int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz);
+ 
+ 	level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context);
+ 	/* get max clock state from PPLIB */
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 0017e9991670bd..eb76611a42a5eb 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -217,11 +217,24 @@ static bool create_links(
+ 		connectors_num,
+ 		num_virtual_links);
+ 
+-	// condition loop on link_count to allow skipping invalid indices
++	/* When getting the number of connectors, the VBIOS reports the number of valid indices,
++	 * but it doesn't say which indices are valid, and not every index has an actual connector.
++	 * So, if we don't find a connector on an index, that is not an error.
++	 *
++	 * - There is no guarantee that the first N indices will be valid
++	 * - VBIOS may report a higher number of valid indices than there are actual connectors
++	 * - Some VBIOS have valid configurations for more connectors than there actually are
++	 *   on the card. This may be because the manufacturer used the same VBIOS for different
++	 *   variants of the same card.
++	 */
+ 	for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) {
++		struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i);
+ 		struct link_init_data link_init_params = {0};
+ 		struct dc_link *link;
+ 
++		if (connector_id.id == CONNECTOR_ID_UNKNOWN)
++			continue;
++
+ 		DC_LOG_DC("BIOS object table - printing link object info for connector number: %d, link_index: %d", i, dc->link_count);
+ 
+ 		link_init_params.ctx = dc->ctx;
+@@ -938,17 +951,18 @@ static void dc_destruct(struct dc *dc)
+ 	if (dc->link_srv)
+ 		link_destroy_link_service(&dc->link_srv);
+ 
+-	if (dc->ctx->gpio_service)
+-		dal_gpio_service_destroy(&dc->ctx->gpio_service);
+-
+-	if (dc->ctx->created_bios)
+-		dal_bios_parser_destroy(&dc->ctx->dc_bios);
++	if (dc->ctx) {
++		if (dc->ctx->gpio_service)
++			dal_gpio_service_destroy(&dc->ctx->gpio_service);
+ 
+-	kfree(dc->ctx->logger);
+-	dc_perf_trace_destroy(&dc->ctx->perf_trace);
++		if (dc->ctx->created_bios)
++			dal_bios_parser_destroy(&dc->ctx->dc_bios);
++		kfree(dc->ctx->logger);
++		dc_perf_trace_destroy(&dc->ctx->perf_trace);
+ 
+-	kfree(dc->ctx);
+-	dc->ctx = NULL;
++		kfree(dc->ctx);
++		dc->ctx = NULL;
++	}
+ 
+ 	kfree(dc->bw_vbios);
+ 	dc->bw_vbios = NULL;
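
The new comment in create_links() explains why sparse connector tables are normal. A small standalone sketch of the same skip-on-unknown loop, with a made-up stand-in for the VBIOS object table:

    #include <stdio.h>

    #define MAX_LINKS 8
    #define CONNECTOR_ID_UNKNOWN 0

    /* hypothetical stand-in for the VBIOS object table; gaps are legal */
    static int vbios_connector_id(int index)
    {
        static const int table[MAX_LINKS] = { 10, 0, 11, 0, 12, 0, 0, 0 };
        return table[index];
    }

    int main(void)
    {
        int connectors_num = 3; /* what the VBIOS reports */
        int link_count = 0;

        for (int i = 0; link_count < connectors_num && i < MAX_LINKS; i++) {
            if (vbios_connector_id(i) == CONNECTOR_ID_UNKNOWN)
                continue; /* empty slot: not an error */
            printf("link %d -> connector index %d\n", link_count, i);
            link_count++;
        }
        return 0;
    }
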
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c
+index d9ffdded5ce1e1..6944bac4ea9b2e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c
+@@ -373,7 +373,7 @@ static const struct resource_caps res_cap = {
+ 		.num_timing_generator = 6,
+ 		.num_audio = 6,
+ 		.num_stream_encoder = 6,
+-		.num_pll = 2,
++		.num_pll = 3,
+ 		.num_ddc = 6,
+ };
+ 
+@@ -389,7 +389,7 @@ static const struct resource_caps res_cap_64 = {
+ 		.num_timing_generator = 2,
+ 		.num_audio = 2,
+ 		.num_stream_encoder = 2,
+-		.num_pll = 2,
++		.num_pll = 3,
+ 		.num_ddc = 2,
+ };
+ 
+@@ -973,21 +973,24 @@ static bool dce60_construct(
+ 
+ 	if (bp->fw_info_valid && bp->fw_info.external_clock_source_frequency_for_dp != 0) {
+ 		pool->base.dp_clock_source =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true);
+ 
++		/* DCE 6.0 and 6.4: PLL0 can only be used with DP. Don't initialize it here. */
+ 		pool->base.clock_sources[0] =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], false);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false);
+ 		pool->base.clock_sources[1] =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false);
+ 		pool->base.clk_src_count = 2;
+ 
+ 	} else {
+ 		pool->base.dp_clock_source =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], true);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], true);
+ 
+ 		pool->base.clock_sources[0] =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false);
+-		pool->base.clk_src_count = 1;
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false);
++		pool->base.clock_sources[1] =
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false);
++		pool->base.clk_src_count = 2;
+ 	}
+ 
+ 	if (pool->base.dp_clock_source == NULL) {
+@@ -1365,21 +1368,24 @@ static bool dce64_construct(
+ 
+ 	if (bp->fw_info_valid && bp->fw_info.external_clock_source_frequency_for_dp != 0) {
+ 		pool->base.dp_clock_source =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true);
+ 
++		/* DCE 6.0 and 6.4: PLL0 can only be used with DP. Don't initialize it here. */
+ 		pool->base.clock_sources[0] =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[0], false);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false);
+ 		pool->base.clock_sources[1] =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[1], false);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false);
+ 		pool->base.clk_src_count = 2;
+ 
+ 	} else {
+ 		pool->base.dp_clock_source =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[0], true);
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], true);
+ 
+ 		pool->base.clock_sources[0] =
+-				dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[1], false);
+-		pool->base.clk_src_count = 1;
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false);
++		pool->base.clock_sources[1] =
++			dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false);
++		pool->base.clk_src_count = 2;
+ 	}
+ 
+ 	if (pool->base.dp_clock_source == NULL) {
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+index e58e7b93810be7..6b7db8ec9a53b2 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c
+@@ -260,6 +260,9 @@ enum mod_hdcp_status mod_hdcp_hdcp1_create_session(struct mod_hdcp *hdcp)
+ 		return MOD_HDCP_STATUS_FAILURE;
+ 	}
+ 
++	if (!display)
++		return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND;
++
+ 	hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf;
+ 
+ 	mutex_lock(&psp->hdcp_context.mutex);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index d79a1d94661a54..26b8e232f85825 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -76,6 +76,9 @@ static void smu_power_profile_mode_get(struct smu_context *smu,
+ 				       enum PP_SMC_POWER_PROFILE profile_mode);
+ static void smu_power_profile_mode_put(struct smu_context *smu,
+ 				       enum PP_SMC_POWER_PROFILE profile_mode);
++static int smu_od_edit_dpm_table(void *handle,
++				 enum PP_OD_DPM_TABLE_COMMAND type,
++				 long *input, uint32_t size);
+ 
+ static int smu_sys_get_pp_feature_mask(void *handle,
+ 				       char *buf)
+@@ -2144,6 +2147,7 @@ static int smu_resume(struct amdgpu_ip_block *ip_block)
+ 	int ret;
+ 	struct amdgpu_device *adev = ip_block->adev;
+ 	struct smu_context *smu = adev->powerplay.pp_handle;
++	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+ 
+ 	if (amdgpu_sriov_multi_vf_mode(adev))
+ 		return 0;
+@@ -2175,6 +2179,18 @@ static int smu_resume(struct amdgpu_ip_block *ip_block)
+ 
+ 	adev->pm.dpm_enabled = true;
+ 
++	if (smu->current_power_limit) {
++		ret = smu_set_power_limit(smu, smu->current_power_limit);
++		if (ret && ret != -EOPNOTSUPP)
++			return ret;
++	}
++
++	if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
++		ret = smu_od_edit_dpm_table(smu, PP_OD_COMMIT_DPM_TABLE, NULL, 0);
++		if (ret)
++			return ret;
++	}
++
+ 	dev_info(adev->dev, "SMU is resumed successfully!\n");
+ 
+ 	return 0;
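
The resume hunk re-applies user policy that SMU firmware loses across suspend. A hedged sketch of the pattern, with hypothetical stand-ins (smu_ctx, set_power_limit, commit_od_table) for the driver types:

    #include <errno.h>
    #include <stdio.h>

    enum dpm_level { DPM_AUTO, DPM_MANUAL };

    struct smu_ctx {
        unsigned int current_power_limit; /* 0 = user never changed it */
        enum dpm_level dpm_level;
    };

    static int set_power_limit(struct smu_ctx *smu, unsigned int limit)
    {
        printf("restoring power limit: %u\n", limit);
        return 0;
    }

    static int commit_od_table(struct smu_ctx *smu)
    {
        printf("re-committing overdrive DPM table\n");
        return 0;
    }

    /* re-apply user settings that the firmware forgets across suspend */
    static int restore_user_pm_state(struct smu_ctx *smu)
    {
        int ret;

        if (smu->current_power_limit) {
            ret = set_power_limit(smu, smu->current_power_limit);
            if (ret && ret != -EOPNOTSUPP) /* unsupported is not fatal */
                return ret;
        }

        if (smu->dpm_level == DPM_MANUAL)
            return commit_od_table(smu);

        return 0;
    }

    int main(void)
    {
        struct smu_ctx smu = { .current_power_limit = 250,
                               .dpm_level = DPM_MANUAL };
        return restore_user_pm_state(&smu);
    }
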
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+index 76c1adda83dbc8..f9b0938c57ea71 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+@@ -62,13 +62,14 @@ const int decoded_link_width[8] = {0, 1, 2, 4, 8, 12, 16, 32};
+ 
+ MODULE_FIRMWARE("amdgpu/smu_14_0_2.bin");
+ MODULE_FIRMWARE("amdgpu/smu_14_0_3.bin");
++MODULE_FIRMWARE("amdgpu/smu_14_0_3_kicker.bin");
+ 
+ #define ENABLE_IMU_ARG_GFXOFF_ENABLE		1
+ 
+ int smu_v14_0_init_microcode(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+-	char ucode_prefix[15];
++	char ucode_prefix[30];
+ 	int err = 0;
+ 	const struct smc_firmware_header_v1_0 *hdr;
+ 	const struct common_firmware_header *header;
+@@ -79,8 +80,12 @@ int smu_v14_0_init_microcode(struct smu_context *smu)
+ 		return 0;
+ 
+ 	amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix));
+-	err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
+-				   "amdgpu/%s.bin", ucode_prefix);
++	if (amdgpu_is_kicker_fw(adev))
++		err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s_kicker.bin", ucode_prefix);
++	else
++		err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
++					   "amdgpu/%s.bin", ucode_prefix);
+ 	if (err)
+ 		goto out;
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 82c2db972491d4..7c8d19cfa324e3 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1689,9 +1689,11 @@ static int smu_v14_0_2_get_power_limit(struct smu_context *smu,
+ 				       uint32_t *min_power_limit)
+ {
+ 	struct smu_table_context *table_context = &smu->smu_table;
++	struct smu_14_0_2_powerplay_table *powerplay_table =
++		table_context->power_play_table;
+ 	PPTable_t *pptable = table_context->driver_pptable;
+ 	CustomSkuTable_t *skutable = &pptable->CustomSkuTable;
+-	uint32_t power_limit;
++	uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0;
+ 	uint32_t msg_limit = pptable->SkuTable.MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC];
+ 
+ 	if (smu_v14_0_get_current_power_limit(smu, &power_limit))
+@@ -1704,11 +1706,29 @@ static int smu_v14_0_2_get_power_limit(struct smu_context *smu,
+ 	if (default_power_limit)
+ 		*default_power_limit = power_limit;
+ 
+-	if (max_power_limit)
+-		*max_power_limit = msg_limit;
++	if (powerplay_table) {
++		if (smu->od_enabled &&
++		    smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
++			od_percent_upper = pptable->SkuTable.OverDriveLimitsBasicMax.Ppt;
++			od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt;
++		} else if (smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) {
++			od_percent_upper = 0;
++			od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt;
++		}
++	}
++
++	dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n",
++					od_percent_upper, od_percent_lower, power_limit);
++
++	if (max_power_limit) {
++		*max_power_limit = msg_limit * (100 + od_percent_upper);
++		*max_power_limit /= 100;
++	}
+ 
+-	if (min_power_limit)
+-		*min_power_limit = 0;
++	if (min_power_limit) {
++		*min_power_limit = power_limit * (100 + od_percent_lower);
++		*min_power_limit /= 100;
++	}
+ 
+ 	return 0;
+ }
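
Worked numbers for the new overdrive scaling (all values invented; the real percentages come from the SKU table's OverDriveLimitsBasic fields):

    #include <stdio.h>

    int main(void)
    {
        unsigned int msg_limit = 220;   /* firmware message limit, W */
        unsigned int power_limit = 200; /* current/default limit, W */
        int od_percent_upper = 10;      /* OverDriveLimitsBasicMax.Ppt */
        int od_percent_lower = -30;     /* OverDriveLimitsBasicMin.Ppt */

        unsigned int max = msg_limit * (100 + od_percent_upper) / 100;
        unsigned int min = power_limit * (100 + od_percent_lower) / 100;

        printf("min = %u W, max = %u W\n", min, max); /* 140 W .. 242 W */
        return 0;
    }
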
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index ea78c6c8ca7a63..dc622c78db9d4a 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -725,7 +725,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ 	 * monitor doesn't power down exactly after the throw away read.
+ 	 */
+ 	if (!aux->is_remote) {
+-		ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET);
++		ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
+index d36e6cacc575e3..73b5a80771cc9f 100644
+--- a/drivers/gpu/drm/drm_format_helper.c
++++ b/drivers/gpu/drm/drm_format_helper.c
+@@ -857,11 +857,33 @@ static void drm_fb_xrgb8888_to_abgr8888_line(void *dbuf, const void *sbuf, unsig
+ 	drm_fb_xfrm_line_32to32(dbuf, sbuf, pixels, drm_pixel_xrgb8888_to_abgr8888);
+ }
+ 
+-static void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned int *dst_pitch,
+-					const struct iosys_map *src,
+-					const struct drm_framebuffer *fb,
+-					const struct drm_rect *clip,
+-					struct drm_format_conv_state *state)
++/**
++ * drm_fb_xrgb8888_to_abgr8888 - Convert XRGB8888 to ABGR8888 clip buffer
++ * @dst: Array of ABGR8888 destination buffers
++ * @dst_pitch: Array of numbers of bytes between the start of two consecutive scanlines
++ *             within @dst; can be NULL if scanlines are stored next to each other.
++ * @src: Array of XRGB8888 source buffers
++ * @fb: DRM framebuffer
++ * @clip: Clip rectangle area to copy
++ * @state: Transform and conversion state
++ *
++ * This function copies parts of a framebuffer to display memory and converts the
++ * color format during the process. The parameters @dst, @dst_pitch and @src refer
++ * to arrays. Each array must have at least as many entries as there are planes in
++ * @fb's format. Each entry stores the value for the format's respective color plane
++ * at the same index.
++ *
++ * This function does not apply clipping on @dst (i.e. the destination is at the
++ * top-left corner).
++ *
++ * Drivers can use this function for ABGR8888 devices that don't support XRGB8888
++ * natively. It sets an opaque alpha channel as part of the conversion.
++ */
++void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned int *dst_pitch,
++				 const struct iosys_map *src,
++				 const struct drm_framebuffer *fb,
++				 const struct drm_rect *clip,
++				 struct drm_format_conv_state *state)
+ {
+ 	static const u8 dst_pixsize[DRM_FORMAT_MAX_PLANES] = {
+ 		4,
+@@ -870,17 +892,40 @@ static void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned in
+ 	drm_fb_xfrm(dst, dst_pitch, dst_pixsize, src, fb, clip, false, state,
+ 		    drm_fb_xrgb8888_to_abgr8888_line);
+ }
++EXPORT_SYMBOL(drm_fb_xrgb8888_to_abgr8888);
+ 
+ static void drm_fb_xrgb8888_to_xbgr8888_line(void *dbuf, const void *sbuf, unsigned int pixels)
+ {
+ 	drm_fb_xfrm_line_32to32(dbuf, sbuf, pixels, drm_pixel_xrgb8888_to_xbgr8888);
+ }
+ 
+-static void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned int *dst_pitch,
+-					const struct iosys_map *src,
+-					const struct drm_framebuffer *fb,
+-					const struct drm_rect *clip,
+-					struct drm_format_conv_state *state)
++/**
++ * drm_fb_xrgb8888_to_xbgr8888 - Convert XRGB8888 to XBGR8888 clip buffer
++ * @dst: Array of XBGR8888 destination buffers
++ * @dst_pitch: Array of numbers of bytes between the start of two consecutive scanlines
++ *             within @dst; can be NULL if scanlines are stored next to each other.
++ * @src: Array of XRGB8888 source buffers
++ * @fb: DRM framebuffer
++ * @clip: Clip rectangle area to copy
++ * @state: Transform and conversion state
++ *
++ * This function copies parts of a framebuffer to display memory and converts the
++ * color format during the process. The parameters @dst, @dst_pitch and @src refer
++ * to arrays. Each array must have at least as many entries as there are planes in
++ * @fb's format. Each entry stores the value for the format's respective color plane
++ * at the same index.
++ *
++ * This function does not apply clipping on @dst (i.e. the destination is at the
++ * top-left corner).
++ *
++ * Drivers can use this function for XBGR8888 devices that don't support XRGB8888
++ * natively.
++ */
++void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned int *dst_pitch,
++				 const struct iosys_map *src,
++				 const struct drm_framebuffer *fb,
++				 const struct drm_rect *clip,
++				 struct drm_format_conv_state *state)
+ {
+ 	static const u8 dst_pixsize[DRM_FORMAT_MAX_PLANES] = {
+ 		4,
+@@ -889,6 +934,49 @@ static void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned in
+ 	drm_fb_xfrm(dst, dst_pitch, dst_pixsize, src, fb, clip, false, state,
+ 		    drm_fb_xrgb8888_to_xbgr8888_line);
+ }
++EXPORT_SYMBOL(drm_fb_xrgb8888_to_xbgr8888);
++
++static void drm_fb_xrgb8888_to_bgrx8888_line(void *dbuf, const void *sbuf, unsigned int pixels)
++{
++	drm_fb_xfrm_line_32to32(dbuf, sbuf, pixels, drm_pixel_xrgb8888_to_bgrx8888);
++}
++
++/**
++ * drm_fb_xrgb8888_to_bgrx8888 - Convert XRGB8888 to BGRX8888 clip buffer
++ * @dst: Array of BGRX8888 destination buffers
++ * @dst_pitch: Array of numbers of bytes between the start of two consecutive scanlines
++ *             within @dst; can be NULL if scanlines are stored next to each other.
++ * @src: Array of XRGB8888 source buffers
++ * @fb: DRM framebuffer
++ * @clip: Clip rectangle area to copy
++ * @state: Transform and conversion state
++ *
++ * This function copies parts of a framebuffer to display memory and converts the
++ * color format during the process. The parameters @dst, @dst_pitch and @src refer
++ * to arrays. Each array must have at least as many entries as there are planes in
++ * @fb's format. Each entry stores the value for the format's respective color plane
++ * at the same index.
++ *
++ * This function does not apply clipping on @dst (i.e. the destination is at the
++ * top-left corner).
++ *
++ * Drivers can use this function for BGRX8888 devices that don't support XRGB8888
++ * natively.
++ */
++void drm_fb_xrgb8888_to_bgrx8888(struct iosys_map *dst, const unsigned int *dst_pitch,
++				 const struct iosys_map *src,
++				 const struct drm_framebuffer *fb,
++				 const struct drm_rect *clip,
++				 struct drm_format_conv_state *state)
++{
++	static const u8 dst_pixsize[DRM_FORMAT_MAX_PLANES] = {
++		4,
++	};
++
++	drm_fb_xfrm(dst, dst_pitch, dst_pixsize, src, fb, clip, false, state,
++		    drm_fb_xrgb8888_to_bgrx8888_line);
++}
++EXPORT_SYMBOL(drm_fb_xrgb8888_to_bgrx8888);
+ 
+ static void drm_fb_xrgb8888_to_xrgb2101010_line(void *dbuf, const void *sbuf, unsigned int pixels)
+ {
+diff --git a/drivers/gpu/drm/drm_format_internal.h b/drivers/gpu/drm/drm_format_internal.h
+index 9f857bfa368d10..0aa458b8a3e0af 100644
+--- a/drivers/gpu/drm/drm_format_internal.h
++++ b/drivers/gpu/drm/drm_format_internal.h
+@@ -111,6 +111,14 @@ static inline u32 drm_pixel_xrgb8888_to_xbgr8888(u32 pix)
+ 	       ((pix & 0x000000ff) << 16);
+ }
+ 
++static inline u32 drm_pixel_xrgb8888_to_bgrx8888(u32 pix)
++{
++	return ((pix & 0xff000000) >> 24) | /* also copy filler bits */
++	       ((pix & 0x00ff0000) >> 8) |
++	       ((pix & 0x0000ff00) << 8) |
++	       ((pix & 0x000000ff) << 24);
++}
++
+ static inline u32 drm_pixel_xrgb8888_to_abgr8888(u32 pix)
+ {
+ 	return GENMASK(31, 24) | /* fill alpha bits */
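
drm_pixel_xrgb8888_to_bgrx8888() above is a pure byte shuffle. A quick self-check of the masks and shifts on one sample pixel:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t xrgb8888_to_bgrx8888(uint32_t pix)
    {
        return ((pix & 0xff000000u) >> 24) | /* filler byte -> bits 0..7 */
               ((pix & 0x00ff0000u) >> 8)  | /* R -> bits 8..15 */
               ((pix & 0x0000ff00u) << 8)  | /* G -> bits 16..23 */
               ((pix & 0x000000ffu) << 24);  /* B -> bits 24..31 */
    }

    int main(void)
    {
        /* X=0x00 R=0x11 G=0x22 B=0x33 becomes B G R X */
        assert(xrgb8888_to_bgrx8888(0x00112233u) == 0x33221100u);
        printf("0x%08x\n", xrgb8888_to_bgrx8888(0x00112233u));
        return 0;
    }
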
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index 18492daae4b345..b9cc64458437d2 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -381,6 +381,26 @@ struct DecFifo {
+     len: usize,
+ }
+ 
++// On the arm32 architecture, dividing a `u64` by a constant generates a call
++// to `__aeabi_uldivmod`, which is not present in the kernel.
++// So use the multiply-by-inverse method for this architecture.
++fn div10(val: u64) -> u64 {
++    if cfg!(target_arch = "arm") {
++        let val_h = val >> 32;
++        let val_l = val & 0xFFFFFFFF;
++        let b_h: u64 = 0x66666666;
++        let b_l: u64 = 0x66666667;
++
++        let tmp1 = val_h * b_l + ((val_l * b_l) >> 32);
++        let tmp2 = val_l * b_h + (tmp1 & 0xffffffff);
++        let tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32);
++
++        tmp3 >> 2
++    } else {
++        val / 10
++    }
++}
++
+ impl DecFifo {
+     fn push(&mut self, data: u64, len: usize) {
+         let mut chunk = data;
+@@ -389,7 +409,7 @@ fn push(&mut self, data: u64, len: usize) {
+         }
+         for i in 0..len {
+             self.decimals[i] = (chunk % 10) as u8;
+-            chunk /= 10;
++            chunk = div10(chunk);
+         }
+         self.len += len;
+     }
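
div10() replaces the 64-bit division with 32x32->64 multiplies: the multiplier 0x6666666666666667 is ceil(2^66 / 10), taking the high 64 bits of the product divides by 2^64, and the final >> 2 completes the division by 2^66. The same limb arithmetic transcribed to C and sanity-checked against native division on a few samples:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t div10(uint64_t val)
    {
        uint64_t val_h = val >> 32, val_l = val & 0xffffffffu;
        uint64_t b_h = 0x66666666u, b_l = 0x66666667u; /* b = ceil(2^66/10) */

        uint64_t tmp1 = val_h * b_l + ((val_l * b_l) >> 32);
        uint64_t tmp2 = val_l * b_h + (tmp1 & 0xffffffffu);
        uint64_t tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32);

        return tmp3 >> 2; /* tmp3 is the high 64 bits of val * b */
    }

    int main(void)
    {
        const uint64_t samples[] = { 0, 9, 10, 999, 123456789, UINT64_MAX };

        for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
            assert(div10(samples[i]) == samples[i] / 10);

        puts("div10() matches val / 10 on the samples");
        return 0;
    }
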
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c b/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c
+index 74f7832ea53eae..0726cb5b736e60 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c
+@@ -325,6 +325,17 @@ static int hibmc_dp_link_downgrade_training_eq(struct hibmc_dp_dev *dp)
+ 	return hibmc_dp_link_reduce_rate(dp);
+ }
+ 
++static void hibmc_dp_update_caps(struct hibmc_dp_dev *dp)
++{
++	dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE];
++	if (dp->link.cap.link_rate > DP_LINK_BW_8_1 || !dp->link.cap.link_rate)
++		dp->link.cap.link_rate = DP_LINK_BW_8_1;
++
++	dp->link.cap.lanes = dp->dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK;
++	if (dp->link.cap.lanes > HIBMC_DP_LANE_NUM_MAX)
++		dp->link.cap.lanes = HIBMC_DP_LANE_NUM_MAX;
++}
++
+ int hibmc_dp_link_training(struct hibmc_dp_dev *dp)
+ {
+ 	struct hibmc_dp_link *link = &dp->link;
+@@ -334,8 +345,7 @@ int hibmc_dp_link_training(struct hibmc_dp_dev *dp)
+ 	if (ret)
+ 		drm_err(dp->dev, "dp aux read dpcd failed, ret: %d\n", ret);
+ 
+-	dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE];
+-	dp->link.cap.lanes = 0x2;
++	hibmc_dp_update_caps(dp);
+ 
+ 	ret = hibmc_dp_get_serdes_rate_cfg(dp);
+ 	if (ret < 0)
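
hibmc_dp_update_caps() trusts the sink's DPCD only within the host's limits. A compact sketch of that clamping (HIBMC_DP_LANE_NUM_MAX and the sample DPCD bytes are assumed here):

    #include <stdio.h>

    #define DP_LINK_BW_8_1        0x1e
    #define HIBMC_DP_LANE_NUM_MAX 2   /* assumed host limit */

    int main(void)
    {
        unsigned char dpcd_max_rate = 0x14; /* from DP_MAX_LINK_RATE */
        unsigned char dpcd_lanes = 4;       /* from DP_MAX_LANE_COUNT */

        unsigned int rate = dpcd_max_rate;
        if (rate > DP_LINK_BW_8_1 || !rate)
            rate = DP_LINK_BW_8_1;          /* reject junk values */

        unsigned int lanes = dpcd_lanes;
        if (lanes > HIBMC_DP_LANE_NUM_MAX)
            lanes = HIBMC_DP_LANE_NUM_MAX;  /* never exceed the host */

        printf("link_rate = 0x%02x, lanes = %u\n", rate, lanes);
        return 0;
    }
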
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+index 768b97f9e74afe..289304500ab097 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
+@@ -32,7 +32,7 @@
+ 
+ DEFINE_DRM_GEM_FOPS(hibmc_fops);
+ 
+-static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "vblank", "hpd" };
++static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "hibmc-vblank", "hibmc-hpd" };
+ 
+ static irqreturn_t hibmc_interrupt(int irq, void *arg)
+ {
+@@ -115,6 +115,8 @@ static const struct drm_mode_config_funcs hibmc_mode_funcs = {
+ static int hibmc_kms_init(struct hibmc_drm_private *priv)
+ {
+ 	struct drm_device *dev = &priv->dev;
++	struct drm_encoder *encoder;
++	u32 clone_mask = 0;
+ 	int ret;
+ 
+ 	ret = drmm_mode_config_init(dev);
+@@ -154,6 +156,12 @@ static int hibmc_kms_init(struct hibmc_drm_private *priv)
+ 		return ret;
+ 	}
+ 
++	drm_for_each_encoder(encoder, dev)
++		clone_mask |= drm_encoder_mask(encoder);
++
++	drm_for_each_encoder(encoder, dev)
++		encoder->possible_clones = clone_mask;
++
+ 	return 0;
+ }
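
The clone-mask loop in hibmc_kms_init() gives every encoder a possible_clones bitmask covering all encoders, itself included. A tiny illustration of the mask construction (the encoder count is invented):

    #include <stdio.h>

    int main(void)
    {
        int num_encoders = 2;      /* e.g. VGA + DP on this device */
        unsigned int clone_mask = 0;

        for (int i = 0; i < num_encoders; i++)
            clone_mask |= 1u << i; /* what drm_encoder_mask() returns */

        for (int i = 0; i < num_encoders; i++)
            printf("encoder %d possible_clones = 0x%x\n", i, clone_mask);
        return 0;
    }
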
+ 
+@@ -277,7 +285,6 @@ static void hibmc_unload(struct drm_device *dev)
+ static int hibmc_msi_init(struct drm_device *dev)
+ {
+ 	struct pci_dev *pdev = to_pci_dev(dev->dev);
+-	char name[32] = {0};
+ 	int valid_irq_num;
+ 	int irq;
+ 	int ret;
+@@ -292,9 +299,6 @@ static int hibmc_msi_init(struct drm_device *dev)
+ 	valid_irq_num = ret;
+ 
+ 	for (int i = 0; i < valid_irq_num; i++) {
+-		snprintf(name, ARRAY_SIZE(name) - 1, "%s-%s-%s",
+-			 dev->driver->name, pci_name(pdev), g_irqs_names_map[i]);
+-
+ 		irq = pci_irq_vector(pdev, i);
+ 
+ 		if (i)
+@@ -302,10 +306,10 @@ static int hibmc_msi_init(struct drm_device *dev)
+ 			ret = devm_request_threaded_irq(&pdev->dev, irq,
+ 							hibmc_dp_interrupt,
+ 							hibmc_dp_hpd_isr,
+-							IRQF_SHARED, name, dev);
++							IRQF_SHARED, g_irqs_names_map[i], dev);
+ 		else
+ 			ret = devm_request_irq(&pdev->dev, irq, hibmc_interrupt,
+-					       IRQF_SHARED, name, dev);
++					       IRQF_SHARED, g_irqs_names_map[i], dev);
+ 		if (ret) {
+ 			drm_err(dev, "install irq failed: %d\n", ret);
+ 			return ret;
+@@ -323,13 +327,13 @@ static int hibmc_load(struct drm_device *dev)
+ 
+ 	ret = hibmc_hw_init(priv);
+ 	if (ret)
+-		goto err;
++		return ret;
+ 
+ 	ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0),
+ 				    pci_resource_len(pdev, 0));
+ 	if (ret) {
+ 		drm_err(dev, "Error initializing VRAM MM; %d\n", ret);
+-		goto err;
++		return ret;
+ 	}
+ 
+ 	ret = hibmc_kms_init(priv);
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
+index 274feabe7df007..ca8502e2760c12 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
+@@ -69,6 +69,7 @@ int hibmc_de_init(struct hibmc_drm_private *priv);
+ int hibmc_vdac_init(struct hibmc_drm_private *priv);
+ 
+ int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *connector);
++void hibmc_ddc_del(struct hibmc_vdac *vdac);
+ 
+ int hibmc_dp_init(struct hibmc_drm_private *priv);
+ 
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
+index 99b3b77b5445f6..44860011855eb6 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
+@@ -95,3 +95,8 @@ int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *vdac)
+ 
+ 	return i2c_bit_add_bus(&vdac->adapter);
+ }
++
++void hibmc_ddc_del(struct hibmc_vdac *vdac)
++{
++	i2c_del_adapter(&vdac->adapter);
++}
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
+index e8a527ede85438..841e81f47b6862 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
+@@ -53,7 +53,7 @@ static void hibmc_connector_destroy(struct drm_connector *connector)
+ {
+ 	struct hibmc_vdac *vdac = to_hibmc_vdac(connector);
+ 
+-	i2c_del_adapter(&vdac->adapter);
++	hibmc_ddc_del(vdac);
+ 	drm_connector_cleanup(connector);
+ }
+ 
+@@ -110,7 +110,7 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
+ 	ret = drmm_encoder_init(dev, encoder, NULL, DRM_MODE_ENCODER_DAC, NULL);
+ 	if (ret) {
+ 		drm_err(dev, "failed to init encoder: %d\n", ret);
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	drm_encoder_helper_add(encoder, &hibmc_encoder_helper_funcs);
+@@ -121,7 +121,7 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
+ 					  &vdac->adapter);
+ 	if (ret) {
+ 		drm_err(dev, "failed to init connector: %d\n", ret);
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	drm_connector_helper_add(connector, &hibmc_connector_helper_funcs);
+@@ -131,4 +131,9 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
+ 	connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
+ 
+ 	return 0;
++
++err:
++	hibmc_ddc_del(vdac);
++
++	return ret;
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_display_irq.c b/drivers/gpu/drm/i915/display/intel_display_irq.c
+index 3e73832e5e8132..9e785318a6e251 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_irq.c
++++ b/drivers/gpu/drm/i915/display/intel_display_irq.c
+@@ -1492,10 +1492,14 @@ u32 gen11_gu_misc_irq_ack(struct intel_display *display, const u32 master_ctl)
+ 	if (!(master_ctl & GEN11_GU_MISC_IRQ))
+ 		return 0;
+ 
++	intel_display_rpm_assert_block(display);
++
+ 	iir = intel_de_read(display, GEN11_GU_MISC_IIR);
+ 	if (likely(iir))
+ 		intel_de_write(display, GEN11_GU_MISC_IIR, iir);
+ 
++	intel_display_rpm_assert_unblock(display);
++
+ 	return iir;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
+index c1014e74791faa..2df5145fd9286a 100644
+--- a/drivers/gpu/drm/i915/display/intel_tc.c
++++ b/drivers/gpu/drm/i915/display/intel_tc.c
+@@ -22,6 +22,7 @@
+ #include "intel_modeset_lock.h"
+ #include "intel_tc.h"
+ 
++#define DP_PIN_ASSIGNMENT_NONE	0x0
+ #define DP_PIN_ASSIGNMENT_C	0x3
+ #define DP_PIN_ASSIGNMENT_D	0x4
+ #define DP_PIN_ASSIGNMENT_E	0x5
+@@ -65,6 +66,7 @@ struct intel_tc_port {
+ 	enum tc_port_mode init_mode;
+ 	enum phy_fia phy_fia;
+ 	u8 phy_fia_idx;
++	u8 max_lane_count;
+ };
+ 
+ static enum intel_display_power_domain
+@@ -306,6 +308,8 @@ static int lnl_tc_port_get_max_lane_count(struct intel_digital_port *dig_port)
+ 		REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val);
+ 
+ 	switch (pin_assignment) {
++	case DP_PIN_ASSIGNMENT_NONE:
++		return 0;
+ 	default:
+ 		MISSING_CASE(pin_assignment);
+ 		fallthrough;
+@@ -364,12 +368,12 @@ static int intel_tc_port_get_max_lane_count(struct intel_digital_port *dig_port)
+ 	}
+ }
+ 
+-int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)
++static int get_max_lane_count(struct intel_tc_port *tc)
+ {
+-	struct intel_display *display = to_intel_display(dig_port);
+-	struct intel_tc_port *tc = to_tc_port(dig_port);
++	struct intel_display *display = to_intel_display(tc->dig_port);
++	struct intel_digital_port *dig_port = tc->dig_port;
+ 
+-	if (!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT)
++	if (tc->mode != TC_PORT_DP_ALT)
+ 		return 4;
+ 
+ 	assert_tc_cold_blocked(tc);
+@@ -383,6 +387,25 @@ int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)
+ 	return intel_tc_port_get_max_lane_count(dig_port);
+ }
+ 
++static void read_pin_configuration(struct intel_tc_port *tc)
++{
++	tc->max_lane_count = get_max_lane_count(tc);
++}
++
++int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)
++{
++	struct intel_display *display = to_intel_display(dig_port);
++	struct intel_tc_port *tc = to_tc_port(dig_port);
++
++	if (!intel_encoder_is_tc(&dig_port->base))
++		return 4;
++
++	if (DISPLAY_VER(display) < 20)
++		return get_max_lane_count(tc);
++
++	return tc->max_lane_count;
++}
++
+ void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,
+ 				      int required_lanes)
+ {
+@@ -595,9 +618,12 @@ static void icl_tc_phy_get_hw_state(struct intel_tc_port *tc)
+ 	tc_cold_wref = __tc_cold_block(tc, &domain);
+ 
+ 	tc->mode = tc_phy_get_current_mode(tc);
+-	if (tc->mode != TC_PORT_DISCONNECTED)
++	if (tc->mode != TC_PORT_DISCONNECTED) {
+ 		tc->lock_wakeref = tc_cold_block(tc);
+ 
++		read_pin_configuration(tc);
++	}
++
+ 	__tc_cold_unblock(tc, domain, tc_cold_wref);
+ }
+ 
+@@ -655,8 +681,11 @@ static bool icl_tc_phy_connect(struct intel_tc_port *tc,
+ 
+ 	tc->lock_wakeref = tc_cold_block(tc);
+ 
+-	if (tc->mode == TC_PORT_TBT_ALT)
++	if (tc->mode == TC_PORT_TBT_ALT) {
++		read_pin_configuration(tc);
++
+ 		return true;
++	}
+ 
+ 	if ((!tc_phy_is_ready(tc) ||
+ 	     !icl_tc_phy_take_ownership(tc, true)) &&
+@@ -667,6 +696,7 @@ static bool icl_tc_phy_connect(struct intel_tc_port *tc,
+ 		goto out_unblock_tc_cold;
+ 	}
+ 
++	read_pin_configuration(tc);
+ 
+ 	if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))
+ 		goto out_release_phy;
+@@ -857,9 +887,12 @@ static void adlp_tc_phy_get_hw_state(struct intel_tc_port *tc)
+ 	port_wakeref = intel_display_power_get(display, port_power_domain);
+ 
+ 	tc->mode = tc_phy_get_current_mode(tc);
+-	if (tc->mode != TC_PORT_DISCONNECTED)
++	if (tc->mode != TC_PORT_DISCONNECTED) {
+ 		tc->lock_wakeref = tc_cold_block(tc);
+ 
++		read_pin_configuration(tc);
++	}
++
+ 	intel_display_power_put(display, port_power_domain, port_wakeref);
+ }
+ 
+@@ -872,6 +905,9 @@ static bool adlp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
+ 
+ 	if (tc->mode == TC_PORT_TBT_ALT) {
+ 		tc->lock_wakeref = tc_cold_block(tc);
++
++		read_pin_configuration(tc);
++
+ 		return true;
+ 	}
+ 
+@@ -893,6 +929,8 @@ static bool adlp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
+ 
+ 	tc->lock_wakeref = tc_cold_block(tc);
+ 
++	read_pin_configuration(tc);
++
+ 	if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))
+ 		goto out_unblock_tc_cold;
+ 
+@@ -1123,9 +1161,18 @@ static void xelpdp_tc_phy_get_hw_state(struct intel_tc_port *tc)
+ 	tc_cold_wref = __tc_cold_block(tc, &domain);
+ 
+ 	tc->mode = tc_phy_get_current_mode(tc);
+-	if (tc->mode != TC_PORT_DISCONNECTED)
++	if (tc->mode != TC_PORT_DISCONNECTED) {
+ 		tc->lock_wakeref = tc_cold_block(tc);
+ 
++		read_pin_configuration(tc);
++		/*
++		 * Set a valid lane count value for a DP-alt sink which got
++		 * disconnected. The driver can only disable the output on this PHY.
++		 */
++		if (tc->max_lane_count == 0)
++			tc->max_lane_count = 4;
++	}
++
+ 	drm_WARN_ON(display->drm,
+ 		    (tc->mode == TC_PORT_DP_ALT || tc->mode == TC_PORT_LEGACY) &&
+ 		    !xelpdp_tc_phy_tcss_power_is_enabled(tc));
+@@ -1137,14 +1184,19 @@ static bool xelpdp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
+ {
+ 	tc->lock_wakeref = tc_cold_block(tc);
+ 
+-	if (tc->mode == TC_PORT_TBT_ALT)
++	if (tc->mode == TC_PORT_TBT_ALT) {
++		read_pin_configuration(tc);
++
+ 		return true;
++	}
+ 
+ 	if (!xelpdp_tc_phy_enable_tcss_power(tc, true))
+ 		goto out_unblock_tccold;
+ 
+ 	xelpdp_tc_phy_take_ownership(tc, true);
+ 
++	read_pin_configuration(tc);
++
+ 	if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))
+ 		goto out_release_phy;
+ 
+@@ -1225,14 +1277,19 @@ static void tc_phy_get_hw_state(struct intel_tc_port *tc)
+ 	tc->phy_ops->get_hw_state(tc);
+ }
+ 
+-static bool tc_phy_is_ready_and_owned(struct intel_tc_port *tc,
+-				      bool phy_is_ready, bool phy_is_owned)
++/* Is the PHY owned by the display, i.e. is it in legacy or DP-alt mode? */
++static bool tc_phy_owned_by_display(struct intel_tc_port *tc,
++				    bool phy_is_ready, bool phy_is_owned)
+ {
+ 	struct intel_display *display = to_intel_display(tc->dig_port);
+ 
+-	drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready);
++	if (DISPLAY_VER(display) < 20) {
++		drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready);
+ 
+-	return phy_is_ready && phy_is_owned;
++		return phy_is_ready && phy_is_owned;
++	} else {
++		return phy_is_owned;
++	}
+ }
+ 
+ static bool tc_phy_is_connected(struct intel_tc_port *tc,
+@@ -1243,7 +1300,7 @@ static bool tc_phy_is_connected(struct intel_tc_port *tc,
+ 	bool phy_is_owned = tc_phy_is_owned(tc);
+ 	bool is_connected;
+ 
+-	if (tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned))
++	if (tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned))
+ 		is_connected = port_pll_type == ICL_PORT_DPLL_MG_PHY;
+ 	else
+ 		is_connected = port_pll_type == ICL_PORT_DPLL_DEFAULT;
+@@ -1351,7 +1408,7 @@ tc_phy_get_current_mode(struct intel_tc_port *tc)
+ 	phy_is_ready = tc_phy_is_ready(tc);
+ 	phy_is_owned = tc_phy_is_owned(tc);
+ 
+-	if (!tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) {
++	if (!tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) {
+ 		mode = get_tc_mode_in_phy_not_owned_state(tc, live_mode);
+ 	} else {
+ 		drm_WARN_ON(display->drm, live_mode == TC_PORT_TBT_ALT);
+@@ -1440,11 +1497,11 @@ static void intel_tc_port_reset_mode(struct intel_tc_port *tc,
+ 	intel_display_power_flush_work(display);
+ 	if (!intel_tc_cold_requires_aux_pw(dig_port)) {
+ 		enum intel_display_power_domain aux_domain;
+-		bool aux_powered;
+ 
+ 		aux_domain = intel_aux_power_domain(dig_port);
+-		aux_powered = intel_display_power_is_enabled(display, aux_domain);
+-		drm_WARN_ON(display->drm, aux_powered);
++		if (intel_display_power_is_enabled(display, aux_domain))
++			drm_dbg_kms(display->drm, "Port %s: AUX unexpectedly powered\n",
++				    tc->port_name);
+ 	}
+ 
+ 	tc_phy_disconnect(tc);
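
The read_pin_configuration() calls above cache the DP-alt lane count while the PHY is still connected, so newer platforms (DISPLAY_VER() >= 20) can answer later queries without touching possibly disconnected hardware. A hedged sketch of the cache-at-connect pattern with invented types:

    #include <stdio.h>

    struct tc_port {
        int display_ver;
        int connected;
        int max_lane_count; /* cached at connect time */
    };

    static int read_lane_count_from_hw(const struct tc_port *tc)
    {
        return tc->connected ? 2 : 0; /* pretend pin-assignment readout */
    }

    static void tc_connect(struct tc_port *tc)
    {
        tc->connected = 1;
        tc->max_lane_count = read_lane_count_from_hw(tc); /* cache it */
    }

    static int tc_max_lane_count(struct tc_port *tc)
    {
        if (tc->display_ver < 20)
            return read_lane_count_from_hw(tc); /* live readout */
        return tc->max_lane_count;              /* cached value */
    }

    int main(void)
    {
        struct tc_port tc = { .display_ver = 20 };

        tc_connect(&tc);
        tc.connected = 0; /* sink yanked: cached value stays usable */
        printf("lanes = %d\n", tc_max_lane_count(&tc));
        return 0;
    }
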
+diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+index b37e400f74e536..5a95f06900b5d3 100644
+--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
+@@ -634,6 +634,8 @@ static void cfl_ctx_workarounds_init(struct intel_engine_cs *engine,
+ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
+ 				     struct i915_wa_list *wal)
+ {
++	struct drm_i915_private *i915 = engine->i915;
++
+ 	/* Wa_1406697149 (WaDisableBankHangMode:icl) */
+ 	wa_write(wal, GEN8_L3CNTLREG, GEN8_ERRDETBCTRL);
+ 
+@@ -669,6 +671,15 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,
+ 
+ 	/* Wa_1406306137:icl,ehl */
+ 	wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU);
++
++	if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) {
++		/*
++		 * Disable Repacking for Compression (masked R/W access)
++		 * before rendering compressed surfaces for display.
++		 */
++		wa_masked_en(wal, CACHE_MODE_0_GEN7,
++			     DISABLE_REPACKING_FOR_COMPRESSION);
++	}
+ }
+ 
+ /*
+@@ -2306,15 +2317,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
+ 			     GEN8_RC_SEMA_IDLE_MSG_DISABLE);
+ 	}
+ 
+-	if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) {
+-		/*
+-		 * "Disable Repacking for Compression (masked R/W access)
+-		 *  before rendering compressed surfaces for display."
+-		 */
+-		wa_masked_en(wal, CACHE_MODE_0_GEN7,
+-			     DISABLE_REPACKING_FOR_COMPRESSION);
+-	}
+-
+ 	if (GRAPHICS_VER(i915) == 11) {
+ 		/* This is not a Wa. Enable for better image quality */
+ 		wa_masked_en(wal,
+diff --git a/drivers/gpu/drm/nouveau/nvif/vmm.c b/drivers/gpu/drm/nouveau/nvif/vmm.c
+index 99296f03371ae0..07c1ebc2a94141 100644
+--- a/drivers/gpu/drm/nouveau/nvif/vmm.c
++++ b/drivers/gpu/drm/nouveau/nvif/vmm.c
+@@ -219,7 +219,8 @@ nvif_vmm_ctor(struct nvif_mmu *mmu, const char *name, s32 oclass,
+ 	case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break;
+ 	default:
+ 		WARN_ON(1);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto done;
+ 	}
+ 
+ 	memcpy(args->data, argv, argc);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
+index 9d06ff722fea7c..0dc4782df8c0c1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
+@@ -325,7 +325,7 @@ r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
+ 
+ 		rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries);
+ 		if (IS_ERR_OR_NULL(rpc)) {
+-			kfree(buf);
++			kvfree(buf);
+ 			return rpc;
+ 		}
+ 
+@@ -334,7 +334,7 @@ r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries)
+ 
+ 		rpc = r535_gsp_msgq_recv_one_elem(gsp, &info);
+ 		if (IS_ERR_OR_NULL(rpc)) {
+-			kfree(buf);
++			kvfree(buf);
+ 			return rpc;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/nova/file.rs b/drivers/gpu/drm/nova/file.rs
+index 7e59a34b830da3..4fe62cf98a23e9 100644
+--- a/drivers/gpu/drm/nova/file.rs
++++ b/drivers/gpu/drm/nova/file.rs
+@@ -39,7 +39,8 @@ pub(crate) fn get_param(
+             _ => return Err(EINVAL),
+         };
+ 
+-        getparam.set_value(value);
++        #[allow(clippy::useless_conversion)]
++        getparam.set_value(value.into());
+ 
+         Ok(0)
+     }
+diff --git a/drivers/gpu/drm/tests/drm_format_helper_test.c b/drivers/gpu/drm/tests/drm_format_helper_test.c
+index 35cd3405d0450c..e17643c408bf4b 100644
+--- a/drivers/gpu/drm/tests/drm_format_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_format_helper_test.c
+@@ -748,14 +748,9 @@ static void drm_test_fb_xrgb8888_to_rgb565(struct kunit *test)
+ 	buf = dst.vaddr;
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGB565, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_rgb565(&dst, dst_pitch, &src, &fb, &params->clip,
++				  &fmtcnv_state, false);
+ 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -795,14 +790,8 @@ static void drm_test_fb_xrgb8888_to_xrgb1555(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB1555, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_xrgb1555(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -842,14 +831,8 @@ static void drm_test_fb_xrgb8888_to_argb1555(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB1555, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_argb1555(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -889,14 +872,8 @@ static void drm_test_fb_xrgb8888_to_rgba5551(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGBA5551, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_rgba5551(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -939,12 +916,7 @@ static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGB888, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
++	drm_fb_xrgb8888_to_rgb888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -985,12 +957,8 @@ static void drm_test_fb_xrgb8888_to_bgr888(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, &result->dst_pitch, DRM_FORMAT_BGR888, &src, &fb, &params->clip,
++	drm_fb_xrgb8888_to_bgr888(&dst, &result->dst_pitch, &src, &fb, &params->clip,
+ 				  &fmtcnv_state);
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1030,14 +998,8 @@ static void drm_test_fb_xrgb8888_to_argb8888(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB8888, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_argb8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1071,18 +1033,14 @@ static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
+ 		NULL : &result->dst_pitch;
+ 
+ 	drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+-	buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32));
++	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ 
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB2101010, &src, &fb,
+-				  &params->clip, &fmtcnv_state);
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
++	drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
++	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1122,14 +1080,8 @@ static void drm_test_fb_xrgb8888_to_argb2101010(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB2101010, &src, &fb,
+-				  &params->clip, &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_argb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1202,23 +1154,15 @@ static void drm_test_fb_swab(struct kunit *test)
+ 	buf = dst.vaddr; /* restore original value of buf */
+ 	memset(buf, 0, dst_size);
+ 
+-	int blit_result;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN,
+-				  &src, &fb, &params->clip, &fmtcnv_state);
++	drm_fb_swab(&dst, dst_pitch, &src, &fb, &params->clip, false, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ 
+ 	buf = dst.vaddr;
+ 	memset(buf, 0, dst_size);
+ 
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_BGRX8888, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
++	drm_fb_xrgb8888_to_bgrx8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ 
+ 	buf = dst.vaddr;
+@@ -1229,11 +1173,8 @@ static void drm_test_fb_swab(struct kunit *test)
+ 	mock_format.format |= DRM_FORMAT_BIG_ENDIAN;
+ 	fb.format = &mock_format;
+ 
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB8888, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
++	drm_fb_swab(&dst, dst_pitch, &src, &fb, &params->clip, false, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1266,14 +1207,8 @@ static void drm_test_fb_xrgb8888_to_abgr8888(struct kunit *test)
+ 	const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ?
+ 		NULL : &result->dst_pitch;
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ABGR8888, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_abgr8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1306,14 +1241,8 @@ static void drm_test_fb_xrgb8888_to_xbgr8888(struct kunit *test)
+ 	const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ?
+ 		NULL : &result->dst_pitch;
+ 
+-	int blit_result = 0;
+-
+-	blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XBGR8888, &src, &fb, &params->clip,
+-				  &fmtcnv_state);
+-
++	drm_fb_xrgb8888_to_xbgr8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state);
+ 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
+-
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
+ }
+ 
+@@ -1910,12 +1839,8 @@ static void drm_test_fb_memcpy(struct kunit *test)
+ 		memset(buf[i], 0, dst_size[i]);
+ 	}
+ 
+-	int blit_result;
+-
+-	blit_result = drm_fb_blit(dst, dst_pitches, params->format, src, &fb, &params->clip,
+-				  &fmtcnv_state);
++	drm_fb_memcpy(dst, dst_pitches, src, &fb, &params->clip);
+ 
+-	KUNIT_EXPECT_FALSE(test, blit_result);
+ 	for (size_t i = 0; i < fb.format->num_planes; i++) {
+ 		expected[i] = cpubuf_to_le32(test, params->expected[i], TEST_BUF_SIZE);
+ 		KUNIT_EXPECT_MEMEQ_MSG(test, buf[i], expected[i], dst_size[i],
+diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
+index 99a91355842ec3..785d2917f6ed20 100644
+--- a/drivers/gpu/drm/xe/Kconfig
++++ b/drivers/gpu/drm/xe/Kconfig
+@@ -5,6 +5,7 @@ config DRM_XE
+ 	depends on KUNIT || !KUNIT
+ 	depends on INTEL_VSEC || !INTEL_VSEC
+ 	depends on X86_PLATFORM_DEVICES || !(X86 && ACPI)
++	depends on PAGE_SIZE_4KB || COMPILE_TEST || BROKEN
+ 	select INTERVAL_TREE
+ 	# we need shmfs for the swappable backing store, and in particular
+ 	# the shmem_readpage() which depends upon tmpfs
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 1e3fd139dfcbca..0a481190f3e6ae 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -408,7 +408,7 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile)
+ 
+ 	/* Special layout, prepared below.. */
+ 	vm = xe_vm_create(xe, XE_VM_FLAG_MIGRATION |
+-			  XE_VM_FLAG_SET_TILE_ID(tile));
++			  XE_VM_FLAG_SET_TILE_ID(tile), NULL);
+ 	if (IS_ERR(vm))
+ 		return ERR_CAST(vm);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
+index d92ec0f515b034..ca95f2a4d4ef5d 100644
+--- a/drivers/gpu/drm/xe/xe_pxp_submit.c
++++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
+@@ -101,7 +101,7 @@ static int allocate_gsc_client_resources(struct xe_gt *gt,
+ 	xe_assert(xe, hwe);
+ 
+ 	/* PXP instructions must be issued from PPGTT */
+-	vm = xe_vm_create(xe, XE_VM_FLAG_GSC);
++	vm = xe_vm_create(xe, XE_VM_FLAG_GSC, NULL);
+ 	if (IS_ERR(vm))
+ 		return PTR_ERR(vm);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c
+index 86d47aaf035892..5e761f543ac30f 100644
+--- a/drivers/gpu/drm/xe/xe_shrinker.c
++++ b/drivers/gpu/drm/xe/xe_shrinker.c
+@@ -53,10 +53,10 @@ xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgea
+ 	write_unlock(&shrinker->lock);
+ }
+ 
+-static s64 xe_shrinker_walk(struct xe_device *xe,
+-			    struct ttm_operation_ctx *ctx,
+-			    const struct xe_bo_shrink_flags flags,
+-			    unsigned long to_scan, unsigned long *scanned)
++static s64 __xe_shrinker_walk(struct xe_device *xe,
++			      struct ttm_operation_ctx *ctx,
++			      const struct xe_bo_shrink_flags flags,
++			      unsigned long to_scan, unsigned long *scanned)
+ {
+ 	unsigned int mem_type;
+ 	s64 freed = 0, lret;
+@@ -86,6 +86,48 @@ static s64 xe_shrinker_walk(struct xe_device *xe,
+ 	return freed;
+ }
+ 
++/*
++ * Try shrinking idle objects without writeback first; if that is not
++ * sufficient, also try non-idle objects, and finally, if that is still not
++ * sufficient, add writeback. This avoids stalls and explicit writebacks
++ * under light or moderate memory pressure.
++ */
++static s64 xe_shrinker_walk(struct xe_device *xe,
++			    struct ttm_operation_ctx *ctx,
++			    const struct xe_bo_shrink_flags flags,
++			    unsigned long to_scan, unsigned long *scanned)
++{
++	bool no_wait_gpu = true;
++	struct xe_bo_shrink_flags save_flags = flags;
++	s64 lret, freed;
++
++	swap(no_wait_gpu, ctx->no_wait_gpu);
++	save_flags.writeback = false;
++	lret = __xe_shrinker_walk(xe, ctx, save_flags, to_scan, scanned);
++	swap(no_wait_gpu, ctx->no_wait_gpu);
++	if (lret < 0 || *scanned >= to_scan)
++		return lret;
++
++	freed = lret;
++	if (!ctx->no_wait_gpu) {
++		lret = __xe_shrinker_walk(xe, ctx, save_flags, to_scan, scanned);
++		if (lret < 0)
++			return lret;
++		freed += lret;
++		if (*scanned >= to_scan)
++			return freed;
++	}
++
++	if (flags.writeback) {
++		lret = __xe_shrinker_walk(xe, ctx, flags, to_scan, scanned);
++		if (lret < 0)
++			return lret;
++		freed += lret;
++	}
++
++	return freed;
++}
++
+ static unsigned long
+ xe_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
+ {
+@@ -192,6 +234,7 @@ static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_con
+ 		runtime_pm = xe_shrinker_runtime_pm_get(shrinker, true, 0, can_backup);
+ 
+ 	shrink_flags.purge = false;
++
+ 	lret = xe_shrinker_walk(shrinker->xe, &ctx, shrink_flags,
+ 				nr_to_scan, &nr_scanned);
+ 	if (lret >= 0)
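
The new xe_shrinker_walk() escalates in three passes, as its comment describes. A standalone sketch of that policy, with an invented shrink_pass() standing in for __xe_shrinker_walk() and made-up freed-page counts:

    #include <stdbool.h>
    #include <stdio.h>

    static long shrink_pass(bool wait_gpu, bool writeback,
                            unsigned long to_scan, unsigned long *scanned)
    {
        long freed = wait_gpu ? 64 : 32; /* invented numbers */
        if (writeback)
            freed += 128;
        *scanned += (unsigned long)freed;
        return freed;
    }

    static long shrink(unsigned long to_scan, bool allow_writeback)
    {
        unsigned long scanned = 0;
        long freed, lret;

        /* pass 1: cheap, no GPU waits, no writeback */
        lret = shrink_pass(false, false, to_scan, &scanned);
        if (lret < 0 || scanned >= to_scan)
            return lret;
        freed = lret;

        /* pass 2: allow waiting for the GPU */
        lret = shrink_pass(true, false, to_scan, &scanned);
        if (lret < 0)
            return lret;
        freed += lret;
        if (scanned >= to_scan)
            return freed;

        /* pass 3: last resort, add writeback */
        if (allow_writeback) {
            lret = shrink_pass(true, true, to_scan, &scanned);
            if (lret < 0)
                return lret;
            freed += lret;
        }
        return freed;
    }

    int main(void)
    {
        printf("freed %ld pages\n", shrink(200, true));
        return 0;
    }
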
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 8615777469293b..3b11b1d52bee9b 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -1612,7 +1612,7 @@ static void xe_vm_free_scratch(struct xe_vm *vm)
+ 	}
+ }
+ 
+-struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
++struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
+ {
+ 	struct drm_gem_object *vm_resv_obj;
+ 	struct xe_vm *vm;
+@@ -1633,9 +1633,10 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	vm->xe = xe;
+ 
+ 	vm->size = 1ull << xe->info.va_bits;
+-
+ 	vm->flags = flags;
+ 
++	if (xef)
++		vm->xef = xe_file_get(xef);
+ 	/**
+ 	 * GSC VMs are kernel-owned, only used for PXP ops and can sometimes be
+ 	 * manipulated under the PXP mutex. However, the PXP mutex can be taken
+@@ -1766,6 +1767,20 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	if (number_tiles > 1)
+ 		vm->composite_fence_ctx = dma_fence_context_alloc(1);
+ 
++	if (xef && xe->info.has_asid) {
++		u32 asid;
++
++		down_write(&xe->usm.lock);
++		err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,
++				      XA_LIMIT(1, XE_MAX_ASID - 1),
++				      &xe->usm.next_asid, GFP_KERNEL);
++		up_write(&xe->usm.lock);
++		if (err < 0)
++			goto err_unlock_close;
++
++		vm->usm.asid = asid;
++	}
++
+ 	trace_xe_vm_create(vm);
+ 
+ 	return vm;
+@@ -1786,6 +1801,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	for_each_tile(tile, xe, id)
+ 		xe_range_fence_tree_fini(&vm->rftree[id]);
+ 	ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move);
++	if (vm->xef)
++		xe_file_put(vm->xef);
+ 	kfree(vm);
+ 	if (flags & XE_VM_FLAG_LR_MODE)
+ 		xe_pm_runtime_put(xe);
+@@ -2031,9 +2048,8 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
+ 	struct xe_device *xe = to_xe_device(dev);
+ 	struct xe_file *xef = to_xe_file(file);
+ 	struct drm_xe_vm_create *args = data;
+-	struct xe_tile *tile;
+ 	struct xe_vm *vm;
+-	u32 id, asid;
++	u32 id;
+ 	int err;
+ 	u32 flags = 0;
+ 
+@@ -2069,29 +2085,10 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
+ 	if (args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE)
+ 		flags |= XE_VM_FLAG_FAULT_MODE;
+ 
+-	vm = xe_vm_create(xe, flags);
++	vm = xe_vm_create(xe, flags, xef);
+ 	if (IS_ERR(vm))
+ 		return PTR_ERR(vm);
+ 
+-	if (xe->info.has_asid) {
+-		down_write(&xe->usm.lock);
+-		err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,
+-				      XA_LIMIT(1, XE_MAX_ASID - 1),
+-				      &xe->usm.next_asid, GFP_KERNEL);
+-		up_write(&xe->usm.lock);
+-		if (err < 0)
+-			goto err_close_and_put;
+-
+-		vm->usm.asid = asid;
+-	}
+-
+-	vm->xef = xe_file_get(xef);
+-
+-	/* Record BO memory for VM pagetable created against client */
+-	for_each_tile(tile, xe, id)
+-		if (vm->pt_root[id])
+-			xe_drm_client_add_bo(vm->xef->client, vm->pt_root[id]->bo);
+-
+ #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_MEM)
+ 	/* Warning: Security issue - never enable by default */
+ 	args->reserved[0] = xe_bo_main_addr(vm->pt_root[0]->bo, XE_PAGE_SIZE);
+@@ -3203,6 +3200,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
+ free_bind_ops:
+ 	if (args->num_binds > 1)
+ 		kvfree(*bind_ops);
++	*bind_ops = NULL;
+ 	return err;
+ }
+ 
+@@ -3308,7 +3306,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 	struct xe_exec_queue *q = NULL;
+ 	u32 num_syncs, num_ufence = 0;
+ 	struct xe_sync_entry *syncs = NULL;
+-	struct drm_xe_vm_bind_op *bind_ops;
++	struct drm_xe_vm_bind_op *bind_ops = NULL;
+ 	struct xe_vma_ops vops;
+ 	struct dma_fence *fence;
+ 	int err;
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index 494af6bdc646b4..0158ec0ae3b230 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -26,7 +26,7 @@ struct xe_sync_entry;
+ struct xe_svm_range;
+ struct drm_exec;
+ 
+-struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags);
++struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef);
+ 
+ struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id);
+ int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node);
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 0f9af82cebec2c..105b9f9dbec3d7 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -64,7 +64,7 @@ static ssize_t pwm_auto_point_temp_show(struct device *dev,
+ 		return ret;
+ 
+ 	ret = regs[0] | regs[1] << 8;
+-	return sprintf(buf, "%d\n", ret * 10);
++	return sprintf(buf, "%d\n", ret * 100);
+ }
+ 
+ static ssize_t pwm_auto_point_temp_store(struct device *dev,
+@@ -99,7 +99,7 @@ static ssize_t pwm_auto_point_pwm_show(struct device *dev,
+ {
+ 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ 
+-	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)));
++	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)) / 100);
+ }
+ 
+ static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm_auto_point_pwm, 0);
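
Both gsc-hwmon fixes above are unit conversions: hwmon sysfs reports temperatures in millidegrees Celsius and PWM duty on a 0..255 scale. Worked numbers (the 0.1 degree register granularity is an assumption here):

    #include <stdio.h>

    int main(void)
    {
        int reg = 425; /* temp register, assumed tenths of a degree C */
        printf("temp = %d millidegrees C\n", reg * 100); /* 42500 */

        int index = 1;                 /* auto point #2 */
        int percent = 50 + index * 10; /* 60 percent duty */
        printf("pwm = %d\n", 255 * percent / 100); /* 153, fits 0..255 */
        return 0;
    }
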
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 13889f52b6f78a..ff2289b52c84cc 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -155,9 +155,9 @@ static const struct geni_i2c_clk_fld geni_i2c_clk_map_19p2mhz[] = {
+ 
+ /* source_clock = 32 MHz */
+ static const struct geni_i2c_clk_fld geni_i2c_clk_map_32mhz[] = {
+-	{ I2C_MAX_STANDARD_MODE_FREQ, 8, 14, 18, 40 },
+-	{ I2C_MAX_FAST_MODE_FREQ, 4,  3, 11, 20 },
+-	{ I2C_MAX_FAST_MODE_PLUS_FREQ, 2, 3,  6, 15 },
++	{ I2C_MAX_STANDARD_MODE_FREQ, 8, 14, 18, 38 },
++	{ I2C_MAX_FAST_MODE_FREQ, 4,  3, 9, 19 },
++	{ I2C_MAX_FAST_MODE_PLUS_FREQ, 2, 3, 5, 15 },
+ 	{}
+ };
+ 
+diff --git a/drivers/i2c/busses/i2c-rtl9300.c b/drivers/i2c/busses/i2c-rtl9300.c
+index e064e8a4a1f082..cfafe089102aa2 100644
+--- a/drivers/i2c/busses/i2c-rtl9300.c
++++ b/drivers/i2c/busses/i2c-rtl9300.c
+@@ -143,10 +143,10 @@ static int rtl9300_i2c_write(struct rtl9300_i2c *i2c, u8 *buf, int len)
+ 		return -EIO;
+ 
+ 	for (i = 0; i < len; i++) {
+-		if (i % 4 == 0)
+-			vals[i/4] = 0;
+-		vals[i/4] <<= 8;
+-		vals[i/4] |= buf[i];
++		unsigned int shift = (i % 4) * 8;
++		unsigned int reg = i / 4;
++
++		vals[reg] |= buf[i] << shift;
+ 	}
+ 
+ 	return regmap_bulk_write(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_DATA_WORD0,
+@@ -175,7 +175,7 @@ static int rtl9300_i2c_execute_xfer(struct rtl9300_i2c *i2c, char read_write,
+ 		return ret;
+ 
+ 	ret = regmap_read_poll_timeout(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_CTRL1,
+-				       val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 2000);
++				       val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 100000);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -281,15 +281,19 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+ 		ret = rtl9300_i2c_reg_addr_set(i2c, command, 1);
+ 		if (ret)
+ 			goto out_unlock;
+-		ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0]);
++		if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX) {
++			ret = -EINVAL;
++			goto out_unlock;
++		}
++		ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0] + 1);
+ 		if (ret)
+ 			goto out_unlock;
+ 		if (read_write == I2C_SMBUS_WRITE) {
+-			ret = rtl9300_i2c_write(i2c, &data->block[1], data->block[0]);
++			ret = rtl9300_i2c_write(i2c, &data->block[0], data->block[0] + 1);
+ 			if (ret)
+ 				goto out_unlock;
+ 		}
+-		len = data->block[0];
++		len = data->block[0] + 1;
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/iio/accel/sca3300.c b/drivers/iio/accel/sca3300.c
+index 67416a406e2f43..d5e9fa58d0ba3a 100644
+--- a/drivers/iio/accel/sca3300.c
++++ b/drivers/iio/accel/sca3300.c
+@@ -479,7 +479,7 @@ static irqreturn_t sca3300_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct sca3300_data *data = iio_priv(indio_dev);
+ 	int bit, ret, val, i = 0;
+-	IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX);
++	IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX) = { };
+ 
+ 	iio_for_each_active_channel(indio_dev, bit) {
+ 		ret = sca3300_read_reg(data, indio_dev->channels[bit].address, &val);
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index ea3ba139739281..a632c74c1fc510 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -1257,7 +1257,7 @@ config RN5T618_ADC
+ 
+ config ROHM_BD79124
+ 	tristate "Rohm BD79124 ADC driver"
+-	depends on I2C
++	depends on I2C && GPIOLIB
+ 	select REGMAP_I2C
+ 	select IIO_ADC_HELPER
+ 	help
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 92596f15e79737..bdd2b2b5bac1ae 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -855,7 +855,7 @@ enum {
+ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan_spec *chan)
+ {
+ 	struct device *dev = &st->sd.spi->dev;
+-	struct ad7124_channel *ch = &st->channels[chan->channel];
++	struct ad7124_channel *ch = &st->channels[chan->address];
+ 	int ret;
+ 
+ 	if (ch->syscalib_mode == AD7124_SYSCALIB_ZERO_SCALE) {
+@@ -871,8 +871,8 @@ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		dev_dbg(dev, "offset for channel %d after zero-scale calibration: 0x%x\n",
+-			chan->channel, ch->cfg.calibration_offset);
++		dev_dbg(dev, "offset for channel %lu after zero-scale calibration: 0x%x\n",
++			chan->address, ch->cfg.calibration_offset);
+ 	} else {
+ 		ch->cfg.calibration_gain = st->gain_default;
+ 
+@@ -886,8 +886,8 @@ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		dev_dbg(dev, "gain for channel %d after full-scale calibration: 0x%x\n",
+-			chan->channel, ch->cfg.calibration_gain);
++		dev_dbg(dev, "gain for channel %lu after full-scale calibration: 0x%x\n",
++			chan->address, ch->cfg.calibration_gain);
+ 	}
+ 
+ 	return 0;
+@@ -930,7 +930,7 @@ static int ad7124_set_syscalib_mode(struct iio_dev *indio_dev,
+ {
+ 	struct ad7124_state *st = iio_priv(indio_dev);
+ 
+-	st->channels[chan->channel].syscalib_mode = mode;
++	st->channels[chan->address].syscalib_mode = mode;
+ 
+ 	return 0;
+ }
+@@ -940,7 +940,7 @@ static int ad7124_get_syscalib_mode(struct iio_dev *indio_dev,
+ {
+ 	struct ad7124_state *st = iio_priv(indio_dev);
+ 
+-	return st->channels[chan->channel].syscalib_mode;
++	return st->channels[chan->address].syscalib_mode;
+ }
+ 
+ static const struct iio_enum ad7124_syscalib_mode_enum = {
+diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c
+index b3e6bd2a55d717..5b57c29b6b34c9 100644
+--- a/drivers/iio/adc/ad7173.c
++++ b/drivers/iio/adc/ad7173.c
+@@ -200,7 +200,7 @@ struct ad7173_channel_config {
+ 	/*
+ 	 * Following fields are used to compare equality. If you
+ 	 * make adaptations in it, you most likely also have to adapt
+-	 * ad7173_find_live_config(), too.
++	 * ad7173_is_setup_equal(), too.
+ 	 */
+ 	struct_group(config_props,
+ 		bool bipolar;
+@@ -319,7 +319,7 @@ static int ad7173_set_syscalib_mode(struct iio_dev *indio_dev,
+ {
+ 	struct ad7173_state *st = iio_priv(indio_dev);
+ 
+-	st->channels[chan->channel].syscalib_mode = mode;
++	st->channels[chan->address].syscalib_mode = mode;
+ 
+ 	return 0;
+ }
+@@ -329,7 +329,7 @@ static int ad7173_get_syscalib_mode(struct iio_dev *indio_dev,
+ {
+ 	struct ad7173_state *st = iio_priv(indio_dev);
+ 
+-	return st->channels[chan->channel].syscalib_mode;
++	return st->channels[chan->address].syscalib_mode;
+ }
+ 
+ static ssize_t ad7173_write_syscalib(struct iio_dev *indio_dev,
+@@ -348,7 +348,7 @@ static ssize_t ad7173_write_syscalib(struct iio_dev *indio_dev,
+ 	if (!iio_device_claim_direct(indio_dev))
+ 		return -EBUSY;
+ 
+-	mode = st->channels[chan->channel].syscalib_mode;
++	mode = st->channels[chan->address].syscalib_mode;
+ 	if (sys_calib) {
+ 		if (mode == AD7173_SYSCALIB_ZERO_SCALE)
+ 			ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_SYS_ZERO,
+@@ -392,13 +392,12 @@ static int ad7173_calibrate_all(struct ad7173_state *st, struct iio_dev *indio_d
+ 		if (indio_dev->channels[i].type != IIO_VOLTAGE)
+ 			continue;
+ 
+-		ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_ZERO, st->channels[i].ain);
++		ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_ZERO, i);
+ 		if (ret < 0)
+ 			return ret;
+ 
+ 		if (st->info->has_internal_fs_calibration) {
+-			ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_FULL,
+-					      st->channels[i].ain);
++			ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_FULL, i);
+ 			if (ret < 0)
+ 				return ret;
+ 		}
+@@ -563,12 +562,19 @@ static void ad7173_reset_usage_cnts(struct ad7173_state *st)
+ 	st->config_usage_counter = 0;
+ }
+ 
+-static struct ad7173_channel_config *
+-ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
++/**
++ * ad7173_is_setup_equal - Compare two channel setups
++ * @cfg1: First channel configuration
++ * @cfg2: Second channel configuration
++ *
++ * Compares all configuration options that affect the registers connected to
++ * SETUP_SEL, namely CONFIGx, FILTERx, GAINx and OFFSETx.
++ *
++ * Returns: true if the setups are identical, false otherwise
++ */
++static bool ad7173_is_setup_equal(const struct ad7173_channel_config *cfg1,
++				  const struct ad7173_channel_config *cfg2)
+ {
+-	struct ad7173_channel_config *cfg_aux;
+-	int i;
+-
+ 	/*
+ 	 * This is just to make sure that the comparison is adapted after
+ 	 * struct ad7173_channel_config was changed.
+@@ -581,14 +587,22 @@ ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *c
+ 				     u8 ref_sel;
+ 			     }));
+ 
++	return cfg1->bipolar == cfg2->bipolar &&
++	       cfg1->input_buf == cfg2->input_buf &&
++	       cfg1->odr == cfg2->odr &&
++	       cfg1->ref_sel == cfg2->ref_sel;
++}
++
++static struct ad7173_channel_config *
++ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
++{
++	struct ad7173_channel_config *cfg_aux;
++	int i;
++
+ 	for (i = 0; i < st->num_channels; i++) {
+ 		cfg_aux = &st->channels[i].cfg;
+ 
+-		if (cfg_aux->live &&
+-		    cfg->bipolar == cfg_aux->bipolar &&
+-		    cfg->input_buf == cfg_aux->input_buf &&
+-		    cfg->odr == cfg_aux->odr &&
+-		    cfg->ref_sel == cfg_aux->ref_sel)
++		if (cfg_aux->live && ad7173_is_setup_equal(cfg, cfg_aux))
+ 			return cfg_aux;
+ 	}
+ 	return NULL;
+@@ -772,10 +786,26 @@ static const struct ad_sigma_delta_info ad7173_sigma_delta_info_8_slots = {
+ 	.num_slots = 8,
+ };
+ 
++static const struct ad_sigma_delta_info ad7173_sigma_delta_info_16_slots = {
++	.set_channel = ad7173_set_channel,
++	.append_status = ad7173_append_status,
++	.disable_all = ad7173_disable_all,
++	.disable_one = ad7173_disable_one,
++	.set_mode = ad7173_set_mode,
++	.has_registers = true,
++	.has_named_irqs = true,
++	.addr_shift = 0,
++	.read_mask = BIT(6),
++	.status_ch_mask = GENMASK(3, 0),
++	.data_reg = AD7173_REG_DATA,
++	.num_resetclks = 64,
++	.num_slots = 16,
++};
++
+ static const struct ad7173_device_info ad4111_device_info = {
+ 	.name = "ad4111",
+ 	.id = AD4111_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in_div = 8,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -797,7 +827,7 @@ static const struct ad7173_device_info ad4111_device_info = {
+ static const struct ad7173_device_info ad4112_device_info = {
+ 	.name = "ad4112",
+ 	.id = AD4112_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in_div = 8,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -818,7 +848,7 @@ static const struct ad7173_device_info ad4112_device_info = {
+ static const struct ad7173_device_info ad4113_device_info = {
+ 	.name = "ad4113",
+ 	.id = AD4113_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in_div = 8,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -837,7 +867,7 @@ static const struct ad7173_device_info ad4113_device_info = {
+ static const struct ad7173_device_info ad4114_device_info = {
+ 	.name = "ad4114",
+ 	.id = AD4114_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in_div = 16,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -856,7 +886,7 @@ static const struct ad7173_device_info ad4114_device_info = {
+ static const struct ad7173_device_info ad4115_device_info = {
+ 	.name = "ad4115",
+ 	.id = AD4115_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in_div = 16,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -875,7 +905,7 @@ static const struct ad7173_device_info ad4115_device_info = {
+ static const struct ad7173_device_info ad4116_device_info = {
+ 	.name = "ad4116",
+ 	.id = AD4116_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in_div = 11,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -894,7 +924,7 @@ static const struct ad7173_device_info ad4116_device_info = {
+ static const struct ad7173_device_info ad7172_2_device_info = {
+ 	.name = "ad7172-2",
+ 	.id = AD7172_2_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_4_slots,
+ 	.num_voltage_in = 5,
+ 	.num_channels = 4,
+ 	.num_configs = 4,
+@@ -927,7 +957,7 @@ static const struct ad7173_device_info ad7172_4_device_info = {
+ static const struct ad7173_device_info ad7173_8_device_info = {
+ 	.name = "ad7173-8",
+ 	.id = AD7173_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in = 17,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -944,7 +974,7 @@ static const struct ad7173_device_info ad7173_8_device_info = {
+ static const struct ad7173_device_info ad7175_2_device_info = {
+ 	.name = "ad7175-2",
+ 	.id = AD7175_2_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_4_slots,
+ 	.num_voltage_in = 5,
+ 	.num_channels = 4,
+ 	.num_configs = 4,
+@@ -961,7 +991,7 @@ static const struct ad7173_device_info ad7175_2_device_info = {
+ static const struct ad7173_device_info ad7175_8_device_info = {
+ 	.name = "ad7175-8",
+ 	.id = AD7175_8_ID,
+-	.sd_info = &ad7173_sigma_delta_info_8_slots,
++	.sd_info = &ad7173_sigma_delta_info_16_slots,
+ 	.num_voltage_in = 17,
+ 	.num_channels = 16,
+ 	.num_configs = 8,
+@@ -1214,7 +1244,7 @@ static int ad7173_update_scan_mode(struct iio_dev *indio_dev,
+ 				   const unsigned long *scan_mask)
+ {
+ 	struct ad7173_state *st = iio_priv(indio_dev);
+-	int i, ret;
++	int i, j, k, ret;
+ 
+ 	for (i = 0; i < indio_dev->num_channels; i++) {
+ 		if (test_bit(i, scan_mask))
+@@ -1225,6 +1255,54 @@ static int ad7173_update_scan_mode(struct iio_dev *indio_dev,
+ 			return ret;
+ 	}
+ 
++	/*
++	 * On some chips, there are more channels than setups, so if there were
++	 * more unique setups requested than the number of available slots,
++	 * ad7173_set_channel() will have written over some of the slots. We
++	 * can detect this by making sure each assigned cfg_slot matches the
++	 * requested configuration. If it doesn't, we know that the slot was
++	 * overwritten by a different channel.
++	 */
++	for_each_set_bit(i, scan_mask, indio_dev->num_channels) {
++		const struct ad7173_channel_config *cfg1, *cfg2;
++
++		cfg1 = &st->channels[i].cfg;
++
++		for_each_set_bit(j, scan_mask, indio_dev->num_channels) {
++			cfg2 = &st->channels[j].cfg;
++
++			/*
++			 * Only compare configs that are assigned to the same
++			 * SETUP_SEL slot and don't compare channel to itself.
++			 */
++			if (i == j || cfg1->cfg_slot != cfg2->cfg_slot)
++				continue;
++
++			/*
++			 * If we find two different configs trying to use the
++			 * same SETUP_SEL slot, then we know that we
++			 * have too many unique configurations requested for
++			 * the available slots and at least one was overwritten.
++			 */
++			if (!ad7173_is_setup_equal(cfg1, cfg2)) {
++				/*
++				 * At this point, there isn't a way to tell
++				 * which setups are actually programmed in the
++				 * ADC anymore, so we could read them back to
++				 * see, but it is simpler to just turn off all
++				 * of the live flags so that everything gets
++				 * reprogrammed on the next attempt to read a sample.
++				 */
++				for (k = 0; k < st->num_channels; k++)
++					st->channels[k].cfg.live = false;
++
++				dev_err(&st->sd.spi->dev,
++					"Too many unique channel configurations requested for scan\n");
++				return -EINVAL;
++			}
++		}
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1580,6 +1658,7 @@ static int ad7173_fw_parse_channel_config(struct iio_dev *indio_dev)
+ 		chan_st_priv->cfg.bipolar = false;
+ 		chan_st_priv->cfg.input_buf = st->info->has_input_buf;
+ 		chan_st_priv->cfg.ref_sel = AD7173_SETUP_REF_SEL_INT_REF;
++		chan_st_priv->cfg.odr = st->info->odr_start_value;
+ 		chan_st_priv->cfg.openwire_comp_chan = -1;
+ 		st->adc_mode |= AD7173_ADC_MODE_REF_EN;
+ 		if (st->info->data_reg_only_16bit)
+@@ -1646,7 +1725,7 @@ static int ad7173_fw_parse_channel_config(struct iio_dev *indio_dev)
+ 		chan->scan_index = chan_index;
+ 		chan->channel = ain[0];
+ 		chan_st_priv->cfg.input_buf = st->info->has_input_buf;
+-		chan_st_priv->cfg.odr = 0;
++		chan_st_priv->cfg.odr = st->info->odr_start_value;
+ 		chan_st_priv->cfg.openwire_comp_chan = -1;
+ 
+ 		chan_st_priv->cfg.bipolar = fwnode_property_read_bool(child, "bipolar");
+diff --git a/drivers/iio/adc/ad7380.c b/drivers/iio/adc/ad7380.c
+index cabf5511d11617..3773d727708089 100644
+--- a/drivers/iio/adc/ad7380.c
++++ b/drivers/iio/adc/ad7380.c
+@@ -873,6 +873,7 @@ static const struct ad7380_chip_info adaq4381_4_chip_info = {
+ 	.has_hardware_gain = true,
+ 	.available_scan_masks = ad7380_4_channel_scan_masks,
+ 	.timing_specs = &ad7380_4_timing,
++	.max_conversion_rate_hz = 4 * MEGA,
+ };
+ 
+ static const struct spi_offload_config ad7380_offload_config = {
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 6b3ef7ef403e00..debd59b3989474 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -520,7 +520,7 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev)
+ 	return ret;
+ }
+ 
+-static int ad_sd_buffer_postdisable(struct iio_dev *indio_dev)
++static int ad_sd_buffer_predisable(struct iio_dev *indio_dev)
+ {
+ 	struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev);
+ 
+@@ -644,7 +644,7 @@ static bool ad_sd_validate_scan_mask(struct iio_dev *indio_dev, const unsigned l
+ 
+ static const struct iio_buffer_setup_ops ad_sd_buffer_setup_ops = {
+ 	.postenable = &ad_sd_buffer_postenable,
+-	.postdisable = &ad_sd_buffer_postdisable,
++	.predisable = &ad_sd_buffer_predisable,
+ 	.validate_scan_mask = &ad_sd_validate_scan_mask,
+ };
+ 
+diff --git a/drivers/iio/adc/rzg2l_adc.c b/drivers/iio/adc/rzg2l_adc.c
+index 9674d48074c9a7..cadb0446bc2956 100644
+--- a/drivers/iio/adc/rzg2l_adc.c
++++ b/drivers/iio/adc/rzg2l_adc.c
+@@ -89,7 +89,6 @@ struct rzg2l_adc {
+ 	struct completion completion;
+ 	struct mutex lock;
+ 	u16 last_val[RZG2L_ADC_MAX_CHANNELS];
+-	bool was_rpm_active;
+ };
+ 
+ /**
+@@ -428,6 +427,8 @@ static int rzg2l_adc_probe(struct platform_device *pdev)
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+ 
++	platform_set_drvdata(pdev, indio_dev);
++
+ 	adc = iio_priv(indio_dev);
+ 
+ 	adc->hw_params = device_get_match_data(dev);
+@@ -460,8 +461,6 @@ static int rzg2l_adc_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	platform_set_drvdata(pdev, indio_dev);
+-
+ 	ret = rzg2l_adc_hw_init(dev, adc);
+ 	if (ret)
+ 		return dev_err_probe(&pdev->dev, ret,
+@@ -541,14 +540,9 @@ static int rzg2l_adc_suspend(struct device *dev)
+ 	};
+ 	int ret;
+ 
+-	if (pm_runtime_suspended(dev)) {
+-		adc->was_rpm_active = false;
+-	} else {
+-		ret = pm_runtime_force_suspend(dev);
+-		if (ret)
+-			return ret;
+-		adc->was_rpm_active = true;
+-	}
++	ret = pm_runtime_force_suspend(dev);
++	if (ret)
++		return ret;
+ 
+ 	ret = reset_control_bulk_assert(ARRAY_SIZE(resets), resets);
+ 	if (ret)
+@@ -557,9 +551,7 @@ static int rzg2l_adc_suspend(struct device *dev)
+ 	return 0;
+ 
+ rpm_restore:
+-	if (adc->was_rpm_active)
+-		pm_runtime_force_resume(dev);
+-
++	pm_runtime_force_resume(dev);
+ 	return ret;
+ }
+ 
+@@ -577,11 +569,9 @@ static int rzg2l_adc_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (adc->was_rpm_active) {
+-		ret = pm_runtime_force_resume(dev);
+-		if (ret)
+-			goto resets_restore;
+-	}
++	ret = pm_runtime_force_resume(dev);
++	if (ret)
++		goto resets_restore;
+ 
+ 	ret = rzg2l_adc_hw_init(dev, adc);
+ 	if (ret)
+@@ -590,10 +580,7 @@ static int rzg2l_adc_resume(struct device *dev)
+ 	return 0;
+ 
+ rpm_restore:
+-	if (adc->was_rpm_active) {
+-		pm_runtime_mark_last_busy(dev);
+-		pm_runtime_put_autosuspend(dev);
+-	}
++	pm_runtime_force_suspend(dev);
+ resets_restore:
+ 	reset_control_bulk_assert(ARRAY_SIZE(resets), resets);
+ 	return ret;
+diff --git a/drivers/iio/imu/bno055/bno055.c b/drivers/iio/imu/bno055/bno055.c
+index 597c402b98dedf..143ccc4f4331e3 100644
+--- a/drivers/iio/imu/bno055/bno055.c
++++ b/drivers/iio/imu/bno055/bno055.c
+@@ -118,6 +118,7 @@ struct bno055_sysfs_attr {
+ 	int len;
+ 	int *fusion_vals;
+ 	int *hw_xlate;
++	int hw_xlate_len;
+ 	int type;
+ };
+ 
+@@ -170,20 +171,24 @@ static int bno055_gyr_scale_vals[] = {
+ 	1000, 1877467, 2000, 1877467,
+ };
+ 
++static int bno055_gyr_scale_hw_xlate[] = {0, 1, 2, 3, 4};
+ static struct bno055_sysfs_attr bno055_gyr_scale = {
+ 	.vals = bno055_gyr_scale_vals,
+ 	.len = ARRAY_SIZE(bno055_gyr_scale_vals),
+ 	.fusion_vals = (int[]){1, 900},
+-	.hw_xlate = (int[]){4, 3, 2, 1, 0},
++	.hw_xlate = bno055_gyr_scale_hw_xlate,
++	.hw_xlate_len = ARRAY_SIZE(bno055_gyr_scale_hw_xlate),
+ 	.type = IIO_VAL_FRACTIONAL,
+ };
+ 
+ static int bno055_gyr_lpf_vals[] = {12, 23, 32, 47, 64, 116, 230, 523};
++static int bno055_gyr_lpf_hw_xlate[] = {5, 4, 7, 3, 6, 2, 1, 0};
+ static struct bno055_sysfs_attr bno055_gyr_lpf = {
+ 	.vals = bno055_gyr_lpf_vals,
+ 	.len = ARRAY_SIZE(bno055_gyr_lpf_vals),
+ 	.fusion_vals = (int[]){32},
+-	.hw_xlate = (int[]){5, 4, 7, 3, 6, 2, 1, 0},
++	.hw_xlate = bno055_gyr_lpf_hw_xlate,
++	.hw_xlate_len = ARRAY_SIZE(bno055_gyr_lpf_hw_xlate),
+ 	.type = IIO_VAL_INT,
+ };
+ 
+@@ -561,7 +566,7 @@ static int bno055_get_regmask(struct bno055_priv *priv, int *val, int *val2,
+ 
+ 	idx = (hwval & mask) >> shift;
+ 	if (attr->hw_xlate)
+-		for (i = 0; i < attr->len; i++)
++		for (i = 0; i < attr->hw_xlate_len; i++)
+ 			if (attr->hw_xlate[i] == idx) {
+ 				idx = i;
+ 				break;
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+index f893dbe6996506..55ed1ddaa8cb5d 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+@@ -164,11 +164,11 @@ struct inv_icm42600_state {
+ 	struct inv_icm42600_suspended suspended;
+ 	struct iio_dev *indio_gyro;
+ 	struct iio_dev *indio_accel;
+-	uint8_t buffer[2] __aligned(IIO_DMA_MINALIGN);
++	u8 buffer[2] __aligned(IIO_DMA_MINALIGN);
+ 	struct inv_icm42600_fifo fifo;
+ 	struct {
+-		int64_t gyro;
+-		int64_t accel;
++		s64 gyro;
++		s64 accel;
+ 	} timestamp;
+ };
+ 
+@@ -410,7 +410,7 @@ const struct iio_mount_matrix *
+ inv_icm42600_get_mount_matrix(const struct iio_dev *indio_dev,
+ 			      const struct iio_chan_spec *chan);
+ 
+-uint32_t inv_icm42600_odr_to_period(enum inv_icm42600_odr odr);
++u32 inv_icm42600_odr_to_period(enum inv_icm42600_odr odr);
+ 
+ int inv_icm42600_set_accel_conf(struct inv_icm42600_state *st,
+ 				struct inv_icm42600_sensor_conf *conf,
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+index e6cd9dcb0687d1..8a6f09e68f4934 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+@@ -177,7 +177,7 @@ static const struct iio_chan_spec inv_icm42600_accel_channels[] = {
+  */
+ struct inv_icm42600_accel_buffer {
+ 	struct inv_icm42600_fifo_sensor_data accel;
+-	int16_t temp;
++	s16 temp;
+ 	aligned_s64 timestamp;
+ };
+ 
+@@ -241,7 +241,7 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev,
+ 
+ static int inv_icm42600_accel_read_sensor(struct iio_dev *indio_dev,
+ 					  struct iio_chan_spec const *chan,
+-					  int16_t *val)
++					  s16 *val)
+ {
+ 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+ 	struct inv_icm42600_sensor_state *accel_st = iio_priv(indio_dev);
+@@ -284,7 +284,7 @@ static int inv_icm42600_accel_read_sensor(struct iio_dev *indio_dev,
+ 	if (ret)
+ 		goto exit;
+ 
+-	*val = (int16_t)be16_to_cpup(data);
++	*val = (s16)be16_to_cpup(data);
+ 	if (*val == INV_ICM42600_DATA_INVALID)
+ 		ret = -EINVAL;
+ exit:
+@@ -492,11 +492,11 @@ static int inv_icm42600_accel_read_offset(struct inv_icm42600_state *st,
+ 					  int *val, int *val2)
+ {
+ 	struct device *dev = regmap_get_device(st->map);
+-	int64_t val64;
+-	int32_t bias;
++	s64 val64;
++	s32 bias;
+ 	unsigned int reg;
+-	int16_t offset;
+-	uint8_t data[2];
++	s16 offset;
++	u8 data[2];
+ 	int ret;
+ 
+ 	if (chan->type != IIO_ACCEL)
+@@ -550,7 +550,7 @@ static int inv_icm42600_accel_read_offset(struct inv_icm42600_state *st,
+ 	 * result in micro (1000000)
+ 	 * (offset * 5 * 9.806650 * 1000000) / 10000
+ 	 */
+-	val64 = (int64_t)offset * 5LL * 9806650LL;
++	val64 = (s64)offset * 5LL * 9806650LL;
+ 	/* for rounding, add + or - divisor (10000) divided by 2 */
+ 	if (val64 >= 0)
+ 		val64 += 10000LL / 2LL;
+@@ -568,10 +568,10 @@ static int inv_icm42600_accel_write_offset(struct inv_icm42600_state *st,
+ 					   int val, int val2)
+ {
+ 	struct device *dev = regmap_get_device(st->map);
+-	int64_t val64;
+-	int32_t min, max;
++	s64 val64;
++	s32 min, max;
+ 	unsigned int reg, regval;
+-	int16_t offset;
++	s16 offset;
+ 	int ret;
+ 
+ 	if (chan->type != IIO_ACCEL)
+@@ -596,7 +596,7 @@ static int inv_icm42600_accel_write_offset(struct inv_icm42600_state *st,
+ 	      inv_icm42600_accel_calibbias[1];
+ 	max = inv_icm42600_accel_calibbias[4] * 1000000L +
+ 	      inv_icm42600_accel_calibbias[5];
+-	val64 = (int64_t)val * 1000000LL + (int64_t)val2;
++	val64 = (s64)val * 1000000LL + (s64)val2;
+ 	if (val64 < min || val64 > max)
+ 		return -EINVAL;
+ 
+@@ -671,7 +671,7 @@ static int inv_icm42600_accel_read_raw(struct iio_dev *indio_dev,
+ 				       int *val, int *val2, long mask)
+ {
+ 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+-	int16_t data;
++	s16 data;
+ 	int ret;
+ 
+ 	switch (chan->type) {
+@@ -902,7 +902,8 @@ int inv_icm42600_accel_parse_fifo(struct iio_dev *indio_dev)
+ 	const int8_t *temp;
+ 	unsigned int odr;
+ 	int64_t ts_val;
+-	struct inv_icm42600_accel_buffer buffer;
++	/* buffer is copied to userspace, zeroing it to avoid any data leak */
++	struct inv_icm42600_accel_buffer buffer = { };
+ 
+ 	/* parse all fifo packets */
+ 	for (i = 0, no = 0; i < st->fifo.count; i += size, ++no) {
+@@ -921,8 +922,6 @@ int inv_icm42600_accel_parse_fifo(struct iio_dev *indio_dev)
+ 			inv_sensors_timestamp_apply_odr(ts, st->fifo.period,
+ 							st->fifo.nb.total, no);
+ 
+-		/* buffer is copied to userspace, zeroing it to avoid any data leak */
+-		memset(&buffer, 0, sizeof(buffer));
+ 		memcpy(&buffer.accel, accel, sizeof(buffer.accel));
+ 		/* convert 8 bits FIFO temperature in high resolution format */
+ 		buffer.temp = temp ? (*temp * 64) : 0;
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
+index aae7c56481a3fa..00b9db52ca7855 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
+@@ -26,28 +26,28 @@
+ #define INV_ICM42600_FIFO_HEADER_ODR_GYRO	BIT(0)
+ 
+ struct inv_icm42600_fifo_1sensor_packet {
+-	uint8_t header;
++	u8 header;
+ 	struct inv_icm42600_fifo_sensor_data data;
+-	int8_t temp;
++	s8 temp;
+ } __packed;
+ #define INV_ICM42600_FIFO_1SENSOR_PACKET_SIZE		8
+ 
+ struct inv_icm42600_fifo_2sensors_packet {
+-	uint8_t header;
++	u8 header;
+ 	struct inv_icm42600_fifo_sensor_data accel;
+ 	struct inv_icm42600_fifo_sensor_data gyro;
+-	int8_t temp;
++	s8 temp;
+ 	__be16 timestamp;
+ } __packed;
+ #define INV_ICM42600_FIFO_2SENSORS_PACKET_SIZE		16
+ 
+ ssize_t inv_icm42600_fifo_decode_packet(const void *packet, const void **accel,
+-					const void **gyro, const int8_t **temp,
++					const void **gyro, const s8 **temp,
+ 					const void **timestamp, unsigned int *odr)
+ {
+ 	const struct inv_icm42600_fifo_1sensor_packet *pack1 = packet;
+ 	const struct inv_icm42600_fifo_2sensors_packet *pack2 = packet;
+-	uint8_t header = *((const uint8_t *)packet);
++	u8 header = *((const u8 *)packet);
+ 
+ 	/* FIFO empty */
+ 	if (header & INV_ICM42600_FIFO_HEADER_MSG) {
+@@ -100,7 +100,7 @@ ssize_t inv_icm42600_fifo_decode_packet(const void *packet, const void **accel,
+ 
+ void inv_icm42600_buffer_update_fifo_period(struct inv_icm42600_state *st)
+ {
+-	uint32_t period_gyro, period_accel, period;
++	u32 period_gyro, period_accel, period;
+ 
+ 	if (st->fifo.en & INV_ICM42600_SENSOR_GYRO)
+ 		period_gyro = inv_icm42600_odr_to_period(st->conf.gyro.odr);
+@@ -204,8 +204,8 @@ int inv_icm42600_buffer_update_watermark(struct inv_icm42600_state *st)
+ {
+ 	size_t packet_size, wm_size;
+ 	unsigned int wm_gyro, wm_accel, watermark;
+-	uint32_t period_gyro, period_accel, period;
+-	uint32_t latency_gyro, latency_accel, latency;
++	u32 period_gyro, period_accel, period;
++	u32 latency_gyro, latency_accel, latency;
+ 	bool restore;
+ 	__le16 raw_wm;
+ 	int ret;
+@@ -459,7 +459,7 @@ int inv_icm42600_buffer_fifo_read(struct inv_icm42600_state *st,
+ 	__be16 *raw_fifo_count;
+ 	ssize_t i, size;
+ 	const void *accel, *gyro, *timestamp;
+-	const int8_t *temp;
++	const s8 *temp;
+ 	unsigned int odr;
+ 	int ret;
+ 
+@@ -550,7 +550,7 @@ int inv_icm42600_buffer_hwfifo_flush(struct inv_icm42600_state *st,
+ 	struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro);
+ 	struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel);
+ 	struct inv_sensors_timestamp *ts;
+-	int64_t gyro_ts, accel_ts;
++	s64 gyro_ts, accel_ts;
+ 	int ret;
+ 
+ 	gyro_ts = iio_get_time_ns(st->indio_gyro);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
+index f6c85daf42b00b..ffca4da1e24936 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
+@@ -28,7 +28,7 @@ struct inv_icm42600_state;
+ struct inv_icm42600_fifo {
+ 	unsigned int on;
+ 	unsigned int en;
+-	uint32_t period;
++	u32 period;
+ 	struct {
+ 		unsigned int gyro;
+ 		unsigned int accel;
+@@ -41,7 +41,7 @@ struct inv_icm42600_fifo {
+ 		size_t accel;
+ 		size_t total;
+ 	} nb;
+-	uint8_t data[2080] __aligned(IIO_DMA_MINALIGN);
++	u8 data[2080] __aligned(IIO_DMA_MINALIGN);
+ };
+ 
+ /* FIFO data packet */
+@@ -52,7 +52,7 @@ struct inv_icm42600_fifo_sensor_data {
+ } __packed;
+ #define INV_ICM42600_FIFO_DATA_INVALID		-32768
+ 
+-static inline int16_t inv_icm42600_fifo_get_sensor_data(__be16 d)
++static inline s16 inv_icm42600_fifo_get_sensor_data(__be16 d)
+ {
+ 	return be16_to_cpu(d);
+ }
+@@ -60,7 +60,7 @@ static inline int16_t inv_icm42600_fifo_get_sensor_data(__be16 d)
+ static inline bool
+ inv_icm42600_fifo_is_data_valid(const struct inv_icm42600_fifo_sensor_data *s)
+ {
+-	int16_t x, y, z;
++	s16 x, y, z;
+ 
+ 	x = inv_icm42600_fifo_get_sensor_data(s->x);
+ 	y = inv_icm42600_fifo_get_sensor_data(s->y);
+@@ -75,7 +75,7 @@ inv_icm42600_fifo_is_data_valid(const struct inv_icm42600_fifo_sensor_data *s)
+ }
+ 
+ ssize_t inv_icm42600_fifo_decode_packet(const void *packet, const void **accel,
+-					const void **gyro, const int8_t **temp,
++					const void **gyro, const s8 **temp,
+ 					const void **timestamp, unsigned int *odr);
+ 
+ extern const struct iio_buffer_setup_ops inv_icm42600_buffer_ops;
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+index 63d46619ebfaa1..0bf696ba35ed6a 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+@@ -103,7 +103,7 @@ const struct regmap_config inv_icm42600_spi_regmap_config = {
+ EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, "IIO_ICM42600");
+ 
+ struct inv_icm42600_hw {
+-	uint8_t whoami;
++	u8 whoami;
+ 	const char *name;
+ 	const struct inv_icm42600_conf *conf;
+ };
+@@ -188,9 +188,9 @@ inv_icm42600_get_mount_matrix(const struct iio_dev *indio_dev,
+ 	return &st->orientation;
+ }
+ 
+-uint32_t inv_icm42600_odr_to_period(enum inv_icm42600_odr odr)
++u32 inv_icm42600_odr_to_period(enum inv_icm42600_odr odr)
+ {
+-	static uint32_t odr_periods[INV_ICM42600_ODR_NB] = {
++	static u32 odr_periods[INV_ICM42600_ODR_NB] = {
+ 		/* reserved values */
+ 		0, 0, 0,
+ 		/* 8kHz */
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+index b4d7ce1432a4f4..9ba6f13628e6af 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+@@ -77,7 +77,7 @@ static const struct iio_chan_spec inv_icm42600_gyro_channels[] = {
+  */
+ struct inv_icm42600_gyro_buffer {
+ 	struct inv_icm42600_fifo_sensor_data gyro;
+-	int16_t temp;
++	s16 temp;
+ 	aligned_s64 timestamp;
+ };
+ 
+@@ -139,7 +139,7 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev,
+ 
+ static int inv_icm42600_gyro_read_sensor(struct inv_icm42600_state *st,
+ 					 struct iio_chan_spec const *chan,
+-					 int16_t *val)
++					 s16 *val)
+ {
+ 	struct device *dev = regmap_get_device(st->map);
+ 	struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+@@ -179,7 +179,7 @@ static int inv_icm42600_gyro_read_sensor(struct inv_icm42600_state *st,
+ 	if (ret)
+ 		goto exit;
+ 
+-	*val = (int16_t)be16_to_cpup(data);
++	*val = (s16)be16_to_cpup(data);
+ 	if (*val == INV_ICM42600_DATA_INVALID)
+ 		ret = -EINVAL;
+ exit:
+@@ -399,11 +399,11 @@ static int inv_icm42600_gyro_read_offset(struct inv_icm42600_state *st,
+ 					 int *val, int *val2)
+ {
+ 	struct device *dev = regmap_get_device(st->map);
+-	int64_t val64;
+-	int32_t bias;
++	s64 val64;
++	s32 bias;
+ 	unsigned int reg;
+-	int16_t offset;
+-	uint8_t data[2];
++	s16 offset;
++	u8 data[2];
+ 	int ret;
+ 
+ 	if (chan->type != IIO_ANGL_VEL)
+@@ -457,7 +457,7 @@ static int inv_icm42600_gyro_read_offset(struct inv_icm42600_state *st,
+ 	 * result in nano (1000000000)
+ 	 * (offset * 64 * Pi * 1000000000) / (2048 * 180)
+ 	 */
+-	val64 = (int64_t)offset * 64LL * 3141592653LL;
++	val64 = (s64)offset * 64LL * 3141592653LL;
+ 	/* for rounding, add + or - divisor (2048 * 180) divided by 2 */
+ 	if (val64 >= 0)
+ 		val64 += 2048 * 180 / 2;
+@@ -475,9 +475,9 @@ static int inv_icm42600_gyro_write_offset(struct inv_icm42600_state *st,
+ 					  int val, int val2)
+ {
+ 	struct device *dev = regmap_get_device(st->map);
+-	int64_t val64, min, max;
++	s64 val64, min, max;
+ 	unsigned int reg, regval;
+-	int16_t offset;
++	s16 offset;
+ 	int ret;
+ 
+ 	if (chan->type != IIO_ANGL_VEL)
+@@ -498,11 +498,11 @@ static int inv_icm42600_gyro_write_offset(struct inv_icm42600_state *st,
+ 	}
+ 
+ 	/* inv_icm42600_gyro_calibbias: min - step - max in nano */
+-	min = (int64_t)inv_icm42600_gyro_calibbias[0] * 1000000000LL +
+-	      (int64_t)inv_icm42600_gyro_calibbias[1];
+-	max = (int64_t)inv_icm42600_gyro_calibbias[4] * 1000000000LL +
+-	      (int64_t)inv_icm42600_gyro_calibbias[5];
+-	val64 = (int64_t)val * 1000000000LL + (int64_t)val2;
++	min = (s64)inv_icm42600_gyro_calibbias[0] * 1000000000LL +
++	      (s64)inv_icm42600_gyro_calibbias[1];
++	max = (s64)inv_icm42600_gyro_calibbias[4] * 1000000000LL +
++	      (s64)inv_icm42600_gyro_calibbias[5];
++	val64 = (s64)val * 1000000000LL + (s64)val2;
+ 	if (val64 < min || val64 > max)
+ 		return -EINVAL;
+ 
+@@ -577,7 +577,7 @@ static int inv_icm42600_gyro_read_raw(struct iio_dev *indio_dev,
+ 				      int *val, int *val2, long mask)
+ {
+ 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+-	int16_t data;
++	s16 data;
+ 	int ret;
+ 
+ 	switch (chan->type) {
+@@ -803,10 +803,11 @@ int inv_icm42600_gyro_parse_fifo(struct iio_dev *indio_dev)
+ 	ssize_t i, size;
+ 	unsigned int no;
+ 	const void *accel, *gyro, *timestamp;
+-	const int8_t *temp;
++	const s8 *temp;
+ 	unsigned int odr;
+-	int64_t ts_val;
+-	struct inv_icm42600_gyro_buffer buffer;
++	s64 ts_val;
++	/* buffer is copied to userspace, zeroing it to avoid any data leak */
++	struct inv_icm42600_gyro_buffer buffer = { };
+ 
+ 	/* parse all fifo packets */
+ 	for (i = 0, no = 0; i < st->fifo.count; i += size, ++no) {
+@@ -825,8 +826,6 @@ int inv_icm42600_gyro_parse_fifo(struct iio_dev *indio_dev)
+ 			inv_sensors_timestamp_apply_odr(ts, st->fifo.period,
+ 							st->fifo.nb.total, no);
+ 
+-		/* buffer is copied to userspace, zeroing it to avoid any data leak */
+-		memset(&buffer, 0, sizeof(buffer));
+ 		memcpy(&buffer.gyro, gyro, sizeof(buffer.gyro));
+ 		/* convert 8 bits FIFO temperature in high resolution format */
+ 		buffer.temp = temp ? (*temp * 64) : 0;
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
+index 988f227f6563da..271a4788604ad5 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
+@@ -13,7 +13,7 @@
+ #include "inv_icm42600.h"
+ #include "inv_icm42600_temp.h"
+ 
+-static int inv_icm42600_temp_read(struct inv_icm42600_state *st, int16_t *temp)
++static int inv_icm42600_temp_read(struct inv_icm42600_state *st, s16 *temp)
+ {
+ 	struct device *dev = regmap_get_device(st->map);
+ 	__be16 *raw;
+@@ -31,9 +31,13 @@ static int inv_icm42600_temp_read(struct inv_icm42600_state *st, int16_t *temp)
+ 	if (ret)
+ 		goto exit;
+ 
+-	*temp = (int16_t)be16_to_cpup(raw);
++	*temp = (s16)be16_to_cpup(raw);
++	/*
++	 * Temperature data is invalid if both accel and gyro are off.
++	 * Return -EBUSY in this case.
++	 */
+ 	if (*temp == INV_ICM42600_DATA_INVALID)
+-		ret = -EINVAL;
++		ret = -EBUSY;
+ 
+ exit:
+ 	mutex_unlock(&st->lock);
+@@ -48,7 +52,7 @@ int inv_icm42600_temp_read_raw(struct iio_dev *indio_dev,
+ 			       int *val, int *val2, long mask)
+ {
+ 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+-	int16_t temp;
++	s16 temp;
+ 	int ret;
+ 
+ 	if (chan->type != IIO_TEMP)
+diff --git a/drivers/iio/light/as73211.c b/drivers/iio/light/as73211.c
+index 68f60dc3c79d53..32719f584c47a0 100644
+--- a/drivers/iio/light/as73211.c
++++ b/drivers/iio/light/as73211.c
+@@ -639,7 +639,7 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
+ 	struct {
+ 		__le16 chan[4];
+ 		aligned_s64 ts;
+-	} scan;
++	} scan = { };
+ 	int data_result, ret;
+ 
+ 	mutex_lock(&data->mutex);
+diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c
+index f37f20776c8917..0f23ece440004c 100644
+--- a/drivers/iio/pressure/bmp280-core.c
++++ b/drivers/iio/pressure/bmp280-core.c
+@@ -3216,11 +3216,12 @@ int bmp280_common_probe(struct device *dev,
+ 
+ 	/* Bring chip out of reset if there is an assigned GPIO line */
+ 	gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
++	if (IS_ERR(gpiod))
++		return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n");
++
+ 	/* Deassert the signal */
+-	if (gpiod) {
+-		dev_info(dev, "release reset\n");
+-		gpiod_set_value(gpiod, 0);
+-	}
++	dev_info(dev, "release reset\n");
++	gpiod_set_value(gpiod, 0);
+ 
+ 	data->regmap = regmap;
+ 
+diff --git a/drivers/iio/proximity/isl29501.c b/drivers/iio/proximity/isl29501.c
+index d1510fe2405088..f69db6f2f38031 100644
+--- a/drivers/iio/proximity/isl29501.c
++++ b/drivers/iio/proximity/isl29501.c
+@@ -938,12 +938,18 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct isl29501_private *isl29501 = iio_priv(indio_dev);
+ 	const unsigned long *active_mask = indio_dev->active_scan_mask;
+-	u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
+-
+-	if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
+-		isl29501_register_read(isl29501, REG_DISTANCE, buffer);
++	u32 value;
++	struct {
++		u16 data;
++		aligned_s64 ts;
++	} scan = { };
++
++	if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) {
++		isl29501_register_read(isl29501, REG_DISTANCE, &value);
++		scan.data = value;
++	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp);
++	iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c
+index cae8e84821d7fd..205939680fd4fc 100644
+--- a/drivers/iio/temperature/maxim_thermocouple.c
++++ b/drivers/iio/temperature/maxim_thermocouple.c
+@@ -11,6 +11,7 @@
+ #include <linux/module.h>
+ #include <linux/err.h>
+ #include <linux/spi/spi.h>
++#include <linux/types.h>
+ #include <linux/iio/iio.h>
+ #include <linux/iio/sysfs.h>
+ #include <linux/iio/trigger.h>
+@@ -121,8 +122,15 @@ struct maxim_thermocouple_data {
+ 	struct spi_device *spi;
+ 	const struct maxim_thermocouple_chip *chip;
+ 	char tc_type;
+-
+-	u8 buffer[16] __aligned(IIO_DMA_MINALIGN);
++	/* Buffer for reading up to 2 hardware channels. */
++	struct {
++		union {
++			__be16 raw16;
++			__be32 raw32;
++			__be16 raw[2];
++		};
++		aligned_s64 timestamp;
++	} buffer __aligned(IIO_DMA_MINALIGN);
+ };
+ 
+ static int maxim_thermocouple_read(struct maxim_thermocouple_data *data,
+@@ -130,18 +138,16 @@ static int maxim_thermocouple_read(struct maxim_thermocouple_data *data,
+ {
+ 	unsigned int storage_bytes = data->chip->read_size;
+ 	unsigned int shift = chan->scan_type.shift + (chan->address * 8);
+-	__be16 buf16;
+-	__be32 buf32;
+ 	int ret;
+ 
+ 	switch (storage_bytes) {
+ 	case 2:
+-		ret = spi_read(data->spi, (void *)&buf16, storage_bytes);
+-		*val = be16_to_cpu(buf16);
++		ret = spi_read(data->spi, &data->buffer.raw16, storage_bytes);
++		*val = be16_to_cpu(data->buffer.raw16);
+ 		break;
+ 	case 4:
+-		ret = spi_read(data->spi, (void *)&buf32, storage_bytes);
+-		*val = be32_to_cpu(buf32);
++		ret = spi_read(data->spi, &data->buffer.raw32, storage_bytes);
++		*val = be32_to_cpu(data->buffer.raw32);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -166,9 +172,9 @@ static irqreturn_t maxim_thermocouple_trigger_handler(int irq, void *private)
+ 	struct maxim_thermocouple_data *data = iio_priv(indio_dev);
+ 	int ret;
+ 
+-	ret = spi_read(data->spi, data->buffer, data->chip->read_size);
++	ret = spi_read(data->spi, data->buffer.raw, data->chip->read_size);
+ 	if (!ret) {
+-		iio_push_to_buffers_with_ts(indio_dev, data->buffer,
++		iio_push_to_buffers_with_ts(indio_dev, &data->buffer,
+ 					    sizeof(data->buffer),
+ 					    iio_get_time_ns(indio_dev));
+ 	}
+diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
+index b1c44ec1a3f36d..572a91a62a7bea 100644
+--- a/drivers/infiniband/core/umem_odp.c
++++ b/drivers/infiniband/core/umem_odp.c
+@@ -115,7 +115,7 @@ static int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
+ 
+ out_free_map:
+ 	if (ib_uses_virt_dma(dev))
+-		kfree(map->pfn_list);
++		kvfree(map->pfn_list);
+ 	else
+ 		hmm_dma_map_free(dev->dma_device, map);
+ 	return ret;
+@@ -287,7 +287,7 @@ static void ib_umem_odp_free(struct ib_umem_odp *umem_odp)
+ 	mutex_unlock(&umem_odp->umem_mutex);
+ 	mmu_interval_notifier_remove(&umem_odp->notifier);
+ 	if (ib_uses_virt_dma(dev))
+-		kfree(umem_odp->map.pfn_list);
++		kvfree(umem_odp->map.pfn_list);
+ 	else
+ 		hmm_dma_map_free(dev->dma_device, &umem_odp->map);
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 3a627acb82ce13..9b33072f9a0680 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1921,7 +1921,6 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
+ 	struct bnxt_re_srq *srq = container_of(ib_srq, struct bnxt_re_srq,
+ 					       ib_srq);
+ 	struct bnxt_re_dev *rdev = srq->rdev;
+-	int rc;
+ 
+ 	switch (srq_attr_mask) {
+ 	case IB_SRQ_MAX_WR:
+@@ -1933,11 +1932,8 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
+ 			return -EINVAL;
+ 
+ 		srq->qplib_srq.threshold = srq_attr->srq_limit;
+-		rc = bnxt_qplib_modify_srq(&rdev->qplib_res, &srq->qplib_srq);
+-		if (rc) {
+-			ibdev_err(&rdev->ibdev, "Modify HW SRQ failed!");
+-			return rc;
+-		}
++		bnxt_qplib_srq_arm_db(&srq->qplib_srq.dbinfo, srq->qplib_srq.threshold);
++
+ 		/* On success, update the shadow */
+ 		srq->srq_limit = srq_attr->srq_limit;
+ 		/* No need to Build and send response back to udata */
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 293b0a96c8e3ec..df7cf8d68e273f 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -2017,6 +2017,28 @@ static void bnxt_re_free_nqr_mem(struct bnxt_re_dev *rdev)
+ 	rdev->nqr = NULL;
+ }
+ 
++/* When DEL_GID fails, the driver does not free the GID context memory.
++ * To avoid the memory leak, free the memory during unload.
++ */
++static void bnxt_re_free_gid_ctx(struct bnxt_re_dev *rdev)
++{
++	struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl;
++	struct bnxt_re_gid_ctx *ctx, **ctx_tbl;
++	int i;
++
++	if (!sgid_tbl->active)
++		return;
++
++	ctx_tbl = sgid_tbl->ctx;
++	for (i = 0; i < sgid_tbl->max; i++) {
++		if (sgid_tbl->hw_id[i] == 0xFFFF)
++			continue;
++
++		ctx = ctx_tbl[i];
++		kfree(ctx);
++	}
++}
++
+ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
+ {
+ 	u8 type;
+@@ -2030,6 +2052,7 @@ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags))
+ 		cancel_delayed_work_sync(&rdev->worker);
+ 
++	bnxt_re_free_gid_ctx(rdev);
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_RESOURCES_INITIALIZED,
+ 			       &rdev->flags))
+ 		bnxt_re_cleanup_res(rdev);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index be34c605d51672..c2784561156f63 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -705,9 +705,7 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ 	srq->dbinfo.db = srq->dpi->dbr;
+ 	srq->dbinfo.max_slot = 1;
+ 	srq->dbinfo.priv_db = res->dpi_tbl.priv_db;
+-	if (srq->threshold)
+-		bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA);
+-	srq->arm_req = false;
++	bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA);
+ 
+ 	return 0;
+ fail:
+@@ -717,24 +715,6 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ 	return rc;
+ }
+ 
+-int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res,
+-			  struct bnxt_qplib_srq *srq)
+-{
+-	struct bnxt_qplib_hwq *srq_hwq = &srq->hwq;
+-	u32 count;
+-
+-	count = __bnxt_qplib_get_avail(srq_hwq);
+-	if (count > srq->threshold) {
+-		srq->arm_req = false;
+-		bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold);
+-	} else {
+-		/* Deferred arming */
+-		srq->arm_req = true;
+-	}
+-
+-	return 0;
+-}
+-
+ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
+ 			 struct bnxt_qplib_srq *srq)
+ {
+@@ -776,7 +756,6 @@ int bnxt_qplib_post_srq_recv(struct bnxt_qplib_srq *srq,
+ 	struct bnxt_qplib_hwq *srq_hwq = &srq->hwq;
+ 	struct rq_wqe *srqe;
+ 	struct sq_sge *hw_sge;
+-	u32 count = 0;
+ 	int i, next;
+ 
+ 	spin_lock(&srq_hwq->lock);
+@@ -808,15 +787,8 @@ int bnxt_qplib_post_srq_recv(struct bnxt_qplib_srq *srq,
+ 
+ 	bnxt_qplib_hwq_incr_prod(&srq->dbinfo, srq_hwq, srq->dbinfo.max_slot);
+ 
+-	spin_lock(&srq_hwq->lock);
+-	count = __bnxt_qplib_get_avail(srq_hwq);
+-	spin_unlock(&srq_hwq->lock);
+ 	/* Ring DB */
+ 	bnxt_qplib_ring_prod_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ);
+-	if (srq->arm_req == true && count > srq->threshold) {
+-		srq->arm_req = false;
+-		bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold);
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 0d9487c889ff3e..846501f12227ca 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -543,8 +543,6 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
+ 			 srqn_handler_t srq_handler);
+ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ 			  struct bnxt_qplib_srq *srq);
+-int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res,
+-			  struct bnxt_qplib_srq *srq);
+ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
+ 			 struct bnxt_qplib_srq *srq);
+ void bnxt_qplib_destroy_srq(struct bnxt_qplib_res *res,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 6cd05207ffeddf..cc5c82d968395a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -121,6 +121,7 @@ static int __alloc_pbl(struct bnxt_qplib_res *res,
+ 	pbl->pg_arr = vmalloc_array(pages, sizeof(void *));
+ 	if (!pbl->pg_arr)
+ 		return -ENOMEM;
++	memset(pbl->pg_arr, 0, pages * sizeof(void *));
+ 
+ 	pbl->pg_map_arr = vmalloc_array(pages, sizeof(dma_addr_t));
+ 	if (!pbl->pg_map_arr) {
+@@ -128,6 +129,7 @@ static int __alloc_pbl(struct bnxt_qplib_res *res,
+ 		pbl->pg_arr = NULL;
+ 		return -ENOMEM;
+ 	}
++	memset(pbl->pg_map_arr, 0, pages * sizeof(dma_addr_t));
+ 	pbl->pg_count = 0;
+ 	pbl->pg_size = sginfo->pgsize;
+ 
+diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
+index ec0ad40860668a..8d7596abb82296 100644
+--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
+@@ -994,6 +994,8 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
+ 		old_entry = xa_store(&dev->qp_xa, 1, qp, GFP_KERNEL);
+ 		if (xa_is_err(old_entry))
+ 			ret = xa_err(old_entry);
++		else
++			qp->ibqp.qp_num = 1;
+ 	} else {
+ 		ret = xa_alloc_cyclic(&dev->qp_xa, &qp->ibqp.qp_num, qp,
+ 				      XA_LIMIT(1, dev->attrs.max_qp - 1),
+@@ -1031,7 +1033,9 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs,
+ 		if (ret)
+ 			goto err_out_cmd;
+ 	} else {
+-		init_kernel_qp(dev, qp, attrs);
++		ret = init_kernel_qp(dev, qp, attrs);
++		if (ret)
++			goto err_out_xa;
+ 	}
+ 
+ 	qp->attrs.max_send_sge = attrs->cap.max_send_sge;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index b30dce00f2405a..b544ca0244842b 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -3043,7 +3043,7 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
+ 	if (!hr_dev->is_vf)
+ 		hns_roce_free_link_table(hr_dev);
+ 
+-	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09)
++	if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
+ 		free_dip_entry(hr_dev);
+ }
+ 
+@@ -5514,7 +5514,7 @@ static int hns_roce_v2_query_srqc(struct hns_roce_dev *hr_dev, u32 srqn,
+ 	return ret;
+ }
+ 
+-static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn,
++static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 sccn,
+ 				  void *buffer)
+ {
+ 	struct hns_roce_v2_scc_context *context;
+@@ -5526,7 +5526,7 @@ static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn,
+ 		return PTR_ERR(mailbox);
+ 
+ 	ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, HNS_ROCE_CMD_QUERY_SCCC,
+-				qpn);
++				sccn);
+ 	if (ret)
+ 		goto out;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_restrack.c b/drivers/infiniband/hw/hns/hns_roce_restrack.c
+index f637b73b946e44..230187dda6a07b 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_restrack.c
++++ b/drivers/infiniband/hw/hns/hns_roce_restrack.c
+@@ -100,6 +100,7 @@ int hns_roce_fill_res_qp_entry_raw(struct sk_buff *msg, struct ib_qp *ib_qp)
+ 		struct hns_roce_v2_qp_context qpc;
+ 		struct hns_roce_v2_scc_context sccc;
+ 	} context = {};
++	u32 sccn = hr_qp->qpn;
+ 	int ret;
+ 
+ 	if (!hr_dev->hw->query_qpc)
+@@ -116,7 +117,13 @@ int hns_roce_fill_res_qp_entry_raw(struct sk_buff *msg, struct ib_qp *ib_qp)
+ 	    !hr_dev->hw->query_sccc)
+ 		goto out;
+ 
+-	ret = hr_dev->hw->query_sccc(hr_dev, hr_qp->qpn, &context.sccc);
++	if (hr_qp->cong_type == CONG_TYPE_DIP) {
++		if (!hr_qp->dip)
++			goto out;
++		sccn = hr_qp->dip->dip_idx;
++	}
++
++	ret = hr_dev->hw->query_sccc(hr_dev, sccn, &context.sccc);
+ 	if (ret)
+ 		ibdev_warn_ratelimited(&hr_dev->ib_dev,
+ 				       "failed to query SCCC, ret = %d.\n",
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 132a87e52d5c7e..ac0183a2ff7aa1 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -345,33 +345,15 @@ int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
+ 
+ static void rxe_skb_tx_dtor(struct sk_buff *skb)
+ {
+-	struct net_device *ndev = skb->dev;
+-	struct rxe_dev *rxe;
+-	unsigned int qp_index;
+-	struct rxe_qp *qp;
++	struct rxe_qp *qp = skb->sk->sk_user_data;
+ 	int skb_out;
+ 
+-	rxe = rxe_get_dev_from_net(ndev);
+-	if (!rxe && is_vlan_dev(ndev))
+-		rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev));
+-	if (WARN_ON(!rxe))
+-		return;
+-
+-	qp_index = (int)(uintptr_t)skb->sk->sk_user_data;
+-	if (!qp_index)
+-		return;
+-
+-	qp = rxe_pool_get_index(&rxe->qp_pool, qp_index);
+-	if (!qp)
+-		goto put_dev;
+-
+ 	skb_out = atomic_dec_return(&qp->skb_out);
+-	if (qp->need_req_skb && skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)
++	if (unlikely(qp->need_req_skb &&
++		skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW))
+ 		rxe_sched_task(&qp->send_task);
+ 
+ 	rxe_put(qp);
+-put_dev:
+-	ib_device_put(&rxe->ib_dev);
+ 	sock_put(skb->sk);
+ }
+ 
+@@ -383,6 +365,7 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt)
+ 	sock_hold(sk);
+ 	skb->sk = sk;
+ 	skb->destructor = rxe_skb_tx_dtor;
++	rxe_get(pkt->qp);
+ 	atomic_inc(&pkt->qp->skb_out);
+ 
+ 	if (skb->protocol == htons(ETH_P_IP))
+@@ -405,6 +388,7 @@ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt)
+ 	sock_hold(sk);
+ 	skb->sk = sk;
+ 	skb->destructor = rxe_skb_tx_dtor;
++	rxe_get(pkt->qp);
+ 	atomic_inc(&pkt->qp->skb_out);
+ 
+ 	if (skb->protocol == htons(ETH_P_IP))
+@@ -497,6 +481,9 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
+ 		goto out;
+ 	}
+ 
++	/* Add a timestamp to the skb. */
++	skb->tstamp = ktime_get();
++
+ 	skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev));
+ 
+ 	/* FIXME: hold reference to this netdev until life of this skb. */
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index f2af3e0aef35b5..95f1c1c2949de7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -244,7 +244,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);
+ 	if (err < 0)
+ 		return err;
+-	qp->sk->sk->sk_user_data = (void *)(uintptr_t)qp->elem.index;
++	qp->sk->sk->sk_user_data = qp;
+ 
+ 	/* pick a source UDP port number for this QP based on
+ 	 * the source QPN. this spreads traffic for different QPs
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 9c17dfa7670306..7add9bcf45dc8b 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -3596,7 +3596,7 @@ static int __init parse_ivrs_acpihid(char *str)
+ {
+ 	u32 seg = 0, bus, dev, fn;
+ 	char *hid, *uid, *p, *addr;
+-	char acpiid[ACPIID_LEN] = {0};
++	char acpiid[ACPIID_LEN + 1] = { }; /* size with NULL terminator */
+ 	int i;
+ 
+ 	addr = strchr(str, '@');
+@@ -3622,7 +3622,7 @@ static int __init parse_ivrs_acpihid(char *str)
+ 	/* We have the '@', make it the terminator to get just the acpiid */
+ 	*addr++ = 0;
+ 
+-	if (strlen(str) > ACPIID_LEN + 1)
++	if (strlen(str) > ACPIID_LEN)
+ 		goto not_found;
+ 
+ 	if (sscanf(str, "=%s", acpiid) != 1)
+diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
+index 757d24f67ad45a..190f28d7661515 100644
+--- a/drivers/iommu/apple-dart.c
++++ b/drivers/iommu/apple-dart.c
+@@ -991,7 +991,6 @@ static const struct iommu_ops apple_dart_iommu_ops = {
+ 	.of_xlate = apple_dart_of_xlate,
+ 	.def_domain_type = apple_dart_def_domain_type,
+ 	.get_resv_regions = apple_dart_get_resv_regions,
+-	.pgsize_bitmap = -1UL, /* Restricted during dart probe */
+ 	.owner = THIS_MODULE,
+ 	.default_domain_ops = &(const struct iommu_domain_ops) {
+ 		.attach_dev	= apple_dart_attach_dev_paging,
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index dacaa78f69aaa1..43df3dc65e2127 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2997,9 +2997,9 @@ void arm_smmu_attach_commit(struct arm_smmu_attach_state *state)
+ 		/* ATS is being switched off, invalidate the entire ATC */
+ 		arm_smmu_atc_inv_master(master, IOMMU_NO_PASID);
+ 	}
+-	master->ats_enabled = state->ats_enabled;
+ 
+ 	arm_smmu_remove_master_domain(master, state->old_domain, state->ssid);
++	master->ats_enabled = state->ats_enabled;
+ }
+ 
+ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index ab4cd742f0953c..c239e280e43d91 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -4390,7 +4390,6 @@ const struct iommu_ops intel_iommu_ops = {
+ 	.device_group		= intel_iommu_device_group,
+ 	.is_attach_deferred	= intel_iommu_is_attach_deferred,
+ 	.def_domain_type	= device_def_domain_type,
+-	.pgsize_bitmap		= SZ_4K,
+ 	.page_response		= intel_iommu_page_response,
+ 	.default_domain_ops = &(const struct iommu_domain_ops) {
+ 		.attach_dev		= intel_iommu_attach_device,
+diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
+index 6bd0abf9a641e2..c52bf037a2f01e 100644
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -801,7 +801,6 @@ static const struct iommu_ops mock_ops = {
+ 	.default_domain = &mock_blocking_domain,
+ 	.blocked_domain = &mock_blocking_domain,
+ 	.owner = THIS_MODULE,
+-	.pgsize_bitmap = MOCK_IO_PAGE_SIZE,
+ 	.hw_info = mock_domain_hw_info,
+ 	.domain_alloc_paging_flags = mock_domain_alloc_paging_flags,
+ 	.domain_alloc_nested = mock_domain_alloc_nested,
+diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
+index bb57092ca90110..0eae2f4bdc5e64 100644
+--- a/drivers/iommu/riscv/iommu.c
++++ b/drivers/iommu/riscv/iommu.c
+@@ -1283,7 +1283,7 @@ static phys_addr_t riscv_iommu_iova_to_phys(struct iommu_domain *iommu_domain,
+ 	unsigned long *ptr;
+ 
+ 	ptr = riscv_iommu_pte_fetch(domain, iova, &pte_size);
+-	if (_io_pte_none(*ptr) || !_io_pte_present(*ptr))
++	if (!ptr)
+ 		return 0;
+ 
+ 	return pfn_to_phys(__page_val_to_pfn(*ptr)) | (iova & (pte_size - 1));
+@@ -1533,7 +1533,6 @@ static void riscv_iommu_release_device(struct device *dev)
+ }
+ 
+ static const struct iommu_ops riscv_iommu_ops = {
+-	.pgsize_bitmap = SZ_4K,
+ 	.of_xlate = riscv_iommu_of_xlate,
+ 	.identity_domain = &riscv_iommu_identity_domain,
+ 	.blocked_domain = &riscv_iommu_blocking_domain,
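
The first riscv hunk removes a dereference-before-check: the old code tested
the PTE bits through *ptr before verifying that riscv_iommu_pte_fetch() found
a PTE at all, so a failed lookup dereferenced NULL. The corrected shape as a
standalone sketch, with stand-in helpers:

#include <stddef.h>

/* Stand-in for riscv_iommu_pte_fetch(): NULL means nothing maps @iova. */
static unsigned long *pte_fetch(unsigned long iova, size_t *pte_size)
{
	(void)iova;
	*pte_size = 4096;
	return NULL;
}

unsigned long iova_to_phys(unsigned long iova)
{
	size_t pte_size;
	unsigned long *ptr = pte_fetch(iova, &pte_size);

	if (!ptr)			/* test the lookup, not *ptr */
		return 0;

	return (*ptr & ~0xfffUL) | (iova & (pte_size - 1));
}
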
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index ecd41fb03e5a51..b39d6f134ab28f 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -998,8 +998,7 @@ static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+ 	iommu_dma_get_resv_regions(dev, head);
+ }
+ 
+-static struct iommu_ops viommu_ops;
+-static struct virtio_driver virtio_iommu_drv;
++static const struct bus_type *virtio_bus_type;
+ 
+ static int viommu_match_node(struct device *dev, const void *data)
+ {
+@@ -1008,8 +1007,9 @@ static int viommu_match_node(struct device *dev, const void *data)
+ 
+ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+ {
+-	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+-						fwnode, viommu_match_node);
++	struct device *dev = bus_find_device(virtio_bus_type, NULL, fwnode,
++					     viommu_match_node);
++
+ 	put_device(dev);
+ 
+ 	return dev ? dev_to_virtio(dev)->priv : NULL;
+@@ -1086,7 +1086,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
+ 	}
+ }
+ 
+-static struct iommu_ops viommu_ops = {
++static const struct iommu_ops viommu_ops = {
+ 	.capable		= viommu_capable,
+ 	.domain_alloc_identity	= viommu_domain_alloc_identity,
+ 	.domain_alloc_paging	= viommu_domain_alloc_paging,
+@@ -1160,6 +1160,9 @@ static int viommu_probe(struct virtio_device *vdev)
+ 	if (!viommu)
+ 		return -ENOMEM;
+ 
++	/* Borrow this for easy lookups later */
++	virtio_bus_type = dev->bus;
++
+ 	spin_lock_init(&viommu->request_lock);
+ 	ida_init(&viommu->domain_ids);
+ 	viommu->dev = dev;
+@@ -1217,8 +1220,6 @@ static int viommu_probe(struct virtio_device *vdev)
+ 		viommu->first_domain++;
+ 	}
+ 
+-	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+-
+ 	virtio_device_ready(vdev);
+ 
+ 	/* Populate the event queue with buffers */
+@@ -1231,10 +1232,10 @@ static int viommu_probe(struct virtio_device *vdev)
+ 	if (ret)
+ 		goto err_free_vqs;
+ 
+-	iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev);
+-
+ 	vdev->priv = viommu;
+ 
++	iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev);
++
+ 	dev_info(dev, "input address: %u bits\n",
+ 		 order_base_2(viommu->geometry.aperture_end));
+ 	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
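
Three related cleanups land in virtio-iommu: viommu_ops becomes const (the
per-instance pgsize_bitmap no longer has to be patched into the shared ops
table at probe time), instance lookup goes through the virtio bus instead of
the driver, and vdev->priv is assigned before iommu_device_register() so any
callback triggered by registration already finds the instance data. That last
ordering rule, sketched with hypothetical framework names:

struct mydev {
	void *priv;			/* read by framework callbacks */
};

/* Stand-in for iommu_device_register(): may invoke callbacks that expect
 * dev->priv to be populated. */
static int framework_register(struct mydev *dev)
{
	return dev->priv ? 0 : -1;
}

int my_probe(struct mydev *dev, void *instance)
{
	dev->priv = instance;		/* publish state first ...     */
	return framework_register(dev);	/* ... then allow callbacks in */
}
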
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 17157c4216a5b5..4e80784d17343e 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -253,17 +253,35 @@ MODULE_PARM_DESC(max_read_size, "Maximum size of a read request");
+ static unsigned int max_write_size = 0;
+ module_param(max_write_size, uint, 0644);
+ MODULE_PARM_DESC(max_write_size, "Maximum size of a write request");
+-static unsigned get_max_request_size(struct crypt_config *cc, bool wrt)
++
++static unsigned get_max_request_sectors(struct dm_target *ti, struct bio *bio)
+ {
++	struct crypt_config *cc = ti->private;
+ 	unsigned val, sector_align;
+-	val = !wrt ? READ_ONCE(max_read_size) : READ_ONCE(max_write_size);
+-	if (likely(!val))
+-		val = !wrt ? DM_CRYPT_DEFAULT_MAX_READ_SIZE : DM_CRYPT_DEFAULT_MAX_WRITE_SIZE;
+-	if (wrt || cc->used_tag_size) {
+-		if (unlikely(val > BIO_MAX_VECS << PAGE_SHIFT))
+-			val = BIO_MAX_VECS << PAGE_SHIFT;
+-	}
+-	sector_align = max(bdev_logical_block_size(cc->dev->bdev), (unsigned)cc->sector_size);
++	bool wrt = op_is_write(bio_op(bio));
++
++	if (wrt) {
++		/*
++		 * For zoned devices, splitting write operations creates the
++		 * risk of deadlocking queue freeze operations with zone write
++		 * plugging BIO work when the remainder of a split BIO is
++		 * issued. So always allow the entire BIO to proceed.
++		 */
++		if (ti->emulate_zone_append)
++			return bio_sectors(bio);
++
++		val = min_not_zero(READ_ONCE(max_write_size),
++				   DM_CRYPT_DEFAULT_MAX_WRITE_SIZE);
++	} else {
++		val = min_not_zero(READ_ONCE(max_read_size),
++				   DM_CRYPT_DEFAULT_MAX_READ_SIZE);
++	}
++
++	if (wrt || cc->used_tag_size)
++		val = min(val, BIO_MAX_VECS << PAGE_SHIFT);
++
++	sector_align = max(bdev_logical_block_size(cc->dev->bdev),
++			   (unsigned)cc->sector_size);
+ 	val = round_down(val, sector_align);
+ 	if (unlikely(!val))
+ 		val = sector_align;
+@@ -3496,7 +3514,7 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
+ 	/*
+ 	 * Check if bio is too large, split as needed.
+ 	 */
+-	max_sectors = get_max_request_size(cc, bio_data_dir(bio) == WRITE);
++	max_sectors = get_max_request_sectors(ti, bio);
+ 	if (unlikely(bio_sectors(bio) > max_sectors))
+ 		dm_accept_partial_bio(bio, max_sectors);
+ 
+@@ -3733,6 +3751,17 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 		max_t(unsigned int, limits->physical_block_size, cc->sector_size);
+ 	limits->io_min = max_t(unsigned int, limits->io_min, cc->sector_size);
+ 	limits->dma_alignment = limits->logical_block_size - 1;
++
++	/*
++	 * For zoned dm-crypt targets, there will be no internal splitting of
++	 * write BIOs to avoid exceeding BIO_MAX_VECS vectors per BIO. But
++	 * without respecting this limit, crypt_alloc_buffer() will trigger a
++	 * BUG(). Avoid this by forcing DM core to split write BIOs to this
++	 * limit.
++	 */
++	if (ti->emulate_zone_append)
++		limits->max_hw_sectors = min(limits->max_hw_sectors,
++					     BIO_MAX_VECS << PAGE_SECTORS_SHIFT);
+ }
+ 
+ static struct target_type crypt_target = {
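
The rewritten dm-crypt helper expresses the old default/clamp dance with
min_not_zero() - an unset (zero) module parameter falls back to the built-in
default - and adds the zoned-device escape hatch, since splitting a write
there can deadlock queue freezing against zone write plugging. The clamping
logic alone, restated with stand-in constants:

#define DEFAULT_MAX_BYTES	(1024u * 1024u)	/* stand-in default */
#define HARD_CAP_BYTES		(256u * 4096u)	/* e.g. BIO_MAX_VECS << PAGE_SHIFT */

/* Userspace stand-in for the kernel's min_not_zero() macro. */
static unsigned int min_not_zero(unsigned int a, unsigned int b)
{
	if (!a)
		return b;
	if (!b)
		return a;
	return a < b ? a : b;
}

unsigned int max_request_bytes(unsigned int user_max, unsigned int align)
{
	unsigned int val = min_not_zero(user_max, DEFAULT_MAX_BYTES);

	if (val > HARD_CAP_BYTES)
		val = HARD_CAP_BYTES;

	val -= val % align;		/* round_down() to the alignment */
	return val ? val : align;	/* never report zero */
}
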
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index e8c0a8c6fb5117..9835f2fe26e99f 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -439,7 +439,7 @@ static bool rs_is_reshapable(struct raid_set *rs)
+ /* Return true, if raid set in @rs is recovering */
+ static bool rs_is_recovering(struct raid_set *rs)
+ {
+-	return rs->md.recovery_cp < rs->md.dev_sectors;
++	return rs->md.resync_offset < rs->md.dev_sectors;
+ }
+ 
+ /* Return true, if raid set in @rs is reshaping */
+@@ -769,7 +769,7 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	rs->md.layout = raid_type->algorithm;
+ 	rs->md.new_layout = rs->md.layout;
+ 	rs->md.delta_disks = 0;
+-	rs->md.recovery_cp = MaxSector;
++	rs->md.resync_offset = MaxSector;
+ 
+ 	for (i = 0; i < raid_devs; i++)
+ 		md_rdev_init(&rs->dev[i].rdev);
+@@ -913,7 +913,7 @@ static int parse_dev_params(struct raid_set *rs, struct dm_arg_set *as)
+ 		rs->md.external = 0;
+ 		rs->md.persistent = 1;
+ 		rs->md.major_version = 2;
+-	} else if (rebuild && !rs->md.recovery_cp) {
++	} else if (rebuild && !rs->md.resync_offset) {
+ 		/*
+ 		 * Without metadata, we will not be able to tell if the array
+ 		 * is in-sync or not - we must assume it is not.  Therefore,
+@@ -1696,20 +1696,20 @@ static void rs_setup_recovery(struct raid_set *rs, sector_t dev_sectors)
+ {
+ 	/* raid0 does not recover */
+ 	if (rs_is_raid0(rs))
+-		rs->md.recovery_cp = MaxSector;
++		rs->md.resync_offset = MaxSector;
+ 	/*
+ 	 * A raid6 set has to be recovered either
+ 	 * completely or for the grown part to
+ 	 * ensure proper parity and Q-Syndrome
+ 	 */
+ 	else if (rs_is_raid6(rs))
+-		rs->md.recovery_cp = dev_sectors;
++		rs->md.resync_offset = dev_sectors;
+ 	/*
+ 	 * Other raid set types may skip recovery
+ 	 * depending on the 'nosync' flag.
+ 	 */
+ 	else
+-		rs->md.recovery_cp = test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags)
++		rs->md.resync_offset = test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags)
+ 				     ? MaxSector : dev_sectors;
+ }
+ 
+@@ -2144,7 +2144,7 @@ static void super_sync(struct mddev *mddev, struct md_rdev *rdev)
+ 	sb->events = cpu_to_le64(mddev->events);
+ 
+ 	sb->disk_recovery_offset = cpu_to_le64(rdev->recovery_offset);
+-	sb->array_resync_offset = cpu_to_le64(mddev->recovery_cp);
++	sb->array_resync_offset = cpu_to_le64(mddev->resync_offset);
+ 
+ 	sb->level = cpu_to_le32(mddev->level);
+ 	sb->layout = cpu_to_le32(mddev->layout);
+@@ -2335,18 +2335,18 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ 	}
+ 
+ 	if (!test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags))
+-		mddev->recovery_cp = le64_to_cpu(sb->array_resync_offset);
++		mddev->resync_offset = le64_to_cpu(sb->array_resync_offset);
+ 
+ 	/*
+ 	 * During load, we set FirstUse if a new superblock was written.
+ 	 * There are two reasons we might not have a superblock:
+ 	 * 1) The raid set is brand new - in which case, all of the
+ 	 *    devices must have their In_sync bit set.	Also,
+-	 *    recovery_cp must be 0, unless forced.
++	 *    resync_offset must be 0, unless forced.
+ 	 * 2) This is a new device being added to an old raid set
+ 	 *    and the new device needs to be rebuilt - in which
+ 	 *    case the In_sync bit will /not/ be set and
+-	 *    recovery_cp must be MaxSector.
++	 *    resync_offset must be MaxSector.
+ 	 * 3) This is/are a new device(s) being added to an old
+ 	 *    raid set during takeover to a higher raid level
+ 	 *    to provide capacity for redundancy or during reshape
+@@ -2391,8 +2391,8 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ 			      new_devs > 1 ? "s" : "");
+ 			return -EINVAL;
+ 		} else if (!test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags) && rs_is_recovering(rs)) {
+-			DMERR("'rebuild' specified while raid set is not in-sync (recovery_cp=%llu)",
+-			      (unsigned long long) mddev->recovery_cp);
++			DMERR("'rebuild' specified while raid set is not in-sync (resync_offset=%llu)",
++			      (unsigned long long) mddev->resync_offset);
+ 			return -EINVAL;
+ 		} else if (rs_is_reshaping(rs)) {
+ 			DMERR("'rebuild' specified while raid set is being reshaped (reshape_position=%llu)",
+@@ -2697,11 +2697,11 @@ static int rs_adjust_data_offsets(struct raid_set *rs)
+ 	}
+ out:
+ 	/*
+-	 * Raise recovery_cp in case data_offset != 0 to
++	 * Raise resync_offset in case data_offset != 0 to
+ 	 * avoid false recovery positives in the constructor.
+ 	 */
+-	if (rs->md.recovery_cp < rs->md.dev_sectors)
+-		rs->md.recovery_cp += rs->dev[0].rdev.data_offset;
++	if (rs->md.resync_offset < rs->md.dev_sectors)
++		rs->md.resync_offset += rs->dev[0].rdev.data_offset;
+ 
+ 	/* Adjust data offsets on all rdevs but on any raid4/5/6 journal device */
+ 	rdev_for_each(rdev, &rs->md) {
+@@ -2756,7 +2756,7 @@ static int rs_setup_takeover(struct raid_set *rs)
+ 	}
+ 
+ 	clear_bit(MD_ARRAY_FIRST_USE, &mddev->flags);
+-	mddev->recovery_cp = MaxSector;
++	mddev->resync_offset = MaxSector;
+ 
+ 	while (d--) {
+ 		rdev = &rs->dev[d].rdev;
+@@ -2764,7 +2764,7 @@ static int rs_setup_takeover(struct raid_set *rs)
+ 		if (test_bit(d, (void *) rs->rebuild_disks)) {
+ 			clear_bit(In_sync, &rdev->flags);
+ 			clear_bit(Faulty, &rdev->flags);
+-			mddev->recovery_cp = rdev->recovery_offset = 0;
++			mddev->resync_offset = rdev->recovery_offset = 0;
+ 			/* Bitmap has to be created when we do an "up" takeover */
+ 			set_bit(MD_ARRAY_FIRST_USE, &mddev->flags);
+ 		}
+@@ -3222,7 +3222,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 			if (r)
+ 				goto bad;
+ 
+-			rs_setup_recovery(rs, rs->md.recovery_cp < rs->md.dev_sectors ? rs->md.recovery_cp : rs->md.dev_sectors);
++			rs_setup_recovery(rs, rs->md.resync_offset < rs->md.dev_sectors ? rs->md.resync_offset : rs->md.dev_sectors);
+ 		} else {
+ 			/* This is no size change or it is shrinking, update size and record in superblocks */
+ 			r = rs_set_dev_and_array_sectors(rs, rs->ti->len, false);
+@@ -3446,7 +3446,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 
+ 	} else {
+ 		if (state == st_idle && !test_bit(MD_RECOVERY_INTR, &recovery))
+-			r = mddev->recovery_cp;
++			r = mddev->resync_offset;
+ 		else
+ 			r = mddev->curr_resync_completed;
+ 
+@@ -4074,9 +4074,9 @@ static int raid_preresume(struct dm_target *ti)
+ 	}
+ 
+ 	/* Check for any resize/reshape on @rs and adjust/initiate */
+-	if (mddev->recovery_cp && mddev->recovery_cp < MaxSector) {
++	if (mddev->resync_offset && mddev->resync_offset < MaxSector) {
+ 		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
+-		mddev->resync_min = mddev->recovery_cp;
++		mddev->resync_min = mddev->resync_offset;
+ 		if (test_bit(RT_FLAG_RS_GROW, &rs->runtime_flags))
+ 			mddev->resync_max_sectors = mddev->dev_sectors;
+ 	}
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 9f6d88ea60e67f..abfe0392b5a47e 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1293,8 +1293,9 @@ static size_t dm_dax_recovery_write(struct dax_device *dax_dev, pgoff_t pgoff,
+ /*
+  * A target may call dm_accept_partial_bio only from the map routine.  It is
+  * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_* zone management
+- * operations, REQ_OP_ZONE_APPEND (zone append writes) and any bio serviced by
+- * __send_duplicate_bios().
++ * operations, zone append writes (native with REQ_OP_ZONE_APPEND or emulated
++ * with write BIOs flagged with BIO_EMULATES_ZONE_APPEND) and any bio serviced
++ * by __send_duplicate_bios().
+  *
+  * dm_accept_partial_bio informs the dm that the target only wants to process
+  * additional n_sectors sectors of the bio and the rest of the data should be
+@@ -1327,11 +1328,19 @@ void dm_accept_partial_bio(struct bio *bio, unsigned int n_sectors)
+ 	unsigned int bio_sectors = bio_sectors(bio);
+ 
+ 	BUG_ON(dm_tio_flagged(tio, DM_TIO_IS_DUPLICATE_BIO));
+-	BUG_ON(op_is_zone_mgmt(bio_op(bio)));
+-	BUG_ON(bio_op(bio) == REQ_OP_ZONE_APPEND);
+ 	BUG_ON(bio_sectors > *tio->len_ptr);
+ 	BUG_ON(n_sectors > bio_sectors);
+ 
++	if (static_branch_unlikely(&zoned_enabled) &&
++	    unlikely(bdev_is_zoned(bio->bi_bdev))) {
++		enum req_op op = bio_op(bio);
++
++		BUG_ON(op_is_zone_mgmt(op));
++		BUG_ON(op == REQ_OP_WRITE);
++		BUG_ON(op == REQ_OP_WRITE_ZEROES);
++		BUG_ON(op == REQ_OP_ZONE_APPEND);
++	}
++
+ 	*tio->len_ptr -= bio_sectors - n_sectors;
+ 	bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT;
+ 
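
The reworked assertions in dm_accept_partial_bio() keep the ordering rules
but scope them: only zoned block devices forbid splitting writes,
write-zeroes and zone appends, and the whole check sits behind the
zoned_enabled static branch so non-zoned configurations pay nothing for it.
A sketch of that two-level guard, with assert() standing in for BUG_ON():

#include <assert.h>
#include <stdbool.h>

enum op { OP_READ, OP_WRITE, OP_WRITE_ZEROES, OP_ZONE_APPEND };

static bool zoned_enabled;		/* stands in for the static key */

static bool bdev_is_zoned_dev(const void *bdev)
{
	(void)bdev;
	return false;			/* stand-in per-device check */
}

void accept_partial(const void *bdev, enum op op)
{
	if (zoned_enabled && bdev_is_zoned_dev(bdev)) {
		/* splitting these would reorder or deadlock zone writes */
		assert(op != OP_WRITE);
		assert(op != OP_WRITE_ZEROES);
		assert(op != OP_ZONE_APPEND);
	}
	/* ... shrink the bio to the accepted sector count ... */
}
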
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 7f524a26cebcaa..334b7140493004 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1987,12 +1987,12 @@ static void bitmap_dirty_bits(struct mddev *mddev, unsigned long s,
+ 
+ 		md_bitmap_set_memory_bits(bitmap, sec, 1);
+ 		md_bitmap_file_set_bit(bitmap, sec);
+-		if (sec < bitmap->mddev->recovery_cp)
++		if (sec < bitmap->mddev->resync_offset)
+ 			/* We are asserting that the array is dirty,
+-			 * so move the recovery_cp address back so
++			 * so move the resync_offset address back so
+ 			 * that it is obvious that it is dirty
+ 			 */
+-			bitmap->mddev->recovery_cp = sec;
++			bitmap->mddev->resync_offset = sec;
+ 	}
+ }
+ 
+@@ -2258,7 +2258,7 @@ static int bitmap_load(struct mddev *mddev)
+ 	    || bitmap->events_cleared == mddev->events)
+ 		/* no need to keep dirty bits to optimise a
+ 		 * re-add of a missing device */
+-		start = mddev->recovery_cp;
++		start = mddev->resync_offset;
+ 
+ 	mutex_lock(&mddev->bitmap_info.mutex);
+ 	err = md_bitmap_init_from_disk(bitmap, start);
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 94221d964d4fd6..5497eaee96e7d3 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -337,11 +337,11 @@ static void recover_bitmaps(struct md_thread *thread)
+ 			md_wakeup_thread(mddev->sync_thread);
+ 
+ 		if (hi > 0) {
+-			if (lo < mddev->recovery_cp)
+-				mddev->recovery_cp = lo;
++			if (lo < mddev->resync_offset)
++				mddev->resync_offset = lo;
+ 			/* wake up thread to continue resync in case resync
+ 			 * is not finished */
+-			if (mddev->recovery_cp != MaxSector) {
++			if (mddev->resync_offset != MaxSector) {
+ 				/*
+ 				 * clear the REMOTE flag since we will launch
+ 				 * resync thread in current node.
+@@ -863,9 +863,9 @@ static int gather_all_resync_info(struct mddev *mddev, int total_slots)
+ 			lockres_free(bm_lockres);
+ 			continue;
+ 		}
+-		if ((hi > 0) && (lo < mddev->recovery_cp)) {
++		if ((hi > 0) && (lo < mddev->resync_offset)) {
+ 			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+-			mddev->recovery_cp = lo;
++			mddev->resync_offset = lo;
+ 			md_check_recovery(mddev);
+ 		}
+ 
+@@ -1027,7 +1027,7 @@ static int leave(struct mddev *mddev)
+ 	 * Also, we should send BITMAP_NEEDS_SYNC message in
+ 	 * case reshaping is interrupted.
+ 	 */
+-	if ((cinfo->slot_number > 0 && mddev->recovery_cp != MaxSector) ||
++	if ((cinfo->slot_number > 0 && mddev->resync_offset != MaxSector) ||
+ 	    (mddev->reshape_position != MaxSector &&
+ 	     test_bit(MD_CLOSING, &mddev->flags)))
+ 		resync_bitmap(mddev);
+@@ -1605,8 +1605,8 @@ static int gather_bitmaps(struct md_rdev *rdev)
+ 			pr_warn("md-cluster: Could not gather bitmaps from slot %d", sn);
+ 			goto out;
+ 		}
+-		if ((hi > 0) && (lo < mddev->recovery_cp))
+-			mddev->recovery_cp = lo;
++		if ((hi > 0) && (lo < mddev->resync_offset))
++			mddev->resync_offset = lo;
+ 	}
+ out:
+ 	return err;
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 10670c62b09e50..8746b22060a7c2 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1402,13 +1402,13 @@ static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, stru
+ 			mddev->layout = -1;
+ 
+ 		if (sb->state & (1<<MD_SB_CLEAN))
+-			mddev->recovery_cp = MaxSector;
++			mddev->resync_offset = MaxSector;
+ 		else {
+ 			if (sb->events_hi == sb->cp_events_hi &&
+ 				sb->events_lo == sb->cp_events_lo) {
+-				mddev->recovery_cp = sb->recovery_cp;
++				mddev->resync_offset = sb->resync_offset;
+ 			} else
+-				mddev->recovery_cp = 0;
++				mddev->resync_offset = 0;
+ 		}
+ 
+ 		memcpy(mddev->uuid+0, &sb->set_uuid0, 4);
+@@ -1534,13 +1534,13 @@ static void super_90_sync(struct mddev *mddev, struct md_rdev *rdev)
+ 	mddev->minor_version = sb->minor_version;
+ 	if (mddev->in_sync)
+ 	{
+-		sb->recovery_cp = mddev->recovery_cp;
++		sb->resync_offset = mddev->resync_offset;
+ 		sb->cp_events_hi = (mddev->events>>32);
+ 		sb->cp_events_lo = (u32)mddev->events;
+-		if (mddev->recovery_cp == MaxSector)
++		if (mddev->resync_offset == MaxSector)
+ 			sb->state = (1<< MD_SB_CLEAN);
+ 	} else
+-		sb->recovery_cp = 0;
++		sb->resync_offset = 0;
+ 
+ 	sb->layout = mddev->layout;
+ 	sb->chunk_size = mddev->chunk_sectors << 9;
+@@ -1888,7 +1888,7 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *freshest, struc
+ 		mddev->bitmap_info.default_space = (4096-1024) >> 9;
+ 		mddev->reshape_backwards = 0;
+ 
+-		mddev->recovery_cp = le64_to_cpu(sb->resync_offset);
++		mddev->resync_offset = le64_to_cpu(sb->resync_offset);
+ 		memcpy(mddev->uuid, sb->set_uuid, 16);
+ 
+ 		mddev->max_disks =  (4096-256)/2;
+@@ -2074,7 +2074,7 @@ static void super_1_sync(struct mddev *mddev, struct md_rdev *rdev)
+ 	sb->utime = cpu_to_le64((__u64)mddev->utime);
+ 	sb->events = cpu_to_le64(mddev->events);
+ 	if (mddev->in_sync)
+-		sb->resync_offset = cpu_to_le64(mddev->recovery_cp);
++		sb->resync_offset = cpu_to_le64(mddev->resync_offset);
+ 	else if (test_bit(MD_JOURNAL_CLEAN, &mddev->flags))
+ 		sb->resync_offset = cpu_to_le64(MaxSector);
+ 	else
+@@ -2754,7 +2754,7 @@ void md_update_sb(struct mddev *mddev, int force_change)
+ 	/* If this is just a dirty<->clean transition, and the array is clean
+ 	 * and 'events' is odd, we can roll back to the previous clean state */
+ 	if (nospares
+-	    && (mddev->in_sync && mddev->recovery_cp == MaxSector)
++	    && (mddev->in_sync && mddev->resync_offset == MaxSector)
+ 	    && mddev->can_decrease_events
+ 	    && mddev->events != 1) {
+ 		mddev->events--;
+@@ -4290,9 +4290,9 @@ __ATTR(chunk_size, S_IRUGO|S_IWUSR, chunk_size_show, chunk_size_store);
+ static ssize_t
+ resync_start_show(struct mddev *mddev, char *page)
+ {
+-	if (mddev->recovery_cp == MaxSector)
++	if (mddev->resync_offset == MaxSector)
+ 		return sprintf(page, "none\n");
+-	return sprintf(page, "%llu\n", (unsigned long long)mddev->recovery_cp);
++	return sprintf(page, "%llu\n", (unsigned long long)mddev->resync_offset);
+ }
+ 
+ static ssize_t
+@@ -4318,7 +4318,7 @@ resync_start_store(struct mddev *mddev, const char *buf, size_t len)
+ 		err = -EBUSY;
+ 
+ 	if (!err) {
+-		mddev->recovery_cp = n;
++		mddev->resync_offset = n;
+ 		if (mddev->pers)
+ 			set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
+ 	}
+@@ -4822,9 +4822,42 @@ metadata_store(struct mddev *mddev, const char *buf, size_t len)
+ static struct md_sysfs_entry md_metadata =
+ __ATTR_PREALLOC(metadata_version, S_IRUGO|S_IWUSR, metadata_show, metadata_store);
+ 
++static bool rdev_needs_recovery(struct md_rdev *rdev, sector_t sectors)
++{
++	return rdev->raid_disk >= 0 &&
++	       !test_bit(Journal, &rdev->flags) &&
++	       !test_bit(Faulty, &rdev->flags) &&
++	       !test_bit(In_sync, &rdev->flags) &&
++	       rdev->recovery_offset < sectors;
++}
++
++static enum sync_action md_get_active_sync_action(struct mddev *mddev)
++{
++	struct md_rdev *rdev;
++	bool is_recover = false;
++
++	if (mddev->resync_offset < MaxSector)
++		return ACTION_RESYNC;
++
++	if (mddev->reshape_position != MaxSector)
++		return ACTION_RESHAPE;
++
++	rcu_read_lock();
++	rdev_for_each_rcu(rdev, mddev) {
++		if (rdev_needs_recovery(rdev, MaxSector)) {
++			is_recover = true;
++			break;
++		}
++	}
++	rcu_read_unlock();
++
++	return is_recover ? ACTION_RECOVER : ACTION_IDLE;
++}
++
+ enum sync_action md_sync_action(struct mddev *mddev)
+ {
+ 	unsigned long recovery = mddev->recovery;
++	enum sync_action active_action;
+ 
+ 	/*
+ 	 * frozen has the highest priority, means running sync_thread will be
+@@ -4848,8 +4881,17 @@ enum sync_action md_sync_action(struct mddev *mddev)
+ 	    !test_bit(MD_RECOVERY_NEEDED, &recovery))
+ 		return ACTION_IDLE;
+ 
+-	if (test_bit(MD_RECOVERY_RESHAPE, &recovery) ||
+-	    mddev->reshape_position != MaxSector)
++	/*
++	 * Check if any sync operation (resync/recover/reshape) is
++	 * currently active. This ensures that only one sync operation
++	 * can run at a time. Returns the type of active operation, or
++	 * ACTION_IDLE if none are active.
++	 */
++	active_action = md_get_active_sync_action(mddev);
++	if (active_action != ACTION_IDLE)
++		return active_action;
++
++	if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
+ 		return ACTION_RESHAPE;
+ 
+ 	if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+@@ -6405,7 +6447,7 @@ static void md_clean(struct mddev *mddev)
+ 	mddev->external_size = 0;
+ 	mddev->dev_sectors = 0;
+ 	mddev->raid_disks = 0;
+-	mddev->recovery_cp = 0;
++	mddev->resync_offset = 0;
+ 	mddev->resync_min = 0;
+ 	mddev->resync_max = MaxSector;
+ 	mddev->reshape_position = MaxSector;
+@@ -7359,9 +7401,9 @@ int md_set_array_info(struct mddev *mddev, struct mdu_array_info_s *info)
+ 	 * opened
+ 	 */
+ 	if (info->state & (1<<MD_SB_CLEAN))
+-		mddev->recovery_cp = MaxSector;
++		mddev->resync_offset = MaxSector;
+ 	else
+-		mddev->recovery_cp = 0;
++		mddev->resync_offset = 0;
+ 	mddev->persistent    = ! info->not_persistent;
+ 	mddev->external	     = 0;
+ 
+@@ -8300,7 +8342,7 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
+ 				seq_printf(seq, "\tresync=REMOTE");
+ 			return 1;
+ 		}
+-		if (mddev->recovery_cp < MaxSector) {
++		if (mddev->resync_offset < MaxSector) {
+ 			seq_printf(seq, "\tresync=PENDING");
+ 			return 1;
+ 		}
+@@ -8943,7 +8985,7 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
+ 		return mddev->resync_min;
+ 	case ACTION_RESYNC:
+ 		if (!mddev->bitmap)
+-			return mddev->recovery_cp;
++			return mddev->resync_offset;
+ 		return 0;
+ 	case ACTION_RESHAPE:
+ 		/*
+@@ -8959,11 +9001,7 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
+ 		start = MaxSector;
+ 		rcu_read_lock();
+ 		rdev_for_each_rcu(rdev, mddev)
+-			if (rdev->raid_disk >= 0 &&
+-			    !test_bit(Journal, &rdev->flags) &&
+-			    !test_bit(Faulty, &rdev->flags) &&
+-			    !test_bit(In_sync, &rdev->flags) &&
+-			    rdev->recovery_offset < start)
++			if (rdev_needs_recovery(rdev, start))
+ 				start = rdev->recovery_offset;
+ 		rcu_read_unlock();
+ 
+@@ -9181,8 +9219,8 @@ void md_do_sync(struct md_thread *thread)
+ 				   atomic_read(&mddev->recovery_active) == 0);
+ 			mddev->curr_resync_completed = j;
+ 			if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
+-			    j > mddev->recovery_cp)
+-				mddev->recovery_cp = j;
++			    j > mddev->resync_offset)
++				mddev->resync_offset = j;
+ 			update_time = jiffies;
+ 			set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
+ 			sysfs_notify_dirent_safe(mddev->sysfs_completed);
+@@ -9302,19 +9340,19 @@ void md_do_sync(struct md_thread *thread)
+ 	    mddev->curr_resync > MD_RESYNC_ACTIVE) {
+ 		if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
+ 			if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
+-				if (mddev->curr_resync >= mddev->recovery_cp) {
++				if (mddev->curr_resync >= mddev->resync_offset) {
+ 					pr_debug("md: checkpointing %s of %s.\n",
+ 						 desc, mdname(mddev));
+ 					if (test_bit(MD_RECOVERY_ERROR,
+ 						&mddev->recovery))
+-						mddev->recovery_cp =
++						mddev->resync_offset =
+ 							mddev->curr_resync_completed;
+ 					else
+-						mddev->recovery_cp =
++						mddev->resync_offset =
+ 							mddev->curr_resync;
+ 				}
+ 			} else
+-				mddev->recovery_cp = MaxSector;
++				mddev->resync_offset = MaxSector;
+ 		} else {
+ 			if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery))
+ 				mddev->curr_resync = MaxSector;
+@@ -9322,12 +9360,8 @@ void md_do_sync(struct md_thread *thread)
+ 			    test_bit(MD_RECOVERY_RECOVER, &mddev->recovery)) {
+ 				rcu_read_lock();
+ 				rdev_for_each_rcu(rdev, mddev)
+-					if (rdev->raid_disk >= 0 &&
+-					    mddev->delta_disks >= 0 &&
+-					    !test_bit(Journal, &rdev->flags) &&
+-					    !test_bit(Faulty, &rdev->flags) &&
+-					    !test_bit(In_sync, &rdev->flags) &&
+-					    rdev->recovery_offset < mddev->curr_resync)
++					if (mddev->delta_disks >= 0 &&
++					    rdev_needs_recovery(rdev, mddev->curr_resync))
+ 						rdev->recovery_offset = mddev->curr_resync;
+ 				rcu_read_unlock();
+ 			}
+@@ -9536,7 +9570,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
+ 	}
+ 
+ 	/* Check if resync is in progress. */
+-	if (mddev->recovery_cp < MaxSector) {
++	if (mddev->resync_offset < MaxSector) {
+ 		remove_spares(mddev, NULL);
+ 		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
+ 		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
+@@ -9717,7 +9751,7 @@ void md_check_recovery(struct mddev *mddev)
+ 		test_bit(MD_RECOVERY_DONE, &mddev->recovery) ||
+ 		(mddev->external == 0 && mddev->safemode == 1) ||
+ 		(mddev->safemode == 2
+-		 && !mddev->in_sync && mddev->recovery_cp == MaxSector)
++		 && !mddev->in_sync && mddev->resync_offset == MaxSector)
+ 		))
+ 		return;
+ 
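
Beyond the mechanical recovery_cp -> resync_offset rename, md.c gains two
structural changes: rdev_needs_recovery() hoists a four-condition test that
md_sync_position() and md_do_sync() previously open-coded, and the new
md_get_active_sync_action() uses it under rcu_read_lock() so that
md_sync_action() reports an operation that is already in flight instead of
starting a competing one. The extracted predicate, mirrored standalone with
illustrative types:

#include <stdbool.h>

struct rdev {
	int raid_disk;
	bool journal, faulty, in_sync;
	unsigned long long recovery_offset;
};

/* A member disk needs recovery when it holds a slot, is neither a journal
 * device nor failed, is not in sync, and has not yet been recovered up to
 * @sectors. */
bool rdev_needs_recovery(const struct rdev *r, unsigned long long sectors)
{
	return r->raid_disk >= 0 &&
	       !r->journal &&
	       !r->faulty &&
	       !r->in_sync &&
	       r->recovery_offset < sectors;
}
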
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index d45a9e6ead80c5..43ae2d03faa1e6 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -523,7 +523,7 @@ struct mddev {
+ 	unsigned long			normal_io_events; /* IO event timestamp */
+ 	atomic_t			recovery_active; /* blocks scheduled, but not written */
+ 	wait_queue_head_t		recovery_wait;
+-	sector_t			recovery_cp;
++	sector_t			resync_offset;
+ 	sector_t			resync_min;	/* user requested sync
+ 							 * starts here */
+ 	sector_t			resync_max;	/* resync should pause
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index d8f639f4ae1235..613f4fab83b22c 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -673,7 +673,7 @@ static void *raid0_takeover_raid45(struct mddev *mddev)
+ 	mddev->raid_disks--;
+ 	mddev->delta_disks = -1;
+ 	/* make sure it will be not marked as dirty */
+-	mddev->recovery_cp = MaxSector;
++	mddev->resync_offset = MaxSector;
+ 	mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS);
+ 
+ 	create_strip_zones(mddev, &priv_conf);
+@@ -716,7 +716,7 @@ static void *raid0_takeover_raid10(struct mddev *mddev)
+ 	mddev->raid_disks += mddev->delta_disks;
+ 	mddev->degraded = 0;
+ 	/* make sure it will be not marked as dirty */
+-	mddev->recovery_cp = MaxSector;
++	mddev->resync_offset = MaxSector;
+ 	mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS);
+ 
+ 	create_strip_zones(mddev, &priv_conf);
+@@ -759,7 +759,7 @@ static void *raid0_takeover_raid1(struct mddev *mddev)
+ 	mddev->delta_disks = 1 - mddev->raid_disks;
+ 	mddev->raid_disks = 1;
+ 	/* make sure it will be not marked as dirty */
+-	mddev->recovery_cp = MaxSector;
++	mddev->resync_offset = MaxSector;
+ 	mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS);
+ 
+ 	create_strip_zones(mddev, &priv_conf);
+diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
+index b8b3a90697012c..52881e6032daee 100644
+--- a/drivers/md/raid1-10.c
++++ b/drivers/md/raid1-10.c
+@@ -283,7 +283,7 @@ static inline int raid1_check_read_range(struct md_rdev *rdev,
+ static inline bool raid1_should_read_first(struct mddev *mddev,
+ 					   sector_t this_sector, int len)
+ {
+-	if ((mddev->recovery_cp < this_sector + len))
++	if ((mddev->resync_offset < this_sector + len))
+ 		return true;
+ 
+ 	if (mddev_is_clustered(mddev) &&
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 64b8176907a9b8..6cee738a645ff2 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -2822,7 +2822,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 	}
+ 
+ 	if (mddev->bitmap == NULL &&
+-	    mddev->recovery_cp == MaxSector &&
++	    mddev->resync_offset == MaxSector &&
+ 	    !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
+ 	    conf->fullsync == 0) {
+ 		*skipped = 1;
+@@ -3282,9 +3282,9 @@ static int raid1_run(struct mddev *mddev)
+ 	}
+ 
+ 	if (conf->raid_disks - mddev->degraded == 1)
+-		mddev->recovery_cp = MaxSector;
++		mddev->resync_offset = MaxSector;
+ 
+-	if (mddev->recovery_cp != MaxSector)
++	if (mddev->resync_offset != MaxSector)
+ 		pr_info("md/raid1:%s: not clean -- starting background reconstruction\n",
+ 			mdname(mddev));
+ 	pr_info("md/raid1:%s: active with %d out of %d mirrors\n",
+@@ -3345,8 +3345,8 @@ static int raid1_resize(struct mddev *mddev, sector_t sectors)
+ 
+ 	md_set_array_sectors(mddev, newsize);
+ 	if (sectors > mddev->dev_sectors &&
+-	    mddev->recovery_cp > mddev->dev_sectors) {
+-		mddev->recovery_cp = mddev->dev_sectors;
++	    mddev->resync_offset > mddev->dev_sectors) {
++		mddev->resync_offset = mddev->dev_sectors;
+ 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	}
+ 	mddev->dev_sectors = sectors;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 95dc354a86a081..b60c30bfb6c794 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2117,7 +2117,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	int last = conf->geo.raid_disks - 1;
+ 	struct raid10_info *p;
+ 
+-	if (mddev->recovery_cp < MaxSector)
++	if (mddev->resync_offset < MaxSector)
+ 		/* only hot-add to in-sync arrays, as recovery is
+ 		 * very different from resync
+ 		 */
+@@ -3185,7 +3185,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 	 * of a clean array, like RAID1 does.
+ 	 */
+ 	if (mddev->bitmap == NULL &&
+-	    mddev->recovery_cp == MaxSector &&
++	    mddev->resync_offset == MaxSector &&
+ 	    mddev->reshape_position == MaxSector &&
+ 	    !test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
+ 	    !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
+@@ -4145,7 +4145,7 @@ static int raid10_run(struct mddev *mddev)
+ 		disk->recovery_disabled = mddev->recovery_disabled - 1;
+ 	}
+ 
+-	if (mddev->recovery_cp != MaxSector)
++	if (mddev->resync_offset != MaxSector)
+ 		pr_notice("md/raid10:%s: not clean -- starting background reconstruction\n",
+ 			  mdname(mddev));
+ 	pr_info("md/raid10:%s: active with %d out of %d devices\n",
+@@ -4245,8 +4245,8 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
+ 
+ 	md_set_array_sectors(mddev, size);
+ 	if (sectors > mddev->dev_sectors &&
+-	    mddev->recovery_cp > oldsize) {
+-		mddev->recovery_cp = oldsize;
++	    mddev->resync_offset > oldsize) {
++		mddev->resync_offset = oldsize;
+ 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	}
+ 	calc_sectors(conf, sectors);
+@@ -4275,7 +4275,7 @@ static void *raid10_takeover_raid0(struct mddev *mddev, sector_t size, int devs)
+ 	mddev->delta_disks = mddev->raid_disks;
+ 	mddev->raid_disks *= 2;
+ 	/* make sure it will be not marked as dirty */
+-	mddev->recovery_cp = MaxSector;
++	mddev->resync_offset = MaxSector;
+ 	mddev->dev_sectors = size;
+ 
+ 	conf = setup_conf(mddev);
+@@ -5087,8 +5087,8 @@ static void raid10_finish_reshape(struct mddev *mddev)
+ 		return;
+ 
+ 	if (mddev->delta_disks > 0) {
+-		if (mddev->recovery_cp > mddev->resync_max_sectors) {
+-			mddev->recovery_cp = mddev->resync_max_sectors;
++		if (mddev->resync_offset > mddev->resync_max_sectors) {
++			mddev->resync_offset = mddev->resync_max_sectors;
+ 			set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 		}
+ 		mddev->resync_max_sectors = mddev->array_sectors;
+diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c
+index c0fb335311aa6c..56b234683ee6be 100644
+--- a/drivers/md/raid5-ppl.c
++++ b/drivers/md/raid5-ppl.c
+@@ -1163,7 +1163,7 @@ static int ppl_load_distributed(struct ppl_log *log)
+ 		    le64_to_cpu(pplhdr->generation));
+ 
+ 	/* attempt to recover from log if we are starting a dirty array */
+-	if (pplhdr && !mddev->pers && mddev->recovery_cp != MaxSector)
++	if (pplhdr && !mddev->pers && mddev->resync_offset != MaxSector)
+ 		ret = ppl_recover(log, pplhdr, pplhdr_offset);
+ 
+ 	/* write empty header if we are starting the array */
+@@ -1422,14 +1422,14 @@ int ppl_init_log(struct r5conf *conf)
+ 
+ 	if (ret) {
+ 		goto err;
+-	} else if (!mddev->pers && mddev->recovery_cp == 0 &&
++	} else if (!mddev->pers && mddev->resync_offset == 0 &&
+ 		   ppl_conf->recovered_entries > 0 &&
+ 		   ppl_conf->mismatch_count == 0) {
+ 		/*
+ 		 * If we are starting a dirty array and the recovery succeeds
+ 		 * without any issues, set the array as clean.
+ 		 */
+-		mddev->recovery_cp = MaxSector;
++		mddev->resync_offset = MaxSector;
+ 		set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
+ 	} else if (mddev->pers && ppl_conf->mismatch_count > 0) {
+ 		/* no mismatch allowed when enabling PPL for a running array */
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index ca5b0e8ba707f0..38a193c0fdae7e 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -3740,7 +3740,7 @@ static int want_replace(struct stripe_head *sh, int disk_idx)
+ 	    && !test_bit(Faulty, &rdev->flags)
+ 	    && !test_bit(In_sync, &rdev->flags)
+ 	    && (rdev->recovery_offset <= sh->sector
+-		|| rdev->mddev->recovery_cp <= sh->sector))
++		|| rdev->mddev->resync_offset <= sh->sector))
+ 		rv = 1;
+ 	return rv;
+ }
+@@ -3832,7 +3832,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s,
+ 	 * is missing/faulty, then we need to read everything we can.
+ 	 */
+ 	if (!force_rcw &&
+-	    sh->sector < sh->raid_conf->mddev->recovery_cp)
++	    sh->sector < sh->raid_conf->mddev->resync_offset)
+ 		/* reconstruct-write isn't being forced */
+ 		return 0;
+ 	for (i = 0; i < s->failed && i < 2; i++) {
+@@ -4097,7 +4097,7 @@ static int handle_stripe_dirtying(struct r5conf *conf,
+ 				  int disks)
+ {
+ 	int rmw = 0, rcw = 0, i;
+-	sector_t recovery_cp = conf->mddev->recovery_cp;
++	sector_t resync_offset = conf->mddev->resync_offset;
+ 
+ 	/* Check whether resync is now happening or should start.
+ 	 * If yes, then the array is dirty (after unclean shutdown or
+@@ -4107,14 +4107,14 @@ static int handle_stripe_dirtying(struct r5conf *conf,
+ 	 * generate correct data from the parity.
+ 	 */
+ 	if (conf->rmw_level == PARITY_DISABLE_RMW ||
+-	    (recovery_cp < MaxSector && sh->sector >= recovery_cp &&
++	    (resync_offset < MaxSector && sh->sector >= resync_offset &&
+ 	     s->failed == 0)) {
+ 		/* Calculate the real rcw later - for now make it
+ 		 * look like rcw is cheaper
+ 		 */
+ 		rcw = 1; rmw = 2;
+-		pr_debug("force RCW rmw_level=%u, recovery_cp=%llu sh->sector=%llu\n",
+-			 conf->rmw_level, (unsigned long long)recovery_cp,
++		pr_debug("force RCW rmw_level=%u, resync_offset=%llu sh->sector=%llu\n",
++			 conf->rmw_level, (unsigned long long)resync_offset,
+ 			 (unsigned long long)sh->sector);
+ 	} else for (i = disks; i--; ) {
+ 		/* would I have to read this buffer for read_modify_write */
+@@ -4770,14 +4770,14 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
+ 	if (test_bit(STRIPE_SYNCING, &sh->state)) {
+ 		/* If there is a failed device being replaced,
+ 		 *     we must be recovering.
+-		 * else if we are after recovery_cp, we must be syncing
++		 * else if we are after resync_offset, we must be syncing
+ 		 * else if MD_RECOVERY_REQUESTED is set, we also are syncing.
+ 		 * else we can only be replacing
+ 		 * sync and recovery both need to read all devices, and so
+ 		 * use the same flag.
+ 		 */
+ 		if (do_recovery ||
+-		    sh->sector >= conf->mddev->recovery_cp ||
++		    sh->sector >= conf->mddev->resync_offset ||
+ 		    test_bit(MD_RECOVERY_REQUESTED, &(conf->mddev->recovery)))
+ 			s->syncing = 1;
+ 		else
+@@ -7780,7 +7780,7 @@ static int raid5_run(struct mddev *mddev)
+ 	int first = 1;
+ 	int ret = -EIO;
+ 
+-	if (mddev->recovery_cp != MaxSector)
++	if (mddev->resync_offset != MaxSector)
+ 		pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
+ 			  mdname(mddev));
+ 
+@@ -7921,7 +7921,7 @@ static int raid5_run(struct mddev *mddev)
+ 				mdname(mddev));
+ 			mddev->ro = 1;
+ 			set_disk_ro(mddev->gendisk, 1);
+-		} else if (mddev->recovery_cp == MaxSector)
++		} else if (mddev->resync_offset == MaxSector)
+ 			set_bit(MD_JOURNAL_CLEAN, &mddev->flags);
+ 	}
+ 
+@@ -7988,7 +7988,7 @@ static int raid5_run(struct mddev *mddev)
+ 	mddev->resync_max_sectors = mddev->dev_sectors;
+ 
+ 	if (mddev->degraded > dirty_parity_disks &&
+-	    mddev->recovery_cp != MaxSector) {
++	    mddev->resync_offset != MaxSector) {
+ 		if (test_bit(MD_HAS_PPL, &mddev->flags))
+ 			pr_crit("md/raid:%s: starting dirty degraded array with PPL.\n",
+ 				mdname(mddev));
+@@ -8328,8 +8328,8 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
+ 
+ 	md_set_array_sectors(mddev, newsize);
+ 	if (sectors > mddev->dev_sectors &&
+-	    mddev->recovery_cp > mddev->dev_sectors) {
+-		mddev->recovery_cp = mddev->dev_sectors;
++	    mddev->resync_offset > mddev->dev_sectors) {
++		mddev->resync_offset = mddev->dev_sectors;
+ 		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ 	}
+ 	mddev->dev_sectors = sectors;
+@@ -8423,7 +8423,7 @@ static int raid5_start_reshape(struct mddev *mddev)
+ 		return -EINVAL;
+ 
+ 	/* raid5 can't handle concurrent reshape and recovery */
+-	if (mddev->recovery_cp < MaxSector)
++	if (mddev->resync_offset < MaxSector)
+ 		return -EBUSY;
+ 	for (i = 0; i < conf->raid_disks; i++)
+ 		if (conf->disks[i].replacement)
+@@ -8648,7 +8648,7 @@ static void *raid45_takeover_raid0(struct mddev *mddev, int level)
+ 	mddev->raid_disks += 1;
+ 	mddev->delta_disks = 1;
+ 	/* make sure it will be not marked as dirty */
+-	mddev->recovery_cp = MaxSector;
++	mddev->resync_offset = MaxSector;
+ 
+ 	return setup_conf(mddev);
+ }
+diff --git a/drivers/media/cec/usb/rainshadow/rainshadow-cec.c b/drivers/media/cec/usb/rainshadow/rainshadow-cec.c
+index ee870ea1a88601..6f8d6797c61459 100644
+--- a/drivers/media/cec/usb/rainshadow/rainshadow-cec.c
++++ b/drivers/media/cec/usb/rainshadow/rainshadow-cec.c
+@@ -171,11 +171,12 @@ static irqreturn_t rain_interrupt(struct serio *serio, unsigned char data,
+ {
+ 	struct rain *rain = serio_get_drvdata(serio);
+ 
++	spin_lock(&rain->buf_lock);
+ 	if (rain->buf_len == DATA_SIZE) {
++		spin_unlock(&rain->buf_lock);
+ 		dev_warn_once(rain->dev, "buffer overflow\n");
+ 		return IRQ_HANDLED;
+ 	}
+-	spin_lock(&rain->buf_lock);
+ 	rain->buf_len++;
+ 	rain->buf[rain->buf_wr_idx] = data;
+ 	rain->buf_wr_idx = (rain->buf_wr_idx + 1) & 0xff;
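
The rainshadow-cec fix widens the critical section: previously two
invocations racing through rain_interrupt() could both pass the unlocked
buffer-full check and push buf_len past DATA_SIZE. Checking capacity under
the same lock as the update closes that window. The corrected shape, with a
pthread mutex standing in for the spinlock:

#include <pthread.h>

#define DATA_SIZE 256

struct ring {
	pthread_mutex_t lock;
	unsigned char buf[DATA_SIZE];
	unsigned int len, wr;
};

int ring_push(struct ring *r, unsigned char data)
{
	pthread_mutex_lock(&r->lock);
	if (r->len == DATA_SIZE) {	/* checked under the lock */
		pthread_mutex_unlock(&r->lock);
		return -1;		/* full: drop, as the driver does */
	}
	r->len++;
	r->buf[r->wr] = data;
	r->wr = (r->wr + 1) % DATA_SIZE;
	pthread_mutex_unlock(&r->lock);
	return 0;
}
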
+diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c
+index d3cc65b67855c8..9aec11ee369bf1 100644
+--- a/drivers/media/i2c/hi556.c
++++ b/drivers/media/i2c/hi556.c
+@@ -756,21 +756,23 @@ static int hi556_test_pattern(struct hi556 *hi556, u32 pattern)
+ 	int ret;
+ 	u32 val;
+ 
+-	if (pattern) {
+-		ret = hi556_read_reg(hi556, HI556_REG_ISP,
+-				     HI556_REG_VALUE_08BIT, &val);
+-		if (ret)
+-			return ret;
++	ret = hi556_read_reg(hi556, HI556_REG_ISP,
++			     HI556_REG_VALUE_08BIT, &val);
++	if (ret)
++		return ret;
+ 
+-		ret = hi556_write_reg(hi556, HI556_REG_ISP,
+-				      HI556_REG_VALUE_08BIT,
+-				      val | HI556_REG_ISP_TPG_EN);
+-		if (ret)
+-			return ret;
+-	}
++	val = pattern ? (val | HI556_REG_ISP_TPG_EN) :
++		(val & ~HI556_REG_ISP_TPG_EN);
++
++	ret = hi556_write_reg(hi556, HI556_REG_ISP,
++			      HI556_REG_VALUE_08BIT, val);
++	if (ret)
++		return ret;
++
++	val = pattern ? BIT(pattern - 1) : 0;
+ 
+ 	return hi556_write_reg(hi556, HI556_REG_TEST_PATTERN,
+-			       HI556_REG_VALUE_08BIT, pattern);
++			       HI556_REG_VALUE_08BIT, val);
+ }
+ 
+ static int hi556_set_ctrl(struct v4l2_ctrl *ctrl)
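
The old hi556_test_pattern() only ever set HI556_REG_ISP_TPG_EN, so
selecting "disabled" left the generator running, and it programmed the
selector register with the raw menu index where the reworked code writes a
one-hot BIT(pattern - 1). The read-modify-write in both directions, sketched
with stand-in registers:

#include <stdint.h>

#define TPG_EN (1u << 0)		/* stand-in enable bit */

static uint32_t isp_reg, tp_reg;	/* pretend hardware registers */

static int isp_read(uint32_t *val) { *val = isp_reg; return 0; }
static int isp_write(uint32_t val) { isp_reg = val; return 0; }
static int tp_write(uint32_t val)  { tp_reg = val; return 0; }

int set_test_pattern(uint32_t pattern)
{
	uint32_t val;
	int ret = isp_read(&val);

	if (ret)
		return ret;

	/* set the enable bit when a pattern is chosen, clear it when not */
	val = pattern ? (val | TPG_EN) : (val & ~TPG_EN);
	ret = isp_write(val);
	if (ret)
		return ret;

	/* the selector wants a one-hot bit; the menu index is 1-based */
	return tp_write(pattern ? (1u << (pattern - 1)) : 0);
}
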
+diff --git a/drivers/media/i2c/mt9m114.c b/drivers/media/i2c/mt9m114.c
+index 5f0b0ad8f885f1..c00f9412d08eba 100644
+--- a/drivers/media/i2c/mt9m114.c
++++ b/drivers/media/i2c/mt9m114.c
+@@ -1599,13 +1599,9 @@ static int mt9m114_ifp_get_frame_interval(struct v4l2_subdev *sd,
+ 	if (interval->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+ 		return -EINVAL;
+ 
+-	mutex_lock(sensor->ifp.hdl.lock);
+-
+ 	ival->numerator = 1;
+ 	ival->denominator = sensor->ifp.frame_rate;
+ 
+-	mutex_unlock(sensor->ifp.hdl.lock);
+-
+ 	return 0;
+ }
+ 
+@@ -1624,8 +1620,6 @@ static int mt9m114_ifp_set_frame_interval(struct v4l2_subdev *sd,
+ 	if (interval->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+ 		return -EINVAL;
+ 
+-	mutex_lock(sensor->ifp.hdl.lock);
+-
+ 	if (ival->numerator != 0 && ival->denominator != 0)
+ 		sensor->ifp.frame_rate = min_t(unsigned int,
+ 					       ival->denominator / ival->numerator,
+@@ -1639,8 +1633,6 @@ static int mt9m114_ifp_set_frame_interval(struct v4l2_subdev *sd,
+ 	if (sensor->streaming)
+ 		ret = mt9m114_set_frame_rate(sensor);
+ 
+-	mutex_unlock(sensor->ifp.hdl.lock);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
+index 06b7896c3eaf14..586b31ba076b60 100644
+--- a/drivers/media/i2c/ov2659.c
++++ b/drivers/media/i2c/ov2659.c
+@@ -1469,14 +1469,15 @@ static int ov2659_probe(struct i2c_client *client)
+ 				     V4L2_CID_TEST_PATTERN,
+ 				     ARRAY_SIZE(ov2659_test_pattern_menu) - 1,
+ 				     0, 0, ov2659_test_pattern_menu);
+-	ov2659->sd.ctrl_handler = &ov2659->ctrls;
+ 
+ 	if (ov2659->ctrls.error) {
+ 		dev_err(&client->dev, "%s: control initialization error %d\n",
+ 			__func__, ov2659->ctrls.error);
++		v4l2_ctrl_handler_free(&ov2659->ctrls);
+ 		return  ov2659->ctrls.error;
+ 	}
+ 
++	ov2659->sd.ctrl_handler = &ov2659->ctrls;
+ 	sd = &ov2659->sd;
+ 	client->flags |= I2C_CLIENT_SCCB;
+ 
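
Two things change in the ov2659 probe path: the control handler is freed
when initialisation reported an error (the old path leaked it), and the
handler is only installed on the subdev after that check, so a failure never
leaves a dangling half-built handler attached. The ownership rule in
outline, with stand-in types:

struct handler { int error; };

static void handler_free(struct handler *h)
{
	h->error = 0;			/* stand-in for real teardown */
}

int install_ctrls(struct handler *h, struct handler **subdev_handler)
{
	if (h->error) {
		int err = h->error;

		handler_free(h);	/* release before bailing out */
		return err;
	}

	*subdev_handler = h;		/* attach only after success */
	return 0;
}
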
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c
+index da8581a37e2204..6030bd23b4b944 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c
+@@ -354,9 +354,9 @@ static int ipu6_isys_csi2_enable_streams(struct v4l2_subdev *sd,
+ 	remote_pad = media_pad_remote_pad_first(&sd->entity.pads[CSI2_PAD_SINK]);
+ 	remote_sd = media_entity_to_v4l2_subdev(remote_pad->entity);
+ 
+-	sink_streams = v4l2_subdev_state_xlate_streams(state, CSI2_PAD_SRC,
+-						       CSI2_PAD_SINK,
+-						       &streams_mask);
++	sink_streams =
++		v4l2_subdev_state_xlate_streams(state, pad, CSI2_PAD_SINK,
++						&streams_mask);
+ 
+ 	ret = ipu6_isys_csi2_calc_timing(csi2, &timing, CSI2_ACCINV);
+ 	if (ret)
+@@ -384,9 +384,9 @@ static int ipu6_isys_csi2_disable_streams(struct v4l2_subdev *sd,
+ 	struct media_pad *remote_pad;
+ 	u64 sink_streams;
+ 
+-	sink_streams = v4l2_subdev_state_xlate_streams(state, CSI2_PAD_SRC,
+-						       CSI2_PAD_SINK,
+-						       &streams_mask);
++	sink_streams =
++		v4l2_subdev_state_xlate_streams(state, pad, CSI2_PAD_SINK,
++						&streams_mask);
+ 
+ 	remote_pad = media_pad_remote_pad_first(&sd->entity.pads[CSI2_PAD_SINK]);
+ 	remote_sd = media_entity_to_v4l2_subdev(remote_pad->entity);
+diff --git a/drivers/media/pci/intel/ivsc/mei_ace.c b/drivers/media/pci/intel/ivsc/mei_ace.c
+index 3622271c71c883..50d18b627e152e 100644
+--- a/drivers/media/pci/intel/ivsc/mei_ace.c
++++ b/drivers/media/pci/intel/ivsc/mei_ace.c
+@@ -529,6 +529,8 @@ static void mei_ace_remove(struct mei_cl_device *cldev)
+ 
+ 	ace_set_camera_owner(ace, ACE_CAMERA_IVSC);
+ 
++	mei_cldev_disable(cldev);
++
+ 	mutex_destroy(&ace->lock);
+ }
+ 
+diff --git a/drivers/media/pci/intel/ivsc/mei_csi.c b/drivers/media/pci/intel/ivsc/mei_csi.c
+index 92d871a378ba24..955f687e5d59c2 100644
+--- a/drivers/media/pci/intel/ivsc/mei_csi.c
++++ b/drivers/media/pci/intel/ivsc/mei_csi.c
+@@ -760,6 +760,8 @@ static void mei_csi_remove(struct mei_cl_device *cldev)
+ 
+ 	pm_runtime_disable(&cldev->dev);
+ 
++	mei_cldev_disable(cldev);
++
+ 	mutex_destroy(&csi->lock);
+ }
+ 
+diff --git a/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c b/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
+index f732a76de93e3e..88c0ba495c3271 100644
+--- a/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
++++ b/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
+@@ -849,8 +849,7 @@ static int csiphy_init(struct csiphy_device *csiphy)
+ 		regs->offset = 0x1000;
+ 		break;
+ 	default:
+-		WARN(1, "unknown csiphy version\n");
+-		return -ENODEV;
++		break;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 06f42875702f02..1507c79913bd45 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -2486,8 +2486,8 @@ static const struct resources_icc icc_res_sm8550[] = {
+ static const struct camss_subdev_resources csiphy_res_x1e80100[] = {
+ 	/* CSIPHY0 */
+ 	{
+-		.regulators = { "vdd-csiphy-0p8-supply",
+-				"vdd-csiphy-1p2-supply" },
++		.regulators = { "vdd-csiphy-0p8",
++				"vdd-csiphy-1p2" },
+ 		.clock = { "csiphy0", "csiphy0_timer" },
+ 		.clock_rate = { { 300000000, 400000000, 480000000 },
+ 				{ 266666667, 400000000 } },
+@@ -2501,8 +2501,8 @@ static const struct camss_subdev_resources csiphy_res_x1e80100[] = {
+ 	},
+ 	/* CSIPHY1 */
+ 	{
+-		.regulators = { "vdd-csiphy-0p8-supply",
+-				"vdd-csiphy-1p2-supply" },
++		.regulators = { "vdd-csiphy-0p8",
++				"vdd-csiphy-1p2" },
+ 		.clock = { "csiphy1", "csiphy1_timer" },
+ 		.clock_rate = { { 300000000, 400000000, 480000000 },
+ 				{ 266666667, 400000000 } },
+@@ -2516,8 +2516,8 @@ static const struct camss_subdev_resources csiphy_res_x1e80100[] = {
+ 	},
+ 	/* CSIPHY2 */
+ 	{
+-		.regulators = { "vdd-csiphy-0p8-supply",
+-				"vdd-csiphy-1p2-supply" },
++		.regulators = { "vdd-csiphy-0p8",
++				"vdd-csiphy-1p2" },
+ 		.clock = { "csiphy2", "csiphy2_timer" },
+ 		.clock_rate = { { 300000000, 400000000, 480000000 },
+ 				{ 266666667, 400000000 } },
+@@ -2531,8 +2531,8 @@ static const struct camss_subdev_resources csiphy_res_x1e80100[] = {
+ 	},
+ 	/* CSIPHY4 */
+ 	{
+-		.regulators = { "vdd-csiphy-0p8-supply",
+-				"vdd-csiphy-1p2-supply" },
++		.regulators = { "vdd-csiphy-0p8",
++				"vdd-csiphy-1p2" },
+ 		.clock = { "csiphy4", "csiphy4_timer" },
+ 		.clock_rate = { { 300000000, 400000000, 480000000 },
+ 				{ 266666667, 400000000 } },
+@@ -3625,7 +3625,7 @@ static int camss_probe(struct platform_device *pdev)
+ 	ret = v4l2_device_register(camss->dev, &camss->v4l2_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to register V4L2 device: %d\n", ret);
+-		goto err_genpd_cleanup;
++		goto err_media_device_cleanup;
+ 	}
+ 
+ 	v4l2_async_nf_init(&camss->notifier, &camss->v4l2_dev);
+@@ -3680,6 +3680,8 @@ static int camss_probe(struct platform_device *pdev)
+ 	v4l2_device_unregister(&camss->v4l2_dev);
+ 	v4l2_async_nf_cleanup(&camss->notifier);
+ 	pm_runtime_disable(dev);
++err_media_device_cleanup:
++	media_device_cleanup(&camss->media_dev);
+ err_genpd_cleanup:
+ 	camss_genpd_cleanup(camss);
+ 
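
The camss regulator rename matters because the regulator core appends
"-supply" itself when resolving a consumer name to a devicetree property;
with the old tables it would have searched for a nonexistent
"vdd-csiphy-0p8-supply-supply" property. The tables therefore carry the bare
consumer names:

/* Consumer names as passed to the regulator API ... */
static const char *const csiphy_supplies[] = {
	"vdd-csiphy-0p8",
	"vdd-csiphy-1p2",
};
/* ... which the core matches against the DT properties
 * vdd-csiphy-0p8-supply and vdd-csiphy-1p2-supply. */
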
+diff --git a/drivers/media/platform/qcom/iris/iris_buffer.c b/drivers/media/platform/qcom/iris/iris_buffer.c
+index 7dd5730a867af7..018334512baed2 100644
+--- a/drivers/media/platform/qcom/iris/iris_buffer.c
++++ b/drivers/media/platform/qcom/iris/iris_buffer.c
+@@ -376,7 +376,7 @@ int iris_destroy_internal_buffer(struct iris_inst *inst, struct iris_buffer *buf
+ 	return 0;
+ }
+ 
+-int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane)
++static int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane, bool force)
+ {
+ 	const struct iris_platform_data *platform_data = inst->core->iris_platform_data;
+ 	struct iris_buffer *buf, *next;
+@@ -396,6 +396,14 @@ int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane)
+ 	for (i = 0; i < len; i++) {
+ 		buffers = &inst->buffers[internal_buf_type[i]];
+ 		list_for_each_entry_safe(buf, next, &buffers->list, list) {
++			/*
++			 * During stream on, skip destroying internal (DPB)
++			 * buffers that the firmware has not returned.
++			 * During close, destroy all buffers unconditionally.
++			 */
++			if (!force && buf->attr & BUF_ATTR_QUEUED)
++				continue;
++
+ 			ret = iris_destroy_internal_buffer(inst, buf);
+ 			if (ret)
+ 				return ret;
+@@ -405,6 +413,16 @@ int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane)
+ 	return 0;
+ }
+ 
++int iris_destroy_all_internal_buffers(struct iris_inst *inst, u32 plane)
++{
++	return iris_destroy_internal_buffers(inst, plane, true);
++}
++
++int iris_destroy_dequeued_internal_buffers(struct iris_inst *inst, u32 plane)
++{
++	return iris_destroy_internal_buffers(inst, plane, false);
++}
++
+ static int iris_release_internal_buffers(struct iris_inst *inst,
+ 					 enum iris_buffer_type buffer_type)
+ {
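
The iris refactor above is the classic force-flag split: one static worker
takes a bool, and two narrow public entry points give each call site a
self-documenting name (stream-on skips buffers the firmware still holds;
close tears down everything). The pattern in isolation, with illustrative
types:

#include <stdbool.h>

#define ATTR_QUEUED (1u << 0)

struct buf { unsigned int attr; };

static int destroy_buffers(struct buf *bufs, int n, bool force)
{
	for (int i = 0; i < n; i++) {
		if (!force && (bufs[i].attr & ATTR_QUEUED))
			continue;	/* firmware has not returned it */
		bufs[i].attr = 0;	/* stand-in for the real teardown */
	}
	return 0;
}

int destroy_all_buffers(struct buf *b, int n)
{
	return destroy_buffers(b, n, true);
}

int destroy_dequeued_buffers(struct buf *b, int n)
{
	return destroy_buffers(b, n, false);
}
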
+diff --git a/drivers/media/platform/qcom/iris/iris_buffer.h b/drivers/media/platform/qcom/iris/iris_buffer.h
+index c36b6347b0770a..00825ad2dc3a4b 100644
+--- a/drivers/media/platform/qcom/iris/iris_buffer.h
++++ b/drivers/media/platform/qcom/iris/iris_buffer.h
+@@ -106,7 +106,8 @@ void iris_get_internal_buffers(struct iris_inst *inst, u32 plane);
+ int iris_create_internal_buffers(struct iris_inst *inst, u32 plane);
+ int iris_queue_internal_buffers(struct iris_inst *inst, u32 plane);
+ int iris_destroy_internal_buffer(struct iris_inst *inst, struct iris_buffer *buffer);
+-int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane);
++int iris_destroy_all_internal_buffers(struct iris_inst *inst, u32 plane);
++int iris_destroy_dequeued_internal_buffers(struct iris_inst *inst, u32 plane);
+ int iris_alloc_and_queue_persist_bufs(struct iris_inst *inst);
+ int iris_alloc_and_queue_input_int_bufs(struct iris_inst *inst);
+ int iris_queue_buffer(struct iris_inst *inst, struct iris_buffer *buf);
+diff --git a/drivers/media/platform/qcom/iris/iris_ctrls.c b/drivers/media/platform/qcom/iris/iris_ctrls.c
+index b690578256d59e..13f5cf0d0e8a44 100644
+--- a/drivers/media/platform/qcom/iris/iris_ctrls.c
++++ b/drivers/media/platform/qcom/iris/iris_ctrls.c
+@@ -17,8 +17,6 @@ static inline bool iris_valid_cap_id(enum platform_inst_fw_cap_type cap_id)
+ static enum platform_inst_fw_cap_type iris_get_cap_id(u32 id)
+ {
+ 	switch (id) {
+-	case V4L2_CID_MPEG_VIDEO_DECODER_MPEG4_DEBLOCK_FILTER:
+-		return DEBLOCK;
+ 	case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
+ 		return PROFILE;
+ 	case V4L2_CID_MPEG_VIDEO_H264_LEVEL:
+@@ -34,8 +32,6 @@ static u32 iris_get_v4l2_id(enum platform_inst_fw_cap_type cap_id)
+ 		return 0;
+ 
+ 	switch (cap_id) {
+-	case DEBLOCK:
+-		return V4L2_CID_MPEG_VIDEO_DECODER_MPEG4_DEBLOCK_FILTER;
+ 	case PROFILE:
+ 		return V4L2_CID_MPEG_VIDEO_H264_PROFILE;
+ 	case LEVEL:
+@@ -84,8 +80,6 @@ int iris_ctrls_init(struct iris_inst *inst)
+ 		if (iris_get_v4l2_id(cap[idx].cap_id))
+ 			num_ctrls++;
+ 	}
+-	if (!num_ctrls)
+-		return -EINVAL;
+ 
+ 	/* Adding 1 to num_ctrls to include V4L2_CID_MIN_BUFFERS_FOR_CAPTURE */
+ 
+@@ -163,6 +157,7 @@ void iris_session_init_caps(struct iris_core *core)
+ 		core->inst_fw_caps[cap_id].value = caps[i].value;
+ 		core->inst_fw_caps[cap_id].flags = caps[i].flags;
+ 		core->inst_fw_caps[cap_id].hfi_id = caps[i].hfi_id;
++		core->inst_fw_caps[cap_id].set = caps[i].set;
+ 	}
+ }
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c b/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c
+index 64f887d9a17d73..bd9d86220e611e 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c
+@@ -208,8 +208,10 @@ static int iris_hfi_gen1_session_stop(struct iris_inst *inst, u32 plane)
+ 		flush_pkt.flush_type = flush_type;
+ 
+ 		ret = iris_hfi_queue_cmd_write(core, &flush_pkt, flush_pkt.shdr.hdr.size);
+-		if (!ret)
++		if (!ret) {
++			inst->flush_responses_pending++;
+ 			ret = iris_wait_for_session_response(inst, true);
++		}
+ 	}
+ 
+ 	return ret;
+@@ -490,14 +492,6 @@ iris_hfi_gen1_packet_session_set_property(struct hfi_session_set_property_pkt *p
+ 		packet->shdr.hdr.size += sizeof(u32) + sizeof(*wm);
+ 		break;
+ 	}
+-	case HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER: {
+-		struct hfi_enable *en = prop_data;
+-		u32 *in = pdata;
+-
+-		en->enable = *in;
+-		packet->shdr.hdr.size += sizeof(u32) + sizeof(*en);
+-		break;
+-	}
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -546,14 +540,15 @@ static int iris_hfi_gen1_set_resolution(struct iris_inst *inst)
+ 	struct hfi_framesize fs;
+ 	int ret;
+ 
+-	fs.buffer_type = HFI_BUFFER_INPUT;
+-	fs.width = inst->fmt_src->fmt.pix_mp.width;
+-	fs.height = inst->fmt_src->fmt.pix_mp.height;
+-
+-	ret = hfi_gen1_set_property(inst, ptype, &fs, sizeof(fs));
+-	if (ret)
+-		return ret;
++	if (!iris_drc_pending(inst)) {
++		fs.buffer_type = HFI_BUFFER_INPUT;
++		fs.width = inst->fmt_src->fmt.pix_mp.width;
++		fs.height = inst->fmt_src->fmt.pix_mp.height;
+ 
++		ret = hfi_gen1_set_property(inst, ptype, &fs, sizeof(fs));
++		if (ret)
++			return ret;
++	}
+ 	fs.buffer_type = HFI_BUFFER_OUTPUT2;
+ 	fs.width = inst->fmt_dst->fmt.pix_mp.width;
+ 	fs.height = inst->fmt_dst->fmt.pix_mp.height;
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
+index 93b5f838c2901c..adffcead58ea77 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h
+@@ -65,7 +65,6 @@
+ 
+ #define HFI_PROPERTY_CONFIG_BUFFER_REQUIREMENTS		0x202001
+ 
+-#define HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER	0x1200001
+ #define HFI_PROPERTY_PARAM_VDEC_DPB_COUNTS		0x120300e
+ #define HFI_PROPERTY_CONFIG_VDEC_ENTROPY		0x1204004
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
+index 91d95eed68aa29..14d8bef62b606a 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c
+@@ -200,14 +200,14 @@ static void iris_hfi_gen1_event_seq_changed(struct iris_inst *inst,
+ 
+ 	iris_hfi_gen1_read_changed_params(inst, pkt);
+ 
+-	if (inst->state != IRIS_INST_ERROR) {
+-		reinit_completion(&inst->flush_completion);
++	if (inst->state != IRIS_INST_ERROR && !(inst->sub_state & IRIS_INST_SUB_FIRST_IPSC)) {
+ 
+ 		flush_pkt.shdr.hdr.size = sizeof(struct hfi_session_flush_pkt);
+ 		flush_pkt.shdr.hdr.pkt_type = HFI_CMD_SESSION_FLUSH;
+ 		flush_pkt.shdr.session_id = inst->session_id;
+ 		flush_pkt.flush_type = HFI_FLUSH_OUTPUT;
+-		iris_hfi_queue_cmd_write(inst->core, &flush_pkt, flush_pkt.shdr.hdr.size);
++		if (!iris_hfi_queue_cmd_write(inst->core, &flush_pkt, flush_pkt.shdr.hdr.size))
++			inst->flush_responses_pending++;
+ 	}
+ 
+ 	iris_vdec_src_change(inst);
+@@ -408,7 +408,9 @@ static void iris_hfi_gen1_session_ftb_done(struct iris_inst *inst, void *packet)
+ 		flush_pkt.shdr.hdr.pkt_type = HFI_CMD_SESSION_FLUSH;
+ 		flush_pkt.shdr.session_id = inst->session_id;
+ 		flush_pkt.flush_type = HFI_FLUSH_OUTPUT;
+-		iris_hfi_queue_cmd_write(core, &flush_pkt, flush_pkt.shdr.hdr.size);
++		if (!iris_hfi_queue_cmd_write(core, &flush_pkt, flush_pkt.shdr.hdr.size))
++			inst->flush_responses_pending++;
++
+ 		iris_inst_sub_state_change_drain_last(inst);
+ 
+ 		return;
+@@ -564,7 +566,6 @@ static void iris_hfi_gen1_handle_response(struct iris_core *core, void *response
+ 	const struct iris_hfi_gen1_response_pkt_info *pkt_info;
+ 	struct device *dev = core->dev;
+ 	struct hfi_session_pkt *pkt;
+-	struct completion *done;
+ 	struct iris_inst *inst;
+ 	bool found = false;
+ 	u32 i;
+@@ -625,9 +626,12 @@ static void iris_hfi_gen1_handle_response(struct iris_core *core, void *response
+ 			if (shdr->error_type != HFI_ERR_NONE)
+ 				iris_inst_change_state(inst, IRIS_INST_ERROR);
+ 
+-			done = pkt_info->pkt == HFI_MSG_SESSION_FLUSH ?
+-				&inst->flush_completion : &inst->completion;
+-			complete(done);
++			if (pkt_info->pkt == HFI_MSG_SESSION_FLUSH) {
++				if (!(--inst->flush_responses_pending))
++					complete(&inst->flush_completion);
++			} else {
++				complete(&inst->completion);
++			}
+ 		}
+ 		mutex_unlock(&inst->lock);
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c b/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c
+index a908b41e2868fc..802fa62c26ebef 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c
+@@ -178,7 +178,7 @@ static int iris_hfi_gen2_set_crop_offsets(struct iris_inst *inst)
+ 						  sizeof(u64));
+ }
+ 
+-static int iris_hfi_gen2_set_bit_dpeth(struct iris_inst *inst)
++static int iris_hfi_gen2_set_bit_depth(struct iris_inst *inst)
+ {
+ 	struct iris_inst_hfi_gen2 *inst_hfi_gen2 = to_iris_inst_hfi_gen2(inst);
+ 	u32 port = iris_hfi_gen2_get_port(V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
+@@ -378,7 +378,7 @@ static int iris_hfi_gen2_session_set_config_params(struct iris_inst *inst, u32 p
+ 		{HFI_PROP_BITSTREAM_RESOLUTION,       iris_hfi_gen2_set_bitstream_resolution   },
+ 		{HFI_PROP_CROP_OFFSETS,               iris_hfi_gen2_set_crop_offsets           },
+ 		{HFI_PROP_CODED_FRAMES,               iris_hfi_gen2_set_coded_frames           },
+-		{HFI_PROP_LUMA_CHROMA_BIT_DEPTH,      iris_hfi_gen2_set_bit_dpeth              },
++		{HFI_PROP_LUMA_CHROMA_BIT_DEPTH,      iris_hfi_gen2_set_bit_depth              },
+ 		{HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, iris_hfi_gen2_set_min_output_count       },
+ 		{HFI_PROP_PIC_ORDER_CNT_TYPE,         iris_hfi_gen2_set_picture_order_count    },
+ 		{HFI_PROP_SIGNAL_COLOR_INFO,          iris_hfi_gen2_set_colorspace             },
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c b/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c
+index b75a01641d5d48..d2cede2fe1b5a8 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c
+@@ -265,7 +265,8 @@ static int iris_hfi_gen2_handle_system_error(struct iris_core *core,
+ {
+ 	struct iris_inst *instance;
+ 
+-	dev_err(core->dev, "received system error of type %#x\n", pkt->type);
++	if (pkt)
++		dev_err(core->dev, "received system error of type %#x\n", pkt->type);
+ 
+ 	core->state = IRIS_CORE_ERROR;
+ 
+@@ -377,6 +378,11 @@ static int iris_hfi_gen2_handle_output_buffer(struct iris_inst *inst,
+ 
+ 	buf->flags = iris_hfi_gen2_get_driver_buffer_flags(inst, hfi_buffer->flags);
+ 
++	if (!buf->data_size && inst->state == IRIS_INST_STREAMING &&
++	    !(hfi_buffer->flags & HFI_BUF_FW_FLAG_LAST)) {
++		buf->flags |= V4L2_BUF_FLAG_ERROR;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -636,9 +642,6 @@ static int iris_hfi_gen2_handle_session_property(struct iris_inst *inst,
+ {
+ 	struct iris_inst_hfi_gen2 *inst_hfi_gen2 = to_iris_inst_hfi_gen2(inst);
+ 
+-	if (pkt->port != HFI_PORT_BITSTREAM)
+-		return 0;
+-
+ 	if (pkt->flags & HFI_FW_FLAGS_INFORMATION)
+ 		return 0;
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_hfi_queue.c b/drivers/media/platform/qcom/iris/iris_hfi_queue.c
+index fac7df0c4d1aec..221dcd09e1e109 100644
+--- a/drivers/media/platform/qcom/iris/iris_hfi_queue.c
++++ b/drivers/media/platform/qcom/iris/iris_hfi_queue.c
+@@ -113,7 +113,7 @@ int iris_hfi_queue_cmd_write_locked(struct iris_core *core, void *pkt, u32 pkt_s
+ {
+ 	struct iris_iface_q_info *q_info = &core->command_queue;
+ 
+-	if (core->state == IRIS_CORE_ERROR)
++	if (core->state == IRIS_CORE_ERROR || core->state == IRIS_CORE_DEINIT)
+ 		return -EINVAL;
+ 
+ 	if (!iris_hfi_queue_write(q_info, pkt, pkt_size)) {
+diff --git a/drivers/media/platform/qcom/iris/iris_instance.h b/drivers/media/platform/qcom/iris/iris_instance.h
+index caa3c65070061b..06a7f1174ad55e 100644
+--- a/drivers/media/platform/qcom/iris/iris_instance.h
++++ b/drivers/media/platform/qcom/iris/iris_instance.h
+@@ -27,6 +27,7 @@
+  * @crop: structure of crop info
+  * @completion: structure of signal completions
+  * @flush_completion: structure of signal completions for flush cmd
++ * @flush_responses_pending: counter to track number of pending flush responses
+  * @fw_caps: array of supported instance firmware capabilities
+  * @buffers: array of different iris buffers
+  * @fw_min_count: minimum count of buffers needed by fw
+@@ -57,6 +58,7 @@ struct iris_inst {
+ 	struct iris_hfi_rect_desc	crop;
+ 	struct completion		completion;
+ 	struct completion		flush_completion;
++	u32				flush_responses_pending;
+ 	struct platform_inst_fw_cap	fw_caps[INST_FW_CAP_MAX];
+ 	struct iris_buffers		buffers[BUF_TYPE_MAX];
+ 	u32				fw_min_count;
+diff --git a/drivers/media/platform/qcom/iris/iris_platform_common.h b/drivers/media/platform/qcom/iris/iris_platform_common.h
+index ac76d9e1ef9c14..1dab276431c716 100644
+--- a/drivers/media/platform/qcom/iris/iris_platform_common.h
++++ b/drivers/media/platform/qcom/iris/iris_platform_common.h
+@@ -89,7 +89,7 @@ enum platform_inst_fw_cap_type {
+ 	CODED_FRAMES,
+ 	BIT_DEPTH,
+ 	RAP_FRAME,
+-	DEBLOCK,
++	TIER,
+ 	INST_FW_CAP_MAX,
+ };
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_platform_sm8250.c b/drivers/media/platform/qcom/iris/iris_platform_sm8250.c
+index 5c86fd7b7b6fd3..543fa266153918 100644
+--- a/drivers/media/platform/qcom/iris/iris_platform_sm8250.c
++++ b/drivers/media/platform/qcom/iris/iris_platform_sm8250.c
+@@ -30,15 +30,6 @@ static struct platform_inst_fw_cap inst_fw_cap_sm8250[] = {
+ 		.hfi_id = HFI_PROPERTY_PARAM_WORK_MODE,
+ 		.set = iris_set_stage,
+ 	},
+-	{
+-		.cap_id = DEBLOCK,
+-		.min = 0,
+-		.max = 1,
+-		.step_or_mask = 1,
+-		.value = 0,
+-		.hfi_id = HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER,
+-		.set = iris_set_u32,
+-	},
+ };
+ 
+ static struct platform_inst_caps platform_inst_cap_sm8250 = {
+diff --git a/drivers/media/platform/qcom/iris/iris_state.c b/drivers/media/platform/qcom/iris/iris_state.c
+index 5976e926c83d13..104e1687ad39da 100644
+--- a/drivers/media/platform/qcom/iris/iris_state.c
++++ b/drivers/media/platform/qcom/iris/iris_state.c
+@@ -245,7 +245,7 @@ int iris_inst_sub_state_change_pause(struct iris_inst *inst, u32 plane)
+ 	return iris_inst_change_sub_state(inst, 0, set_sub_state);
+ }
+ 
+-static inline bool iris_drc_pending(struct iris_inst *inst)
++bool iris_drc_pending(struct iris_inst *inst)
+ {
+ 	return inst->sub_state & IRIS_INST_SUB_DRC &&
+ 		inst->sub_state & IRIS_INST_SUB_DRC_LAST;
+diff --git a/drivers/media/platform/qcom/iris/iris_state.h b/drivers/media/platform/qcom/iris/iris_state.h
+index 78c61aac5e7e0e..e718386dbe0402 100644
+--- a/drivers/media/platform/qcom/iris/iris_state.h
++++ b/drivers/media/platform/qcom/iris/iris_state.h
+@@ -140,5 +140,6 @@ int iris_inst_sub_state_change_drain_last(struct iris_inst *inst);
+ int iris_inst_sub_state_change_drc_last(struct iris_inst *inst);
+ int iris_inst_sub_state_change_pause(struct iris_inst *inst, u32 plane);
+ bool iris_allow_cmd(struct iris_inst *inst, u32 cmd);
++bool iris_drc_pending(struct iris_inst *inst);
+ 
+ #endif
+diff --git a/drivers/media/platform/qcom/iris/iris_vb2.c b/drivers/media/platform/qcom/iris/iris_vb2.c
+index cdf11feb590b5c..b3bde10eb6d2f0 100644
+--- a/drivers/media/platform/qcom/iris/iris_vb2.c
++++ b/drivers/media/platform/qcom/iris/iris_vb2.c
+@@ -259,13 +259,14 @@ int iris_vb2_buf_prepare(struct vb2_buffer *vb)
+ 			return -EINVAL;
+ 	}
+ 
+-	if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
+-	    vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_OUTPUT))
+-		return -EINVAL;
+-	if (vb->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE &&
+-	    vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_INPUT))
+-		return -EINVAL;
+-
++	if (!(inst->sub_state & IRIS_INST_SUB_DRC)) {
++		if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
++		    vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_OUTPUT))
++			return -EINVAL;
++		if (vb->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE &&
++		    vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_INPUT))
++			return -EINVAL;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_vdec.c b/drivers/media/platform/qcom/iris/iris_vdec.c
+index 4143acedfc5744..d342f733feb995 100644
+--- a/drivers/media/platform/qcom/iris/iris_vdec.c
++++ b/drivers/media/platform/qcom/iris/iris_vdec.c
+@@ -171,6 +171,11 @@ int iris_vdec_s_fmt(struct iris_inst *inst, struct v4l2_format *f)
+ 		output_fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc;
+ 		output_fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization;
+ 
++		/* Update capture format based on new input width/height */
++		output_fmt->fmt.pix_mp.width = ALIGN(f->fmt.pix_mp.width, 128);
++		output_fmt->fmt.pix_mp.height = ALIGN(f->fmt.pix_mp.height, 32);
++		inst->buffers[BUF_OUTPUT].size = iris_get_buffer_size(inst, BUF_OUTPUT);
++
+ 		inst->crop.left = 0;
+ 		inst->crop.top = 0;
+ 		inst->crop.width = f->fmt.pix_mp.width;
+@@ -408,7 +413,7 @@ int iris_vdec_streamon_input(struct iris_inst *inst)
+ 
+ 	iris_get_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
+ 
+-	ret = iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
++	ret = iris_destroy_dequeued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -496,7 +501,7 @@ int iris_vdec_streamon_output(struct iris_inst *inst)
+ 
+ 	iris_get_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ 
+-	ret = iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
++	ret = iris_destroy_dequeued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/media/platform/qcom/iris/iris_vidc.c b/drivers/media/platform/qcom/iris/iris_vidc.c
+index ca0f4e310f77f9..a8144595cc78e8 100644
+--- a/drivers/media/platform/qcom/iris/iris_vidc.c
++++ b/drivers/media/platform/qcom/iris/iris_vidc.c
+@@ -221,6 +221,34 @@ static void iris_session_close(struct iris_inst *inst)
+ 		iris_wait_for_session_response(inst, false);
+ }
+ 
++static void iris_check_num_queued_internal_buffers(struct iris_inst *inst, u32 plane)
++{
++	const struct iris_platform_data *platform_data = inst->core->iris_platform_data;
++	struct iris_buffer *buf, *next;
++	struct iris_buffers *buffers;
++	const u32 *internal_buf_type;
++	u32 internal_buffer_count, i;
++	u32 count;
++
++	if (V4L2_TYPE_IS_OUTPUT(plane)) {
++		internal_buf_type = platform_data->dec_ip_int_buf_tbl;
++		internal_buffer_count = platform_data->dec_ip_int_buf_tbl_size;
++	} else {
++		internal_buf_type = platform_data->dec_op_int_buf_tbl;
++		internal_buffer_count = platform_data->dec_op_int_buf_tbl_size;
++	}
++
++	for (i = 0; i < internal_buffer_count; i++) {
++		count = 0;
++		buffers = &inst->buffers[internal_buf_type[i]];
++		list_for_each_entry_safe(buf, next, &buffers->list, list)
++			count++;
++		if (count)
++			dev_err(inst->core->dev, "%d buffers of type %d not released\n",
++				count, internal_buf_type[i]);
++	}
++}
++
+ int iris_close(struct file *filp)
+ {
+ 	struct iris_inst *inst = iris_get_inst(filp, NULL);
+@@ -233,8 +260,10 @@ int iris_close(struct file *filp)
+ 	iris_session_close(inst);
+ 	iris_inst_change_state(inst, IRIS_INST_DEINIT);
+ 	iris_v4l2_fh_deinit(inst);
+-	iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
+-	iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
++	iris_destroy_all_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
++	iris_destroy_all_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
++	iris_check_num_queued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
++	iris_check_num_queued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ 	iris_remove_session(inst);
+ 	mutex_unlock(&inst->lock);
+ 	mutex_destroy(&inst->ctx_q_lock);
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index d305d74bb152d2..4c049c694d9c43 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -424,13 +424,13 @@ static int venus_probe(struct platform_device *pdev)
+ 	INIT_DELAYED_WORK(&core->work, venus_sys_error_handler);
+ 	init_waitqueue_head(&core->sys_err_done);
+ 
+-	ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, venus_isr_thread,
+-					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+-					"venus", core);
++	ret = hfi_create(core, &venus_core_ops);
+ 	if (ret)
+ 		goto err_core_put;
+ 
+-	ret = hfi_create(core, &venus_core_ops);
++	ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, venus_isr_thread,
++					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
++					"venus", core);
+ 	if (ret)
+ 		goto err_core_put;
+ 
+@@ -709,11 +709,11 @@ static const struct venus_resources msm8996_res = {
+ };
+ 
+ static const struct freq_tbl msm8998_freq_table[] = {
+-	{ 1944000, 465000000 },	/* 4k UHD @ 60 (decode only) */
+-	{  972000, 465000000 },	/* 4k UHD @ 30 */
+-	{  489600, 360000000 },	/* 1080p @ 60 */
+-	{  244800, 186000000 },	/* 1080p @ 30 */
+-	{  108000, 100000000 },	/* 720p @ 30 */
++	{ 1728000, 533000000 },	/* 4k UHD @ 60 (decode only) */
++	{ 1036800, 444000000 },	/* 2k @ 120 */
++	{  829440, 355200000 },	/* 4k @ 44 */
++	{  489600, 269330000 },	/* 4k @ 30 */
++	{  108000, 200000000 },	/* 1080p @ 60 */
+ };
+ 
+ static const struct reg_val msm8998_reg_preset[] = {
+diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
+index b412e0c5515a09..5b1ba1c69adba1 100644
+--- a/drivers/media/platform/qcom/venus/core.h
++++ b/drivers/media/platform/qcom/venus/core.h
+@@ -28,6 +28,8 @@
+ #define VIDC_RESETS_NUM_MAX		2
+ #define VIDC_MAX_HIER_CODING_LAYER 6
+ 
++#define VENUS_MAX_FPS			240
++
+ extern int venus_fw_debug;
+ 
+ struct freq_tbl {
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index b5f2ea8799507f..cec7f5964d3d80 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -239,6 +239,7 @@ static int venus_write_queue(struct venus_hfi_device *hdev,
+ static int venus_read_queue(struct venus_hfi_device *hdev,
+ 			    struct iface_queue *queue, void *pkt, u32 *tx_req)
+ {
++	struct hfi_pkt_hdr *pkt_hdr = NULL;
+ 	struct hfi_queue_header *qhdr;
+ 	u32 dwords, new_rd_idx;
+ 	u32 rd_idx, wr_idx, type, qsize;
+@@ -304,6 +305,9 @@ static int venus_read_queue(struct venus_hfi_device *hdev,
+ 			memcpy(pkt, rd_ptr, len);
+ 			memcpy(pkt + len, queue->qmem.kva, new_rd_idx << 2);
+ 		}
++		pkt_hdr = (struct hfi_pkt_hdr *)(pkt);
++		if ((pkt_hdr->size >> 2) != dwords)
++			return -EINVAL;
+ 	} else {
+ 		/* bad packet received, dropping */
+ 		new_rd_idx = qhdr->write_idx;
+@@ -1678,6 +1682,7 @@ void venus_hfi_destroy(struct venus_core *core)
+ 	venus_interface_queues_release(hdev);
+ 	mutex_destroy(&hdev->lock);
+ 	kfree(hdev);
++	disable_irq(core->irq);
+ 	core->ops = NULL;
+ }
+ 
+diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
+index 99ce5fd4157728..fca27be61f4b86 100644
+--- a/drivers/media/platform/qcom/venus/vdec.c
++++ b/drivers/media/platform/qcom/venus/vdec.c
+@@ -481,11 +481,10 @@ static int vdec_s_parm(struct file *file, void *fh, struct v4l2_streamparm *a)
+ 	us_per_frame = timeperframe->numerator * (u64)USEC_PER_SEC;
+ 	do_div(us_per_frame, timeperframe->denominator);
+ 
+-	if (!us_per_frame)
+-		return -EINVAL;
+-
++	us_per_frame = clamp(us_per_frame, 1, USEC_PER_SEC);
+ 	fps = (u64)USEC_PER_SEC;
+ 	do_div(fps, us_per_frame);
++	fps = min(VENUS_MAX_FPS, fps);
+ 
+ 	inst->fps = fps;
+ 	inst->timeperframe = *timeperframe;
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index c7f8e37dba9b22..b9ccee870c3d12 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -411,11 +411,10 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *a)
+ 	us_per_frame = timeperframe->numerator * (u64)USEC_PER_SEC;
+ 	do_div(us_per_frame, timeperframe->denominator);
+ 
+-	if (!us_per_frame)
+-		return -EINVAL;
+-
++	us_per_frame = clamp(us_per_frame, 1, USEC_PER_SEC);
+ 	fps = (u64)USEC_PER_SEC;
+ 	do_div(fps, us_per_frame);
++	fps = min(VENUS_MAX_FPS, fps);
+ 
+ 	inst->timeperframe = *timeperframe;
+ 	inst->fps = fps;
+diff --git a/drivers/media/platform/raspberrypi/pisp_be/Kconfig b/drivers/media/platform/raspberrypi/pisp_be/Kconfig
+index 46765a2e4c4d15..a9e51fd94aadc6 100644
+--- a/drivers/media/platform/raspberrypi/pisp_be/Kconfig
++++ b/drivers/media/platform/raspberrypi/pisp_be/Kconfig
+@@ -3,6 +3,7 @@ config VIDEO_RASPBERRYPI_PISP_BE
+ 	depends on V4L_PLATFORM_DRIVERS
+ 	depends on VIDEO_DEV
+ 	depends on ARCH_BCM2835 || COMPILE_TEST
++	depends on PM
+ 	select VIDEO_V4L2_SUBDEV_API
+ 	select MEDIA_CONTROLLER
+ 	select VIDEOBUF2_DMA_CONTIG
+diff --git a/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c b/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c
+index 7596ae1f7de667..f0a98afefdbd1e 100644
+--- a/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c
++++ b/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c
+@@ -1726,7 +1726,7 @@ static int pispbe_probe(struct platform_device *pdev)
+ 	pm_runtime_use_autosuspend(pispbe->dev);
+ 	pm_runtime_enable(pispbe->dev);
+ 
+-	ret = pispbe_runtime_resume(pispbe->dev);
++	ret = pm_runtime_resume_and_get(pispbe->dev);
+ 	if (ret)
+ 		goto pm_runtime_disable_err;
+ 
+@@ -1748,7 +1748,7 @@ static int pispbe_probe(struct platform_device *pdev)
+ disable_devs_err:
+ 	pispbe_destroy_devices(pispbe);
+ pm_runtime_suspend_err:
+-	pispbe_runtime_suspend(pispbe->dev);
++	pm_runtime_put(pispbe->dev);
+ pm_runtime_disable_err:
+ 	pm_runtime_dont_use_autosuspend(pispbe->dev);
+ 	pm_runtime_disable(pispbe->dev);
+@@ -1762,7 +1762,6 @@ static void pispbe_remove(struct platform_device *pdev)
+ 
+ 	pispbe_destroy_devices(pispbe);
+ 
+-	pispbe_runtime_suspend(pispbe->dev);
+ 	pm_runtime_dont_use_autosuspend(pispbe->dev);
+ 	pm_runtime_disable(pispbe->dev);
+ }
+diff --git a/drivers/media/platform/verisilicon/rockchip_vpu_hw.c b/drivers/media/platform/verisilicon/rockchip_vpu_hw.c
+index acd29fa41d2d10..02673be9878e1e 100644
+--- a/drivers/media/platform/verisilicon/rockchip_vpu_hw.c
++++ b/drivers/media/platform/verisilicon/rockchip_vpu_hw.c
+@@ -17,7 +17,6 @@
+ 
+ #define RK3066_ACLK_MAX_FREQ (300 * 1000 * 1000)
+ #define RK3288_ACLK_MAX_FREQ (400 * 1000 * 1000)
+-#define RK3588_ACLK_MAX_FREQ (300 * 1000 * 1000)
+ 
+ #define ROCKCHIP_VPU981_MIN_SIZE 64
+ 
+@@ -454,13 +453,6 @@ static int rk3066_vpu_hw_init(struct hantro_dev *vpu)
+ 	return 0;
+ }
+ 
+-static int rk3588_vpu981_hw_init(struct hantro_dev *vpu)
+-{
+-	/* Bump ACLKs to max. possible freq. to improve performance. */
+-	clk_set_rate(vpu->clocks[0].clk, RK3588_ACLK_MAX_FREQ);
+-	return 0;
+-}
+-
+ static int rockchip_vpu_hw_init(struct hantro_dev *vpu)
+ {
+ 	/* Bump ACLK to max. possible freq. to improve performance. */
+@@ -821,7 +813,6 @@ const struct hantro_variant rk3588_vpu981_variant = {
+ 	.codec_ops = rk3588_vpu981_codec_ops,
+ 	.irqs = rk3588_vpu981_irqs,
+ 	.num_irqs = ARRAY_SIZE(rk3588_vpu981_irqs),
+-	.init = rk3588_vpu981_hw_init,
+ 	.clk_names = rk3588_vpu981_vpu_clk_names,
+ 	.num_clocks = ARRAY_SIZE(rk3588_vpu981_vpu_clk_names)
+ };
+diff --git a/drivers/media/test-drivers/vivid/vivid-ctrls.c b/drivers/media/test-drivers/vivid/vivid-ctrls.c
+index e340df0b62617a..f94c15ff84f78f 100644
+--- a/drivers/media/test-drivers/vivid/vivid-ctrls.c
++++ b/drivers/media/test-drivers/vivid/vivid-ctrls.c
+@@ -244,7 +244,8 @@ static const struct v4l2_ctrl_config vivid_ctrl_u8_pixel_array = {
+ 	.min = 0x00,
+ 	.max = 0xff,
+ 	.step = 1,
+-	.dims = { 640 / PIXEL_ARRAY_DIV, 360 / PIXEL_ARRAY_DIV },
++	.dims = { DIV_ROUND_UP(360, PIXEL_ARRAY_DIV),
++		  DIV_ROUND_UP(640, PIXEL_ARRAY_DIV) },
+ };
+ 
+ static const struct v4l2_ctrl_config vivid_ctrl_s32_array = {
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 84e9155b58155c..2e4c1ed37cd2ab 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -454,8 +454,8 @@ void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls)
+ 	if (keep_controls)
+ 		return;
+ 
+-	dims[0] = roundup(dev->src_rect.width, PIXEL_ARRAY_DIV);
+-	dims[1] = roundup(dev->src_rect.height, PIXEL_ARRAY_DIV);
++	dims[0] = DIV_ROUND_UP(dev->src_rect.height, PIXEL_ARRAY_DIV);
++	dims[1] = DIV_ROUND_UP(dev->src_rect.width, PIXEL_ARRAY_DIV);
+ 	v4l2_ctrl_modify_dimensions(dev->pixel_array, dims);
+ }
+ 
+diff --git a/drivers/media/usb/gspca/vicam.c b/drivers/media/usb/gspca/vicam.c
+index d98343fd33fe34..91e177aa8136fd 100644
+--- a/drivers/media/usb/gspca/vicam.c
++++ b/drivers/media/usb/gspca/vicam.c
+@@ -227,6 +227,7 @@ static int sd_init(struct gspca_dev *gspca_dev)
+ 	const struct ihex_binrec *rec;
+ 	const struct firmware *fw;
+ 	u8 *firmware_buf;
++	int len;
+ 
+ 	ret = request_ihex_firmware(&fw, VICAM_FIRMWARE,
+ 				    &gspca_dev->dev->dev);
+@@ -241,9 +242,14 @@ static int sd_init(struct gspca_dev *gspca_dev)
+ 		goto exit;
+ 	}
+ 	for (rec = (void *)fw->data; rec; rec = ihex_next_binrec(rec)) {
+-		memcpy(firmware_buf, rec->data, be16_to_cpu(rec->len));
++		len = be16_to_cpu(rec->len);
++		if (len > PAGE_SIZE) {
++			ret = -EINVAL;
++			break;
++		}
++		memcpy(firmware_buf, rec->data, len);
+ 		ret = vicam_control_msg(gspca_dev, 0xff, 0, 0, firmware_buf,
+-					be16_to_cpu(rec->len));
++					len);
+ 		if (ret < 0)
+ 			break;
+ 	}
+diff --git a/drivers/media/usb/usbtv/usbtv-video.c b/drivers/media/usb/usbtv/usbtv-video.c
+index be22a9697197c6..de0328100a60dd 100644
+--- a/drivers/media/usb/usbtv/usbtv-video.c
++++ b/drivers/media/usb/usbtv/usbtv-video.c
+@@ -73,6 +73,10 @@ static int usbtv_configure_for_norm(struct usbtv *usbtv, v4l2_std_id norm)
+ 	}
+ 
+ 	if (params) {
++		if (vb2_is_busy(&usbtv->vb2q) &&
++		    (usbtv->width != params->cap_width ||
++		     usbtv->height != params->cap_height))
++			return -EBUSY;
+ 		usbtv->width = params->cap_width;
+ 		usbtv->height = params->cap_height;
+ 		usbtv->n_chunks = usbtv->width * usbtv->height
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+index b45809a82f9a66..d28596c720d8a4 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+@@ -1661,7 +1661,6 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl)
+ 	kvfree(hdl->buckets);
+ 	hdl->buckets = NULL;
+ 	hdl->cached = NULL;
+-	hdl->error = 0;
+ 	mutex_unlock(hdl->lock);
+ 	mutex_destroy(&hdl->_lock);
+ }
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index 7f3f47db4c98a5..e4275f8ee5db8a 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -555,7 +555,6 @@ EXPORT_SYMBOL(memstick_add_host);
+  */
+ void memstick_remove_host(struct memstick_host *host)
+ {
+-	host->removing = 1;
+ 	flush_workqueue(workqueue);
+ 	mutex_lock(&host->lock);
+ 	if (host->card)
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index 3878136227e49c..5b5e9354fb2e4f 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -812,6 +812,7 @@ static void rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+ 	int err;
+ 
+ 	host->eject = true;
++	msh->removing = true;
+ 	cancel_work_sync(&host->handle_req);
+ 	cancel_delayed_work_sync(&host->poll_card);
+ 
+diff --git a/drivers/mfd/mt6397-core.c b/drivers/mfd/mt6397-core.c
+index 5f8ed898890783..3e58d0764c7e0b 100644
+--- a/drivers/mfd/mt6397-core.c
++++ b/drivers/mfd/mt6397-core.c
+@@ -136,7 +136,7 @@ static const struct mfd_cell mt6323_devs[] = {
+ 		.name = "mt6323-led",
+ 		.of_compatible = "mediatek,mt6323-led"
+ 	}, {
+-		.name = "mtk-pmic-keys",
++		.name = "mt6323-keys",
+ 		.num_resources = ARRAY_SIZE(mt6323_keys_resources),
+ 		.resources = mt6323_keys_resources,
+ 		.of_compatible = "mediatek,mt6323-keys"
+@@ -153,7 +153,7 @@ static const struct mfd_cell mt6328_devs[] = {
+ 		.name = "mt6328-regulator",
+ 		.of_compatible = "mediatek,mt6328-regulator"
+ 	}, {
+-		.name = "mtk-pmic-keys",
++		.name = "mt6328-keys",
+ 		.num_resources = ARRAY_SIZE(mt6328_keys_resources),
+ 		.resources = mt6328_keys_resources,
+ 		.of_compatible = "mediatek,mt6328-keys"
+@@ -175,7 +175,7 @@ static const struct mfd_cell mt6357_devs[] = {
+ 		.name = "mt6357-sound",
+ 		.of_compatible = "mediatek,mt6357-sound"
+ 	}, {
+-		.name = "mtk-pmic-keys",
++		.name = "mt6357-keys",
+ 		.num_resources = ARRAY_SIZE(mt6357_keys_resources),
+ 		.resources = mt6357_keys_resources,
+ 		.of_compatible = "mediatek,mt6357-keys"
+@@ -196,7 +196,7 @@ static const struct mfd_cell mt6331_mt6332_devs[] = {
+ 		.name = "mt6332-regulator",
+ 		.of_compatible = "mediatek,mt6332-regulator"
+ 	}, {
+-		.name = "mtk-pmic-keys",
++		.name = "mt6331-keys",
+ 		.num_resources = ARRAY_SIZE(mt6331_keys_resources),
+ 		.resources = mt6331_keys_resources,
+ 		.of_compatible = "mediatek,mt6331-keys"
+@@ -240,7 +240,7 @@ static const struct mfd_cell mt6359_devs[] = {
+ 	},
+ 	{ .name = "mt6359-sound", },
+ 	{
+-		.name = "mtk-pmic-keys",
++		.name = "mt6359-keys",
+ 		.num_resources = ARRAY_SIZE(mt6359_keys_resources),
+ 		.resources = mt6359_keys_resources,
+ 		.of_compatible = "mediatek,mt6359-keys"
+@@ -272,7 +272,7 @@ static const struct mfd_cell mt6397_devs[] = {
+ 		.name = "mt6397-pinctrl",
+ 		.of_compatible = "mediatek,mt6397-pinctrl",
+ 	}, {
+-		.name = "mtk-pmic-keys",
++		.name = "mt6397-keys",
+ 		.num_resources = ARRAY_SIZE(mt6397_keys_resources),
+ 		.resources = mt6397_keys_resources,
+ 		.of_compatible = "mediatek,mt6397-keys"
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index 8c29676ab6628b..33b9282cc80d9f 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -99,6 +99,9 @@
+ #define HIWORD_UPDATE(val, mask, shift) \
+ 		((val) << (shift) | (mask) << ((shift) + 16))
+ 
++#define CD_STABLE_TIMEOUT_US		1000000
++#define CD_STABLE_MAX_SLEEP_US		10
++
+ /**
+  * struct sdhci_arasan_soc_ctl_field - Field used in sdhci_arasan_soc_ctl_map
+  *
+@@ -206,12 +209,15 @@ struct sdhci_arasan_data {
+  * 19MHz instead
+  */
+ #define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2)
++/* Enable CD stable check before power-up */
++#define SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE	BIT(3)
+ };
+ 
+ struct sdhci_arasan_of_data {
+ 	const struct sdhci_arasan_soc_ctl_map *soc_ctl_map;
+ 	const struct sdhci_pltfm_data *pdata;
+ 	const struct sdhci_arasan_clk_ops *clk_ops;
++	u32 quirks;
+ };
+ 
+ static const struct sdhci_arasan_soc_ctl_map rk3399_soc_ctl_map = {
+@@ -514,6 +520,24 @@ static int sdhci_arasan_voltage_switch(struct mmc_host *mmc,
+ 	return -EINVAL;
+ }
+ 
++static void sdhci_arasan_set_power_and_bus_voltage(struct sdhci_host *host, unsigned char mode,
++						   unsigned short vdd)
++{
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++	struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
++	u32 reg;
++
++	/*
++	 * Ensure that the card detect logic has stabilized before powering up; this is
++	 * necessary after a host controller reset.
++	 */
++	if (mode == MMC_POWER_UP && sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE)
++		read_poll_timeout(sdhci_readl, reg, reg & SDHCI_CD_STABLE, CD_STABLE_MAX_SLEEP_US,
++				  CD_STABLE_TIMEOUT_US, false, host, SDHCI_PRESENT_STATE);
++
++	sdhci_set_power_and_bus_voltage(host, mode, vdd);
++}
++
+ static const struct sdhci_ops sdhci_arasan_ops = {
+ 	.set_clock = sdhci_arasan_set_clock,
+ 	.get_max_clock = sdhci_pltfm_clk_get_max_clock,
+@@ -521,7 +545,7 @@ static const struct sdhci_ops sdhci_arasan_ops = {
+ 	.set_bus_width = sdhci_set_bus_width,
+ 	.reset = sdhci_arasan_reset,
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
+-	.set_power = sdhci_set_power_and_bus_voltage,
++	.set_power = sdhci_arasan_set_power_and_bus_voltage,
+ 	.hw_reset = sdhci_arasan_hw_reset,
+ };
+ 
+@@ -570,7 +594,7 @@ static const struct sdhci_ops sdhci_arasan_cqe_ops = {
+ 	.set_bus_width = sdhci_set_bus_width,
+ 	.reset = sdhci_arasan_reset,
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
+-	.set_power = sdhci_set_power_and_bus_voltage,
++	.set_power = sdhci_arasan_set_power_and_bus_voltage,
+ 	.irq = sdhci_arasan_cqhci_irq,
+ };
+ 
+@@ -1447,6 +1471,7 @@ static const struct sdhci_arasan_clk_ops zynqmp_clk_ops = {
+ static struct sdhci_arasan_of_data sdhci_arasan_zynqmp_data = {
+ 	.pdata = &sdhci_arasan_zynqmp_pdata,
+ 	.clk_ops = &zynqmp_clk_ops,
++	.quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE,
+ };
+ 
+ static const struct sdhci_arasan_clk_ops versal_clk_ops = {
+@@ -1457,6 +1482,7 @@ static const struct sdhci_arasan_clk_ops versal_clk_ops = {
+ static struct sdhci_arasan_of_data sdhci_arasan_versal_data = {
+ 	.pdata = &sdhci_arasan_zynqmp_pdata,
+ 	.clk_ops = &versal_clk_ops,
++	.quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE,
+ };
+ 
+ static const struct sdhci_arasan_clk_ops versal_net_clk_ops = {
+@@ -1467,6 +1493,7 @@ static const struct sdhci_arasan_clk_ops versal_net_clk_ops = {
+ static struct sdhci_arasan_of_data sdhci_arasan_versal_net_data = {
+ 	.pdata = &sdhci_arasan_versal_net_pdata,
+ 	.clk_ops = &versal_net_clk_ops,
++	.quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE,
+ };
+ 
+ static struct sdhci_arasan_of_data intel_keembay_emmc_data = {
+@@ -1945,6 +1972,8 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
+ 	if (of_device_is_compatible(np, "rockchip,rk3399-sdhci-5.1"))
+ 		sdhci_arasan_update_clockmultiplier(host, 0x0);
+ 
++	sdhci_arasan->quirks |= data->quirks;
++
+ 	if (of_device_is_compatible(np, "intel,keembay-sdhci-5.1-emmc") ||
+ 	    of_device_is_compatible(np, "intel,keembay-sdhci-5.1-sd") ||
+ 	    of_device_is_compatible(np, "intel,keembay-sdhci-5.1-sdio")) {
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index 4c2ae71770f782..3a1de477e9af8d 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -287,6 +287,20 @@
+ #define GLI_MAX_TUNING_LOOP 40
+ 
+ /* Genesys Logic chipset */
++static void sdhci_gli_mask_replay_timer_timeout(struct pci_dev *pdev)
++{
++	int aer;
++	u32 value;
++
++	/* mask the replay timer timeout of AER */
++	aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
++	if (aer) {
++		pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value);
++		value |= PCI_ERR_COR_REP_TIMER;
++		pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value);
++	}
++}
++
+ static inline void gl9750_wt_on(struct sdhci_host *host)
+ {
+ 	u32 wt_value;
+@@ -607,7 +621,6 @@ static void gl9750_hw_setting(struct sdhci_host *host)
+ {
+ 	struct sdhci_pci_slot *slot = sdhci_priv(host);
+ 	struct pci_dev *pdev;
+-	int aer;
+ 	u32 value;
+ 
+ 	pdev = slot->chip->pdev;
+@@ -626,12 +639,7 @@ static void gl9750_hw_setting(struct sdhci_host *host)
+ 	pci_set_power_state(pdev, PCI_D0);
+ 
+ 	/* mask the replay timer timeout of AER */
+-	aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
+-	if (aer) {
+-		pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value);
+-		value |= PCI_ERR_COR_REP_TIMER;
+-		pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value);
+-	}
++	sdhci_gli_mask_replay_timer_timeout(pdev);
+ 
+ 	gl9750_wt_off(host);
+ }
+@@ -806,7 +814,6 @@ static void sdhci_gl9755_set_clock(struct sdhci_host *host, unsigned int clock)
+ static void gl9755_hw_setting(struct sdhci_pci_slot *slot)
+ {
+ 	struct pci_dev *pdev = slot->chip->pdev;
+-	int aer;
+ 	u32 value;
+ 
+ 	gl9755_wt_on(pdev);
+@@ -841,12 +848,7 @@ static void gl9755_hw_setting(struct sdhci_pci_slot *slot)
+ 	pci_set_power_state(pdev, PCI_D0);
+ 
+ 	/* mask the replay timer timeout of AER */
+-	aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);
+-	if (aer) {
+-		pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value);
+-		value |= PCI_ERR_COR_REP_TIMER;
+-		pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value);
+-	}
++	sdhci_gli_mask_replay_timer_timeout(pdev);
+ 
+ 	gl9755_wt_off(pdev);
+ }
+@@ -1751,7 +1753,7 @@ static int gl9763e_add_host(struct sdhci_pci_slot *slot)
+ 	return ret;
+ }
+ 
+-static void gli_set_gl9763e(struct sdhci_pci_slot *slot)
++static void gl9763e_hw_setting(struct sdhci_pci_slot *slot)
+ {
+ 	struct pci_dev *pdev = slot->chip->pdev;
+ 	u32 value;
+@@ -1780,6 +1782,9 @@ static void gli_set_gl9763e(struct sdhci_pci_slot *slot)
+ 	value |= FIELD_PREP(GLI_9763E_HS400_RXDLY, GLI_9763E_HS400_RXDLY_5);
+ 	pci_write_config_dword(pdev, PCIE_GLI_9763E_CLKRXDLY, value);
+ 
++	/* mask the replay timer timeout of AER */
++	sdhci_gli_mask_replay_timer_timeout(pdev);
++
+ 	pci_read_config_dword(pdev, PCIE_GLI_9763E_VHS, &value);
+ 	value &= ~GLI_9763E_VHS_REV;
+ 	value |= FIELD_PREP(GLI_9763E_VHS_REV, GLI_9763E_VHS_REV_R);
+@@ -1923,7 +1928,7 @@ static int gli_probe_slot_gl9763e(struct sdhci_pci_slot *slot)
+ 	gli_pcie_enable_msi(slot);
+ 	host->mmc_host_ops.hs400_enhanced_strobe =
+ 					gl9763e_hs400_enhanced_strobe;
+-	gli_set_gl9763e(slot);
++	gl9763e_hw_setting(slot);
+ 	sdhci_enable_v4_mode(host);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index 9e94998e8df7d2..21739357273290 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -156,6 +156,7 @@ struct sdhci_am654_data {
+ 
+ #define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
+ #define SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA BIT(1)
++#define SDHCI_AM654_QUIRK_DISABLE_HS400 BIT(2)
+ };
+ 
+ struct window {
+@@ -765,6 +766,7 @@ static int sdhci_am654_init(struct sdhci_host *host)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
++	struct device *dev = mmc_dev(host->mmc);
+ 	u32 ctl_cfg_2 = 0;
+ 	u32 mask;
+ 	u32 val;
+@@ -820,6 +822,12 @@ static int sdhci_am654_init(struct sdhci_host *host)
+ 	if (ret)
+ 		goto err_cleanup_host;
+ 
++	if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_DISABLE_HS400 &&
++	    host->mmc->caps2 & (MMC_CAP2_HS400 | MMC_CAP2_HS400_ES)) {
++		dev_info(dev, "HS400 mode not supported on this silicon revision, disabling it\n");
++		host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);
++	}
++
+ 	ret = __sdhci_add_host(host);
+ 	if (ret)
+ 		goto err_cleanup_host;
+@@ -883,6 +891,12 @@ static int sdhci_am654_get_of_property(struct platform_device *pdev,
+ 	return 0;
+ }
+ 
++static const struct soc_device_attribute sdhci_am654_descope_hs400[] = {
++	{ .family = "AM62PX", .revision = "SR1.0" },
++	{ .family = "AM62PX", .revision = "SR1.1" },
++	{ /* sentinel */ }
++};
++
+ static const struct of_device_id sdhci_am654_of_match[] = {
+ 	{
+ 		.compatible = "ti,am654-sdhci-5.1",
+@@ -975,6 +989,10 @@ static int sdhci_am654_probe(struct platform_device *pdev)
+ 		goto err_pltfm_free;
+ 	}
+ 
++	soc = soc_device_match(sdhci_am654_descope_hs400);
++	if (soc)
++		sdhci_am654->quirks |= SDHCI_AM654_QUIRK_DISABLE_HS400;
++
+ 	host->mmc_host_ops.start_signal_voltage_switch = sdhci_am654_start_signal_voltage_switch;
+ 	host->mmc_host_ops.execute_tuning = sdhci_am654_execute_tuning;
+ 
+diff --git a/drivers/most/core.c b/drivers/most/core.c
+index a635d5082ebb64..da319d108ea1df 100644
+--- a/drivers/most/core.c
++++ b/drivers/most/core.c
+@@ -538,8 +538,8 @@ static struct most_channel *get_channel(char *mdev, char *mdev_ch)
+ 	dev = bus_find_device_by_name(&mostbus, NULL, mdev);
+ 	if (!dev)
+ 		return NULL;
+-	put_device(dev);
+ 	iface = dev_get_drvdata(dev);
++	put_device(dev);
+ 	list_for_each_entry_safe(c, tmp, &iface->p->channel_list, list) {
+ 		if (!strcmp(dev_name(&c->dev), mdev_ch))
+ 			return c;
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index d579d5dd60d66e..df61db8ce46659 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -503,6 +503,8 @@ static int dma_xfer(struct fsmc_nand_data *host, void *buffer, int len,
+ 
+ 	dma_dev = chan->device;
+ 	dma_addr = dma_map_single(dma_dev->dev, buffer, len, direction);
++	if (dma_mapping_error(dma_dev->dev, dma_addr))
++		return -EINVAL;
+ 
+ 	if (direction == DMA_TO_DEVICE) {
+ 		dma_src = dma_addr;
+diff --git a/drivers/mtd/nand/raw/renesas-nand-controller.c b/drivers/mtd/nand/raw/renesas-nand-controller.c
+index 44f6603736d19b..ac8c1b80d7be96 100644
+--- a/drivers/mtd/nand/raw/renesas-nand-controller.c
++++ b/drivers/mtd/nand/raw/renesas-nand-controller.c
+@@ -426,6 +426,9 @@ static int rnandc_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
+ 	/* Configure DMA */
+ 	dma_addr = dma_map_single(rnandc->dev, rnandc->buf, mtd->writesize,
+ 				  DMA_FROM_DEVICE);
++	if (dma_mapping_error(rnandc->dev, dma_addr))
++		return -ENOMEM;
++
+ 	writel(dma_addr, rnandc->regs + DMA_ADDR_LOW_REG);
+ 	writel(mtd->writesize, rnandc->regs + DMA_CNT_REG);
+ 	writel(DMA_TLVL_MAX, rnandc->regs + DMA_TLVL_REG);
+@@ -606,6 +609,9 @@ static int rnandc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 	/* Configure DMA */
+ 	dma_addr = dma_map_single(rnandc->dev, (void *)rnandc->buf, mtd->writesize,
+ 				  DMA_TO_DEVICE);
++	if (dma_mapping_error(rnandc->dev, dma_addr))
++		return -ENOMEM;
++
+ 	writel(dma_addr, rnandc->regs + DMA_ADDR_LOW_REG);
+ 	writel(mtd->writesize, rnandc->regs + DMA_CNT_REG);
+ 	writel(DMA_TLVL_MAX, rnandc->regs + DMA_TLVL_REG);
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index c411fe9be3ef81..b90f15c986a317 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -688,7 +688,10 @@ int spinand_write_page(struct spinand_device *spinand,
+ 			   SPINAND_WRITE_INITIAL_DELAY_US,
+ 			   SPINAND_WRITE_POLL_DELAY_US,
+ 			   &status);
+-	if (!ret && (status & STATUS_PROG_FAILED))
++	if (ret)
++		return ret;
++
++	if (status & STATUS_PROG_FAILED)
+ 		return -EIO;
+ 
+ 	return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req);
+diff --git a/drivers/mtd/spi-nor/swp.c b/drivers/mtd/spi-nor/swp.c
+index 9c9328478d8a5b..9b07f83aeac76d 100644
+--- a/drivers/mtd/spi-nor/swp.c
++++ b/drivers/mtd/spi-nor/swp.c
+@@ -56,7 +56,6 @@ static u64 spi_nor_get_min_prot_length_sr(struct spi_nor *nor)
+ static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs,
+ 					u64 *len)
+ {
+-	struct mtd_info *mtd = &nor->mtd;
+ 	u64 min_prot_len;
+ 	u8 mask = spi_nor_get_sr_bp_mask(nor);
+ 	u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
+@@ -77,13 +76,13 @@ static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs,
+ 	min_prot_len = spi_nor_get_min_prot_length_sr(nor);
+ 	*len = min_prot_len << (bp - 1);
+ 
+-	if (*len > mtd->size)
+-		*len = mtd->size;
++	if (*len > nor->params->size)
++		*len = nor->params->size;
+ 
+ 	if (nor->flags & SNOR_F_HAS_SR_TB && sr & tb_mask)
+ 		*ofs = 0;
+ 	else
+-		*ofs = mtd->size - *len;
++		*ofs = nor->params->size - *len;
+ }
+ 
+ /*
+@@ -158,7 +157,6 @@ static bool spi_nor_is_unlocked_sr(struct spi_nor *nor, loff_t ofs, u64 len,
+  */
+ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len)
+ {
+-	struct mtd_info *mtd = &nor->mtd;
+ 	u64 min_prot_len;
+ 	int ret, status_old, status_new;
+ 	u8 mask = spi_nor_get_sr_bp_mask(nor);
+@@ -183,7 +181,7 @@ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len)
+ 		can_be_bottom = false;
+ 
+ 	/* If anything above us is unlocked, we can't use 'top' protection */
+-	if (!spi_nor_is_locked_sr(nor, ofs + len, mtd->size - (ofs + len),
++	if (!spi_nor_is_locked_sr(nor, ofs + len, nor->params->size - (ofs + len),
+ 				  status_old))
+ 		can_be_top = false;
+ 
+@@ -195,11 +193,11 @@ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len)
+ 
+ 	/* lock_len: length of region that should end up locked */
+ 	if (use_top)
+-		lock_len = mtd->size - ofs;
++		lock_len = nor->params->size - ofs;
+ 	else
+ 		lock_len = ofs + len;
+ 
+-	if (lock_len == mtd->size) {
++	if (lock_len == nor->params->size) {
+ 		val = mask;
+ 	} else {
+ 		min_prot_len = spi_nor_get_min_prot_length_sr(nor);
+@@ -248,7 +246,6 @@ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len)
+  */
+ static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len)
+ {
+-	struct mtd_info *mtd = &nor->mtd;
+ 	u64 min_prot_len;
+ 	int ret, status_old, status_new;
+ 	u8 mask = spi_nor_get_sr_bp_mask(nor);
+@@ -273,7 +270,7 @@ static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len)
+ 		can_be_top = false;
+ 
+ 	/* If anything above us is locked, we can't use 'bottom' protection */
+-	if (!spi_nor_is_unlocked_sr(nor, ofs + len, mtd->size - (ofs + len),
++	if (!spi_nor_is_unlocked_sr(nor, ofs + len, nor->params->size - (ofs + len),
+ 				    status_old))
+ 		can_be_bottom = false;
+ 
+@@ -285,7 +282,7 @@ static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len)
+ 
+ 	/* lock_len: length of region that should remain locked */
+ 	if (use_top)
+-		lock_len = mtd->size - (ofs + len);
++		lock_len = nor->params->size - (ofs + len);
+ 	else
+ 		lock_len = ofs;
+ 
+diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
+index c6807e473ab706..4c2560ae8866a1 100644
+--- a/drivers/net/bonding/bond_3ad.c
++++ b/drivers/net/bonding/bond_3ad.c
+@@ -95,13 +95,13 @@ static int ad_marker_send(struct port *port, struct bond_marker *marker);
+ static void ad_mux_machine(struct port *port, bool *update_slave_arr);
+ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port);
+ static void ad_tx_machine(struct port *port);
+-static void ad_periodic_machine(struct port *port, struct bond_params *bond_params);
++static void ad_periodic_machine(struct port *port);
+ static void ad_port_selection_logic(struct port *port, bool *update_slave_arr);
+ static void ad_agg_selection_logic(struct aggregator *aggregator,
+ 				   bool *update_slave_arr);
+ static void ad_clear_agg(struct aggregator *aggregator);
+ static void ad_initialize_agg(struct aggregator *aggregator);
+-static void ad_initialize_port(struct port *port, int lacp_fast);
++static void ad_initialize_port(struct port *port, const struct bond_params *bond_params);
+ static void ad_enable_collecting(struct port *port);
+ static void ad_disable_distributing(struct port *port,
+ 				    bool *update_slave_arr);
+@@ -1296,10 +1296,16 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
+ 			 * case of EXPIRED even if LINK_DOWN didn't arrive for
+ 			 * the port.
+ 			 */
+-			port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION;
+ 			port->sm_vars &= ~AD_PORT_MATCHED;
++			/* Based on IEEE 802.1AX-2014, Figure 6-18 - Receive
++			 * machine state diagram, the state should be
++			 * Partner_Oper_Port_State.Synchronization = FALSE;
++			 * Partner_Oper_Port_State.LACP_Timeout = Short Timeout;
++			 * start current_while_timer(Short Timeout);
++			 * Actor_Oper_Port_State.Expired = TRUE;
++			 */
++			port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION;
+ 			port->partner_oper.port_state |= LACP_STATE_LACP_TIMEOUT;
+-			port->partner_oper.port_state |= LACP_STATE_LACP_ACTIVITY;
+ 			port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT));
+ 			port->actor_oper_port_state |= LACP_STATE_EXPIRED;
+ 			port->sm_vars |= AD_PORT_CHURNED;
+@@ -1405,11 +1411,10 @@ static void ad_tx_machine(struct port *port)
+ /**
+  * ad_periodic_machine - handle a port's periodic state machine
+  * @port: the port we're looking at
+- * @bond_params: bond parameters we will use
+  *
+  * Turn ntt flag on periodically to perform periodic transmission of LACPDUs.
+  */
+-static void ad_periodic_machine(struct port *port, struct bond_params *bond_params)
++static void ad_periodic_machine(struct port *port)
+ {
+ 	periodic_states_t last_state;
+ 
+@@ -1418,8 +1423,7 @@ static void ad_periodic_machine(struct port *port, struct bond_params *bond_para
+ 
+ 	/* check if port was reinitialized */
+ 	if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) ||
+-	    (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) ||
+-	    !bond_params->lacp_active) {
++	    (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY))) {
+ 		port->sm_periodic_state = AD_NO_PERIODIC;
+ 	}
+ 	/* check if state machine should change state */
+@@ -1943,16 +1947,16 @@ static void ad_initialize_agg(struct aggregator *aggregator)
+ /**
+  * ad_initialize_port - initialize a given port's parameters
+  * @port: the port we're looking at
+- * @lacp_fast: boolean. whether fast periodic should be used
++ * @bond_params: bond parameters we will use
+  */
+-static void ad_initialize_port(struct port *port, int lacp_fast)
++static void ad_initialize_port(struct port *port, const struct bond_params *bond_params)
+ {
+ 	static const struct port_params tmpl = {
+ 		.system_priority = 0xffff,
+ 		.key             = 1,
+ 		.port_number     = 1,
+ 		.port_priority   = 0xff,
+-		.port_state      = 1,
++		.port_state      = 0,
+ 	};
+ 	static const struct lacpdu lacpdu = {
+ 		.subtype		= 0x01,
+@@ -1970,12 +1974,14 @@ static void ad_initialize_port(struct port *port, int lacp_fast)
+ 		port->actor_port_priority = 0xff;
+ 		port->actor_port_aggregator_identifier = 0;
+ 		port->ntt = false;
+-		port->actor_admin_port_state = LACP_STATE_AGGREGATION |
+-					       LACP_STATE_LACP_ACTIVITY;
+-		port->actor_oper_port_state  = LACP_STATE_AGGREGATION |
+-					       LACP_STATE_LACP_ACTIVITY;
++		port->actor_admin_port_state = LACP_STATE_AGGREGATION;
++		port->actor_oper_port_state  = LACP_STATE_AGGREGATION;
++		if (bond_params->lacp_active) {
++			port->actor_admin_port_state |= LACP_STATE_LACP_ACTIVITY;
++			port->actor_oper_port_state  |= LACP_STATE_LACP_ACTIVITY;
++		}
+ 
+-		if (lacp_fast)
++		if (bond_params->lacp_fast)
+ 			port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT;
+ 
+ 		memcpy(&port->partner_admin, &tmpl, sizeof(tmpl));
+@@ -2187,7 +2193,7 @@ void bond_3ad_bind_slave(struct slave *slave)
+ 		/* port initialization */
+ 		port = &(SLAVE_AD_INFO(slave)->port);
+ 
+-		ad_initialize_port(port, bond->params.lacp_fast);
++		ad_initialize_port(port, &bond->params);
+ 
+ 		port->slave = slave;
+ 		port->actor_port_number = SLAVE_AD_INFO(slave)->id;
+@@ -2499,7 +2505,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
+ 		}
+ 
+ 		ad_rx_machine(NULL, port);
+-		ad_periodic_machine(port, &bond->params);
++		ad_periodic_machine(port);
+ 		ad_port_selection_logic(port, &update_slave_arr);
+ 		ad_mux_machine(port, &update_slave_arr);
+ 		ad_tx_machine(port);
+@@ -2869,6 +2875,31 @@ void bond_3ad_update_lacp_rate(struct bonding *bond)
+ 	spin_unlock_bh(&bond->mode_lock);
+ }
+ 
++/**
++ * bond_3ad_update_lacp_active - update the LACP activity state
++ * @bond: bonding struct
++ *
++ * Update actor_oper_port_state when lacp_active is modified.
++ */
++void bond_3ad_update_lacp_active(struct bonding *bond)
++{
++	struct port *port = NULL;
++	struct list_head *iter;
++	struct slave *slave;
++	int lacp_active;
++
++	lacp_active = bond->params.lacp_active;
++	spin_lock_bh(&bond->mode_lock);
++	bond_for_each_slave(bond, slave, iter) {
++		port = &(SLAVE_AD_INFO(slave)->port);
++		if (lacp_active)
++			port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY;
++		else
++			port->actor_oper_port_state &= ~LACP_STATE_LACP_ACTIVITY;
++	}
++	spin_unlock_bh(&bond->mode_lock);
++}
++
+ size_t bond_3ad_stats_size(void)
+ {
+ 	return nla_total_size_64bit(sizeof(u64)) + /* BOND_3AD_STAT_LACPDU_RX */
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 91893c29b8995b..28c53f1b13826f 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1637,6 +1637,7 @@ static int bond_option_lacp_active_set(struct bonding *bond,
+ 	netdev_dbg(bond->dev, "Setting LACP active to %s (%llu)\n",
+ 		   newval->string, newval->value);
+ 	bond->params.lacp_active = newval->value;
++	bond_3ad_update_lacp_active(bond);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 7c142c17b3f69d..adef7aa327ceb0 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -2347,6 +2347,12 @@ static void ksz_update_port_member(struct ksz_device *dev, int port)
+ 		dev->dev_ops->cfg_port_member(dev, i, val | cpu_port);
+ 	}
+ 
++	/* HSR ports are setup once so need to use the assigned membership
++	 * when the port is enabled.
++	 */
++	/* HSR ports are set up once, so the assigned membership must be used
++	    (dev->hsr_ports & BIT(port)))
++		port_member = dev->hsr_ports;
+ 	dev->dev_ops->cfg_port_member(dev, port, port_member | cpu_port);
+ }
+ 
+diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c
+index 7832fe8fc2021d..af6e4d4c0ecea3 100644
+--- a/drivers/net/ethernet/airoha/airoha_ppe.c
++++ b/drivers/net/ethernet/airoha/airoha_ppe.c
+@@ -726,10 +726,8 @@ static void airoha_ppe_foe_insert_entry(struct airoha_ppe *ppe,
+ 			continue;
+ 		}
+ 
+-		if (commit_done || !airoha_ppe_foe_compare_entry(e, hwe)) {
+-			e->hash = 0xffff;
++		if (!airoha_ppe_foe_compare_entry(e, hwe))
+ 			continue;
+-		}
+ 
+ 		airoha_ppe_foe_commit_entry(ppe, &e->data, hash);
+ 		commit_done = true;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 25681c2343fb46..ec8752c298e693 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5325,7 +5325,7 @@ static void bnxt_free_ntp_fltrs(struct bnxt *bp, bool all)
+ {
+ 	int i;
+ 
+-	netdev_assert_locked(bp->dev);
++	netdev_assert_locked_or_invisible(bp->dev);
+ 
+ 	/* Under netdev instance lock and all our NAPIs have been disabled.
+ 	 * It's safe to delete the hash table.
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index d1aeb722d48f30..36a6d766b63863 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -2726,6 +2726,8 @@ static void gve_shutdown(struct pci_dev *pdev)
+ 	struct gve_priv *priv = netdev_priv(netdev);
+ 	bool was_up = netif_running(priv->dev);
+ 
++	netif_device_detach(netdev);
++
+ 	rtnl_lock();
+ 	netdev_lock(netdev);
+ 	if (was_up && gve_close(priv->dev)) {
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 031c332f66c471..1b4465d6b2b726 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -7115,6 +7115,13 @@ static int igc_probe(struct pci_dev *pdev,
+ 	adapter->port_num = hw->bus.func;
+ 	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
+ 
++	/* PCI config space info */
++	hw->vendor_id = pdev->vendor;
++	hw->device_id = pdev->device;
++	hw->revision_id = pdev->revision;
++	hw->subsystem_vendor_id = pdev->subsystem_vendor;
++	hw->subsystem_device_id = pdev->subsystem_device;
++
+ 	/* Disable ASPM L1.2 on I226 devices to avoid packet loss */
+ 	if (igc_is_device_id_i226(hw))
+ 		pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
+@@ -7141,13 +7148,6 @@ static int igc_probe(struct pci_dev *pdev,
+ 	netdev->mem_start = pci_resource_start(pdev, 0);
+ 	netdev->mem_end = pci_resource_end(pdev, 0);
+ 
+-	/* PCI config space info */
+-	hw->vendor_id = pdev->vendor;
+-	hw->device_id = pdev->device;
+-	hw->revision_id = pdev->revision;
+-	hw->subsystem_vendor_id = pdev->subsystem_vendor;
+-	hw->subsystem_device_id = pdev->subsystem_device;
+-
+ 	/* Copy the default MAC and PHY function pointers */
+ 	memcpy(&hw->mac.ops, ei->mac_ops, sizeof(hw->mac.ops));
+ 	memcpy(&hw->phy.ops, ei->phy_ops, sizeof(hw->phy.ops));
+diff --git a/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c b/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c
+index 54f1b83dfe42e0..d227f4d2a2d17a 100644
+--- a/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c
++++ b/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c
+@@ -543,6 +543,7 @@ int ixgbe_devlink_register_port(struct ixgbe_adapter *adapter)
+ 
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	attrs.phys.port_number = adapter->hw.bus.func;
++	attrs.no_phys_port_name = 1;
+ 	ixgbe_devlink_set_switch_id(adapter, &attrs.switch_id);
+ 
+ 	devlink_port_attrs_set(devlink_port, &attrs);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index ac58964b2f087e..7b941505a9d024 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -398,7 +398,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+ 	dma_addr_t dma;
+ 	u32 cmd_type;
+ 
+-	while (budget-- > 0) {
++	while (likely(budget)) {
+ 		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
+ 			work_done = false;
+ 			break;
+@@ -433,6 +433,8 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+ 		xdp_ring->next_to_use++;
+ 		if (xdp_ring->next_to_use == xdp_ring->count)
+ 			xdp_ring->next_to_use = 0;
++
++		budget--;
+ 	}
+ 
+ 	if (tx_desc) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+index 1b765045aa636b..b56395ac5a7439 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+@@ -606,8 +606,8 @@ static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf)
+ 		if (!npc_check_field(rvu, blkaddr, NPC_LB, intf))
+ 			*features &= ~BIT_ULL(NPC_OUTER_VID);
+ 
+-	/* Set SPI flag only if AH/ESP and IPSEC_SPI are in the key */
+-	if (npc_check_field(rvu, blkaddr, NPC_IPSEC_SPI, intf) &&
++	/* Allow extracting SPI field from AH and ESP headers at same offset */
++	if (npc_is_field_present(rvu, NPC_IPSEC_SPI, intf) &&
+ 	    (*features & (BIT_ULL(NPC_IPPROTO_ESP) | BIT_ULL(NPC_IPPROTO_AH))))
+ 		*features |= BIT_ULL(NPC_IPSEC_SPI);
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+index c855fb799ce145..e9bd3274198379 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+@@ -101,7 +101,9 @@ mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_i
+ 	if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED))
+ 		return -1;
+ 
++	rcu_read_lock();
+ 	err = dev_fill_forward_path(dev, addr, &stack);
++	rcu_read_unlock();
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h b/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h
+index b59aee75de94e2..2c98a5299df337 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h
+@@ -26,7 +26,6 @@ struct mlx5e_dcbx {
+ 	u8                         cap;
+ 
+ 	/* Buffer configuration */
+-	bool                       manual_buffer;
+ 	u32                        cable_len;
+ 	u32                        xoff;
+ 	u16                        port_buff_cell_sz;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 5ae787656a7ca0..3efa8bf1d14ef4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -272,8 +272,8 @@ static int port_update_shared_buffer(struct mlx5_core_dev *mdev,
+ 	/* Total shared buffer size is split in a ratio of 3:1 between
+ 	 * lossy and lossless pools respectively.
+ 	 */
+-	lossy_epool_size = (shared_buffer_size / 4) * 3;
+ 	lossless_ipool_size = shared_buffer_size / 4;
++	lossy_epool_size    = shared_buffer_size - lossless_ipool_size;
+ 
+ 	mlx5e_port_set_sbpr(mdev, 0, MLX5_EGRESS_DIR, MLX5_LOSSY_POOL, 0,
+ 			    lossy_epool_size);
+@@ -288,14 +288,12 @@ static int port_set_buffer(struct mlx5e_priv *priv,
+ 	u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz;
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	int sz = MLX5_ST_SZ_BYTES(pbmc_reg);
+-	u32 new_headroom_size = 0;
+-	u32 current_headroom_size;
++	u32 current_headroom_cells = 0;
++	u32 new_headroom_cells = 0;
+ 	void *in;
+ 	int err;
+ 	int i;
+ 
+-	current_headroom_size = port_buffer->headroom_size;
+-
+ 	in = kzalloc(sz, GFP_KERNEL);
+ 	if (!in)
+ 		return -ENOMEM;
+@@ -306,12 +304,14 @@ static int port_set_buffer(struct mlx5e_priv *priv,
+ 
+ 	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
+ 		void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]);
++		current_headroom_cells += MLX5_GET(bufferx_reg, buffer, size);
++
+ 		u64 size = port_buffer->buffer[i].size;
+ 		u64 xoff = port_buffer->buffer[i].xoff;
+ 		u64 xon = port_buffer->buffer[i].xon;
+ 
+-		new_headroom_size += size;
+ 		do_div(size, port_buff_cell_sz);
++		new_headroom_cells += size;
+ 		do_div(xoff, port_buff_cell_sz);
+ 		do_div(xon, port_buff_cell_sz);
+ 		MLX5_SET(bufferx_reg, buffer, size, size);
+@@ -320,10 +320,8 @@ static int port_set_buffer(struct mlx5e_priv *priv,
+ 		MLX5_SET(bufferx_reg, buffer, xon_threshold, xon);
+ 	}
+ 
+-	new_headroom_size /= port_buff_cell_sz;
+-	current_headroom_size /= port_buff_cell_sz;
+-	err = port_update_shared_buffer(priv->mdev, current_headroom_size,
+-					new_headroom_size);
++	err = port_update_shared_buffer(priv->mdev, current_headroom_cells,
++					new_headroom_cells);
+ 	if (err)
+ 		goto out;
+ 
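The reordered pool split above also fixes a rounding leak: deriving the lossy pool as three quarters via (size / 4) * 3 can strand up to three cells in neither pool, while size minus the lossless share accounts for every cell. The same hunk also moves the headroom bookkeeping to cell units before the comparison, so both arguments of port_update_shared_buffer() are in the same unit. A standalone sketch of the rounding difference, with a made-up size:

#include <assert.h>

int main(void)
{
	unsigned int shared = 10;	/* hypothetical size, in cells */

	/* old split: integer division drops the remainder twice */
	unsigned int old_lossy    = (shared / 4) * 3;	/* 6 */
	unsigned int old_lossless = shared / 4;		/* 2 -> 8 of 10 */

	/* new split: the remainder lands in the lossy pool */
	unsigned int new_lossless = shared / 4;			/* 2 */
	unsigned int new_lossy    = shared - new_lossless;	/* 8 */

	assert(old_lossy + old_lossless == 8);		/* 2 cells lost */
	assert(new_lossy + new_lossless == shared);	/* all accounted */
	return 0;
}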
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c
+index a4263137fef5a0..01d522b0294707 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c
+@@ -173,6 +173,8 @@ static void mlx5_ct_fs_hmfs_fill_rule_actions(struct mlx5_ct_fs_hmfs *fs_hmfs,
+ 
+ 	memset(rule_actions, 0, NUM_CT_HMFS_RULES * sizeof(*rule_actions));
+ 	rule_actions[0].action = mlx5_fc_get_hws_action(fs_hmfs->ctx, attr->counter);
++	rule_actions[0].counter.offset =
++		attr->counter->id - attr->counter->bulk->base_id;
+ 	/* Modify header is special, it may require extra arguments outside the action itself. */
+ 	if (mh_action->mh_data) {
+ 		rule_actions[1].modify_header.offset = mh_action->mh_data->offset;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index 5fe016e477b37e..d166c0d5189e19 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -362,6 +362,7 @@ static int mlx5e_dcbnl_ieee_getpfc(struct net_device *dev,
+ static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev,
+ 				   struct ieee_pfc *pfc)
+ {
++	u8 buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN;
+ 	struct mlx5e_priv *priv = netdev_priv(dev);
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	u32 old_cable_len = priv->dcbx.cable_len;
+@@ -389,7 +390,14 @@ static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev,
+ 
+ 	if (MLX5_BUFFER_SUPPORTED(mdev)) {
+ 		pfc_new.pfc_en = (changed & MLX5E_PORT_BUFFER_PFC) ? pfc->pfc_en : curr_pfc_en;
+-		if (priv->dcbx.manual_buffer)
++		ret = mlx5_query_port_buffer_ownership(mdev,
++						       &buffer_ownership);
++		if (ret)
++			netdev_err(dev,
++				   "%s, Failed to get buffer ownership: %d\n",
++				   __func__, ret);
++
++		if (buffer_ownership == MLX5_BUF_OWNERSHIP_SW_OWNED)
+ 			ret = mlx5e_port_manual_buffer_config(priv, changed,
+ 							      dev->mtu, &pfc_new,
+ 							      NULL, NULL);
+@@ -982,7 +990,6 @@ static int mlx5e_dcbnl_setbuffer(struct net_device *dev,
+ 	if (!changed)
+ 		return 0;
+ 
+-	priv->dcbx.manual_buffer = true;
+ 	err = mlx5e_port_manual_buffer_config(priv, changed, dev->mtu, NULL,
+ 					      buffer_size, prio2buffer);
+ 	return err;
+@@ -1252,7 +1259,6 @@ void mlx5e_dcbnl_initialize(struct mlx5e_priv *priv)
+ 		priv->dcbx.cap |= DCB_CAP_DCBX_HOST;
+ 
+ 	priv->dcbx.port_buff_cell_sz = mlx5e_query_port_buffers_cell_size(priv);
+-	priv->dcbx.manual_buffer = false;
+ 	priv->dcbx.cable_len = MLX5E_DEFAULT_CABLE_LEN;
+ 
+ 	mlx5e_ets_init(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
+index b7102e14d23d3b..c33accadae0f01 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
+@@ -47,10 +47,12 @@ static void mlx5_esw_offloads_pf_vf_devlink_port_attrs_set(struct mlx5_eswitch *
+ 		devlink_port_attrs_pci_vf_set(dl_port, controller_num, pfnum,
+ 					      vport_num - 1, external);
+ 	}  else if (mlx5_core_is_ec_vf_vport(esw->dev, vport_num)) {
++		u16 base_vport = mlx5_core_ec_vf_vport_base(dev);
++
+ 		memcpy(dl_port->attrs.switch_id.id, ppid.id, ppid.id_len);
+ 		dl_port->attrs.switch_id.id_len = ppid.id_len;
+ 		devlink_port_attrs_pci_vf_set(dl_port, 0, pfnum,
+-					      vport_num - 1, false);
++					      vport_num - base_vport, false);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index 2e02bdea8361db..c2f6d205ddb1e8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -358,6 +358,8 @@ int mlx5_query_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *out);
+ int mlx5_set_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *in);
+ int mlx5_set_trust_state(struct mlx5_core_dev *mdev, u8 trust_state);
+ int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state);
++int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev,
++				     u8 *buffer_ownership);
+ int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio);
+ int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index 549f1066d2a508..2d7adf7444ba29 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -968,6 +968,26 @@ int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state)
+ 	return err;
+ }
+ 
++int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev,
++				     u8 *buffer_ownership)
++{
++	u32 out[MLX5_ST_SZ_DW(pfcc_reg)] = {};
++	int err;
++
++	if (!MLX5_CAP_PCAM_FEATURE(mdev, buffer_ownership)) {
++		*buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN;
++		return 0;
++	}
++
++	err = mlx5_query_pfcc_reg(mdev, out, sizeof(out));
++	if (err)
++		return err;
++
++	*buffer_ownership = MLX5_GET(pfcc_reg, out, buf_ownership);
++
++	return 0;
++}
++
+ int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio)
+ {
+ 	int sz = MLX5_ST_SZ_BYTES(qpdpm_reg);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c
+index ca7501c5746886..14e79579c719c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c
+@@ -1328,11 +1328,11 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
+ {
+ 	struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx;
+ 	struct mlx5hws_matcher *matcher = bwc_matcher->matcher;
+-	bool move_error = false, poll_error = false;
+ 	u16 bwc_queues = mlx5hws_bwc_queues(ctx);
+ 	struct mlx5hws_bwc_rule *tmp_bwc_rule;
+ 	struct mlx5hws_rule_attr rule_attr;
+ 	struct mlx5hws_table *isolated_tbl;
++	int move_error = 0, poll_error = 0;
+ 	struct mlx5hws_rule *tmp_rule;
+ 	struct list_head *rules_list;
+ 	u32 expected_completions = 1;
+@@ -1391,11 +1391,15 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
+ 			ret = mlx5hws_matcher_resize_rule_move(matcher,
+ 							       tmp_rule,
+ 							       &rule_attr);
+-			if (unlikely(ret && !move_error)) {
+-				mlx5hws_err(ctx,
+-					    "Moving complex BWC rule failed (%d), attempting to move rest of the rules\n",
+-					    ret);
+-				move_error = true;
++			if (unlikely(ret)) {
++				if (!move_error) {
++					mlx5hws_err(ctx,
++						    "Moving complex BWC rule: move failed (%d), attempting to move rest of the rules\n",
++						    ret);
++					move_error = ret;
++				}
++				/* Rule wasn't queued, no need to poll */
++				continue;
+ 			}
+ 
+ 			expected_completions = 1;
+@@ -1403,11 +1407,19 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
+ 						     rule_attr.queue_id,
+ 						     &expected_completions,
+ 						     true);
+-			if (unlikely(ret && !poll_error)) {
+-				mlx5hws_err(ctx,
+-					    "Moving complex BWC rule: poll failed (%d), attempting to move rest of the rules\n",
+-					    ret);
+-				poll_error = true;
++			if (unlikely(ret)) {
++				if (ret == -ETIMEDOUT) {
++					mlx5hws_err(ctx,
++						    "Moving complex BWC rule: timeout polling for completions (%d), aborting rehash\n",
++						    ret);
++					return ret;
++				}
++				if (!poll_error) {
++					mlx5hws_err(ctx,
++						    "Moving complex BWC rule: polling for completions failed (%d), attempting to move rest of the rules\n",
++						    ret);
++					poll_error = ret;
++				}
+ 			}
+ 
+ 			/* Done moving the rule to the new matcher,
+@@ -1422,8 +1434,11 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
+ 		}
+ 	}
+ 
+-	if (move_error || poll_error)
+-		ret = -EINVAL;
++	/* Return the first error that happened */
++	if (unlikely(move_error))
++		return move_error;
++	if (unlikely(poll_error))
++		return poll_error;
+ 
+ 	return ret;
+ }
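The hunks above replace the boolean failed-once flags with the usual "remember the first errno, keep going" idiom, abort outright on -ETIMEDOUT (once polling times out, later completions will never arrive), and skip polling for a rule whose move was never queued. A generic sketch of the idiom; do_step() is a hypothetical stand-in for the per-rule operations:

static int do_step(int i);	/* hypothetical per-item operation */

static int move_all(int n)
{
	int first_err = 0;
	int i, ret;

	for (i = 0; i < n; i++) {
		ret = do_step(i);
		if (ret && !first_err)
			first_err = ret; /* log once, keep the first errno */
		/* keep going so the remaining items are still moved */
	}

	return first_err;	/* 0 only if every step succeeded */
}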
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c
+index 9c83753e459243..0bdcab2e5cf3a6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c
+@@ -55,6 +55,7 @@ int mlx5hws_cmd_flow_table_create(struct mlx5_core_dev *mdev,
+ 
+ 	MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE);
+ 	MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type);
++	MLX5_SET(create_flow_table_in, in, uid, ft_attr->uid);
+ 
+ 	ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context);
+ 	MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h
+index fa6bff210266cb..122ccc671628de 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h
+@@ -36,6 +36,7 @@ struct mlx5hws_cmd_set_fte_attr {
+ struct mlx5hws_cmd_ft_create_attr {
+ 	u8 type;
+ 	u8 level;
++	u16 uid;
+ 	bool rtc_valid;
+ 	bool decap_en;
+ 	bool reformat_en;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+index bf4643d0ce1790..47e3947e7b512f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+@@ -267,6 +267,7 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,
+ 
+ 	tbl_attr.type = MLX5HWS_TABLE_TYPE_FDB;
+ 	tbl_attr.level = ft_attr->level;
++	tbl_attr.uid = ft_attr->uid;
+ 	tbl = mlx5hws_table_create(ctx, &tbl_attr);
+ 	if (!tbl) {
+ 		mlx5_core_err(ns->dev, "Failed creating hws flow_table\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
+index ce28ee1c0e41bd..6000f2c641e083 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
+@@ -85,6 +85,7 @@ static int hws_matcher_create_end_ft_isolated(struct mlx5hws_matcher *matcher)
+ 
+ 	ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,
+ 					      tbl,
++					      0,
+ 					      &matcher->end_ft_id);
+ 	if (ret) {
+ 		mlx5hws_err(tbl->ctx, "Isolated matcher: failed to create end flow table\n");
+@@ -112,7 +113,9 @@ static int hws_matcher_create_end_ft(struct mlx5hws_matcher *matcher)
+ 	if (mlx5hws_matcher_is_isolated(matcher))
+ 		ret = hws_matcher_create_end_ft_isolated(matcher);
+ 	else
+-		ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl,
++		ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,
++						      tbl,
++						      0,
+ 						      &matcher->end_ft_id);
+ 
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+index d8ac6c196211c9..a2fe2f9e832d26 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+@@ -75,6 +75,7 @@ struct mlx5hws_context_attr {
+ struct mlx5hws_table_attr {
+ 	enum mlx5hws_table_type type;
+ 	u32 level;
++	u16 uid;
+ };
+ 
+ enum mlx5hws_matcher_flow_src {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c
+index c4b22be19a9b10..b0595c9b09e421 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c
+@@ -964,7 +964,6 @@ static int hws_send_ring_open_cq(struct mlx5_core_dev *mdev,
+ 		return -ENOMEM;
+ 
+ 	MLX5_SET(cqc, cqc_data, uar_page, mdev->priv.uar->index);
+-	MLX5_SET(cqc, cqc_data, cqe_sz, queue->num_entries);
+ 	MLX5_SET(cqc, cqc_data, log_cq_size, ilog2(queue->num_entries));
+ 
+ 	err = hws_send_ring_alloc_cq(mdev, numa_node, queue, cqc_data, cq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c
+index 568f691733f349..6113383ae47bbc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c
+@@ -9,6 +9,7 @@ u32 mlx5hws_table_get_id(struct mlx5hws_table *tbl)
+ }
+ 
+ static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl,
++					u16 uid,
+ 					struct mlx5hws_cmd_ft_create_attr *ft_attr)
+ {
+ 	ft_attr->type = tbl->fw_ft_type;
+@@ -16,7 +17,9 @@ static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl,
+ 		ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1;
+ 	else
+ 		ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1;
++
+ 	ft_attr->rtc_valid = true;
++	ft_attr->uid = uid;
+ }
+ 
+ static void hws_table_set_cap_attr(struct mlx5hws_table *tbl,
+@@ -119,12 +122,12 @@ static int hws_table_connect_to_default_miss_tbl(struct mlx5hws_table *tbl, u32
+ 
+ int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev,
+ 				    struct mlx5hws_table *tbl,
+-				    u32 *ft_id)
++				    u16 uid, u32 *ft_id)
+ {
+ 	struct mlx5hws_cmd_ft_create_attr ft_attr = {0};
+ 	int ret;
+ 
+-	hws_table_init_next_ft_attr(tbl, &ft_attr);
++	hws_table_init_next_ft_attr(tbl, uid, &ft_attr);
+ 	hws_table_set_cap_attr(tbl, &ft_attr);
+ 
+ 	ret = mlx5hws_cmd_flow_table_create(mdev, &ft_attr, ft_id);
+@@ -189,7 +192,10 @@ static int hws_table_init(struct mlx5hws_table *tbl)
+ 	}
+ 
+ 	mutex_lock(&ctx->ctrl_lock);
+-	ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl, &tbl->ft_id);
++	ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,
++					      tbl,
++					      tbl->uid,
++					      &tbl->ft_id);
+ 	if (ret) {
+ 		mlx5hws_err(tbl->ctx, "Failed to create flow table object\n");
+ 		mutex_unlock(&ctx->ctrl_lock);
+@@ -239,6 +245,7 @@ struct mlx5hws_table *mlx5hws_table_create(struct mlx5hws_context *ctx,
+ 	tbl->ctx = ctx;
+ 	tbl->type = attr->type;
+ 	tbl->level = attr->level;
++	tbl->uid = attr->uid;
+ 
+ 	ret = hws_table_init(tbl);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h
+index 0400cce0c317f7..1246f9bd84222f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h
+@@ -18,6 +18,7 @@ struct mlx5hws_table {
+ 	enum mlx5hws_table_type type;
+ 	u32 fw_ft_type;
+ 	u32 level;
++	u16 uid;
+ 	struct list_head matchers_list;
+ 	struct list_head tbl_list_node;
+ 	struct mlx5hws_default_miss default_miss;
+@@ -47,7 +48,7 @@ u32 mlx5hws_table_get_res_fw_ft_type(enum mlx5hws_table_type tbl_type,
+ 
+ int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev,
+ 				    struct mlx5hws_table *tbl,
+-				    u32 *ft_id);
++				    u16 uid, u32 *ft_id);
+ 
+ void mlx5hws_table_destroy_default_ft(struct mlx5hws_table *tbl,
+ 				      u32 ft_id);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 618957d6566364..9a2d64a0a8588a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2375,6 +2375,8 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = {
+ 			     ROUTER_EXP, false),
+ 	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_DIP_LINK_LOCAL, FORWARD,
+ 			     ROUTER_EXP, false),
++	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_SIP_LINK_LOCAL, FORWARD,
++			     ROUTER_EXP, false),
+ 	/* Multicast Router Traps */
+ 	MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false),
+ 	MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false),
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/trap.h b/drivers/net/ethernet/mellanox/mlxsw/trap.h
+index 80ee5c4825dc96..9962dc1579019b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/trap.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/trap.h
+@@ -94,6 +94,7 @@ enum {
+ 	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_SIP_BC = 0x16A,
+ 	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_DIP_LOCAL_NET = 0x16B,
+ 	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_DIP_LINK_LOCAL = 0x16C,
++	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_SIP_LINK_LOCAL = 0x16D,
+ 	MLXSW_TRAP_ID_DISCARD_ROUTER_IRIF_EN = 0x178,
+ 	MLXSW_TRAP_ID_DISCARD_ROUTER_ERIF_EN = 0x179,
+ 	MLXSW_TRAP_ID_DISCARD_ROUTER_LPM4 = 0x17B,
+diff --git a/drivers/net/ethernet/microchip/lan865x/lan865x.c b/drivers/net/ethernet/microchip/lan865x/lan865x.c
+index dd436bdff0f86d..84c41f19356126 100644
+--- a/drivers/net/ethernet/microchip/lan865x/lan865x.c
++++ b/drivers/net/ethernet/microchip/lan865x/lan865x.c
+@@ -32,6 +32,10 @@
+ /* MAC Specific Addr 1 Top Reg */
+ #define LAN865X_REG_MAC_H_SADDR1	0x00010023
+ 
++/* MAC TSU Timer Increment Register */
++#define LAN865X_REG_MAC_TSU_TIMER_INCR		0x00010077
++#define MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS	0x0028
++
+ struct lan865x_priv {
+ 	struct work_struct multicast_work;
+ 	struct net_device *netdev;
+@@ -311,6 +315,8 @@ static int lan865x_net_open(struct net_device *netdev)
+ 
+ 	phy_start(netdev->phydev);
+ 
++	netif_start_queue(netdev);
++
+ 	return 0;
+ }
+ 
+@@ -344,6 +350,21 @@ static int lan865x_probe(struct spi_device *spi)
+ 		goto free_netdev;
+ 	}
+ 
++	/* LAN865x Rev.B0/B1 configuration parameters from AN1760.
++	 * As per the Configuration Application Note AN1760, published at
++	 * https://www.microchip.com/en-us/application-notes/an1760,
++	 * Revision F (DS60001760G - June 2024), configure the MAC to time
++	 * stamp at the end of the Start of Frame Delimiter (SFD) and set the
++	 * Timer Increment reg to 40 ns, to be used as a 25 MHz internal clock.
++	 */
++	ret = oa_tc6_write_register(priv->tc6, LAN865X_REG_MAC_TSU_TIMER_INCR,
++				    MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS);
++	if (ret) {
++		dev_err(&spi->dev, "Failed to config TSU Timer Incr reg: %d\n",
++			ret);
++		goto oa_tc6_exit;
++	}
++
+ 	/* As per the point s3 in the below errata, SPI receive Ethernet frame
+ 	 * transfer may halt when starting the next frame in the same data block
+ 	 * (chunk) as the end of a previous frame. The RFA field should be
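The 0x0028 programmed above is simply the TSU tick period in nanoseconds: a 25 MHz clock ticks every 1e9 / 25e6 = 40 ns, and 40 decimal is 0x28. A one-line compile-time check of the arithmetic:

/* period[ns] = 1e9 / f[Hz]; 1000000000 / 25000000 = 40 = 0x28 */
_Static_assert(1000000000 / 25000000 == 0x28, "40 ns TSU tick");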
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase.h b/drivers/net/ethernet/realtek/rtase/rtase.h
+index 498cfe4d0cac3a..5f2e1ab6a10080 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase.h
++++ b/drivers/net/ethernet/realtek/rtase/rtase.h
+@@ -241,7 +241,7 @@ union rtase_rx_desc {
+ #define RTASE_RX_RES        BIT(20)
+ #define RTASE_RX_RUNT       BIT(19)
+ #define RTASE_RX_RWT        BIT(18)
+-#define RTASE_RX_CRC        BIT(16)
++#define RTASE_RX_CRC        BIT(17)
+ #define RTASE_RX_V6F        BIT(31)
+ #define RTASE_RX_V4F        BIT(30)
+ #define RTASE_RX_UDPT       BIT(29)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
+index f2946bea0bc268..6c6c49e4b66fae 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c
+@@ -152,7 +152,7 @@ static int thead_set_clk_tx_rate(void *bsp_priv, struct clk *clk_tx_i,
+ static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat)
+ {
+ 	struct thead_dwmac *dwmac = plat->bsp_priv;
+-	u32 reg;
++	u32 reg, div;
+ 
+ 	switch (plat->mac_interface) {
+ 	case PHY_INTERFACE_MODE_MII:
+@@ -164,6 +164,13 @@ static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat)
+ 	case PHY_INTERFACE_MODE_RGMII_RXID:
+ 	case PHY_INTERFACE_MODE_RGMII_TXID:
+ 		/* use pll */
++		div = clk_get_rate(plat->stmmac_clk) / rgmii_clock(SPEED_1000);
++		reg = FIELD_PREP(GMAC_PLLCLK_DIV_EN, 1) |
++		      FIELD_PREP(GMAC_PLLCLK_DIV_NUM, div);
++
++		writel(0, dwmac->apb_base + GMAC_PLLCLK_DIV);
++		writel(reg, dwmac->apb_base + GMAC_PLLCLK_DIV);
++
+ 		writel(GMAC_GTXCLK_SEL_PLL, dwmac->apb_base + GMAC_GTXCLK_SEL);
+ 		reg = GMAC_TX_CLK_EN | GMAC_TX_CLK_N_EN | GMAC_TX_CLK_OUT_EN |
+ 		      GMAC_RX_CLK_EN | GMAC_RX_CLK_N_EN;
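The divider programmed above derives the 125 MHz RGMII transmit clock from the PLL: rgmii_clock(SPEED_1000) returns 125000000, so for example a 500 MHz stmmac_clk (a hypothetical rate) gives div = 4; the zero write first clears the divider register before the enable bit and the new ratio are latched in one store. The arithmetic, as a compile-time check:

/* div = pll_rate / rgmii_rate, e.g. 500 MHz / 125 MHz = 4 */
_Static_assert(500000000 / 125000000 == 4, "RGMII GTX divider");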
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index 008d7772740078..f436d7cf565a14 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -240,6 +240,44 @@ static void prueth_emac_stop(struct prueth *prueth)
+ 	}
+ }
+ 
++static void icssg_enable_fw_offload(struct prueth *prueth)
++{
++	struct prueth_emac *emac;
++	int mac;
++
++	for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
++		emac = prueth->emac[mac];
++		if (prueth->is_hsr_offload_mode) {
++			if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM)
++				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE);
++			else
++				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE);
++		}
++
++		if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) {
++			if (netif_running(emac->ndev)) {
++				icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan,
++						  ICSSG_FDB_ENTRY_P0_MEMBERSHIP |
++						  ICSSG_FDB_ENTRY_P1_MEMBERSHIP |
++						  ICSSG_FDB_ENTRY_P2_MEMBERSHIP |
++						  ICSSG_FDB_ENTRY_BLOCK,
++						  true);
++				icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID,
++						  BIT(emac->port_id) | DEFAULT_PORT_MASK,
++						  BIT(emac->port_id) | DEFAULT_UNTAG_MASK,
++						  true);
++				if (prueth->is_hsr_offload_mode)
++					icssg_vtbl_modify(emac, DEFAULT_VID,
++							  DEFAULT_PORT_MASK,
++							  DEFAULT_UNTAG_MASK, true);
++				icssg_set_pvid(prueth, emac->port_vlan, emac->port_id);
++				if (prueth->is_switch_mode)
++					icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE);
++			}
++		}
++	}
++}
++
+ static int prueth_emac_common_start(struct prueth *prueth)
+ {
+ 	struct prueth_emac *emac;
+@@ -790,6 +828,7 @@ static int emac_ndo_open(struct net_device *ndev)
+ 		ret = prueth_emac_common_start(prueth);
+ 		if (ret)
+ 			goto free_rx_irq;
++		icssg_enable_fw_offload(prueth);
+ 	}
+ 
+ 	flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;
+@@ -1397,8 +1436,7 @@ static int prueth_emac_restart(struct prueth *prueth)
+ 
+ static void icssg_change_mode(struct prueth *prueth)
+ {
+-	struct prueth_emac *emac;
+-	int mac, ret;
++	int ret;
+ 
+ 	ret = prueth_emac_restart(prueth);
+ 	if (ret) {
+@@ -1406,35 +1444,7 @@ static void icssg_change_mode(struct prueth *prueth)
+ 		return;
+ 	}
+ 
+-	for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
+-		emac = prueth->emac[mac];
+-		if (prueth->is_hsr_offload_mode) {
+-			if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM)
+-				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE);
+-			else
+-				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE);
+-		}
+-
+-		if (netif_running(emac->ndev)) {
+-			icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan,
+-					  ICSSG_FDB_ENTRY_P0_MEMBERSHIP |
+-					  ICSSG_FDB_ENTRY_P1_MEMBERSHIP |
+-					  ICSSG_FDB_ENTRY_P2_MEMBERSHIP |
+-					  ICSSG_FDB_ENTRY_BLOCK,
+-					  true);
+-			icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID,
+-					  BIT(emac->port_id) | DEFAULT_PORT_MASK,
+-					  BIT(emac->port_id) | DEFAULT_UNTAG_MASK,
+-					  true);
+-			if (prueth->is_hsr_offload_mode)
+-				icssg_vtbl_modify(emac, DEFAULT_VID,
+-						  DEFAULT_PORT_MASK,
+-						  DEFAULT_UNTAG_MASK, true);
+-			icssg_set_pvid(prueth, emac->port_vlan, emac->port_id);
+-			if (prueth->is_switch_mode)
+-				icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE);
+-		}
+-	}
++	icssg_enable_fw_offload(prueth);
+ }
+ 
+ static int prueth_netdevice_port_link(struct net_device *ndev,
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 6011d7eae0c78a..0d8a05fe541afb 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -1160,6 +1160,7 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
+ 	struct axienet_local *lp = data;
+ 	struct sk_buff *skb;
+ 	u32 *app_metadata;
++	int i;
+ 
+ 	skbuf_dma = axienet_get_rx_desc(lp, lp->rx_ring_tail++);
+ 	skb = skbuf_dma->skb;
+@@ -1178,7 +1179,10 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
+ 	u64_stats_add(&lp->rx_packets, 1);
+ 	u64_stats_add(&lp->rx_bytes, rx_len);
+ 	u64_stats_update_end(&lp->rx_stat_sync);
+-	axienet_rx_submit_desc(lp->ndev);
++
++	for (i = 0; i < CIRC_SPACE(lp->rx_ring_head, lp->rx_ring_tail,
++				   RX_BUF_NUM_DEFAULT); i++)
++		axienet_rx_submit_desc(lp->ndev);
+ 	dma_async_issue_pending(lp->rx_chan);
+ }
+ 
+@@ -1457,7 +1461,6 @@ static void axienet_rx_submit_desc(struct net_device *ndev)
+ 	if (!skbuf_dma)
+ 		return;
+ 
+-	lp->rx_ring_head++;
+ 	skb = netdev_alloc_skb(ndev, lp->max_frm_size);
+ 	if (!skb)
+ 		return;
+@@ -1482,6 +1485,7 @@ static void axienet_rx_submit_desc(struct net_device *ndev)
+ 	skbuf_dma->desc = dma_rx_desc;
+ 	dma_rx_desc->callback_param = lp;
+ 	dma_rx_desc->callback_result = axienet_dma_rx_cb;
++	lp->rx_ring_head++;
+ 	dmaengine_submit(dma_rx_desc);
+ 
+ 	return;
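Two things change above: the completion callback now tops the ring back up to capacity instead of resubmitting a single descriptor, and rx_ring_head advances only after a descriptor is fully set up, so a failed skb allocation or mapping no longer consumes a ring slot. CIRC_SPACE(head, tail, size) counts the free slots of a power-of-two ring, keeping one slot empty to distinguish full from empty. A standalone check with illustrative indices:

#include <assert.h>

/* as defined in include/linux/circ_buf.h */
#define CIRC_SPACE(head, tail, size) \
	(((tail) - ((head) + 1)) & ((size) - 1))

int main(void)
{
	/* ring of 8 slots, producer at 5, consumer at 2: slots 2..4
	 * hold in-flight buffers, one slot stays empty, 4 are free.
	 */
	assert(CIRC_SPACE(5, 2, 8) == 4);
	return 0;
}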
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index 6a3d8a754eb8de..58c6d47fbe046d 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -362,6 +362,13 @@ struct vsc85xx_hw_stat {
+ 	u16 mask;
+ };
+ 
++struct vsc8531_skb_cb {
++	u32 ns;
++};
++
++#define VSC8531_SKB_CB(skb) \
++	((struct vsc8531_skb_cb *)((skb)->cb))
++
+ struct vsc8531_private {
+ 	int rate_magic;
+ 	u16 supp_led_modes;
+@@ -410,6 +417,11 @@ struct vsc8531_private {
+ 	 */
+ 	struct mutex ts_lock;
+ 	struct mutex phc_lock;
++
++	/* list of skbs that were received and need timestamp information
++	 * but have not received it yet
++	 */
++	struct sk_buff_head rx_skbs_list;
+ };
+ 
+ /* Shared structure between the PHYs of the same package.
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index 7ff975efd8e7af..c3209cf00e9607 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -2336,6 +2336,13 @@ static int vsc85xx_probe(struct phy_device *phydev)
+ 	return vsc85xx_dt_led_modes_get(phydev, default_mode);
+ }
+ 
++static void vsc85xx_remove(struct phy_device *phydev)
++{
++	struct vsc8531_private *priv = phydev->priv;
++
++	skb_queue_purge(&priv->rx_skbs_list);
++}
++
+ /* Microsemi VSC85xx PHYs */
+ static struct phy_driver vsc85xx_driver[] = {
+ {
+@@ -2590,6 +2597,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ 	.config_intr    = &vsc85xx_config_intr,
+ 	.suspend	= &genphy_suspend,
+ 	.resume		= &genphy_resume,
++	.remove		= &vsc85xx_remove,
+ 	.probe		= &vsc8574_probe,
+ 	.set_wol	= &vsc85xx_wol_set,
+ 	.get_wol	= &vsc85xx_wol_get,
+@@ -2615,6 +2623,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ 	.config_intr    = &vsc85xx_config_intr,
+ 	.suspend	= &genphy_suspend,
+ 	.resume		= &genphy_resume,
++	.remove		= &vsc85xx_remove,
+ 	.probe		= &vsc8574_probe,
+ 	.set_wol	= &vsc85xx_wol_set,
+ 	.get_wol	= &vsc85xx_wol_get,
+@@ -2640,6 +2649,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ 	.config_intr    = &vsc85xx_config_intr,
+ 	.suspend	= &genphy_suspend,
+ 	.resume		= &genphy_resume,
++	.remove		= &vsc85xx_remove,
+ 	.probe		= &vsc8584_probe,
+ 	.get_tunable	= &vsc85xx_get_tunable,
+ 	.set_tunable	= &vsc85xx_set_tunable,
+@@ -2663,6 +2673,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ 	.config_intr    = &vsc85xx_config_intr,
+ 	.suspend	= &genphy_suspend,
+ 	.resume		= &genphy_resume,
++	.remove		= &vsc85xx_remove,
+ 	.probe		= &vsc8584_probe,
+ 	.get_tunable	= &vsc85xx_get_tunable,
+ 	.set_tunable	= &vsc85xx_set_tunable,
+@@ -2686,6 +2697,7 @@ static struct phy_driver vsc85xx_driver[] = {
+ 	.config_intr    = &vsc85xx_config_intr,
+ 	.suspend	= &genphy_suspend,
+ 	.resume		= &genphy_resume,
++	.remove		= &vsc85xx_remove,
+ 	.probe		= &vsc8584_probe,
+ 	.get_tunable	= &vsc85xx_get_tunable,
+ 	.set_tunable	= &vsc85xx_set_tunable,
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index 275706de5847cd..de6c7312e8f290 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -1194,9 +1194,7 @@ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts,
+ {
+ 	struct vsc8531_private *vsc8531 =
+ 		container_of(mii_ts, struct vsc8531_private, mii_ts);
+-	struct skb_shared_hwtstamps *shhwtstamps = NULL;
+ 	struct vsc85xx_ptphdr *ptphdr;
+-	struct timespec64 ts;
+ 	unsigned long ns;
+ 
+ 	if (!vsc8531->ptp->configured)
+@@ -1206,27 +1204,52 @@ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts,
+ 	    type == PTP_CLASS_NONE)
+ 		return false;
+ 
+-	vsc85xx_gettime(&vsc8531->ptp->caps, &ts);
+-
+ 	ptphdr = get_ptp_header_rx(skb, vsc8531->ptp->rx_filter);
+ 	if (!ptphdr)
+ 		return false;
+ 
+-	shhwtstamps = skb_hwtstamps(skb);
+-	memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+-
+ 	ns = ntohl(ptphdr->rsrvd2);
+ 
+-	/* nsec is in reserved field */
+-	if (ts.tv_nsec < ns)
+-		ts.tv_sec--;
++	VSC8531_SKB_CB(skb)->ns = ns;
++	skb_queue_tail(&vsc8531->rx_skbs_list, skb);
+ 
+-	shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ns);
+-	netif_rx(skb);
++	ptp_schedule_worker(vsc8531->ptp->ptp_clock, 0);
+ 
+ 	return true;
+ }
+ 
++static long vsc85xx_do_aux_work(struct ptp_clock_info *info)
++{
++	struct vsc85xx_ptp *ptp = container_of(info, struct vsc85xx_ptp, caps);
++	struct skb_shared_hwtstamps *shhwtstamps = NULL;
++	struct phy_device *phydev = ptp->phydev;
++	struct vsc8531_private *priv = phydev->priv;
++	struct sk_buff_head received;
++	struct sk_buff *rx_skb;
++	struct timespec64 ts;
++	unsigned long flags;
++
++	__skb_queue_head_init(&received);
++	spin_lock_irqsave(&priv->rx_skbs_list.lock, flags);
++	skb_queue_splice_tail_init(&priv->rx_skbs_list, &received);
++	spin_unlock_irqrestore(&priv->rx_skbs_list.lock, flags);
++
++	vsc85xx_gettime(info, &ts);
++	while ((rx_skb = __skb_dequeue(&received)) != NULL) {
++		shhwtstamps = skb_hwtstamps(rx_skb);
++		memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
++
++		if (ts.tv_nsec < VSC8531_SKB_CB(rx_skb)->ns)
++			ts.tv_sec--;
++
++		shhwtstamps->hwtstamp = ktime_set(ts.tv_sec,
++						  VSC8531_SKB_CB(rx_skb)->ns);
++		netif_rx(rx_skb);
++	}
++
++	return -1;
++}
++
+ static const struct ptp_clock_info vsc85xx_clk_caps = {
+ 	.owner		= THIS_MODULE,
+ 	.name		= "VSC85xx timer",
+@@ -1240,6 +1263,7 @@ static const struct ptp_clock_info vsc85xx_clk_caps = {
+ 	.adjfine	= &vsc85xx_adjfine,
+ 	.gettime64	= &vsc85xx_gettime,
+ 	.settime64	= &vsc85xx_settime,
++	.do_aux_work	= &vsc85xx_do_aux_work,
+ };
+ 
+ static struct vsc8531_private *vsc8584_base_priv(struct phy_device *phydev)
+@@ -1567,6 +1591,7 @@ int vsc8584_ptp_probe(struct phy_device *phydev)
+ 
+ 	mutex_init(&vsc8531->phc_lock);
+ 	mutex_init(&vsc8531->ts_lock);
++	skb_queue_head_init(&vsc8531->rx_skbs_list);
+ 
+ 	/* Retrieve the shared load/save GPIO. Request it as non exclusive as
+ 	 * the same GPIO can be requested by all the PHYs of the same package.
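The rxtstamp hook above now only parks the skb and kicks the PTP auxiliary worker; the worker splices the whole queue under the list lock, reads the PHC once, and reconstructs each timestamp from it (returning -1 from do_aux_work means no periodic reschedule; the worker is re-armed per packet by ptp_schedule_worker()). The PHY stamps only a nanosecond value into the reserved PTP header field, so the seconds come from the PHC read, minus one whenever the PHC's nanosecond counter has already wrapped past the stamped value. A worked example with made-up numbers:

/* PHC read:  ts = 100 s + 10 ns
 * skb stamp: ns = 999999990 (nanoseconds only, from the PTP header)
 *
 * 10 < 999999990, so the second rolled over between the hardware
 * stamp and the PHC read: the packet time is 99 s + 999999990 ns.
 */
if (ts.tv_nsec < ns)
	ts.tv_sec--;
hwtstamp = ktime_set(ts.tv_sec, ns);	/* 99.999999990 s */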
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index def84e87e05b2e..5e7672d2022c92 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -33,6 +33,7 @@
+ #include <linux/ppp_channel.h>
+ #include <linux/ppp-comp.h>
+ #include <linux/skbuff.h>
++#include <linux/rculist.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/if_arp.h>
+ #include <linux/ip.h>
+@@ -1612,11 +1613,14 @@ static int ppp_fill_forward_path(struct net_device_path_ctx *ctx,
+ 	if (ppp->flags & SC_MULTILINK)
+ 		return -EOPNOTSUPP;
+ 
+-	if (list_empty(&ppp->channels))
++	pch = list_first_or_null_rcu(&ppp->channels, struct channel, clist);
++	if (!pch)
++		return -ENODEV;
++
++	chan = READ_ONCE(pch->chan);
++	if (!chan)
+ 		return -ENODEV;
+ 
+-	pch = list_first_entry(&ppp->channels, struct channel, clist);
+-	chan = pch->chan;
+ 	if (!chan->ops->fill_forward_path)
+ 		return -EOPNOTSUPP;
+ 
+@@ -2999,7 +3003,7 @@ ppp_unregister_channel(struct ppp_channel *chan)
+ 	 */
+ 	down_write(&pch->chan_sem);
+ 	spin_lock_bh(&pch->downl);
+-	pch->chan = NULL;
++	WRITE_ONCE(pch->chan, NULL);
+ 	spin_unlock_bh(&pch->downl);
+ 	up_write(&pch->chan_sem);
+ 	ppp_disconnect_channel(pch);
+@@ -3509,7 +3513,7 @@ ppp_connect_channel(struct channel *pch, int unit)
+ 	hdrlen = pch->file.hdrlen + 2;	/* for protocol bytes */
+ 	if (hdrlen > ppp->dev->hard_header_len)
+ 		ppp->dev->hard_header_len = hdrlen;
+-	list_add_tail(&pch->clist, &ppp->channels);
++	list_add_tail_rcu(&pch->clist, &ppp->channels);
+ 	++ppp->n_channels;
+ 	pch->ppp = ppp;
+ 	refcount_inc(&ppp->file.refcnt);
+@@ -3539,10 +3543,11 @@ ppp_disconnect_channel(struct channel *pch)
+ 	if (ppp) {
+ 		/* remove it from the ppp unit's list */
+ 		ppp_lock(ppp);
+-		list_del(&pch->clist);
++		list_del_rcu(&pch->clist);
+ 		if (--ppp->n_channels == 0)
+ 			wake_up_interruptible(&ppp->file.rwait);
+ 		ppp_unlock(ppp);
++		synchronize_net();
+ 		if (refcount_dec_and_test(&ppp->file.refcnt))
+ 			ppp_destroy_interface(ppp);
+ 		err = 0;
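The conversion above lets ppp_fill_forward_path(), which runs on the packet fast path, look at the first channel without taking the ppp lock: readers use the _rcu list accessor plus READ_ONCE() on pch->chan, while writers pair list_del_rcu() (and the NULL store in ppp_unregister_channel()) with synchronize_net() so a channel is only torn down once every in-flight reader is done. A condensed sketch of the pairing, with the surrounding locking elided:

/* reader side, under rcu_read_lock() (dev_fill_forward_path() runs
 * inside an RCU read-side critical section):
 */
pch = list_first_or_null_rcu(&ppp->channels, struct channel, clist);
if (pch)
	chan = READ_ONCE(pch->chan);	/* may be NULL mid-teardown */

/* writer side: unpublish, then wait out the readers */
list_del_rcu(&pch->clist);
synchronize_net();	/* RCU grace period used by networking */
/* no reader can still see pch; safe to drop the reference */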
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index d9f5942ccc447b..792ddda1ad493d 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -676,7 +676,7 @@ static int ax88772_init_mdio(struct usbnet *dev)
+ 	priv->mdio->read = &asix_mdio_bus_read;
+ 	priv->mdio->write = &asix_mdio_bus_write;
+ 	priv->mdio->name = "Asix MDIO Bus";
+-	priv->mdio->phy_mask = ~(BIT(priv->phy_addr) | BIT(AX_EMBD_PHY_ADDR));
++	priv->mdio->phy_mask = ~(BIT(priv->phy_addr & 0x1f) | BIT(AX_EMBD_PHY_ADDR));
+ 	/* mii bus name is usb-<usb bus number>-<usb device number> */
+ 	snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d",
+ 		 dev->udev->bus->busnum, dev->udev->devnum);
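The added mask guards against a PHY address read back from a misprogrammed EEPROM: BIT(n) shifts 1UL, so an address above 31 is undefined behaviour on 32-bit builds and silently truncated when stored into the 32-bit phy_mask on 64-bit ones. Clamping with & 0x1f keeps the shift inside the 0-31 range an MDIO bus address can actually occupy. Illustrative values (an address of 33 is hypothetical):

u8  phy_addr = 33;	/* bogus value from a bad EEPROM */
u32 phy_mask = ~(BIT(phy_addr & 0x1f) | BIT(AX_EMBD_PHY_ADDR));
			/* BIT(33 & 0x1f) == BIT(1): in range */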
+diff --git a/drivers/net/wireless/ath/ath11k/ce.c b/drivers/net/wireless/ath/ath11k/ce.c
+index 746038006eb465..6a4895310159dd 100644
+--- a/drivers/net/wireless/ath/ath11k/ce.c
++++ b/drivers/net/wireless/ath/ath11k/ce.c
+@@ -393,9 +393,6 @@ static int ath11k_ce_completed_recv_next(struct ath11k_ce_pipe *pipe,
+ 		goto err;
+ 	}
+ 
+-	/* Make sure descriptor is read after the head pointer. */
+-	dma_rmb();
+-
+ 	*nbytes = ath11k_hal_ce_dst_status_get_length(desc);
+ 
+ 	*skb = pipe->dest_ring->skb[sw_index];
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 9230a965f6f0eb..065fc40e254166 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2650,9 +2650,6 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+ try_again:
+ 	ath11k_hal_srng_access_begin(ab, srng);
+ 
+-	/* Make sure descriptor is read after the head pointer. */
+-	dma_rmb();
+-
+ 	while (likely(desc =
+ 	      (struct hal_reo_dest_ring *)ath11k_hal_srng_dst_get_next_entry(ab,
+ 									     srng))) {
+diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
+index cab11a35f9115d..d8b066946a090f 100644
+--- a/drivers/net/wireless/ath/ath11k/hal.c
++++ b/drivers/net/wireless/ath/ath11k/hal.c
+@@ -823,13 +823,23 @@ u32 *ath11k_hal_srng_src_peek(struct ath11k_base *ab, struct hal_srng *srng)
+ 
+ void ath11k_hal_srng_access_begin(struct ath11k_base *ab, struct hal_srng *srng)
+ {
++	u32 hp;
++
+ 	lockdep_assert_held(&srng->lock);
+ 
+ 	if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 		srng->u.src_ring.cached_tp =
+ 			*(volatile u32 *)srng->u.src_ring.tp_addr;
+ 	} else {
+-		srng->u.dst_ring.cached_hp = READ_ONCE(*srng->u.dst_ring.hp_addr);
++		hp = READ_ONCE(*srng->u.dst_ring.hp_addr);
++
++		if (hp != srng->u.dst_ring.cached_hp) {
++			srng->u.dst_ring.cached_hp = hp;
++			/* Make sure descriptor is read after the head
++			 * pointer.
++			 */
++			dma_rmb();
++		}
+ 
+ 		/* Try to prefetch the next descriptor in the ring */
+ 		if (srng->flags & HAL_SRNG_FLAGS_CACHED)
+@@ -844,7 +854,6 @@ void ath11k_hal_srng_access_end(struct ath11k_base *ab, struct hal_srng *srng)
+ {
+ 	lockdep_assert_held(&srng->lock);
+ 
+-	/* TODO: See if we need a write memory barrier here */
+ 	if (srng->flags & HAL_SRNG_FLAGS_LMAC_RING) {
+ 		/* For LMAC rings, ring pointer updates are done through FW and
+ 		 * hence written to a shared memory location that is read by FW
+@@ -852,21 +861,37 @@ void ath11k_hal_srng_access_end(struct ath11k_base *ab, struct hal_srng *srng)
+ 		if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 			srng->u.src_ring.last_tp =
+ 				*(volatile u32 *)srng->u.src_ring.tp_addr;
+-			*srng->u.src_ring.hp_addr = srng->u.src_ring.hp;
++			/* Make sure descriptor is written before updating the
++			 * head pointer.
++			 */
++			dma_wmb();
++			WRITE_ONCE(*srng->u.src_ring.hp_addr, srng->u.src_ring.hp);
+ 		} else {
+ 			srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
+-			*srng->u.dst_ring.tp_addr = srng->u.dst_ring.tp;
++			/* Make sure descriptor is read before updating the
++			 * tail pointer.
++			 */
++			dma_mb();
++			WRITE_ONCE(*srng->u.dst_ring.tp_addr, srng->u.dst_ring.tp);
+ 		}
+ 	} else {
+ 		if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 			srng->u.src_ring.last_tp =
+ 				*(volatile u32 *)srng->u.src_ring.tp_addr;
++			/* Assume the implementation uses an MMIO write accessor
++			 * which has the required wmb() so that the descriptor
++			 * is written before updating the head pointer.
++			 */
+ 			ath11k_hif_write32(ab,
+ 					   (unsigned long)srng->u.src_ring.hp_addr -
+ 					   (unsigned long)ab->mem,
+ 					   srng->u.src_ring.hp);
+ 		} else {
+ 			srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
++			/* Make sure descriptor is read before updating the
++			 * tail pointer.
++			 */
++			mb();
+ 			ath11k_hif_write32(ab,
+ 					   (unsigned long)srng->u.dst_ring.tp_addr -
+ 					   (unsigned long)ab->mem,
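These ath11k hunks (the ath12k changes below apply the same pattern) formalise the barrier pairing for rings shared with the device: a consumer must not read descriptors until after it has observed the new head pointer, and a producer must not publish a new head pointer until its descriptor writes are visible; the tail-pointer update likewise needs the descriptor reads completed first. That pairing is what makes the per-call dma_rmb() removed from ce.c and dp_rx.c redundant. A condensed sketch of the idea, not the full driver code:

/* consumer (CPU draining a destination ring) */
hp = READ_ONCE(*ring->hp_addr);
if (hp != ring->cached_hp) {
	ring->cached_hp = hp;
	dma_rmb();	/* descriptor reads ordered after the hp read */
}
/* ... parse descriptors up to cached_hp ... */
dma_mb();		/* reads complete before the tp is published */
WRITE_ONCE(*ring->tp_addr, ring->tp);

/* producer (CPU filling a source ring) */
/* ... write the descriptor contents ... */
dma_wmb();		/* descriptor visible before the new hp */
WRITE_ONCE(*ring->hp_addr, ring->hp);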
+diff --git a/drivers/net/wireless/ath/ath12k/ce.c b/drivers/net/wireless/ath/ath12k/ce.c
+index 3f3439262cf47e..f7c15b547504d5 100644
+--- a/drivers/net/wireless/ath/ath12k/ce.c
++++ b/drivers/net/wireless/ath/ath12k/ce.c
+@@ -433,9 +433,6 @@ static int ath12k_ce_completed_recv_next(struct ath12k_ce_pipe *pipe,
+ 		goto err;
+ 	}
+ 
+-	/* Make sure descriptor is read after the head pointer. */
+-	dma_rmb();
+-
+ 	*nbytes = ath12k_hal_ce_dst_status_get_length(desc);
+ 
+ 	*skb = pipe->dest_ring->skb[sw_index];
+diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
+index a301898e5849ad..4ef8b4e99c25f7 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.c
++++ b/drivers/net/wireless/ath/ath12k/hal.c
+@@ -2143,13 +2143,24 @@ void *ath12k_hal_srng_src_get_next_reaped(struct ath12k_base *ab,
+ 
+ void ath12k_hal_srng_access_begin(struct ath12k_base *ab, struct hal_srng *srng)
+ {
++	u32 hp;
++
+ 	lockdep_assert_held(&srng->lock);
+ 
+-	if (srng->ring_dir == HAL_SRNG_DIR_SRC)
++	if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 		srng->u.src_ring.cached_tp =
+ 			*(volatile u32 *)srng->u.src_ring.tp_addr;
+-	else
+-		srng->u.dst_ring.cached_hp = READ_ONCE(*srng->u.dst_ring.hp_addr);
++	} else {
++		hp = READ_ONCE(*srng->u.dst_ring.hp_addr);
++
++		if (hp != srng->u.dst_ring.cached_hp) {
++			srng->u.dst_ring.cached_hp = hp;
++			/* Make sure descriptor is read after the head
++			 * pointer.
++			 */
++			dma_rmb();
++		}
++	}
+ }
+ 
+ /* Update cached ring head/tail pointers to HW. ath12k_hal_srng_access_begin()
+@@ -2159,7 +2170,6 @@ void ath12k_hal_srng_access_end(struct ath12k_base *ab, struct hal_srng *srng)
+ {
+ 	lockdep_assert_held(&srng->lock);
+ 
+-	/* TODO: See if we need a write memory barrier here */
+ 	if (srng->flags & HAL_SRNG_FLAGS_LMAC_RING) {
+ 		/* For LMAC rings, ring pointer updates are done through FW and
+ 		 * hence written to a shared memory location that is read by FW
+@@ -2167,21 +2177,37 @@ void ath12k_hal_srng_access_end(struct ath12k_base *ab, struct hal_srng *srng)
+ 		if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 			srng->u.src_ring.last_tp =
+ 				*(volatile u32 *)srng->u.src_ring.tp_addr;
+-			*srng->u.src_ring.hp_addr = srng->u.src_ring.hp;
++			/* Make sure descriptor is written before updating the
++			 * head pointer.
++			 */
++			dma_wmb();
++			WRITE_ONCE(*srng->u.src_ring.hp_addr, srng->u.src_ring.hp);
+ 		} else {
+ 			srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
+-			*srng->u.dst_ring.tp_addr = srng->u.dst_ring.tp;
++			/* Make sure descriptor is read before updating the
++			 * tail pointer.
++			 */
++			dma_mb();
++			WRITE_ONCE(*srng->u.dst_ring.tp_addr, srng->u.dst_ring.tp);
+ 		}
+ 	} else {
+ 		if (srng->ring_dir == HAL_SRNG_DIR_SRC) {
+ 			srng->u.src_ring.last_tp =
+ 				*(volatile u32 *)srng->u.src_ring.tp_addr;
++			/* Assume the implementation uses an MMIO write accessor
++			 * which has the required wmb() so that the descriptor
++			 * is written before updating the head pointer.
++			 */
+ 			ath12k_hif_write32(ab,
+ 					   (unsigned long)srng->u.src_ring.hp_addr -
+ 					   (unsigned long)ab->mem,
+ 					   srng->u.src_ring.hp);
+ 		} else {
+ 			srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr;
++			/* Make sure descriptor is read before updating the
++			 * tail pointer.
++			 */
++			mb();
+ 			ath12k_hif_write32(ab,
+ 					   (unsigned long)srng->u.dst_ring.tp_addr -
+ 					   (unsigned long)ab->mem,
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+index d0faba24056105..b4bba67a45ec36 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c
+@@ -919,7 +919,7 @@ void wlc_lcnphy_read_table(struct brcms_phy *pi, struct phytbl_info *pti)
+ 
+ static void
+ wlc_lcnphy_common_read_table(struct brcms_phy *pi, u32 tbl_id,
+-			     const u16 *tbl_ptr, u32 tbl_len,
++			     u16 *tbl_ptr, u32 tbl_len,
+ 			     u32 tbl_width, u32 tbl_offset)
+ {
+ 	struct phytbl_info tab;
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 5a38cfaf989b1c..fda03512944d52 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -860,7 +860,6 @@ static int imx95_pcie_core_reset(struct imx_pcie *imx_pcie, bool assert)
+ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
+ {
+ 	reset_control_assert(imx_pcie->pciephy_reset);
+-	reset_control_assert(imx_pcie->apps_reset);
+ 
+ 	if (imx_pcie->drvdata->core_reset)
+ 		imx_pcie->drvdata->core_reset(imx_pcie, true);
+@@ -872,7 +871,6 @@ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
+ static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie)
+ {
+ 	reset_control_deassert(imx_pcie->pciephy_reset);
+-	reset_control_deassert(imx_pcie->apps_reset);
+ 
+ 	if (imx_pcie->drvdata->core_reset)
+ 		imx_pcie->drvdata->core_reset(imx_pcie, false);
+@@ -1247,6 +1245,9 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp)
+ 		}
+ 	}
+ 
++	/* Make sure that PCIe LTSSM is cleared */
++	imx_pcie_ltssm_disable(dev);
++
+ 	ret = imx_pcie_deassert_core_reset(imx_pcie);
+ 	if (ret < 0) {
+ 		dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
+@@ -1385,6 +1386,8 @@ static const struct pci_epc_features imx8m_pcie_epc_features = {
+ 	.msix_capable = false,
+ 	.bar[BAR_1] = { .type = BAR_RESERVED, },
+ 	.bar[BAR_3] = { .type = BAR_RESERVED, },
++	.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, },
++	.bar[BAR_5] = { .type = BAR_RESERVED, },
+ 	.align = SZ_64K,
+ };
+ 
+@@ -1465,9 +1468,6 @@ static int imx_add_pcie_ep(struct imx_pcie *imx_pcie,
+ 
+ 	pci_epc_init_notify(ep->epc);
+ 
+-	/* Start LTSSM. */
+-	imx_pcie_ltssm_enable(dev);
+-
+ 	return 0;
+ }
+ 
+@@ -1912,7 +1912,7 @@ static const struct imx_pcie_drvdata drvdata[] = {
+ 		.mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE,
+ 		.mode_off[1] = IOMUXC_GPR12,
+ 		.mode_mask[1] = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
+-		.epc_features = &imx8m_pcie_epc_features,
++		.epc_features = &imx8q_pcie_epc_features,
+ 		.init_phy = imx8mq_pcie_init_phy,
+ 		.enable_ref_clk = imx8mm_pcie_enable_ref_clk,
+ 	},
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 4d794964fa0fd3..053e9c54043958 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -714,6 +714,14 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
+ 		return -ETIMEDOUT;
+ 	}
+ 
++	/*
++	 * Per PCIe r6.0, sec 6.6.1, for a Downstream Port that supports Link
++	 * speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms
++	 * after Link training completes before sending a Configuration Request.
++	 */
++	if (pci->max_link_speed > 2)
++		msleep(PCIE_RESET_CONFIG_WAIT_MS);
++
+ 	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+ 	val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
+ 
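The max_link_speed > 2 test works because the PCIe link-speed encoding is ordinal, so everything above code 2 is a port faster than 5.0 GT/s and falls under the 100 ms rule quoted in the comment:

/* LNKCAP SLS / LNKCTL2 TLS speed codes (ordinal):
 *   1 -> 2.5 GT/s   2 -> 5.0 GT/s   3 -> 8.0 GT/s
 *   4 -> 16 GT/s    5 -> 32 GT/s    6 -> 64 GT/s
 */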
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 55416b8311dda2..300cd85fa035cd 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -518,9 +518,9 @@ static void rockchip_pcie_ep_retrain_link(struct rockchip_pcie *rockchip)
+ {
+ 	u32 status;
+ 
+-	status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_LCS);
++	status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE + PCI_EXP_LNKCTL);
+ 	status |= PCI_EXP_LNKCTL_RL;
+-	rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_LCS);
++	rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_BASE + PCI_EXP_LNKCTL);
+ }
+ 
+ static bool rockchip_pcie_ep_link_up(struct rockchip_pcie *rockchip)
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index 648b6fcb93b0b7..77f065284fa3b1 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -11,6 +11,7 @@
+  * ARM PCI Host generic driver.
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/bitrev.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -40,18 +41,18 @@ static void rockchip_pcie_enable_bw_int(struct rockchip_pcie *rockchip)
+ {
+ 	u32 status;
+ 
+-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
++	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 	status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);
+-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
++	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ }
+ 
+ static void rockchip_pcie_clr_bw_int(struct rockchip_pcie *rockchip)
+ {
+ 	u32 status;
+ 
+-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
++	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 	status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16;
+-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
++	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ }
+ 
+ static void rockchip_pcie_update_txcredit_mui(struct rockchip_pcie *rockchip)
+@@ -269,7 +270,7 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
+ 	scale = 3; /* 0.001x */
+ 	curr = curr / 1000; /* convert to mA */
+ 	power = (curr * 3300) / 1000; /* milliwatt */
+-	while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) {
++	while (power > FIELD_MAX(PCI_EXP_DEVCAP_PWR_VAL)) {
+ 		if (!scale) {
+ 			dev_warn(rockchip->dev, "invalid power supply\n");
+ 			return;
+@@ -278,10 +279,10 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
+ 		power = power / 10;
+ 	}
+ 
+-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR);
+-	status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) |
+-		  (scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT);
+-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR);
++	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCAP);
++	status |= FIELD_PREP(PCI_EXP_DEVCAP_PWR_VAL, power);
++	status |= FIELD_PREP(PCI_EXP_DEVCAP_PWR_SCL, scale);
++	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCAP);
+ }
+ 
+ /**
+@@ -309,14 +310,14 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
+ 	rockchip_pcie_set_power_limit(rockchip);
+ 
+ 	/* Set RC's clock architecture as common clock */
+-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
++	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 	status |= PCI_EXP_LNKSTA_SLC << 16;
+-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
++	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 
+ 	/* Set RC's RCB to 128 */
+-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
++	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 	status |= PCI_EXP_LNKCTL_RCB;
+-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
++	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 
+ 	/* Enable Gen1 training */
+ 	rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
+@@ -341,9 +342,13 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
+ 		 * Enable retrain for gen2. This should be configured only after
+ 		 * gen1 finished.
+ 		 */
+-		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
++		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL2);
++		status &= ~PCI_EXP_LNKCTL2_TLS;
++		status |= PCI_EXP_LNKCTL2_TLS_5_0GT;
++		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL2);
++		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 		status |= PCI_EXP_LNKCTL_RL;
+-		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
++		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL);
+ 
+ 		err = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL,
+ 					 status, PCIE_LINK_IS_GEN2(status), 20,
+@@ -380,15 +385,15 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
+ 
+ 	/* Clear L0s from RC's link cap */
+ 	if (of_property_read_bool(dev->of_node, "aspm-no-l0s")) {
+-		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LINK_CAP);
+-		status &= ~PCIE_RC_CONFIG_LINK_CAP_L0S;
+-		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP);
++		status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCAP);
++		status &= ~PCI_EXP_LNKCAP_ASPM_L0S;
++		rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCAP);
+ 	}
+ 
+-	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCSR);
+-	status &= ~PCIE_RC_CONFIG_DCSR_MPS_MASK;
+-	status |= PCIE_RC_CONFIG_DCSR_MPS_256;
+-	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR);
++	status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCTL);
++	status &= ~PCI_EXP_DEVCTL_PAYLOAD;
++	status |= PCI_EXP_DEVCTL_PAYLOAD_256B;
++	rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCTL);
+ 
+ 	return 0;
+ err_power_off_phy:
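The FIELD_PREP conversion above keeps the original search for a representable slot power limit: power starts in milliwatts (scale code 3, a 0.001 multiplier) and is divided by 10, moving to the next coarser scale, until it fits the 8-bit Captured Slot Power Limit Value field. A worked example with a hypothetical 2 A regulator on the 3.3 V rail:

/* curr = 2000 mA -> power = 2000 * 3300 / 1000 = 6600 (scale 3, mW)
 * 6600 > FIELD_MAX(PCI_EXP_DEVCAP_PWR_VAL), i.e. 255:  660, scale 2
 *  660 > 255:                                            66, scale 1
 * encoded limit: 66 * 0.1 W = 6.6 W (= 2 A * 3.3 V)
 */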
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 5864a20323f21a..f5cbf3c9d2d989 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -155,17 +155,7 @@
+ #define PCIE_EP_CONFIG_DID_VID		(PCIE_EP_CONFIG_BASE + 0x00)
+ #define PCIE_EP_CONFIG_LCS		(PCIE_EP_CONFIG_BASE + 0xd0)
+ #define PCIE_RC_CONFIG_RID_CCR		(PCIE_RC_CONFIG_BASE + 0x08)
+-#define PCIE_RC_CONFIG_DCR		(PCIE_RC_CONFIG_BASE + 0xc4)
+-#define   PCIE_RC_CONFIG_DCR_CSPL_SHIFT		18
+-#define   PCIE_RC_CONFIG_DCR_CSPL_LIMIT		0xff
+-#define   PCIE_RC_CONFIG_DCR_CPLS_SHIFT		26
+-#define PCIE_RC_CONFIG_DCSR		(PCIE_RC_CONFIG_BASE + 0xc8)
+-#define   PCIE_RC_CONFIG_DCSR_MPS_MASK		GENMASK(7, 5)
+-#define   PCIE_RC_CONFIG_DCSR_MPS_256		(0x1 << 5)
+-#define PCIE_RC_CONFIG_LINK_CAP		(PCIE_RC_CONFIG_BASE + 0xcc)
+-#define   PCIE_RC_CONFIG_LINK_CAP_L0S		BIT(10)
+-#define PCIE_RC_CONFIG_LCS		(PCIE_RC_CONFIG_BASE + 0xd0)
+-#define PCIE_EP_CONFIG_LCS		(PCIE_EP_CONFIG_BASE + 0xd0)
++#define PCIE_RC_CONFIG_CR		(PCIE_RC_CONFIG_BASE + 0xc0)
+ #define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c)
+ #define PCIE_RC_CONFIG_THP_CAP		(PCIE_RC_CONFIG_BASE + 0x274)
+ #define   PCIE_RC_CONFIG_THP_CAP_NEXT_MASK	GENMASK(31, 20)
+diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
+index d712c7a866d261..ef50c82e647f4d 100644
+--- a/drivers/pci/endpoint/pci-ep-cfs.c
++++ b/drivers/pci/endpoint/pci-ep-cfs.c
+@@ -691,6 +691,7 @@ void pci_ep_cfs_remove_epf_group(struct config_group *group)
+ 	if (IS_ERR_OR_NULL(group))
+ 		return;
+ 
++	list_del(&group->group_entry);
+ 	configfs_unregister_default_group(group);
+ }
+ EXPORT_SYMBOL(pci_ep_cfs_remove_epf_group);
+diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
+index 577a9e490115c9..167dc6ee63f7af 100644
+--- a/drivers/pci/endpoint/pci-epf-core.c
++++ b/drivers/pci/endpoint/pci-epf-core.c
+@@ -338,7 +338,7 @@ static void pci_epf_remove_cfs(struct pci_epf_driver *driver)
+ 	mutex_lock(&pci_epf_mutex);
+ 	list_for_each_entry_safe(group, tmp, &driver->epf_group, group_entry)
+ 		pci_ep_cfs_remove_epf_group(group);
+-	list_del(&driver->epf_group);
++	WARN_ON(!list_empty(&driver->epf_group));
+ 	mutex_unlock(&pci_epf_mutex);
+ }
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 98d6fccb383e55..6ff1990c482f4e 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -391,12 +391,14 @@ void pci_bus_put(struct pci_bus *bus);
+ 
+ #define PCIE_LNKCAP_SLS2SPEED(lnkcap)					\
+ ({									\
+-	((lnkcap) == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT :	\
+-	 (lnkcap) == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT :	\
+-	 (lnkcap) == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT :	\
+-	 (lnkcap) == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT :	\
+-	 (lnkcap) == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT :	\
+-	 (lnkcap) == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT :	\
++	u32 lnkcap_sls = (lnkcap) & PCI_EXP_LNKCAP_SLS;			\
++									\
++	(lnkcap_sls == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT :	\
++	 lnkcap_sls == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT :	\
++	 lnkcap_sls == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT :	\
++	 lnkcap_sls == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT :	\
++	 lnkcap_sls == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT :	\
++	 lnkcap_sls == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT :	\
+ 	 PCI_SPEED_UNKNOWN);						\
+ })
+ 
+@@ -411,13 +413,17 @@ void pci_bus_put(struct pci_bus *bus);
+ 	 PCI_SPEED_UNKNOWN)
+ 
+ #define PCIE_LNKCTL2_TLS2SPEED(lnkctl2) \
+-	((lnkctl2) == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT : \
+-	 (lnkctl2) == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT : \
+-	 (lnkctl2) == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT : \
+-	 (lnkctl2) == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT : \
+-	 (lnkctl2) == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT : \
+-	 (lnkctl2) == PCI_EXP_LNKCTL2_TLS_2_5GT ? PCIE_SPEED_2_5GT : \
+-	 PCI_SPEED_UNKNOWN)
++({									\
++	u16 lnkctl2_tls = (lnkctl2) & PCI_EXP_LNKCTL2_TLS;		\
++									\
++	(lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT :	\
++	 lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT :	\
++	 lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT :	\
++	 lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT :	\
++	 lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT :	\
++	 lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_2_5GT ? PCIE_SPEED_2_5GT :	\
++	 PCI_SPEED_UNKNOWN);						\
++})
+ 
+ /* PCIe speed to Mb/s reduced by encoding overhead */
+ #define PCIE_SPEED2MBS_ENC(speed) \
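The PCIE_LNKCAP_SLS2SPEED fix works because PCI_EXP_LNKCAP carries more than the Supported Link Speeds field; comparing the raw register against the PCI_EXP_LNKCAP_SLS_* values only ever matched when every other capability bit happened to be zero. A minimal userspace sketch (constants mirrored from include/uapi/linux/pci_regs.h) of why the mask is required:

#include <stdint.h>
#include <stdio.h>

/* Mirrored from include/uapi/linux/pci_regs.h */
#define PCI_EXP_LNKCAP_SLS		0x0000000f /* Supported Link Speeds */
#define PCI_EXP_LNKCAP_SLS_8_0GB	0x00000003 /* 8.0 GT/s */

int main(void)
{
	/* A realistic LNKCAP value: SLS = 8 GT/s plus ASPM/port bits set */
	uint32_t lnkcap = 0x00477d83;

	/* Unmasked comparison never matches once other bits are set */
	printf("raw    == SLS_8_0GB: %d\n", lnkcap == PCI_EXP_LNKCAP_SLS_8_0GB);

	/* Masking isolates the speed field, as the fixed macro does */
	printf("masked == SLS_8_0GB: %d\n",
	       (lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_8_0GB);
	return 0;
}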
+diff --git a/drivers/pci/pcie/portdrv.c b/drivers/pci/pcie/portdrv.c
+index e8318fd5f6ed53..d1b68c18444f80 100644
+--- a/drivers/pci/pcie/portdrv.c
++++ b/drivers/pci/pcie/portdrv.c
+@@ -220,7 +220,7 @@ static int get_port_device_capability(struct pci_dev *dev)
+ 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+ 	int services = 0;
+ 
+-	if (dev->is_hotplug_bridge &&
++	if (dev->is_pciehp &&
+ 	    (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+ 	     pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) &&
+ 	    (pcie_ports_native || host->native_pcie_hotplug)) {
+diff --git a/drivers/phy/qualcomm/phy-qcom-m31.c b/drivers/phy/qualcomm/phy-qcom-m31.c
+index 20d4c020a83c1f..8b0f8a3a059c21 100644
+--- a/drivers/phy/qualcomm/phy-qcom-m31.c
++++ b/drivers/phy/qualcomm/phy-qcom-m31.c
+@@ -58,14 +58,16 @@
+  #define USB2_0_TX_ENABLE		BIT(2)
+ 
+ #define USB2PHY_USB_PHY_M31_XCFGI_4	0xc8
+- #define HSTX_SLEW_RATE_565PS		GENMASK(1, 0)
++ #define HSTX_SLEW_RATE_400PS		GENMASK(2, 0)
+  #define PLL_CHARGING_PUMP_CURRENT_35UA	GENMASK(4, 3)
+  #define ODT_VALUE_38_02_OHM		GENMASK(7, 6)
+ 
+ #define USB2PHY_USB_PHY_M31_XCFGI_5	0xcc
+- #define ODT_VALUE_45_02_OHM		BIT(2)
+  #define HSTX_PRE_EMPHASIS_LEVEL_0_55MA	BIT(0)
+ 
++#define USB2PHY_USB_PHY_M31_XCFGI_9	0xdc
++ #define HSTX_CURRENT_17_1MA_385MV	BIT(1)
++
+ #define USB2PHY_USB_PHY_M31_XCFGI_11	0xe4
+  #define XCFG_COARSE_TUNE_NUM		BIT(1)
+  #define XCFG_FINE_TUNE_NUM		BIT(3)
+@@ -164,7 +166,7 @@ static struct m31_phy_regs m31_ipq5332_regs[] = {
+ 	},
+ 	{
+ 		USB2PHY_USB_PHY_M31_XCFGI_4,
+-		HSTX_SLEW_RATE_565PS | PLL_CHARGING_PUMP_CURRENT_35UA | ODT_VALUE_38_02_OHM,
++		HSTX_SLEW_RATE_400PS | PLL_CHARGING_PUMP_CURRENT_35UA | ODT_VALUE_38_02_OHM,
+ 		0
+ 	},
+ 	{
+@@ -174,9 +176,13 @@ static struct m31_phy_regs m31_ipq5332_regs[] = {
+ 	},
+ 	{
+ 		USB2PHY_USB_PHY_M31_XCFGI_5,
+-		ODT_VALUE_45_02_OHM | HSTX_PRE_EMPHASIS_LEVEL_0_55MA,
++		HSTX_PRE_EMPHASIS_LEVEL_0_55MA,
+ 		4
+ 	},
++	{
++		USB2PHY_USB_PHY_M31_XCFGI_9,
++		HSTX_CURRENT_17_1MA_385MV,
++	},
+ 	{
+ 		USB_PHY_UTMI_CTRL5,
+ 		0x0,
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index 110771a8645ed7..fd58781a2fb7da 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -318,6 +318,9 @@ EXPORT_SYMBOL(cros_ec_register);
+  */
+ void cros_ec_unregister(struct cros_ec_device *ec_dev)
+ {
++	if (ec_dev->mkbp_event_supported)
++		blocking_notifier_chain_unregister(&ec_dev->event_notifier,
++						   &ec_dev->notifier_ready);
+ 	platform_device_unregister(ec_dev->pd);
+ 	platform_device_unregister(ec_dev->ec);
+ 	mutex_destroy(&ec_dev->lock);
+diff --git a/drivers/platform/x86/amd/hsmp/hsmp.c b/drivers/platform/x86/amd/hsmp/hsmp.c
+index 885e2f8136fd44..19f82c1d309059 100644
+--- a/drivers/platform/x86/amd/hsmp/hsmp.c
++++ b/drivers/platform/x86/amd/hsmp/hsmp.c
+@@ -356,6 +356,11 @@ ssize_t hsmp_metric_tbl_read(struct hsmp_socket *sock, char *buf, size_t size)
+ 	if (!sock || !buf)
+ 		return -EINVAL;
+ 
++	if (!sock->metric_tbl_addr) {
++		dev_err(sock->dev, "Metrics table address not available\n");
++		return -ENOMEM;
++	}
++
+ 	/* Do not support lseek(), also don't allow more than the size of metric table */
+ 	if (size != sizeof(struct hsmp_metric_table)) {
+ 		dev_err(sock->dev, "Wrong buffer size\n");
+diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
+index 44d9948ed2241b..dd7cdae5bb8662 100644
+--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
+@@ -191,9 +191,14 @@ static int uncore_read_control_freq(struct uncore_data *data, unsigned int *valu
+ static int write_eff_lat_ctrl(struct uncore_data *data, unsigned int val, enum uncore_index index)
+ {
+ 	struct tpmi_uncore_cluster_info *cluster_info;
++	struct tpmi_uncore_struct *uncore_root;
+ 	u64 control;
+ 
+ 	cluster_info = container_of(data, struct tpmi_uncore_cluster_info, uncore_data);
++	uncore_root = cluster_info->uncore_root;
++
++	if (uncore_root->write_blocked)
++		return -EPERM;
+ 
+ 	if (cluster_info->root_domain)
+ 		return -ENODATA;
+diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c
+index 7ee7b65b9b90c5..5b399de16d6040 100644
+--- a/drivers/pwm/pwm-imx-tpm.c
++++ b/drivers/pwm/pwm-imx-tpm.c
+@@ -204,6 +204,15 @@ static int pwm_imx_tpm_apply_hw(struct pwm_chip *chip,
+ 		val |= FIELD_PREP(PWM_IMX_TPM_SC_PS, p->prescale);
+ 		writel(val, tpm->base + PWM_IMX_TPM_SC);
+ 
++		/*
++		 * if the counter is disabled (CMOD == 0), programming the new
++		 * period length (MOD) will not reset the counter (CNT). If
++		 * CNT.COUNT happens to be bigger than the new MOD value then
++		 * the counter will end up being reset way too late. Therefore,
++		 * manually reset it to 0.
++		 */
++		if (!cmod)
++			writel(0x0, tpm->base + PWM_IMX_TPM_CNT);
+ 		/*
+ 		 * set period count:
+ 		 * if the PWM is disabled (CMOD[1:0] = 2b00), then MOD register
+diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
+index 33d3554b9197ab..bfbfe7f2917b1d 100644
+--- a/drivers/pwm/pwm-mediatek.c
++++ b/drivers/pwm/pwm-mediatek.c
+@@ -115,6 +115,26 @@ static inline void pwm_mediatek_writel(struct pwm_mediatek_chip *chip,
+ 	writel(value, chip->regs + chip->soc->reg_offset[num] + offset);
+ }
+ 
++static void pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
++{
++	struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
++	u32 value;
++
++	value = readl(pc->regs);
++	value |= BIT(pwm->hwpwm);
++	writel(value, pc->regs);
++}
++
++static void pwm_mediatek_disable(struct pwm_chip *chip, struct pwm_device *pwm)
++{
++	struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
++	u32 value;
++
++	value = readl(pc->regs);
++	value &= ~BIT(pwm->hwpwm);
++	writel(value, pc->regs);
++}
++
+ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			       int duty_ns, int period_ns)
+ {
+@@ -144,7 +164,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	do_div(resolution, clk_rate);
+ 
+ 	cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000, resolution);
+-	while (cnt_period > 8191) {
++	if (!cnt_period)
++		return -EINVAL;
++
++	while (cnt_period > 8192) {
+ 		resolution *= 2;
+ 		clkdiv++;
+ 		cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000,
+@@ -167,9 +190,16 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	}
+ 
+ 	cnt_duty = DIV_ROUND_CLOSEST_ULL((u64)duty_ns * 1000, resolution);
++
+ 	pwm_mediatek_writel(pc, pwm->hwpwm, PWMCON, BIT(15) | clkdiv);
+-	pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period);
+-	pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty);
++	pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period - 1);
++
++	if (cnt_duty) {
++		pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty - 1);
++		pwm_mediatek_enable(chip, pwm);
++	} else {
++		pwm_mediatek_disable(chip, pwm);
++	}
+ 
+ out:
+ 	pwm_mediatek_clk_disable(chip, pwm);
+@@ -177,35 +207,6 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	return ret;
+ }
+ 
+-static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+-{
+-	struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
+-	u32 value;
+-	int ret;
+-
+-	ret = pwm_mediatek_clk_enable(chip, pwm);
+-	if (ret < 0)
+-		return ret;
+-
+-	value = readl(pc->regs);
+-	value |= BIT(pwm->hwpwm);
+-	writel(value, pc->regs);
+-
+-	return 0;
+-}
+-
+-static void pwm_mediatek_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+-{
+-	struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
+-	u32 value;
+-
+-	value = readl(pc->regs);
+-	value &= ~BIT(pwm->hwpwm);
+-	writel(value, pc->regs);
+-
+-	pwm_mediatek_clk_disable(chip, pwm);
+-}
+-
+ static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			      const struct pwm_state *state)
+ {
+@@ -215,8 +216,10 @@ static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		return -EINVAL;
+ 
+ 	if (!state->enabled) {
+-		if (pwm->state.enabled)
++		if (pwm->state.enabled) {
+ 			pwm_mediatek_disable(chip, pwm);
++			pwm_mediatek_clk_disable(chip, pwm);
++		}
+ 
+ 		return 0;
+ 	}
+@@ -226,7 +229,7 @@ static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		return err;
+ 
+ 	if (!pwm->state.enabled)
+-		err = pwm_mediatek_enable(chip, pwm);
++		err = pwm_mediatek_clk_enable(chip, pwm);
+ 
+ 	return err;
+ }
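The reworked pwm-mediatek config path encodes period and duty as register value = count - 1, so the width register now covers counts up to 8192 rather than 8191, a zero duty cycle gates the channel off instead of writing a threshold, and a zero period is rejected. A standalone sketch of the divider search with the corrected bound (example clock and period values assumed):

#include <stdint.h>
#include <stdio.h>

#define DIV_ROUND_CLOSEST_ULL(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	uint64_t clk_rate = 26000000;		/* example source clock, Hz */
	uint64_t period_ns = 1000000;		/* requested period: 1 ms */
	/* resolution in 1/1000 ns per tick, as the driver computes it */
	uint64_t resolution = 1000000000000ULL / clk_rate;
	unsigned int clkdiv = 0;
	uint64_t cnt_period =
		DIV_ROUND_CLOSEST_ULL(period_ns * 1000, resolution);

	/* The register holds cnt_period - 1, so counts up to 8192 fit */
	while (cnt_period > 8192) {
		resolution *= 2;
		clkdiv++;
		cnt_period =
			DIV_ROUND_CLOSEST_ULL(period_ns * 1000, resolution);
	}
	printf("clkdiv=%u, PWMDWIDTH=%llu\n", clkdiv,
	       (unsigned long long)(cnt_period - 1));
	return 0;
}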
+diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
+index 14d19a6d665573..49ff762eb33e1f 100644
+--- a/drivers/regulator/pca9450-regulator.c
++++ b/drivers/regulator/pca9450-regulator.c
+@@ -34,7 +34,6 @@ struct pca9450 {
+ 	struct device *dev;
+ 	struct regmap *regmap;
+ 	struct gpio_desc *sd_vsel_gpio;
+-	struct notifier_block restart_nb;
+ 	enum pca9450_chip_type type;
+ 	unsigned int rcnt;
+ 	int irq;
+@@ -967,10 +966,9 @@ static irqreturn_t pca9450_irq_handler(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static int pca9450_i2c_restart_handler(struct notifier_block *nb,
+-				unsigned long action, void *data)
++static int pca9450_i2c_restart_handler(struct sys_off_data *data)
+ {
+-	struct pca9450 *pca9450 = container_of(nb, struct pca9450, restart_nb);
++	struct pca9450 *pca9450 = data->cb_data;
+ 	struct i2c_client *i2c = container_of(pca9450->dev, struct i2c_client, dev);
+ 
+ 	dev_dbg(&i2c->dev, "Restarting device..\n");
+@@ -1128,10 +1126,9 @@ static int pca9450_i2c_probe(struct i2c_client *i2c)
+ 	pca9450->sd_vsel_fixed_low =
+ 		of_property_read_bool(ldo5->dev.of_node, "nxp,sd-vsel-fixed-low");
+ 
+-	pca9450->restart_nb.notifier_call = pca9450_i2c_restart_handler;
+-	pca9450->restart_nb.priority = PCA9450_RESTART_HANDLER_PRIORITY;
+-
+-	if (register_restart_handler(&pca9450->restart_nb))
++	if (devm_register_sys_off_handler(&i2c->dev, SYS_OFF_MODE_RESTART,
++					  PCA9450_RESTART_HANDLER_PRIORITY,
++					  pca9450_i2c_restart_handler, pca9450))
+ 		dev_warn(&i2c->dev, "Failed to register restart handler\n");
+ 
+ 	dev_info(&i2c->dev, "%s probed.\n",
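The pca9450 conversion relies on devm_register_sys_off_handler(): the callback receives a struct sys_off_data carrying the registered cb_data pointer, and the handler unregisters itself when the device is unbound, which is what allowed restart_nb to be dropped from struct pca9450. A minimal sketch of the API shape, with a hypothetical my_chip type:

#include <linux/reboot.h>
#include <linux/device.h>
#include <linux/err.h>

struct my_chip { struct device *dev; /* ... */ };

static int my_restart_handler(struct sys_off_data *data)
{
	struct my_chip *chip = data->cb_data;	/* what we registered */

	dev_dbg(chip->dev, "issuing chip-level reset\n");
	/* ... write the reset register here ... */
	return NOTIFY_DONE;
}

static int my_probe_tail(struct device *dev, struct my_chip *chip)
{
	/* Unregisters itself when the device is unbound */
	return PTR_ERR_OR_ZERO(devm_register_sys_off_handler(dev,
					SYS_OFF_MODE_RESTART,
					SYS_OFF_PRIO_DEFAULT,
					my_restart_handler, chip));
}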
+diff --git a/drivers/regulator/tps65219-regulator.c b/drivers/regulator/tps65219-regulator.c
+index 5e67fdc88f49e6..d77ca486879fd6 100644
+--- a/drivers/regulator/tps65219-regulator.c
++++ b/drivers/regulator/tps65219-regulator.c
+@@ -454,9 +454,9 @@ static int tps65219_regulator_probe(struct platform_device *pdev)
+ 						  irq_type->irq_name,
+ 						  irq_data);
+ 		if (error)
+-			return dev_err_probe(tps->dev, PTR_ERR(rdev),
+-					     "Failed to request %s IRQ %d: %d\n",
+-					     irq_type->irq_name, irq, error);
++			return dev_err_probe(tps->dev, error,
++					     "Failed to request %s IRQ %d\n",
++					     irq_type->irq_name, irq);
+ 	}
+ 
+ 	for (i = 0; i < pmic->dev_irq_size; ++i) {
+@@ -477,9 +477,9 @@ static int tps65219_regulator_probe(struct platform_device *pdev)
+ 						  irq_type->irq_name,
+ 						  irq_data);
+ 		if (error)
+-			return dev_err_probe(tps->dev, PTR_ERR(rdev),
+-					     "Failed to request %s IRQ %d: %d\n",
+-					     irq_type->irq_name, irq, error);
++			return dev_err_probe(tps->dev, error,
++					     "Failed to request %s IRQ %d\n",
++					     irq_type->irq_name, irq);
+ 	}
+ 
+ 	return 0;
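The tps65219 fix matters because the original calls passed PTR_ERR(rdev), an error from an earlier and already-checked registration, instead of the IRQ request's own error, and printed the errno by hand even though dev_err_probe() appends it (and stays quiet for -EPROBE_DEFER). A short sketch of the corrected idiom, with hypothetical function and parameter names:

#include <linux/device.h>
#include <linux/interrupt.h>

static int request_one_irq(struct device *dev, int irq,
			   irq_handler_t thread_fn, void *data)
{
	int error = devm_request_threaded_irq(dev, irq, NULL, thread_fn,
					      IRQF_ONESHOT, dev_name(dev),
					      data);
	if (error)
		/* Pass the error itself; dev_err_probe() returns it */
		return dev_err_probe(dev, error,
				     "Failed to request IRQ %d\n", irq);
	return 0;
}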
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index 9a55e2d04e633f..2ff1658bc93ccb 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -76,6 +76,13 @@ unsigned long sclp_console_full;
+ /* The currently active SCLP command word. */
+ static sclp_cmdw_t active_cmd;
+ 
++static inline struct sccb_header *sclpint_to_sccb(u32 sccb_int)
++{
++	if (sccb_int)
++		return __va(sccb_int);
++	return NULL;
++}
++
+ static inline void sclp_trace(int prio, char *id, u32 a, u64 b, bool err)
+ {
+ 	struct sclp_trace_entry e;
+@@ -619,7 +626,7 @@ __sclp_find_req(u32 sccb)
+ 
+ static bool ok_response(u32 sccb_int, sclp_cmdw_t cmd)
+ {
+-	struct sccb_header *sccb = (struct sccb_header *)__va(sccb_int);
++	struct sccb_header *sccb = sclpint_to_sccb(sccb_int);
+ 	struct evbuf_header *evbuf;
+ 	u16 response;
+ 
+@@ -658,7 +665,7 @@ static void sclp_interrupt_handler(struct ext_code ext_code,
+ 
+ 	/* INT: Interrupt received (a=intparm, b=cmd) */
+ 	sclp_trace_sccb(0, "INT", param32, active_cmd, active_cmd,
+-			(struct sccb_header *)__va(finished_sccb),
++			sclpint_to_sccb(finished_sccb),
+ 			!ok_response(finished_sccb, active_cmd));
+ 
+ 	if (finished_sccb) {
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index 9bbc7cb98ca324..f2201108ea9116 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -1137,6 +1137,8 @@ struct scmd_priv {
+  * @logdata_buf: Circular buffer to store log data entries
+  * @logdata_buf_idx: Index of entry in buffer to store
+  * @logdata_entry_sz: log data entry size
++ * @adm_req_q_bar_writeq_lock: Lock to serialize the split BAR writeq of the admin request queue address
++ * @adm_reply_q_bar_writeq_lock: Lock to serialize the split BAR writeq of the admin reply queue address
+  * @pend_large_data_sz: Counter to track pending large data
+  * @io_throttle_data_length: I/O size to track in 512b blocks
+  * @io_throttle_high: I/O size to start throttle in 512b blocks
+@@ -1185,7 +1187,7 @@ struct mpi3mr_ioc {
+ 	char name[MPI3MR_NAME_LENGTH];
+ 	char driver_name[MPI3MR_NAME_LENGTH];
+ 
+-	volatile struct mpi3_sysif_registers __iomem *sysif_regs;
++	struct mpi3_sysif_registers __iomem *sysif_regs;
+ 	resource_size_t sysif_regs_phys;
+ 	int bars;
+ 	u64 dma_mask;
+@@ -1339,6 +1341,8 @@ struct mpi3mr_ioc {
+ 	u8 *logdata_buf;
+ 	u16 logdata_buf_idx;
+ 	u16 logdata_entry_sz;
++	spinlock_t adm_req_q_bar_writeq_lock;
++	spinlock_t adm_reply_q_bar_writeq_lock;
+ 
+ 	atomic_t pend_large_data_sz;
+ 	u32 io_throttle_data_length;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index 1d7901a8f0e406..0152d31d430abd 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -23,17 +23,22 @@ module_param(poll_queues, int, 0444);
+ MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)");
+ 
+ #if defined(writeq) && defined(CONFIG_64BIT)
+-static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)
++static inline void mpi3mr_writeq(__u64 b, void __iomem *addr,
++	spinlock_t *write_queue_lock)
+ {
+ 	writeq(b, addr);
+ }
+ #else
+-static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr)
++static inline void mpi3mr_writeq(__u64 b, void __iomem *addr,
++	spinlock_t *write_queue_lock)
+ {
+ 	__u64 data_out = b;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(write_queue_lock, flags);
+ 	writel((u32)(data_out), addr);
+ 	writel((u32)(data_out >> 32), (addr + 4));
++	spin_unlock_irqrestore(write_queue_lock, flags);
+ }
+ #endif
+ 
+@@ -428,8 +433,8 @@ static void mpi3mr_process_admin_reply_desc(struct mpi3mr_ioc *mrioc,
+ 				       MPI3MR_SENSE_BUF_SZ);
+ 			}
+ 			if (cmdptr->is_waiting) {
+-				complete(&cmdptr->done);
+ 				cmdptr->is_waiting = 0;
++				complete(&cmdptr->done);
+ 			} else if (cmdptr->callback)
+ 				cmdptr->callback(mrioc, cmdptr);
+ 		}
+@@ -2954,9 +2959,11 @@ static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc)
+ 	    (mrioc->num_admin_req);
+ 	writel(num_admin_entries, &mrioc->sysif_regs->admin_queue_num_entries);
+ 	mpi3mr_writeq(mrioc->admin_req_dma,
+-	    &mrioc->sysif_regs->admin_request_queue_address);
++		&mrioc->sysif_regs->admin_request_queue_address,
++		&mrioc->adm_req_q_bar_writeq_lock);
+ 	mpi3mr_writeq(mrioc->admin_reply_dma,
+-	    &mrioc->sysif_regs->admin_reply_queue_address);
++		&mrioc->sysif_regs->admin_reply_queue_address,
++		&mrioc->adm_reply_q_bar_writeq_lock);
+ 	writel(mrioc->admin_req_pi, &mrioc->sysif_regs->admin_request_queue_pi);
+ 	writel(mrioc->admin_reply_ci, &mrioc->sysif_regs->admin_reply_queue_ci);
+ 	return retval;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 87983ea4e06e08..e467b56949e989 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -5383,6 +5383,8 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	spin_lock_init(&mrioc->tgtdev_lock);
+ 	spin_lock_init(&mrioc->watchdog_lock);
+ 	spin_lock_init(&mrioc->chain_buf_lock);
++	spin_lock_init(&mrioc->adm_req_q_bar_writeq_lock);
++	spin_lock_init(&mrioc->adm_reply_q_bar_writeq_lock);
+ 	spin_lock_init(&mrioc->sas_node_lock);
+ 	spin_lock_init(&mrioc->trigger_lock);
+ 
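The mpi3mr change addresses the writeq() fallback: on 32-bit builds a 64-bit BAR register is written as two writel() halves, and a concurrent writer could interleave its halves between ours, so each admin queue address now gets its own spinlock, initialized in probe above. A sketch mirroring the shape of the patched helper:

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>

static inline void my_writeq(u64 val, void __iomem *addr, spinlock_t *lock)
{
	unsigned long flags;

	/* The two 32-bit writes must not interleave with another writer's */
	spin_lock_irqsave(lock, flags);
	writel(lower_32_bits(val), addr);
	writel(upper_32_bits(val), addr + 4);
	spin_unlock_irqrestore(lock, flags);
}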
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index a39f1da4ce474b..a761c0aa5127fe 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -6606,6 +6606,8 @@ static struct iscsi_endpoint *qla4xxx_get_ep_fwdb(struct scsi_qla_host *ha,
+ 
+ 	ep = qla4xxx_ep_connect(ha->host, (struct sockaddr *)dst_addr, 0);
+ 	vfree(dst_addr);
++	if (IS_ERR(ep))
++		return NULL;
+ 	return ep;
+ }
+ 
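The qla4xxx hunk converts between two error conventions: qla4xxx_ep_connect() reports failure with an ERR_PTR-encoded pointer, while callers of qla4xxx_get_ep_fwdb() check for NULL, so without the conversion an error pointer would later be dereferenced as if it were valid. A tiny sketch of the two conventions, with hypothetical names:

#include <linux/err.h>
#include <linux/types.h>

struct conn { int id; };

/* A callee that encodes failure in the pointer itself */
static struct conn *do_connect(struct conn *real, bool fail)
{
	return fail ? ERR_PTR(-ECONNREFUSED) : real;
}

/* A caller whose contract is "valid pointer or NULL", so convert */
static struct conn *connect_or_null(struct conn *real, bool fail)
{
	struct conn *c = do_connect(real, fail);

	if (IS_ERR(c))		/* note: IS_ERR(NULL) is false */
		return NULL;
	return c;
}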
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index 44589d10b15b50..64e0facc392e5d 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -18,6 +18,37 @@
+ #include <linux/slab.h>
+ #include <linux/soc/qcom/mdt_loader.h>
+ 
++static bool mdt_header_valid(const struct firmware *fw)
++{
++	const struct elf32_hdr *ehdr;
++	size_t phend;
++	size_t shend;
++
++	if (fw->size < sizeof(*ehdr))
++		return false;
++
++	ehdr = (struct elf32_hdr *)fw->data;
++
++	if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG))
++		return false;
++
++	if (ehdr->e_phentsize != sizeof(struct elf32_phdr))
++		return false;
++
++	phend = size_add(size_mul(sizeof(struct elf32_phdr), ehdr->e_phnum), ehdr->e_phoff);
++	if (phend > fw->size)
++		return false;
++
++	if (ehdr->e_shentsize != sizeof(struct elf32_shdr))
++		return false;
++
++	shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff);
++	if (shend > fw->size)
++		return false;
++
++	return true;
++}
++
+ static bool mdt_phdr_valid(const struct elf32_phdr *phdr)
+ {
+ 	if (phdr->p_type != PT_LOAD)
+@@ -82,6 +113,9 @@ ssize_t qcom_mdt_get_size(const struct firmware *fw)
+ 	phys_addr_t max_addr = 0;
+ 	int i;
+ 
++	if (!mdt_header_valid(fw))
++		return -EINVAL;
++
+ 	ehdr = (struct elf32_hdr *)fw->data;
+ 	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+@@ -134,6 +168,9 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len,
+ 	ssize_t ret;
+ 	void *data;
+ 
++	if (!mdt_header_valid(fw))
++		return ERR_PTR(-EINVAL);
++
+ 	ehdr = (struct elf32_hdr *)fw->data;
+ 	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+@@ -214,6 +251,9 @@ int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
+ 	int ret;
+ 	int i;
+ 
++	if (!mdt_header_valid(fw))
++		return -EINVAL;
++
+ 	ehdr = (struct elf32_hdr *)fw->data;
+ 	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
+ 
+@@ -310,6 +350,9 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 	if (!fw || !mem_region || !mem_phys || !mem_size)
+ 		return -EINVAL;
+ 
++	if (!mdt_header_valid(fw))
++		return -EINVAL;
++
+ 	is_split = qcom_mdt_bins_are_split(fw, fw_name);
+ 	ehdr = (struct elf32_hdr *)fw->data;
+ 	phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff);
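mdt_header_valid() guards each table computation with size_add()/size_mul(), which saturate at SIZE_MAX so a crafted e_phnum/e_phoff cannot wrap the subsequent bounds comparison. A userspace equivalent using the compiler overflow builtins, which report the wrap directly instead of saturating (illustrative; the kernel helpers live in include/linux/overflow.h):

#include <stdbool.h>
#include <stddef.h>

/* Does off + nmemb * entsize stay inside fw_size, without wrapping? */
static bool table_in_bounds(size_t fw_size, size_t off,
			    size_t nmemb, size_t entsize)
{
	size_t bytes, end;

	if (__builtin_mul_overflow(nmemb, entsize, &bytes))
		return false;	/* nmemb * entsize wrapped */
	if (__builtin_add_overflow(off, bytes, &end))
		return false;	/* off + bytes wrapped */
	return end <= fw_size;
}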
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index e0d67bfe955cde..fda41b25a77144 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1234,7 +1234,7 @@ static int tegra_powergate_of_get_clks(struct tegra_powergate *pg,
+ }
+ 
+ static int tegra_powergate_of_get_resets(struct tegra_powergate *pg,
+-					 struct device_node *np, bool off)
++					 struct device_node *np)
+ {
+ 	struct device *dev = pg->pmc->dev;
+ 	int err;
+@@ -1249,22 +1249,6 @@ static int tegra_powergate_of_get_resets(struct tegra_powergate *pg,
+ 	err = reset_control_acquire(pg->reset);
+ 	if (err < 0) {
+ 		pr_err("failed to acquire resets: %d\n", err);
+-		goto out;
+-	}
+-
+-	if (off) {
+-		err = reset_control_assert(pg->reset);
+-	} else {
+-		err = reset_control_deassert(pg->reset);
+-		if (err < 0)
+-			goto out;
+-
+-		reset_control_release(pg->reset);
+-	}
+-
+-out:
+-	if (err) {
+-		reset_control_release(pg->reset);
+ 		reset_control_put(pg->reset);
+ 	}
+ 
+@@ -1309,20 +1293,43 @@ static int tegra_powergate_add(struct tegra_pmc *pmc, struct device_node *np)
+ 		goto set_available;
+ 	}
+ 
+-	err = tegra_powergate_of_get_resets(pg, np, off);
++	err = tegra_powergate_of_get_resets(pg, np);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get resets for %pOFn: %d\n", np, err);
+ 		goto remove_clks;
+ 	}
+ 
+-	if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) {
+-		if (off)
+-			WARN_ON(tegra_powergate_power_up(pg, true));
++	/*
++	 * If the power-domain is off, then ensure the resets are asserted.
++	 * If the power-domain is on, then power down to ensure that when is
++	 * it turned on the power-domain, clocks and resets are all in the
++	 * expected state.
++	 */
++	if (off) {
++		err = reset_control_assert(pg->reset);
++		if (err) {
++			pr_err("failed to assert resets: %d\n", err);
++			goto remove_resets;
++		}
++	} else {
++		err = tegra_powergate_power_down(pg);
++		if (err) {
++			dev_err(dev, "failed to turn off PM domain %s: %d\n",
++				pg->genpd.name, err);
++			goto remove_resets;
++		}
++	}
+ 
++	/*
++	 * If PM_GENERIC_DOMAINS is not enabled, power-on
++	 * the domain and skip the genpd registration.
++	 */
++	if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) {
++		WARN_ON(tegra_powergate_power_up(pg, true));
+ 		goto remove_resets;
+ 	}
+ 
+-	err = pm_genpd_init(&pg->genpd, NULL, off);
++	err = pm_genpd_init(&pg->genpd, NULL, true);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to initialise PM domain %pOFn: %d\n", np,
+ 		       err);
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 5e381844523440..1a22d356a73d95 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -331,13 +331,11 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ 	}
+ 
+ 	if (config.speed_hz > perclk_rate / 2) {
+-		dev_err(fsl_lpspi->dev,
+-		      "per-clk should be at least two times of transfer speed");
+-		return -EINVAL;
++		div = 2;
++	} else {
++		div = DIV_ROUND_UP(perclk_rate, config.speed_hz);
+ 	}
+ 
+-	div = DIV_ROUND_UP(perclk_rate, config.speed_hz);
+-
+ 	for (prescale = 0; prescale <= prescale_max; prescale++) {
+ 		scldiv = div / (1 << prescale) - 2;
+ 		if (scldiv >= 0 && scldiv < 256) {
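With the lpspi change, a bitrate request above perclk/2 clamps the divider to its hardware minimum of 2 and runs at the fastest rate the IP can produce, instead of failing the whole transfer. A standalone sketch of the divider/prescaler search, assuming a prescaler limit of 7 and example clock rates:

#include <stdio.h>

int main(void)
{
	unsigned long perclk_rate = 80000000;	/* example per-clock, Hz */
	unsigned long speed_hz = 60000000;	/* more than perclk / 2 */
	unsigned int prescale, div;
	int scldiv;

	if (speed_hz > perclk_rate / 2)
		div = 2;			/* clamp: best the IP can do */
	else
		div = (perclk_rate + speed_hz - 1) / speed_hz; /* round up */

	for (prescale = 0; prescale <= 7; prescale++) {
		scldiv = (int)(div / (1u << prescale)) - 2;
		if (scldiv >= 0 && scldiv < 256) {
			printf("prescale=%u SCKDIV=%d -> %lu Hz\n",
			       prescale, scldiv,
			       perclk_rate / (1u << prescale) / (scldiv + 2));
			return 0;
		}
	}
	return 1;
}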
+diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c
+index 3b757e3d00c01d..e98e997680c754 100644
+--- a/drivers/spi/spi-qpic-snand.c
++++ b/drivers/spi/spi-qpic-snand.c
+@@ -216,13 +216,21 @@ static int qcom_spi_ooblayout_ecc(struct mtd_info *mtd, int section,
+ 	struct qcom_nand_controller *snandc = nand_to_qcom_snand(nand);
+ 	struct qpic_ecc *qecc = snandc->qspi->ecc;
+ 
+-	if (section > 1)
+-		return -ERANGE;
+-
+-	oobregion->length = qecc->ecc_bytes_hw + qecc->spare_bytes;
+-	oobregion->offset = mtd->oobsize - oobregion->length;
++	switch (section) {
++	case 0:
++		oobregion->offset = 0;
++		oobregion->length = qecc->bytes * (qecc->steps - 1) +
++				    qecc->bbm_size;
++		return 0;
++	case 1:
++		oobregion->offset = qecc->bytes * (qecc->steps - 1) +
++				    qecc->bbm_size +
++				    qecc->steps * 4;
++		oobregion->length = mtd->oobsize - oobregion->offset;
++		return 0;
++	}
+ 
+-	return 0;
++	return -ERANGE;
+ }
+ 
+ static int qcom_spi_ooblayout_free(struct mtd_info *mtd, int section,
+@@ -1185,7 +1193,7 @@ static int qcom_spi_program_oob(struct qcom_nand_controller *snandc,
+ 	u32 cfg0, cfg1, ecc_bch_cfg, ecc_buf_cfg;
+ 
+ 	cfg0 = (ecc_cfg->cfg0 & ~CW_PER_PAGE_MASK) |
+-	       FIELD_PREP(CW_PER_PAGE_MASK, num_cw - 1);
++	       FIELD_PREP(CW_PER_PAGE_MASK, 0);
+ 	cfg1 = ecc_cfg->cfg1;
+ 	ecc_bch_cfg = ecc_cfg->ecc_bch_cfg;
+ 	ecc_buf_cfg = ecc_cfg->ecc_buf_cfg;
+diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c
+index e5e08c6f79f222..19fd31cb9bb035 100644
+--- a/drivers/staging/media/imx/imx-media-csc-scaler.c
++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c
+@@ -912,7 +912,7 @@ imx_media_csc_scaler_device_init(struct imx_media_dev *md)
+ 	return &priv->vdev;
+ 
+ err_m2m:
+-	video_set_drvdata(vfd, NULL);
++	video_device_release(vfd);
+ err_vfd:
+ 	kfree(priv);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 6d7b8c4667c9cd..db26cf5a5c0264 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2376,9 +2376,8 @@ int serial8250_do_startup(struct uart_port *port)
+ 	/*
+ 	 * Now, initialize the UART
+ 	 */
+-	serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
+-
+ 	uart_port_lock_irqsave(port, &flags);
++	serial_port_out(port, UART_LCR, UART_LCR_WLEN8);
+ 	if (up->port.flags & UPF_FOURPORT) {
+ 		if (!up->port.irq)
+ 			up->port.mctrl |= TIOCM_OUT1;
+diff --git a/drivers/tty/vt/defkeymap.c_shipped b/drivers/tty/vt/defkeymap.c_shipped
+index 0c043e4f292e8a..6af7bf8d5460c5 100644
+--- a/drivers/tty/vt/defkeymap.c_shipped
++++ b/drivers/tty/vt/defkeymap.c_shipped
+@@ -23,6 +23,22 @@ unsigned short plain_map[NR_KEYS] = {
+ 	0xf118,	0xf601,	0xf602,	0xf117,	0xf600,	0xf119,	0xf115,	0xf116,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ static unsigned short shift_map[NR_KEYS] = {
+@@ -42,6 +58,22 @@ static unsigned short shift_map[NR_KEYS] = {
+ 	0xf20b,	0xf601,	0xf602,	0xf117,	0xf600,	0xf20a,	0xf115,	0xf116,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ static unsigned short altgr_map[NR_KEYS] = {
+@@ -61,6 +93,22 @@ static unsigned short altgr_map[NR_KEYS] = {
+ 	0xf118,	0xf601,	0xf602,	0xf117,	0xf600,	0xf119,	0xf115,	0xf116,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ static unsigned short ctrl_map[NR_KEYS] = {
+@@ -80,6 +128,22 @@ static unsigned short ctrl_map[NR_KEYS] = {
+ 	0xf118,	0xf601,	0xf602,	0xf117,	0xf600,	0xf119,	0xf115,	0xf116,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ static unsigned short shift_ctrl_map[NR_KEYS] = {
+@@ -99,6 +163,22 @@ static unsigned short shift_ctrl_map[NR_KEYS] = {
+ 	0xf118,	0xf601,	0xf602,	0xf117,	0xf600,	0xf119,	0xf115,	0xf116,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ static unsigned short alt_map[NR_KEYS] = {
+@@ -118,6 +198,22 @@ static unsigned short alt_map[NR_KEYS] = {
+ 	0xf118,	0xf210,	0xf211,	0xf117,	0xf600,	0xf119,	0xf115,	0xf116,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ static unsigned short ctrl_alt_map[NR_KEYS] = {
+@@ -137,6 +233,22 @@ static unsigned short ctrl_alt_map[NR_KEYS] = {
+ 	0xf118,	0xf601,	0xf602,	0xf117,	0xf600,	0xf119,	0xf115,	0xf20c,
+ 	0xf11a,	0xf10c,	0xf10d,	0xf11b,	0xf11c,	0xf110,	0xf311,	0xf11d,
+ 	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
++	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,	0xf200,
+ };
+ 
+ unsigned short *key_maps[MAX_NR_KEYMAPS] = {
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index dc585079c2fb8c..ee1d9c448c7ebf 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -1487,7 +1487,7 @@ static void kbd_keycode(unsigned int keycode, int down, bool hw_raw)
+ 		rc = atomic_notifier_call_chain(&keyboard_notifier_list,
+ 						KBD_UNICODE, &param);
+ 		if (rc != NOTIFY_STOP)
+-			if (down && !raw_mode)
++			if (down && !(raw_mode || kbd->kbdmode == VC_OFF))
+ 				k_unicode(vc, keysym, !down);
+ 		return;
+ 	}
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 3cc566e8bd1d26..5224a214540212 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -5531,9 +5531,9 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
+ 	irqreturn_t retval = IRQ_NONE;
+ 	struct uic_command *cmd;
+ 
+-	spin_lock(hba->host->host_lock);
++	guard(spinlock_irqsave)(hba->host->host_lock);
+ 	cmd = hba->active_uic_cmd;
+-	if (WARN_ON_ONCE(!cmd))
++	if (!cmd)
+ 		goto unlock;
+ 
+ 	if (ufshcd_is_auto_hibern8_error(hba, intr_status))
+@@ -5558,8 +5558,6 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status)
+ 		ufshcd_add_uic_command_trace(hba, cmd, UFS_CMD_COMP);
+ 
+ unlock:
+-	spin_unlock(hba->host->host_lock);
+-
+ 	return retval;
+ }
+ 
+@@ -6892,7 +6890,7 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status)
+ 	bool queue_eh_work = false;
+ 	irqreturn_t retval = IRQ_NONE;
+ 
+-	spin_lock(hba->host->host_lock);
++	guard(spinlock_irqsave)(hba->host->host_lock);
+ 	hba->errors |= UFSHCD_ERROR_MASK & intr_status;
+ 
+ 	if (hba->errors & INT_FATAL_ERRORS) {
+@@ -6951,7 +6949,7 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status)
+ 	 */
+ 	hba->errors = 0;
+ 	hba->uic_error = 0;
+-	spin_unlock(hba->host->host_lock);
++
+ 	return retval;
+ }
+ 
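The ufshcd conversion uses the cleanup.h guard() helper: the spinlock (with IRQ state saved) is released automatically when the scope ends, so every early return sheds its explicit unlock, which is why the unlock label bodies above disappear. A minimal kernel-style sketch of the idiom:

#include <linux/cleanup.h>
#include <linux/spinlock.h>

static int update_stats(spinlock_t *lock, int *counter, int delta)
{
	/* Lock held (IRQs saved) from here to the end of the scope */
	guard(spinlock_irqsave)(lock);

	if (delta == 0)
		return -EINVAL;	/* unlocks automatically */

	*counter += delta;
	return 0;		/* unlocks automatically */
}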
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index 3e545af536e53e..f0adcd9dd553d2 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -1110,8 +1110,8 @@ static int exynos_ufs_post_link(struct ufs_hba *hba)
+ 	hci_writel(ufs, val, HCI_TXPRDT_ENTRY_SIZE);
+ 
+ 	hci_writel(ufs, ilog2(DATA_UNIT_SIZE), HCI_RXPRDT_ENTRY_SIZE);
+-	hci_writel(ufs, (1 << hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
+-	hci_writel(ufs, (1 << hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
++	hci_writel(ufs, BIT(hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE);
++	hci_writel(ufs, BIT(hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE);
+ 	hci_writel(ufs, 0xf, HCI_AXIDMA_RWDATA_BURST_LEN);
+ 
+ 	if (ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB)
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 18a9784520013e..2e4edc192e8e9b 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -2053,17 +2053,6 @@ static irqreturn_t ufs_qcom_mcq_esi_handler(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void ufs_qcom_irq_free(struct ufs_qcom_irq *uqi)
+-{
+-	for (struct ufs_qcom_irq *q = uqi; q->irq; q++)
+-		devm_free_irq(q->hba->dev, q->irq, q->hba);
+-
+-	platform_device_msi_free_irqs_all(uqi->hba->dev);
+-	devm_kfree(uqi->hba->dev, uqi);
+-}
+-
+-DEFINE_FREE(ufs_qcom_irq, struct ufs_qcom_irq *, if (_T) ufs_qcom_irq_free(_T))
+-
+ static int ufs_qcom_config_esi(struct ufs_hba *hba)
+ {
+ 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+@@ -2078,18 +2067,18 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
+ 	 */
+ 	nr_irqs = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL];
+ 
+-	struct ufs_qcom_irq *qi __free(ufs_qcom_irq) =
+-		devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL);
+-	if (!qi)
+-		return -ENOMEM;
+-	/* Preset so __free() has a pointer to hba in all error paths */
+-	qi[0].hba = hba;
+-
+ 	ret = platform_device_msi_init_and_alloc_irqs(hba->dev, nr_irqs,
+ 						      ufs_qcom_write_msi_msg);
+ 	if (ret) {
+-		dev_err(hba->dev, "Failed to request Platform MSI %d\n", ret);
+-		return ret;
++		dev_warn(hba->dev, "Platform MSI not supported or failed, continuing without ESI\n");
++		return ret; /* caller treats ESI as unavailable and continues */
++	}
++
++	struct ufs_qcom_irq *qi = devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL);
++
++	if (!qi) {
++		platform_device_msi_free_irqs_all(hba->dev);
++		return -ENOMEM;
+ 	}
+ 
+ 	for (int idx = 0; idx < nr_irqs; idx++) {
+@@ -2100,17 +2089,18 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
+ 		ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler,
+ 				       IRQF_SHARED, "qcom-mcq-esi", qi + idx);
+ 		if (ret) {
+-			dev_err(hba->dev, "%s: Fail to request IRQ for %d, err = %d\n",
++			dev_err(hba->dev, "%s: Failed to request IRQ for %d, err = %d\n",
+ 				__func__, qi[idx].irq, ret);
+-			qi[idx].irq = 0;
++			/* Free previously allocated IRQs */
++			for (int j = 0; j < idx; j++)
++				devm_free_irq(hba->dev, qi[j].irq, qi + j);
++			platform_device_msi_free_irqs_all(hba->dev);
++			devm_kfree(hba->dev, qi);
+ 			return ret;
+ 		}
+ 	}
+ 
+-	retain_and_null_ptr(qi);
+-
+-	if (host->hw_ver.major == 6 && host->hw_ver.minor == 0 &&
+-	    host->hw_ver.step == 0) {
++	if (host->hw_ver.major >= 6) {
+ 		ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
+ 			    REG_UFS_CFG3);
+ 	}
+diff --git a/drivers/ufs/host/ufshcd-pci.c b/drivers/ufs/host/ufshcd-pci.c
+index 996387906aa14e..8aff32d7057d5e 100644
+--- a/drivers/ufs/host/ufshcd-pci.c
++++ b/drivers/ufs/host/ufshcd-pci.c
+@@ -216,6 +216,32 @@ static int ufs_intel_lkf_apply_dev_quirks(struct ufs_hba *hba)
+ 	return ret;
+ }
+ 
++static void ufs_intel_ctrl_uic_compl(struct ufs_hba *hba, bool enable)
++{
++	u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
++
++	if (enable)
++		set |= UIC_COMMAND_COMPL;
++	else
++		set &= ~UIC_COMMAND_COMPL;
++	ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE);
++}
++
++static void ufs_intel_mtl_h8_notify(struct ufs_hba *hba,
++				    enum uic_cmd_dme cmd,
++				    enum ufs_notify_change_status status)
++{
++	/*
++	 * Disable UIC COMPL INTR to prevent access to UFSHCI after
++	 * checking HCS.UPMCRS
++	 */
++	if (status == PRE_CHANGE && cmd == UIC_CMD_DME_HIBER_ENTER)
++		ufs_intel_ctrl_uic_compl(hba, false);
++
++	if (status == POST_CHANGE && cmd == UIC_CMD_DME_HIBER_EXIT)
++		ufs_intel_ctrl_uic_compl(hba, true);
++}
++
+ #define INTEL_ACTIVELTR		0x804
+ #define INTEL_IDLELTR		0x808
+ 
+@@ -442,10 +468,23 @@ static int ufs_intel_adl_init(struct ufs_hba *hba)
+ 	return ufs_intel_common_init(hba);
+ }
+ 
++static void ufs_intel_mtl_late_init(struct ufs_hba *hba)
++{
++	hba->rpm_lvl = UFS_PM_LVL_2;
++	hba->spm_lvl = UFS_PM_LVL_2;
++}
++
+ static int ufs_intel_mtl_init(struct ufs_hba *hba)
+ {
++	struct ufs_host *ufs_host;
++	int err;
++
+ 	hba->caps |= UFSHCD_CAP_CRYPTO | UFSHCD_CAP_WB_EN;
+-	return ufs_intel_common_init(hba);
++	err = ufs_intel_common_init(hba);
++	/* Get variant after it is set in ufs_intel_common_init() */
++	ufs_host = ufshcd_get_variant(hba);
++	ufs_host->late_init = ufs_intel_mtl_late_init;
++	return err;
+ }
+ 
+ static int ufs_qemu_get_hba_mac(struct ufs_hba *hba)
+@@ -533,6 +572,7 @@ static struct ufs_hba_variant_ops ufs_intel_mtl_hba_vops = {
+ 	.init			= ufs_intel_mtl_init,
+ 	.exit			= ufs_intel_common_exit,
+ 	.hce_enable_notify	= ufs_intel_hce_enable_notify,
++	.hibern8_notify		= ufs_intel_mtl_h8_notify,
+ 	.link_startup_notify	= ufs_intel_link_startup_notify,
+ 	.resume			= ufs_intel_resume,
+ 	.device_reset		= ufs_intel_device_reset,
+diff --git a/drivers/usb/atm/cxacru.c b/drivers/usb/atm/cxacru.c
+index a12ab90b3db75b..68a8e9de8b4fe9 100644
+--- a/drivers/usb/atm/cxacru.c
++++ b/drivers/usb/atm/cxacru.c
+@@ -980,25 +980,60 @@ static int cxacru_fw(struct usb_device *usb_dev, enum cxacru_fw_request fw,
+ 	return ret;
+ }
+ 
+-static void cxacru_upload_firmware(struct cxacru_data *instance,
+-				   const struct firmware *fw,
+-				   const struct firmware *bp)
++
++static int cxacru_find_firmware(struct cxacru_data *instance,
++				char *phase, const struct firmware **fw_p)
+ {
+-	int ret;
++	struct usbatm_data *usbatm = instance->usbatm;
++	struct device *dev = &usbatm->usb_intf->dev;
++	char buf[16];
++
++	sprintf(buf, "cxacru-%s.bin", phase);
++	usb_dbg(usbatm, "cxacru_find_firmware: looking for %s\n", buf);
++
++	if (request_firmware(fw_p, buf, dev)) {
++		usb_dbg(usbatm, "no stage %s firmware found\n", phase);
++		return -ENOENT;
++	}
++
++	usb_info(usbatm, "found firmware %s\n", buf);
++
++	return 0;
++}
++
++static int cxacru_heavy_init(struct usbatm_data *usbatm_instance,
++			     struct usb_interface *usb_intf)
++{
++	const struct firmware *fw, *bp;
++	struct cxacru_data *instance = usbatm_instance->driver_data;
+ 	struct usbatm_data *usbatm = instance->usbatm;
+ 	struct usb_device *usb_dev = usbatm->usb_dev;
+ 	__le16 signature[] = { usb_dev->descriptor.idVendor,
+ 			       usb_dev->descriptor.idProduct };
+ 	__le32 val;
++	int ret;
+ 
+-	usb_dbg(usbatm, "%s\n", __func__);
++	ret = cxacru_find_firmware(instance, "fw", &fw);
++	if (ret) {
++		usb_warn(usbatm_instance, "firmware (cxacru-fw.bin) unavailable (system misconfigured?)\n");
++		return ret;
++	}
++
++	if (instance->modem_type->boot_rom_patch) {
++		ret = cxacru_find_firmware(instance, "bp", &bp);
++		if (ret) {
++			usb_warn(usbatm_instance, "boot ROM patch (cxacru-bp.bin) unavailable (system misconfigured?)\n");
++			release_firmware(fw);
++			return ret;
++		}
++	}
+ 
+ 	/* FirmwarePllFClkValue */
+ 	val = cpu_to_le32(instance->modem_type->pll_f_clk);
+ 	ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, PLLFCLK_ADDR, (u8 *) &val, 4);
+ 	if (ret) {
+ 		usb_err(usbatm, "FirmwarePllFClkValue failed: %d\n", ret);
+-		return;
++		goto done;
+ 	}
+ 
+ 	/* FirmwarePllBClkValue */
+@@ -1006,7 +1041,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 	ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, PLLBCLK_ADDR, (u8 *) &val, 4);
+ 	if (ret) {
+ 		usb_err(usbatm, "FirmwarePllBClkValue failed: %d\n", ret);
+-		return;
++		goto done;
+ 	}
+ 
+ 	/* Enable SDRAM */
+@@ -1014,7 +1049,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 	ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, SDRAMEN_ADDR, (u8 *) &val, 4);
+ 	if (ret) {
+ 		usb_err(usbatm, "Enable SDRAM failed: %d\n", ret);
+-		return;
++		goto done;
+ 	}
+ 
+ 	/* Firmware */
+@@ -1022,7 +1057,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 	ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, FW_ADDR, fw->data, fw->size);
+ 	if (ret) {
+ 		usb_err(usbatm, "Firmware upload failed: %d\n", ret);
+-		return;
++		goto done;
+ 	}
+ 
+ 	/* Boot ROM patch */
+@@ -1031,7 +1066,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 		ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, BR_ADDR, bp->data, bp->size);
+ 		if (ret) {
+ 			usb_err(usbatm, "Boot ROM patching failed: %d\n", ret);
+-			return;
++			goto done;
+ 		}
+ 	}
+ 
+@@ -1039,7 +1074,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 	ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, SIG_ADDR, (u8 *) signature, 4);
+ 	if (ret) {
+ 		usb_err(usbatm, "Signature storing failed: %d\n", ret);
+-		return;
++		goto done;
+ 	}
+ 
+ 	usb_info(usbatm, "starting device\n");
+@@ -1051,7 +1086,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 	}
+ 	if (ret) {
+ 		usb_err(usbatm, "Passing control to firmware failed: %d\n", ret);
+-		return;
++		goto done;
+ 	}
+ 
+ 	/* Delay to allow firmware to start up. */
+@@ -1065,53 +1100,10 @@ static void cxacru_upload_firmware(struct cxacru_data *instance,
+ 	ret = cxacru_cm(instance, CM_REQUEST_CARD_GET_STATUS, NULL, 0, NULL, 0);
+ 	if (ret < 0) {
+ 		usb_err(usbatm, "modem failed to initialize: %d\n", ret);
+-		return;
+-	}
+-}
+-
+-static int cxacru_find_firmware(struct cxacru_data *instance,
+-				char *phase, const struct firmware **fw_p)
+-{
+-	struct usbatm_data *usbatm = instance->usbatm;
+-	struct device *dev = &usbatm->usb_intf->dev;
+-	char buf[16];
+-
+-	sprintf(buf, "cxacru-%s.bin", phase);
+-	usb_dbg(usbatm, "cxacru_find_firmware: looking for %s\n", buf);
+-
+-	if (request_firmware(fw_p, buf, dev)) {
+-		usb_dbg(usbatm, "no stage %s firmware found\n", phase);
+-		return -ENOENT;
+-	}
+-
+-	usb_info(usbatm, "found firmware %s\n", buf);
+-
+-	return 0;
+-}
+-
+-static int cxacru_heavy_init(struct usbatm_data *usbatm_instance,
+-			     struct usb_interface *usb_intf)
+-{
+-	const struct firmware *fw, *bp;
+-	struct cxacru_data *instance = usbatm_instance->driver_data;
+-	int ret = cxacru_find_firmware(instance, "fw", &fw);
+-
+-	if (ret) {
+-		usb_warn(usbatm_instance, "firmware (cxacru-fw.bin) unavailable (system misconfigured?)\n");
+-		return ret;
++		goto done;
+ 	}
+ 
+-	if (instance->modem_type->boot_rom_patch) {
+-		ret = cxacru_find_firmware(instance, "bp", &bp);
+-		if (ret) {
+-			usb_warn(usbatm_instance, "boot ROM patch (cxacru-bp.bin) unavailable (system misconfigured?)\n");
+-			release_firmware(fw);
+-			return ret;
+-		}
+-	}
+-
+-	cxacru_upload_firmware(instance, fw, bp);
+-
++done:
+ 	if (instance->modem_type->boot_rom_patch)
+ 		release_firmware(bp);
+ 	release_firmware(fw);
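Folding cxacru_upload_firmware() into cxacru_heavy_init() lets every failure after request_firmware() jump to the single done: label, so both blobs are released exactly once on all paths. A compact userspace illustration of the single-exit pattern, with malloc/free standing in for the firmware calls:

#include <stdlib.h>

static int heavy_init(void)
{
	char *fw = NULL, *bp = NULL;
	int ret = -1;

	fw = malloc(64);
	if (!fw)
		return -1;		/* nothing acquired yet */
	bp = malloc(64);
	if (!bp)
		goto done;		/* fw released below */

	/* ... several upload steps, each "goto done" on failure ... */
	ret = 0;
done:
	free(bp);			/* free(NULL) is a no-op */
	free(fw);
	return ret;
}

int main(void) { return heavy_init(); }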
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index c22de97432a01b..04bb87e4615a2e 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1623,7 +1623,6 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
+ 	struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus);
+ 	struct usb_anchor *anchor = urb->anchor;
+ 	int status = urb->unlinked;
+-	unsigned long flags;
+ 
+ 	urb->hcpriv = NULL;
+ 	if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) &&
+@@ -1641,14 +1640,13 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
+ 	/* pass ownership to the completion handler */
+ 	urb->status = status;
+ 	/*
+-	 * Only collect coverage in the softirq context and disable interrupts
+-	 * to avoid scenarios with nested remote coverage collection sections
+-	 * that KCOV does not support.
+-	 * See the comment next to kcov_remote_start_usb_softirq() for details.
++	 * This function can be called in task context inside another remote
++	 * coverage collection section, but kcov doesn't support that kind of
++	 * recursion yet. Only collect coverage in softirq context for now.
+ 	 */
+-	flags = kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum);
++	kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum);
+ 	urb->complete(urb);
+-	kcov_remote_stop_softirq(flags);
++	kcov_remote_stop_softirq();
+ 
+ 	usb_anchor_resume_wakeups(anchor);
+ 	atomic_dec(&urb->use_count);
+@@ -2153,7 +2151,7 @@ static struct urb *request_single_step_set_feature_urb(
+ 	urb->complete = usb_ehset_completion;
+ 	urb->status = -EINPROGRESS;
+ 	urb->actual_length = 0;
+-	urb->transfer_flags = URB_DIR_IN;
++	urb->transfer_flags = URB_DIR_IN | URB_NO_TRANSFER_DMA_MAP;
+ 	usb_get_urb(urb);
+ 	atomic_inc(&urb->use_count);
+ 	atomic_inc(&urb->dev->urbnum);
+@@ -2217,9 +2215,15 @@ int ehset_single_step_set_feature(struct usb_hcd *hcd, int port)
+ 
+ 	/* Complete remaining DATA and STATUS stages using the same URB */
+ 	urb->status = -EINPROGRESS;
++	urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP;
+ 	usb_get_urb(urb);
+ 	atomic_inc(&urb->use_count);
+ 	atomic_inc(&urb->dev->urbnum);
++	if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) {
++		usb_put_urb(urb);
++		goto out1;
++	}
++
+ 	retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0);
+ 	if (!retval && !wait_for_completion_timeout(&done,
+ 						msecs_to_jiffies(2000))) {
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 0cf94c7a2c9ce6..d6daad39491b75 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -371,6 +371,7 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },
+ 
+ 	/* SanDisk Corp. SanDisk 3.2Gen1 */
++	{ USB_DEVICE(0x0781, 0x5596), .driver_info = USB_QUIRK_DELAY_INIT },
+ 	{ USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT },
+ 
+ 	/* SanDisk Extreme 55AE */
+diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c
+index 3edc5aca76f97b..bce6af82f54c24 100644
+--- a/drivers/usb/dwc3/dwc3-imx8mp.c
++++ b/drivers/usb/dwc3/dwc3-imx8mp.c
+@@ -244,7 +244,7 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 					IRQF_ONESHOT, dev_name(dev), dwc3_imx);
+ 	if (err) {
+ 		dev_err(dev, "failed to request IRQ #%d --> %d\n", irq, err);
+-		goto depopulate;
++		goto put_dwc3;
+ 	}
+ 
+ 	device_set_wakeup_capable(dev, true);
+@@ -252,6 +252,8 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++put_dwc3:
++	put_device(&dwc3_imx->dwc3->dev);
+ depopulate:
+ 	of_platform_depopulate(dev);
+ remove_swnode:
+@@ -265,8 +267,11 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 
+ static void dwc3_imx8mp_remove(struct platform_device *pdev)
+ {
++	struct dwc3_imx8mp *dwc3_imx = platform_get_drvdata(pdev);
+ 	struct device *dev = &pdev->dev;
+ 
++	put_device(&dwc3_imx->dwc3->dev);
++
+ 	pm_runtime_get_sync(dev);
+ 	of_platform_depopulate(dev);
+ 	device_remove_software_node(dev);
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index 7d80bf7b18b0d2..55e144ba8cfc6c 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -837,6 +837,9 @@ static void dwc3_meson_g12a_remove(struct platform_device *pdev)
+ 
+ 	usb_role_switch_unregister(priv->role_switch);
+ 
++	put_device(priv->switch_desc.udc);
++	put_device(priv->switch_desc.usb2_port);
++
+ 	of_platform_depopulate(dev);
+ 
+ 	for (i = 0 ; i < PHY_COUNT ; ++i) {
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 54a4ee2b90b7f4..39c72cb52ce76a 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -41,6 +41,7 @@
+ #define PCI_DEVICE_ID_INTEL_TGPLP		0xa0ee
+ #define PCI_DEVICE_ID_INTEL_TGPH		0x43ee
+ #define PCI_DEVICE_ID_INTEL_JSP			0x4dee
++#define PCI_DEVICE_ID_INTEL_WCL			0x4d7e
+ #define PCI_DEVICE_ID_INTEL_ADL			0x460e
+ #define PCI_DEVICE_ID_INTEL_ADL_PCH		0x51ee
+ #define PCI_DEVICE_ID_INTEL_ADLN		0x465e
+@@ -431,6 +432,7 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, TGPLP, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, TGPH, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, JSP, &dwc3_pci_intel_swnode) },
++	{ PCI_DEVICE_DATA(INTEL, WCL, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, ADL, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, ADL_PCH, &dwc3_pci_intel_swnode) },
+ 	{ PCI_DEVICE_DATA(INTEL, ADLN, &dwc3_pci_intel_swnode) },
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index 666ac432f52d67..b4229aa13f375b 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -288,7 +288,9 @@ void dwc3_ep0_out_start(struct dwc3 *dwc)
+ 	dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 8,
+ 			DWC3_TRBCTL_CONTROL_SETUP, false);
+ 	ret = dwc3_ep0_start_trans(dep);
+-	WARN_ON(ret < 0);
++	if (ret < 0)
++		dev_err(dwc->dev, "ep0 out start transfer failed: %d\n", ret);
++
+ 	for (i = 2; i < DWC3_ENDPOINTS_NUM; i++) {
+ 		struct dwc3_ep *dwc3_ep;
+ 
+@@ -1061,7 +1063,9 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
+ 		ret = dwc3_ep0_start_trans(dep);
+ 	}
+ 
+-	WARN_ON(ret < 0);
++	if (ret < 0)
++		dev_err(dwc->dev,
++			"ep0 data phase start transfer failed: %d\n", ret);
+ }
+ 
+ static int dwc3_ep0_start_control_status(struct dwc3_ep *dep)
+@@ -1078,7 +1082,12 @@ static int dwc3_ep0_start_control_status(struct dwc3_ep *dep)
+ 
+ static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep)
+ {
+-	WARN_ON(dwc3_ep0_start_control_status(dep));
++	int	ret;
++
++	ret = dwc3_ep0_start_control_status(dep);
++	if (ret)
++		dev_err(dwc->dev,
++			"ep0 status phase start transfer failed: %d\n", ret);
+ }
+ 
+ static void dwc3_ep0_do_control_status(struct dwc3 *dwc,
+@@ -1121,7 +1130,10 @@ void dwc3_ep0_end_control_data(struct dwc3 *dwc, struct dwc3_ep *dep)
+ 	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
+ 	memset(&params, 0, sizeof(params));
+ 	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
+-	WARN_ON_ONCE(ret);
++	if (ret)
++		dev_err_ratelimited(dwc->dev,
++			"ep0 data phase end transfer failed: %d\n", ret);
++
+ 	dep->resource_index = 0;
+ }
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 74968f93d4a353..6ab6c9f163b893 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1774,7 +1774,11 @@ static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool int
+ 		dep->flags |= DWC3_EP_DELAY_STOP;
+ 		return 0;
+ 	}
+-	WARN_ON_ONCE(ret);
++
++	if (ret)
++		dev_err_ratelimited(dep->dwc->dev,
++				"end transfer failed: %d\n", ret);
++
+ 	dep->resource_index = 0;
+ 
+ 	if (!interrupt)
+@@ -3779,6 +3783,15 @@ static void dwc3_gadget_endpoint_transfer_complete(struct dwc3_ep *dep,
+ static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep,
+ 		const struct dwc3_event_depevt *event)
+ {
++	/*
++	 * During a device-initiated disconnect, a late xferNotReady event can
++	 * be generated after the End Transfer command resets the event filter,
++	 * but before the controller is halted. Ignore it to prevent a new
++	 * transfer from starting.
++	 */
++	if (!dep->dwc->connected)
++		return;
++
+ 	dwc3_gadget_endpoint_frame_from_event(dep, event);
+ 
+ 	/*
+@@ -4041,7 +4054,9 @@ static void dwc3_clear_stall_all_ep(struct dwc3 *dwc)
+ 		dep->flags &= ~DWC3_EP_STALL;
+ 
+ 		ret = dwc3_send_clear_stall_ep_cmd(dep);
+-		WARN_ON_ONCE(ret);
++		if (ret)
++			dev_err_ratelimited(dwc->dev,
++				"failed to clear STALL on %s\n", dep->name);
+ 	}
+ }
+ 
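
The !dep->dwc->connected check in dwc3_gadget_endpoint_transfer_not_ready() codifies a general rule for event-driven drivers: once a disconnect has begun, stragglers from the hardware event queue must not be allowed to kick off new work. A compact sketch of that guard, with hypothetical event names; note that only events which would start a new transfer are dropped:

#include <stdbool.h>
#include <stdio.h>

struct ctrl {
	bool connected;
};

enum ev { EV_XFER_COMPLETE, EV_XFER_NOT_READY };

static void handle_event(struct ctrl *c, enum ev e)
{
	/* Late events can still arrive between "disconnect requested"
	 * and "controller halted"; handlers that would start new work
	 * must check the connected flag first. */
	if (e == EV_XFER_NOT_READY && !c->connected) {
		printf("dropping late xferNotReady after disconnect\n");
		return;		/* do not start a new transfer */
	}
	printf("handling event %d\n", e);
}

int main(void)
{
	struct ctrl c = { .connected = true };

	handle_event(&c, EV_XFER_NOT_READY);
	c.connected = false;	/* device-initiated disconnect begins */
	handle_event(&c, EV_XFER_NOT_READY);
	return 0;
}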
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 3e4d5645759791..d23b1762e0e45f 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2657,6 +2657,7 @@ static void renesas_usb3_remove(struct platform_device *pdev)
+ 	struct renesas_usb3 *usb3 = platform_get_drvdata(pdev);
+ 
+ 	debugfs_remove_recursive(usb3->dentry);
++	put_device(usb3->host_dev);
+ 	device_remove_file(&pdev->dev, &dev_attr_role);
+ 
+ 	cancel_work_sync(&usb3->role_work);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 92bb84f8132a97..b3a59ce1b3f41f 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -704,8 +704,7 @@ static int xhci_enter_test_mode(struct xhci_hcd *xhci,
+ 		if (!xhci->devs[i])
+ 			continue;
+ 
+-		retval = xhci_disable_slot(xhci, i);
+-		xhci_free_virt_device(xhci, i);
++		retval = xhci_disable_and_free_slot(xhci, i);
+ 		if (retval)
+ 			xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n",
+ 				 i, retval);
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 07289333a1e8f1..81eaad87a3d9d0 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -865,21 +865,20 @@ int xhci_alloc_tt_info(struct xhci_hcd *xhci,
+  * will be manipulated by the configure endpoint, allocate device, or update
+  * hub functions while this function is removing the TT entries from the list.
+  */
+-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
++void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev,
++		int slot_id)
+ {
+-	struct xhci_virt_device *dev;
+ 	int i;
+ 	int old_active_eps = 0;
+ 
+ 	/* Slot ID 0 is reserved */
+-	if (slot_id == 0 || !xhci->devs[slot_id])
++	if (slot_id == 0 || !dev)
+ 		return;
+ 
+-	dev = xhci->devs[slot_id];
+-
+-	xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
+-	if (!dev)
+-		return;
++	/* If device ctx array still points to _this_ device, clear it */
++	if (dev->out_ctx &&
++	    xhci->dcbaa->dev_context_ptrs[slot_id] == cpu_to_le64(dev->out_ctx->dma))
++		xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
+ 
+ 	trace_xhci_free_virt_device(dev);
+ 
+@@ -920,8 +919,9 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)
+ 		dev->udev->slot_id = 0;
+ 	if (dev->rhub_port && dev->rhub_port->slot_id == slot_id)
+ 		dev->rhub_port->slot_id = 0;
+-	kfree(xhci->devs[slot_id]);
+-	xhci->devs[slot_id] = NULL;
++	if (xhci->devs[slot_id] == dev)
++		xhci->devs[slot_id] = NULL;
++	kfree(dev);
+ }
+ 
+ /*
+@@ -962,7 +962,7 @@ static void xhci_free_virt_devices_depth_first(struct xhci_hcd *xhci, int slot_i
+ out:
+ 	/* we are now at a leaf device */
+ 	xhci_debugfs_remove_slot(xhci, slot_id);
+-	xhci_free_virt_device(xhci, slot_id);
++	xhci_free_virt_device(xhci, vdev, slot_id);
+ }
+ 
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
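
The reworked xhci_free_virt_device() takes the device pointer explicitly and clears xhci->devs[slot_id] and the DCBAA entry only if they still reference that particular device, so a slot that was disabled and then concurrently re-enabled with a new device is not clobbered by the late free. The shape of that defensive free, reduced to a plain array in userspace:

#include <stdio.h>
#include <stdlib.h>

#define SLOTS 4

struct vdev { int id; };

static struct vdev *table[SLOTS];

/* Free @dev, but only clear table[slot] if it still points at @dev.
 * A concurrent re-enable may already have installed a new device in
 * the same slot; blindly writing NULL would orphan that occupant. */
static void free_vdev(struct vdev *dev, int slot)
{
	if (!dev || slot <= 0 || slot >= SLOTS)	/* slot 0 is reserved */
		return;
	if (table[slot] == dev)
		table[slot] = NULL;
	free(dev);
}

int main(void)
{
	struct vdev *old = malloc(sizeof(*old));
	struct vdev *fresh = malloc(sizeof(*fresh));

	old->id = 1;
	fresh->id = 1;
	table[1] = old;
	table[1] = fresh;	/* slot reused before old was freed */
	free_vdev(old, 1);	/* must not clobber the new occupant */
	printf("slot 1 still holds id %d\n", table[1]->id);
	free_vdev(fresh, 1);
	return 0;
}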
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index 620f8f0febb84b..86df80399c9fdc 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -47,8 +47,9 @@
+ #define RENESAS_ROM_ERASE_MAGIC				0x5A65726F
+ #define RENESAS_ROM_WRITE_MAGIC				0x53524F4D
+ 
+-#define RENESAS_RETRY	10000
+-#define RENESAS_DELAY	10
++#define RENESAS_RETRY			50000	/* 50000 * RENESAS_DELAY ~= 500ms */
++#define RENESAS_CHIP_ERASE_RETRY	500000	/* 500000 * RENESAS_DELAY ~= 5s */
++#define RENESAS_DELAY			10
+ 
+ #define RENESAS_FW_NAME	"renesas_usb_fw.mem"
+ 
+@@ -407,7 +408,7 @@ static void renesas_rom_erase(struct pci_dev *pdev)
+ 	/* sleep a bit while ROM is erased */
+ 	msleep(20);
+ 
+-	for (i = 0; i < RENESAS_RETRY; i++) {
++	for (i = 0; i < RENESAS_CHIP_ERASE_RETRY; i++) {
+ 		retval = pci_read_config_byte(pdev, RENESAS_ROM_STATUS,
+ 					      &status);
+ 		status &= RENESAS_ROM_STATUS_ERASE;
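
The renamed constants make the polling budget explicit: total wait is roughly retries times delay, so RENESAS_RETRY bounds ordinary ROM operations at about 500ms while RENESAS_CHIP_ERASE_RETRY gives a full chip erase around 5s. The bounded-poll pattern in stand-alone form (op_done() is a stand-in for the status-register read):

#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

#define DELAY_US	10
#define RETRIES		50000	/* 50000 * 10us ~= 500ms budget */

static bool op_done(int i)
{
	return i > 1000;	/* pretend completion after ~10ms */
}

int main(void)
{
	int i;

	for (i = 0; i < RETRIES; i++) {
		if (op_done(i))
			break;
		usleep(DELAY_US);
	}
	if (i == RETRIES)
		fprintf(stderr, "timed out after ~%dms\n",
			RETRIES * DELAY_US / 1000);
	else
		printf("done after %d polls\n", i);
	return 0;
}

Sizing the retry count to the slowest legitimate operation, rather than sharing one constant, is the whole point of the split.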
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index ecd757d482c582..4f8f5aab109d0c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1592,7 +1592,8 @@ static void xhci_handle_cmd_enable_slot(int slot_id, struct xhci_command *comman
+ 		command->slot_id = 0;
+ }
+ 
+-static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)
++static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id,
++					u32 cmd_comp_code)
+ {
+ 	struct xhci_virt_device *virt_dev;
+ 	struct xhci_slot_ctx *slot_ctx;
+@@ -1607,6 +1608,10 @@ static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)
+ 	if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)
+ 		/* Delete default control endpoint resources */
+ 		xhci_free_device_endpoint_resources(xhci, virt_dev, true);
++	if (cmd_comp_code == COMP_SUCCESS) {
++		xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
++		xhci->devs[slot_id] = NULL;
++	}
+ }
+ 
+ static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id)
+@@ -1856,7 +1861,7 @@ static void handle_cmd_completion(struct xhci_hcd *xhci,
+ 		xhci_handle_cmd_enable_slot(slot_id, cmd, cmd_comp_code);
+ 		break;
+ 	case TRB_DISABLE_SLOT:
+-		xhci_handle_cmd_disable_slot(xhci, slot_id);
++		xhci_handle_cmd_disable_slot(xhci, slot_id, cmd_comp_code);
+ 		break;
+ 	case TRB_CONFIG_EP:
+ 		if (!cmd->completion)
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 47151ca527bfaf..742c23826e173a 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -309,6 +309,7 @@ int xhci_enable_interrupter(struct xhci_interrupter *ir)
+ 		return -EINVAL;
+ 
+ 	iman = readl(&ir->ir_set->iman);
++	iman &= ~IMAN_IP;
+ 	iman |= IMAN_IE;
+ 	writel(iman, &ir->ir_set->iman);
+ 
+@@ -325,6 +326,7 @@ int xhci_disable_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
+ 		return -EINVAL;
+ 
+ 	iman = readl(&ir->ir_set->iman);
++	iman &= ~IMAN_IP;
+ 	iman &= ~IMAN_IE;
+ 	writel(iman, &ir->ir_set->iman);
+ 
+@@ -3932,8 +3934,7 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd,
+ 		 * Obtaining a new device slot to inform the xHCI host that
+ 		 * the USB device has been reset.
+ 		 */
+-		ret = xhci_disable_slot(xhci, udev->slot_id);
+-		xhci_free_virt_device(xhci, udev->slot_id);
++		ret = xhci_disable_and_free_slot(xhci, udev->slot_id);
+ 		if (!ret) {
+ 			ret = xhci_alloc_dev(hcd, udev);
+ 			if (ret == 1)
+@@ -4090,7 +4091,7 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	xhci_disable_slot(xhci, udev->slot_id);
+ 
+ 	spin_lock_irqsave(&xhci->lock, flags);
+-	xhci_free_virt_device(xhci, udev->slot_id);
++	xhci_free_virt_device(xhci, virt_dev, udev->slot_id);
+ 	spin_unlock_irqrestore(&xhci->lock, flags);
+ 
+ }
+@@ -4139,6 +4140,16 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+ 	return 0;
+ }
+ 
++int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id)
++{
++	struct xhci_virt_device *vdev = xhci->devs[slot_id];
++	int ret;
++
++	ret = xhci_disable_slot(xhci, slot_id);
++	xhci_free_virt_device(xhci, vdev, slot_id);
++	return ret;
++}
++
+ /*
+  * Checks if we have enough host controller resources for the default control
+  * endpoint.
+@@ -4245,8 +4256,7 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	return 1;
+ 
+ disable_slot:
+-	xhci_disable_slot(xhci, udev->slot_id);
+-	xhci_free_virt_device(xhci, udev->slot_id);
++	xhci_disable_and_free_slot(xhci, udev->slot_id);
+ 
+ 	return 0;
+ }
+@@ -4382,8 +4392,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
+ 		dev_warn(&udev->dev, "Device not responding to setup %s.\n", act);
+ 
+ 		mutex_unlock(&xhci->mutex);
+-		ret = xhci_disable_slot(xhci, udev->slot_id);
+-		xhci_free_virt_device(xhci, udev->slot_id);
++		ret = xhci_disable_and_free_slot(xhci, udev->slot_id);
+ 		if (!ret) {
+ 			if (xhci_alloc_dev(hcd, udev) == 1)
+ 				xhci_setup_addressable_virt_dev(xhci, udev);
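
The IMAN hunks near the top of the xhci.c diff mask out IMAN_IP before writing the register back. IP is a write-1-to-clear bit, so a plain read-modify-write that merely toggles IE would also acknowledge a pending interrupt as a side effect. A small simulation of the hazard and the fix, with the register emulated in software:

#include <stdio.h>
#include <stdint.h>

#define IMAN_IP	(1u << 0)	/* interrupt pending, write-1-to-clear */
#define IMAN_IE	(1u << 1)	/* interrupt enable */

static uint32_t iman = IMAN_IP;	/* an interrupt is latched pending */

/* Emulate the hardware: writing 1 to IP clears it, writing 0 keeps it. */
static void reg_write(uint32_t v)
{
	uint32_t pending = iman & IMAN_IP;

	if (v & IMAN_IP)
		pending = 0;
	iman = (v & ~IMAN_IP) | pending;
}

static void set_ie(int enable)
{
	uint32_t v = iman;

	v &= ~IMAN_IP;	/* the fix: never write the W1C bit back as 1 */
	if (enable)
		v |= IMAN_IE;
	else
		v &= ~IMAN_IE;
	reg_write(v);
}

int main(void)
{
	set_ie(1);
	printf("IE=%u, IP still pending=%u\n",
	       !!(iman & IMAN_IE), !!(iman & IMAN_IP));
	return 0;
}

Without the mask, enabling interrupts would silently eat the pending bit and the event it represents.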
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index a20f4e7cd43a80..85d5b964bf1e9b 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1791,7 +1791,7 @@ void xhci_dbg_trace(struct xhci_hcd *xhci, void (*trace)(struct va_format *),
+ /* xHCI memory management */
+ void xhci_mem_cleanup(struct xhci_hcd *xhci);
+ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags);
+-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id);
++void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, int slot_id);
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, struct usb_device *udev, gfp_t flags);
+ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev);
+ void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci,
+@@ -1888,6 +1888,7 @@ void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ 			   struct usb_tt *tt, gfp_t mem_flags);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
++int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+ 
+ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup);
+diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
+index 2970967a4fd28b..36f756f9b7f603 100644
+--- a/drivers/usb/musb/omap2430.c
++++ b/drivers/usb/musb/omap2430.c
+@@ -400,7 +400,7 @@ static int omap2430_probe(struct platform_device *pdev)
+ 	ret = platform_device_add_resources(musb, pdev->resource, pdev->num_resources);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to add resources\n");
+-		goto err2;
++		goto err_put_control_otghs;
+ 	}
+ 
+ 	if (populate_irqs) {
+@@ -413,7 +413,7 @@ static int omap2430_probe(struct platform_device *pdev)
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 		if (!res) {
+ 			ret = -EINVAL;
+-			goto err2;
++			goto err_put_control_otghs;
+ 		}
+ 
+ 		musb_res[i].start = res->start;
+@@ -441,14 +441,14 @@ static int omap2430_probe(struct platform_device *pdev)
+ 		ret = platform_device_add_resources(musb, musb_res, i);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "failed to add IRQ resources\n");
+-			goto err2;
++			goto err_put_control_otghs;
+ 		}
+ 	}
+ 
+ 	ret = platform_device_add_data(musb, pdata, sizeof(*pdata));
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to add platform_data\n");
+-		goto err2;
++		goto err_put_control_otghs;
+ 	}
+ 
+ 	pm_runtime_enable(glue->dev);
+@@ -463,7 +463,9 @@ static int omap2430_probe(struct platform_device *pdev)
+ 
+ err3:
+ 	pm_runtime_disable(glue->dev);
+-
++err_put_control_otghs:
++	if (!IS_ERR(glue->control_otghs))
++		put_device(glue->control_otghs);
+ err2:
+ 	platform_device_put(musb);
+ 
+@@ -477,6 +479,8 @@ static void omap2430_remove(struct platform_device *pdev)
+ 
+ 	platform_device_unregister(glue->musb);
+ 	pm_runtime_disable(glue->dev);
++	if (!IS_ERR(glue->control_otghs))
++		put_device(glue->control_otghs);
+ }
+ 
+ #ifdef CONFIG_PM
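
The omap2430 changes are pure error-path bookkeeping: a new err_put_control_otghs label sits between the existing labels so every path that obtained the control-device reference also drops it. The classic goto-unwind shape, sketched with malloc/free standing in for get_device()/put_device():

#include <stdio.h>
#include <stdlib.h>

/* Each label undoes exactly the steps that succeeded before the
 * failure, in reverse order; inserting a resource between two steps
 * means inserting a label between their two unwind points. */
static int probe(int fail_at)
{
	void *a = NULL, *b = NULL;
	int ret = -1;

	a = malloc(16);			/* step 1: e.g. get_device() */
	if (!a)
		return -1;
	if (fail_at == 2)
		goto err_put_a;
	b = malloc(16);			/* step 2 */
	if (!b)
		goto err_put_a;
	if (fail_at == 3)
		goto err_free_b;
	free(b);
	free(a);
	return 0;

err_free_b:
	free(b);
err_put_a:
	free(a);			/* the reference the patch plugs */
	return ret;
}

int main(void)
{
	printf("fail_at=2 -> %d\n", probe(2));
	printf("fail_at=3 -> %d\n", probe(3));
	printf("success  -> %d\n", probe(0));
	return 0;
}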
+diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c
+index c18dfa2ca034e7..dc655bd640dc22 100644
+--- a/drivers/usb/storage/realtek_cr.c
++++ b/drivers/usb/storage/realtek_cr.c
+@@ -252,7 +252,7 @@ static int rts51x_bulk_transport(struct us_data *us, u8 lun,
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 	}
+ 
+-	residue = bcs->Residue;
++	residue = le32_to_cpu(bcs->Residue);
+ 	if (bcs->Tag != us->tag)
+ 		return USB_STOR_TRANSPORT_ERROR;
+ 
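
The realtek_cr fix wraps bcs->Residue in le32_to_cpu(): the field arrives off the wire in little-endian byte order, so using it raw happens to work on x86 and silently breaks on big-endian hosts. The conversion in portable, stand-alone form:

#include <stdio.h>
#include <stdint.h>

/* Wire formats are fixed-endian; host integers are not. Convert
 * explicitly on every load instead of reinterpreting raw bytes. */
static uint32_t le32_to_host(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

int main(void)
{
	/* a little-endian residue field as it appears on the wire */
	uint8_t wire[4] = { 0x00, 0x02, 0x00, 0x00 };	/* 512 */

	printf("residue = %u\n", le32_to_host(wire));
	return 0;
}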
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 54f0b1c83317cd..dfa5276a5a43e2 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -934,6 +934,13 @@ UNUSUAL_DEV(  0x05e3, 0x0723, 0x9451, 0x9451,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_SANE_SENSE ),
+ 
++/* Added by Maël GUERIN <mael.guerin@murena.io> */
++UNUSUAL_DEV(  0x0603, 0x8611, 0x0000, 0xffff,
++		"Novatek",
++		"NTK96550-based camera",
++		USB_SC_SCSI, USB_PR_BULK, NULL,
++		US_FL_BULK_IGNORE_TAG ),
++
+ /*
+  * Reported by Hanno Boeck <hanno@gmx.de>
+  * Taken from the Lycoris Kernel
+@@ -1494,6 +1501,28 @@ UNUSUAL_DEV( 0x0bc2, 0x3332, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_WP_DETECT ),
+ 
++/*
++ * Reported by Zenm Chen <zenmchen@gmail.com>
++ * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch
++ * the device into Wi-Fi mode.
++ */
++UNUSUAL_DEV( 0x0bda, 0x1a2b, 0x0000, 0xffff,
++		"Realtek",
++		"DISK",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_DEVICE ),
++
++/*
++ * Reported by Zenm Chen <zenmchen@gmail.com>
++ * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch
++ * the device into Wi-Fi mode.
++ */
++UNUSUAL_DEV( 0x0bda, 0xa192, 0x0000, 0xffff,
++		"Realtek",
++		"DISK",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_DEVICE ),
++
+ UNUSUAL_DEV(  0x0d49, 0x7310, 0x0000, 0x9999,
+ 		"Maxtor",
+ 		"USB to SATA",
+diff --git a/drivers/usb/typec/tcpm/maxim_contaminant.c b/drivers/usb/typec/tcpm/maxim_contaminant.c
+index 0cdda06592fd3c..af8da6dc60ae0b 100644
+--- a/drivers/usb/typec/tcpm/maxim_contaminant.c
++++ b/drivers/usb/typec/tcpm/maxim_contaminant.c
+@@ -188,6 +188,11 @@ static int max_contaminant_read_comparators(struct max_tcpci_chip *chip, u8 *ven
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/* Disable low power mode */
++	ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL,
++				 FIELD_PREP(CCLPMODESEL,
++					    LOW_POWER_MODE_DISABLE));
++
+ 	/* Sleep to allow comparators settle */
+ 	usleep_range(5000, 6000);
+ 	ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, TCPC_TCPC_CTRL_ORIENTATION, PLUG_ORNT_CC1);
+@@ -324,6 +329,39 @@ static int max_contaminant_enable_dry_detection(struct max_tcpci_chip *chip)
+ 	return 0;
+ }
+ 
++static int max_contaminant_enable_toggling(struct max_tcpci_chip *chip)
++{
++	struct regmap *regmap = chip->data.regmap;
++	int ret;
++
++	/* Disable dry detection if enabled. */
++	ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL,
++				 FIELD_PREP(CCLPMODESEL,
++					    LOW_POWER_MODE_DISABLE));
++	if (ret)
++		return ret;
++
++	ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL1, CCCONNDRY, 0);
++	if (ret)
++		return ret;
++
++	ret = max_tcpci_write8(chip, TCPC_ROLE_CTRL, TCPC_ROLE_CTRL_DRP |
++			       FIELD_PREP(TCPC_ROLE_CTRL_CC1,
++					  TCPC_ROLE_CTRL_CC_RD) |
++			       FIELD_PREP(TCPC_ROLE_CTRL_CC2,
++					  TCPC_ROLE_CTRL_CC_RD));
++	if (ret)
++		return ret;
++
++	ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL,
++				 TCPC_TCPC_CTRL_EN_LK4CONN_ALRT,
++				 TCPC_TCPC_CTRL_EN_LK4CONN_ALRT);
++	if (ret)
++		return ret;
++
++	return max_tcpci_write8(chip, TCPC_COMMAND, TCPC_CMD_LOOK4CONNECTION);
++}
++
+ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect_while_debounce,
+ 				    bool *cc_handled)
+ {
+@@ -340,6 +378,12 @@ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect
+ 	if (ret < 0)
+ 		return false;
+ 
++	if (cc_status & TCPC_CC_STATUS_TOGGLING) {
++		if (chip->contaminant_state == DETECTED)
++			return true;
++		return false;
++	}
++
+ 	if (chip->contaminant_state == NOT_DETECTED || chip->contaminant_state == SINK) {
+ 		if (!disconnect_while_debounce)
+ 			msleep(100);
+@@ -372,6 +416,12 @@ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect
+ 				max_contaminant_enable_dry_detection(chip);
+ 				return true;
+ 			}
++
++			ret = max_contaminant_enable_toggling(chip);
++			if (ret)
++				dev_err(chip->dev,
++					"Failed to enable toggling, ret=%d",
++					ret);
+ 		}
+ 	} else if (chip->contaminant_state == DETECTED) {
+ 		if (!(cc_status & TCPC_CC_STATUS_TOGGLING)) {
+@@ -379,6 +429,14 @@ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect
+ 			if (chip->contaminant_state == DETECTED) {
+ 				max_contaminant_enable_dry_detection(chip);
+ 				return true;
++			} else {
++				ret = max_contaminant_enable_toggling(chip);
++				if (ret) {
++					dev_err(chip->dev,
++						"Failed to enable toggling, ret=%d",
++						ret);
++					return true;
++				}
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim.h b/drivers/usb/typec/tcpm/tcpci_maxim.h
+index 76270d5c283880..b33540a42a953d 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim.h
++++ b/drivers/usb/typec/tcpm/tcpci_maxim.h
+@@ -21,6 +21,7 @@
+ #define CCOVPDIS                                BIT(6)
+ #define SBURPCTRL                               BIT(5)
+ #define CCLPMODESEL                             GENMASK(4, 3)
++#define LOW_POWER_MODE_DISABLE                  0
+ #define ULTRA_LOW_POWER_MODE                    1
+ #define CCRPCTRL                                GENMASK(2, 0)
+ #define UA_1_SRC                                1
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index 802153e230730b..66a0f060770ef2 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -344,6 +344,9 @@ vhost_vsock_alloc_skb(struct vhost_virtqueue *vq,
+ 
+ 	len = iov_length(vq->iov, out);
+ 
++	if (len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE + VIRTIO_VSOCK_SKB_HEADROOM)
++		return NULL;
++
+ 	/* len contains both payload and hdr */
+ 	skb = virtio_vsock_alloc_skb(len, GFP_KERNEL);
+ 	if (!skb)
+@@ -367,8 +370,7 @@ vhost_vsock_alloc_skb(struct vhost_virtqueue *vq,
+ 		return skb;
+ 
+ 	/* The pkt is too big or the length in the header is invalid */
+-	if (payload_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE ||
+-	    payload_len + sizeof(*hdr) > len) {
++	if (payload_len + sizeof(*hdr) > len) {
+ 		kfree_skb(skb);
+ 		return NULL;
+ 	}
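
The vhost/vsock change moves the size cap in front of the allocation: len is guest-controlled, and checking it only after virtio_vsock_alloc_skb() lets a hostile guest force arbitrarily large allocations that are then thrown away. The validate-before-allocate pattern, reduced to its essentials (limits here are hypothetical):

#include <stdio.h>
#include <stdlib.h>

#define MAX_PKT		(64u * 1024)
#define HEADROOM	16u

/* Reject untrusted sizes before allocating; capping afterwards still
 * pays the full allocation cost for a packet that will be dropped. */
static void *alloc_pkt(size_t len)
{
	if (len > MAX_PKT + HEADROOM)
		return NULL;		/* cap first, allocate second */
	return malloc(len);
}

int main(void)
{
	void *ok = alloc_pkt(1500);
	void *bad = alloc_pkt((size_t)1 << 40);

	printf("1500 bytes: %s\n", ok ? "allocated" : "rejected");
	printf("1 TiB     : %s\n", bad ? "allocated" : "rejected");
	free(ok);
	return 0;
}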
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index f9cdbf8c53e34b..37bd18730fe0df 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -1168,7 +1168,7 @@ static bool vgacon_scroll(struct vc_data *c, unsigned int t, unsigned int b,
+ 				     c->vc_screenbuf_size - delta);
+ 			c->vc_origin = vga_vram_end - c->vc_screenbuf_size;
+ 			vga_rolled_over = 0;
+-		} else if (oldo - delta >= (unsigned long)c->vc_screenbuf)
++		} else
+ 			c->vc_origin -= delta;
+ 		c->vc_scr_end = c->vc_origin + c->vc_screenbuf_size;
+ 		scr_memsetw((u16 *) (c->vc_origin), c->vc_video_erase_char,
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 8abc066ce51fb4..94fd6f2404cdce 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -198,7 +198,7 @@ struct extent_buffer *btrfs_root_node(struct btrfs_root *root)
+ 		 * the inc_not_zero dance and if it doesn't work then
+ 		 * synchronize_rcu and try again.
+ 		 */
+-		if (atomic_inc_not_zero(&eb->refs)) {
++		if (refcount_inc_not_zero(&eb->refs)) {
+ 			rcu_read_unlock();
+ 			break;
+ 		}
+@@ -283,7 +283,14 @@ int btrfs_copy_root(struct btrfs_trans_handle *trans,
+ 
+ 	write_extent_buffer_fsid(cow, fs_info->fs_devices->metadata_uuid);
+ 
+-	WARN_ON(btrfs_header_generation(buf) > trans->transid);
++	if (unlikely(btrfs_header_generation(buf) > trans->transid)) {
++		btrfs_tree_unlock(cow);
++		free_extent_buffer(cow);
++		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
++		return ret;
++	}
++
+ 	if (new_root_objectid == BTRFS_TREE_RELOC_OBJECTID)
+ 		ret = btrfs_inc_ref(trans, root, cow, 1);
+ 	else
+@@ -549,7 +556,7 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans,
+ 			btrfs_abort_transaction(trans, ret);
+ 			goto error_unlock_cow;
+ 		}
+-		atomic_inc(&cow->refs);
++		refcount_inc(&cow->refs);
+ 		rcu_assign_pointer(root->node, cow);
+ 
+ 		ret = btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
+@@ -1081,7 +1088,7 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 	/* update the path */
+ 	if (left) {
+ 		if (btrfs_header_nritems(left) > orig_slot) {
+-			atomic_inc(&left->refs);
++			refcount_inc(&left->refs);
+ 			/* left was locked after cow */
+ 			path->nodes[level] = left;
+ 			path->slots[level + 1] -= 1;
+@@ -1685,7 +1692,7 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root,
+ 
+ 	if (p->search_commit_root) {
+ 		b = root->commit_root;
+-		atomic_inc(&b->refs);
++		refcount_inc(&b->refs);
+ 		level = btrfs_header_level(b);
+ 		/*
+ 		 * Ensure that all callers have set skip_locking when
+@@ -2886,7 +2893,7 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans,
+ 	free_extent_buffer(old);
+ 
+ 	add_root_to_dirty_list(root);
+-	atomic_inc(&c->refs);
++	refcount_inc(&c->refs);
+ 	path->nodes[level] = c;
+ 	path->locks[level] = BTRFS_WRITE_LOCK;
+ 	path->slots[level] = 0;
+@@ -4443,7 +4450,7 @@ static noinline int btrfs_del_leaf(struct btrfs_trans_handle *trans,
+ 
+ 	root_sub_used_bytes(root);
+ 
+-	atomic_inc(&leaf->refs);
++	refcount_inc(&leaf->refs);
+ 	ret = btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1);
+ 	free_extent_buffer_stale(leaf);
+ 	if (ret < 0)
+@@ -4528,7 +4535,7 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 			 * for possible call to btrfs_del_ptr below
+ 			 */
+ 			slot = path->slots[1];
+-			atomic_inc(&leaf->refs);
++			refcount_inc(&leaf->refs);
+ 			/*
+ 			 * We want to be able to at least push one item to the
+ 			 * left neighbour leaf, and that's the first item.
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 5f16e6a79d1088..528ac11505de96 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6342,7 +6342,7 @@ int btrfs_drop_subtree(struct btrfs_trans_handle *trans,
+ 
+ 	btrfs_assert_tree_write_locked(parent);
+ 	parent_level = btrfs_header_level(parent);
+-	atomic_inc(&parent->refs);
++	refcount_inc(&parent->refs);
+ 	path->nodes[parent_level] = parent;
+ 	path->slots[parent_level] = btrfs_header_nritems(parent);
+ 
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 1dc931c4937fc0..3711a5d073423d 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -77,7 +77,7 @@ void btrfs_extent_buffer_leak_debug_check(struct btrfs_fs_info *fs_info)
+ 				      struct extent_buffer, leak_list);
+ 		pr_err(
+ 	"BTRFS: buffer leak start %llu len %u refs %d bflags %lu owner %llu\n",
+-		       eb->start, eb->len, atomic_read(&eb->refs), eb->bflags,
++		       eb->start, eb->len, refcount_read(&eb->refs), eb->bflags,
+ 		       btrfs_header_owner(eb));
+ 		list_del(&eb->leak_list);
+ 		WARN_ON_ONCE(1);
+@@ -782,7 +782,7 @@ static void submit_extent_folio(struct btrfs_bio_ctrl *bio_ctrl,
+ 
+ static int attach_extent_buffer_folio(struct extent_buffer *eb,
+ 				      struct folio *folio,
+-				      struct btrfs_subpage *prealloc)
++				      struct btrfs_folio_state *prealloc)
+ {
+ 	struct btrfs_fs_info *fs_info = eb->fs_info;
+ 	int ret = 0;
+@@ -806,7 +806,7 @@ static int attach_extent_buffer_folio(struct extent_buffer *eb,
+ 
+ 	/* Already mapped, just free prealloc */
+ 	if (folio_test_private(folio)) {
+-		btrfs_free_subpage(prealloc);
++		btrfs_free_folio_state(prealloc);
+ 		return 0;
+ 	}
+ 
+@@ -815,7 +815,7 @@ static int attach_extent_buffer_folio(struct extent_buffer *eb,
+ 		folio_attach_private(folio, prealloc);
+ 	else
+ 		/* Do new allocation to attach subpage */
+-		ret = btrfs_attach_subpage(fs_info, folio, BTRFS_SUBPAGE_METADATA);
++		ret = btrfs_attach_folio_state(fs_info, folio, BTRFS_SUBPAGE_METADATA);
+ 	return ret;
+ }
+ 
+@@ -831,7 +831,7 @@ int set_folio_extent_mapped(struct folio *folio)
+ 	fs_info = folio_to_fs_info(folio);
+ 
+ 	if (btrfs_is_subpage(fs_info, folio))
+-		return btrfs_attach_subpage(fs_info, folio, BTRFS_SUBPAGE_DATA);
++		return btrfs_attach_folio_state(fs_info, folio, BTRFS_SUBPAGE_DATA);
+ 
+ 	folio_attach_private(folio, (void *)EXTENT_FOLIO_PRIVATE);
+ 	return 0;
+@@ -848,7 +848,7 @@ void clear_folio_extent_mapped(struct folio *folio)
+ 
+ 	fs_info = folio_to_fs_info(folio);
+ 	if (btrfs_is_subpage(fs_info, folio))
+-		return btrfs_detach_subpage(fs_info, folio, BTRFS_SUBPAGE_DATA);
++		return btrfs_detach_folio_state(fs_info, folio, BTRFS_SUBPAGE_DATA);
+ 
+ 	folio_detach_private(folio);
+ }
+@@ -1961,7 +1961,7 @@ static inline struct extent_buffer *find_get_eb(struct xa_state *xas, unsigned l
+ 	if (!eb)
+ 		return NULL;
+ 
+-	if (!atomic_inc_not_zero(&eb->refs)) {
++	if (!refcount_inc_not_zero(&eb->refs)) {
+ 		xas_reset(xas);
+ 		goto retry;
+ 	}
+@@ -2012,7 +2012,7 @@ static struct extent_buffer *find_extent_buffer_nolock(
+ 
+ 	rcu_read_lock();
+ 	eb = xa_load(&fs_info->buffer_tree, index);
+-	if (eb && !atomic_inc_not_zero(&eb->refs))
++	if (eb && !refcount_inc_not_zero(&eb->refs))
+ 		eb = NULL;
+ 	rcu_read_unlock();
+ 	return eb;
+@@ -2731,13 +2731,13 @@ static int extent_buffer_under_io(const struct extent_buffer *eb)
+ 
+ static bool folio_range_has_eb(struct folio *folio)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 
+ 	lockdep_assert_held(&folio->mapping->i_private_lock);
+ 
+ 	if (folio_test_private(folio)) {
+-		subpage = folio_get_private(folio);
+-		if (atomic_read(&subpage->eb_refs))
++		bfs = folio_get_private(folio);
++		if (atomic_read(&bfs->eb_refs))
+ 			return true;
+ 	}
+ 	return false;
+@@ -2787,7 +2787,7 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo
+ 	 * attached to one dummy eb, no sharing.
+ 	 */
+ 	if (!mapped) {
+-		btrfs_detach_subpage(fs_info, folio, BTRFS_SUBPAGE_METADATA);
++		btrfs_detach_folio_state(fs_info, folio, BTRFS_SUBPAGE_METADATA);
+ 		return;
+ 	}
+ 
+@@ -2798,7 +2798,7 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo
+ 	 * page range and no unfinished IO.
+ 	 */
+ 	if (!folio_range_has_eb(folio))
+-		btrfs_detach_subpage(fs_info, folio, BTRFS_SUBPAGE_METADATA);
++		btrfs_detach_folio_state(fs_info, folio, BTRFS_SUBPAGE_METADATA);
+ 
+ 	spin_unlock(&mapping->i_private_lock);
+ }
+@@ -2842,7 +2842,7 @@ static struct extent_buffer *__alloc_extent_buffer(struct btrfs_fs_info *fs_info
+ 	btrfs_leak_debug_add_eb(eb);
+ 
+ 	spin_lock_init(&eb->refs_lock);
+-	atomic_set(&eb->refs, 1);
++	refcount_set(&eb->refs, 1);
+ 
+ 	ASSERT(eb->len <= BTRFS_MAX_METADATA_BLOCKSIZE);
+ 
+@@ -2975,13 +2975,13 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
+ 	 * once io is initiated, TREE_REF can no longer be cleared, so that is
+ 	 * the moment at which any such race is best fixed.
+ 	 */
+-	refs = atomic_read(&eb->refs);
++	refs = refcount_read(&eb->refs);
+ 	if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
+ 		return;
+ 
+ 	spin_lock(&eb->refs_lock);
+ 	if (!test_and_set_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
+-		atomic_inc(&eb->refs);
++		refcount_inc(&eb->refs);
+ 	spin_unlock(&eb->refs_lock);
+ }
+ 
+@@ -3047,7 +3047,7 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
+ 		return ERR_PTR(ret);
+ 	}
+ 	if (exists) {
+-		if (!atomic_inc_not_zero(&exists->refs)) {
++		if (!refcount_inc_not_zero(&exists->refs)) {
+ 			/* The extent buffer is being freed, retry. */
+ 			xa_unlock_irq(&fs_info->buffer_tree);
+ 			goto again;
+@@ -3092,7 +3092,7 @@ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info,
+ 	 * just overwrite folio private.
+ 	 */
+ 	exists = folio_get_private(folio);
+-	if (atomic_inc_not_zero(&exists->refs))
++	if (refcount_inc_not_zero(&exists->refs))
+ 		return exists;
+ 
+ 	WARN_ON(folio_test_dirty(folio));
+@@ -3141,7 +3141,7 @@ static bool check_eb_alignment(struct btrfs_fs_info *fs_info, u64 start)
+  * The caller needs to free the existing folios and retry using the same order.
+  */
+ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
+-				      struct btrfs_subpage *prealloc,
++				      struct btrfs_folio_state *prealloc,
+ 				      struct extent_buffer **found_eb_ret)
+ {
+ 
+@@ -3224,7 +3224,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ 	int attached = 0;
+ 	struct extent_buffer *eb;
+ 	struct extent_buffer *existing_eb = NULL;
+-	struct btrfs_subpage *prealloc = NULL;
++	struct btrfs_folio_state *prealloc = NULL;
+ 	u64 lockdep_owner = owner_root;
+ 	bool page_contig = true;
+ 	int uptodate = 1;
+@@ -3269,7 +3269,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ 	 * manually if we exit earlier.
+ 	 */
+ 	if (btrfs_meta_is_subpage(fs_info)) {
+-		prealloc = btrfs_alloc_subpage(fs_info, PAGE_SIZE, BTRFS_SUBPAGE_METADATA);
++		prealloc = btrfs_alloc_folio_state(fs_info, PAGE_SIZE, BTRFS_SUBPAGE_METADATA);
+ 		if (IS_ERR(prealloc)) {
+ 			ret = PTR_ERR(prealloc);
+ 			goto out;
+@@ -3280,7 +3280,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ 	/* Allocate all pages first. */
+ 	ret = alloc_eb_folio_array(eb, true);
+ 	if (ret < 0) {
+-		btrfs_free_subpage(prealloc);
++		btrfs_free_folio_state(prealloc);
+ 		goto out;
+ 	}
+ 
+@@ -3362,7 +3362,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ 		goto out;
+ 	}
+ 	if (existing_eb) {
+-		if (!atomic_inc_not_zero(&existing_eb->refs)) {
++		if (!refcount_inc_not_zero(&existing_eb->refs)) {
+ 			xa_unlock_irq(&fs_info->buffer_tree);
+ 			goto again;
+ 		}
+@@ -3391,7 +3391,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
+ 	return eb;
+ 
+ out:
+-	WARN_ON(!atomic_dec_and_test(&eb->refs));
++	WARN_ON(!refcount_dec_and_test(&eb->refs));
+ 
+ 	/*
+ 	 * Any attached folios need to be detached before we unlock them.  This
+@@ -3437,8 +3437,7 @@ static int release_extent_buffer(struct extent_buffer *eb)
+ {
+ 	lockdep_assert_held(&eb->refs_lock);
+ 
+-	WARN_ON(atomic_read(&eb->refs) == 0);
+-	if (atomic_dec_and_test(&eb->refs)) {
++	if (refcount_dec_and_test(&eb->refs)) {
+ 		struct btrfs_fs_info *fs_info = eb->fs_info;
+ 
+ 		spin_unlock(&eb->refs_lock);
+@@ -3484,22 +3483,26 @@ void free_extent_buffer(struct extent_buffer *eb)
+ 	if (!eb)
+ 		return;
+ 
+-	refs = atomic_read(&eb->refs);
++	refs = refcount_read(&eb->refs);
+ 	while (1) {
+-		if ((!test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) && refs <= 3)
+-		    || (test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) &&
+-			refs == 1))
++		if (test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags)) {
++			if (refs == 1)
++				break;
++		} else if (refs <= 3) {
+ 			break;
+-		if (atomic_try_cmpxchg(&eb->refs, &refs, refs - 1))
++		}
++
++		/* Optimization to avoid locking eb->refs_lock. */
++		if (atomic_try_cmpxchg(&eb->refs.refs, &refs, refs - 1))
+ 			return;
+ 	}
+ 
+ 	spin_lock(&eb->refs_lock);
+-	if (atomic_read(&eb->refs) == 2 &&
++	if (refcount_read(&eb->refs) == 2 &&
+ 	    test_bit(EXTENT_BUFFER_STALE, &eb->bflags) &&
+ 	    !extent_buffer_under_io(eb) &&
+ 	    test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
+-		atomic_dec(&eb->refs);
++		refcount_dec(&eb->refs);
+ 
+ 	/*
+ 	 * I know this is terrible, but it's temporary until we stop tracking
+@@ -3516,9 +3519,9 @@ void free_extent_buffer_stale(struct extent_buffer *eb)
+ 	spin_lock(&eb->refs_lock);
+ 	set_bit(EXTENT_BUFFER_STALE, &eb->bflags);
+ 
+-	if (atomic_read(&eb->refs) == 2 && !extent_buffer_under_io(eb) &&
++	if (refcount_read(&eb->refs) == 2 && !extent_buffer_under_io(eb) &&
+ 	    test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
+-		atomic_dec(&eb->refs);
++		refcount_dec(&eb->refs);
+ 	release_extent_buffer(eb);
+ }
+ 
+@@ -3576,7 +3579,7 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
+ 			btree_clear_folio_dirty_tag(folio);
+ 		folio_unlock(folio);
+ 	}
+-	WARN_ON(atomic_read(&eb->refs) == 0);
++	WARN_ON(refcount_read(&eb->refs) == 0);
+ }
+ 
+ void set_extent_buffer_dirty(struct extent_buffer *eb)
+@@ -3587,7 +3590,7 @@ void set_extent_buffer_dirty(struct extent_buffer *eb)
+ 
+ 	was_dirty = test_and_set_bit(EXTENT_BUFFER_DIRTY, &eb->bflags);
+ 
+-	WARN_ON(atomic_read(&eb->refs) == 0);
++	WARN_ON(refcount_read(&eb->refs) == 0);
+ 	WARN_ON(!test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags));
+ 	WARN_ON(test_bit(EXTENT_BUFFER_ZONED_ZEROOUT, &eb->bflags));
+ 
+@@ -3713,7 +3716,7 @@ int read_extent_buffer_pages_nowait(struct extent_buffer *eb, int mirror_num,
+ 
+ 	eb->read_mirror = 0;
+ 	check_buffer_tree_ref(eb);
+-	atomic_inc(&eb->refs);
++	refcount_inc(&eb->refs);
+ 
+ 	bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES,
+ 			       REQ_OP_READ | REQ_META, eb->fs_info,
+@@ -4301,15 +4304,18 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
+ 	unsigned long end = index + (PAGE_SIZE >> fs_info->sectorsize_bits) - 1;
+ 	int ret;
+ 
+-	xa_lock_irq(&fs_info->buffer_tree);
++	rcu_read_lock();
+ 	xa_for_each_range(&fs_info->buffer_tree, index, eb, start, end) {
+ 		/*
+ 		 * The same as try_release_extent_buffer(), to ensure the eb
+ 		 * won't disappear out from under us.
+ 		 */
+ 		spin_lock(&eb->refs_lock);
+-		if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
++		rcu_read_unlock();
++
++		if (refcount_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
+ 			spin_unlock(&eb->refs_lock);
++			rcu_read_lock();
+ 			continue;
+ 		}
+ 
+@@ -4328,11 +4334,10 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
+ 		 * check the folio private at the end.  And
+ 		 * release_extent_buffer() will release the refs_lock.
+ 		 */
+-		xa_unlock_irq(&fs_info->buffer_tree);
+ 		release_extent_buffer(eb);
+-		xa_lock_irq(&fs_info->buffer_tree);
++		rcu_read_lock();
+ 	}
+-	xa_unlock_irq(&fs_info->buffer_tree);
++	rcu_read_unlock();
+ 
+ 	/*
+ 	 * Finally to check if we have cleared folio private, as if we have
+@@ -4345,7 +4350,6 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
+ 		ret = 0;
+ 	spin_unlock(&folio->mapping->i_private_lock);
+ 	return ret;
+-
+ }
+ 
+ int try_release_extent_buffer(struct folio *folio)
+@@ -4374,7 +4378,7 @@ int try_release_extent_buffer(struct folio *folio)
+ 	 * this page.
+ 	 */
+ 	spin_lock(&eb->refs_lock);
+-	if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
++	if (refcount_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
+ 		spin_unlock(&eb->refs_lock);
+ 		spin_unlock(&folio->mapping->i_private_lock);
+ 		return 0;
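
The extent-buffer conversion from atomic_t to refcount_t keeps one deliberate escape hatch: free_extent_buffer() still drops references through a raw cmpxchg loop (atomic_try_cmpxchg on eb->refs.refs) as long as the count stays above the threshold where the locked teardown logic must run. A stand-alone C11 version of that conditional lock-free decrement:

#include <stdio.h>
#include <stdatomic.h>

/* Drop a reference without taking the lock, but only while the count
 * stays above @floor; a CAS loop retries if a concurrent get/put
 * moved the count under us. */
static _Atomic unsigned int refs;

static int put_fast(unsigned int floor)
{
	unsigned int cur = atomic_load(&refs);

	while (cur > floor) {
		if (atomic_compare_exchange_weak(&refs, &cur, cur - 1))
			return 1;	/* dropped without the lock */
	}
	return 0;			/* caller must take the slow path */
}

int main(void)
{
	atomic_store(&refs, 5);
	while (put_fast(3))
		;
	printf("refs now %u (slow path takes over)\n",
	       atomic_load(&refs));
	return 0;
}

Once the count reaches the floor, the caller falls back to taking refs_lock, just as the patched loop breaks out to the locked path.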
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index e36e8d6a00bc50..65bb87f1dce61e 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -98,7 +98,7 @@ struct extent_buffer {
+ 	void *addr;
+ 
+ 	spinlock_t refs_lock;
+-	atomic_t refs;
++	refcount_t refs;
+ 	int read_mirror;
+ 	/* >= 0 if eb belongs to a log tree, -1 otherwise */
+ 	s8 log_index;
+diff --git a/fs/btrfs/fiemap.c b/fs/btrfs/fiemap.c
+index 43bf0979fd5394..7935586a9dbd0f 100644
+--- a/fs/btrfs/fiemap.c
++++ b/fs/btrfs/fiemap.c
+@@ -320,7 +320,7 @@ static int fiemap_next_leaf_item(struct btrfs_inode *inode, struct btrfs_path *p
+ 	 * the cost of allocating a new one.
+ 	 */
+ 	ASSERT(test_bit(EXTENT_BUFFER_UNMAPPED, &clone->bflags));
+-	atomic_inc(&clone->refs);
++	refcount_inc(&clone->refs);
+ 
+ 	ret = btrfs_next_leaf(inode->root, path);
+ 	if (ret != 0)
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index a83c268f7f87ca..d37ce8200a1026 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1431,12 +1431,17 @@ static int __add_block_group_free_space(struct btrfs_trans_handle *trans,
+ 	set_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, &block_group->runtime_flags);
+ 
+ 	ret = add_new_free_space_info(trans, block_group, path);
+-	if (ret)
++	if (ret) {
++		btrfs_abort_transaction(trans, ret);
+ 		return ret;
++	}
++
++	ret = __add_to_free_space_tree(trans, block_group, path,
++				       block_group->start, block_group->length);
++	if (ret)
++		btrfs_abort_transaction(trans, ret);
+ 
+-	return __add_to_free_space_tree(trans, block_group, path,
+-					block_group->start,
+-					block_group->length);
++	return 0;
+ }
+ 
+ int add_block_group_free_space(struct btrfs_trans_handle *trans,
+@@ -1456,16 +1461,14 @@ int add_block_group_free_space(struct btrfs_trans_handle *trans,
+ 	path = btrfs_alloc_path();
+ 	if (!path) {
+ 		ret = -ENOMEM;
++		btrfs_abort_transaction(trans, ret);
+ 		goto out;
+ 	}
+ 
+ 	ret = __add_block_group_free_space(trans, block_group, path);
+-
+ out:
+ 	btrfs_free_path(path);
+ 	mutex_unlock(&block_group->free_space_lock);
+-	if (ret)
+-		btrfs_abort_transaction(trans, ret);
+ 	return ret;
+ }
+ 
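
Moving btrfs_abort_transaction() out of the add_block_group_free_space() caller and into each failure site means the abort report names the statement that actually failed instead of a catch-all in the caller. A sketch of why abort-at-site reports better (step names hypothetical):

#include <stdio.h>

/* Abort at the point of failure so the report names the real culprit;
 * a single abort in the caller only says "something in here failed". */
#define abort_tx(ret) \
	fprintf(stderr, "abort at %s:%d: %d\n", __func__, __LINE__, ret)

static int step_a(void) { return 0; }
static int step_b(void) { return -12; }	/* pretend -ENOMEM */

static int add_free_space(void)
{
	int ret;

	ret = step_a();
	if (ret) {
		abort_tx(ret);
		return ret;
	}
	ret = step_b();
	if (ret)
		abort_tx(ret);	/* the report points here, not the caller */
	return ret;
}

int main(void)
{
	return add_free_space() ? 1 : 0;
}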
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4ae34c22ff1de1..df4c8312aae39d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7364,13 +7364,13 @@ struct extent_map *btrfs_create_io_em(struct btrfs_inode *inode, u64 start,
+ static void wait_subpage_spinlock(struct folio *folio)
+ {
+ 	struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 
+ 	if (!btrfs_is_subpage(fs_info, folio))
+ 		return;
+ 
+ 	ASSERT(folio_test_private(folio) && folio_get_private(folio));
+-	subpage = folio_get_private(folio);
++	bfs = folio_get_private(folio);
+ 
+ 	/*
+ 	 * This may look insane as we just acquire the spinlock and release it,
+@@ -7383,8 +7383,8 @@ static void wait_subpage_spinlock(struct folio *folio)
+ 	 * Here we just acquire the spinlock so that all existing callers
+ 	 * should exit and we're safe to release/invalidate the page.
+ 	 */
+-	spin_lock_irq(&subpage->lock);
+-	spin_unlock_irq(&subpage->lock);
++	spin_lock_irq(&bfs->lock);
++	spin_unlock_irq(&bfs->lock);
+ }
+ 
+ static int btrfs_launder_folio(struct folio *folio)
+diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c
+index fc821aa446f02f..21605b03f51188 100644
+--- a/fs/btrfs/print-tree.c
++++ b/fs/btrfs/print-tree.c
+@@ -223,7 +223,7 @@ static void print_eb_refs_lock(const struct extent_buffer *eb)
+ {
+ #ifdef CONFIG_BTRFS_DEBUG
+ 	btrfs_info(eb->fs_info, "refs %u lock_owner %u current %u",
+-		   atomic_read(&eb->refs), eb->lock_owner, current->pid);
++		   refcount_read(&eb->refs), eb->lock_owner, current->pid);
+ #endif
+ }
+ 
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index e1a34e69927dd3..68cbb2b1e3df8e 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2348,7 +2348,7 @@ static int qgroup_trace_extent_swap(struct btrfs_trans_handle* trans,
+ 		btrfs_item_key_to_cpu(dst_path->nodes[dst_level], &key, 0);
+ 
+ 	/* For src_path */
+-	atomic_inc(&src_eb->refs);
++	refcount_inc(&src_eb->refs);
+ 	src_path->nodes[root_level] = src_eb;
+ 	src_path->slots[root_level] = dst_path->slots[root_level];
+ 	src_path->locks[root_level] = 0;
+@@ -2581,7 +2581,7 @@ static int qgroup_trace_subtree_swap(struct btrfs_trans_handle *trans,
+ 		goto out;
+ 	}
+ 	/* For dst_path */
+-	atomic_inc(&dst_eb->refs);
++	refcount_inc(&dst_eb->refs);
+ 	dst_path->nodes[level] = dst_eb;
+ 	dst_path->slots[level] = 0;
+ 	dst_path->locks[level] = 0;
+@@ -2673,7 +2673,7 @@ int btrfs_qgroup_trace_subtree(struct btrfs_trans_handle *trans,
+ 	 * walk back up the tree (adjusting slot pointers as we go)
+ 	 * and restart the search process.
+ 	 */
+-	atomic_inc(&root_eb->refs);	/* For path */
++	refcount_inc(&root_eb->refs);	/* For path */
+ 	path->nodes[root_level] = root_eb;
+ 	path->slots[root_level] = 0;
+ 	path->locks[root_level] = 0; /* so release_path doesn't try to unlock */
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 068c7a1ad73173..1e139d23bd75fb 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1535,7 +1535,7 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
+ 
+ 	if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) {
+ 		level = btrfs_root_level(root_item);
+-		atomic_inc(&reloc_root->node->refs);
++		refcount_inc(&reloc_root->node->refs);
+ 		path->nodes[level] = reloc_root->node;
+ 		path->slots[level] = 0;
+ 	} else {
+@@ -4358,7 +4358,7 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ 		}
+ 
+ 		btrfs_backref_drop_node_buffer(node);
+-		atomic_inc(&cow->refs);
++		refcount_inc(&cow->refs);
+ 		node->eb = cow;
+ 		node->new_bytenr = cow->start;
+ 
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index d4f0192334936c..2951fdc5db4e39 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -49,7 +49,7 @@
+  * Implementation:
+  *
+  * - Common
+- *   Both metadata and data will use a new structure, btrfs_subpage, to
++ *   Both metadata and data will use a new structure, btrfs_folio_state, to
+  *   record the status of each sector inside a page.  This provides the extra
+  *   granularity needed.
+  *
+@@ -63,10 +63,10 @@
+  *   This means a slightly higher tree locking latency.
+  */
+ 
+-int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info,
+-			 struct folio *folio, enum btrfs_subpage_type type)
++int btrfs_attach_folio_state(const struct btrfs_fs_info *fs_info,
++			     struct folio *folio, enum btrfs_folio_type type)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 
+ 	/* For metadata we don't support large folio yet. */
+ 	if (type == BTRFS_SUBPAGE_METADATA)
+@@ -87,18 +87,18 @@ int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info,
+ 	if (type == BTRFS_SUBPAGE_DATA && !btrfs_is_subpage(fs_info, folio))
+ 		return 0;
+ 
+-	subpage = btrfs_alloc_subpage(fs_info, folio_size(folio), type);
+-	if (IS_ERR(subpage))
+-		return  PTR_ERR(subpage);
++	bfs = btrfs_alloc_folio_state(fs_info, folio_size(folio), type);
++	if (IS_ERR(bfs))
++		return PTR_ERR(bfs);
+ 
+-	folio_attach_private(folio, subpage);
++	folio_attach_private(folio, bfs);
+ 	return 0;
+ }
+ 
+-void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *folio,
+-			  enum btrfs_subpage_type type)
++void btrfs_detach_folio_state(const struct btrfs_fs_info *fs_info, struct folio *folio,
++			      enum btrfs_folio_type type)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 
+ 	/* Either not subpage, or the folio already has private attached. */
+ 	if (!folio_test_private(folio))
+@@ -108,15 +108,15 @@ void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *fol
+ 	if (type == BTRFS_SUBPAGE_DATA && !btrfs_is_subpage(fs_info, folio))
+ 		return;
+ 
+-	subpage = folio_detach_private(folio);
+-	ASSERT(subpage);
+-	btrfs_free_subpage(subpage);
++	bfs = folio_detach_private(folio);
++	ASSERT(bfs);
++	btrfs_free_folio_state(bfs);
+ }
+ 
+-struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info,
+-				size_t fsize, enum btrfs_subpage_type type)
++struct btrfs_folio_state *btrfs_alloc_folio_state(const struct btrfs_fs_info *fs_info,
++						  size_t fsize, enum btrfs_folio_type type)
+ {
+-	struct btrfs_subpage *ret;
++	struct btrfs_folio_state *ret;
+ 	unsigned int real_size;
+ 
+ 	ASSERT(fs_info->sectorsize < fsize);
+@@ -136,11 +136,6 @@ struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info,
+ 	return ret;
+ }
+ 
+-void btrfs_free_subpage(struct btrfs_subpage *subpage)
+-{
+-	kfree(subpage);
+-}
+-
+ /*
+  * Increase the eb_refs of current subpage.
+  *
+@@ -152,7 +147,7 @@ void btrfs_free_subpage(struct btrfs_subpage *subpage)
+  */
+ void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 
+ 	if (!btrfs_meta_is_subpage(fs_info))
+ 		return;
+@@ -160,13 +155,13 @@ void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *
+ 	ASSERT(folio_test_private(folio) && folio->mapping);
+ 	lockdep_assert_held(&folio->mapping->i_private_lock);
+ 
+-	subpage = folio_get_private(folio);
+-	atomic_inc(&subpage->eb_refs);
++	bfs = folio_get_private(folio);
++	atomic_inc(&bfs->eb_refs);
+ }
+ 
+ void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 
+ 	if (!btrfs_meta_is_subpage(fs_info))
+ 		return;
+@@ -174,9 +169,9 @@ void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *
+ 	ASSERT(folio_test_private(folio) && folio->mapping);
+ 	lockdep_assert_held(&folio->mapping->i_private_lock);
+ 
+-	subpage = folio_get_private(folio);
+-	ASSERT(atomic_read(&subpage->eb_refs));
+-	atomic_dec(&subpage->eb_refs);
++	bfs = folio_get_private(folio);
++	ASSERT(atomic_read(&bfs->eb_refs));
++	atomic_dec(&bfs->eb_refs);
+ }
+ 
+ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
+@@ -228,7 +223,7 @@ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
+ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
+ 					    struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+ 	const int nbits = (len >> fs_info->sectorsize_bits);
+ 	unsigned long flags;
+@@ -238,7 +233,7 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
+ 
+ 	btrfs_subpage_assert(fs_info, folio, start, len);
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
++	spin_lock_irqsave(&bfs->lock, flags);
+ 	/*
+ 	 * We have call sites passing @lock_page into
+ 	 * extent_clear_unlock_delalloc() for compression path.
+@@ -246,18 +241,18 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
+ 	 * This @locked_page is locked by plain lock_page(), thus its
+ 	 * subpage::locked is 0.  Handle them in a special way.
+ 	 */
+-	if (atomic_read(&subpage->nr_locked) == 0) {
+-		spin_unlock_irqrestore(&subpage->lock, flags);
++	if (atomic_read(&bfs->nr_locked) == 0) {
++		spin_unlock_irqrestore(&bfs->lock, flags);
+ 		return true;
+ 	}
+ 
+-	for_each_set_bit_from(bit, subpage->bitmaps, start_bit + nbits) {
+-		clear_bit(bit, subpage->bitmaps);
++	for_each_set_bit_from(bit, bfs->bitmaps, start_bit + nbits) {
++		clear_bit(bit, bfs->bitmaps);
+ 		cleared++;
+ 	}
+-	ASSERT(atomic_read(&subpage->nr_locked) >= cleared);
+-	last = atomic_sub_and_test(cleared, &subpage->nr_locked);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	ASSERT(atomic_read(&bfs->nr_locked) >= cleared);
++	last = atomic_sub_and_test(cleared, &bfs->nr_locked);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ 	return last;
+ }
+ 
+@@ -280,7 +275,7 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
+ void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
+ 			  struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 
+ 	ASSERT(folio_test_locked(folio));
+ 
+@@ -296,7 +291,7 @@ void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
+ 	 * Since we own the page lock, no one else could touch subpage::locked
+ 	 * and we are safe to do several atomic operations without spinlock.
+ 	 */
+-	if (atomic_read(&subpage->nr_locked) == 0) {
++	if (atomic_read(&bfs->nr_locked) == 0) {
+ 		/* No subpage lock, locked by plain lock_page(). */
+ 		folio_unlock(folio);
+ 		return;
+@@ -310,7 +305,7 @@ void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
+ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ 				 struct folio *folio, unsigned long bitmap)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio);
+ 	const int start_bit = blocks_per_folio * btrfs_bitmap_nr_locked;
+ 	unsigned long flags;
+@@ -323,42 +318,42 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ 		return;
+ 	}
+ 
+-	if (atomic_read(&subpage->nr_locked) == 0) {
++	if (atomic_read(&bfs->nr_locked) == 0) {
+ 		/* No subpage lock, locked by plain lock_page(). */
+ 		folio_unlock(folio);
+ 		return;
+ 	}
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
++	spin_lock_irqsave(&bfs->lock, flags);
+ 	for_each_set_bit(bit, &bitmap, blocks_per_folio) {
+-		if (test_and_clear_bit(bit + start_bit, subpage->bitmaps))
++		if (test_and_clear_bit(bit + start_bit, bfs->bitmaps))
+ 			cleared++;
+ 	}
+-	ASSERT(atomic_read(&subpage->nr_locked) >= cleared);
+-	last = atomic_sub_and_test(cleared, &subpage->nr_locked);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	ASSERT(atomic_read(&bfs->nr_locked) >= cleared);
++	last = atomic_sub_and_test(cleared, &bfs->nr_locked);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ 	if (last)
+ 		folio_unlock(folio);
+ }
+ 
+ #define subpage_test_bitmap_all_set(fs_info, folio, name)		\
+ ({									\
+-	struct btrfs_subpage *subpage = folio_get_private(folio);	\
++	struct btrfs_folio_state *bfs = folio_get_private(folio);	\
+ 	const unsigned int blocks_per_folio =				\
+ 				btrfs_blocks_per_folio(fs_info, folio); \
+ 									\
+-	bitmap_test_range_all_set(subpage->bitmaps,			\
++	bitmap_test_range_all_set(bfs->bitmaps,				\
+ 			blocks_per_folio * btrfs_bitmap_nr_##name,	\
+ 			blocks_per_folio);				\
+ })
+ 
+ #define subpage_test_bitmap_all_zero(fs_info, folio, name)		\
+ ({									\
+-	struct btrfs_subpage *subpage = folio_get_private(folio);	\
++	struct btrfs_folio_state *bfs = folio_get_private(folio);	\
+ 	const unsigned int blocks_per_folio =				\
+ 				btrfs_blocks_per_folio(fs_info, folio); \
+ 									\
+-	bitmap_test_range_all_zero(subpage->bitmaps,			\
++	bitmap_test_range_all_zero(bfs->bitmaps,			\
+ 			blocks_per_folio * btrfs_bitmap_nr_##name,	\
+ 			blocks_per_folio);				\
+ })
+@@ -366,43 +361,43 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ void btrfs_subpage_set_uptodate(const struct btrfs_fs_info *fs_info,
+ 				struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							uptodate, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	if (subpage_test_bitmap_all_set(fs_info, folio, uptodate))
+ 		folio_mark_uptodate(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_clear_uptodate(const struct btrfs_fs_info *fs_info,
+ 				  struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							uptodate, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	folio_clear_uptodate(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info,
+ 			     struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							dirty, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ 	folio_mark_dirty(folio);
+ }
+ 
+@@ -419,17 +414,17 @@ void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info,
+ bool btrfs_subpage_clear_and_test_dirty(const struct btrfs_fs_info *fs_info,
+ 					struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							dirty, start, len);
+ 	unsigned long flags;
+ 	bool last = false;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	if (subpage_test_bitmap_all_zero(fs_info, folio, dirty))
+ 		last = true;
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ 	return last;
+ }
+ 
+@@ -446,91 +441,108 @@ void btrfs_subpage_clear_dirty(const struct btrfs_fs_info *fs_info,
+ void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info,
+ 				 struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							writeback, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++
++	/*
++	 * Don't clear the TOWRITE tag when starting writeback on a still-dirty
++	 * folio. Doing so can cause WB_SYNC_ALL writepages() to overlook it,
++	 * assume writeback is complete, and exit too early — violating sync
++	 * ordering guarantees.
++	 */
+ 	if (!folio_test_writeback(folio))
+-		folio_start_writeback(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++		__folio_start_writeback(folio, true);
++	if (!folio_test_dirty(folio)) {
++		struct address_space *mapping = folio_mapping(folio);
++		XA_STATE(xas, &mapping->i_pages, folio->index);
++		unsigned long flags;
++
++		xas_lock_irqsave(&xas, flags);
++		xas_load(&xas);
++		xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE);
++		xas_unlock_irqrestore(&xas, flags);
++	}
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info,
+ 				   struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							writeback, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	if (subpage_test_bitmap_all_zero(fs_info, folio, writeback)) {
+ 		ASSERT(folio_test_writeback(folio));
+ 		folio_end_writeback(folio);
+ 	}
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info,
+ 			       struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							ordered, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	folio_set_ordered(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_clear_ordered(const struct btrfs_fs_info *fs_info,
+ 				 struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							ordered, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	if (subpage_test_bitmap_all_zero(fs_info, folio, ordered))
+ 		folio_clear_ordered(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_set_checked(const struct btrfs_fs_info *fs_info,
+ 			       struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							checked, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	if (subpage_test_bitmap_all_set(fs_info, folio, checked))
+ 		folio_set_checked(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info,
+ 				 struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage = folio_get_private(folio);
++	struct btrfs_folio_state *bfs = folio_get_private(folio);
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+ 							checked, start, len);
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
++	spin_lock_irqsave(&bfs->lock, flags);
++	bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
+ 	folio_clear_checked(folio);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ /*
+@@ -541,16 +553,16 @@ void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info,
+ bool btrfs_subpage_test_##name(const struct btrfs_fs_info *fs_info,	\
+ 			       struct folio *folio, u64 start, u32 len)	\
+ {									\
+-	struct btrfs_subpage *subpage = folio_get_private(folio);	\
++	struct btrfs_folio_state *bfs = folio_get_private(folio);	\
+ 	unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,	\
+ 						name, start, len);	\
+ 	unsigned long flags;						\
+ 	bool ret;							\
+ 									\
+-	spin_lock_irqsave(&subpage->lock, flags);			\
+-	ret = bitmap_test_range_all_set(subpage->bitmaps, start_bit,	\
++	spin_lock_irqsave(&bfs->lock, flags);			\
++	ret = bitmap_test_range_all_set(bfs->bitmaps, start_bit,	\
+ 				len >> fs_info->sectorsize_bits);	\
+-	spin_unlock_irqrestore(&subpage->lock, flags);			\
++	spin_unlock_irqrestore(&bfs->lock, flags);			\
+ 	return ret;							\
+ }
+ IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(uptodate);
+@@ -662,10 +674,10 @@ IMPLEMENT_BTRFS_PAGE_OPS(checked, folio_set_checked, folio_clear_checked,
+ {									\
+ 	const unsigned int blocks_per_folio =				\
+ 				btrfs_blocks_per_folio(fs_info, folio);	\
+-	const struct btrfs_subpage *subpage = folio_get_private(folio);	\
++	const struct btrfs_folio_state *bfs = folio_get_private(folio);	\
+ 									\
+ 	ASSERT(blocks_per_folio <= BITS_PER_LONG);			\
+-	*dst = bitmap_read(subpage->bitmaps,				\
++	*dst = bitmap_read(bfs->bitmaps,				\
+ 			   blocks_per_folio * btrfs_bitmap_nr_##name,	\
+ 			   blocks_per_folio);				\
+ }
+@@ -690,7 +702,7 @@ IMPLEMENT_BTRFS_PAGE_OPS(checked, folio_set_checked, folio_clear_checked,
+ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info,
+ 				  struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 	unsigned int start_bit;
+ 	unsigned int nbits;
+ 	unsigned long flags;
+@@ -705,15 +717,15 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info,
+ 
+ 	start_bit = subpage_calc_start_bit(fs_info, folio, dirty, start, len);
+ 	nbits = len >> fs_info->sectorsize_bits;
+-	subpage = folio_get_private(folio);
+-	ASSERT(subpage);
+-	spin_lock_irqsave(&subpage->lock, flags);
+-	if (unlikely(!bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits))) {
++	bfs = folio_get_private(folio);
++	ASSERT(bfs);
++	spin_lock_irqsave(&bfs->lock, flags);
++	if (unlikely(!bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits))) {
+ 		SUBPAGE_DUMP_BITMAP(fs_info, folio, dirty, start, len);
+-		ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
++		ASSERT(bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits));
+ 	}
+-	ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	ASSERT(bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits));
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ /*
+@@ -726,7 +738,7 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info,
+ void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info,
+ 			  struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 	unsigned long flags;
+ 	unsigned int start_bit;
+ 	unsigned int nbits;
+@@ -736,19 +748,19 @@ void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info,
+ 	if (unlikely(!fs_info) || !btrfs_is_subpage(fs_info, folio))
+ 		return;
+ 
+-	subpage = folio_get_private(folio);
++	bfs = folio_get_private(folio);
+ 	start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+ 	nbits = len >> fs_info->sectorsize_bits;
+-	spin_lock_irqsave(&subpage->lock, flags);
++	spin_lock_irqsave(&bfs->lock, flags);
+ 	/* Target range should not yet be locked. */
+-	if (unlikely(!bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits))) {
++	if (unlikely(!bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits))) {
+ 		SUBPAGE_DUMP_BITMAP(fs_info, folio, locked, start, len);
+-		ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
++		ASSERT(bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits));
+ 	}
+-	bitmap_set(subpage->bitmaps, start_bit, nbits);
+-	ret = atomic_add_return(nbits, &subpage->nr_locked);
++	bitmap_set(bfs->bitmaps, start_bit, nbits);
++	ret = atomic_add_return(nbits, &bfs->nr_locked);
+ 	ASSERT(ret <= btrfs_blocks_per_folio(fs_info, folio));
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+ 
+ /*
+@@ -776,7 +788,7 @@ bool btrfs_meta_folio_clear_and_test_dirty(struct folio *folio, const struct ext
+ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ 				      struct folio *folio, u64 start, u32 len)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 	const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio);
+ 	unsigned long uptodate_bitmap;
+ 	unsigned long dirty_bitmap;
+@@ -788,18 +800,18 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ 
+ 	ASSERT(folio_test_private(folio) && folio_get_private(folio));
+ 	ASSERT(blocks_per_folio > 1);
+-	subpage = folio_get_private(folio);
++	bfs = folio_get_private(folio);
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
++	spin_lock_irqsave(&bfs->lock, flags);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, uptodate, &uptodate_bitmap);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, dirty, &dirty_bitmap);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, writeback, &writeback_bitmap);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, ordered, &ordered_bitmap);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, checked, &checked_bitmap);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, locked, &locked_bitmap);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ 
+-	dump_page(folio_page(folio, 0), "btrfs subpage dump");
++	dump_page(folio_page(folio, 0), "btrfs folio state dump");
+ 	btrfs_warn(fs_info,
+ "start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl locked=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl",
+ 		    start, len, folio_pos(folio),
+@@ -815,14 +827,14 @@ void btrfs_get_subpage_dirty_bitmap(struct btrfs_fs_info *fs_info,
+ 				    struct folio *folio,
+ 				    unsigned long *ret_bitmap)
+ {
+-	struct btrfs_subpage *subpage;
++	struct btrfs_folio_state *bfs;
+ 	unsigned long flags;
+ 
+ 	ASSERT(folio_test_private(folio) && folio_get_private(folio));
+ 	ASSERT(btrfs_blocks_per_folio(fs_info, folio) > 1);
+-	subpage = folio_get_private(folio);
++	bfs = folio_get_private(folio);
+ 
+-	spin_lock_irqsave(&subpage->lock, flags);
++	spin_lock_irqsave(&bfs->lock, flags);
+ 	GET_SUBPAGE_BITMAP(fs_info, folio, dirty, ret_bitmap);
+-	spin_unlock_irqrestore(&subpage->lock, flags);
++	spin_unlock_irqrestore(&bfs->lock, flags);
+ }
+diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h
+index 3042c5ea840aec..b6e40a678d7387 100644
+--- a/fs/btrfs/subpage.h
++++ b/fs/btrfs/subpage.h
+@@ -32,9 +32,31 @@ struct folio;
+ enum {
+ 	btrfs_bitmap_nr_uptodate = 0,
+ 	btrfs_bitmap_nr_dirty,
++
++	/*
++	 * This can be changed to atomic eventually, but that depends on the
++	 * async delalloc range rework for the locked bitmap, since async
++	 * delalloc can unlock its range and mark blocks writeback at any
++	 * time.
++	 */
+ 	btrfs_bitmap_nr_writeback,
++
++	/*
++	 * The ordered and checked flags are for COW fixup, already marked
++	 * deprecated, and will be removed eventually.
++	 */
+ 	btrfs_bitmap_nr_ordered,
+ 	btrfs_bitmap_nr_checked,
++
++	/*
++	 * The locked bit is for the async delalloc range (compression):
++	 * currently an async extent is queued with its range locked until
++	 * the compression is done, so it can unlock the range at any time.
++	 *
++	 * Deprecating this flag will first need a rework of the async extent
++	 * lifespan (mark writeback and do compression).
++	 */
+ 	btrfs_bitmap_nr_locked,
+ 	btrfs_bitmap_nr_max
+ };
+@@ -43,7 +65,7 @@ enum {
+  * Structure to trace status of each sector inside a page, attached to
+  * page::private for both data and metadata inodes.
+  */
+-struct btrfs_subpage {
++struct btrfs_folio_state {
+ 	/* Common members for both data and metadata pages */
+ 	spinlock_t lock;
+ 	union {
+@@ -51,7 +73,7 @@ struct btrfs_subpage {
+ 		 * Structures only used by metadata
+ 		 *
+ 		 * @eb_refs should only be operated under private_lock, as it
+-		 * manages whether the subpage can be detached.
++		 * manages whether the btrfs_folio_state can be detached.
+ 		 */
+ 		atomic_t eb_refs;
+ 
+@@ -65,7 +87,7 @@ struct btrfs_subpage {
+ 	unsigned long bitmaps[];
+ };
+ 
+-enum btrfs_subpage_type {
++enum btrfs_folio_type {
+ 	BTRFS_SUBPAGE_METADATA,
+ 	BTRFS_SUBPAGE_DATA,
+ };
+@@ -105,15 +127,18 @@ static inline bool btrfs_is_subpage(const struct btrfs_fs_info *fs_info,
+ }
+ #endif
+ 
+-int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info,
+-			 struct folio *folio, enum btrfs_subpage_type type);
+-void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *folio,
+-			  enum btrfs_subpage_type type);
++int btrfs_attach_folio_state(const struct btrfs_fs_info *fs_info,
++			     struct folio *folio, enum btrfs_folio_type type);
++void btrfs_detach_folio_state(const struct btrfs_fs_info *fs_info, struct folio *folio,
++			      enum btrfs_folio_type type);
+ 
+ /* Allocate additional data where page represents more than one sector */
+-struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info,
+-				size_t fsize, enum btrfs_subpage_type type);
+-void btrfs_free_subpage(struct btrfs_subpage *subpage);
++struct btrfs_folio_state *btrfs_alloc_folio_state(const struct btrfs_fs_info *fs_info,
++						  size_t fsize, enum btrfs_folio_type type);
++static inline void btrfs_free_folio_state(struct btrfs_folio_state *bfs)
++{
++	kfree(bfs);
++}
+ 
+ void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio);
+ void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio);
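Note on the bitmap layout used throughout these helpers: each tracked state
(uptodate, dirty, writeback, ordered, checked, locked) owns a contiguous run
of blocks_per_folio bits inside bfs->bitmaps[], starting at
blocks_per_folio * btrfs_bitmap_nr_<name>, which is exactly the offset the
GET_SUBPAGE_BITMAP() macro reads. A minimal userspace sketch of that
addressing scheme (illustrative only, not kernel code):

	#include <stdio.h>

	/* Mirrors the enum above: one bit-run per tracked state. */
	enum { NR_UPTODATE, NR_DIRTY, NR_WRITEBACK, NR_ORDERED,
	       NR_CHECKED, NR_LOCKED, NR_MAX };

	int main(void)
	{
		unsigned int blocks_per_folio = 16;	/* e.g. 64K folio, 4K blocks */

		/* Bit index of the "dirty" state for block 3 of this folio. */
		unsigned int bit = blocks_per_folio * NR_DIRTY + 3;

		printf("dirty bit of block 3 is bitmap bit %u\n", bit);
		return 0;
	}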
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index a0c65adce1abd2..3213815ed76571 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -88,6 +88,9 @@ struct btrfs_fs_context {
+ 	refcount_t refs;
+ };
+ 
++static void btrfs_emit_options(struct btrfs_fs_info *info,
++			       struct btrfs_fs_context *old);
++
+ enum {
+ 	Opt_acl,
+ 	Opt_clear_cache,
+@@ -689,12 +692,9 @@ bool btrfs_check_options(const struct btrfs_fs_info *info,
+ 
+ 	if (!test_bit(BTRFS_FS_STATE_REMOUNTING, &info->fs_state)) {
+ 		if (btrfs_raw_test_opt(*mount_opt, SPACE_CACHE)) {
+-			btrfs_info(info, "disk space caching is enabled");
+ 			btrfs_warn(info,
+ "space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2");
+ 		}
+-		if (btrfs_raw_test_opt(*mount_opt, FREE_SPACE_TREE))
+-			btrfs_info(info, "using free-space-tree");
+ 	}
+ 
+ 	return ret;
+@@ -971,6 +971,8 @@ static int btrfs_fill_super(struct super_block *sb,
+ 		return err;
+ 	}
+ 
++	btrfs_emit_options(fs_info, NULL);
++
+ 	inode = btrfs_iget(BTRFS_FIRST_FREE_OBJECTID, fs_info->fs_root);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+@@ -1428,7 +1430,7 @@ static void btrfs_emit_options(struct btrfs_fs_info *info,
+ {
+ 	btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum");
+ 	btrfs_info_if_set(info, old, DEGRADED, "allowing degraded mounts");
+-	btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum");
++	btrfs_info_if_set(info, old, NODATACOW, "setting nodatacow");
+ 	btrfs_info_if_set(info, old, SSD, "enabling ssd optimizations");
+ 	btrfs_info_if_set(info, old, SSD_SPREAD, "using spread ssd allocation scheme");
+ 	btrfs_info_if_set(info, old, NOBARRIER, "turning off barriers");
+@@ -1450,10 +1452,11 @@ static void btrfs_emit_options(struct btrfs_fs_info *info,
+ 	btrfs_info_if_set(info, old, IGNOREMETACSUMS, "ignoring meta csums");
+ 	btrfs_info_if_set(info, old, IGNORESUPERFLAGS, "ignoring unknown super block flags");
+ 
++	btrfs_info_if_unset(info, old, NODATASUM, "setting datasum");
+ 	btrfs_info_if_unset(info, old, NODATACOW, "setting datacow");
+ 	btrfs_info_if_unset(info, old, SSD, "not using ssd optimizations");
+ 	btrfs_info_if_unset(info, old, SSD_SPREAD, "not using spread ssd allocation scheme");
+-	btrfs_info_if_unset(info, old, NOBARRIER, "turning off barriers");
++	btrfs_info_if_unset(info, old, NOBARRIER, "turning on barriers");
+ 	btrfs_info_if_unset(info, old, NOTREELOG, "enabling tree log");
+ 	btrfs_info_if_unset(info, old, SPACE_CACHE, "disabling disk space caching");
+ 	btrfs_info_if_unset(info, old, FREE_SPACE_TREE, "disabling free space tree");
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7241229d218b3a..afc05e406689ae 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -2747,7 +2747,7 @@ static int walk_log_tree(struct btrfs_trans_handle *trans,
+ 	level = btrfs_header_level(log->node);
+ 	orig_level = level;
+ 	path->nodes[level] = log->node;
+-	atomic_inc(&log->node->refs);
++	refcount_inc(&log->node->refs);
+ 	path->slots[level] = 0;
+ 
+ 	while (1) {
+@@ -3711,7 +3711,7 @@ static int clone_leaf(struct btrfs_path *path, struct btrfs_log_ctx *ctx)
+ 	 * Add extra ref to scratch eb so that it is not freed when callers
+ 	 * release the path, so we can reuse it later if needed.
+ 	 */
+-	atomic_inc(&ctx->scratch_eb->refs);
++	refcount_inc(&ctx->scratch_eb->refs);
+ 
+ 	return 0;
+ }
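Note: these atomic_inc() -> refcount_inc() conversions (here and in zoned.c
below) switch the extent buffer reference count to the refcount_t API, which
saturates instead of wrapping on overflow and WARNs on increment-from-zero,
where a raw atomic_t would silently misbehave. The calling pattern is
unchanged; roughly (sketch; free_the_object() is a hypothetical destructor):

	refcount_t refs;

	refcount_set(&refs, 1);		/* initial owner */
	refcount_inc(&refs);		/* extra ref, e.g. for queued work */
	if (refcount_dec_and_test(&refs))
		free_the_object();	/* last reference dropped */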
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 5439d837471617..af5ba3ad2eb833 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -18,6 +18,7 @@
+ #include "accessors.h"
+ #include "bio.h"
+ #include "transaction.h"
++#include "sysfs.h"
+ 
+ /* Maximum number of zones to report per blkdev_report_zones() call */
+ #define BTRFS_REPORT_NR_ZONES   4096
+@@ -2169,10 +2170,15 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ 		goto out_unlock;
+ 	}
+ 
+-	/* No space left */
+-	if (btrfs_zoned_bg_is_full(block_group)) {
+-		ret = false;
+-		goto out_unlock;
++	if (block_group->flags & BTRFS_BLOCK_GROUP_DATA) {
++		/* The caller should check if the block group is full. */
++		if (WARN_ON_ONCE(btrfs_zoned_bg_is_full(block_group))) {
++			ret = false;
++			goto out_unlock;
++		}
++	} else {
++		/* Since it is already written, it should have been active. */
++		WARN_ON_ONCE(block_group->meta_write_pointer != block_group->start);
+ 	}
+ 
+ 	for (i = 0; i < map->num_stripes; i++) {
+@@ -2486,7 +2492,7 @@ void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
+ 
+ 	/* For the work */
+ 	btrfs_get_block_group(bg);
+-	atomic_inc(&eb->refs);
++	refcount_inc(&eb->refs);
+ 	bg->last_eb = eb;
+ 	INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn);
+ 	queue_work(system_unbound_wq, &bg->zone_finish_work);
+@@ -2505,12 +2511,12 @@ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg)
+ void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info)
+ {
+ 	struct btrfs_space_info *data_sinfo = fs_info->data_sinfo;
+-	struct btrfs_space_info *space_info = data_sinfo->sub_group[0];
++	struct btrfs_space_info *space_info = data_sinfo;
+ 	struct btrfs_trans_handle *trans;
+ 	struct btrfs_block_group *bg;
+ 	struct list_head *bg_list;
+ 	u64 alloc_flags;
+-	bool initial = false;
++	bool first = true;
+ 	bool did_chunk_alloc = false;
+ 	int index;
+ 	int ret;
+@@ -2524,21 +2530,52 @@ void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info)
+ 	if (sb_rdonly(fs_info->sb))
+ 		return;
+ 
+-	ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);
+ 	alloc_flags = btrfs_get_alloc_profile(fs_info, space_info->flags);
+ 	index = btrfs_bg_flags_to_raid_index(alloc_flags);
+ 
+-	bg_list = &data_sinfo->block_groups[index];
++	/* Scan the data space_info to find empty block groups. Take the second one. */
+ again:
++	bg_list = &space_info->block_groups[index];
+ 	list_for_each_entry(bg, bg_list, list) {
+-		if (bg->used > 0)
++		if (bg->alloc_offset != 0)
+ 			continue;
+ 
+-		if (!initial) {
+-			initial = true;
++		if (first) {
++			first = false;
+ 			continue;
+ 		}
+ 
++		if (space_info == data_sinfo) {
++			/* Migrate the block group to the data relocation space_info. */
++			struct btrfs_space_info *reloc_sinfo = data_sinfo->sub_group[0];
++			int factor;
++
++			ASSERT(reloc_sinfo->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);
++			factor = btrfs_bg_type_to_factor(bg->flags);
++
++			down_write(&space_info->groups_sem);
++			list_del_init(&bg->list);
++			/* We can assume this as we choose the second empty one. */
++			ASSERT(!list_empty(&space_info->block_groups[index]));
++			up_write(&space_info->groups_sem);
++
++			spin_lock(&space_info->lock);
++			space_info->total_bytes -= bg->length;
++			space_info->disk_total -= bg->length * factor;
++			/* No allocation has ever happened here. */
++			ASSERT(bg->used == 0);
++			ASSERT(bg->zone_unusable == 0);
++			/* No super block in a block group on the zoned setup. */
++			ASSERT(bg->bytes_super == 0);
++			spin_unlock(&space_info->lock);
++
++			bg->space_info = reloc_sinfo;
++			if (reloc_sinfo->block_group_kobjs[index] == NULL)
++				btrfs_sysfs_add_block_group_type(bg);
++
++			btrfs_add_bg_to_space_info(fs_info, bg);
++		}
++
+ 		fs_info->data_reloc_bg = bg->start;
+ 		set_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &bg->runtime_flags);
+ 		btrfs_zone_activate(bg);
+@@ -2553,11 +2590,18 @@ void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info)
+ 	if (IS_ERR(trans))
+ 		return;
+ 
++	/* Allocate new BG in the data relocation space_info. */
++	space_info = data_sinfo->sub_group[0];
++	ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);
+ 	ret = btrfs_chunk_alloc(trans, space_info, alloc_flags, CHUNK_ALLOC_FORCE);
+ 	btrfs_end_transaction(trans);
+ 	if (ret == 1) {
++		/*
++		 * We allocated a new block group in the data relocation space_info. We
++		 * can take that one.
++		 */
++		first = false;
+ 		did_chunk_alloc = true;
+-		bg_list = &space_info->block_groups[index];
+ 		goto again;
+ 	}
+ }
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 8cf4a1dc481eb1..eb6d85edc37a12 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -157,8 +157,8 @@ static void __end_buffer_read_notouch(struct buffer_head *bh, int uptodate)
+  */
+ void end_buffer_read_sync(struct buffer_head *bh, int uptodate)
+ {
+-	__end_buffer_read_notouch(bh, uptodate);
+ 	put_bh(bh);
++	__end_buffer_read_notouch(bh, uptodate);
+ }
+ EXPORT_SYMBOL(end_buffer_read_sync);
+ 
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 30c4944e18622d..644e90ee865458 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -183,6 +183,9 @@ static int debugfs_reconfigure(struct fs_context *fc)
+ 	struct debugfs_fs_info *sb_opts = sb->s_fs_info;
+ 	struct debugfs_fs_info *new_opts = fc->s_fs_info;
+ 
++	if (!new_opts)
++		return 0;
++
+ 	sync_filesystem(sb);
+ 
+ 	/* structure copy of new mount options to sb */
+@@ -282,10 +285,16 @@ static int debugfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ static int debugfs_get_tree(struct fs_context *fc)
+ {
++	int err;
++
+ 	if (!(debugfs_allow & DEBUGFS_ALLOW_API))
+ 		return -EPERM;
+ 
+-	return get_tree_single(fc, debugfs_fill_super);
++	err = get_tree_single(fc, debugfs_fill_super);
++	if (err)
++		return err;
++
++	return debugfs_reconfigure(fc);
+ }
+ 
+ static void debugfs_free_fc(struct fs_context *fc)
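Note: the debugfs change accounts for get_tree_single() reusing an existing
superblock, in which case debugfs_fill_super() never runs and freshly parsed
mount options would otherwise be dropped; calling debugfs_reconfigure()
afterwards applies them, and the new NULL check makes it a no-op when there
are no options to apply. The resulting shape (sketch):

	err = get_tree_single(fc, debugfs_fill_super);	/* may reuse an sb */
	if (err)
		return err;
	return debugfs_reconfigure(fc);	/* push options onto the live sb */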
+diff --git a/fs/erofs/Kconfig b/fs/erofs/Kconfig
+index 6beeb7063871cc..d81f3318417dff 100644
+--- a/fs/erofs/Kconfig
++++ b/fs/erofs/Kconfig
+@@ -3,8 +3,18 @@
+ config EROFS_FS
+ 	tristate "EROFS filesystem support"
+ 	depends on BLOCK
++	select CACHEFILES if EROFS_FS_ONDEMAND
+ 	select CRC32
++	select CRYPTO if EROFS_FS_ZIP_ACCEL
++	select CRYPTO_DEFLATE if EROFS_FS_ZIP_ACCEL
+ 	select FS_IOMAP
++	select LZ4_DECOMPRESS if EROFS_FS_ZIP
++	select NETFS_SUPPORT if EROFS_FS_ONDEMAND
++	select XXHASH if EROFS_FS_XATTR
++	select XZ_DEC if EROFS_FS_ZIP_LZMA
++	select XZ_DEC_MICROLZMA if EROFS_FS_ZIP_LZMA
++	select ZLIB_INFLATE if EROFS_FS_ZIP_DEFLATE
++	select ZSTD_DECOMPRESS if EROFS_FS_ZIP_ZSTD
+ 	help
+ 	  EROFS (Enhanced Read-Only File System) is a lightweight read-only
+ 	  file system with modern designs (e.g. no buffer heads, inline
+@@ -38,7 +48,6 @@ config EROFS_FS_DEBUG
+ config EROFS_FS_XATTR
+ 	bool "EROFS extended attributes"
+ 	depends on EROFS_FS
+-	select XXHASH
+ 	default y
+ 	help
+ 	  Extended attributes are name:value pairs associated with inodes by
+@@ -94,7 +103,6 @@ config EROFS_FS_BACKED_BY_FILE
+ config EROFS_FS_ZIP
+ 	bool "EROFS Data Compression Support"
+ 	depends on EROFS_FS
+-	select LZ4_DECOMPRESS
+ 	default y
+ 	help
+ 	  Enable transparent compression support for EROFS file systems.
+@@ -104,8 +112,6 @@ config EROFS_FS_ZIP
+ config EROFS_FS_ZIP_LZMA
+ 	bool "EROFS LZMA compressed data support"
+ 	depends on EROFS_FS_ZIP
+-	select XZ_DEC
+-	select XZ_DEC_MICROLZMA
+ 	help
+ 	  Saying Y here includes support for reading EROFS file systems
+ 	  containing LZMA compressed data, specifically called microLZMA. It
+@@ -117,7 +123,6 @@ config EROFS_FS_ZIP_LZMA
+ config EROFS_FS_ZIP_DEFLATE
+ 	bool "EROFS DEFLATE compressed data support"
+ 	depends on EROFS_FS_ZIP
+-	select ZLIB_INFLATE
+ 	help
+ 	  Saying Y here includes support for reading EROFS file systems
+ 	  containing DEFLATE compressed data.  It gives better compression
+@@ -132,7 +137,6 @@ config EROFS_FS_ZIP_DEFLATE
+ config EROFS_FS_ZIP_ZSTD
+ 	bool "EROFS Zstandard compressed data support"
+ 	depends on EROFS_FS_ZIP
+-	select ZSTD_DECOMPRESS
+ 	help
+ 	  Saying Y here includes support for reading EROFS file systems
+ 	  containing Zstandard compressed data.  It gives better compression
+@@ -161,9 +165,7 @@ config EROFS_FS_ZIP_ACCEL
+ config EROFS_FS_ONDEMAND
+ 	bool "EROFS fscache-based on-demand read support (deprecated)"
+ 	depends on EROFS_FS
+-	select NETFS_SUPPORT
+ 	select FSCACHE
+-	select CACHEFILES
+ 	select CACHEFILES_ONDEMAND
+ 	help
+ 	  This permits EROFS to use fscache-backed data blobs with on-demand
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index 383c6edea6dd31..91185c40f755a5 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -393,6 +393,14 @@ static unsigned int ext4_getfsmap_find_sb(struct super_block *sb,
+ 	/* Reserved GDT blocks */
+ 	if (!ext4_has_feature_meta_bg(sb) || metagroup < first_meta_bg) {
+ 		len = le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks);
++
++		/*
++		 * mkfs.ext4 can set s_reserved_gdt_blocks to 0 in some cases;
++		 * check for that.
++		 */
++		if (!len)
++			return 0;
++
+ 		error = ext4_getfsmap_fill(meta_list, fsb, len,
+ 					   EXT4_FMR_OWN_RESV_GDT);
+ 		if (error)
+@@ -526,6 +534,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ 	ext4_group_t end_ag;
+ 	ext4_grpblk_t first_cluster;
+ 	ext4_grpblk_t last_cluster;
++	struct ext4_fsmap irec;
+ 	int error = 0;
+ 
+ 	bofs = le32_to_cpu(sbi->s_es->s_first_data_block);
+@@ -609,10 +618,18 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ 			goto err;
+ 	}
+ 
+-	/* Report any gaps at the end of the bg */
++	/*
++	 * The dummy record below will cause ext4_getfsmap_helper() to report
++	 * any allocated blocks at the end of the range.
++	 */
++	irec.fmr_device = 0;
++	irec.fmr_physical = end_fsb + 1;
++	irec.fmr_length = 0;
++	irec.fmr_owner = EXT4_FMR_OWN_FREE;
++	irec.fmr_flags = 0;
++
+ 	info->gfi_last = true;
+-	error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
+-					     0, info);
++	error = ext4_getfsmap_helper(sb, info, &irec);
+ 	if (error)
+ 		goto err;
+ 
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 7de327fa7b1c51..d45124318200d8 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ 	int indirect_blks;
+ 	int blocks_to_boundary = 0;
+ 	int depth;
+-	int count = 0;
++	u64 count = 0;
+ 	ext4_fsblk_t first_block = 0;
+ 
+ 	trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
+@@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ 		count++;
+ 		/* Fill in size of a hole we found */
+ 		map->m_pblk = 0;
+-		map->m_len = min_t(unsigned int, map->m_len, count);
++		map->m_len = umin(map->m_len, count);
+ 		goto cleanup;
+ 	}
+ 
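Note: the min_t(unsigned int, ...) -> umin() switch matters because count is
now u64. min_t() casts both operands to the named type before comparing, so
a count above UINT_MAX would be truncated first; umin() compares at the wider
type and only the final result is assigned. A standalone illustration with
hypothetical values (sketch):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t count = 0x100000001ULL;	/* > UINT32_MAX */
		uint32_t m_len = 2048;

		/* min_t(unsigned int, ...) style: truncate, then compare. */
		uint32_t bad = (uint32_t)count < m_len ? (uint32_t)count : m_len;

		/* umin() style: compare wide, then assign. */
		uint64_t good = count < m_len ? count : m_len;

		printf("truncate-first: %u, compare-wide: %llu\n",
		       bad, (unsigned long long)good);	/* 1 vs 2048 */
		return 0;
	}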
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 3df0796a30104e..c0a85b7548535a 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -146,7 +146,7 @@ static inline int ext4_begin_ordered_truncate(struct inode *inode,
+  */
+ int ext4_inode_is_fast_symlink(struct inode *inode)
+ {
+-	if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
++	if (!ext4_has_feature_ea_inode(inode->i_sb)) {
+ 		int ea_blocks = EXT4_I(inode)->i_file_acl ?
+ 				EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0;
+ 
+diff --git a/fs/ext4/orphan.c b/fs/ext4/orphan.c
+index 7c7f792ad6aba9..524d4658fa408d 100644
+--- a/fs/ext4/orphan.c
++++ b/fs/ext4/orphan.c
+@@ -589,8 +589,9 @@ int ext4_init_orphan_info(struct super_block *sb)
+ 	}
+ 	oi->of_blocks = inode->i_size >> sb->s_blocksize_bits;
+ 	oi->of_csum_seed = EXT4_I(inode)->i_csum_seed;
+-	oi->of_binfo = kmalloc(oi->of_blocks*sizeof(struct ext4_orphan_block),
+-			       GFP_KERNEL);
++	oi->of_binfo = kmalloc_array(oi->of_blocks,
++				     sizeof(struct ext4_orphan_block),
++				     GFP_KERNEL);
+ 	if (!oi->of_binfo) {
+ 		ret = -ENOMEM;
+ 		goto out_put;
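Note: kmalloc_array(n, size, flags) is the overflow-safe spelling of
kmalloc(n * size, flags): if the multiplication would overflow, it returns
NULL instead of quietly allocating a short buffer that gets overrun later.
The guard it adds is essentially (userspace sketch):

	#include <stdlib.h>
	#include <stdint.h>

	static void *alloc_array(size_t n, size_t size)
	{
		if (size != 0 && n > SIZE_MAX / size)
			return NULL;		/* n * size would overflow */
		return malloc(n * size);	/* kmalloc(n * size, flags) in-kernel */
	}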
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index c7d39da7e733b1..8f460663d6c457 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1998,6 +1998,9 @@ int ext4_init_fs_context(struct fs_context *fc)
+ 	fc->fs_private = ctx;
+ 	fc->ops = &ext4_context_ops;
+ 
++	/* i_version is always enabled now */
++	fc->sb_flags |= SB_I_VERSION;
++
+ 	return 0;
+ }
+ 
+@@ -5314,9 +5317,6 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 	sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
+ 		(test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
+ 
+-	/* i_version is always enabled now */
+-	sb->s_flags |= SB_I_VERSION;
+-
+ 	/* HSM events are allowed by default. */
+ 	sb->s_iflags |= SB_I_ALLOW_HSM;
+ 
+@@ -5414,6 +5414,8 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 		err = ext4_load_and_init_journal(sb, es, ctx);
+ 		if (err)
+ 			goto failed_mount3a;
++		if (bdev_read_only(sb->s_bdev))
++			needs_recovery = 0;
+ 	} else if (test_opt(sb, NOLOAD) && !sb_rdonly(sb) &&
+ 		   ext4_has_feature_journal_needs_recovery(sb)) {
+ 		ext4_msg(sb, KERN_ERR, "required journal recovery "
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 2fd287f2bca4ba..e8d1abbb8052e6 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -816,6 +816,16 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
+ 	for (i = 1; i <= level; i++) {
+ 		bool done = false;
+ 
++		if (nids[i] && nids[i] == dn->inode->i_ino) {
++			err = -EFSCORRUPTED;
++			f2fs_err_ratelimited(sbi,
++				"inode mapping table is corrupted, run fsck to fix it, "
++				"ino:%lu, nid:%u, level:%d, offset:%d",
++				dn->inode->i_ino, nids[i], level, offset[level]);
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++			goto release_pages;
++		}
++
+ 		if (!nids[i] && mode == ALLOC_NODE) {
+ 			/* alloc new node */
+ 			if (!f2fs_alloc_nid(sbi, &(nids[i]))) {
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index 66ff60591d17b2..e21ec857f2abcf 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -404,7 +404,7 @@ static long do_handle_open(int mountdirfd, struct file_handle __user *ufh,
+ 	if (retval)
+ 		return retval;
+ 
+-	CLASS(get_unused_fd, fd)(O_CLOEXEC);
++	CLASS(get_unused_fd, fd)(open_flag);
+ 	if (fd < 0)
+ 		return fd;
+ 
+diff --git a/fs/internal.h b/fs/internal.h
+index 393f6c5c24f6b8..22ba066d1dbaf7 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -322,12 +322,15 @@ struct mnt_idmap *alloc_mnt_idmap(struct user_namespace *mnt_userns);
+ struct mnt_idmap *mnt_idmap_get(struct mnt_idmap *idmap);
+ void mnt_idmap_put(struct mnt_idmap *idmap);
+ struct stashed_operations {
++	struct dentry *(*stash_dentry)(struct dentry **stashed,
++				       struct dentry *dentry);
+ 	void (*put_data)(void *data);
+ 	int (*init_inode)(struct inode *inode, void *data);
+ };
+ int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data,
+ 		      struct path *path);
+ void stashed_dentry_prune(struct dentry *dentry);
++struct dentry *stash_dentry(struct dentry **stashed, struct dentry *dentry);
+ struct dentry *stashed_dentry_get(struct dentry **stashed);
+ /**
+  * path_mounted - check whether path is mounted
+diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
+index 844261a31156c4..b875165b7c27d9 100644
+--- a/fs/iomap/direct-io.c
++++ b/fs/iomap/direct-io.c
+@@ -368,14 +368,14 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio)
+ 		if (iomap->flags & IOMAP_F_SHARED)
+ 			dio->flags |= IOMAP_DIO_COW;
+ 
+-		if (iomap->flags & IOMAP_F_NEW) {
++		if (iomap->flags & IOMAP_F_NEW)
+ 			need_zeroout = true;
+-		} else if (iomap->type == IOMAP_MAPPED) {
+-			if (iomap_dio_can_use_fua(iomap, dio))
+-				bio_opf |= REQ_FUA;
+-			else
+-				dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;
+-		}
++		else if (iomap->type == IOMAP_MAPPED &&
++			 iomap_dio_can_use_fua(iomap, dio))
++			bio_opf |= REQ_FUA;
++
++		if (!(bio_opf & REQ_FUA))
++			dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;
+ 
+ 		/*
+ 		 * We can only do deferred completion for pure overwrites that
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index b3971e91e8eb80..38861ca04899f0 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -285,6 +285,7 @@ int jbd2_log_do_checkpoint(journal_t *journal)
+ 		retry:
+ 			if (batch_count)
+ 				__flush_batch(journal, &batch_count);
++			cond_resched();
+ 			spin_lock(&journal->j_list_lock);
+ 			goto restart;
+ 	}
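Note: the added cond_resched() gives other runnable tasks a chance to execute
between checkpoint batches; without it, a long journal checkpoint on a
non-preemptible kernel can hog the CPU and trip soft-lockup warnings. This is
the usual pattern for long loops that cycle a spinlock (sketch; more_work(),
grab_next_item() and process_item() are hypothetical):

	spin_lock(&lock);
	while (more_work()) {
		item = grab_next_item();
		spin_unlock(&lock);
		process_item(item);	/* possibly slow */
		cond_resched();		/* safe: no locks held here */
		spin_lock(&lock);
	}
	spin_unlock(&lock);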
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 972b95cc743357..5b936ee71892a9 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -2126,6 +2126,8 @@ struct dentry *stashed_dentry_get(struct dentry **stashed)
+ 	dentry = rcu_dereference(*stashed);
+ 	if (!dentry)
+ 		return NULL;
++	if (IS_ERR(dentry))
++		return dentry;
+ 	if (!lockref_get_not_dead(&dentry->d_lockref))
+ 		return NULL;
+ 	return dentry;
+@@ -2174,8 +2176,7 @@ static struct dentry *prepare_anon_dentry(struct dentry **stashed,
+ 	return dentry;
+ }
+ 
+-static struct dentry *stash_dentry(struct dentry **stashed,
+-				   struct dentry *dentry)
++struct dentry *stash_dentry(struct dentry **stashed, struct dentry *dentry)
+ {
+ 	guard(rcu)();
+ 	for (;;) {
+@@ -2216,12 +2217,15 @@ static struct dentry *stash_dentry(struct dentry **stashed,
+ int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data,
+ 		      struct path *path)
+ {
+-	struct dentry *dentry;
++	struct dentry *dentry, *res;
+ 	const struct stashed_operations *sops = mnt->mnt_sb->s_fs_info;
+ 
+ 	/* See if dentry can be reused. */
+-	path->dentry = stashed_dentry_get(stashed);
+-	if (path->dentry) {
++	res = stashed_dentry_get(stashed);
++	if (IS_ERR(res))
++		return PTR_ERR(res);
++	if (res) {
++		path->dentry = res;
+ 		sops->put_data(data);
+ 		goto out_path;
+ 	}
+@@ -2232,8 +2236,17 @@ int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data,
+ 		return PTR_ERR(dentry);
+ 
+ 	/* Added a new dentry. @data is now owned by the filesystem. */
+-	path->dentry = stash_dentry(stashed, dentry);
+-	if (path->dentry != dentry)
++	if (sops->stash_dentry)
++		res = sops->stash_dentry(stashed, dentry);
++	else
++		res = stash_dentry(stashed, dentry);
++	if (IS_ERR(res)) {
++		dput(dentry);
++		return PTR_ERR(res);
++	}
++	path->dentry = res;
++	/* A dentry was reused. */
++	if (res != dentry)
+ 		dput(dentry);
+ 
+ out_path:
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 54c59e091919bc..6b038bf74a3df2 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2925,6 +2925,19 @@ static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp)
+ 	return attach_recursive_mnt(mnt, p, mp, 0);
+ }
+ 
++static int may_change_propagation(const struct mount *m)
++{
++	struct mnt_namespace *ns = m->mnt_ns;
++
++	// it must be mounted in some namespace
++	if (IS_ERR_OR_NULL(ns))		// is_mounted()
++		return -EINVAL;
++	// and the caller must be admin in userns of that namespace
++	if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
++		return -EPERM;
++	return 0;
++}
++
+ /*
+  * Sanity check the flags to change_mnt_propagation.
+  */
+@@ -2961,10 +2974,10 @@ static int do_change_type(struct path *path, int ms_flags)
+ 		return -EINVAL;
+ 
+ 	namespace_lock();
+-	if (!check_mnt(mnt)) {
+-		err = -EINVAL;
++	err = may_change_propagation(mnt);
++	if (err)
+ 		goto out_unlock;
+-	}
++
+ 	if (type == MS_SHARED) {
+ 		err = invent_group_ids(mnt, recurse);
+ 		if (err)
+@@ -3419,18 +3432,11 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+ 
+ 	namespace_lock();
+ 
+-	err = -EINVAL;
+-	/* To and From must be mounted */
+-	if (!is_mounted(&from->mnt))
+-		goto out;
+-	if (!is_mounted(&to->mnt))
+-		goto out;
+-
+-	err = -EPERM;
+-	/* We should be allowed to modify mount namespaces of both mounts */
+-	if (!ns_capable(from->mnt_ns->user_ns, CAP_SYS_ADMIN))
++	err = may_change_propagation(from);
++	if (err)
+ 		goto out;
+-	if (!ns_capable(to->mnt_ns->user_ns, CAP_SYS_ADMIN))
++	err = may_change_propagation(to);
++	if (err)
+ 		goto out;
+ 
+ 	err = -EINVAL;
+@@ -4657,20 +4663,10 @@ SYSCALL_DEFINE5(move_mount,
+ 	if (flags & MOVE_MOUNT_SET_GROUP)	mflags |= MNT_TREE_PROPAGATION;
+ 	if (flags & MOVE_MOUNT_BENEATH)		mflags |= MNT_TREE_BENEATH;
+ 
+-	lflags = 0;
+-	if (flags & MOVE_MOUNT_F_SYMLINKS)	lflags |= LOOKUP_FOLLOW;
+-	if (flags & MOVE_MOUNT_F_AUTOMOUNTS)	lflags |= LOOKUP_AUTOMOUNT;
+ 	uflags = 0;
+-	if (flags & MOVE_MOUNT_F_EMPTY_PATH)	uflags = AT_EMPTY_PATH;
+-	from_name = getname_maybe_null(from_pathname, uflags);
+-	if (IS_ERR(from_name))
+-		return PTR_ERR(from_name);
++	if (flags & MOVE_MOUNT_T_EMPTY_PATH)
++		uflags = AT_EMPTY_PATH;
+ 
+-	lflags = 0;
+-	if (flags & MOVE_MOUNT_T_SYMLINKS)	lflags |= LOOKUP_FOLLOW;
+-	if (flags & MOVE_MOUNT_T_AUTOMOUNTS)	lflags |= LOOKUP_AUTOMOUNT;
+-	uflags = 0;
+-	if (flags & MOVE_MOUNT_T_EMPTY_PATH)	uflags = AT_EMPTY_PATH;
+ 	to_name = getname_maybe_null(to_pathname, uflags);
+ 	if (IS_ERR(to_name))
+ 		return PTR_ERR(to_name);
+@@ -4683,11 +4679,24 @@ SYSCALL_DEFINE5(move_mount,
+ 		to_path = fd_file(f_to)->f_path;
+ 		path_get(&to_path);
+ 	} else {
++		lflags = 0;
++		if (flags & MOVE_MOUNT_T_SYMLINKS)
++			lflags |= LOOKUP_FOLLOW;
++		if (flags & MOVE_MOUNT_T_AUTOMOUNTS)
++			lflags |= LOOKUP_AUTOMOUNT;
+ 		ret = filename_lookup(to_dfd, to_name, lflags, &to_path, NULL);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
++	uflags = 0;
++	if (flags & MOVE_MOUNT_F_EMPTY_PATH)
++		uflags = AT_EMPTY_PATH;
++
++	from_name = getname_maybe_null(from_pathname, uflags);
++	if (IS_ERR(from_name))
++		return PTR_ERR(from_name);
++
+ 	if (!from_name && from_dfd >= 0) {
+ 		CLASS(fd_raw, f_from)(from_dfd);
+ 		if (fd_empty(f_from))
+@@ -4696,6 +4705,11 @@ SYSCALL_DEFINE5(move_mount,
+ 		return vfs_move_mount(&fd_file(f_from)->f_path, &to_path, mflags);
+ 	}
+ 
++	lflags = 0;
++	if (flags & MOVE_MOUNT_F_SYMLINKS)
++		lflags |= LOOKUP_FOLLOW;
++	if (flags & MOVE_MOUNT_F_AUTOMOUNTS)
++		lflags |= LOOKUP_AUTOMOUNT;
+ 	ret = filename_lookup(from_dfd, from_name, lflags, &from_path, NULL);
+ 	if (ret)
+ 		return ret;
+@@ -5302,7 +5316,8 @@ SYSCALL_DEFINE5(open_tree_attr, int, dfd, const char __user *, filename,
+ 		int ret;
+ 		struct mount_kattr kattr = {};
+ 
+-		kattr.kflags = MOUNT_KATTR_IDMAP_REPLACE;
++		if (flags & OPEN_TREE_CLONE)
++			kattr.kflags = MOUNT_KATTR_IDMAP_REPLACE;
+ 		if (flags & AT_RECURSIVE)
+ 			kattr.kflags |= MOUNT_KATTR_RECURSE;
+ 
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index 3e804da1e1eb10..a95e7aadafd072 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -281,8 +281,10 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 		} else if (test_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags)) {
+ 			notes |= MADE_PROGRESS;
+ 		} else {
+-			if (!stream->failed)
++			if (!stream->failed) {
+ 				stream->transferred += transferred;
++				stream->transferred_valid = true;
++			}
+ 			if (front->transferred < front->len)
+ 				set_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags);
+ 			notes |= MADE_PROGRESS;
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 0f3a36852a4dc1..cbf3d9194c7bf6 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -254,6 +254,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ 			if (front->start + front->transferred > stream->collected_to) {
+ 				stream->collected_to = front->start + front->transferred;
+ 				stream->transferred = stream->collected_to - wreq->start;
++				stream->transferred_valid = true;
+ 				notes |= MADE_PROGRESS;
+ 			}
+ 			if (test_bit(NETFS_SREQ_FAILED, &front->flags)) {
+@@ -356,6 +357,7 @@ bool netfs_write_collection(struct netfs_io_request *wreq)
+ {
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+ 	size_t transferred;
++	bool transferred_valid = false;
+ 	int s;
+ 
+ 	_enter("R=%x", wreq->debug_id);
+@@ -376,12 +378,16 @@ bool netfs_write_collection(struct netfs_io_request *wreq)
+ 			continue;
+ 		if (!list_empty(&stream->subrequests))
+ 			return false;
+-		if (stream->transferred < transferred)
++		if (stream->transferred_valid &&
++		    stream->transferred < transferred) {
+ 			transferred = stream->transferred;
++			transferred_valid = true;
++		}
+ 	}
+ 
+ 	/* Okay, declare that all I/O is complete. */
+-	wreq->transferred = transferred;
++	if (transferred_valid)
++		wreq->transferred = transferred;
+ 	trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
+ 
+ 	if (wreq->io_streams[1].active &&
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 50bee2c4130d1e..0584cba1a04392 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -118,12 +118,12 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 	wreq->io_streams[0].prepare_write	= ictx->ops->prepare_write;
+ 	wreq->io_streams[0].issue_write		= ictx->ops->issue_write;
+ 	wreq->io_streams[0].collected_to	= start;
+-	wreq->io_streams[0].transferred		= LONG_MAX;
++	wreq->io_streams[0].transferred		= 0;
+ 
+ 	wreq->io_streams[1].stream_nr		= 1;
+ 	wreq->io_streams[1].source		= NETFS_WRITE_TO_CACHE;
+ 	wreq->io_streams[1].collected_to	= start;
+-	wreq->io_streams[1].transferred		= LONG_MAX;
++	wreq->io_streams[1].transferred		= 0;
+ 	if (fscache_resources_valid(&wreq->cache_resources)) {
+ 		wreq->io_streams[1].avail	= true;
+ 		wreq->io_streams[1].active	= true;
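Note: the three netfs hunks above replace a sentinel with an explicit
validity flag. Previously each stream's transferred count was primed with
LONG_MAX to mean "not set yet" and the collector took the minimum across
streams, so a stream that never reported could leak the sentinel into
wreq->transferred. Tracking a minimum without a magic value looks like this
(sketch):

	#include <stdbool.h>
	#include <stddef.h>

	struct min_tracker {
		size_t min;
		bool   valid;	/* false until the first real sample */
	};

	static void track(struct min_tracker *t, size_t value)
	{
		if (!t->valid || value < t->min) {
			t->min = value;
			t->valid = true;
		}
	}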
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 11968dcb724317..6e69ce43a13ff7 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -253,13 +253,14 @@ nfs_page_group_unlock(struct nfs_page *req)
+ 	nfs_page_clear_headlock(req);
+ }
+ 
+-/*
+- * nfs_page_group_sync_on_bit_locked
++/**
++ * nfs_page_group_sync_on_bit_locked - Test if all requests have @bit set
++ * @req: request in page group
++ * @bit: PG_* bit that is used to sync page group
+  *
+  * must be called with page group lock held
+  */
+-static bool
+-nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit)
++bool nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit)
+ {
+ 	struct nfs_page *head = req->wb_head;
+ 	struct nfs_page *tmp;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 374fc6b34c7954..ff29335ed85999 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -153,20 +153,10 @@ nfs_page_set_inode_ref(struct nfs_page *req, struct inode *inode)
+ 	}
+ }
+ 
+-static int
+-nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)
++static void nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)
+ {
+-	int ret;
+-
+-	if (!test_bit(PG_REMOVE, &req->wb_flags))
+-		return 0;
+-	ret = nfs_page_group_lock(req);
+-	if (ret)
+-		return ret;
+ 	if (test_and_clear_bit(PG_REMOVE, &req->wb_flags))
+ 		nfs_page_set_inode_ref(req, inode);
+-	nfs_page_group_unlock(req);
+-	return 0;
+ }
+ 
+ /**
+@@ -585,19 +575,18 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ 		}
+ 	}
+ 
++	ret = nfs_page_group_lock(head);
++	if (ret < 0)
++		goto out_unlock;
++
+ 	/* Ensure that nobody removed the request before we locked it */
+ 	if (head != folio->private) {
++		nfs_page_group_unlock(head);
+ 		nfs_unlock_and_release_request(head);
+ 		goto retry;
+ 	}
+ 
+-	ret = nfs_cancel_remove_inode(head, inode);
+-	if (ret < 0)
+-		goto out_unlock;
+-
+-	ret = nfs_page_group_lock(head);
+-	if (ret < 0)
+-		goto out_unlock;
++	nfs_cancel_remove_inode(head, inode);
+ 
+ 	/* lock each request in the page group */
+ 	for (subreq = head->wb_this_page;
+@@ -786,7 +775,8 @@ static void nfs_inode_remove_request(struct nfs_page *req)
+ {
+ 	struct nfs_inode *nfsi = NFS_I(nfs_page_to_inode(req));
+ 
+-	if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) {
++	nfs_page_group_lock(req);
++	if (nfs_page_group_sync_on_bit_locked(req, PG_REMOVE)) {
+ 		struct folio *folio = nfs_page_to_folio(req->wb_head);
+ 		struct address_space *mapping = folio->mapping;
+ 
+@@ -798,6 +788,7 @@ static void nfs_inode_remove_request(struct nfs_page *req)
+ 		}
+ 		spin_unlock(&mapping->i_private_lock);
+ 	}
++	nfs_page_group_unlock(req);
+ 
+ 	if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) {
+ 		atomic_long_dec(&nfsi->nrequests);
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index d7310fcf38881e..c2263148ff20aa 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -779,7 +779,7 @@ static int ovl_copy_up_workdir(struct ovl_copy_up_ctx *c)
+ 		return err;
+ 
+ 	ovl_start_write(c->dentry);
+-	inode_lock(wdir);
++	inode_lock_nested(wdir, I_MUTEX_PARENT);
+ 	temp = ovl_create_temp(ofs, c->workdir, &cattr);
+ 	inode_unlock(wdir);
+ 	ovl_end_write(c->dentry);
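Note: inode_lock_nested(wdir, I_MUTEX_PARENT) takes the same lock as
inode_lock(wdir); the subclass only tells lockdep that wdir is acquired in
its role as a parent directory, so holding it together with a child's inode
lock is not reported as a deadlock. The standard parent/child idiom (sketch):

	inode_lock_nested(dir, I_MUTEX_PARENT);		/* parent first ... */
	inode_lock_nested(child, I_MUTEX_CHILD);	/* ... then child */
	/* ... create/unlink/rename under both locks ... */
	inode_unlock(child);
	inode_unlock(dir);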
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 0102ab3aaec162..bdc3c2e9334ad5 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -212,8 +212,8 @@ static int proc_maps_open(struct inode *inode, struct file *file,
+ 
+ 	priv->inode = inode;
+ 	priv->mm = proc_mem_open(inode, PTRACE_MODE_READ);
+-	if (IS_ERR_OR_NULL(priv->mm)) {
+-		int err = priv->mm ? PTR_ERR(priv->mm) : -ESRCH;
++	if (IS_ERR(priv->mm)) {
++		int err = PTR_ERR(priv->mm);
+ 
+ 		seq_release_private(inode, file);
+ 		return err;
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 4bb065a6fbaa7d..d3e09b10dea476 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4496,7 +4496,7 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
+ 	for (int i = 1; i < num_rqst; i++) {
+ 		struct smb_rqst *old = &old_rq[i - 1];
+ 		struct smb_rqst *new = &new_rq[i];
+-		struct folio_queue *buffer;
++		struct folio_queue *buffer = NULL;
+ 		size_t size = iov_iter_count(&old->rq_iter);
+ 
+ 		orig_len += smb_rqst_len(server, old);
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index 3f04a2977ba86c..67c4f73398dfee 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -504,7 +504,8 @@ void ksmbd_conn_transport_destroy(void)
+ {
+ 	mutex_lock(&init_lock);
+ 	ksmbd_tcp_destroy();
+-	ksmbd_rdma_destroy();
++	ksmbd_rdma_stop_listening();
+ 	stop_sessions();
++	ksmbd_rdma_destroy();
+ 	mutex_unlock(&init_lock);
+ }
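Note: splitting the RDMA teardown lets the shutdown path stop accepting new
connections first, drain the existing sessions, and only then free the
shared workqueue, so in-flight sessions can no longer touch freed state.
The resulting ordering (sketch):

	ksmbd_tcp_destroy();
	ksmbd_rdma_stop_listening();	/* no new connections ... */
	stop_sessions();		/* ... drain the existing ones ... */
	ksmbd_rdma_destroy();		/* ... then free smb_direct_wq */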
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 31dd1caac1e8a8..2aa8084bb59302 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -46,7 +46,12 @@ struct ksmbd_conn {
+ 	struct mutex			srv_mutex;
+ 	int				status;
+ 	unsigned int			cli_cap;
+-	__be32				inet_addr;
++	union {
++		__be32			inet_addr;
++#if IS_ENABLED(CONFIG_IPV6)
++		u8			inet6_addr[16];
++#endif
++	};
+ 	char				*request_buf;
+ 	struct ksmbd_transport		*transport;
+ 	struct nls_table		*local_nls;
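Note: making the peer address a union keeps struct ksmbd_conn compact: an
IPv4 peer uses the 4-byte inet_addr member, an IPv6 peer the overlapping
16-byte inet6_addr, and sk_family decides which member is live (see the
transport_tcp.c hunk further down). Illustrative layout (userspace sketch):

	#include <string.h>
	#include <stdint.h>

	union peer_addr {
		uint32_t v4;		/* __be32 inet_addr */
		uint8_t  v6[16];	/* inet6_addr */
	};

	static int same_v6_peer(const union peer_addr *a, const uint8_t *daddr)
	{
		return memcmp(a->v6, daddr, 16) == 0;
	}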
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index d7a8a580d01362..a04d5702820d07 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -1102,8 +1102,10 @@ void smb_send_parent_lease_break_noti(struct ksmbd_file *fp,
+ 			if (!atomic_inc_not_zero(&opinfo->refcount))
+ 				continue;
+ 
+-			if (ksmbd_conn_releasing(opinfo->conn))
++			if (ksmbd_conn_releasing(opinfo->conn)) {
++				opinfo_put(opinfo);
+ 				continue;
++			}
+ 
+ 			oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ 			opinfo_put(opinfo);
+@@ -1139,8 +1141,11 @@ void smb_lazy_parent_lease_break_close(struct ksmbd_file *fp)
+ 			if (!atomic_inc_not_zero(&opinfo->refcount))
+ 				continue;
+ 
+-			if (ksmbd_conn_releasing(opinfo->conn))
++			if (ksmbd_conn_releasing(opinfo->conn)) {
++				opinfo_put(opinfo);
+ 				continue;
++			}
++
+ 			oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ 			opinfo_put(opinfo);
+ 		}
+@@ -1343,8 +1348,10 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ 		if (!atomic_inc_not_zero(&brk_op->refcount))
+ 			continue;
+ 
+-		if (ksmbd_conn_releasing(brk_op->conn))
++		if (ksmbd_conn_releasing(brk_op->conn)) {
++			opinfo_put(brk_op);
+ 			continue;
++		}
+ 
+ 		if (brk_op->is_lease && (brk_op->o_lease->state &
+ 		    (~(SMB2_LEASE_READ_CACHING_LE |
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 8d366db5f60547..5466aa8c39b1cd 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -2194,7 +2194,7 @@ int ksmbd_rdma_init(void)
+ 	return 0;
+ }
+ 
+-void ksmbd_rdma_destroy(void)
++void ksmbd_rdma_stop_listening(void)
+ {
+ 	if (!smb_direct_listener.cm_id)
+ 		return;
+@@ -2203,7 +2203,10 @@ void ksmbd_rdma_destroy(void)
+ 	rdma_destroy_id(smb_direct_listener.cm_id);
+ 
+ 	smb_direct_listener.cm_id = NULL;
++}
+ 
++void ksmbd_rdma_destroy(void)
++{
+ 	if (smb_direct_wq) {
+ 		destroy_workqueue(smb_direct_wq);
+ 		smb_direct_wq = NULL;
+diff --git a/fs/smb/server/transport_rdma.h b/fs/smb/server/transport_rdma.h
+index 77aee4e5c9dcd8..a2291b77488a15 100644
+--- a/fs/smb/server/transport_rdma.h
++++ b/fs/smb/server/transport_rdma.h
+@@ -54,13 +54,15 @@ struct smb_direct_data_transfer {
+ 
+ #ifdef CONFIG_SMB_SERVER_SMBDIRECT
+ int ksmbd_rdma_init(void);
++void ksmbd_rdma_stop_listening(void);
+ void ksmbd_rdma_destroy(void);
+ bool ksmbd_rdma_capable_netdev(struct net_device *netdev);
+ void init_smbd_max_io_size(unsigned int sz);
+ unsigned int get_smbd_max_read_write_size(void);
+ #else
+ static inline int ksmbd_rdma_init(void) { return 0; }
+-static inline int ksmbd_rdma_destroy(void) { return 0; }
++static inline void ksmbd_rdma_stop_listening(void) { }
++static inline void ksmbd_rdma_destroy(void) { }
+ static inline bool ksmbd_rdma_capable_netdev(struct net_device *netdev) { return false; }
+ static inline void init_smbd_max_io_size(unsigned int sz) { }
+ static inline unsigned int get_smbd_max_read_write_size(void) { return 0; }
+diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c
+index d72588f33b9cd1..756833c91b140b 100644
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -87,7 +87,14 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk)
+ 		return NULL;
+ 	}
+ 
++#if IS_ENABLED(CONFIG_IPV6)
++	if (client_sk->sk->sk_family == AF_INET6)
++		memcpy(&conn->inet6_addr, &client_sk->sk->sk_v6_daddr, 16);
++	else
++		conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr;
++#else
+ 	conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr;
++#endif
+ 	conn->transport = KSMBD_TRANS(t);
+ 	KSMBD_TRANS(t)->conn = conn;
+ 	KSMBD_TRANS(t)->ops = &ksmbd_tcp_transport_ops;
+@@ -231,7 +238,6 @@ static int ksmbd_kthread_fn(void *p)
+ {
+ 	struct socket *client_sk = NULL;
+ 	struct interface *iface = (struct interface *)p;
+-	struct inet_sock *csk_inet;
+ 	struct ksmbd_conn *conn;
+ 	int ret;
+ 
+@@ -254,13 +260,27 @@ static int ksmbd_kthread_fn(void *p)
+ 		/*
+ 		 * Limits repeated connections from clients with the same IP.
+ 		 */
+-		csk_inet = inet_sk(client_sk->sk);
+ 		down_read(&conn_list_lock);
+ 		list_for_each_entry(conn, &conn_list, conns_list)
+-			if (csk_inet->inet_daddr == conn->inet_addr) {
++#if IS_ENABLED(CONFIG_IPV6)
++			if (client_sk->sk->sk_family == AF_INET6) {
++				if (memcmp(&client_sk->sk->sk_v6_daddr,
++					   &conn->inet6_addr, 16) == 0) {
++					ret = -EAGAIN;
++					break;
++				}
++			} else if (inet_sk(client_sk->sk)->inet_daddr ==
++				 conn->inet_addr) {
++				ret = -EAGAIN;
++				break;
++			}
++#else
++			if (inet_sk(client_sk->sk)->inet_daddr ==
++			    conn->inet_addr) {
+ 				ret = -EAGAIN;
+ 				break;
+ 			}
++#endif
+ 		up_read(&conn_list_lock);
+ 		if (ret == -EAGAIN)
+ 			continue;
+diff --git a/fs/splice.c b/fs/splice.c
+index 4d6df083e0c06a..f5094b6d00a09f 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -739,6 +739,9 @@ iter_file_splice_write(struct pipe_inode_info *pipe, struct file *out,
+ 		sd.pos = kiocb.ki_pos;
+ 		if (ret <= 0)
+ 			break;
++		WARN_ONCE(ret > sd.total_len - left,
++			  "Splice Exceeded! ret=%zd tot=%zu left=%zu\n",
++			  ret, sd.total_len, left);
+ 
+ 		sd.num_spliced += ret;
+ 		sd.total_len -= ret;
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index 992ea0e372572f..4465cf05603a8f 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -187,10 +187,15 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	unsigned short flags;
+ 	unsigned int fragments;
+ 	u64 lookup_table_start, xattr_id_table_start, next_table;
+-	int err;
++	int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
+ 
+ 	TRACE("Entered squashfs_fill_superblock\n");
+ 
++	if (!devblksize) {
++		errorf(fc, "squashfs: unable to set blocksize\n");
++		return -EINVAL;
++	}
++
+ 	sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL);
+ 	if (sb->s_fs_info == NULL) {
+ 		ERROR("Failed to allocate squashfs_sb_info\n");
+@@ -201,12 +206,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	msblk->panic_on_errors = (opts->errors == Opt_errors_panic);
+ 
+-	msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
+-	if (!msblk->devblksize) {
+-		errorf(fc, "squashfs: unable to set blocksize\n");
+-		return -EINVAL;
+-	}
+-
++	msblk->devblksize = devblksize;
+ 	msblk->devblksize_log2 = ffz(~msblk->devblksize);
+ 
+ 	mutex_init(&msblk->meta_index_mutex);
+diff --git a/fs/xfs/libxfs/xfs_refcount.c b/fs/xfs/libxfs/xfs_refcount.c
+index cebe83f7842a1c..8977840374836e 100644
+--- a/fs/xfs/libxfs/xfs_refcount.c
++++ b/fs/xfs/libxfs/xfs_refcount.c
+@@ -2099,9 +2099,7 @@ xfs_refcount_recover_cow_leftovers(
+ 	 * recording the CoW debris we cancel the (empty) transaction
+ 	 * and everything goes away cleanly.
+ 	 */
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
++	tp = xfs_trans_alloc_empty(mp);
+ 
+ 	if (isrt) {
+ 		xfs_rtgroup_lock(to_rtg(xg), XFS_RTGLOCK_REFCOUNT);
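Note: every xfs hunk from here down applies the same conversion:
xfs_trans_alloc_empty() now returns the transaction directly instead of an
error code, since allocating an empty transaction (no log reservation)
cannot fail. Call sites collapse from error plumbing to a plain assignment:

	/* before */
	error = xfs_trans_alloc_empty(mp, &tp);
	if (error)
		return error;

	/* after: nothing to check */
	tp = xfs_trans_alloc_empty(mp);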
+diff --git a/fs/xfs/scrub/common.c b/fs/xfs/scrub/common.c
+index 28ad341df8eede..d080f4e6e9d8c2 100644
+--- a/fs/xfs/scrub/common.c
++++ b/fs/xfs/scrub/common.c
+@@ -870,7 +870,8 @@ int
+ xchk_trans_alloc_empty(
+ 	struct xfs_scrub	*sc)
+ {
+-	return xfs_trans_alloc_empty(sc->mp, &sc->tp);
++	sc->tp = xfs_trans_alloc_empty(sc->mp);
++	return 0;
+ }
+ 
+ /*
+diff --git a/fs/xfs/scrub/repair.c b/fs/xfs/scrub/repair.c
+index f8f9ed30f56b03..f7f80ff32afc25 100644
+--- a/fs/xfs/scrub/repair.c
++++ b/fs/xfs/scrub/repair.c
+@@ -1279,18 +1279,10 @@ xrep_trans_alloc_hook_dummy(
+ 	void			**cookiep,
+ 	struct xfs_trans	**tpp)
+ {
+-	int			error;
+-
+ 	*cookiep = current->journal_info;
+ 	current->journal_info = NULL;
+-
+-	error = xfs_trans_alloc_empty(mp, tpp);
+-	if (!error)
+-		return 0;
+-
+-	current->journal_info = *cookiep;
+-	*cookiep = NULL;
+-	return error;
++	*tpp = xfs_trans_alloc_empty(mp);
++	return 0;
+ }
+ 
+ /* Cancel a dummy transaction used by a live update hook function. */
+diff --git a/fs/xfs/scrub/scrub.c b/fs/xfs/scrub/scrub.c
+index 76e24032e99a53..3c3b0d25006ff4 100644
+--- a/fs/xfs/scrub/scrub.c
++++ b/fs/xfs/scrub/scrub.c
+@@ -876,10 +876,7 @@ xchk_scrubv_open_by_handle(
+ 	struct xfs_inode		*ip;
+ 	int				error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return NULL;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	error = xfs_iget(mp, tp, head->svh_ino, XCHK_IGET_FLAGS, 0, &ip);
+ 	xfs_trans_cancel(tp);
+ 	if (error)
+diff --git a/fs/xfs/xfs_attr_item.c b/fs/xfs/xfs_attr_item.c
+index f683b7a9323f16..da1e11f38eb004 100644
+--- a/fs/xfs/xfs_attr_item.c
++++ b/fs/xfs/xfs_attr_item.c
+@@ -616,10 +616,7 @@ xfs_attri_iread_extents(
+ 	struct xfs_trans		*tp;
+ 	int				error;
+ 
+-	error = xfs_trans_alloc_empty(ip->i_mount, &tp);
+-	if (error)
+-		return error;
+-
++	tp = xfs_trans_alloc_empty(ip->i_mount);
+ 	xfs_ilock(ip, XFS_ILOCK_EXCL);
+ 	error = xfs_iread_extents(tp, ip, XFS_ATTR_FORK);
+ 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
+diff --git a/fs/xfs/xfs_discard.c b/fs/xfs/xfs_discard.c
+index 603d5136564508..ee49f20875afa3 100644
+--- a/fs/xfs/xfs_discard.c
++++ b/fs/xfs/xfs_discard.c
+@@ -189,9 +189,7 @@ xfs_trim_gather_extents(
+ 	 */
+ 	xfs_log_force(mp, XFS_LOG_SYNC);
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
++	tp = xfs_trans_alloc_empty(mp);
+ 
+ 	error = xfs_alloc_read_agf(pag, tp, 0, &agbp);
+ 	if (error)
+@@ -583,9 +581,7 @@ xfs_trim_rtextents(
+ 	struct xfs_trans	*tp;
+ 	int			error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
++	tp = xfs_trans_alloc_empty(mp);
+ 
+ 	/*
+ 	 * Walk the free ranges between low and high.  The query_range function
+@@ -701,9 +697,7 @@ xfs_trim_rtgroup_extents(
+ 	struct xfs_trans	*tp;
+ 	int			error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
++	tp = xfs_trans_alloc_empty(mp);
+ 
+ 	/*
+ 	 * Walk the free ranges between low and high.  The query_range function
+diff --git a/fs/xfs/xfs_fsmap.c b/fs/xfs/xfs_fsmap.c
+index 414b27a8645886..af68c7de8ee8ad 100644
+--- a/fs/xfs/xfs_fsmap.c
++++ b/fs/xfs/xfs_fsmap.c
+@@ -1270,9 +1270,7 @@ xfs_getfsmap(
+ 		 * buffer locking abilities to detect cycles in the rmapbt
+ 		 * without deadlocking.
+ 		 */
+-		error = xfs_trans_alloc_empty(mp, &tp);
+-		if (error)
+-			break;
++		tp = xfs_trans_alloc_empty(mp);
+ 
+ 		info.dev = handlers[i].dev;
+ 		info.last = false;
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index bbc2f2973dcc90..4cf7abe5014371 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -893,10 +893,7 @@ xfs_metafile_iget(
+ 	struct xfs_trans	*tp;
+ 	int			error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	error = xfs_trans_metafile_iget(tp, ino, metafile_type, ipp);
+ 	xfs_trans_cancel(tp);
+ 	return error;
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 761a996a857cac..9c39251961a32a 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -2932,12 +2932,9 @@ xfs_inode_reload_unlinked(
+ 	struct xfs_inode	*ip)
+ {
+ 	struct xfs_trans	*tp;
+-	int			error;
+-
+-	error = xfs_trans_alloc_empty(ip->i_mount, &tp);
+-	if (error)
+-		return error;
++	int			error = 0;
+ 
++	tp = xfs_trans_alloc_empty(ip->i_mount);
+ 	xfs_ilock(ip, XFS_ILOCK_SHARED);
+ 	if (xfs_inode_unlinked_incomplete(ip))
+ 		error = xfs_inode_reload_unlinked_bucket(tp, ip);
+diff --git a/fs/xfs/xfs_itable.c b/fs/xfs/xfs_itable.c
+index 1fa1c0564b0c5a..5116842420b2e6 100644
+--- a/fs/xfs/xfs_itable.c
++++ b/fs/xfs/xfs_itable.c
+@@ -239,14 +239,10 @@ xfs_bulkstat_one(
+ 	 * Grab an empty transaction so that we can use its recursive buffer
+ 	 * locking abilities to detect cycles in the inobt without deadlocking.
+ 	 */
+-	error = xfs_trans_alloc_empty(breq->mp, &tp);
+-	if (error)
+-		goto out;
+-
++	tp = xfs_trans_alloc_empty(breq->mp);
+ 	error = xfs_bulkstat_one_int(breq->mp, breq->idmap, tp,
+ 			breq->startino, &bc);
+ 	xfs_trans_cancel(tp);
+-out:
+ 	kfree(bc.buf);
+ 
+ 	/*
+@@ -331,17 +327,13 @@ xfs_bulkstat(
+ 	 * Grab an empty transaction so that we can use its recursive buffer
+ 	 * locking abilities to detect cycles in the inobt without deadlocking.
+ 	 */
+-	error = xfs_trans_alloc_empty(breq->mp, &tp);
+-	if (error)
+-		goto out;
+-
++	tp = xfs_trans_alloc_empty(breq->mp);
+ 	if (breq->flags & XFS_IBULK_SAME_AG)
+ 		iwalk_flags |= XFS_IWALK_SAME_AG;
+ 
+ 	error = xfs_iwalk(breq->mp, tp, breq->startino, iwalk_flags,
+ 			xfs_bulkstat_iwalk, breq->icount, &bc);
+ 	xfs_trans_cancel(tp);
+-out:
+ 	kfree(bc.buf);
+ 
+ 	/*
+@@ -455,23 +447,23 @@ xfs_inumbers(
+ 		.breq		= breq,
+ 	};
+ 	struct xfs_trans	*tp;
++	unsigned int		iwalk_flags = 0;
+ 	int			error = 0;
+ 
+ 	if (xfs_bulkstat_already_done(breq->mp, breq->startino))
+ 		return 0;
+ 
++	if (breq->flags & XFS_IBULK_SAME_AG)
++		iwalk_flags |= XFS_IWALK_SAME_AG;
++
+ 	/*
+ 	 * Grab an empty transaction so that we can use its recursive buffer
+ 	 * locking abilities to detect cycles in the inobt without deadlocking.
+ 	 */
+-	error = xfs_trans_alloc_empty(breq->mp, &tp);
+-	if (error)
+-		goto out;
+-
+-	error = xfs_inobt_walk(breq->mp, tp, breq->startino, breq->flags,
++	tp = xfs_trans_alloc_empty(breq->mp);
++	error = xfs_inobt_walk(breq->mp, tp, breq->startino, iwalk_flags,
+ 			xfs_inumbers_walk, breq->icount, &ic);
+ 	xfs_trans_cancel(tp);
+-out:
+ 
+ 	/*
+ 	 * We found some inode groups, so clear the error status and return
+diff --git a/fs/xfs/xfs_iwalk.c b/fs/xfs/xfs_iwalk.c
+index 7db3ece370b100..c1c31d1a8e21b5 100644
+--- a/fs/xfs/xfs_iwalk.c
++++ b/fs/xfs/xfs_iwalk.c
+@@ -377,11 +377,8 @@ xfs_iwalk_run_callbacks(
+ 	if (!has_more)
+ 		return 0;
+ 
+-	if (iwag->drop_trans) {
+-		error = xfs_trans_alloc_empty(mp, &iwag->tp);
+-		if (error)
+-			return error;
+-	}
++	if (iwag->drop_trans)
++		iwag->tp = xfs_trans_alloc_empty(mp);
+ 
+ 	/* ...and recreate the cursor just past where we left off. */
+ 	error = xfs_ialloc_read_agi(iwag->pag, iwag->tp, 0, agi_bpp);
+@@ -617,9 +614,7 @@ xfs_iwalk_ag_work(
+ 	 * Grab an empty transaction so that we can use its recursive buffer
+ 	 * locking abilities to detect cycles in the inobt without deadlocking.
+ 	 */
+-	error = xfs_trans_alloc_empty(mp, &iwag->tp);
+-	if (error)
+-		goto out;
++	iwag->tp = xfs_trans_alloc_empty(mp);
+ 	iwag->drop_trans = 1;
+ 
+ 	error = xfs_iwalk_ag(iwag);
+diff --git a/fs/xfs/xfs_notify_failure.c b/fs/xfs/xfs_notify_failure.c
+index 42e9c72b85c00f..fbeddcac479208 100644
+--- a/fs/xfs/xfs_notify_failure.c
++++ b/fs/xfs/xfs_notify_failure.c
+@@ -279,10 +279,7 @@ xfs_dax_notify_dev_failure(
+ 		kernel_frozen = xfs_dax_notify_failure_freeze(mp) == 0;
+ 	}
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		goto out;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	start_gno = xfs_fsb_to_gno(mp, start_bno, type);
+ 	end_gno = xfs_fsb_to_gno(mp, end_bno, type);
+ 	while ((xg = xfs_group_next_range(mp, xg, start_gno, end_gno, type))) {
+@@ -353,7 +350,6 @@ xfs_dax_notify_dev_failure(
+ 			error = -EFSCORRUPTED;
+ 	}
+ 
+-out:
+ 	/* Thaw the fs if it has been frozen before. */
+ 	if (mf_flags & MF_MEM_PRE_REMOVE)
+ 		xfs_dax_notify_failure_thaw(mp, kernel_frozen);
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index fa135ac264710a..23ba84ec919a4d 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -660,10 +660,7 @@ xfs_qm_load_metadir_qinos(
+ 	struct xfs_trans	*tp;
+ 	int			error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	error = xfs_dqinode_load_parent(tp, &qi->qi_dirip);
+ 	if (error == -ENOENT) {
+ 		/* no quota dir directory, but we'll create one later */
+@@ -1755,10 +1752,7 @@ xfs_qm_qino_load(
+ 	struct xfs_inode	*dp = NULL;
+ 	int			error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	if (xfs_has_metadir(mp)) {
+ 		error = xfs_dqinode_load_parent(tp, &dp);
+ 		if (error)
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index 736eb0924573d3..6907e871fa1511 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -729,9 +729,7 @@ xfs_rtginode_ensure(
+ 	if (rtg->rtg_inodes[type])
+ 		return 0;
+ 
+-	error = xfs_trans_alloc_empty(rtg_mount(rtg), &tp);
+-	if (error)
+-		return error;
++	tp = xfs_trans_alloc_empty(rtg_mount(rtg));
+ 	error = xfs_rtginode_load(rtg, type, tp);
+ 	xfs_trans_cancel(tp);
+ 
+@@ -1305,9 +1303,7 @@ xfs_growfs_rt_prep_groups(
+ 	if (!mp->m_rtdirip) {
+ 		struct xfs_trans	*tp;
+ 
+-		error = xfs_trans_alloc_empty(mp, &tp);
+-		if (error)
+-			return error;
++		tp = xfs_trans_alloc_empty(mp);
+ 		error = xfs_rtginode_load_parent(tp);
+ 		xfs_trans_cancel(tp);
+ 
+@@ -1674,10 +1670,7 @@ xfs_rtmount_inodes(
+ 	struct xfs_rtgroup	*rtg = NULL;
+ 	int			error;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	if (xfs_has_rtgroups(mp) && mp->m_sb.sb_rgcount > 0) {
+ 		error = xfs_rtginode_load_parent(tp);
+ 		if (error)
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index b4a07af513bada..f1135c4cf3c204 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -241,6 +241,28 @@ xfs_trans_reserve(
+ 	return error;
+ }
+ 
++static struct xfs_trans *
++__xfs_trans_alloc(
++	struct xfs_mount	*mp,
++	uint			flags)
++{
++	struct xfs_trans	*tp;
++
++	ASSERT(!(flags & XFS_TRANS_RES_FDBLKS) || xfs_has_lazysbcount(mp));
++
++	tp = kmem_cache_zalloc(xfs_trans_cache, GFP_KERNEL | __GFP_NOFAIL);
++	if (!(flags & XFS_TRANS_NO_WRITECOUNT))
++		sb_start_intwrite(mp->m_super);
++	xfs_trans_set_context(tp);
++	tp->t_flags = flags;
++	tp->t_mountp = mp;
++	INIT_LIST_HEAD(&tp->t_items);
++	INIT_LIST_HEAD(&tp->t_busy);
++	INIT_LIST_HEAD(&tp->t_dfops);
++	tp->t_highest_agno = NULLAGNUMBER;
++	return tp;
++}
++
+ int
+ xfs_trans_alloc(
+ 	struct xfs_mount	*mp,
+@@ -254,33 +276,16 @@ xfs_trans_alloc(
+ 	bool			want_retry = true;
+ 	int			error;
+ 
++	ASSERT(resp->tr_logres > 0);
++
+ 	/*
+ 	 * Allocate the handle before we do our freeze accounting and setting up
+ 	 * GFP_NOFS allocation context so that we avoid lockdep false positives
+ 	 * by doing GFP_KERNEL allocations inside sb_start_intwrite().
+ 	 */
+ retry:
+-	tp = kmem_cache_zalloc(xfs_trans_cache, GFP_KERNEL | __GFP_NOFAIL);
+-	if (!(flags & XFS_TRANS_NO_WRITECOUNT))
+-		sb_start_intwrite(mp->m_super);
+-	xfs_trans_set_context(tp);
+-
+-	/*
+-	 * Zero-reservation ("empty") transactions can't modify anything, so
+-	 * they're allowed to run while we're frozen.
+-	 */
+-	WARN_ON(resp->tr_logres > 0 &&
+-		mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE);
+-	ASSERT(!(flags & XFS_TRANS_RES_FDBLKS) ||
+-	       xfs_has_lazysbcount(mp));
+-
+-	tp->t_flags = flags;
+-	tp->t_mountp = mp;
+-	INIT_LIST_HEAD(&tp->t_items);
+-	INIT_LIST_HEAD(&tp->t_busy);
+-	INIT_LIST_HEAD(&tp->t_dfops);
+-	tp->t_highest_agno = NULLAGNUMBER;
+-
++	tp = __xfs_trans_alloc(mp, flags);
++	WARN_ON(mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE);
+ 	error = xfs_trans_reserve(tp, resp, blocks, rtextents);
+ 	if (error == -ENOSPC && want_retry) {
+ 		xfs_trans_cancel(tp);
+@@ -324,14 +329,11 @@ xfs_trans_alloc(
+  * where we can be grabbing buffers at the same time that freeze is trying to
+  * drain the buffer LRU list.
+  */
+-int
++struct xfs_trans *
+ xfs_trans_alloc_empty(
+-	struct xfs_mount		*mp,
+-	struct xfs_trans		**tpp)
++	struct xfs_mount		*mp)
+ {
+-	struct xfs_trans_res		resv = {0};
+-
+-	return xfs_trans_alloc(mp, &resv, 0, 0, XFS_TRANS_NO_WRITECOUNT, tpp);
++	return __xfs_trans_alloc(mp, XFS_TRANS_NO_WRITECOUNT);
+ }
+ 
+ /*
+diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
+index 2b366851e9a4f4..a6b10aaeb1f1e0 100644
+--- a/fs/xfs/xfs_trans.h
++++ b/fs/xfs/xfs_trans.h
+@@ -168,8 +168,7 @@ int		xfs_trans_alloc(struct xfs_mount *mp, struct xfs_trans_res *resp,
+ 			struct xfs_trans **tpp);
+ int		xfs_trans_reserve_more(struct xfs_trans *tp,
+ 			unsigned int blocks, unsigned int rtextents);
+-int		xfs_trans_alloc_empty(struct xfs_mount *mp,
+-			struct xfs_trans **tpp);
++struct xfs_trans *xfs_trans_alloc_empty(struct xfs_mount *mp);
+ void		xfs_trans_mod_sb(xfs_trans_t *, uint, int64_t);
+ 
+ int xfs_trans_get_buf_map(struct xfs_trans *tp, struct xfs_buftarg *target,
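
[Note: the xfs_trans hunks above convert xfs_trans_alloc_empty() from a
fallible two-argument helper into an infallible allocator that returns the
transaction directly (it uses __GFP_NOFAIL via __xfs_trans_alloc()). A
minimal before/after sketch of the caller conversion; xfs_example_iter()
is hypothetical, not kernel code:

int xfs_example_iter_old(struct xfs_mount *mp)
{
	struct xfs_trans	*tp;
	int			error;

	error = xfs_trans_alloc_empty(mp, &tp);	/* old API: could fail */
	if (error)
		return error;
	/* ... read-only walk using tp ... */
	xfs_trans_cancel(tp);
	return 0;
}

int xfs_example_iter_new(struct xfs_mount *mp)
{
	/* new API: infallible, no error path to unwind */
	struct xfs_trans	*tp = xfs_trans_alloc_empty(mp);

	/* ... read-only walk using tp ... */
	xfs_trans_cancel(tp);
	return 0;
}
]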
+diff --git a/fs/xfs/xfs_zone_alloc.c b/fs/xfs/xfs_zone_alloc.c
+index 01315ed75502dc..417895f8a24c74 100644
+--- a/fs/xfs/xfs_zone_alloc.c
++++ b/fs/xfs/xfs_zone_alloc.c
+@@ -654,13 +654,6 @@ static inline bool xfs_zoned_pack_tight(struct xfs_inode *ip)
+ 		!(ip->i_diflags & XFS_DIFLAG_APPEND);
+ }
+ 
+-/*
+- * Pick a new zone for writes.
+- *
+- * If we aren't using up our budget of open zones just open a new one from the
+- * freelist.  Else try to find one that matches the expected data lifetime.  If
+- * we don't find one that is good pick any zone that is available.
+- */
+ static struct xfs_open_zone *
+ xfs_select_zone_nowait(
+ 	struct xfs_mount	*mp,
+@@ -688,7 +681,8 @@ xfs_select_zone_nowait(
+ 		goto out_unlock;
+ 
+ 	/*
+-	 * See if we can open a new zone and use that.
++	 * See if we can open a new zone and use that so that data for different
++	 * files is mixed as little as possible.
+ 	 */
+ 	oz = xfs_try_open_zone(mp, write_hint);
+ 	if (oz)
+diff --git a/fs/xfs/xfs_zone_gc.c b/fs/xfs/xfs_zone_gc.c
+index 9c00fc5baa3073..e1954b0e6021ef 100644
+--- a/fs/xfs/xfs_zone_gc.c
++++ b/fs/xfs/xfs_zone_gc.c
+@@ -328,10 +328,7 @@ xfs_zone_gc_query(
+ 	iter->rec_idx = 0;
+ 	iter->rec_count = 0;
+ 
+-	error = xfs_trans_alloc_empty(mp, &tp);
+-	if (error)
+-		return error;
+-
++	tp = xfs_trans_alloc_empty(mp);
+ 	xfs_rtgroup_lock(rtg, XFS_RTGLOCK_RMAP);
+ 	cur = xfs_rtrmapbt_init_cursor(tp, rtg);
+ 	error = xfs_rmap_query_range(cur, &ri_low, &ri_high,
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index db294d452e8cd9..bbaeae705ef0e5 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -184,7 +184,7 @@ struct shash_desc {
+  * Worst case is hmac(sha3-224-s390).  Its context is a nested 'shash_desc'
+  * containing a 'struct s390_sha_ctx'.
+  */
+-#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + 360)
++#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + 361)
+ #define MAX_SYNC_HASH_REQSIZE	(sizeof(struct ahash_request) + \
+ 				 HASH_MAX_DESCSIZE)
+ 
+diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
+index ffffd88bbbad33..2d97440028ffd7 100644
+--- a/include/crypto/internal/acompress.h
++++ b/include/crypto/internal/acompress.h
+@@ -63,10 +63,7 @@ struct crypto_acomp_stream {
+ struct crypto_acomp_streams {
+ 	/* These must come first because of struct scomp_alg. */
+ 	void *(*alloc_ctx)(void);
+-	union {
+-		void (*free_ctx)(void *);
+-		void (*cfree_ctx)(const void *);
+-	};
++	void (*free_ctx)(void *);
+ 
+ 	struct crypto_acomp_stream __percpu *streams;
+ 	struct work_struct stream_work;
+diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h
+index d8539174ca11ba..49a2e09155d1ce 100644
+--- a/include/drm/drm_format_helper.h
++++ b/include/drm/drm_format_helper.h
+@@ -102,6 +102,15 @@ void drm_fb_xrgb8888_to_bgr888(struct iosys_map *dst, const unsigned int *dst_pi
+ void drm_fb_xrgb8888_to_argb8888(struct iosys_map *dst, const unsigned int *dst_pitch,
+ 				 const struct iosys_map *src, const struct drm_framebuffer *fb,
+ 				 const struct drm_rect *clip, struct drm_format_conv_state *state);
++void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned int *dst_pitch,
++				 const struct iosys_map *src, const struct drm_framebuffer *fb,
++				 const struct drm_rect *clip, struct drm_format_conv_state *state);
++void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned int *dst_pitch,
++				 const struct iosys_map *src, const struct drm_framebuffer *fb,
++				 const struct drm_rect *clip, struct drm_format_conv_state *state);
++void drm_fb_xrgb8888_to_bgrx8888(struct iosys_map *dst, const unsigned int *dst_pitch,
++				 const struct iosys_map *src, const struct drm_framebuffer *fb,
++				 const struct drm_rect *clip, struct drm_format_conv_state *state);
+ void drm_fb_xrgb8888_to_xrgb2101010(struct iosys_map *dst, const unsigned int *dst_pitch,
+ 				    const struct iosys_map *src, const struct drm_framebuffer *fb,
+ 				    const struct drm_rect *clip,
+diff --git a/include/drm/intel/pciids.h b/include/drm/intel/pciids.h
+index a7ce9523c50d37..9864d2d7d661dc 100644
+--- a/include/drm/intel/pciids.h
++++ b/include/drm/intel/pciids.h
+@@ -846,6 +846,7 @@
+ /* BMG */
+ #define INTEL_BMG_IDS(MACRO__, ...) \
+ 	MACRO__(0xE202, ## __VA_ARGS__), \
++	MACRO__(0xE209, ## __VA_ARGS__), \
+ 	MACRO__(0xE20B, ## __VA_ARGS__), \
+ 	MACRO__(0xE20C, ## __VA_ARGS__), \
+ 	MACRO__(0xE20D, ## __VA_ARGS__), \
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 620345ce3aaad9..3921188c9e1397 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -652,6 +652,7 @@ enum {
+ 	QUEUE_FLAG_SQ_SCHED,		/* single queue style io dispatch */
+ 	QUEUE_FLAG_DISABLE_WBT_DEF,	/* for sched to disable/enable wbt */
+ 	QUEUE_FLAG_NO_ELV_SWITCH,	/* can't switch elevator any more */
++	QUEUE_FLAG_QOS_ENABLED,		/* qos is enabled */
+ 	QUEUE_FLAG_MAX
+ };
+ 
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 6f04a1d8c72094..64ff73c533e54e 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -288,14 +288,6 @@ static inline void *offset_to_ptr(const int *off)
+ #define __ADDRESSABLE(sym) \
+ 	___ADDRESSABLE(sym, __section(".discard.addressable"))
+ 
+-#define __ADDRESSABLE_ASM(sym)						\
+-	.pushsection .discard.addressable,"aw";				\
+-	.align ARCH_SEL(8,4);						\
+-	ARCH_SEL(.quad, .long) __stringify(sym);			\
+-	.popsection;
+-
+-#define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym))
+-
+ /*
+  * This returns a constant expression while determining if an argument is
+  * a constant expression, most importantly without evaluating the argument.
+diff --git a/include/linux/iosys-map.h b/include/linux/iosys-map.h
+index 4696abfd311cc1..3e85afe794c0aa 100644
+--- a/include/linux/iosys-map.h
++++ b/include/linux/iosys-map.h
+@@ -264,12 +264,7 @@ static inline bool iosys_map_is_set(const struct iosys_map *map)
+  */
+ static inline void iosys_map_clear(struct iosys_map *map)
+ {
+-	if (map->is_iomem) {
+-		map->vaddr_iomem = NULL;
+-		map->is_iomem = false;
+-	} else {
+-		map->vaddr = NULL;
+-	}
++	memset(map, 0, sizeof(*map));
+ }
+ 
+ /**
+diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
+index c4aa58032faf87..f9a17fbbd3980b 100644
+--- a/include/linux/iov_iter.h
++++ b/include/linux/iov_iter.h
+@@ -160,7 +160,7 @@ size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2
+ 
+ 	do {
+ 		struct folio *folio = folioq_folio(folioq, slot);
+-		size_t part, remain, consumed;
++		size_t part, remain = 0, consumed;
+ 		size_t fsize;
+ 		void *base;
+ 
+@@ -168,14 +168,16 @@ size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2
+ 			break;
+ 
+ 		fsize = folioq_folio_size(folioq, slot);
+-		base = kmap_local_folio(folio, skip);
+-		part = umin(len, PAGE_SIZE - skip % PAGE_SIZE);
+-		remain = step(base, progress, part, priv, priv2);
+-		kunmap_local(base);
+-		consumed = part - remain;
+-		len -= consumed;
+-		progress += consumed;
+-		skip += consumed;
++		if (skip < fsize) {
++			base = kmap_local_folio(folio, skip);
++			part = umin(len, PAGE_SIZE - skip % PAGE_SIZE);
++			remain = step(base, progress, part, priv, priv2);
++			kunmap_local(base);
++			consumed = part - remain;
++			len -= consumed;
++			progress += consumed;
++			skip += consumed;
++		}
+ 		if (skip >= fsize) {
+ 			skip = 0;
+ 			slot++;
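
[Note: the guard added above only maps and calls step() while the skip
offset still lies inside the current folio; once skip reaches the folio
size, the slot advances without touching memory. A condensed, illustrative
form of the corrected loop body:

	fsize = folioq_folio_size(folioq, slot);
	if (skip < fsize) {
		/* only a valid offset is ever mapped and passed to step() */
		base = kmap_local_folio(folio, skip);
		part = umin(len, PAGE_SIZE - skip % PAGE_SIZE);
		remain = step(base, progress, part, priv, priv2);
		kunmap_local(base);
		skip += part - remain;
	}
	if (skip >= fsize) {
		skip = 0;	/* folio exhausted: move to the next slot */
		slot++;
	}
]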
+diff --git a/include/linux/kcov.h b/include/linux/kcov.h
+index 75a2fb8b16c329..0143358874b07b 100644
+--- a/include/linux/kcov.h
++++ b/include/linux/kcov.h
+@@ -57,47 +57,21 @@ static inline void kcov_remote_start_usb(u64 id)
+ 
+ /*
+  * The softirq flavor of kcov_remote_*() functions is introduced as a temporary
+- * workaround for KCOV's lack of nested remote coverage sections support.
+- *
+- * Adding support is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=210337.
+- *
+- * kcov_remote_start_usb_softirq():
+- *
+- * 1. Only collects coverage when called in the softirq context. This allows
+- *    avoiding nested remote coverage collection sections in the task context.
+- *    For example, USB/IP calls usb_hcd_giveback_urb() in the task context
+- *    within an existing remote coverage collection section. Thus, KCOV should
+- *    not attempt to start collecting coverage within the coverage collection
+- *    section in __usb_hcd_giveback_urb() in this case.
+- *
+- * 2. Disables interrupts for the duration of the coverage collection section.
+- *    This allows avoiding nested remote coverage collection sections in the
+- *    softirq context (a softirq might occur during the execution of a work in
+- *    the BH workqueue, which runs with in_serving_softirq() > 0).
+- *    For example, usb_giveback_urb_bh() runs in the BH workqueue with
+- *    interrupts enabled, so __usb_hcd_giveback_urb() might be interrupted in
+- *    the middle of its remote coverage collection section, and the interrupt
+- *    handler might invoke __usb_hcd_giveback_urb() again.
++ * workaround for kcov's lack of nested remote coverage sections support in
++ * task context. Adding support for nested sections is tracked in:
++ * https://bugzilla.kernel.org/show_bug.cgi?id=210337
+  */
+ 
+-static inline unsigned long kcov_remote_start_usb_softirq(u64 id)
++static inline void kcov_remote_start_usb_softirq(u64 id)
+ {
+-	unsigned long flags = 0;
+-
+-	if (in_serving_softirq()) {
+-		local_irq_save(flags);
++	if (in_serving_softirq() && !in_hardirq())
+ 		kcov_remote_start_usb(id);
+-	}
+-
+-	return flags;
+ }
+ 
+-static inline void kcov_remote_stop_softirq(unsigned long flags)
++static inline void kcov_remote_stop_softirq(void)
+ {
+-	if (in_serving_softirq()) {
++	if (in_serving_softirq() && !in_hardirq())
+ 		kcov_remote_stop();
+-		local_irq_restore(flags);
+-	}
+ }
+ 
+ #ifdef CONFIG_64BIT
+@@ -131,11 +105,8 @@ static inline u64 kcov_common_handle(void)
+ }
+ static inline void kcov_remote_start_common(u64 id) {}
+ static inline void kcov_remote_start_usb(u64 id) {}
+-static inline unsigned long kcov_remote_start_usb_softirq(u64 id)
+-{
+-	return 0;
+-}
+-static inline void kcov_remote_stop_softirq(unsigned long flags) {}
++static inline void kcov_remote_start_usb_softirq(u64 id) {}
++static inline void kcov_remote_stop_softirq(void) {}
+ 
+ #endif /* CONFIG_KCOV */
+ #endif /* _LINUX_KCOV_H */
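
[Note: with the void signatures above, callers no longer thread saved IRQ
flags through the start/stop pair. A hedged usage sketch; the callback shown
is illustrative only, not the hcd code changed elsewhere in this series:

static void example_giveback_urb(struct urb *urb)
{
	/* coverage is collected only in softirq context, never in hardirq */
	kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum);
	urb->complete(urb);
	kcov_remote_stop_softirq();
}
]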
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 2c09df4ee57428..83288df7bb4597 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -10460,8 +10460,16 @@ struct mlx5_ifc_pifr_reg_bits {
+ 	u8         port_filter_update_en[8][0x20];
+ };
+ 
++enum {
++	MLX5_BUF_OWNERSHIP_UNKNOWN	= 0x0,
++	MLX5_BUF_OWNERSHIP_FW_OWNED	= 0x1,
++	MLX5_BUF_OWNERSHIP_SW_OWNED	= 0x2,
++};
++
+ struct mlx5_ifc_pfcc_reg_bits {
+-	u8         reserved_at_0[0x8];
++	u8         reserved_at_0[0x4];
++	u8	   buf_ownership[0x2];
++	u8	   reserved_at_6[0x2];
+ 	u8         local_port[0x8];
+ 	u8         reserved_at_10[0xb];
+ 	u8         ppan_mask_n[0x1];
+@@ -10597,7 +10605,9 @@ struct mlx5_ifc_pcam_enhanced_features_bits {
+ 	u8         fec_200G_per_lane_in_pplm[0x1];
+ 	u8         reserved_at_1e[0x2a];
+ 	u8         fec_100G_per_lane_in_pplm[0x1];
+-	u8         reserved_at_49[0x1f];
++	u8         reserved_at_49[0xa];
++	u8	   buffer_ownership[0x1];
++	u8	   reserved_at_54[0x14];
+ 	u8         fec_50G_per_lane_in_pplm[0x1];
+ 	u8         reserved_at_69[0x4];
+ 	u8         rx_icrc_encapsulated_counter[0x1];
+diff --git a/include/linux/netfs.h b/include/linux/netfs.h
+index f43f075852c06b..31929c84b71822 100644
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -150,6 +150,7 @@ struct netfs_io_stream {
+ 	bool			active;		/* T if stream is active */
+ 	bool			need_retry;	/* T if this stream needs retrying */
+ 	bool			failed;		/* T if this stream failed */
++	bool			transferred_valid; /* T if ->transferred is valid */
+ };
+ 
+ /*
+diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h
+index 169b4ae30ff479..9aed39abc94bc3 100644
+--- a/include/linux/nfs_page.h
++++ b/include/linux/nfs_page.h
+@@ -160,6 +160,7 @@ extern void nfs_join_page_group(struct nfs_page *head,
+ extern int nfs_page_group_lock(struct nfs_page *);
+ extern void nfs_page_group_unlock(struct nfs_page *);
+ extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int);
++extern bool nfs_page_group_sync_on_bit_locked(struct nfs_page *, unsigned int);
+ extern	int nfs_page_set_headlock(struct nfs_page *req);
+ extern void nfs_page_clear_headlock(struct nfs_page *req);
+ extern bool nfs_async_iocounter_wait(struct rpc_task *, struct nfs_lock_context *);
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index 114299bd8b9878..b847cdd2c9d3cb 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -638,7 +638,7 @@ static inline void sco_exit(void)
+ #if IS_ENABLED(CONFIG_BT_LE)
+ int iso_init(void);
+ int iso_exit(void);
+-bool iso_enabled(void);
++bool iso_inited(void);
+ #else
+ static inline int iso_init(void)
+ {
+@@ -650,7 +650,7 @@ static inline int iso_exit(void)
+ 	return 0;
+ }
+ 
+-static inline bool iso_enabled(void)
++static inline bool iso_inited(void)
+ {
+ 	return false;
+ }
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 8fa82987313424..7d1ba92b71f656 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -562,6 +562,7 @@ enum {
+ #define LE_LINK		0x80
+ #define CIS_LINK	0x82
+ #define BIS_LINK	0x83
++#define PA_LINK		0x84
+ #define INVALID_LINK	0xff
+ 
+ /* LMP features */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index f22881bf1b392a..439bc124ce7098 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -129,7 +129,9 @@ struct hci_conn_hash {
+ 	struct list_head list;
+ 	unsigned int     acl_num;
+ 	unsigned int     sco_num;
+-	unsigned int     iso_num;
++	unsigned int     cis_num;
++	unsigned int     bis_num;
++	unsigned int     pa_num;
+ 	unsigned int     le_num;
+ 	unsigned int     le_num_peripheral;
+ };
+@@ -1014,8 +1016,13 @@ static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c)
+ 		h->sco_num++;
+ 		break;
+ 	case CIS_LINK:
++		h->cis_num++;
++		break;
+ 	case BIS_LINK:
+-		h->iso_num++;
++		h->bis_num++;
++		break;
++	case PA_LINK:
++		h->pa_num++;
+ 		break;
+ 	}
+ }
+@@ -1041,8 +1048,13 @@ static inline void hci_conn_hash_del(struct hci_dev *hdev, struct hci_conn *c)
+ 		h->sco_num--;
+ 		break;
+ 	case CIS_LINK:
++		h->cis_num--;
++		break;
+ 	case BIS_LINK:
+-		h->iso_num--;
++		h->bis_num--;
++		break;
++	case PA_LINK:
++		h->pa_num--;
+ 		break;
+ 	}
+ }
+@@ -1059,8 +1071,11 @@ static inline unsigned int hci_conn_num(struct hci_dev *hdev, __u8 type)
+ 	case ESCO_LINK:
+ 		return h->sco_num;
+ 	case CIS_LINK:
++		return h->cis_num;
+ 	case BIS_LINK:
+-		return h->iso_num;
++		return h->bis_num;
++	case PA_LINK:
++		return h->pa_num;
+ 	default:
+ 		return 0;
+ 	}
+@@ -1070,7 +1085,15 @@ static inline unsigned int hci_conn_count(struct hci_dev *hdev)
+ {
+ 	struct hci_conn_hash *c = &hdev->conn_hash;
+ 
+-	return c->acl_num + c->sco_num + c->le_num + c->iso_num;
++	return c->acl_num + c->sco_num + c->le_num + c->cis_num + c->bis_num +
++		c->pa_num;
++}
++
++static inline unsigned int hci_iso_count(struct hci_dev *hdev)
++{
++	struct hci_conn_hash *c = &hdev->conn_hash;
++
++	return c->cis_num + c->bis_num;
+ }
+ 
+ static inline bool hci_conn_valid(struct hci_dev *hdev, struct hci_conn *conn)
+@@ -1142,7 +1165,7 @@ hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != BIS_LINK)
++		if (c->type != PA_LINK)
+ 			continue;
+ 
+ 		if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags))
+@@ -1337,7 +1360,7 @@ hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != BIS_LINK)
++		if (c->type != PA_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) {
+@@ -1407,7 +1430,7 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != BIS_LINK)
++		if (c->type != PA_LINK)
+ 			continue;
+ 
+ 		/* Ignore the listen hcon, we are looking
+@@ -1932,6 +1955,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ 				!hci_dev_test_flag(dev, HCI_RPA_EXPIRED))
+ #define adv_rpa_valid(adv)     (bacmp(&adv->random_addr, BDADDR_ANY) && \
+ 				!adv->rpa_expired)
++#define le_enabled(dev)        (lmp_le_capable(dev) && \
++				hci_dev_test_flag(dev, HCI_LE_ENABLED))
+ 
+ #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \
+ 		      ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M))
+@@ -1949,6 +1974,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ 			 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED))
+ 
+ #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY)
++#define ll_privacy_enabled(dev) (le_enabled(dev) && ll_privacy_capable(dev))
+ 
+ #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \
+ 				   ((dev)->commands[39] & 0x04))
+@@ -1998,14 +2024,23 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ 
+ /* CIS Master/Slave and BIS support */
+ #define iso_capable(dev) (cis_capable(dev) || bis_capable(dev))
++#define iso_enabled(dev) (le_enabled(dev) && iso_capable(dev))
+ #define cis_capable(dev) \
+ 	(cis_central_capable(dev) || cis_peripheral_capable(dev))
++#define cis_enabled(dev) (le_enabled(dev) && cis_capable(dev))
+ #define cis_central_capable(dev) \
+ 	((dev)->le_features[3] & HCI_LE_CIS_CENTRAL)
++#define cis_central_enabled(dev) \
++	(le_enabled(dev) && cis_central_capable(dev))
+ #define cis_peripheral_capable(dev) \
+ 	((dev)->le_features[3] & HCI_LE_CIS_PERIPHERAL)
++#define cis_peripheral_enabled(dev) \
++	(le_enabled(dev) && cis_peripheral_capable(dev))
+ #define bis_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_BROADCASTER)
+-#define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER)
++#define bis_enabled(dev) (le_enabled(dev) && bis_capable(dev))
++#define sync_recv_capable(dev) \
++	((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER)
++#define sync_recv_enabled(dev) (le_enabled(dev) && sync_recv_capable(dev))
+ 
+ #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \
+ 	(!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG)))
+@@ -2026,6 +2061,7 @@ static inline int hci_proto_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 		return iso_connect_ind(hdev, bdaddr, flags);
+ 
+ 	default:
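
[Note: splitting the old shared iso_num into cis_num/bis_num/pa_num lets
callers account per link type. An illustrative query; example_report_links()
is hypothetical:

static void example_report_links(struct hci_dev *hdev)
{
	unsigned int pa  = hci_conn_num(hdev, PA_LINK);	/* PA sync links only */
	unsigned int iso = hci_iso_count(hdev);	/* cis_num + bis_num, no PA */

	bt_dev_dbg(hdev, "pa %u iso %u", pa, iso);
}
]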
+diff --git a/include/net/bond_3ad.h b/include/net/bond_3ad.h
+index 2053cd8e788a73..dba369a2cf27ef 100644
+--- a/include/net/bond_3ad.h
++++ b/include/net/bond_3ad.h
+@@ -307,6 +307,7 @@ int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond,
+ 			 struct slave *slave);
+ int bond_3ad_set_carrier(struct bonding *bond);
+ void bond_3ad_update_lacp_rate(struct bonding *bond);
++void bond_3ad_update_lacp_active(struct bonding *bond);
+ void bond_3ad_update_ad_actor_settings(struct bonding *bond);
+ int bond_3ad_stats_fill(struct sk_buff *skb, struct bond_3ad_stats *stats);
+ size_t bond_3ad_stats_size(void);
+diff --git a/include/net/devlink.h b/include/net/devlink.h
+index 0091f23a40f7d9..af3fd45155dd6e 100644
+--- a/include/net/devlink.h
++++ b/include/net/devlink.h
+@@ -78,6 +78,9 @@ struct devlink_port_pci_sf_attrs {
+  * @flavour: flavour of the port
+  * @split: indicates if this is split port
+  * @splittable: indicates if the port can be split.
++ * @no_phys_port_name: skip automatic phys_port_name generation; for
++ *		       compatibility only, newly added driver/port instances
++ *		       should never set this.
+  * @lanes: maximum number of lanes the port supports. 0 value is not passed to netlink.
+  * @switch_id: if the port is part of switch, this is buffer with ID, otherwise this is NULL
+  * @phys: physical port attributes
+@@ -87,7 +90,8 @@ struct devlink_port_pci_sf_attrs {
+  */
+ struct devlink_port_attrs {
+ 	u8 split:1,
+-	   splittable:1;
++	   splittable:1,
++	   no_phys_port_name:1;
+ 	u32 lanes;
+ 	enum devlink_port_flavour flavour;
+ 	struct netdev_phys_item_id switch_id;
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 638948be4c50e2..738cd5b13c62fb 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1038,12 +1038,17 @@ static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch, bool dir
+ 	skb = __skb_dequeue(&sch->gso_skb);
+ 	if (skb) {
+ 		sch->q.qlen--;
++		qdisc_qstats_backlog_dec(sch, skb);
+ 		return skb;
+ 	}
+-	if (direct)
+-		return __qdisc_dequeue_head(&sch->q);
+-	else
++	if (direct) {
++		skb = __qdisc_dequeue_head(&sch->q);
++		if (skb)
++			qdisc_qstats_backlog_dec(sch, skb);
++		return skb;
++	} else {
+ 		return sch->dequeue(sch);
++	}
+ }
+ 
+ static inline struct sk_buff *qdisc_dequeue_head(struct Qdisc *sch)
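
[Note: the fix above keeps the packet count (q.qlen) and the byte backlog
statistics moving in lockstep on every internal dequeue path. A minimal
sketch of the invariant; the helper name is illustrative:

static inline struct sk_buff *example_dequeue_counted(struct Qdisc *sch)
{
	struct sk_buff *skb = __qdisc_dequeue_head(&sch->q);

	if (skb)
		qdisc_qstats_backlog_dec(sch, skb);	/* bytes */
	/* __qdisc_dequeue_head() already decremented sch->q.qlen (packets) */
	return skb;
}
]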
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index e17c4cadd04d1a..7c8bbe8ad1e2de 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -107,8 +107,8 @@
+ #define CS35L56_DSP1_PMEM_5114				0x3804FE8
+ 
+ #define CS35L63_DSP1_FW_VER				CS35L56_DSP1_FW_VER
+-#define CS35L63_DSP1_HALO_STATE				0x280396C
+-#define CS35L63_DSP1_PM_CUR_STATE			0x28042C8
++#define CS35L63_DSP1_HALO_STATE				0x2803C04
++#define CS35L63_DSP1_PM_CUR_STATE			0x2804518
+ #define CS35L63_PROTECTION_STATUS			0x340009C
+ #define CS35L63_TRANSDUCER_ACTUAL_PS			0x34000F4
+ #define CS35L63_MAIN_RENDER_USER_MUTE			0x3400020
+@@ -306,6 +306,7 @@ struct cs35l56_base {
+ 	struct gpio_desc *reset_gpio;
+ 	struct cs35l56_spi_payload *spi_payload_buf;
+ 	const struct cs35l56_fw_reg *fw_reg;
++	const struct cirrus_amp_cal_controls *calibration_controls;
+ };
+ 
+ static inline bool cs35l56_is_otp_register(unsigned int reg)
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index bebc252db8654f..a32305044371d9 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -1095,7 +1095,7 @@ TRACE_EVENT(btrfs_cow_block,
+ 	TP_fast_assign_btrfs(root->fs_info,
+ 		__entry->root_objectid	= btrfs_root_id(root);
+ 		__entry->buf_start	= buf->start;
+-		__entry->refs		= atomic_read(&buf->refs);
++		__entry->refs		= refcount_read(&buf->refs);
+ 		__entry->cow_start	= cow->start;
+ 		__entry->buf_level	= btrfs_header_level(buf);
+ 		__entry->cow_level	= btrfs_header_level(cow);
+diff --git a/include/uapi/linux/pfrut.h b/include/uapi/linux/pfrut.h
+index 42fa15f8310d6b..b77d5c210c2620 100644
+--- a/include/uapi/linux/pfrut.h
++++ b/include/uapi/linux/pfrut.h
+@@ -89,6 +89,7 @@ struct pfru_payload_hdr {
+ 	__u32 hw_ver;
+ 	__u32 rt_ver;
+ 	__u8 platform_id[16];
++	__u32 svn_ver;
+ };
+ 
+ enum pfru_dsm_status {
+diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h
+index ff47b6f0ba0f5f..b1394628727758 100644
+--- a/include/uapi/linux/raid/md_p.h
++++ b/include/uapi/linux/raid/md_p.h
+@@ -173,7 +173,7 @@ typedef struct mdp_superblock_s {
+ #else
+ #error unspecified endianness
+ #endif
+-	__u32 recovery_cp;	/* 11 recovery checkpoint sector count	      */
++	__u32 resync_offset;	/* 11 resync checkpoint sector count	      */
+ 	/* There are only valid for minor_version > 90 */
+ 	__u64 reshape_position;	/* 12,13 next address in array-space for reshape */
+ 	__u32 new_level;	/* 14 new level we are reshaping to	      */
+diff --git a/io_uring/futex.c b/io_uring/futex.c
+index 692462d50c8c0c..9113a44984f3cb 100644
+--- a/io_uring/futex.c
++++ b/io_uring/futex.c
+@@ -288,6 +288,7 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
+ 		goto done_unlock;
+ 	}
+ 
++	req->flags |= REQ_F_ASYNC_DATA;
+ 	req->async_data = ifd;
+ 	ifd->q = futex_q_init;
+ 	ifd->q.bitset = iof->futex_mask;
+@@ -309,6 +310,8 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
+ 	if (ret < 0)
+ 		req_set_fail(req);
+ 	io_req_set_res(req, ret, 0);
++	req->async_data = NULL;
++	req->flags &= ~REQ_F_ASYNC_DATA;
+ 	kfree(ifd);
+ 	return IOU_COMPLETE;
+ }
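
[Note: the two hunks above keep req->async_data and REQ_F_ASYNC_DATA in
sync, so generic teardown never interprets a stale pointer through the flag.
Condensed, illustrative pairing:

	req->flags |= REQ_F_ASYNC_DATA;		/* publish the flag ... */
	req->async_data = ifd;			/* ... with the data */
	/* ... inline-completion path ... */
	req->async_data = NULL;			/* retract both before freeing */
	req->flags &= ~REQ_F_ASYNC_DATA;
	kfree(ifd);
]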
+diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
+index 2ee603a98813e2..1224dd937df0c4 100644
+--- a/kernel/Kconfig.kexec
++++ b/kernel/Kconfig.kexec
+@@ -97,6 +97,7 @@ config KEXEC_JUMP
+ config KEXEC_HANDOVER
+ 	bool "kexec handover"
+ 	depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
++	depends on !DEFERRED_STRUCT_PAGE_INIT
+ 	select MEMBLOCK_KHO_SCRATCH
+ 	select KEXEC_FILE
+ 	select DEBUG_FS
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 3bc4301466f334..f9d7799c5c9470 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -280,7 +280,7 @@ static inline void check_insane_mems_config(nodemask_t *nodes)
+ {
+ 	if (!cpusets_insane_config() &&
+ 		movable_only_nodes(nodes)) {
+-		static_branch_enable(&cpusets_insane_config_key);
++		static_branch_enable_cpuslocked(&cpusets_insane_config_key);
+ 		pr_info("Unsupported (movable nodes only) cpuset configuration detected (nmask=%*pbl)!\n"
+ 			"Cpuset allocations might fail even with a lot of memory available.\n",
+ 			nodemask_pr_args(nodes));
+@@ -1843,7 +1843,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+ 			if (is_partition_valid(cs))
+ 				adding = cpumask_and(tmp->addmask,
+ 						xcpus, parent->effective_xcpus);
+-		} else if (is_partition_invalid(cs) &&
++		} else if (is_partition_invalid(cs) && !cpumask_empty(xcpus) &&
+ 			   cpumask_subset(xcpus, parent->effective_xcpus)) {
+ 			struct cgroup_subsys_state *css;
+ 			struct cpuset *child;
+@@ -3870,9 +3870,10 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
+ 		partcmd = partcmd_invalidate;
+ 	/*
+ 	 * On the other hand, an invalid partition root may be transitioned
+-	 * back to a regular one.
++	 * back to a regular one with a non-empty effective xcpus.
+ 	 */
+-	else if (is_partition_valid(parent) && is_partition_invalid(cs))
++	else if (is_partition_valid(parent) && is_partition_invalid(cs) &&
++		 !cpumask_empty(cs->effective_xcpus))
+ 		partcmd = partcmd_update;
+ 
+ 	if (partcmd >= 0) {
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index cbeaa499a96af3..408e52d5f7a4e2 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -488,6 +488,9 @@ void css_rstat_exit(struct cgroup_subsys_state *css)
+ 	if (!css_uses_rstat(css))
+ 		return;
+ 
++	if (!css->rstat_cpu)
++		return;
++
+ 	css_rstat_flush(css);
+ 
+ 	/* sanity check */
+diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
+index 5a21dbe179505a..e640d3eb343098 100644
+--- a/kernel/kexec_handover.c
++++ b/kernel/kexec_handover.c
+@@ -144,14 +144,34 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
+ 				unsigned int order)
+ {
+ 	struct kho_mem_phys_bits *bits;
+-	struct kho_mem_phys *physxa;
++	struct kho_mem_phys *physxa, *new_physxa;
+ 	const unsigned long pfn_high = pfn >> order;
+ 
+ 	might_sleep();
+ 
+-	physxa = xa_load_or_alloc(&track->orders, order, sizeof(*physxa));
+-	if (IS_ERR(physxa))
+-		return PTR_ERR(physxa);
++	physxa = xa_load(&track->orders, order);
++	if (!physxa) {
++		int err;
++
++		new_physxa = kzalloc(sizeof(*physxa), GFP_KERNEL);
++		if (!new_physxa)
++			return -ENOMEM;
++
++		xa_init(&new_physxa->phys_bits);
++		physxa = xa_cmpxchg(&track->orders, order, NULL, new_physxa,
++				    GFP_KERNEL);
++
++		err = xa_err(physxa);
++		if (err || physxa) {
++			xa_destroy(&new_physxa->phys_bits);
++			kfree(new_physxa);
++
++			if (err)
++				return err;
++		} else {
++			physxa = new_physxa;
++		}
++	}
+ 
+ 	bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS,
+ 				sizeof(*bits));
+@@ -544,6 +564,7 @@ static void __init kho_reserve_scratch(void)
+ err_free_scratch_desc:
+ 	memblock_free(kho_scratch, kho_scratch_cnt * sizeof(*kho_scratch));
+ err_disable_kho:
++	pr_warn("Failed to reserve scratch area, disabling kexec handover\n");
+ 	kho_enable = false;
+ }
+ 
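
[Note: the open-coded sequence in __kho_preserve_order() above follows the
standard race-free XArray insert pattern: allocate outside the array,
publish with xa_cmpxchg(), and free the new node if another thread won the
race or an error was encoded. A generic sketch; struct node, xa, and idx
are placeholders:

	struct node *n = xa_load(&xa, idx);
	if (!n) {
		struct node *new = kzalloc(sizeof(*new), GFP_KERNEL);

		if (!new)
			return -ENOMEM;
		n = xa_cmpxchg(&xa, idx, NULL, new, GFP_KERNEL);
		if (xa_err(n) || n) {		/* error, or lost the race */
			kfree(new);
			if (xa_err(n))
				return xa_err(n);
		} else {
			n = new;		/* we published it */
		}
	}
]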
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 7dd5cbcb7a069d..717e3d1d6a2fa2 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -5694,6 +5694,9 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link)
+ 			__setscheduler_class(p->policy, p->prio);
+ 		struct sched_enq_and_set_ctx ctx;
+ 
++		if (!tryget_task_struct(p))
++			continue;
++
+ 		if (old_class != new_class && p->se.sched_delayed)
+ 			dequeue_task(task_rq(p), p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
+ 
+@@ -5706,6 +5709,7 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link)
+ 		sched_enq_and_set_task(&ctx);
+ 
+ 		check_class_changed(task_rq(p), p, old_class, p->prio);
++		put_task_struct(p);
+ 	}
+ 	scx_task_iter_stop(&sti);
+ 	percpu_up_write(&scx_fork_rwsem);
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 148082db9a553d..6b1493558a3dd1 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -4067,6 +4067,7 @@ SYSCALL_DEFINE4(pidfd_send_signal, int, pidfd, int, sig,
+ {
+ 	struct pid *pid;
+ 	enum pid_type type;
++	int ret;
+ 
+ 	/* Enforce flags be set to 0 until we add an extension. */
+ 	if (flags & ~PIDFD_SEND_SIGNAL_FLAGS)
+@@ -4108,7 +4109,10 @@ SYSCALL_DEFINE4(pidfd_send_signal, int, pidfd, int, sig,
+ 	}
+ 	}
+ 
+-	return do_pidfd_send_signal(pid, sig, type, info, flags);
++	ret = do_pidfd_send_signal(pid, sig, type, info, flags);
++	put_pid(pid);
++
++	return ret;
+ }
+ 
+ static int
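
[Note: the fix above balances the struct pid reference taken earlier in the
syscall on every exit path. The shape of the contract, with the acquisition
abbreviated; example_send() is hypothetical:

static int example_send(struct pid *pid, int sig)
{
	int ret = kill_pid(pid, sig, 1);	/* does not consume the ref */

	put_pid(pid);		/* always drop the reference we were given */
	return ret;
}
]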
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 4203fad56b6c58..366efafe9f49c7 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -4665,13 +4665,17 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
+ 	        } else {
+ 			iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash);
+ 		}
++	} else {
++		if (hash)
++			iter->hash = alloc_and_copy_ftrace_hash(hash->size_bits, hash);
++		else
++			iter->hash = EMPTY_HASH;
++	}
+ 
+-		if (!iter->hash) {
+-			trace_parser_put(&iter->parser);
+-			goto out_unlock;
+-		}
+-	} else
+-		iter->hash = hash;
++	if (!iter->hash) {
++		trace_parser_put(&iter->parser);
++		goto out_unlock;
++	}
+ 
+ 	ret = 0;
+ 
+@@ -6547,9 +6551,6 @@ int ftrace_regex_release(struct inode *inode, struct file *file)
+ 		ftrace_hash_move_and_update_ops(iter->ops, orig_hash,
+ 						      iter->hash, filter_hash);
+ 		mutex_unlock(&ftrace_lock);
+-	} else {
+-		/* For read only, the hash is the ops hash */
+-		iter->hash = NULL;
+ 	}
+ 
+ 	mutex_unlock(&iter->ops->func_hash->regex_lock);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 7996f26c3f46d2..8ea6ada38c40ec 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -1846,7 +1846,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ 
+ 	ret = get_user(ch, ubuf++);
+ 	if (ret)
+-		goto out;
++		goto fail;
+ 
+ 	read++;
+ 	cnt--;
+@@ -1860,7 +1860,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ 		while (cnt && isspace(ch)) {
+ 			ret = get_user(ch, ubuf++);
+ 			if (ret)
+-				goto out;
++				goto fail;
+ 			read++;
+ 			cnt--;
+ 		}
+@@ -1870,8 +1870,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ 		/* only spaces were written */
+ 		if (isspace(ch) || !ch) {
+ 			*ppos += read;
+-			ret = read;
+-			goto out;
++			return read;
+ 		}
+ 	}
+ 
+@@ -1881,11 +1880,12 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ 			parser->buffer[parser->idx++] = ch;
+ 		else {
+ 			ret = -EINVAL;
+-			goto out;
++			goto fail;
+ 		}
++
+ 		ret = get_user(ch, ubuf++);
+ 		if (ret)
+-			goto out;
++			goto fail;
+ 		read++;
+ 		cnt--;
+ 	}
+@@ -1901,13 +1901,13 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+ 		parser->buffer[parser->idx] = 0;
+ 	} else {
+ 		ret = -EINVAL;
+-		goto out;
++		goto fail;
+ 	}
+ 
+ 	*ppos += read;
+-	ret = read;
+-
+-out:
++	return read;
++fail:
++	trace_parser_fail(parser);
+ 	return ret;
+ }
+ 
+@@ -2410,10 +2410,10 @@ int __init register_tracer(struct tracer *type)
+ 	mutex_unlock(&trace_types_lock);
+ 
+ 	if (ret || !default_bootup_tracer)
+-		goto out_unlock;
++		return ret;
+ 
+ 	if (strncmp(default_bootup_tracer, type->name, MAX_TRACER_SIZE))
+-		goto out_unlock;
++		return 0;
+ 
+ 	printk(KERN_INFO "Starting tracer '%s'\n", type->name);
+ 	/* Do we want this tracer to start on bootup? */
+@@ -2425,8 +2425,7 @@ int __init register_tracer(struct tracer *type)
+ 	/* disable other selftests, since this will break it. */
+ 	disable_tracing_selftest("running a tracer");
+ 
+- out_unlock:
+-	return ret;
++	return 0;
+ }
+ 
+ static void tracing_reset_cpu(struct array_buffer *buf, int cpu)
+@@ -8954,12 +8953,12 @@ ftrace_trace_snapshot_callback(struct trace_array *tr, struct ftrace_hash *hash,
+  out_reg:
+ 	ret = tracing_arm_snapshot(tr);
+ 	if (ret < 0)
+-		goto out;
++		return ret;
+ 
+ 	ret = register_ftrace_function_probe(glob, tr, ops, count);
+ 	if (ret < 0)
+ 		tracing_disarm_snapshot(tr);
+- out:
++
+ 	return ret < 0 ? ret : 0;
+ }
+ 
+@@ -11057,7 +11056,7 @@ __init static int tracer_alloc_buffers(void)
+ 	BUILD_BUG_ON(TRACE_ITER_LAST_BIT > TRACE_FLAGS_MAX_SIZE);
+ 
+ 	if (!alloc_cpumask_var(&tracing_buffer_mask, GFP_KERNEL))
+-		goto out;
++		return -ENOMEM;
+ 
+ 	if (!alloc_cpumask_var(&global_trace.tracing_cpumask, GFP_KERNEL))
+ 		goto out_free_buffer_mask;
+@@ -11175,7 +11174,6 @@ __init static int tracer_alloc_buffers(void)
+ 	free_cpumask_var(global_trace.tracing_cpumask);
+ out_free_buffer_mask:
+ 	free_cpumask_var(tracing_buffer_mask);
+-out:
+ 	return ret;
+ }
+ 
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index bd084953a98bef..dba1a9e4f7385c 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1292,6 +1292,7 @@ bool ftrace_event_is_function(struct trace_event_call *call);
+  */
+ struct trace_parser {
+ 	bool		cont;
++	bool		fail;
+ 	char		*buffer;
+ 	unsigned	idx;
+ 	unsigned	size;
+@@ -1299,7 +1300,7 @@ struct trace_parser {
+ 
+ static inline bool trace_parser_loaded(struct trace_parser *parser)
+ {
+-	return (parser->idx != 0);
++	return !parser->fail && parser->idx != 0;
+ }
+ 
+ static inline bool trace_parser_cont(struct trace_parser *parser)
+@@ -1313,6 +1314,11 @@ static inline void trace_parser_clear(struct trace_parser *parser)
+ 	parser->idx = 0;
+ }
+ 
++static inline void trace_parser_fail(struct trace_parser *parser)
++{
++	parser->fail = true;
++}
++
+ extern int trace_parser_get_init(struct trace_parser *parser, int size);
+ extern void trace_parser_put(struct trace_parser *parser);
+ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
+@@ -2204,7 +2210,7 @@ static inline bool is_good_system_name(const char *name)
+ static inline void sanitize_event_name(char *name)
+ {
+ 	while (*name++ != '\0')
+-		if (*name == ':' || *name == '.')
++		if (*name == ':' || *name == '.' || *name == '*')
+ 			*name = '_';
+ }
+ 
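
[Note: with the fail flag added above, a consumer can rely on
trace_parser_loaded() alone to reject partially-read tokens after any error
in trace_get_user(). An illustrative consumer; example_write() is
hypothetical:

static int example_write(struct trace_parser *parser,
			 const char __user *ubuf, size_t cnt, loff_t *ppos)
{
	int read = trace_get_user(parser, ubuf, cnt, ppos);

	/* never true after trace_parser_fail(): parser->fail is checked */
	if (read >= 0 && trace_parser_loaded(parser))
		pr_info("token: %s\n", parser->buffer);
	return read;
}
]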
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index d9c4a93b24509c..8ead13792f0495 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -843,6 +843,18 @@ static struct damos_filter *damos_nth_filter(int n, struct damos *s)
+ 	return NULL;
+ }
+ 
++static struct damos_filter *damos_nth_ops_filter(int n, struct damos *s)
++{
++	struct damos_filter *filter;
++	int i = 0;
++
++	damos_for_each_ops_filter(filter, s) {
++		if (i++ == n)
++			return filter;
++	}
++	return NULL;
++}
++
+ static void damos_commit_filter_arg(
+ 		struct damos_filter *dst, struct damos_filter *src)
+ {
+@@ -869,6 +881,7 @@ static void damos_commit_filter(
+ {
+ 	dst->type = src->type;
+ 	dst->matching = src->matching;
++	dst->allow = src->allow;
+ 	damos_commit_filter_arg(dst, src);
+ }
+ 
+@@ -906,7 +919,7 @@ static int damos_commit_ops_filters(struct damos *dst, struct damos *src)
+ 	int i = 0, j = 0;
+ 
+ 	damos_for_each_ops_filter_safe(dst_filter, next, dst) {
+-		src_filter = damos_nth_filter(i++, src);
++		src_filter = damos_nth_ops_filter(i++, src);
+ 		if (src_filter)
+ 			damos_commit_filter(dst_filter, src_filter);
+ 		else
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index 4102a8c5f9926d..578546ef74a012 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -476,6 +476,10 @@ static unsigned long damon_pa_migrate_pages(struct list_head *folio_list,
+ 	if (list_empty(folio_list))
+ 		return nr_migrated;
+ 
++	if (target_nid < 0 || target_nid >= MAX_NUMNODES ||
++			!node_state(target_nid, N_MEMORY))
++		return nr_migrated;
++
+ 	noreclaim_flag = memalloc_noreclaim_save();
+ 
+ 	nid = folio_nid(lru_to_folio(folio_list));
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index 7731b238b5340f..0f5ddefd128afb 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -1041,29 +1041,34 @@ static void __init destroy_args(struct pgtable_debug_args *args)
+ 
+ 	/* Free page table entries */
+ 	if (args->start_ptep) {
++		pmd_clear(args->pmdp);
+ 		pte_free(args->mm, args->start_ptep);
+ 		mm_dec_nr_ptes(args->mm);
+ 	}
+ 
+ 	if (args->start_pmdp) {
++		pud_clear(args->pudp);
+ 		pmd_free(args->mm, args->start_pmdp);
+ 		mm_dec_nr_pmds(args->mm);
+ 	}
+ 
+ 	if (args->start_pudp) {
++		p4d_clear(args->p4dp);
+ 		pud_free(args->mm, args->start_pudp);
+ 		mm_dec_nr_puds(args->mm);
+ 	}
+ 
+-	if (args->start_p4dp)
++	if (args->start_p4dp) {
++		pgd_clear(args->pgdp);
+ 		p4d_free(args->mm, args->start_p4dp);
++	}
+ 
+ 	/* Free vma and mm struct */
+ 	if (args->vma)
+ 		vm_area_free(args->vma);
+ 
+ 	if (args->mm)
+-		mmdrop(args->mm);
++		mmput(args->mm);
+ }
+ 
+ static struct page * __init
+diff --git a/mm/filemap.c b/mm/filemap.c
+index bada249b9fb762..a6459874bb2aaa 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1778,8 +1778,9 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
+ 			     pgoff_t index, unsigned long max_scan)
+ {
+ 	XA_STATE(xas, &mapping->i_pages, index);
++	unsigned long nr = max_scan;
+ 
+-	while (max_scan--) {
++	while (nr--) {
+ 		void *entry = xas_next(&xas);
+ 		if (!entry || xa_is_value(entry))
+ 			return xas.xa_index;
+diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
+index 5f922dd38ffa13..c9cdafdde13234 100644
+--- a/mm/kasan/kasan_test_c.c
++++ b/mm/kasan/kasan_test_c.c
+@@ -47,7 +47,7 @@ static struct {
+  * Some tests use these global variables to store return values from function
+  * calls that could otherwise be eliminated by the compiler as dead code.
+  */
+-static volatile void *kasan_ptr_result;
++static void *volatile kasan_ptr_result;
+ static volatile int kasan_int_result;
+ 
+ /* Probe for console output: obtains test_status lines of interest. */
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 225dddff091d71..dd543dd7755fc0 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -847,9 +847,17 @@ static int hwpoison_hugetlb_range(pte_t *ptep, unsigned long hmask,
+ #define hwpoison_hugetlb_range	NULL
+ #endif
+ 
++static int hwpoison_test_walk(unsigned long start, unsigned long end,
++			     struct mm_walk *walk)
++{
++	/* We also want to consider pages mapped into VM_PFNMAP. */
++	return 0;
++}
++
+ static const struct mm_walk_ops hwpoison_walk_ops = {
+ 	.pmd_entry = hwpoison_pte_range,
+ 	.hugetlb_entry = hwpoison_hugetlb_range,
++	.test_walk = hwpoison_test_walk,
+ 	.walk_lock = PGWALK_RDLOCK,
+ };
+ 
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 60f6b8d0d5f0ba..3211dd47ece679 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -294,6 +294,25 @@ static inline bool arch_supports_page_table_move(void)
+ }
+ #endif
+ 
++static inline bool uffd_supports_page_table_move(struct pagetable_move_control *pmc)
++{
++	/*
++	 * If we are moving a VMA that has uffd-wp registered but with
++	 * remap events disabled (new VMA will not be registered with uffd), we
++	 * need to ensure that the uffd-wp state is cleared from all pgtables.
++	 * This means recursing into lower page tables in move_page_tables().
++	 *
++	 * We might get called with VMAs reversed when recovering from a
++	 * failed page table move. In that case, the
++	 * "old"-but-actually-"originally new" VMA during recovery will not have
++	 * a uffd context. Recursing into lower page tables during the original
++	 * move but not during the recovery move will cause trouble, because we
++	 * run into already-existing page tables. So check both VMAs.
++	 */
++	return !vma_has_uffd_without_event_remap(pmc->old) &&
++	       !vma_has_uffd_without_event_remap(pmc->new);
++}
++
+ #ifdef CONFIG_HAVE_MOVE_PMD
+ static bool move_normal_pmd(struct pagetable_move_control *pmc,
+ 			pmd_t *old_pmd, pmd_t *new_pmd)
+@@ -306,6 +325,8 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
+ 
+ 	if (!arch_supports_page_table_move())
+ 		return false;
++	if (!uffd_supports_page_table_move(pmc))
++		return false;
+ 	/*
+ 	 * The destination pmd shouldn't be established, free_pgtables()
+ 	 * should have released it.
+@@ -332,15 +353,6 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
+ 	if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
+ 		return false;
+ 
+-	/* If this pmd belongs to a uffd vma with remap events disabled, we need
+-	 * to ensure that the uffd-wp state is cleared from all pgtables. This
+-	 * means recursing into lower page tables in move_page_tables(), and we
+-	 * can reuse the existing code if we simply treat the entry as "not
+-	 * moved".
+-	 */
+-	if (vma_has_uffd_without_event_remap(vma))
+-		return false;
+-
+ 	/*
+ 	 * We don't have to worry about the ordering of src and dst
+ 	 * ptlocks because exclusive mmap_lock prevents deadlock.
+@@ -389,6 +401,8 @@ static bool move_normal_pud(struct pagetable_move_control *pmc,
+ 
+ 	if (!arch_supports_page_table_move())
+ 		return false;
++	if (!uffd_supports_page_table_move(pmc))
++		return false;
+ 	/*
+ 	 * The destination pud shouldn't be established, free_pgtables()
+ 	 * should have released it.
+@@ -396,15 +410,6 @@ static bool move_normal_pud(struct pagetable_move_control *pmc,
+ 	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+ 		return false;
+ 
+-	/* If this pud belongs to a uffd vma with remap events disabled, we need
+-	 * to ensure that the uffd-wp state is cleared from all pgtables. This
+-	 * means recursing into lower page tables in move_page_tables(), and we
+-	 * can reuse the existing code if we simply treat the entry as "not
+-	 * moved".
+-	 */
+-	if (vma_has_uffd_without_event_remap(vma))
+-		return false;
+-
+ 	/*
+ 	 * We don't have to worry about the ordering of src and dst
+ 	 * ptlocks because exclusive mmap_lock prevents deadlock.
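
[Note: because a failed move is recovered by re-running the mover with old
and new swapped, the helper added above must consult both VMAs so the same
uffd decision is made in both directions. A simplified sketch of that
recovery shape; the real code builds a fresh pagetable_move_control:

	unsigned long moved = move_page_tables(pmc);

	if (moved != pmc->len_in) {
		swap(pmc->old, pmc->new);	/* roles reverse on recovery */
		move_page_tables(pmc);	/* must take the same uffd path */
	}
]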
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index f5cd935490ad97..6a064a6b0e4319 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -339,7 +339,8 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data)
+ 	case BT_CODEC_TRANSPARENT:
+ 		if (!find_next_esco_param(conn, esco_param_msbc,
+ 					  ARRAY_SIZE(esco_param_msbc)))
+-			return false;
++			return -EINVAL;
++
+ 		param = &esco_param_msbc[conn->attempt - 1];
+ 		cp.tx_coding_format.id = 0x03;
+ 		cp.rx_coding_format.id = 0x03;
+@@ -785,7 +786,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c
+ 	d->sync_handle = conn->sync_handle;
+ 
+ 	if (test_and_clear_bit(HCI_CONN_PA_SYNC, &conn->flags)) {
+-		hci_conn_hash_list_flag(hdev, find_bis, BIS_LINK,
++		hci_conn_hash_list_flag(hdev, find_bis, PA_LINK,
+ 					HCI_CONN_PA_SYNC, d);
+ 
+ 		if (!d->count)
+@@ -914,6 +915,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 		break;
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 		if (hdev->iso_mtu)
+ 			/* Dedicated ISO Buffer exists */
+ 			break;
+@@ -979,6 +981,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 		break;
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 		/* conn->src should reflect the local identity address */
+ 		hci_copy_identity_address(hdev, &conn->src, &conn->src_type);
+ 
+@@ -1033,7 +1036,6 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 	}
+ 
+ 	hci_conn_init_sysfs(conn);
+-
+ 	return conn;
+ }
+ 
+@@ -1077,6 +1079,7 @@ static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason)
+ 		break;
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 		if ((conn->state != BT_CONNECTED &&
+ 		    !test_bit(HCI_CONN_CREATE_CIS, &conn->flags)) ||
+ 		    test_bit(HCI_CONN_BIG_CREATED, &conn->flags))
+@@ -1152,7 +1155,8 @@ void hci_conn_del(struct hci_conn *conn)
+ 	} else {
+ 		/* Unacked ISO frames */
+ 		if (conn->type == CIS_LINK ||
+-		    conn->type == BIS_LINK) {
++		    conn->type == BIS_LINK ||
++		    conn->type == PA_LINK) {
+ 			if (hdev->iso_pkts)
+ 				hdev->iso_cnt += conn->sent;
+ 			else if (hdev->le_pkts)
+@@ -2081,7 +2085,7 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ 
+ 	bt_dev_dbg(hdev, "dst %pMR type %d sid %d", dst, dst_type, sid);
+ 
+-	conn = hci_conn_add_unset(hdev, BIS_LINK, dst, HCI_ROLE_SLAVE);
++	conn = hci_conn_add_unset(hdev, PA_LINK, dst, HCI_ROLE_SLAVE);
+ 	if (IS_ERR(conn))
+ 		return conn;
+ 
+@@ -2246,7 +2250,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	 * the start periodic advertising and create BIG commands have
+ 	 * been queued
+ 	 */
+-	hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
++	hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	/* Queue start periodic advertising and create BIG */
+@@ -2980,6 +2984,7 @@ void hci_conn_tx_queue(struct hci_conn *conn, struct sk_buff *skb)
+ 	switch (conn->type) {
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 	case ACL_LINK:
+ 	case LE_LINK:
+ 		break;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 441cb1700f9972..0aa8a591ce428c 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2938,12 +2938,14 @@ int hci_recv_frame(struct hci_dev *hdev, struct sk_buff *skb)
+ 	case HCI_ACLDATA_PKT:
+ 		/* Detect if ISO packet has been sent as ACL */
+ 		if (hci_conn_num(hdev, CIS_LINK) ||
+-		    hci_conn_num(hdev, BIS_LINK)) {
++		    hci_conn_num(hdev, BIS_LINK) ||
++		    hci_conn_num(hdev, PA_LINK)) {
+ 			__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
+ 			__u8 type;
+ 
+ 			type = hci_conn_lookup_type(hdev, hci_handle(handle));
+-			if (type == CIS_LINK || type == BIS_LINK)
++			if (type == CIS_LINK || type == BIS_LINK ||
++			    type == PA_LINK)
+ 				hci_skb_pkt_type(skb) = HCI_ISODATA_PKT;
+ 		}
+ 		break;
+@@ -3398,6 +3400,7 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ 		break;
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 		cnt = hdev->iso_mtu ? hdev->iso_cnt :
+ 			hdev->le_mtu ? hdev->le_cnt : hdev->acl_cnt;
+ 		break;
+@@ -3411,7 +3414,7 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ }
+ 
+ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+-				     __u8 type2, int *quote)
++				     int *quote)
+ {
+ 	struct hci_conn_hash *h = &hdev->conn_hash;
+ 	struct hci_conn *conn = NULL, *c;
+@@ -3423,7 +3426,7 @@ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if ((c->type != type && c->type != type2) ||
++		if (c->type != type ||
+ 		    skb_queue_empty(&c->data_q))
+ 			continue;
+ 
+@@ -3627,7 +3630,7 @@ static void hci_sched_sco(struct hci_dev *hdev, __u8 type)
+ 	else
+ 		cnt = &hdev->sco_cnt;
+ 
+-	while (*cnt && (conn = hci_low_sent(hdev, type, type, &quote))) {
++	while (*cnt && (conn = hci_low_sent(hdev, type, &quote))) {
+ 		while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ 			BT_DBG("skb %p len %d", skb, skb->len);
+ 			hci_send_conn_frame(hdev, conn, skb);
+@@ -3746,8 +3749,8 @@ static void hci_sched_le(struct hci_dev *hdev)
+ 		hci_prio_recalculate(hdev, LE_LINK);
+ }
+ 
+-/* Schedule CIS */
+-static void hci_sched_iso(struct hci_dev *hdev)
++/* Schedule iso */
++static void hci_sched_iso(struct hci_dev *hdev, __u8 type)
+ {
+ 	struct hci_conn *conn;
+ 	struct sk_buff *skb;
+@@ -3755,14 +3758,12 @@ static void hci_sched_iso(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	if (!hci_conn_num(hdev, CIS_LINK) &&
+-	    !hci_conn_num(hdev, BIS_LINK))
++	if (!hci_conn_num(hdev, type))
+ 		return;
+ 
+ 	cnt = hdev->iso_pkts ? &hdev->iso_cnt :
+ 		hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
+-	while (*cnt && (conn = hci_low_sent(hdev, CIS_LINK, BIS_LINK,
+-					    &quote))) {
++	while (*cnt && (conn = hci_low_sent(hdev, type, &quote))) {
+ 		while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ 			BT_DBG("skb %p len %d", skb, skb->len);
+ 			hci_send_conn_frame(hdev, conn, skb);
+@@ -3787,7 +3788,9 @@ static void hci_tx_work(struct work_struct *work)
+ 		/* Schedule queues and send stuff to HCI driver */
+ 		hci_sched_sco(hdev, SCO_LINK);
+ 		hci_sched_sco(hdev, ESCO_LINK);
+-		hci_sched_iso(hdev);
++		hci_sched_iso(hdev, CIS_LINK);
++		hci_sched_iso(hdev, BIS_LINK);
++		hci_sched_iso(hdev, PA_LINK);
+ 		hci_sched_acl(hdev);
+ 		hci_sched_le(hdev);
+ 	}
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index b83995898da098..5ef54853bc5eb1 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4432,6 +4432,7 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ 		case CIS_LINK:
+ 		case BIS_LINK:
++		case PA_LINK:
+ 			if (hdev->iso_pkts) {
+ 				hdev->iso_cnt += count;
+ 				if (hdev->iso_cnt > hdev->iso_pkts)
+@@ -6381,7 +6382,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ 	conn->sync_handle = le16_to_cpu(ev->handle);
+ 	conn->sid = HCI_SID_INVALID;
+ 
+-	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, BIS_LINK,
++	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, PA_LINK,
+ 				      &flags);
+ 	if (!(mask & HCI_LM_ACCEPT)) {
+ 		hci_le_pa_term_sync(hdev, ev->handle);
+@@ -6392,7 +6393,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 
+ 	/* Add connection to indicate PA sync event */
+-	pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
++	pa_sync = hci_conn_add_unset(hdev, PA_LINK, BDADDR_ANY,
+ 				     HCI_ROLE_SLAVE);
+ 
+ 	if (IS_ERR(pa_sync))
+@@ -6423,7 +6424,7 @@ static void hci_le_per_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, BIS_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, PA_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT))
+ 		goto unlock;
+ 
+@@ -6744,8 +6745,8 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		qos->ucast.out.latency =
+ 			DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
+ 					  1000);
+-		qos->ucast.in.sdu = le16_to_cpu(ev->c_mtu);
+-		qos->ucast.out.sdu = le16_to_cpu(ev->p_mtu);
++		qos->ucast.in.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;
++		qos->ucast.out.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;
+ 		qos->ucast.in.phy = ev->c_phy;
+ 		qos->ucast.out.phy = ev->p_phy;
+ 		break;
+@@ -6759,8 +6760,8 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		qos->ucast.in.latency =
+ 			DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
+ 					  1000);
+-		qos->ucast.out.sdu = le16_to_cpu(ev->c_mtu);
+-		qos->ucast.in.sdu = le16_to_cpu(ev->p_mtu);
++		qos->ucast.out.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;
++		qos->ucast.in.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;
+ 		qos->ucast.out.phy = ev->c_phy;
+ 		qos->ucast.in.phy = ev->p_phy;
+ 		break;
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 7938c004071c49..115dc1cd99ce40 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2929,7 +2929,7 @@ static int hci_le_set_ext_scan_param_sync(struct hci_dev *hdev, u8 type,
+ 		if (sent) {
+ 			struct hci_conn *conn;
+ 
+-			conn = hci_conn_hash_lookup_ba(hdev, BIS_LINK,
++			conn = hci_conn_hash_lookup_ba(hdev, PA_LINK,
+ 						       &sent->bdaddr);
+ 			if (conn) {
+ 				struct bt_iso_qos *qos = &conn->iso_qos;
+@@ -4531,14 +4531,14 @@ static int hci_le_set_host_feature_sync(struct hci_dev *hdev)
+ {
+ 	struct hci_cp_le_set_host_feature cp;
+ 
+-	if (!cis_capable(hdev))
++	if (!iso_capable(hdev))
+ 		return 0;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 
+ 	/* Connected Isochronous Channels (Host Support) */
+ 	cp.bit_number = 32;
+-	cp.bit_value = 1;
++	cp.bit_value = iso_enabled(hdev) ? 0x01 : 0x00;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_HOST_FEATURE,
+ 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
+@@ -5493,7 +5493,7 @@ static int hci_disconnect_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ {
+ 	struct hci_cp_disconnect cp;
+ 
+-	if (conn->type == BIS_LINK) {
++	if (conn->type == BIS_LINK || conn->type == PA_LINK) {
+ 		/* This is a BIS connection, hci_conn_del will
+ 		 * do the necessary cleanup.
+ 		 */
+@@ -5562,7 +5562,7 @@ static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 		return HCI_ERROR_LOCAL_HOST_TERM;
+ 	}
+ 
+-	if (conn->type == BIS_LINK) {
++	if (conn->type == BIS_LINK || conn->type == PA_LINK) {
+ 		/* There is no way to cancel a BIS without terminating the BIG
+ 		 * which is done later on connection cleanup.
+ 		 */
+@@ -5627,7 +5627,7 @@ static int hci_reject_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 	if (conn->type == CIS_LINK)
+ 		return hci_le_reject_cis_sync(hdev, conn, reason);
+ 
+-	if (conn->type == BIS_LINK)
++	if (conn->type == BIS_LINK || conn->type == PA_LINK)
+ 		return -EINVAL;
+ 
+ 	if (conn->type == SCO_LINK || conn->type == ESCO_LINK)
+@@ -6985,8 +6985,6 @@ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+-
+ 	if (!hci_conn_valid(hdev, conn))
+ 		clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
+ 
+@@ -6994,7 +6992,7 @@ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ 		goto unlock;
+ 
+ 	/* Add connection to indicate PA sync error */
+-	pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
++	pa_sync = hci_conn_add_unset(hdev, PA_LINK, BDADDR_ANY,
+ 				     HCI_ROLE_SLAVE);
+ 
+ 	if (IS_ERR(pa_sync))
+@@ -7047,10 +7045,13 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+ 	/* SID has not been set listen for HCI_EV_LE_EXT_ADV_REPORT to update
+ 	 * it.
+ 	 */
+-	if (conn->sid == HCI_SID_INVALID)
+-		__hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,
+-					 HCI_EV_LE_EXT_ADV_REPORT,
+-					 conn->conn_timeout, NULL);
++	if (conn->sid == HCI_SID_INVALID) {
++		err = __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,
++					       HCI_EV_LE_EXT_ADV_REPORT,
++					       conn->conn_timeout, NULL);
++		if (err == -ETIMEDOUT)
++			goto done;
++	}
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.options = qos->bcast.options;
+@@ -7080,6 +7081,12 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+ 		__hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL,
+ 				      0, NULL, HCI_CMD_TIMEOUT);
+ 
++done:
++	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
++
++	/* Update passive scan since HCI_PA_SYNC flag has been cleared */
++	hci_update_passive_scan_sync(hdev);
++
+ 	return err;
+ }
+ 
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 3c2c98eecc6267..14a4215352d5f1 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -2226,7 +2226,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 
+ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
+-	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK) {
++	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK &&
++	    hcon->type != PA_LINK) {
+ 		if (hcon->type != LE_LINK)
+ 			return;
+ 
+@@ -2267,7 +2268,8 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ 
+ static void iso_disconn_cfm(struct hci_conn *hcon, __u8 reason)
+ {
+-	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK)
++	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK &&
++	    hcon->type != PA_LINK)
+ 		return;
+ 
+ 	BT_DBG("hcon %p reason %d", hcon, reason);
+@@ -2455,11 +2457,11 @@ static const struct net_proto_family iso_sock_family_ops = {
+ 	.create	= iso_sock_create,
+ };
+ 
+-static bool iso_inited;
++static bool inited;
+ 
+-bool iso_enabled(void)
++bool iso_inited(void)
+ {
+-	return iso_inited;
++	return inited;
+ }
+ 
+ int iso_init(void)
+@@ -2468,7 +2470,7 @@ int iso_init(void)
+ 
+ 	BUILD_BUG_ON(sizeof(struct sockaddr_iso) > sizeof(struct sockaddr));
+ 
+-	if (iso_inited)
++	if (inited)
+ 		return -EALREADY;
+ 
+ 	err = proto_register(&iso_proto, 0);
+@@ -2496,7 +2498,7 @@ int iso_init(void)
+ 		iso_debugfs = debugfs_create_file("iso", 0444, bt_debugfs,
+ 						  NULL, &iso_debugfs_fops);
+ 
+-	iso_inited = true;
++	inited = true;
+ 
+ 	return 0;
+ 
+@@ -2507,7 +2509,7 @@ int iso_init(void)
+ 
+ int iso_exit(void)
+ {
+-	if (!iso_inited)
++	if (!inited)
+ 		return -EALREADY;
+ 
+ 	bt_procfs_cleanup(&init_net, "iso");
+@@ -2521,7 +2523,7 @@ int iso_exit(void)
+ 
+ 	proto_unregister(&iso_proto);
+ 
+-	iso_inited = false;
++	inited = false;
+ 
+ 	return 0;
+ }
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 63dba0503653bd..3166f5fb876b11 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -922,19 +922,19 @@ static u32 get_current_settings(struct hci_dev *hdev)
+ 	if (hci_dev_test_flag(hdev, HCI_WIDEBAND_SPEECH_ENABLED))
+ 		settings |= MGMT_SETTING_WIDEBAND_SPEECH;
+ 
+-	if (cis_central_capable(hdev))
++	if (cis_central_enabled(hdev))
+ 		settings |= MGMT_SETTING_CIS_CENTRAL;
+ 
+-	if (cis_peripheral_capable(hdev))
++	if (cis_peripheral_enabled(hdev))
+ 		settings |= MGMT_SETTING_CIS_PERIPHERAL;
+ 
+-	if (bis_capable(hdev))
++	if (bis_enabled(hdev))
+ 		settings |= MGMT_SETTING_ISO_BROADCASTER;
+ 
+-	if (sync_recv_capable(hdev))
++	if (sync_recv_enabled(hdev))
+ 		settings |= MGMT_SETTING_ISO_SYNC_RECEIVER;
+ 
+-	if (ll_privacy_capable(hdev))
++	if (ll_privacy_enabled(hdev))
+ 		settings |= MGMT_SETTING_LL_PRIVACY;
+ 
+ 	return settings;
+@@ -3237,6 +3237,7 @@ static u8 link_to_bdaddr(u8 link_type, u8 addr_type)
+ 	switch (link_type) {
+ 	case CIS_LINK:
+ 	case BIS_LINK:
++	case PA_LINK:
+ 	case LE_LINK:
+ 		switch (addr_type) {
+ 		case ADDR_LE_DEV_PUBLIC:
+@@ -4512,7 +4513,7 @@ static int read_exp_features_info(struct sock *sk, struct hci_dev *hdev,
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_BT_LE)) {
+-		flags = iso_enabled() ? BIT(0) : 0;
++		flags = iso_inited() ? BIT(0) : 0;
+ 		memcpy(rp->features[idx].uuid, iso_socket_uuid, 16);
+ 		rp->features[idx].flags = cpu_to_le32(flags);
+ 		idx++;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 1377f31b719cdd..8ce145938b02d9 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -4818,6 +4818,14 @@ void br_multicast_set_query_intvl(struct net_bridge_mcast *brmctx,
+ 		intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MIN;
+ 	}
+ 
++	if (intvl_jiffies > BR_MULTICAST_QUERY_INTVL_MAX) {
++		br_info(brmctx->br,
++			"trying to set multicast query interval above maximum, setting to %lu (%ums)\n",
++			jiffies_to_clock_t(BR_MULTICAST_QUERY_INTVL_MAX),
++			jiffies_to_msecs(BR_MULTICAST_QUERY_INTVL_MAX));
++		intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MAX;
++	}
++
+ 	brmctx->multicast_query_interval = intvl_jiffies;
+ }
+ 
+@@ -4834,6 +4842,14 @@ void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx,
+ 		intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MIN;
+ 	}
+ 
++	if (intvl_jiffies > BR_MULTICAST_STARTUP_QUERY_INTVL_MAX) {
++		br_info(brmctx->br,
++			"trying to set multicast startup query interval above maximum, setting to %lu (%ums)\n",
++			jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX),
++			jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX));
++		intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MAX;
++	}
++
+ 	brmctx->multicast_startup_query_interval = intvl_jiffies;
+ }
+ 
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index b159aae594c0b0..8de0904b9627f7 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -31,6 +31,8 @@
+ #define BR_MULTICAST_DEFAULT_HASH_MAX 4096
+ #define BR_MULTICAST_QUERY_INTVL_MIN msecs_to_jiffies(1000)
+ #define BR_MULTICAST_STARTUP_QUERY_INTVL_MIN BR_MULTICAST_QUERY_INTVL_MIN
++#define BR_MULTICAST_QUERY_INTVL_MAX msecs_to_jiffies(86400000) /* 24 hours */
++#define BR_MULTICAST_STARTUP_QUERY_INTVL_MAX BR_MULTICAST_QUERY_INTVL_MAX
+ 
+ #define BR_HWDOM_MAX BITS_PER_LONG
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index be97c440ecd5f9..b014a5ce9e0ff2 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3782,6 +3782,18 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
+ 			features &= ~NETIF_F_TSO_MANGLEID;
+ 	}
+ 
++	/* NETIF_F_IPV6_CSUM does not support IPv6 extension headers,
++	 * so neither does TSO that depends on it.
++	 */
++	if (features & NETIF_F_IPV6_CSUM &&
++	    (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 ||
++	     (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&
++	      vlan_get_protocol(skb) == htons(ETH_P_IPV6))) &&
++	    skb_transport_header_was_set(skb) &&
++	    skb_network_header_len(skb) != sizeof(struct ipv6hdr) &&
++	    !ipv6_has_hopopt_jumbo(skb))
++		features &= ~(NETIF_F_IPV6_CSUM | NETIF_F_TSO6 | NETIF_F_GSO_UDP_L4);
++
+ 	return features;
+ }
+ 
+diff --git a/net/devlink/port.c b/net/devlink/port.c
+index 939081a0e6154a..cb8d4df616199e 100644
+--- a/net/devlink/port.c
++++ b/net/devlink/port.c
+@@ -1519,7 +1519,7 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port,
+ 	struct devlink_port_attrs *attrs = &devlink_port->attrs;
+ 	int n = 0;
+ 
+-	if (!devlink_port->attrs_set)
++	if (!devlink_port->attrs_set || devlink_port->attrs.no_phys_port_name)
+ 		return -EOPNOTSUPP;
+ 
+ 	switch (attrs->flavour) {
+diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c
+index b87b6a6fe07000..102eccf5ead734 100644
+--- a/net/hsr/hsr_slave.c
++++ b/net/hsr/hsr_slave.c
+@@ -63,8 +63,14 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb)
+ 	skb_push(skb, ETH_HLEN);
+ 	skb_reset_mac_header(skb);
+ 	if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) ||
+-	    protocol == htons(ETH_P_HSR))
++	    protocol == htons(ETH_P_HSR)) {
++		if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) {
++			kfree_skb(skb);
++			goto finish_consume;
++		}
++
+ 		skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);
++	}
+ 	skb_reset_mac_len(skb);
+ 
+ 	/* Only the frames received over the interlink port will assign a
+diff --git a/net/ipv4/netfilter/nf_reject_ipv4.c b/net/ipv4/netfilter/nf_reject_ipv4.c
+index 87fd945a0d27a5..0d3cb2ba6fc841 100644
+--- a/net/ipv4/netfilter/nf_reject_ipv4.c
++++ b/net/ipv4/netfilter/nf_reject_ipv4.c
+@@ -247,8 +247,7 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ 	if (!oth)
+ 		return;
+ 
+-	if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) &&
+-	    nf_reject_fill_skb_dst(oldskb) < 0)
++	if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0)
+ 		return;
+ 
+ 	if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
+@@ -321,8 +320,7 @@ void nf_send_unreach(struct sk_buff *skb_in, int code, int hook)
+ 	if (iph->frag_off & htons(IP_OFFSET))
+ 		return;
+ 
+-	if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) &&
+-	    nf_reject_fill_skb_dst(skb_in) < 0)
++	if (!skb_dst(skb_in) && nf_reject_fill_skb_dst(skb_in) < 0)
+ 		return;
+ 
+ 	if (skb_csum_unnecessary(skb_in) ||
+diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c
+index 9ae2b2725bf99a..c3d64c4b69d7de 100644
+--- a/net/ipv6/netfilter/nf_reject_ipv6.c
++++ b/net/ipv6/netfilter/nf_reject_ipv6.c
+@@ -293,7 +293,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
+ 	fl6.fl6_sport = otcph->dest;
+ 	fl6.fl6_dport = otcph->source;
+ 
+-	if (hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) {
++	if (!skb_dst(oldskb)) {
+ 		nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false);
+ 		if (!dst)
+ 			return;
+@@ -397,8 +397,7 @@ void nf_send_unreach6(struct net *net, struct sk_buff *skb_in,
+ 	if (hooknum == NF_INET_LOCAL_OUT && skb_in->dev == NULL)
+ 		skb_in->dev = net->loopback_dev;
+ 
+-	if ((hooknum == NF_INET_PRE_ROUTING || hooknum == NF_INET_INGRESS) &&
+-	    nf_reject6_fill_skb_dst(skb_in) < 0)
++	if (!skb_dst(skb_in) && nf_reject6_fill_skb_dst(skb_in) < 0)
+ 		return;
+ 
+ 	icmpv6_send(skb_in, ICMPV6_DEST_UNREACH, code, 0);
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index f78ecb6ad83834..fd58426f222beb 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -35,6 +35,7 @@
+ #include <net/xfrm.h>
+ 
+ #include <crypto/hash.h>
++#include <crypto/utils.h>
+ #include <net/seg6.h>
+ #include <net/genetlink.h>
+ #include <net/seg6_hmac.h>
+@@ -280,7 +281,7 @@ bool seg6_hmac_validate_skb(struct sk_buff *skb)
+ 	if (seg6_hmac_compute(hinfo, srh, &ipv6_hdr(skb)->saddr, hmac_output))
+ 		return false;
+ 
+-	if (memcmp(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN) != 0)
++	if (crypto_memneq(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN))
+ 		return false;
+ 
+ 	return true;
+@@ -304,6 +305,9 @@ int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo)
+ 	struct seg6_pernet_data *sdata = seg6_pernet(net);
+ 	int err;
+ 
++	if (!__hmac_get_algo(hinfo->alg_id))
++		return -EINVAL;
++
+ 	err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,
+ 					    rht_params);
+ 
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 1f898888b22357..c6983471dca552 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -1117,7 +1117,9 @@ static bool add_addr_hmac_valid(struct mptcp_sock *msk,
+ 	return hmac == mp_opt->ahmac;
+ }
+ 
+-/* Return false if a subflow has been reset, else return true */
++/* Return false in case of error (or subflow has been reset),
++ * else return true.
++ */
+ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+@@ -1221,7 +1223,7 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ 
+ 	mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
+ 	if (!mpext)
+-		return true;
++		return false;
+ 
+ 	memset(mpext, 0, sizeof(*mpext));
+ 
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 420d416e2603de..136a380602cae8 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -274,6 +274,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 							      add_timer);
+ 	struct mptcp_sock *msk = entry->sock;
+ 	struct sock *sk = (struct sock *)msk;
++	unsigned int timeout;
+ 
+ 	pr_debug("msk=%p\n", msk);
+ 
+@@ -291,6 +292,10 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 		goto out;
+ 	}
+ 
++	timeout = mptcp_get_add_addr_timeout(sock_net(sk));
++	if (!timeout)
++		goto out;
++
+ 	spin_lock_bh(&msk->pm.lock);
+ 
+ 	if (!mptcp_pm_should_add_signal_addr(msk)) {
+@@ -302,7 +307,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 
+ 	if (entry->retrans_times < ADD_ADDR_RETRANS_MAX)
+ 		sk_reset_timer(sk, timer,
+-			       jiffies + mptcp_get_add_addr_timeout(sock_net(sk)));
++			       jiffies + timeout);
+ 
+ 	spin_unlock_bh(&msk->pm.lock);
+ 
+@@ -344,6 +349,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ 	struct mptcp_pm_add_entry *add_entry = NULL;
+ 	struct sock *sk = (struct sock *)msk;
+ 	struct net *net = sock_net(sk);
++	unsigned int timeout;
+ 
+ 	lockdep_assert_held(&msk->pm.lock);
+ 
+@@ -353,9 +359,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ 		if (WARN_ON_ONCE(mptcp_pm_is_kernel(msk)))
+ 			return false;
+ 
+-		sk_reset_timer(sk, &add_entry->add_timer,
+-			       jiffies + mptcp_get_add_addr_timeout(net));
+-		return true;
++		goto reset_timer;
+ 	}
+ 
+ 	add_entry = kmalloc(sizeof(*add_entry), GFP_ATOMIC);
+@@ -369,8 +373,10 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ 	add_entry->retrans_times = 0;
+ 
+ 	timer_setup(&add_entry->add_timer, mptcp_pm_add_timer, 0);
+-	sk_reset_timer(sk, &add_entry->add_timer,
+-		       jiffies + mptcp_get_add_addr_timeout(net));
++reset_timer:
++	timeout = mptcp_get_add_addr_timeout(net);
++	if (timeout)
++		sk_reset_timer(sk, &add_entry->add_timer, jiffies + timeout);
+ 
+ 	return true;
+ }
+diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
+index d39e7c1784608d..667803d72b643a 100644
+--- a/net/mptcp/pm_kernel.c
++++ b/net/mptcp/pm_kernel.c
+@@ -1085,7 +1085,6 @@ static void __flush_addrs(struct list_head *list)
+ static void __reset_counters(struct pm_nl_pernet *pernet)
+ {
+ 	WRITE_ONCE(pernet->add_addr_signal_max, 0);
+-	WRITE_ONCE(pernet->add_addr_accept_max, 0);
+ 	WRITE_ONCE(pernet->local_addr_max, 0);
+ 	pernet->addrs = 0;
+ }
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 48dd8c88903feb..aa9f31e4415ad8 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -1747,7 +1747,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 	ktime_t now = ktime_get();
+ 	struct cake_tin_data *b;
+ 	struct cake_flow *flow;
+-	u32 idx;
++	u32 idx, tin;
+ 
+ 	/* choose flow to insert into */
+ 	idx = cake_classify(sch, &b, skb, q->flow_mode, &ret);
+@@ -1757,6 +1757,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		__qdisc_drop(skb, to_free);
+ 		return ret;
+ 	}
++	tin = (u32)(b - q->tins);
+ 	idx--;
+ 	flow = &b->flows[idx];
+ 
+@@ -1924,13 +1925,22 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 		q->buffer_max_used = q->buffer_used;
+ 
+ 	if (q->buffer_used > q->buffer_limit) {
++		bool same_flow = false;
+ 		u32 dropped = 0;
++		u32 drop_id;
+ 
+ 		while (q->buffer_used > q->buffer_limit) {
+ 			dropped++;
+-			cake_drop(sch, to_free);
++			drop_id = cake_drop(sch, to_free);
++
++			if ((drop_id >> 16) == tin &&
++			    (drop_id & 0xFFFF) == idx)
++				same_flow = true;
+ 		}
+ 		b->drop_overlimit += dropped;
++
++		if (same_flow)
++			return NET_XMIT_CN;
+ 	}
+ 	return NET_XMIT_SUCCESS;
+ }
+diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
+index c93761040c6e77..fa0314679e434a 100644
+--- a/net/sched/sch_codel.c
++++ b/net/sched/sch_codel.c
+@@ -101,9 +101,9 @@ static const struct nla_policy codel_policy[TCA_CODEL_MAX + 1] = {
+ static int codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 			struct netlink_ext_ack *extack)
+ {
++	unsigned int dropped_pkts = 0, dropped_bytes = 0;
+ 	struct codel_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_CODEL_MAX + 1];
+-	unsigned int qlen, dropped = 0;
+ 	int err;
+ 
+ 	err = nla_parse_nested_deprecated(tb, TCA_CODEL_MAX, opt,
+@@ -142,15 +142,17 @@ static int codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 		WRITE_ONCE(q->params.ecn,
+ 			   !!nla_get_u32(tb[TCA_CODEL_ECN]));
+ 
+-	qlen = sch->q.qlen;
+ 	while (sch->q.qlen > sch->limit) {
+ 		struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
+ 
+-		dropped += qdisc_pkt_len(skb);
+-		qdisc_qstats_backlog_dec(sch, skb);
++		if (!skb)
++			break;
++
++		dropped_pkts++;
++		dropped_bytes += qdisc_pkt_len(skb);
+ 		rtnl_qdisc_drop(skb, sch);
+ 	}
+-	qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped);
++	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
+ 
+ 	sch_tree_unlock(sch);
+ 	return 0;
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 902ff54706072b..fee922da2f99c0 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -1013,11 +1013,11 @@ static int fq_load_priomap(struct fq_sched_data *q,
+ static int fq_change(struct Qdisc *sch, struct nlattr *opt,
+ 		     struct netlink_ext_ack *extack)
+ {
++	unsigned int dropped_pkts = 0, dropped_bytes = 0;
+ 	struct fq_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_FQ_MAX + 1];
+-	int err, drop_count = 0;
+-	unsigned drop_len = 0;
+ 	u32 fq_log;
++	int err;
+ 
+ 	err = nla_parse_nested_deprecated(tb, TCA_FQ_MAX, opt, fq_policy,
+ 					  NULL);
+@@ -1135,16 +1135,18 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt,
+ 		err = fq_resize(sch, fq_log);
+ 		sch_tree_lock(sch);
+ 	}
++
+ 	while (sch->q.qlen > sch->limit) {
+ 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+ 
+ 		if (!skb)
+ 			break;
+-		drop_len += qdisc_pkt_len(skb);
++
++		dropped_pkts++;
++		dropped_bytes += qdisc_pkt_len(skb);
+ 		rtnl_kfree_skbs(skb, skb);
+-		drop_count++;
+ 	}
+-	qdisc_tree_reduce_backlog(sch, drop_count, drop_len);
++	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
+ 
+ 	sch_tree_unlock(sch);
+ 	return err;
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 2a0f3a513bfaa1..a141423929394d 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -366,6 +366,7 @@ static const struct nla_policy fq_codel_policy[TCA_FQ_CODEL_MAX + 1] = {
+ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 			   struct netlink_ext_ack *extack)
+ {
++	unsigned int dropped_pkts = 0, dropped_bytes = 0;
+ 	struct fq_codel_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_FQ_CODEL_MAX + 1];
+ 	u32 quantum = 0;
+@@ -443,13 +444,14 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 	       q->memory_usage > q->memory_limit) {
+ 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+ 
+-		q->cstats.drop_len += qdisc_pkt_len(skb);
++		if (!skb)
++			break;
++
++		dropped_pkts++;
++		dropped_bytes += qdisc_pkt_len(skb);
+ 		rtnl_kfree_skbs(skb, skb);
+-		q->cstats.drop_count++;
+ 	}
+-	qdisc_tree_reduce_backlog(sch, q->cstats.drop_count, q->cstats.drop_len);
+-	q->cstats.drop_count = 0;
+-	q->cstats.drop_len = 0;
++	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
+ 
+ 	sch_tree_unlock(sch);
+ 	return 0;
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index b0e34daf1f7517..7b96bc3ff89184 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -287,10 +287,9 @@ static struct sk_buff *fq_pie_qdisc_dequeue(struct Qdisc *sch)
+ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+ 			 struct netlink_ext_ack *extack)
+ {
++	unsigned int dropped_pkts = 0, dropped_bytes = 0;
+ 	struct fq_pie_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_FQ_PIE_MAX + 1];
+-	unsigned int len_dropped = 0;
+-	unsigned int num_dropped = 0;
+ 	int err;
+ 
+ 	err = nla_parse_nested(tb, TCA_FQ_PIE_MAX, opt, fq_pie_policy, extack);
+@@ -368,11 +367,14 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+ 	while (sch->q.qlen > sch->limit) {
+ 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+ 
+-		len_dropped += qdisc_pkt_len(skb);
+-		num_dropped += 1;
++		if (!skb)
++			break;
++
++		dropped_pkts++;
++		dropped_bytes += qdisc_pkt_len(skb);
+ 		rtnl_kfree_skbs(skb, skb);
+ 	}
+-	qdisc_tree_reduce_backlog(sch, num_dropped, len_dropped);
++	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
+ 
+ 	sch_tree_unlock(sch);
+ 	return 0;
+diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c
+index 5aa434b4670738..2d4855e28a286f 100644
+--- a/net/sched/sch_hhf.c
++++ b/net/sched/sch_hhf.c
+@@ -508,9 +508,9 @@ static const struct nla_policy hhf_policy[TCA_HHF_MAX + 1] = {
+ static int hhf_change(struct Qdisc *sch, struct nlattr *opt,
+ 		      struct netlink_ext_ack *extack)
+ {
++	unsigned int dropped_pkts = 0, dropped_bytes = 0;
+ 	struct hhf_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_HHF_MAX + 1];
+-	unsigned int qlen, prev_backlog;
+ 	int err;
+ 	u64 non_hh_quantum;
+ 	u32 new_quantum = q->quantum;
+@@ -561,15 +561,17 @@ static int hhf_change(struct Qdisc *sch, struct nlattr *opt,
+ 			   usecs_to_jiffies(us));
+ 	}
+ 
+-	qlen = sch->q.qlen;
+-	prev_backlog = sch->qstats.backlog;
+ 	while (sch->q.qlen > sch->limit) {
+ 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
+ 
++		if (!skb)
++			break;
++
++		dropped_pkts++;
++		dropped_bytes += qdisc_pkt_len(skb);
+ 		rtnl_kfree_skbs(skb, skb);
+ 	}
+-	qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen,
+-				  prev_backlog - sch->qstats.backlog);
++	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
+ 
+ 	sch_tree_unlock(sch);
+ 	return 0;
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index c968ea76377463..b5e40c51655a73 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -592,7 +592,7 @@ htb_change_class_mode(struct htb_sched *q, struct htb_class *cl, s64 *diff)
+  */
+ static inline void htb_activate(struct htb_sched *q, struct htb_class *cl)
+ {
+-	WARN_ON(cl->level || !cl->leaf.q || !cl->leaf.q->q.qlen);
++	WARN_ON(cl->level || !cl->leaf.q);
+ 
+ 	if (!cl->prio_activity) {
+ 		cl->prio_activity = 1 << cl->prio;
+diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c
+index ad46ee3ed5a968..0a377313b6a9d2 100644
+--- a/net/sched/sch_pie.c
++++ b/net/sched/sch_pie.c
+@@ -141,9 +141,9 @@ static const struct nla_policy pie_policy[TCA_PIE_MAX + 1] = {
+ static int pie_change(struct Qdisc *sch, struct nlattr *opt,
+ 		      struct netlink_ext_ack *extack)
+ {
++	unsigned int dropped_pkts = 0, dropped_bytes = 0;
+ 	struct pie_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_PIE_MAX + 1];
+-	unsigned int qlen, dropped = 0;
+ 	int err;
+ 
+ 	err = nla_parse_nested_deprecated(tb, TCA_PIE_MAX, opt, pie_policy,
+@@ -193,15 +193,17 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt,
+ 			   nla_get_u32(tb[TCA_PIE_DQ_RATE_ESTIMATOR]));
+ 
+ 	/* Drop excess packets if new limit is lower */
+-	qlen = sch->q.qlen;
+ 	while (sch->q.qlen > sch->limit) {
+ 		struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
+ 
+-		dropped += qdisc_pkt_len(skb);
+-		qdisc_qstats_backlog_dec(sch, skb);
++		if (!skb)
++			break;
++
++		dropped_pkts++;
++		dropped_bytes += qdisc_pkt_len(skb);
+ 		rtnl_qdisc_drop(skb, sch);
+ 	}
+-	qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped);
++	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
+ 
+ 	sch_tree_unlock(sch);
+ 	return 0;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 1882bab8e00e79..dc72ff353813b3 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2568,8 +2568,9 @@ static void smc_listen_work(struct work_struct *work)
+ 			goto out_decl;
+ 	}
+ 
+-	smc_listen_out_connected(new_smc);
+ 	SMC_STAT_SERV_SUCC_INC(sock_net(newclcsock->sk), ini);
++	/* smc_listen_out() will release smcsk */
++	smc_listen_out_connected(new_smc);
+ 	goto out_free;
+ 
+ out_unlock:
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 51c98a007ddac4..bac65d0d4e3e1e 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1808,6 +1808,9 @@ int decrypt_skb(struct sock *sk, struct scatterlist *sgout)
+ 	return tls_decrypt_sg(sk, NULL, sgout, &darg);
+ }
+ 
++/* All records returned from a recvmsg() call must have the same type.
++ * 0 is not a valid content type. Use it as "no type reported, yet".
++ */
+ static int tls_record_content_type(struct msghdr *msg, struct tls_msg *tlm,
+ 				   u8 *control)
+ {
+@@ -2051,8 +2054,10 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	if (err < 0)
+ 		goto end;
+ 
++	/* process_rx_list() will set @control if it processed any records */
+ 	copied = err;
+-	if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more)
++	if (len <= copied || rx_more ||
++	    (control && control != TLS_RECORD_TYPE_DATA))
+ 		goto end;
+ 
+ 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index f01f9e8781061e..1ef6f7829d2942 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -624,8 +624,9 @@ static void virtio_transport_rx_work(struct work_struct *work)
+ 	do {
+ 		virtqueue_disable_cb(vq);
+ 		for (;;) {
++			unsigned int len, payload_len;
++			struct virtio_vsock_hdr *hdr;
+ 			struct sk_buff *skb;
+-			unsigned int len;
+ 
+ 			if (!virtio_transport_more_replies(vsock)) {
+ 				/* Stop rx until the device processes already
+@@ -642,12 +643,19 @@ static void virtio_transport_rx_work(struct work_struct *work)
+ 			vsock->rx_buf_nr--;
+ 
+ 			/* Drop short/long packets */
+-			if (unlikely(len < sizeof(struct virtio_vsock_hdr) ||
++			if (unlikely(len < sizeof(*hdr) ||
+ 				     len > virtio_vsock_skb_len(skb))) {
+ 				kfree_skb(skb);
+ 				continue;
+ 			}
+ 
++			hdr = virtio_vsock_hdr(skb);
++			payload_len = le32_to_cpu(hdr->len);
++			if (unlikely(payload_len > len - sizeof(*hdr))) {
++				kfree_skb(skb);
++				continue;
++			}
++
+ 			virtio_vsock_skb_rx_put(skb);
+ 			virtio_transport_deliver_tap_pkt(skb);
+ 			virtio_transport_recv_pkt(&virtio_transport, skb);
+diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
+index aa2dfa9dca4c30..2692cf90c9482d 100644
+--- a/rust/kernel/alloc/allocator.rs
++++ b/rust/kernel/alloc/allocator.rs
+@@ -43,17 +43,6 @@
+ /// For more details see [self].
+ pub struct KVmalloc;
+ 
+-/// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment.
+-fn aligned_size(new_layout: Layout) -> usize {
+-    // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
+-    let layout = new_layout.pad_to_align();
+-
+-    // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()`
+-    // which together with the slab guarantees means the `krealloc` will return a properly aligned
+-    // object (see comments in `kmalloc()` for more information).
+-    layout.size()
+-}
+-
+ /// # Invariants
+ ///
+ /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.
+@@ -88,7 +77,7 @@ unsafe fn call(
+         old_layout: Layout,
+         flags: Flags,
+     ) -> Result<NonNull<[u8]>, AllocError> {
+-        let size = aligned_size(layout);
++        let size = layout.size();
+         let ptr = match ptr {
+             Some(ptr) => {
+                 if old_layout.size() == 0 {
+@@ -123,6 +112,17 @@ unsafe fn call(
+     }
+ }
+ 
++impl Kmalloc {
++    /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of
++    /// `layout`.
++    pub fn aligned_layout(layout: Layout) -> Layout {
++        // Note that `layout.size()` (after padding) is guaranteed to be a multiple of
++        // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return
++        // a properly aligned object (see comments in `kmalloc()` for more information).
++        layout.pad_to_align()
++    }
++}
++
+ // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
+ // - memory remains valid until it is explicitly freed,
+ // - passing a pointer to a valid memory allocation is OK,
+@@ -135,6 +135,8 @@ unsafe fn realloc(
+         old_layout: Layout,
+         flags: Flags,
+     ) -> Result<NonNull<[u8]>, AllocError> {
++        let layout = Kmalloc::aligned_layout(layout);
++
+         // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`.
+         unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) }
+     }
+@@ -176,6 +178,10 @@ unsafe fn realloc(
+         old_layout: Layout,
+         flags: Flags,
+     ) -> Result<NonNull<[u8]>, AllocError> {
++        // `KVmalloc` may use the `Kmalloc` backend, hence we have to enforce a `Kmalloc`
++        // compatible layout.
++        let layout = Kmalloc::aligned_layout(layout);
++
+         // TODO: Support alignments larger than PAGE_SIZE.
+         if layout.align() > bindings::PAGE_SIZE {
+             pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
+diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
+index d19c06ef0498c1..981e002ae3fcf0 100644
+--- a/rust/kernel/alloc/allocator_test.rs
++++ b/rust/kernel/alloc/allocator_test.rs
+@@ -22,6 +22,17 @@
+ pub type Vmalloc = Kmalloc;
+ pub type KVmalloc = Kmalloc;
+ 
++impl Cmalloc {
++    /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of
++    /// `layout`.
++    pub fn aligned_layout(layout: Layout) -> Layout {
++        // Note that `layout.size()` (after padding) is guaranteed to be a multiple of
++        // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return
++        // a properly aligned object (see comments in `kmalloc()` for more information).
++        layout.pad_to_align()
++    }
++}
++
+ extern "C" {
+     #[link_name = "aligned_alloc"]
+     fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
+diff --git a/rust/kernel/drm/device.rs b/rust/kernel/drm/device.rs
+index 14c1aa402951a8..3832779f439fd5 100644
+--- a/rust/kernel/drm/device.rs
++++ b/rust/kernel/drm/device.rs
+@@ -5,6 +5,7 @@
+ //! C header: [`include/linux/drm/drm_device.h`](srctree/include/linux/drm/drm_device.h)
+ 
+ use crate::{
++    alloc::allocator::Kmalloc,
+     bindings, device, drm,
+     drm::driver::AllocImpl,
+     error::from_err_ptr,
+@@ -12,7 +13,7 @@
+     prelude::*,
+     types::{ARef, AlwaysRefCounted, Opaque},
+ };
+-use core::{mem, ops::Deref, ptr, ptr::NonNull};
++use core::{alloc::Layout, mem, ops::Deref, ptr, ptr::NonNull};
+ 
+ #[cfg(CONFIG_DRM_LEGACY)]
+ macro_rules! drm_legacy_fields {
+@@ -53,10 +54,8 @@ macro_rules! drm_legacy_fields {
+ ///
+ /// `self.dev` is a valid instance of a `struct device`.
+ #[repr(C)]
+-#[pin_data]
+ pub struct Device<T: drm::Driver> {
+     dev: Opaque<bindings::drm_device>,
+-    #[pin]
+     data: T::Data,
+ }
+ 
+@@ -96,6 +95,10 @@ impl<T: drm::Driver> Device<T> {
+ 
+     /// Create a new `drm::Device` for a `drm::Driver`.
+     pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<ARef<Self>> {
++        // `__drm_dev_alloc` uses `kmalloc()` to allocate memory, hence ensure a `kmalloc()`
++        // compatible `Layout`.
++        let layout = Kmalloc::aligned_layout(Layout::new::<Self>());
++
+         // SAFETY:
+         // - `VTABLE`, as a `const` is pinned to the read-only section of the compilation,
+         // - `dev` is valid by its type invarants,
+@@ -103,7 +106,7 @@ pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<A
+             bindings::__drm_dev_alloc(
+                 dev.as_raw(),
+                 &Self::VTABLE,
+-                mem::size_of::<Self>(),
++                layout.size(),
+                 mem::offset_of!(Self, dev),
+             )
+         }
+@@ -117,9 +120,13 @@ pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<A
+         // - `raw_data` is a valid pointer to uninitialized memory.
+         // - `raw_data` will not move until it is dropped.
+         unsafe { data.__pinned_init(raw_data) }.inspect_err(|_| {
+-            // SAFETY: `__drm_dev_alloc()` was successful, hence `raw_drm` must be valid and the
++            // SAFETY: `raw_drm` is a valid pointer to `Self`, given that `__drm_dev_alloc` was
++            // successful.
++            let drm_dev = unsafe { Self::into_drm_device(raw_drm) };
++
++            // SAFETY: `__drm_dev_alloc()` was successful, hence `drm_dev` must be valid and the
+             // refcount must be non-zero.
+-            unsafe { bindings::drm_dev_put(ptr::addr_of_mut!((*raw_drm.as_ptr()).dev).cast()) };
++            unsafe { bindings::drm_dev_put(drm_dev) };
+         })?;
+ 
+         // SAFETY: The reference count is one, and now we take ownership of that reference as a
+@@ -142,6 +149,14 @@ unsafe fn from_drm_device(ptr: *const bindings::drm_device) -> *mut Self {
+         unsafe { crate::container_of!(ptr, Self, dev) }.cast_mut()
+     }
+ 
++    /// # Safety
++    ///
++    /// `ptr` must be a valid pointer to `Self`.
++    unsafe fn into_drm_device(ptr: NonNull<Self>) -> *mut bindings::drm_device {
++        // SAFETY: By the safety requirements of this function, `ptr` is a valid pointer to `Self`.
++        unsafe { &raw mut (*ptr.as_ptr()).dev }.cast()
++    }
++
+     /// Not intended to be called externally, except via declare_drm_ioctls!()
+     ///
+     /// # Safety
+@@ -191,8 +206,11 @@ fn inc_ref(&self) {
+     }
+ 
+     unsafe fn dec_ref(obj: NonNull<Self>) {
++        // SAFETY: `obj` is a valid pointer to `Self`.
++        let drm_dev = unsafe { Self::into_drm_device(obj) };
++
+         // SAFETY: The safety requirements guarantee that the refcount is non-zero.
+-        unsafe { bindings::drm_dev_put(obj.cast().as_ptr()) };
++        unsafe { bindings::drm_dev_put(drm_dev) };
+     }
+ }
+ 
+diff --git a/rust/kernel/faux.rs b/rust/kernel/faux.rs
+index 8a50fcd4c9bbba..50c3c55f29e1dc 100644
+--- a/rust/kernel/faux.rs
++++ b/rust/kernel/faux.rs
+@@ -4,7 +4,7 @@
+ //!
+ //! This module provides bindings for working with faux devices in kernel modules.
+ //!
+-//! C header: [`include/linux/device/faux.h`]
++//! C header: [`include/linux/device/faux.h`](srctree/include/linux/device/faux.h)
+ 
+ use crate::{bindings, device, error::code::*, prelude::*};
+ use core::ptr::{addr_of_mut, null, null_mut, NonNull};
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 9b6c2f157f83e8..531bde29cccb05 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -2149,12 +2149,12 @@ static int __init apparmor_nf_ip_init(void)
+ __initcall(apparmor_nf_ip_init);
+ #endif
+ 
+-static char nulldfa_src[] = {
++static char nulldfa_src[] __aligned(8) = {
+ 	#include "nulldfa.in"
+ };
+ static struct aa_dfa *nulldfa;
+ 
+-static char stacksplitdfa_src[] = {
++static char stacksplitdfa_src[] __aligned(8) = {
+ 	#include "stacksplitdfa.in"
+ };
+ struct aa_dfa *stacksplitdfa;
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 8072183c33d393..a352247519be41 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -2139,14 +2139,14 @@ static int snd_utimer_create(struct snd_timer_uinfo *utimer_info,
+ 		goto err_take_id;
+ 	}
+ 
++	utimer->id = utimer_id;
++
+ 	utimer->name = kasprintf(GFP_KERNEL, "snd-utimer%d", utimer_id);
+ 	if (!utimer->name) {
+ 		err = -ENOMEM;
+ 		goto err_get_name;
+ 	}
+ 
+-	utimer->id = utimer_id;
+-
+ 	tid.dev_sclass = SNDRV_TIMER_SCLASS_APPLICATION;
+ 	tid.dev_class = SNDRV_TIMER_CLASS_GLOBAL;
+ 	tid.card = -1;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2ab2666d4058d6..0ec98833e3d2e7 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10662,6 +10662,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8548, "HP EliteBook x360 830 G6", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x854a, "HP EliteBook 830 G6", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11),
+ 	SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index d91eed9f780432..8c5ebbe8a1f315 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -287,7 +287,7 @@ static int tas2563_save_calibration(struct tas2781_hda *h)
+ 	efi_char16_t efi_name[TAS2563_CAL_VAR_NAME_MAX];
+ 	unsigned long max_size = TAS2563_CAL_DATA_SIZE;
+ 	unsigned char var8[TAS2563_CAL_VAR_NAME_MAX];
+-	struct tasdevice_priv *p = h->hda_priv;
++	struct tasdevice_priv *p = h->priv;
+ 	struct calidata *cd = &p->cali_data;
+ 	struct cali_reg *r = &cd->cali_reg_array;
+ 	unsigned int offset = 0;
+diff --git a/sound/soc/codecs/cs35l56-sdw.c b/sound/soc/codecs/cs35l56-sdw.c
+index fa9693af3722b1..d7fa12d287e06d 100644
+--- a/sound/soc/codecs/cs35l56-sdw.c
++++ b/sound/soc/codecs/cs35l56-sdw.c
+@@ -394,74 +394,6 @@ static int cs35l56_sdw_update_status(struct sdw_slave *peripheral,
+ 	return 0;
+ }
+ 
+-static int cs35l63_sdw_kick_divider(struct cs35l56_private *cs35l56,
+-				    struct sdw_slave *peripheral)
+-{
+-	unsigned int curr_scale_reg, next_scale_reg;
+-	int curr_scale, next_scale, ret;
+-
+-	if (!cs35l56->base.init_done)
+-		return 0;
+-
+-	if (peripheral->bus->params.curr_bank) {
+-		curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1;
+-		next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0;
+-	} else {
+-		curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0;
+-		next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1;
+-	}
+-
+-	/*
+-	 * Current clock scale value must be different to new value.
+-	 * Modify current to guarantee this. If next still has the dummy
+-	 * value we wrote when it was current, the core code has not set
+-	 * a new scale so restore its original good value
+-	 */
+-	curr_scale = sdw_read_no_pm(peripheral, curr_scale_reg);
+-	if (curr_scale < 0) {
+-		dev_err(cs35l56->base.dev, "Failed to read current clock scale: %d\n", curr_scale);
+-		return curr_scale;
+-	}
+-
+-	next_scale = sdw_read_no_pm(peripheral, next_scale_reg);
+-	if (next_scale < 0) {
+-		dev_err(cs35l56->base.dev, "Failed to read next clock scale: %d\n", next_scale);
+-		return next_scale;
+-	}
+-
+-	if (next_scale == CS35L56_SDW_INVALID_BUS_SCALE) {
+-		next_scale = cs35l56->old_sdw_clock_scale;
+-		ret = sdw_write_no_pm(peripheral, next_scale_reg, next_scale);
+-		if (ret < 0) {
+-			dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n",
+-				ret);
+-			return ret;
+-		}
+-	}
+-
+-	cs35l56->old_sdw_clock_scale = curr_scale;
+-	ret = sdw_write_no_pm(peripheral, curr_scale_reg, CS35L56_SDW_INVALID_BUS_SCALE);
+-	if (ret < 0) {
+-		dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", ret);
+-		return ret;
+-	}
+-
+-	dev_dbg(cs35l56->base.dev, "Next bus scale: %#x\n", next_scale);
+-
+-	return 0;
+-}
+-
+-static int cs35l56_sdw_bus_config(struct sdw_slave *peripheral,
+-				  struct sdw_bus_params *params)
+-{
+-	struct cs35l56_private *cs35l56 = dev_get_drvdata(&peripheral->dev);
+-
+-	if ((cs35l56->base.type == 0x63) && (cs35l56->base.rev < 0xa1))
+-		return cs35l63_sdw_kick_divider(cs35l56, peripheral);
+-
+-	return 0;
+-}
+-
+ static int __maybe_unused cs35l56_sdw_clk_stop(struct sdw_slave *peripheral,
+ 					       enum sdw_clk_stop_mode mode,
+ 					       enum sdw_clk_stop_type type)
+@@ -477,7 +409,6 @@ static const struct sdw_slave_ops cs35l56_sdw_ops = {
+ 	.read_prop = cs35l56_sdw_read_prop,
+ 	.interrupt_callback = cs35l56_sdw_interrupt,
+ 	.update_status = cs35l56_sdw_update_status,
+-	.bus_config = cs35l56_sdw_bus_config,
+ #ifdef DEBUG
+ 	.clk_stop = cs35l56_sdw_clk_stop,
+ #endif
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index ba653f6ccfaefe..850fcf38599681 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -838,6 +838,15 @@ const struct cirrus_amp_cal_controls cs35l56_calibration_controls = {
+ };
+ EXPORT_SYMBOL_NS_GPL(cs35l56_calibration_controls, "SND_SOC_CS35L56_SHARED");
+ 
++static const struct cirrus_amp_cal_controls cs35l63_calibration_controls = {
++	.alg_id =	0xbf210,
++	.mem_region =	WMFW_ADSP2_YM,
++	.ambient =	"CAL_AMBIENT",
++	.calr =		"CAL_R",
++	.status =	"CAL_STATUS",
++	.checksum =	"CAL_CHECKSUM",
++};
++
+ int cs35l56_get_calibration(struct cs35l56_base *cs35l56_base)
+ {
+ 	u64 silicon_uid = 0;
+@@ -912,19 +921,31 @@ EXPORT_SYMBOL_NS_GPL(cs35l56_read_prot_status, "SND_SOC_CS35L56_SHARED");
+ void cs35l56_log_tuning(struct cs35l56_base *cs35l56_base, struct cs_dsp *cs_dsp)
+ {
+ 	__be32 pid, sid, tid;
++	unsigned int alg_id;
+ 	int ret;
+ 
++	switch (cs35l56_base->type) {
++	case 0x54:
++	case 0x56:
++	case 0x57:
++		alg_id = 0x9f212;
++		break;
++	default:
++		alg_id = 0xbf212;
++		break;
++	}
++
+ 	scoped_guard(mutex, &cs_dsp->pwr_lock) {
+ 		ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_PRJCT_ID",
+-							    WMFW_ADSP2_XM, 0x9f212),
++							    WMFW_ADSP2_XM, alg_id),
+ 					     0, &pid, sizeof(pid));
+ 		if (!ret)
+ 			ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_CHNNL_ID",
+-								    WMFW_ADSP2_XM, 0x9f212),
++								    WMFW_ADSP2_XM, alg_id),
+ 						     0, &sid, sizeof(sid));
+ 		if (!ret)
+ 			ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_SNPSHT_ID",
+-								    WMFW_ADSP2_XM, 0x9f212),
++								    WMFW_ADSP2_XM, alg_id),
+ 						     0, &tid, sizeof(tid));
+ 	}
+ 
+@@ -974,8 +995,10 @@ int cs35l56_hw_init(struct cs35l56_base *cs35l56_base)
+ 	case 0x35A54:
+ 	case 0x35A56:
+ 	case 0x35A57:
++		cs35l56_base->calibration_controls = &cs35l56_calibration_controls;
+ 		break;
+ 	case 0x35A630:
++		cs35l56_base->calibration_controls = &cs35l63_calibration_controls;
+ 		devid = devid >> 4;
+ 		break;
+ 	default:
+diff --git a/sound/soc/codecs/cs35l56.c b/sound/soc/codecs/cs35l56.c
+index 1b42586794ad75..76306282b2e641 100644
+--- a/sound/soc/codecs/cs35l56.c
++++ b/sound/soc/codecs/cs35l56.c
+@@ -695,7 +695,7 @@ static int cs35l56_write_cal(struct cs35l56_private *cs35l56)
+ 		return ret;
+ 
+ 	ret = cs_amp_write_cal_coeffs(&cs35l56->dsp.cs_dsp,
+-				      &cs35l56_calibration_controls,
++				      cs35l56->base.calibration_controls,
+ 				      &cs35l56->base.cal_data);
+ 
+ 	wm_adsp_stop(&cs35l56->dsp);
+diff --git a/sound/soc/codecs/cs35l56.h b/sound/soc/codecs/cs35l56.h
+index bd77a57249d79b..40a1800a458515 100644
+--- a/sound/soc/codecs/cs35l56.h
++++ b/sound/soc/codecs/cs35l56.h
+@@ -20,8 +20,6 @@
+ #define CS35L56_SDW_GEN_INT_MASK_1	0xc1
+ #define CS35L56_SDW_INT_MASK_CODEC_IRQ	BIT(0)
+ 
+-#define CS35L56_SDW_INVALID_BUS_SCALE	0xf
+-
+ #define CS35L56_RX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE)
+ #define CS35L56_TX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE \
+ 			    | SNDRV_PCM_FMTBIT_S32_LE)
+@@ -52,7 +50,6 @@ struct cs35l56_private {
+ 	u8 asp_slot_count;
+ 	bool tdm_mode;
+ 	bool sysclk_set;
+-	u8 old_sdw_clock_scale;
+ 	u8 sdw_link_num;
+ 	u8 sdw_unique_id;
+ };
+diff --git a/sound/soc/sof/amd/acp-loader.c b/sound/soc/sof/amd/acp-loader.c
+index ea105227227dc4..98324bbade1517 100644
+--- a/sound/soc/sof/amd/acp-loader.c
++++ b/sound/soc/sof/amd/acp-loader.c
+@@ -65,7 +65,7 @@ int acp_dsp_block_write(struct snd_sof_dev *sdev, enum snd_sof_fw_blk_type blk_t
+ 			dma_size = page_count * ACP_PAGE_SIZE;
+ 			adata->bin_buf = dma_alloc_coherent(&pci->dev, dma_size,
+ 							    &adata->sha_dma_addr,
+-							    GFP_ATOMIC);
++							    GFP_KERNEL);
+ 			if (!adata->bin_buf)
+ 				return -ENOMEM;
+ 		}
+@@ -77,7 +77,7 @@ int acp_dsp_block_write(struct snd_sof_dev *sdev, enum snd_sof_fw_blk_type blk_t
+ 			adata->data_buf = dma_alloc_coherent(&pci->dev,
+ 							     ACP_DEFAULT_DRAM_LENGTH,
+ 							     &adata->dma_addr,
+-							     GFP_ATOMIC);
++							     GFP_KERNEL);
+ 			if (!adata->data_buf)
+ 				return -ENOMEM;
+ 		}
+@@ -90,7 +90,7 @@ int acp_dsp_block_write(struct snd_sof_dev *sdev, enum snd_sof_fw_blk_type blk_t
+ 			adata->sram_data_buf = dma_alloc_coherent(&pci->dev,
+ 								  ACP_DEFAULT_SRAM_LENGTH,
+ 								  &adata->sram_dma_addr,
+-								  GFP_ATOMIC);
++								  GFP_KERNEL);
+ 			if (!adata->sram_data_buf)
+ 				return -ENOMEM;
+ 		}
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 1cb52373e70f64..db2c9bac00adca 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -349,7 +349,7 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor
+ 		u16 cs_len;
+ 		u8 cs_type;
+ 
+-		if (len < sizeof(*p))
++		if (len < sizeof(*cs_desc))
+ 			break;
+ 		cs_len = le16_to_cpu(cs_desc->wLength);
+ 		if (len < cs_len)
+diff --git a/sound/usb/validate.c b/sound/usb/validate.c
+index 4f4e8e87a14cd0..a0d55b77c9941d 100644
+--- a/sound/usb/validate.c
++++ b/sound/usb/validate.c
+@@ -285,7 +285,7 @@ static const struct usb_desc_validator audio_validators[] = {
+ 	/* UAC_VERSION_3, UAC3_EXTENDED_TERMINAL: not implemented yet */
+ 	FUNC(UAC_VERSION_3, UAC3_MIXER_UNIT, validate_mixer_unit),
+ 	FUNC(UAC_VERSION_3, UAC3_SELECTOR_UNIT, validate_selector_unit),
+-	FUNC(UAC_VERSION_3, UAC_FEATURE_UNIT, validate_uac3_feature_unit),
++	FUNC(UAC_VERSION_3, UAC3_FEATURE_UNIT, validate_uac3_feature_unit),
+ 	/*  UAC_VERSION_3, UAC3_EFFECT_UNIT: not implemented yet */
+ 	FUNC(UAC_VERSION_3, UAC3_PROCESSING_UNIT, validate_processing_unit),
+ 	FUNC(UAC_VERSION_3, UAC3_EXTENSION_UNIT, validate_processing_unit),
+diff --git a/tools/objtool/arch/loongarch/special.c b/tools/objtool/arch/loongarch/special.c
+index e39f86d97002b9..a80b75f7b061f7 100644
+--- a/tools/objtool/arch/loongarch/special.c
++++ b/tools/objtool/arch/loongarch/special.c
+@@ -27,6 +27,7 @@ static void get_rodata_table_size_by_table_annotate(struct objtool_file *file,
+ 	struct table_info *next_table;
+ 	unsigned long tmp_insn_offset;
+ 	unsigned long tmp_rodata_offset;
++	bool is_valid_list = false;
+ 
+ 	rsec = find_section_by_name(file->elf, ".rela.discard.tablejump_annotate");
+ 	if (!rsec)
+@@ -35,6 +36,12 @@ static void get_rodata_table_size_by_table_annotate(struct objtool_file *file,
+ 	INIT_LIST_HEAD(&table_list);
+ 
+ 	for_each_reloc(rsec, reloc) {
++		if (reloc->sym->sec->rodata)
++			continue;
++
++		if (strcmp(insn->sec->name, reloc->sym->sec->name))
++			continue;
++
+ 		orig_table = malloc(sizeof(struct table_info));
+ 		if (!orig_table) {
+ 			WARN("malloc failed");
+@@ -49,6 +56,22 @@ static void get_rodata_table_size_by_table_annotate(struct objtool_file *file,
+ 
+ 		if (reloc_idx(reloc) + 1 == sec_num_entries(rsec))
+ 			break;
++
++		if (strcmp(insn->sec->name, (reloc + 1)->sym->sec->name)) {
++			list_for_each_entry(orig_table, &table_list, jump_info) {
++				if (orig_table->insn_offset == insn->offset) {
++					is_valid_list = true;
++					break;
++				}
++			}
++
++			if (!is_valid_list) {
++				list_del_init(&table_list);
++				continue;
++			}
++
++			break;
++		}
+ 	}
+ 
+ 	list_for_each_entry(orig_table, &table_list, jump_info) {
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index ac1349c4b9e540..4f07ac9fa207cb 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -183,9 +183,10 @@ static void xgetaddrinfo(const char *node, const char *service,
+ 			 struct addrinfo *hints,
+ 			 struct addrinfo **res)
+ {
+-again:
+-	int err = getaddrinfo(node, service, hints, res);
++	int err;
+ 
++again:
++	err = getaddrinfo(node, service, hints, res);
+ 	if (err) {
+ 		const char *errstr;
+ 
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_inq.c b/tools/testing/selftests/net/mptcp/mptcp_inq.c
+index 3cf1e2a612cef9..f3bcaa48df8f22 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_inq.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_inq.c
+@@ -75,9 +75,10 @@ static void xgetaddrinfo(const char *node, const char *service,
+ 			 struct addrinfo *hints,
+ 			 struct addrinfo **res)
+ {
+-again:
+-	int err = getaddrinfo(node, service, hints, res);
++	int err;
+ 
++again:
++	err = getaddrinfo(node, service, hints, res);
+ 	if (err) {
+ 		const char *errstr;
+ 
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+index 9934a68df23708..e934dd26a59d9b 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+@@ -162,9 +162,10 @@ static void xgetaddrinfo(const char *node, const char *service,
+ 			 struct addrinfo *hints,
+ 			 struct addrinfo **res)
+ {
+-again:
+-	int err = getaddrinfo(node, service, hints, res);
++	int err;
+ 
++again:
++	err = getaddrinfo(node, service, hints, res);
+ 	if (err) {
+ 		const char *errstr;
+ 
+diff --git a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+index 2e6648a2b2c0c6..ac7ec6f9402376 100755
+--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh
++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh
+@@ -198,6 +198,7 @@ set_limits 1 9 2>/dev/null
+ check "get_limits" "${default_limits}" "subflows above hard limit"
+ 
+ set_limits 8 8
++flush_endpoint  ## to make sure it doesn't affect the limits
+ check "get_limits" "$(format_limits 8 8)" "set limits"
+ 
+ flush_endpoint

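A side note on the xgetaddrinfo hunks above: they hoist the variable
declaration out of the goto target. A label placed directly in front of a
declaration is only standard C from C23 onward, so the original form can
trigger errors or -Wc23-extensions warnings on older toolchains. A minimal
sketch of the two shapes (do_thing() is a hypothetical stand-in for
getaddrinfo()):

	int do_thing(void);

	/* Rejected or warned about by pre-C23 compilers: the label is
	 * followed by a declaration, not a statement. */
	int bad(void)
	{
	retry:
		int err = do_thing();
		if (err)
			goto retry;
		return err;
	}

	/* The fixed shape used in the hunks above: declare first, then
	 * place the label on the assignment statement. */
	int good(void)
	{
		int err;

	retry:
		err = do_thing();
		if (err)
			goto retry;
		return err;
	}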

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-28 16:01 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-28 16:01 UTC (permalink / raw
  To: gentoo-commits

commit:     5660a09951f70f684588a884b91b37dc3ee594cf
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 28 16:00:42 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 28 16:01:06 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5660a099

Remove patch 2011 devlink let driver opt out of automatic phys_port_name generation

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 --
 ...ut_of_automatic_phys_port_name_generation.patch | 81 ----------------------
 2 files changed, 85 deletions(-)

diff --git a/0000_README b/0000_README
index 32d61205..4fe55656 100644
--- a/0000_README
+++ b/0000_README
@@ -83,10 +83,6 @@ Patch:  2010_ipv4_fix_regression_in_local-broadcast_routes.patch
 From:   https://lore.kernel.org/regressions/20250826121750.8451-1-oscmaes92@gmail.com/
 Desc:   net: ipv4: fix regression in local-broadcast routes
 
-Patch:  2011_devlink_let_driver_opt_out_of_automatic_phys_port_name_generation.patch
-From:   https://lore.kernel.org/all/20597f81c1439569e34d026542365aef1cedfb00.1756088250.git.calvin@wbinvd.org/
-Desc:   devlink: let driver opt out of automatic phys_port_name generation
-
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2011_devlink_let_driver_opt_out_of_automatic_phys_port_name_generation.patch b/2011_devlink_let_driver_opt_out_of_automatic_phys_port_name_generation.patch
deleted file mode 100644
index 7d0ddb39..00000000
--- a/2011_devlink_let_driver_opt_out_of_automatic_phys_port_name_generation.patch
+++ /dev/null
@@ -1,81 +0,0 @@
-Subject: [PATCH 6.16.y 1/2] devlink: let driver opt out of automatic phys_port_name generation
-Date: Sun, 24 Aug 2025 20:30:13 -0700
-Message-ID: <20597f81c1439569e34d026542365aef1cedfb00.1756088250.git.calvin@wbinvd.org>
-X-Mailer: git-send-email 2.47.2
-Precedence: bulk
-X-Mailing-List: stable@vger.kernel.org
-List-Id: <stable.vger.kernel.org>
-List-Subscribe: <mailto:stable+subscribe@vger.kernel.org>
-List-Unsubscribe: <mailto:stable+unsubscribe@vger.kernel.org>
-MIME-Version: 1.0
-Content-Transfer-Encoding: 8bit
-
-From: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
-
-[ Upstream commit c5ec7f49b480db0dfc83f395755b1c2a7c979920 ]
-
-Currently, when adding a devlink port, phys_port_name is automatically
-generated within the devlink port initialization flow. As a result, adding
-devlink port support to a driver may force changes of interface
-names, which breaks already existing network configs.
-
-This is expected behavior, but in some scenarios it is not desirable to
-impose such a limitation on a legacy driver that cannot otherwise keep
-its 'pre-devlink' interface name.
-
-Add a no_phys_port_name flag to the devlink_port_attrs struct, indicating
-that devlink should not alter the name of the interface.
-
-Suggested-by: Jiri Pirko <jiri@resnulli.us>
-Link: https://lore.kernel.org/all/nbwrfnjhvrcduqzjl4a2jafnvvud6qsbxlvxaxilnryglf4j7r@btuqrimnfuly/
-Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
-Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-Cc: stable@vger.kernel.org # 6.16
-Tested-By: Calvin Owens <calvin@wbinvd.org>
-Signed-off-by: Calvin Owens <calvin@wbinvd.org>
----
- include/net/devlink.h | 6 +++++-
- net/devlink/port.c    | 2 +-
- 2 files changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/include/net/devlink.h b/include/net/devlink.h
-index 0091f23a40f7..af3fd45155dd 100644
---- a/include/net/devlink.h
-+++ b/include/net/devlink.h
-@@ -78,6 +78,9 @@ struct devlink_port_pci_sf_attrs {
-  * @flavour: flavour of the port
-  * @split: indicates if this is split port
-  * @splittable: indicates if the port can be split.
-+ * @no_phys_port_name: skip automatic phys_port_name generation; for
-+ *		       compatibility only, newly added driver/port instance
-+ *		       should never set this.
-  * @lanes: maximum number of lanes the port supports. 0 value is not passed to netlink.
-  * @switch_id: if the port is part of switch, this is buffer with ID, otherwise this is NULL
-  * @phys: physical port attributes
-@@ -87,7 +90,8 @@ struct devlink_port_pci_sf_attrs {
-  */
- struct devlink_port_attrs {
- 	u8 split:1,
--	   splittable:1;
-+	   splittable:1,
-+	   no_phys_port_name:1;
- 	u32 lanes;
- 	enum devlink_port_flavour flavour;
- 	struct netdev_phys_item_id switch_id;
-diff --git a/net/devlink/port.c b/net/devlink/port.c
-index 939081a0e615..cb8d4df61619 100644
---- a/net/devlink/port.c
-+++ b/net/devlink/port.c
-@@ -1519,7 +1519,7 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port,
- 	struct devlink_port_attrs *attrs = &devlink_port->attrs;
- 	int n = 0;
- 
--	if (!devlink_port->attrs_set)
-+	if (!devlink_port->attrs_set || devlink_port->attrs.no_phys_port_name)
- 		return -EOPNOTSUPP;
- 
- 	switch (attrs->flavour) {
--- 
-2.47.2
-
-
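For context on the mechanism the dropped patch added: a legacy driver
wanting to keep its 'pre-devlink' interface name would set the new bit
before registering its port attributes. A minimal sketch (hypothetical
helper, not taken from the patch):

	#include <net/devlink.h>

	/* Keep a legacy driver's netdev name while still exposing a
	 * devlink port: opt out of phys_port_name generation. */
	static void example_port_attrs_setup(struct devlink_port *devlink_port)
	{
		struct devlink_port_attrs attrs = {};

		attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
		attrs.phys.port_number = 0;
		attrs.no_phys_port_name = 1;
		devlink_port_attrs_set(devlink_port, &attrs);
	}

With the bit set, __devlink_port_phys_port_name_get() returns -EOPNOTSUPP,
so the netdev keeps whatever name the driver gave it.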


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-08-28 16:37 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-08-28 16:37 UTC (permalink / raw
  To: gentoo-commits

commit:     89da65391ade80e805f022898bdc6234a5834e23
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 28 16:37:29 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Aug 28 16:37:29 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=89da6539

Add platform/x86: asus-wmi: Fix racy registrations

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                |  4 ++
 2700_asus-wmi_fix_racy_registrations.patch | 67 ++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+)

diff --git a/0000_README b/0000_README
index 4fe55656..5324b3e9 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  2010_ipv4_fix_regression_in_local-broadcast_routes.patch
 From:   https://lore.kernel.org/regressions/20250826121750.8451-1-oscmaes92@gmail.com/
 Desc:   net: ipv4: fix regression in local-broadcast routes
 
+Patch:  2700_asus-wmi_fix_racy_registrations.patch
+From:   https://lore.kernel.org/all/20250827052441.23382-1-tiwai@suse.de/#Z31drivers:platform:x86:asus-wmi.c
+Desc:   platform/x86: asus-wmi: Fix racy registrations
+
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2700_asus-wmi_fix_racy_registrations.patch b/2700_asus-wmi_fix_racy_registrations.patch
new file mode 100644
index 00000000..0e012eae
--- /dev/null
+++ b/2700_asus-wmi_fix_racy_registrations.patch
@@ -0,0 +1,67 @@
+Subject: [PATCH] platform/x86: asus-wmi: Fix racy registrations
+Date: Wed, 27 Aug 2025 07:24:33 +0200
+Message-ID: <20250827052441.23382-1-tiwai@suse.de>
+X-Mailer: git-send-email 2.50.1
+X-Mailing-List: platform-driver-x86@vger.kernel.org
+List-Id: <platform-driver-x86.vger.kernel.org>
+List-Subscribe: <mailto:platform-driver-x86+subscribe@vger.kernel.org>
+List-Unsubscribe: <mailto:platform-driver-x86+unsubscribe@vger.kernel.org>
+
+asus_wmi_register_driver() may be called from multiple drivers
+concurrently, which can lead to racy list operations, eventually
+corrupting memory and hitting an Oops on some ASUS machines.
+Also, error handling is missing: the probe path forgets to unregister
+the ACPI lps0 dev ops in the error case.
+
+This patch covers both issues by introducing a simple mutex in
+asus_wmi_register_driver() & *_unregister_driver(), and adding the
+proper call of asus_s2idle_check_unregister() in the error path.
+
+Fixes: feea7bd6b02d ("platform/x86: asus-wmi: Refactor Ally suspend/resume")
+Link: https://bugzilla.suse.com/show_bug.cgi?id=1246924
+Link: https://lore.kernel.org/07815053-0e31-4e8e-8049-b652c929323b@kernel.org
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+---
+ drivers/platform/x86/asus-wmi.c | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index f7191fdded14..e72a2b5d158e 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -5088,16 +5088,22 @@ static int asus_wmi_probe(struct platform_device *pdev)
+ 
+ 	asus_s2idle_check_register();
+ 
+-	return asus_wmi_add(pdev);
++	ret = asus_wmi_add(pdev);
++	if (ret)
++		asus_s2idle_check_unregister();
++
++	return ret;
+ }
+ 
+ static bool used;
++static DEFINE_MUTEX(register_mutex);
+ 
+ int __init_or_module asus_wmi_register_driver(struct asus_wmi_driver *driver)
+ {
+ 	struct platform_driver *platform_driver;
+ 	struct platform_device *platform_device;
+ 
++	guard(mutex)(&register_mutex);
+ 	if (used)
+ 		return -EBUSY;
+ 
+@@ -5120,6 +5126,7 @@ EXPORT_SYMBOL_GPL(asus_wmi_register_driver);
+ 
+ void asus_wmi_unregister_driver(struct asus_wmi_driver *driver)
+ {
++	guard(mutex)(&register_mutex);
+ 	asus_s2idle_check_unregister();
+ 
+ 	platform_device_unregister(driver->platform_device);
+-- 
+2.50.1
+
+
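The guard(mutex)(&register_mutex) lines above use the scope-based cleanup
helpers from linux/cleanup.h: the mutex is acquired at the declaration and
released automatically on every path out of the enclosing scope. A rough
open-coded equivalent of the registration check (sketch only):

	#include <linux/mutex.h>

	static DEFINE_MUTEX(register_mutex);
	static bool used;

	static int example_register(void)
	{
		mutex_lock(&register_mutex);
		if (used) {
			/* guard() would drop the lock implicitly here */
			mutex_unlock(&register_mutex);
			return -EBUSY;
		}
		used = true;
		mutex_unlock(&register_mutex);
		return 0;
	}

The guard form avoids exactly the kind of missed-unlock-on-early-return
mistake that this sort of open-coded locking invites.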


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-04 15:33 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-04 15:33 UTC (permalink / raw
  To: gentoo-commits

commit:     fe4faebbd155ce8088006c7f08fc36a4e7622a1a
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Sep  4 15:33:04 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Sep  4 15:33:04 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe4faebb

Linux patch 6.16.5

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1004_linux-6.16.5.patch | 7622 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7626 insertions(+)

diff --git a/0000_README b/0000_README
index 5324b3e9..69964e75 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-6.16.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.4
 
+Patch:  1004_linux-6.16.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.5
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1004_linux-6.16.5.patch b/1004_linux-6.16.5.patch
new file mode 100644
index 00000000..034f8fb1
--- /dev/null
+++ b/1004_linux-6.16.5.patch
@@ -0,0 +1,7622 @@
+diff --git a/Documentation/devicetree/bindings/display/msm/qcom,mdp5.yaml b/Documentation/devicetree/bindings/display/msm/qcom,mdp5.yaml
+index e153f8d26e7aae..2735c78b0b67af 100644
+--- a/Documentation/devicetree/bindings/display/msm/qcom,mdp5.yaml
++++ b/Documentation/devicetree/bindings/display/msm/qcom,mdp5.yaml
+@@ -60,7 +60,6 @@ properties:
+           - const: bus
+           - const: core
+           - const: vsync
+-          - const: lut
+           - const: tbu
+           - const: tbu_rt
+         # MSM8996 has additional iommu clock
+diff --git a/Makefile b/Makefile
+index e5509045fe3f3a..58a78d21557742 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
+index 6e8aa8e726015e..49f1a810df1681 100644
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -17,6 +17,13 @@
+ #include <linux/refcount.h>
+ #include <asm/cpufeature.h>
+ 
++enum pgtable_type {
++	TABLE_PTE,
++	TABLE_PMD,
++	TABLE_PUD,
++	TABLE_P4D,
++};
++
+ typedef struct {
+ 	atomic64_t	id;
+ #ifdef CONFIG_COMPAT
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index e151585c6cca10..7f58ef66dbc527 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -84,6 +84,7 @@
+ #include <asm/hwcap.h>
+ #include <asm/insn.h>
+ #include <asm/kvm_host.h>
++#include <asm/mmu.h>
+ #include <asm/mmu_context.h>
+ #include <asm/mte.h>
+ #include <asm/hypervisor.h>
+@@ -1941,11 +1942,11 @@ static bool has_pmuv3(const struct arm64_cpu_capabilities *entry, int scope)
+ extern
+ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
+ 			     phys_addr_t size, pgprot_t prot,
+-			     phys_addr_t (*pgtable_alloc)(int), int flags);
++			     phys_addr_t (*pgtable_alloc)(enum pgtable_type), int flags);
+ 
+ static phys_addr_t __initdata kpti_ng_temp_alloc;
+ 
+-static phys_addr_t __init kpti_ng_pgd_alloc(int shift)
++static phys_addr_t __init kpti_ng_pgd_alloc(enum pgtable_type type)
+ {
+ 	kpti_ng_temp_alloc -= PAGE_SIZE;
+ 	return kpti_ng_temp_alloc;
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 00ab1d648db62b..566ab1d2e0c68e 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -46,13 +46,6 @@
+ #define NO_CONT_MAPPINGS	BIT(1)
+ #define NO_EXEC_MAPPINGS	BIT(2)	/* assumes FEAT_HPDS is not used */
+ 
+-enum pgtable_type {
+-	TABLE_PTE,
+-	TABLE_PMD,
+-	TABLE_PUD,
+-	TABLE_P4D,
+-};
+-
+ u64 kimage_voffset __ro_after_init;
+ EXPORT_SYMBOL(kimage_voffset);
+ 
+diff --git a/arch/mips/boot/dts/lantiq/danube_easy50712.dts b/arch/mips/boot/dts/lantiq/danube_easy50712.dts
+index 1ce20b7d05cb8c..c4d7aa5753b043 100644
+--- a/arch/mips/boot/dts/lantiq/danube_easy50712.dts
++++ b/arch/mips/boot/dts/lantiq/danube_easy50712.dts
+@@ -82,13 +82,16 @@ conf_out {
+ 			};
+ 		};
+ 
+-		etop@e180000 {
++		ethernet@e180000 {
+ 			compatible = "lantiq,etop-xway";
+ 			reg = <0xe180000 0x40000>;
+ 			interrupt-parent = <&icu0>;
+ 			interrupts = <73 78>;
++			interrupt-names = "tx", "rx";
+ 			phy-mode = "rmii";
+ 			mac-address = [ 00 11 22 33 44 55 ];
++			lantiq,rx-burst-length = <4>;
++			lantiq,tx-burst-length = <4>;
+ 		};
+ 
+ 		stp0: stp@e100bb0 {
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index 5a75283d17f10e..6031a0272d8743 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -497,7 +497,7 @@ void __init ltq_soc_init(void)
+ 		ifccr = CGU_IFCCR_VR9;
+ 		pcicr = CGU_PCICR_VR9;
+ 	} else {
+-		clkdev_add_pmu("1e180000.etop", NULL, 1, 0, PMU_PPE);
++		clkdev_add_pmu("1e180000.ethernet", NULL, 1, 0, PMU_PPE);
+ 	}
+ 
+ 	if (!of_machine_is_compatible("lantiq,ase"))
+@@ -531,9 +531,9 @@ void __init ltq_soc_init(void)
+ 						CLOCK_133M, CLOCK_133M);
+ 		clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
+ 		clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
+-		clkdev_add_pmu("1e180000.etop", "ppe", 1, 0, PMU_PPE);
+-		clkdev_add_cgu("1e180000.etop", "ephycgu", CGU_EPHY);
+-		clkdev_add_pmu("1e180000.etop", "ephy", 1, 0, PMU_EPHY);
++		clkdev_add_pmu("1e180000.ethernet", "ppe", 1, 0, PMU_PPE);
++		clkdev_add_cgu("1e180000.ethernet", "ephycgu", CGU_EPHY);
++		clkdev_add_pmu("1e180000.ethernet", "ephy", 1, 0, PMU_EPHY);
+ 		clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_ASE_SDIO);
+ 		clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+ 	} else if (of_machine_is_compatible("lantiq,grx390")) {
+@@ -592,7 +592,7 @@ void __init ltq_soc_init(void)
+ 		clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM);
+ 		clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P);
+ 		clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1 | PMU_AHBM);
+-		clkdev_add_pmu("1e180000.etop", "switch", 1, 0, PMU_SWITCH);
++		clkdev_add_pmu("1e180000.ethernet", "switch", 1, 0, PMU_SWITCH);
+ 		clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
+ 		clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+ 		clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
+index 5b3c093611baf1..7209d00a9c2576 100644
+--- a/arch/powerpc/kernel/kvm.c
++++ b/arch/powerpc/kernel/kvm.c
+@@ -632,19 +632,19 @@ static void __init kvm_check_ins(u32 *inst, u32 features)
+ #endif
+ 	}
+ 
+-	switch (inst_no_rt & ~KVM_MASK_RB) {
+ #ifdef CONFIG_PPC_BOOK3S_32
++	switch (inst_no_rt & ~KVM_MASK_RB) {
+ 	case KVM_INST_MTSRIN:
+ 		if (features & KVM_MAGIC_FEAT_SR) {
+ 			u32 inst_rb = _inst & KVM_MASK_RB;
+ 			kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb);
+ 		}
+ 		break;
+-#endif
+ 	}
++#endif
+ 
+-	switch (_inst) {
+ #ifdef CONFIG_BOOKE
++	switch (_inst) {
+ 	case KVM_INST_WRTEEI_0:
+ 		kvm_patch_ins_wrteei_0(inst);
+ 		break;
+@@ -652,8 +652,8 @@ static void __init kvm_check_ins(u32 *inst, u32 features)
+ 	case KVM_INST_WRTEEI_1:
+ 		kvm_patch_ins_wrtee(inst, 0, 1);
+ 		break;
+-#endif
+ 	}
++#endif
+ }
+ 
+ extern u32 kvm_template_start[];
+diff --git a/arch/riscv/kvm/vcpu_vector.c b/arch/riscv/kvm/vcpu_vector.c
+index a5f88cb717f3df..05f3cc2d8e311a 100644
+--- a/arch/riscv/kvm/vcpu_vector.c
++++ b/arch/riscv/kvm/vcpu_vector.c
+@@ -182,6 +182,8 @@ int kvm_riscv_vcpu_set_reg_vector(struct kvm_vcpu *vcpu,
+ 		struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
+ 		unsigned long reg_val;
+ 
++		if (reg_size != sizeof(reg_val))
++			return -EINVAL;
+ 		if (copy_from_user(&reg_val, uaddr, reg_size))
+ 			return -EFAULT;
+ 		if (reg_val != cntx->vector.vlenb)
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 076eaa41b8c81b..98ae4c37c93ecc 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -262,7 +262,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ 	if (c->x86_power & (1 << 8)) {
+ 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ 		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+-	} else if ((c->x86_vfm >= INTEL_P4_PRESCOTT && c->x86_vfm <= INTEL_P4_WILLAMETTE) ||
++	} else if ((c->x86_vfm >= INTEL_P4_PRESCOTT && c->x86_vfm <= INTEL_P4_CEDARMILL) ||
+ 		   (c->x86_vfm >= INTEL_CORE_YONAH  && c->x86_vfm <= INTEL_IVYBRIDGE)) {
+ 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ 	}
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 097e39327942e7..514f63340880fd 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -171,8 +171,28 @@ static int cmp_id(const void *key, const void *elem)
+ 		return 1;
+ }
+ 
++static u32 cpuid_to_ucode_rev(unsigned int val)
++{
++	union zen_patch_rev p = {};
++	union cpuid_1_eax c;
++
++	c.full = val;
++
++	p.stepping  = c.stepping;
++	p.model     = c.model;
++	p.ext_model = c.ext_model;
++	p.ext_fam   = c.ext_fam;
++
++	return p.ucode_rev;
++}
++
+ static bool need_sha_check(u32 cur_rev)
+ {
++	if (!cur_rev) {
++		cur_rev = cpuid_to_ucode_rev(bsp_cpuid_1_eax);
++		pr_info_once("No current revision, generating the lowest one: 0x%x\n", cur_rev);
++	}
++
+ 	switch (cur_rev >> 8) {
+ 	case 0x80012: return cur_rev <= 0x800126f; break;
+ 	case 0x80082: return cur_rev <= 0x800820f; break;
+@@ -749,8 +769,6 @@ static struct ucode_patch *cache_find_patch(struct ucode_cpu_info *uci, u16 equi
+ 	n.equiv_cpu = equiv_cpu;
+ 	n.patch_id  = uci->cpu_sig.rev;
+ 
+-	WARN_ON_ONCE(!n.patch_id);
+-
+ 	list_for_each_entry(p, &microcode_cache, plist)
+ 		if (patch_cpus_equivalent(p, &n, false))
+ 			return p;
+diff --git a/arch/x86/kernel/cpu/topology_amd.c b/arch/x86/kernel/cpu/topology_amd.c
+index 843b1655ab45df..827dd0dbb6e9d2 100644
+--- a/arch/x86/kernel/cpu/topology_amd.c
++++ b/arch/x86/kernel/cpu/topology_amd.c
+@@ -81,20 +81,25 @@ static bool parse_8000_001e(struct topo_scan *tscan, bool has_topoext)
+ 
+ 	cpuid_leaf(0x8000001e, &leaf);
+ 
+-	tscan->c->topo.initial_apicid = leaf.ext_apic_id;
+-
+ 	/*
+-	 * If leaf 0xb is available, then the domain shifts are set
+-	 * already and nothing to do here. Only valid for family >= 0x17.
++	 * If leaf 0xb/0x26 is available, then the APIC ID and the domain
++	 * shifts are set already.
+ 	 */
+-	if (!has_topoext && tscan->c->x86 >= 0x17) {
++	if (!has_topoext) {
++		tscan->c->topo.initial_apicid = leaf.ext_apic_id;
++
+ 		/*
+-		 * Leaf 0x80000008 set the CORE domain shift already.
+-		 * Update the SMT domain, but do not propagate it.
++		 * Leaf 0x8000008 sets the CORE domain shift but not the
++		 * SMT domain shift. On CPUs with family >= 0x17, there
++		 * might be hyperthreads.
+ 		 */
+-		unsigned int nthreads = leaf.core_nthreads + 1;
++		if (tscan->c->x86 >= 0x17) {
++			/* Update the SMT domain, but do not propagate it. */
++			unsigned int nthreads = leaf.core_nthreads + 1;
+ 
+-		topology_update_dom(tscan, TOPO_SMT_DOMAIN, get_count_order(nthreads), nthreads);
++			topology_update_dom(tscan, TOPO_SMT_DOMAIN,
++					    get_count_order(nthreads), nthreads);
++		}
+ 	}
+ 
+ 	store_node(tscan, leaf.nnodes_per_socket + 1, leaf.node_id);
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 73418dc0ebb223..0725d2cae74268 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -852,6 +852,8 @@ static int __pv_send_ipi(unsigned long *ipi_bitmap, struct kvm_apic_map *map,
+ 	if (min > map->max_apic_id)
+ 		return 0;
+ 
++	min = array_index_nospec(min, map->max_apic_id + 1);
++
+ 	for_each_set_bit(i, ipi_bitmap,
+ 		min((u32)BITS_PER_LONG, (map->max_apic_id - min + 1))) {
+ 		if (map->phys_map[min + i]) {
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 45c8cabba524ac..7d4cb1cbd629d3 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10051,8 +10051,11 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
+ 	rcu_read_lock();
+ 	map = rcu_dereference(vcpu->kvm->arch.apic_map);
+ 
+-	if (likely(map) && dest_id <= map->max_apic_id && map->phys_map[dest_id])
+-		target = map->phys_map[dest_id]->vcpu;
++	if (likely(map) && dest_id <= map->max_apic_id) {
++		dest_id = array_index_nospec(dest_id, map->max_apic_id + 1);
++		if (map->phys_map[dest_id])
++			target = map->phys_map[dest_id]->vcpu;
++	}
+ 
+ 	rcu_read_unlock();
+ 
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 1fe22000a3790e..b538f2c0febc2b 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -149,12 +149,15 @@ static inline void rq_qos_done_bio(struct bio *bio)
+ 	q = bdev_get_queue(bio->bi_bdev);
+ 
+ 	/*
+-	 * If a bio has BIO_QOS_xxx set, it implicitly implies that
+-	 * q->rq_qos is present. So, we skip re-checking q->rq_qos
+-	 * here as an extra optimization and directly call
+-	 * __rq_qos_done_bio().
++	 * A BIO may carry BIO_QOS_* flags even if the associated request_queue
++	 * does not have rq_qos enabled. This can happen with stacked block
++	 * devices — for example, NVMe multipath, where it's possible that the
++	 * bottom device has QoS enabled but the top device does not. Therefore,
++	 * always verify that q->rq_qos is present and QoS is enabled before
++	 * calling __rq_qos_done_bio().
+ 	 */
+-	__rq_qos_done_bio(q->rq_qos, bio);
++	if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos)
++		__rq_qos_done_bio(q->rq_qos, bio);
+ }
+ 
+ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index efe71b1a1da138..9104c3a34642fa 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1266,14 +1266,14 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ 	struct block_device *bdev;
+ 	unsigned long flags;
+ 	struct bio *bio;
++	bool prepared;
+ 
+ 	/*
+ 	 * Submit the next plugged BIO. If we do not have any, clear
+ 	 * the plugged flag.
+ 	 */
+-	spin_lock_irqsave(&zwplug->lock, flags);
+-
+ again:
++	spin_lock_irqsave(&zwplug->lock, flags);
+ 	bio = bio_list_pop(&zwplug->bio_list);
+ 	if (!bio) {
+ 		zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+@@ -1281,13 +1281,14 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ 		goto put_zwplug;
+ 	}
+ 
+-	if (!blk_zone_wplug_prepare_bio(zwplug, bio)) {
++	prepared = blk_zone_wplug_prepare_bio(zwplug, bio);
++	spin_unlock_irqrestore(&zwplug->lock, flags);
++
++	if (!prepared) {
+ 		blk_zone_wplug_bio_io_error(zwplug, bio);
+ 		goto again;
+ 	}
+ 
+-	spin_unlock_irqrestore(&zwplug->lock, flags);
+-
+ 	bdev = bio->bi_bdev;
+ 
+ 	/*
+diff --git a/drivers/atm/atmtcp.c b/drivers/atm/atmtcp.c
+index eeae160c898d38..fa3c76a2b49d1f 100644
+--- a/drivers/atm/atmtcp.c
++++ b/drivers/atm/atmtcp.c
+@@ -279,6 +279,19 @@ static struct atm_vcc *find_vcc(struct atm_dev *dev, short vpi, int vci)
+         return NULL;
+ }
+ 
++static int atmtcp_c_pre_send(struct atm_vcc *vcc, struct sk_buff *skb)
++{
++	struct atmtcp_hdr *hdr;
++
++	if (skb->len < sizeof(struct atmtcp_hdr))
++		return -EINVAL;
++
++	hdr = (struct atmtcp_hdr *)skb->data;
++	if (hdr->length == ATMTCP_HDR_MAGIC)
++		return -EINVAL;
++
++	return 0;
++}
+ 
+ static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
+ {
+@@ -288,9 +301,6 @@ static int atmtcp_c_send(struct atm_vcc *vcc,struct sk_buff *skb)
+ 	struct sk_buff *new_skb;
+ 	int result = 0;
+ 
+-	if (skb->len < sizeof(struct atmtcp_hdr))
+-		goto done;
+-
+ 	dev = vcc->dev_data;
+ 	hdr = (struct atmtcp_hdr *) skb->data;
+ 	if (hdr->length == ATMTCP_HDR_MAGIC) {
+@@ -347,6 +357,7 @@ static const struct atmdev_ops atmtcp_v_dev_ops = {
+ 
+ static const struct atmdev_ops atmtcp_c_dev_ops = {
+ 	.close		= atmtcp_c_close,
++	.pre_send	= atmtcp_c_pre_send,
+ 	.send		= atmtcp_c_send
+ };
+ 
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+index d529bcb03775f8..062def303dcea9 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
++++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+@@ -18,9 +18,8 @@
+ #define OTX2_CPT_MAX_VFS_NUM 128
+ #define OTX2_CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
+ 		(((blk) << 20) | ((slot) << 12) | (offs))
+-#define OTX2_CPT_RVU_PFFUNC(pf, func)	\
+-		((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \
+-		(((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT))
++
++#define OTX2_CPT_RVU_PFFUNC(pdev, pf, func) rvu_make_pcifunc(pdev, pf, func)
+ 
+ #define OTX2_CPT_INVALID_CRYPTO_ENG_GRP 0xFF
+ #define OTX2_CPT_NAME_LENGTH 64
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+index 12c0e966fa65fe..b4b2d3d1cbc2a9 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+@@ -142,7 +142,7 @@ static int send_inline_ipsec_inbound_msg(struct otx2_cptpf_dev *cptpf,
+ 	memset(req, 0, sizeof(*req));
+ 	req->hdr.id = MBOX_MSG_CPT_INLINE_IPSEC_CFG;
+ 	req->hdr.sig = OTX2_MBOX_REQ_SIG;
+-	req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0);
++	req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pdev, cptpf->pf_id, 0);
+ 	req->dir = CPT_INLINE_INBOUND;
+ 	req->slot = slot;
+ 	req->sso_pf_func_ovrd = cptpf->sso_pf_func_ovrd;
+@@ -184,7 +184,8 @@ static int rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf, u8 egrp,
+ 		nix_req->gen_cfg.opcode = cpt_inline_rx_opcode(pdev);
+ 	nix_req->gen_cfg.param1 = req->param1;
+ 	nix_req->gen_cfg.param2 = req->param2;
+-	nix_req->inst_qsel.cpt_pf_func = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0);
++	nix_req->inst_qsel.cpt_pf_func =
++		OTX2_CPT_RVU_PFFUNC(cptpf->pdev, cptpf->pf_id, 0);
+ 	nix_req->inst_qsel.cpt_slot = 0;
+ 	ret = otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
+ 	if (ret)
+@@ -392,9 +393,8 @@ void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work)
+ 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ 
+ 		/* Set which VF sent this message based on mbox IRQ */
+-		msg->pcifunc = ((u16)cptpf->pf_id << RVU_PFVF_PF_SHIFT) |
+-				((vf->vf_id + 1) & RVU_PFVF_FUNC_MASK);
+-
++		msg->pcifunc = rvu_make_pcifunc(cptpf->pdev, cptpf->pf_id,
++						(vf->vf_id + 1));
+ 		err = cptpf_handle_vf_req(cptpf, vf, msg,
+ 					  msg->next_msgoff - offset);
+ 		/*
+@@ -469,8 +469,7 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf,
+ 
+ 	switch (msg->id) {
+ 	case MBOX_MSG_READY:
+-		cptpf->pf_id = (msg->pcifunc >> RVU_PFVF_PF_SHIFT) &
+-				RVU_PFVF_PF_MASK;
++		cptpf->pf_id = rvu_get_pf(cptpf->pdev, msg->pcifunc);
+ 		break;
+ 	case MBOX_MSG_MSIX_OFFSET:
+ 		rsp_msix = (struct msix_offset_rsp *) msg;
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+index 56645b3eb71757..cc47e361089a05 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+@@ -176,7 +176,9 @@ static int cptx_set_ucode_base(struct otx2_cpt_eng_grp_info *eng_grp,
+ 	/* Set PF number for microcode fetches */
+ 	ret = otx2_cpt_write_af_reg(&cptpf->afpf_mbox, cptpf->pdev,
+ 				    CPT_AF_PF_FUNC,
+-				    cptpf->pf_id << RVU_PFVF_PF_SHIFT, blkaddr);
++				    rvu_make_pcifunc(cptpf->pdev,
++						     cptpf->pf_id, 0),
++				    blkaddr);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c
+index 931b72580fd9f8..92e49babd79af1 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c
+@@ -189,7 +189,7 @@ int otx2_cptvf_send_eng_grp_num_msg(struct otx2_cptvf_dev *cptvf, int eng_type)
+ 	}
+ 	req->hdr.id = MBOX_MSG_GET_ENG_GRP_NUM;
+ 	req->hdr.sig = OTX2_MBOX_REQ_SIG;
+-	req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->vf_id, 0);
++	req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->pdev, cptvf->vf_id, 0);
+ 	req->eng_type = eng_type;
+ 
+ 	return otx2_cpt_send_mbox_msg(mbox, pdev);
+@@ -210,7 +210,7 @@ int otx2_cptvf_send_kvf_limits_msg(struct otx2_cptvf_dev *cptvf)
+ 	}
+ 	req->id = MBOX_MSG_GET_KVF_LIMITS;
+ 	req->sig = OTX2_MBOX_REQ_SIG;
+-	req->pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->vf_id, 0);
++	req->pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->pdev, cptvf->vf_id, 0);
+ 
+ 	return otx2_cpt_send_mbox_msg(mbox, pdev);
+ }
+@@ -230,7 +230,7 @@ int otx2_cptvf_send_caps_msg(struct otx2_cptvf_dev *cptvf)
+ 	}
+ 	req->id = MBOX_MSG_GET_CAPS;
+ 	req->sig = OTX2_MBOX_REQ_SIG;
+-	req->pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->vf_id, 0);
++	req->pcifunc = OTX2_CPT_RVU_PFFUNC(cptvf->pdev, cptvf->vf_id, 0);
+ 
+ 	return otx2_cpt_send_mbox_msg(mbox, pdev);
+ }
+diff --git a/drivers/firmware/efi/stmm/tee_stmm_efi.c b/drivers/firmware/efi/stmm/tee_stmm_efi.c
+index f741ca279052bb..e15d11ed165eef 100644
+--- a/drivers/firmware/efi/stmm/tee_stmm_efi.c
++++ b/drivers/firmware/efi/stmm/tee_stmm_efi.c
+@@ -143,6 +143,10 @@ static efi_status_t mm_communicate(u8 *comm_buf, size_t payload_size)
+ 	return var_hdr->ret_status;
+ }
+ 
++#define COMM_BUF_SIZE(__payload_size)	(MM_COMMUNICATE_HEADER_SIZE + \
++					 MM_VARIABLE_COMMUNICATE_SIZE + \
++					 (__payload_size))
++
+ /**
+  * setup_mm_hdr() -	Allocate a buffer for StandAloneMM and initialize the
+  *			header data.
+@@ -173,9 +177,8 @@ static void *setup_mm_hdr(u8 **dptr, size_t payload_size, size_t func,
+ 		return NULL;
+ 	}
+ 
+-	comm_buf = kzalloc(MM_COMMUNICATE_HEADER_SIZE +
+-				   MM_VARIABLE_COMMUNICATE_SIZE + payload_size,
+-			   GFP_KERNEL);
++	comm_buf = alloc_pages_exact(COMM_BUF_SIZE(payload_size),
++				     GFP_KERNEL | __GFP_ZERO);
+ 	if (!comm_buf) {
+ 		*ret = EFI_OUT_OF_RESOURCES;
+ 		return NULL;
+@@ -239,7 +242,7 @@ static efi_status_t get_max_payload(size_t *size)
+ 	 */
+ 	*size -= 2;
+ out:
+-	kfree(comm_buf);
++	free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size));
+ 	return ret;
+ }
+ 
+@@ -282,7 +285,7 @@ static efi_status_t get_property_int(u16 *name, size_t name_size,
+ 	memcpy(var_property, &smm_property->property, sizeof(*var_property));
+ 
+ out:
+-	kfree(comm_buf);
++	free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size));
+ 	return ret;
+ }
+ 
+@@ -347,7 +350,7 @@ static efi_status_t tee_get_variable(u16 *name, efi_guid_t *vendor,
+ 	memcpy(data, (u8 *)var_acc->name + var_acc->name_size,
+ 	       var_acc->data_size);
+ out:
+-	kfree(comm_buf);
++	free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size));
+ 	return ret;
+ }
+ 
+@@ -404,7 +407,7 @@ static efi_status_t tee_get_next_variable(unsigned long *name_size,
+ 	memcpy(name, var_getnext->name, var_getnext->name_size);
+ 
+ out:
+-	kfree(comm_buf);
++	free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size));
+ 	return ret;
+ }
+ 
+@@ -467,7 +470,7 @@ static efi_status_t tee_set_variable(efi_char16_t *name, efi_guid_t *vendor,
+ 	ret = mm_communicate(comm_buf, payload_size);
+ 	dev_dbg(pvt_data.dev, "Set Variable %s %d %lx\n", __FILE__, __LINE__, ret);
+ out:
+-	kfree(comm_buf);
++	free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size));
+ 	return ret;
+ }
+ 
+@@ -507,7 +510,7 @@ static efi_status_t tee_query_variable_info(u32 attributes,
+ 	*max_variable_size = mm_query_info->max_variable_size;
+ 
+ out:
+-	kfree(comm_buf);
++	free_pages_exact(comm_buf, COMM_BUF_SIZE(payload_size));
+ 	return ret;
+ }
+ 
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index f63b716be5b027..26cd0458aacd67 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -1603,7 +1603,13 @@ bool qcom_scm_lmh_dcvsh_available(void)
+ }
+ EXPORT_SYMBOL_GPL(qcom_scm_lmh_dcvsh_available);
+ 
+-int qcom_scm_shm_bridge_enable(void)
++/*
++ * This is only supposed to be called once by the TZMem module. It takes the
++ * SCM struct device as argument and uses it to pass the call as at the time
++ * the SHM Bridge is enabled, the SCM is not yet fully set up and doesn't
++ * accept global user calls. Don't try to use the __scm pointer here.
++ */
++int qcom_scm_shm_bridge_enable(struct device *scm_dev)
+ {
+ 	int ret;
+ 
+@@ -1615,11 +1621,11 @@ int qcom_scm_shm_bridge_enable(void)
+ 
+ 	struct qcom_scm_res res;
+ 
+-	if (!__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_MP,
++	if (!__qcom_scm_is_call_available(scm_dev, QCOM_SCM_SVC_MP,
+ 					  QCOM_SCM_MP_SHM_BRIDGE_ENABLE))
+ 		return -EOPNOTSUPP;
+ 
+-	ret = qcom_scm_call(__scm->dev, &desc, &res);
++	ret = qcom_scm_call(scm_dev, &desc, &res);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -1631,7 +1637,7 @@ int qcom_scm_shm_bridge_enable(void)
+ }
+ EXPORT_SYMBOL_GPL(qcom_scm_shm_bridge_enable);
+ 
+-int qcom_scm_shm_bridge_create(struct device *dev, u64 pfn_and_ns_perm_flags,
++int qcom_scm_shm_bridge_create(u64 pfn_and_ns_perm_flags,
+ 			       u64 ipfn_and_s_perm_flags, u64 size_and_flags,
+ 			       u64 ns_vmids, u64 *handle)
+ {
+@@ -1659,7 +1665,7 @@ int qcom_scm_shm_bridge_create(struct device *dev, u64 pfn_and_ns_perm_flags,
+ }
+ EXPORT_SYMBOL_GPL(qcom_scm_shm_bridge_create);
+ 
+-int qcom_scm_shm_bridge_delete(struct device *dev, u64 handle)
++int qcom_scm_shm_bridge_delete(u64 handle)
+ {
+ 	struct qcom_scm_desc desc = {
+ 		.svc = QCOM_SCM_SVC_MP,
+@@ -2250,24 +2256,47 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Paired with smp_load_acquire() in qcom_scm_is_available(). */
+-	smp_store_release(&__scm, scm);
++	ret = of_reserved_mem_device_init(scm->dev);
++	if (ret && ret != -ENODEV)
++		return dev_err_probe(scm->dev, ret,
++				     "Failed to setup the reserved memory region for TZ mem\n");
++
++	ret = qcom_tzmem_enable(scm->dev);
++	if (ret)
++		return dev_err_probe(scm->dev, ret,
++				     "Failed to enable the TrustZone memory allocator\n");
++
++	memset(&pool_config, 0, sizeof(pool_config));
++	pool_config.initial_size = 0;
++	pool_config.policy = QCOM_TZMEM_POLICY_ON_DEMAND;
++	pool_config.max_size = SZ_256K;
++
++	scm->mempool = devm_qcom_tzmem_pool_new(scm->dev, &pool_config);
++	if (IS_ERR(scm->mempool))
++		return dev_err_probe(scm->dev, PTR_ERR(scm->mempool),
++				     "Failed to create the SCM memory pool\n");
+ 
+ 	irq = platform_get_irq_optional(pdev, 0);
+ 	if (irq < 0) {
+-		if (irq != -ENXIO) {
+-			ret = irq;
+-			goto err;
+-		}
++		if (irq != -ENXIO)
++			return irq;
+ 	} else {
+-		ret = devm_request_threaded_irq(__scm->dev, irq, NULL, qcom_scm_irq_handler,
+-						IRQF_ONESHOT, "qcom-scm", __scm);
+-		if (ret < 0) {
+-			dev_err_probe(scm->dev, ret, "Failed to request qcom-scm irq\n");
+-			goto err;
+-		}
++		ret = devm_request_threaded_irq(scm->dev, irq, NULL, qcom_scm_irq_handler,
++						IRQF_ONESHOT, "qcom-scm", scm);
++		if (ret < 0)
++			return dev_err_probe(scm->dev, ret,
++					     "Failed to request qcom-scm irq\n");
+ 	}
+ 
++	/*
++	 * Paired with smp_load_acquire() in qcom_scm_is_available().
++	 *
++	 * This marks the SCM API as ready to accept user calls and can only
++	 * be called after the TrustZone memory pool is initialized and the
++	 * waitqueue interrupt requested.
++	 */
++	smp_store_release(&__scm, scm);
++
+ 	__get_convention();
+ 
+ 	/*
+@@ -2283,32 +2312,6 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ 	if (of_property_read_bool(pdev->dev.of_node, "qcom,sdi-enabled") || !download_mode)
+ 		qcom_scm_disable_sdi();
+ 
+-	ret = of_reserved_mem_device_init(__scm->dev);
+-	if (ret && ret != -ENODEV) {
+-		dev_err_probe(__scm->dev, ret,
+-			      "Failed to setup the reserved memory region for TZ mem\n");
+-		goto err;
+-	}
+-
+-	ret = qcom_tzmem_enable(__scm->dev);
+-	if (ret) {
+-		dev_err_probe(__scm->dev, ret,
+-			      "Failed to enable the TrustZone memory allocator\n");
+-		goto err;
+-	}
+-
+-	memset(&pool_config, 0, sizeof(pool_config));
+-	pool_config.initial_size = 0;
+-	pool_config.policy = QCOM_TZMEM_POLICY_ON_DEMAND;
+-	pool_config.max_size = SZ_256K;
+-
+-	__scm->mempool = devm_qcom_tzmem_pool_new(__scm->dev, &pool_config);
+-	if (IS_ERR(__scm->mempool)) {
+-		ret = dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
+-				    "Failed to create the SCM memory pool\n");
+-		goto err;
+-	}
+-
+ 	/*
+ 	 * Initialize the QSEECOM interface.
+ 	 *
+@@ -2323,12 +2326,6 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ 	WARN(ret < 0, "failed to initialize qseecom: %d\n", ret);
+ 
+ 	return 0;
+-
+-err:
+-	/* Paired with smp_load_acquire() in qcom_scm_is_available(). */
+-	smp_store_release(&__scm, NULL);
+-
+-	return ret;
+ }
+ 
+ static void qcom_scm_shutdown(struct platform_device *pdev)
+diff --git a/drivers/firmware/qcom/qcom_scm.h b/drivers/firmware/qcom/qcom_scm.h
+index 3133d826f5fae8..0e8dd838099e11 100644
+--- a/drivers/firmware/qcom/qcom_scm.h
++++ b/drivers/firmware/qcom/qcom_scm.h
+@@ -83,6 +83,7 @@ int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc,
+ 		    struct qcom_scm_res *res);
+ 
+ struct qcom_tzmem_pool *qcom_scm_get_tzmem_pool(void);
++int qcom_scm_shm_bridge_enable(struct device *scm_dev);
+ 
+ #define QCOM_SCM_SVC_BOOT		0x01
+ #define QCOM_SCM_BOOT_SET_ADDR		0x01
+diff --git a/drivers/firmware/qcom/qcom_tzmem.c b/drivers/firmware/qcom/qcom_tzmem.c
+index 94196ad87105c6..ea0a3535565706 100644
+--- a/drivers/firmware/qcom/qcom_tzmem.c
++++ b/drivers/firmware/qcom/qcom_tzmem.c
+@@ -20,6 +20,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+ 
++#include "qcom_scm.h"
+ #include "qcom_tzmem.h"
+ 
+ struct qcom_tzmem_area {
+@@ -94,7 +95,7 @@ static int qcom_tzmem_init(void)
+ 			goto notsupp;
+ 	}
+ 
+-	ret = qcom_scm_shm_bridge_enable();
++	ret = qcom_scm_shm_bridge_enable(qcom_tzmem_dev);
+ 	if (ret == -EOPNOTSUPP)
+ 		goto notsupp;
+ 
+@@ -124,9 +125,9 @@ static int qcom_tzmem_init_area(struct qcom_tzmem_area *area)
+ 	if (!handle)
+ 		return -ENOMEM;
+ 
+-	ret = qcom_scm_shm_bridge_create(qcom_tzmem_dev, pfn_and_ns_perm,
+-					 ipfn_and_s_perm, size_and_flags,
+-					 QCOM_SCM_VMID_HLOS, handle);
++	ret = qcom_scm_shm_bridge_create(pfn_and_ns_perm, ipfn_and_s_perm,
++					 size_and_flags, QCOM_SCM_VMID_HLOS,
++					 handle);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -142,7 +143,7 @@ static void qcom_tzmem_cleanup_area(struct qcom_tzmem_area *area)
+ 	if (!qcom_tzmem_using_shm_bridge)
+ 		return;
+ 
+-	qcom_scm_shm_bridge_delete(qcom_tzmem_dev, *handle);
++	qcom_scm_shm_bridge_delete(*handle);
+ 	kfree(handle);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+index dfb6cfd8376069..02138aa557935e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c
+@@ -88,8 +88,8 @@ int amdgpu_map_static_csa(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ 	}
+ 
+ 	r = amdgpu_vm_bo_map(adev, *bo_va, csa_addr, 0, size,
+-			     AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE |
+-			     AMDGPU_VM_PAGE_EXECUTABLE);
++			     AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
++			     AMDGPU_PTE_EXECUTABLE);
+ 
+ 	if (r) {
+ 		DRM_ERROR("failed to do bo_map on static CSA, err=%d\n", r);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
+index aac0de86f3e8c8..a966ede1dba24e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
+@@ -426,6 +426,7 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
+ 	if (index == (uint64_t)-EINVAL) {
+ 		drm_file_err(uq_mgr->file, "Failed to get doorbell for queue\n");
+ 		kfree(queue);
++		r = -EINVAL;
+ 		goto unlock;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index e632e97d63be02..96566870f079b8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -1612,9 +1612,9 @@ static int gfx_v11_0_sw_init(struct amdgpu_ip_block *ip_block)
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
+ 		if (!adev->gfx.disable_uq &&
+-		    adev->gfx.me_fw_version  >= 2390 &&
+-		    adev->gfx.pfp_fw_version >= 2530 &&
+-		    adev->gfx.mec_fw_version >= 2600 &&
++		    adev->gfx.me_fw_version  >= 2420 &&
++		    adev->gfx.pfp_fw_version >= 2580 &&
++		    adev->gfx.mec_fw_version >= 2650 &&
+ 		    adev->mes.fw_version[0] >= 120) {
+ 			adev->userq_funcs[AMDGPU_HW_IP_GFX] = &userq_mes_funcs;
+ 			adev->userq_funcs[AMDGPU_HW_IP_COMPUTE] = &userq_mes_funcs;
+@@ -4124,6 +4124,8 @@ static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ #endif
+ 	if (prop->tmz_queue)
+ 		tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, TMZ_MATCH, 1);
++	if (!prop->kernel_queue)
++		tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_NON_PRIV, 1);
+ 	mqd->cp_gfx_hqd_cntl = tmp;
+ 
+ 	/* set up cp_doorbell_control */
+@@ -4276,8 +4278,10 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH,
+ 			    prop->allow_tunneling);
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
++	if (prop->kernel_queue) {
++		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
++		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
++	}
+ 	if (prop->tmz_queue)
+ 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TMZ, 1);
+ 	mqd->cp_hqd_pq_control = tmp;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 097ec7d99c5abb..b56e0ba7303243 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -3022,6 +3022,8 @@ static int gfx_v12_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+ #endif
+ 	if (prop->tmz_queue)
+ 		tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, TMZ_MATCH, 1);
++	if (!prop->kernel_queue)
++		tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_NON_PRIV, 1);
+ 	mqd->cp_gfx_hqd_cntl = tmp;
+ 
+ 	/* set up cp_doorbell_control */
+@@ -3171,8 +3173,10 @@ static int gfx_v12_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+ 			    (order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1));
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
+ 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
++	if (prop->kernel_queue) {
++		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
++		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
++	}
+ 	if (prop->tmz_queue)
+ 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TMZ, 1);
+ 	mqd->cp_hqd_pq_control = tmp;
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 39ee810850885c..f417c5eb0f41a9 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -3458,14 +3458,16 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
+ 		effective_mode &= ~S_IWUSR;
+ 
+ 	/* not implemented yet for APUs other than GC 10.3.1 (vangogh) and 9.4.3 */
+-	if (((adev->family == AMDGPU_FAMILY_SI) ||
+-	     ((adev->flags & AMD_IS_APU) && (gc_ver != IP_VERSION(10, 3, 1)) &&
+-	      (gc_ver != IP_VERSION(9, 4, 3) && gc_ver != IP_VERSION(9, 4, 4)))) &&
+-	    (attr == &sensor_dev_attr_power1_cap_max.dev_attr.attr ||
+-	     attr == &sensor_dev_attr_power1_cap_min.dev_attr.attr ||
+-	     attr == &sensor_dev_attr_power1_cap.dev_attr.attr ||
+-	     attr == &sensor_dev_attr_power1_cap_default.dev_attr.attr))
+-		return 0;
++	if (attr == &sensor_dev_attr_power1_cap_max.dev_attr.attr ||
++	    attr == &sensor_dev_attr_power1_cap_min.dev_attr.attr ||
++	    attr == &sensor_dev_attr_power1_cap.dev_attr.attr ||
++	    attr == &sensor_dev_attr_power1_cap_default.dev_attr.attr) {
++		if (adev->family == AMDGPU_FAMILY_SI ||
++		    ((adev->flags & AMD_IS_APU) && gc_ver != IP_VERSION(10, 3, 1) &&
++		     (gc_ver != IP_VERSION(9, 4, 3) && gc_ver != IP_VERSION(9, 4, 4))) ||
++		    (amdgpu_sriov_vf(adev) && gc_ver == IP_VERSION(11, 0, 3)))
++			return 0;
++	}
+ 
+ 	/* not implemented yet for APUs having < GC 9.3.0 (Renoir) */
+ 	if (((adev->family == AMDGPU_FAMILY_SI) ||
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index dc622c78db9d4a..f2a6559a271008 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -725,7 +725,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ 	 * monitor doesn't power down exactly after the throw away read.
+ 	 */
+ 	if (!aux->is_remote) {
+-		ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS);
++		ret = drm_dp_dpcd_probe(aux, DP_DPCD_REV);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 7c0c12dde48859..34131ae2c207df 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -388,19 +388,19 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ 
+ 		of_id = of_match_node(mtk_drm_of_ids, node);
+ 		if (!of_id)
+-			continue;
++			goto next_put_node;
+ 
+ 		pdev = of_find_device_by_node(node);
+ 		if (!pdev)
+-			continue;
++			goto next_put_node;
+ 
+ 		drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match);
+ 		if (!drm_dev)
+-			continue;
++			goto next_put_device_pdev_dev;
+ 
+ 		temp_drm_priv = dev_get_drvdata(drm_dev);
+ 		if (!temp_drm_priv)
+-			continue;
++			goto next_put_device_drm_dev;
+ 
+ 		if (temp_drm_priv->data->main_len)
+ 			all_drm_priv[CRTC_MAIN] = temp_drm_priv;
+@@ -412,10 +412,17 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ 		if (temp_drm_priv->mtk_drm_bound)
+ 			cnt++;
+ 
+-		if (cnt == MAX_CRTC) {
+-			of_node_put(node);
++next_put_device_drm_dev:
++		put_device(drm_dev);
++
++next_put_device_pdev_dev:
++		put_device(&pdev->dev);
++
++next_put_node:
++		of_node_put(node);
++
++		if (cnt == MAX_CRTC)
+ 			break;
+-		}
+ 	}
+ 
+ 	if (drm_priv->data->mmsys_dev_num == cnt) {
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 8803cd4a8bc9b1..4404e1b527b52f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -182,8 +182,8 @@ static inline struct mtk_hdmi *hdmi_ctx_from_bridge(struct drm_bridge *b)
+ 
+ static void mtk_hdmi_hw_vid_black(struct mtk_hdmi *hdmi, bool black)
+ {
+-	regmap_update_bits(hdmi->regs, VIDEO_SOURCE_SEL,
+-			   VIDEO_CFG_4, black ? GEN_RGB : NORMAL_PATH);
++	regmap_update_bits(hdmi->regs, VIDEO_CFG_4,
++			   VIDEO_SOURCE_SEL, black ? GEN_RGB : NORMAL_PATH);
+ }
+ 
+ static void mtk_hdmi_hw_make_reg_writable(struct mtk_hdmi *hdmi, bool enable)
+@@ -310,8 +310,8 @@ static void mtk_hdmi_hw_send_info_frame(struct mtk_hdmi *hdmi, u8 *buffer,
+ 
+ static void mtk_hdmi_hw_send_aud_packet(struct mtk_hdmi *hdmi, bool enable)
+ {
+-	regmap_update_bits(hdmi->regs, AUDIO_PACKET_OFF,
+-			   GRL_SHIFT_R2, enable ? 0 : AUDIO_PACKET_OFF);
++	regmap_update_bits(hdmi->regs, GRL_SHIFT_R2,
++			   AUDIO_PACKET_OFF, enable ? 0 : AUDIO_PACKET_OFF);
+ }
+ 
+ static void mtk_hdmi_hw_config_sys(struct mtk_hdmi *hdmi)
+diff --git a/drivers/gpu/drm/mediatek/mtk_plane.c b/drivers/gpu/drm/mediatek/mtk_plane.c
+index cbc4f37da8ba81..02349bd4400176 100644
+--- a/drivers/gpu/drm/mediatek/mtk_plane.c
++++ b/drivers/gpu/drm/mediatek/mtk_plane.c
+@@ -292,7 +292,8 @@ static void mtk_plane_atomic_disable(struct drm_plane *plane,
+ 	wmb(); /* Make sure the above parameter is set before update */
+ 	mtk_plane_state->pending.dirty = true;
+ 
+-	mtk_crtc_plane_disable(old_state->crtc, plane);
++	if (old_state && old_state->crtc)
++		mtk_crtc_plane_disable(old_state->crtc, plane);
+ }
+ 
+ static void mtk_plane_atomic_update(struct drm_plane *plane,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index c0ed110a7d30fa..4bddb9504796b6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -729,6 +729,8 @@ bool dpu_encoder_needs_modeset(struct drm_encoder *drm_enc, struct drm_atomic_st
+ 		return false;
+ 
+ 	conn_state = drm_atomic_get_new_connector_state(state, connector);
++	if (!conn_state)
++		return false;
+ 
+ 	/**
+ 	 * These checks are duplicated from dpu_encoder_update_topology() since
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+index 421138bc3cb779..d059ed1e4b7072 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+@@ -1136,7 +1136,7 @@ static int dpu_plane_virtual_atomic_check(struct drm_plane *plane,
+ 	struct drm_plane_state *old_plane_state =
+ 		drm_atomic_get_old_plane_state(state, plane);
+ 	struct dpu_plane_state *pstate = to_dpu_plane_state(plane_state);
+-	struct drm_crtc_state *crtc_state;
++	struct drm_crtc_state *crtc_state = NULL;
+ 	int ret;
+ 
+ 	if (IS_ERR(plane_state))
+@@ -1169,7 +1169,7 @@ static int dpu_plane_virtual_atomic_check(struct drm_plane *plane,
+ 	if (!old_plane_state || !old_plane_state->fb ||
+ 	    old_plane_state->src_w != plane_state->src_w ||
+ 	    old_plane_state->src_h != plane_state->src_h ||
+-	    old_plane_state->src_w != plane_state->src_w ||
++	    old_plane_state->crtc_w != plane_state->crtc_w ||
+ 	    old_plane_state->crtc_h != plane_state->crtc_h ||
+ 	    msm_framebuffer_format(old_plane_state->fb) !=
+ 	    msm_framebuffer_format(plane_state->fb))
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index d4f71bb54e84c0..081d59979e31d1 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -869,12 +869,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ 
+ 	if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
+ 		sync_file = sync_file_create(submit->user_fence);
+-		if (!sync_file) {
++		if (!sync_file)
+ 			ret = -ENOMEM;
+-		} else {
+-			fd_install(out_fence_fd, sync_file->file);
+-			args->fence_fd = out_fence_fd;
+-		}
+ 	}
+ 
+ 	if (ret)
+@@ -902,10 +898,14 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ out_unlock:
+ 	mutex_unlock(&queue->lock);
+ out_post_unlock:
+-	if (ret && (out_fence_fd >= 0)) {
+-		put_unused_fd(out_fence_fd);
++	if (ret) {
++		if (out_fence_fd >= 0)
++			put_unused_fd(out_fence_fd);
+ 		if (sync_file)
+ 			fput(sync_file->file);
++	} else if (sync_file) {
++		fd_install(out_fence_fd, sync_file->file);
++		args->fence_fd = out_fence_fd;
+ 	}
+ 
+ 	if (!IS_ERR_OR_NULL(submit)) {
+diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
+index 35d5397e73b4c5..f2c00716f9d1a8 100644
+--- a/drivers/gpu/drm/msm/msm_kms.c
++++ b/drivers/gpu/drm/msm/msm_kms.c
+@@ -258,6 +258,12 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = msm_disp_snapshot_init(ddev);
++	if (ret) {
++		DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
++		return ret;
++	}
++
+ 	ret = priv->kms_init(ddev);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "failed to load kms\n");
+@@ -310,10 +316,6 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
+ 		goto err_msm_uninit;
+ 	}
+ 
+-	ret = msm_disp_snapshot_init(ddev);
+-	if (ret)
+-		DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret);
+-
+ 	drm_mode_config_reset(ddev);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/msm/registers/display/dsi.xml b/drivers/gpu/drm/msm/registers/display/dsi.xml
+index 501ffc585a9f69..c7a7b633d747bc 100644
+--- a/drivers/gpu/drm/msm/registers/display/dsi.xml
++++ b/drivers/gpu/drm/msm/registers/display/dsi.xml
+@@ -159,28 +159,28 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 		<bitfield name="RGB_SWAP" low="12" high="14" type="dsi_rgb_swap"/>
+ 	</reg32>
+ 	<reg32 offset="0x00020" name="ACTIVE_H">
+-		<bitfield name="START" low="0" high="11" type="uint"/>
+-		<bitfield name="END" low="16" high="27" type="uint"/>
++		<bitfield name="START" low="0" high="15" type="uint"/>
++		<bitfield name="END" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x00024" name="ACTIVE_V">
+-		<bitfield name="START" low="0" high="11" type="uint"/>
+-		<bitfield name="END" low="16" high="27" type="uint"/>
++		<bitfield name="START" low="0" high="15" type="uint"/>
++		<bitfield name="END" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x00028" name="TOTAL">
+-		<bitfield name="H_TOTAL" low="0" high="11" type="uint"/>
+-		<bitfield name="V_TOTAL" low="16" high="27" type="uint"/>
++		<bitfield name="H_TOTAL" low="0" high="15" type="uint"/>
++		<bitfield name="V_TOTAL" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x0002c" name="ACTIVE_HSYNC">
+-		<bitfield name="START" low="0" high="11" type="uint"/>
+-		<bitfield name="END" low="16" high="27" type="uint"/>
++		<bitfield name="START" low="0" high="15" type="uint"/>
++		<bitfield name="END" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x00030" name="ACTIVE_VSYNC_HPOS">
+-		<bitfield name="START" low="0" high="11" type="uint"/>
+-		<bitfield name="END" low="16" high="27" type="uint"/>
++		<bitfield name="START" low="0" high="15" type="uint"/>
++		<bitfield name="END" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x00034" name="ACTIVE_VSYNC_VPOS">
+-		<bitfield name="START" low="0" high="11" type="uint"/>
+-		<bitfield name="END" low="16" high="27" type="uint"/>
++		<bitfield name="START" low="0" high="15" type="uint"/>
++		<bitfield name="END" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 
+ 	<reg32 offset="0x00038" name="CMD_DMA_CTRL">
+@@ -209,8 +209,8 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ 		<bitfield name="WORD_COUNT" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x00058" name="CMD_MDP_STREAM0_TOTAL">
+-		<bitfield name="H_TOTAL" low="0" high="11" type="uint"/>
+-		<bitfield name="V_TOTAL" low="16" high="27" type="uint"/>
++		<bitfield name="H_TOTAL" low="0" high="15" type="uint"/>
++		<bitfield name="V_TOTAL" low="16" high="31" type="uint"/>
+ 	</reg32>
+ 	<reg32 offset="0x0005c" name="CMD_MDP_STREAM1_CTRL">
+ 		<bitfield name="DATA_TYPE" low="0" high="5" type="uint"/>
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+index 11d5b923d6e703..e2c55f4b9c5a14 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+@@ -795,6 +795,10 @@ static bool nv50_plane_format_mod_supported(struct drm_plane *plane,
+ 	struct nouveau_drm *drm = nouveau_drm(plane->dev);
+ 	uint8_t i;
+ 
++	/* All chipsets can display all formats in linear layout */
++	if (modifier == DRM_FORMAT_MOD_LINEAR)
++		return true;
++
+ 	if (drm->client.device.info.chipset < 0xc0) {
+ 		const struct drm_format_info *info = drm_format_info(format);
+ 		const uint8_t kind = (modifier >> 12) & 0xff;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c b/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c
+index b7da3ab44c277d..7c43397c19e61d 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c
+@@ -103,7 +103,7 @@ gm200_flcn_pio_imem_wr_init(struct nvkm_falcon *falcon, u8 port, bool sec, u32 i
+ static void
+ gm200_flcn_pio_imem_wr(struct nvkm_falcon *falcon, u8 port, const u8 *img, int len, u16 tag)
+ {
+-	nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag++);
++	nvkm_falcon_wr32(falcon, 0x188 + (port * 0x10), tag);
+ 	while (len >= 4) {
+ 		nvkm_falcon_wr32(falcon, 0x184 + (port * 0x10), *(u32 *)img);
+ 		img += 4;
+@@ -249,9 +249,11 @@ int
+ gm200_flcn_fw_load(struct nvkm_falcon_fw *fw)
+ {
+ 	struct nvkm_falcon *falcon = fw->falcon;
+-	int target, ret;
++	int ret;
+ 
+ 	if (fw->inst) {
++		int target;
++
+ 		nvkm_falcon_mask(falcon, 0x048, 0x00000001, 0x00000001);
+ 
+ 		switch (nvkm_memory_target(fw->inst)) {
+@@ -285,15 +287,6 @@ gm200_flcn_fw_load(struct nvkm_falcon_fw *fw)
+ 	}
+ 
+ 	if (fw->boot) {
+-		switch (nvkm_memory_target(&fw->fw.mem.memory)) {
+-		case NVKM_MEM_TARGET_VRAM: target = 4; break;
+-		case NVKM_MEM_TARGET_HOST: target = 5; break;
+-		case NVKM_MEM_TARGET_NCOH: target = 6; break;
+-		default:
+-			WARN_ON(1);
+-			return -EINVAL;
+-		}
+-
+ 		ret = nvkm_falcon_pio_wr(falcon, fw->boot, 0, 0,
+ 					 IMEM, falcon->code.limit - fw->boot_size, fw->boot_size,
+ 					 fw->boot_addr >> 8, false);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
+index 52412965fac107..5b721bd9d79949 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
+@@ -209,11 +209,12 @@ nvkm_gsp_fwsec_v2(struct nvkm_gsp *gsp, const char *name,
+ 	fw->boot_addr = bld->start_tag << 8;
+ 	fw->boot_size = bld->code_size;
+ 	fw->boot = kmemdup(bl->data + hdr->data_offset + bld->code_off, fw->boot_size, GFP_KERNEL);
+-	if (!fw->boot)
+-		ret = -ENOMEM;
+ 
+ 	nvkm_firmware_put(bl);
+ 
++	if (!fw->boot)
++		return -ENOMEM;
++
+ 	/* Patch in interface data. */
+ 	return nvkm_gsp_fwsec_patch(gsp, fw, desc->InterfaceOffset, init_cmd);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 7aa2c17825da9a..74635b444122d7 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -796,7 +796,8 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ 	}
+ 
+ 	if (ttm_bo->type == ttm_bo_type_sg) {
+-		ret = xe_bo_move_notify(bo, ctx);
++		if (new_mem->mem_type == XE_PL_SYSTEM)
++			ret = xe_bo_move_notify(bo, ctx);
+ 		if (!ret)
+ 			ret = xe_bo_move_dmabuf(ttm_bo, new_mem);
+ 		return ret;
+@@ -2435,7 +2436,6 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 		.no_wait_gpu = false,
+ 		.gfp_retry_mayfail = true,
+ 	};
+-	struct pin_cookie cookie;
+ 	int ret;
+ 
+ 	if (vm) {
+@@ -2446,10 +2446,10 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 		ctx.resv = xe_vm_resv(vm);
+ 	}
+ 
+-	cookie = xe_vm_set_validating(vm, allow_res_evict);
++	xe_vm_set_validating(vm, allow_res_evict);
+ 	trace_xe_bo_validate(bo);
+ 	ret = ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
+-	xe_vm_clear_validating(vm, allow_res_evict, cookie);
++	xe_vm_clear_validating(vm, allow_res_evict);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index f87276df18f280..82872a51f0983a 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -77,6 +77,7 @@ static void user_fence_worker(struct work_struct *w)
+ {
+ 	struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
+ 
++	WRITE_ONCE(ufence->signalled, 1);
+ 	if (mmget_not_zero(ufence->mm)) {
+ 		kthread_use_mm(ufence->mm);
+ 		if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
+@@ -91,7 +92,6 @@ static void user_fence_worker(struct work_struct *w)
+ 	 * Wake up waiters only after updating the ufence state, allowing the UMD
+ 	 * to safely reuse the same ufence without encountering -EBUSY errors.
+ 	 */
+-	WRITE_ONCE(ufence->signalled, 1);
+ 	wake_up_all(&ufence->xe->ufence_wq);
+ 	user_fence_put(ufence);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 3b11b1d52bee9b..e278aad1a6eb29 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -1582,8 +1582,12 @@ static int xe_vm_create_scratch(struct xe_device *xe, struct xe_tile *tile,
+ 
+ 	for (i = MAX_HUGEPTE_LEVEL; i < vm->pt_root[id]->level; i++) {
+ 		vm->scratch_pt[id][i] = xe_pt_create(vm, tile, i);
+-		if (IS_ERR(vm->scratch_pt[id][i]))
+-			return PTR_ERR(vm->scratch_pt[id][i]);
++		if (IS_ERR(vm->scratch_pt[id][i])) {
++			int err = PTR_ERR(vm->scratch_pt[id][i]);
++
++			vm->scratch_pt[id][i] = NULL;
++			return err;
++		}
+ 
+ 		xe_pt_populate_empty(tile, vm, vm->scratch_pt[id][i]);
+ 	}
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index 0158ec0ae3b230..e54ca835b58282 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -310,22 +310,14 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
+  * Register this task as currently making bos resident for the vm. Intended
+  * to avoid eviction by the same task of shared bos bound to the vm.
+  * Call with the vm's resv lock held.
+- *
+- * Return: A pin cookie that should be used for xe_vm_clear_validating().
+  */
+-static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm,
+-						     bool allow_res_evict)
++static inline void xe_vm_set_validating(struct xe_vm *vm, bool allow_res_evict)
+ {
+-	struct pin_cookie cookie = {};
+-
+ 	if (vm && !allow_res_evict) {
+ 		xe_vm_assert_held(vm);
+-		cookie = lockdep_pin_lock(&xe_vm_resv(vm)->lock.base);
+ 		/* Pairs with READ_ONCE in xe_vm_is_validating() */
+ 		WRITE_ONCE(vm->validating, current);
+ 	}
+-
+-	return cookie;
+ }
+ 
+ /**
+@@ -333,17 +325,14 @@ static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm,
+  * @vm: Pointer to the vm or NULL
+  * @allow_res_evict: Eviction from @vm was allowed. Must be set to the same
+  * value as for xe_vm_set_validation().
+- * @cookie: Cookie obtained from xe_vm_set_validating().
+  *
+  * Register this task as currently making bos resident for the vm. Intended
+  * to avoid eviction by the same task of shared bos bound to the vm.
+  * Call with the vm's resv lock held.
+  */
+-static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict,
+-					  struct pin_cookie cookie)
++static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict)
+ {
+ 	if (vm && !allow_res_evict) {
+-		lockdep_unpin_lock(&xe_vm_resv(vm)->lock.base, cookie);
+ 		/* Pairs with READ_ONCE in xe_vm_is_validating() */
+ 		WRITE_ONCE(vm->validating, NULL);
+ 	}
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index 4b45e31f0bab2c..d27dcfb2b9e4e1 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -1213,7 +1213,13 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 		return ret;
+ 	}
+ 
+-	if (!drvdata->input) {
++	/*
++	 * Check that input registration succeeded. Checking that
++	 * HID_CLAIMED_INPUT is set prevents a UAF when all input devices
++	 * were freed during registration due to no usages being mapped,
++	 * leaving drvdata->input pointing to freed memory.
++	 */
++	if (!drvdata->input || !(hdev->claimed & HID_CLAIMED_INPUT)) {
+ 		hid_err(hdev, "Asus input not registered\n");
+ 		ret = -ENOMEM;
+ 		goto err_stop_hw;
+diff --git a/drivers/hid/hid-elecom.c b/drivers/hid/hid-elecom.c
+index 0ad7d25d98647f..69771fd3500605 100644
+--- a/drivers/hid/hid-elecom.c
++++ b/drivers/hid/hid-elecom.c
+@@ -101,6 +101,7 @@ static const __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 		 */
+ 		mouse_button_fixup(hdev, rdesc, *rsize, 12, 30, 14, 20, 8);
+ 		break;
++	case USB_DEVICE_ID_ELECOM_M_DT2DRBK:
+ 	case USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C:
+ 		/*
+ 		 * Report descriptor format:
+@@ -123,6 +124,7 @@ static const struct hid_device_id elecom_devices[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT2DRBK) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_010C) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_019B) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 33cc5820f2be19..a752c667fbcaa4 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -448,6 +448,7 @@
+ #define USB_DEVICE_ID_ELECOM_M_XT4DRBK	0x00fd
+ #define USB_DEVICE_ID_ELECOM_M_DT1URBK	0x00fe
+ #define USB_DEVICE_ID_ELECOM_M_DT1DRBK	0x00ff
++#define USB_DEVICE_ID_ELECOM_M_DT2DRBK	0x018d
+ #define USB_DEVICE_ID_ELECOM_M_HT1URBK_010C	0x010c
+ #define USB_DEVICE_ID_ELECOM_M_HT1URBK_019B	0x019b
+ #define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D	0x010d
+@@ -831,6 +832,8 @@
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019	0x6019
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E	0x602e
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6093	0x6093
++#define USB_DEVICE_ID_LENOVO_LEGION_GO_DUAL_DINPUT	0x6184
++#define USB_DEVICE_ID_LENOVO_LEGION_GO2_DUAL_DINPUT	0x61ed
+ 
+ #define USB_VENDOR_ID_LETSKETCH		0x6161
+ #define USB_DEVICE_ID_WP9620N		0x4d15
+@@ -904,6 +907,7 @@
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_2		0xc534
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1	0xc539
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1	0xc53f
++#define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_2	0xc543
+ #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_POWERPLAY	0xc53a
+ #define USB_DEVICE_ID_LOGITECH_BOLT_RECEIVER	0xc548
+ #define USB_DEVICE_ID_SPACETRAVELLER	0xc623
+diff --git a/drivers/hid/hid-input-test.c b/drivers/hid/hid-input-test.c
+index 77c2d45ac62a7f..6f5c71660d823b 100644
+--- a/drivers/hid/hid-input-test.c
++++ b/drivers/hid/hid-input-test.c
+@@ -7,7 +7,7 @@
+ 
+ #include <kunit/test.h>
+ 
+-static void hid_test_input_set_battery_charge_status(struct kunit *test)
++static void hid_test_input_update_battery_charge_status(struct kunit *test)
+ {
+ 	struct hid_device *dev;
+ 	bool handled;
+@@ -15,15 +15,15 @@ static void hid_test_input_set_battery_charge_status(struct kunit *test)
+ 	dev = kunit_kzalloc(test, sizeof(*dev), GFP_KERNEL);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+ 
+-	handled = hidinput_set_battery_charge_status(dev, HID_DG_HEIGHT, 0);
++	handled = hidinput_update_battery_charge_status(dev, HID_DG_HEIGHT, 0);
+ 	KUNIT_EXPECT_FALSE(test, handled);
+ 	KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_UNKNOWN);
+ 
+-	handled = hidinput_set_battery_charge_status(dev, HID_BAT_CHARGING, 0);
++	handled = hidinput_update_battery_charge_status(dev, HID_BAT_CHARGING, 0);
+ 	KUNIT_EXPECT_TRUE(test, handled);
+ 	KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_DISCHARGING);
+ 
+-	handled = hidinput_set_battery_charge_status(dev, HID_BAT_CHARGING, 1);
++	handled = hidinput_update_battery_charge_status(dev, HID_BAT_CHARGING, 1);
+ 	KUNIT_EXPECT_TRUE(test, handled);
+ 	KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_CHARGING);
+ }
+@@ -63,7 +63,7 @@ static void hid_test_input_get_battery_property(struct kunit *test)
+ }
+ 
+ static struct kunit_case hid_input_tests[] = {
+-	KUNIT_CASE(hid_test_input_set_battery_charge_status),
++	KUNIT_CASE(hid_test_input_update_battery_charge_status),
+ 	KUNIT_CASE(hid_test_input_get_battery_property),
+ 	{ }
+ };
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index ff1784b5c2a477..f45f856a127fe7 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -595,13 +595,33 @@ static void hidinput_cleanup_battery(struct hid_device *dev)
+ 	dev->battery = NULL;
+ }
+ 
+-static void hidinput_update_battery(struct hid_device *dev, int value)
++static bool hidinput_update_battery_charge_status(struct hid_device *dev,
++						  unsigned int usage, int value)
++{
++	switch (usage) {
++	case HID_BAT_CHARGING:
++		dev->battery_charge_status = value ?
++					     POWER_SUPPLY_STATUS_CHARGING :
++					     POWER_SUPPLY_STATUS_DISCHARGING;
++		return true;
++	}
++
++	return false;
++}
++
++static void hidinput_update_battery(struct hid_device *dev, unsigned int usage,
++				    int value)
+ {
+ 	int capacity;
+ 
+ 	if (!dev->battery)
+ 		return;
+ 
++	if (hidinput_update_battery_charge_status(dev, usage, value)) {
++		power_supply_changed(dev->battery);
++		return;
++	}
++
+ 	if (value == 0 || value < dev->battery_min || value > dev->battery_max)
+ 		return;
+ 
+@@ -617,20 +637,6 @@ static void hidinput_update_battery(struct hid_device *dev, int value)
+ 		power_supply_changed(dev->battery);
+ 	}
+ }
+-
+-static bool hidinput_set_battery_charge_status(struct hid_device *dev,
+-					       unsigned int usage, int value)
+-{
+-	switch (usage) {
+-	case HID_BAT_CHARGING:
+-		dev->battery_charge_status = value ?
+-					     POWER_SUPPLY_STATUS_CHARGING :
+-					     POWER_SUPPLY_STATUS_DISCHARGING;
+-		return true;
+-	}
+-
+-	return false;
+-}
+ #else  /* !CONFIG_HID_BATTERY_STRENGTH */
+ static int hidinput_setup_battery(struct hid_device *dev, unsigned report_type,
+ 				  struct hid_field *field, bool is_percentage)
+@@ -642,14 +648,9 @@ static void hidinput_cleanup_battery(struct hid_device *dev)
+ {
+ }
+ 
+-static void hidinput_update_battery(struct hid_device *dev, int value)
+-{
+-}
+-
+-static bool hidinput_set_battery_charge_status(struct hid_device *dev,
+-					       unsigned int usage, int value)
++static void hidinput_update_battery(struct hid_device *dev, unsigned int usage,
++				    int value)
+ {
+-	return false;
+ }
+ #endif	/* CONFIG_HID_BATTERY_STRENGTH */
+ 
+@@ -1515,11 +1516,7 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
+ 		return;
+ 
+ 	if (usage->type == EV_PWR) {
+-		bool handled = hidinput_set_battery_charge_status(hid, usage->hid, value);
+-
+-		if (!handled)
+-			hidinput_update_battery(hid, value);
+-
++		hidinput_update_battery(hid, usage->hid, value);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 34fa71ceec2b20..cce54dd9884a3e 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1983,6 +1983,10 @@ static const struct hid_device_id logi_dj_receivers[] = {
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 		USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1),
+ 	 .driver_data = recvr_type_gaming_hidpp},
++	{ /* Logitech lightspeed receiver (0xc543) */
++	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
++		USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_2),
++	 .driver_data = recvr_type_gaming_hidpp},
+ 
+ 	{ /* Logitech 27 MHz HID++ 1.0 receiver (0xc513) */
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_MX3000_RECEIVER),
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 10a3bc5f931b43..aaef405a717ee9 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -4596,6 +4596,8 @@ static const struct hid_device_id hidpp_devices[] = {
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC094) },
+ 	{ /* Logitech G Pro X Superlight 2 Gaming Mouse over USB */
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC09b) },
++	{ /* Logitech G PRO 2 LIGHTSPEED Wireless Mouse over USB */
++	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xc09a) },
+ 
+ 	{ /* G935 Gaming Headset */
+ 	  HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0x0a87),
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index a1c54ffe02b497..4c22bd2ba17080 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1461,6 +1461,14 @@ static const __u8 *mt_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	if (hdev->vendor == I2C_VENDOR_ID_GOODIX &&
+ 	    (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 ||
+ 	     hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) {
++		if (*size < 608) {
++			dev_info(
++				&hdev->dev,
++				"GT7868Q fixup: report descriptor is only %u bytes, skipping\n",
++				*size);
++			return rdesc;
++		}
++
+ 		if (rdesc[607] == 0x15) {
+ 			rdesc[607] = 0x25;
+ 			dev_info(
+diff --git a/drivers/hid/hid-ntrig.c b/drivers/hid/hid-ntrig.c
+index 2738ce947434f9..0f76e241e0afb4 100644
+--- a/drivers/hid/hid-ntrig.c
++++ b/drivers/hid/hid-ntrig.c
+@@ -144,6 +144,9 @@ static void ntrig_report_version(struct hid_device *hdev)
+ 	struct usb_device *usb_dev = hid_to_usb_dev(hdev);
+ 	unsigned char *data = kmalloc(8, GFP_KERNEL);
+ 
++	if (!hid_is_usb(hdev))
++		return;
++
+ 	if (!data)
+ 		goto err_free;
+ 
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 9bf9ce8dc80327..416160cfde77ba 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -124,6 +124,8 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_V2), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_T609A), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_ODDOR_HANDBRAKE), HID_QUIRK_ALWAYS_POLL },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_LEGION_GO_DUAL_DINPUT), HID_QUIRK_MULTI_INPUT },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_LEGION_GO2_DUAL_DINPUT), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019), HID_QUIRK_ALWAYS_POLL },
+@@ -410,6 +412,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT2DRBK) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_010C) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK_019B) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
+diff --git a/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c b/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
+index 8a8c4a46f92700..142e5c40192ea8 100644
+--- a/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
++++ b/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
+@@ -406,6 +406,7 @@ static struct quicki2c_device *quicki2c_dev_init(struct pci_dev *pdev, void __io
+  */
+ static void quicki2c_dev_deinit(struct quicki2c_device *qcdev)
+ {
++	thc_interrupt_quiesce(qcdev->thc_hw, true);
+ 	thc_interrupt_enable(qcdev->thc_hw, false);
+ 	thc_ltr_unconfig(qcdev->thc_hw);
+ 
+diff --git a/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-dev.h b/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-dev.h
+index 6ddb584bd61103..97085a6a7452da 100644
+--- a/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-dev.h
++++ b/drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-dev.h
+@@ -71,6 +71,7 @@ struct quicki2c_subip_acpi_parameter {
+ 	u16 device_address;
+ 	u64 connection_speed;
+ 	u8 addressing_mode;
++	u8 reserved;
+ } __packed;
+ 
+ /**
+@@ -120,6 +121,7 @@ struct quicki2c_subip_acpi_config {
+ 	u64 HMTD;
+ 	u64 HMRD;
+ 	u64 HMSL;
++	u8 reserved;
+ };
+ 
+ struct device;
+diff --git a/drivers/hid/intel-thc-hid/intel-thc/intel-thc-dev.c b/drivers/hid/intel-thc-hid/intel-thc/intel-thc-dev.c
+index c105df7f6c8733..4698722e0d0a61 100644
+--- a/drivers/hid/intel-thc-hid/intel-thc/intel-thc-dev.c
++++ b/drivers/hid/intel-thc-hid/intel-thc/intel-thc-dev.c
+@@ -1539,7 +1539,7 @@ int thc_i2c_subip_regs_save(struct thc_device *dev)
+ 
+ 	for (int i = 0; i < ARRAY_SIZE(i2c_subip_regs); i++) {
+ 		ret = thc_i2c_subip_pio_read(dev, i2c_subip_regs[i],
+-					     &read_size, (u32 *)&dev->i2c_subip_regs + i);
++					     &read_size, &dev->i2c_subip_regs[i]);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -1562,7 +1562,7 @@ int thc_i2c_subip_regs_restore(struct thc_device *dev)
+ 
+ 	for (int i = 0; i < ARRAY_SIZE(i2c_subip_regs); i++) {
+ 		ret = thc_i2c_subip_pio_write(dev, i2c_subip_regs[i],
+-					      write_size, (u32 *)&dev->i2c_subip_regs + i);
++					      write_size, &dev->i2c_subip_regs[i]);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 955b39d2252442..9b2c710f8da182 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -684,6 +684,7 @@ static bool wacom_is_art_pen(int tool_id)
+ 	case 0x885:	/* Intuos3 Marker Pen */
+ 	case 0x804:	/* Intuos4/5 13HD/24HD Marker Pen */
+ 	case 0x10804:	/* Intuos4/5 13HD/24HD Art Pen */
++	case 0x204:     /* Art Pen 2 */
+ 		is_art_pen = true;
+ 		break;
+ 	}
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index 2b05722d4dbe8b..ea8a0ab47afd8a 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -39,12 +39,13 @@
+ 
+ #include "hfc_pci.h"
+ 
++static void hfcpci_softirq(struct timer_list *unused);
+ static const char *hfcpci_revision = "2.0";
+ 
+ static int HFC_cnt;
+ static uint debug;
+ static uint poll, tics;
+-static struct timer_list hfc_tl;
++static DEFINE_TIMER(hfc_tl, hfcpci_softirq);
+ static unsigned long hfc_jiffies;
+ 
+ MODULE_AUTHOR("Karsten Keil");
+@@ -2305,8 +2306,7 @@ hfcpci_softirq(struct timer_list *unused)
+ 		hfc_jiffies = jiffies + 1;
+ 	else
+ 		hfc_jiffies += tics;
+-	hfc_tl.expires = hfc_jiffies;
+-	add_timer(&hfc_tl);
++	mod_timer(&hfc_tl, hfc_jiffies);
+ }
+ 
+ static int __init
+@@ -2332,10 +2332,8 @@ HFC_init(void)
+ 	if (poll != HFCPCI_BTRANS_THRESHOLD) {
+ 		printk(KERN_INFO "%s: Using alternative poll value of %d\n",
+ 		       __func__, poll);
+-		timer_setup(&hfc_tl, hfcpci_softirq, 0);
+-		hfc_tl.expires = jiffies + tics;
+-		hfc_jiffies = hfc_tl.expires;
+-		add_timer(&hfc_tl);
++		hfc_jiffies = jiffies + tics;
++		mod_timer(&hfc_tl, hfc_jiffies);
+ 	} else
+ 		tics = 0; /* indicate the use of controller's timer */
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index ec8752c298e693..cb76ab78904fc1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8009,7 +8009,8 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
+ 	}
+ 	rx_rings = min_t(int, rx_rings, hwr.grp);
+ 	hwr.cp = min_t(int, hwr.cp, bp->cp_nr_rings);
+-	if (hwr.stat > bnxt_get_ulp_stat_ctxs(bp))
++	if (bnxt_ulp_registered(bp->edev) &&
++	    hwr.stat > bnxt_get_ulp_stat_ctxs(bp))
+ 		hwr.stat -= bnxt_get_ulp_stat_ctxs(bp);
+ 	hwr.cp = min_t(int, hwr.cp, hwr.stat);
+ 	rc = bnxt_trim_rings(bp, &rx_rings, &hwr.tx, hwr.cp, sh);
+@@ -8017,6 +8018,11 @@ static int __bnxt_reserve_rings(struct bnxt *bp)
+ 		hwr.rx = rx_rings << 1;
+ 	tx_cp = bnxt_num_tx_to_cp(bp, hwr.tx);
+ 	hwr.cp = sh ? max_t(int, tx_cp, rx_rings) : tx_cp + rx_rings;
++	if (hwr.tx != bp->tx_nr_rings) {
++		netdev_warn(bp->dev,
++			    "Able to reserve only %d out of %d requested TX rings\n",
++			    hwr.tx, bp->tx_nr_rings);
++	}
+ 	bp->tx_nr_rings = hwr.tx;
+ 
+ 	/* If we cannot reserve all the RX rings, reset the RSS map only
+@@ -12844,6 +12850,17 @@ static int bnxt_set_xps_mapping(struct bnxt *bp)
+ 	return rc;
+ }
+ 
++static int bnxt_tx_nr_rings(struct bnxt *bp)
++{
++	return bp->num_tc ? bp->tx_nr_rings_per_tc * bp->num_tc :
++			    bp->tx_nr_rings_per_tc;
++}
++
++static int bnxt_tx_nr_rings_per_tc(struct bnxt *bp)
++{
++	return bp->num_tc ? bp->tx_nr_rings / bp->num_tc : bp->tx_nr_rings;
++}
++
+ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ {
+ 	int rc = 0;
+@@ -12861,6 +12878,13 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ 	if (rc)
+ 		return rc;
+ 
++	/* Make adjustments if fewer TX rings were reserved than requested */
++	bp->tx_nr_rings -= bp->tx_nr_rings_xdp;
++	bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
++	if (bp->tx_nr_rings_xdp) {
++		bp->tx_nr_rings_xdp = bp->tx_nr_rings_per_tc;
++		bp->tx_nr_rings += bp->tx_nr_rings_xdp;
++	}
+ 	rc = bnxt_alloc_mem(bp, irq_re_init);
+ 	if (rc) {
+ 		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
+@@ -16338,7 +16362,7 @@ static void bnxt_trim_dflt_sh_rings(struct bnxt *bp)
+ 	bp->cp_nr_rings = min_t(int, bp->tx_nr_rings_per_tc, bp->rx_nr_rings);
+ 	bp->rx_nr_rings = bp->cp_nr_rings;
+ 	bp->tx_nr_rings_per_tc = bp->cp_nr_rings;
+-	bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
++	bp->tx_nr_rings = bnxt_tx_nr_rings(bp);
+ }
+ 
+ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
+@@ -16370,7 +16394,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
+ 		bnxt_trim_dflt_sh_rings(bp);
+ 	else
+ 		bp->cp_nr_rings = bp->tx_nr_rings_per_tc + bp->rx_nr_rings;
+-	bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
++	bp->tx_nr_rings = bnxt_tx_nr_rings(bp);
+ 
+ 	avail_msix = bnxt_get_max_func_irqs(bp) - bp->cp_nr_rings;
+ 	if (avail_msix >= BNXT_MIN_ROCE_CP_RINGS) {
+@@ -16383,7 +16407,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
+ 	rc = __bnxt_reserve_rings(bp);
+ 	if (rc && rc != -ENODEV)
+ 		netdev_warn(bp->dev, "Unable to reserve tx rings\n");
+-	bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
++	bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
+ 	if (sh)
+ 		bnxt_trim_dflt_sh_rings(bp);
+ 
+@@ -16392,7 +16416,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
+ 		rc = __bnxt_reserve_rings(bp);
+ 		if (rc && rc != -ENODEV)
+ 			netdev_warn(bp->dev, "2nd rings reservation failed.\n");
+-		bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
++		bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
+ 	}
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp)) {
+ 		bp->rx_nr_rings++;
+@@ -16426,7 +16450,7 @@ static int bnxt_init_dflt_ring_mode(struct bnxt *bp)
+ 	if (rc)
+ 		goto init_dflt_ring_err;
+ 
+-	bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
++	bp->tx_nr_rings_per_tc = bnxt_tx_nr_rings_per_tc(bp);
+ 
+ 	bnxt_set_dflt_rfs(bp);
+ 
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index d1f1ae5ea161cc..d949d2ba6cb9fa 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3090,7 +3090,7 @@ static void gem_update_stats(struct macb *bp)
+ 			/* Add GEM_OCTTXH, GEM_OCTRXH */
+ 			val = bp->macb_reg_readl(bp, offset + 4);
+ 			bp->ethtool_stats[i] += ((u64)val) << 32;
+-			*(p++) += ((u64)val) << 32;
++			*p += ((u64)val) << 32;
+ 		}
+ 	}
+ 
+@@ -5391,19 +5391,16 @@ static void macb_remove(struct platform_device *pdev)
+ 
+ 	if (dev) {
+ 		bp = netdev_priv(dev);
++		unregister_netdev(dev);
+ 		phy_exit(bp->sgmii_phy);
+ 		mdiobus_unregister(bp->mii_bus);
+ 		mdiobus_free(bp->mii_bus);
+ 
+-		unregister_netdev(dev);
++		device_set_wakeup_enable(&bp->pdev->dev, 0);
+ 		cancel_work_sync(&bp->hresp_err_bh_work);
+ 		pm_runtime_disable(&pdev->dev);
+ 		pm_runtime_dont_use_autosuspend(&pdev->dev);
+-		if (!pm_runtime_suspended(&pdev->dev)) {
+-			macb_clks_disable(bp->pclk, bp->hclk, bp->tx_clk,
+-					  bp->rx_clk, bp->tsu_clk);
+-			pm_runtime_set_suspended(&pdev->dev);
+-		}
++		pm_runtime_set_suspended(&pdev->dev);
+ 		phylink_destroy(bp->phylink);
+ 		free_netdev(dev);
+ 	}
+diff --git a/drivers/net/ethernet/dlink/dl2k.c b/drivers/net/ethernet/dlink/dl2k.c
+index da9b7715df050a..f828f38cd76826 100644
+--- a/drivers/net/ethernet/dlink/dl2k.c
++++ b/drivers/net/ethernet/dlink/dl2k.c
+@@ -1091,7 +1091,7 @@ get_stats (struct net_device *dev)
+ 	dev->stats.rx_bytes += dr32(OctetRcvOk);
+ 	dev->stats.tx_bytes += dr32(OctetXmtOk);
+ 
+-	dev->stats.multicast = dr32(McstFramesRcvdOk);
++	dev->stats.multicast += dr32(McstFramesRcvdOk);
+ 	dev->stats.collisions += dr32(SingleColFrames)
+ 			     +  dr32(MultiColFrames);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index ddd0ad68185b44..0ef11b7ab477eb 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -509,6 +509,7 @@ enum ice_pf_flags {
+ 	ICE_FLAG_LINK_LENIENT_MODE_ENA,
+ 	ICE_FLAG_PLUG_AUX_DEV,
+ 	ICE_FLAG_UNPLUG_AUX_DEV,
++	ICE_FLAG_AUX_DEV_CREATED,
+ 	ICE_FLAG_MTU_CHANGED,
+ 	ICE_FLAG_GNSS,			/* GNSS successfully initialized */
+ 	ICE_FLAG_DPLL,			/* SyncE/PTP dplls initialized */
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.c b/drivers/net/ethernet/intel/ice/ice_adapter.c
+index 66e070095d1bbe..10285995c9eddd 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.c
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.c
+@@ -13,16 +13,45 @@
+ static DEFINE_XARRAY(ice_adapters);
+ static DEFINE_MUTEX(ice_adapters_mutex);
+ 
+-static unsigned long ice_adapter_index(u64 dsn)
++#define ICE_ADAPTER_FIXED_INDEX	BIT_ULL(63)
++
++#define ICE_ADAPTER_INDEX_E825C	\
++	(ICE_DEV_ID_E825C_BACKPLANE | ICE_ADAPTER_FIXED_INDEX)
++
++static u64 ice_adapter_index(struct pci_dev *pdev)
+ {
++	switch (pdev->device) {
++	case ICE_DEV_ID_E825C_BACKPLANE:
++	case ICE_DEV_ID_E825C_QSFP:
++	case ICE_DEV_ID_E825C_SFP:
++	case ICE_DEV_ID_E825C_SGMII:
++		/* E825C devices have multiple NACs which are connected to the
++		 * same clock source, and which must share the same
++		 * ice_adapter structure. We can't use the serial number since
++		 * each NAC has its own NVM generated with its own unique
++		 * Device Serial Number. Instead, rely on the embedded nature
++		 * of the E825C devices, and use a fixed index. This relies on
++		 * the fact that all E825C physical functions in a given
++		 * system are part of the same overall device.
++		 */
++		return ICE_ADAPTER_INDEX_E825C;
++	default:
++		return pci_get_dsn(pdev) & ~ICE_ADAPTER_FIXED_INDEX;
++	}
++}
++
++static unsigned long ice_adapter_xa_index(struct pci_dev *pdev)
++{
++	u64 index = ice_adapter_index(pdev);
++
+ #if BITS_PER_LONG == 64
+-	return dsn;
++	return index;
+ #else
+-	return (u32)dsn ^ (u32)(dsn >> 32);
++	return (u32)index ^ (u32)(index >> 32);
+ #endif
+ }
+ 
+-static struct ice_adapter *ice_adapter_new(u64 dsn)
++static struct ice_adapter *ice_adapter_new(struct pci_dev *pdev)
+ {
+ 	struct ice_adapter *adapter;
+ 
+@@ -30,7 +59,7 @@ static struct ice_adapter *ice_adapter_new(u64 dsn)
+ 	if (!adapter)
+ 		return NULL;
+ 
+-	adapter->device_serial_number = dsn;
++	adapter->index = ice_adapter_index(pdev);
+ 	spin_lock_init(&adapter->ptp_gltsyn_time_lock);
+ 	refcount_set(&adapter->refcount, 1);
+ 
+@@ -63,24 +92,23 @@ static void ice_adapter_free(struct ice_adapter *adapter)
+  */
+ struct ice_adapter *ice_adapter_get(struct pci_dev *pdev)
+ {
+-	u64 dsn = pci_get_dsn(pdev);
+ 	struct ice_adapter *adapter;
+ 	unsigned long index;
+ 	int err;
+ 
+-	index = ice_adapter_index(dsn);
++	index = ice_adapter_xa_index(pdev);
+ 	scoped_guard(mutex, &ice_adapters_mutex) {
+ 		err = xa_insert(&ice_adapters, index, NULL, GFP_KERNEL);
+ 		if (err == -EBUSY) {
+ 			adapter = xa_load(&ice_adapters, index);
+ 			refcount_inc(&adapter->refcount);
+-			WARN_ON_ONCE(adapter->device_serial_number != dsn);
++			WARN_ON_ONCE(adapter->index != ice_adapter_index(pdev));
+ 			return adapter;
+ 		}
+ 		if (err)
+ 			return ERR_PTR(err);
+ 
+-		adapter = ice_adapter_new(dsn);
++		adapter = ice_adapter_new(pdev);
+ 		if (!adapter)
+ 			return ERR_PTR(-ENOMEM);
+ 		xa_store(&ice_adapters, index, adapter, GFP_KERNEL);
+@@ -99,11 +127,10 @@ struct ice_adapter *ice_adapter_get(struct pci_dev *pdev)
+  */
+ void ice_adapter_put(struct pci_dev *pdev)
+ {
+-	u64 dsn = pci_get_dsn(pdev);
+ 	struct ice_adapter *adapter;
+ 	unsigned long index;
+ 
+-	index = ice_adapter_index(dsn);
++	index = ice_adapter_xa_index(pdev);
+ 	scoped_guard(mutex, &ice_adapters_mutex) {
+ 		adapter = xa_load(&ice_adapters, index);
+ 		if (WARN_ON(!adapter))
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.h b/drivers/net/ethernet/intel/ice/ice_adapter.h
+index ac15c0d2bc1a47..409467847c7536 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.h
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.h
+@@ -32,7 +32,7 @@ struct ice_port_list {
+  * @refcount: Reference count. struct ice_pf objects hold the references.
+  * @ctrl_pf: Control PF of the adapter
+  * @ports: Ports list
+- * @device_serial_number: DSN cached for collision detection on 32bit systems
++ * @index: 64-bit index cached for collision detection on 32bit systems
+  */
+ struct ice_adapter {
+ 	refcount_t refcount;
+@@ -41,7 +41,7 @@ struct ice_adapter {
+ 
+ 	struct ice_pf *ctrl_pf;
+ 	struct ice_port_list ports;
+-	u64 device_serial_number;
++	u64 index;
+ };
+ 
+ struct ice_adapter *ice_adapter_get(struct pci_dev *pdev);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
+index 351824dc3c6245..1d3e1b188d22c8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
+@@ -2376,7 +2376,13 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
+  * The function will apply the new Tx topology from the package buffer
+  * if available.
+  *
+- * Return: zero when update was successful, negative values otherwise.
++ * Return:
++ * * 0 - Successfully applied topology configuration.
++ * * -EBUSY - Failed to acquire global configuration lock.
++ * * -EEXIST - Topology configuration has already been applied.
++ * * -EIO - Unable to apply topology configuration.
++ * * -ENODEV - Failed to re-initialize device after applying configuration.
++ * * Other negative error codes indicate unexpected failures.
+  */
+ int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
+ {
+@@ -2409,7 +2415,7 @@ int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
+ 
+ 	if (status) {
+ 		ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n");
+-		return status;
++		return -EIO;
+ 	}
+ 
+ 	/* Is default topology already applied ? */
+@@ -2496,31 +2502,45 @@ int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
+ 				 ICE_GLOBAL_CFG_LOCK_TIMEOUT);
+ 	if (status) {
+ 		ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global lock\n");
+-		return status;
++		return -EBUSY;
+ 	}
+ 
+ 	/* Check if reset was triggered already. */
+ 	reg = rd32(hw, GLGEN_RSTAT);
+ 	if (reg & GLGEN_RSTAT_DEVSTATE_M) {
+-		/* Reset is in progress, re-init the HW again */
+ 		ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. Layer topology might be applied already\n");
+ 		ice_check_reset(hw);
+-		return 0;
++		/* Reset is in progress, re-init the HW again */
++		goto reinit_hw;
+ 	}
+ 
+ 	/* Set new topology */
+ 	status = ice_get_set_tx_topo(hw, new_topo, size, NULL, NULL, true);
+ 	if (status) {
+-		ice_debug(hw, ICE_DBG_INIT, "Failed setting Tx topology\n");
+-		return status;
++		ice_debug(hw, ICE_DBG_INIT, "Failed to set Tx topology, status %pe\n",
++			  ERR_PTR(status));
++		/* Only report -EIO here; the caller checks the error value
++		 * and prints an informational message saying that the
++		 * driver failed to program the Tx topology.
++		 */
++		status = -EIO;
+ 	}
+ 
+-	/* New topology is updated, delay 1 second before issuing the CORER */
++	/* Even if the Tx topology config failed, we need a CORE reset here
++	 * to clear the global configuration lock. Delay 1 second to allow
++	 * the hardware to settle, then issue a CORER.
++	 */
+ 	msleep(1000);
+ 	ice_reset(hw, ICE_RESET_CORER);
+-	/* CORER will clear the global lock, so no explicit call
+-	 * required for release.
+-	 */
++	ice_check_reset(hw);
++
++reinit_hw:
++	/* Since we triggered a CORER, re-initialize hardware */
++	ice_deinit_hw(hw);
++	if (ice_init_hw(hw)) {
++		ice_debug(hw, ICE_DBG_INIT, "Failed to re-init hardware after setting Tx topology\n");
++		return -ENODEV;
++	}
+ 
+-	return 0;
++	return status;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
+index 6ab53e430f9121..420d45c2558b62 100644
+--- a/drivers/net/ethernet/intel/ice/ice_idc.c
++++ b/drivers/net/ethernet/intel/ice/ice_idc.c
+@@ -336,6 +336,7 @@ int ice_plug_aux_dev(struct ice_pf *pf)
+ 	mutex_lock(&pf->adev_mutex);
+ 	cdev->adev = adev;
+ 	mutex_unlock(&pf->adev_mutex);
++	set_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags);
+ 
+ 	return 0;
+ }
+@@ -347,15 +348,16 @@ void ice_unplug_aux_dev(struct ice_pf *pf)
+ {
+ 	struct auxiliary_device *adev;
+ 
++	if (!test_and_clear_bit(ICE_FLAG_AUX_DEV_CREATED, pf->flags))
++		return;
++
+ 	mutex_lock(&pf->adev_mutex);
+ 	adev = pf->cdev_info->adev;
+ 	pf->cdev_info->adev = NULL;
+ 	mutex_unlock(&pf->adev_mutex);
+ 
+-	if (adev) {
+-		auxiliary_device_delete(adev);
+-		auxiliary_device_uninit(adev);
+-	}
++	auxiliary_device_delete(adev);
++	auxiliary_device_uninit(adev);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 0a11b4281092e2..d42892c8c5a12c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4532,17 +4532,23 @@ ice_init_tx_topology(struct ice_hw *hw, const struct firmware *firmware)
+ 			dev_info(dev, "Tx scheduling layers switching feature disabled\n");
+ 		else
+ 			dev_info(dev, "Tx scheduling layers switching feature enabled\n");
+-		/* if there was a change in topology ice_cfg_tx_topo triggered
+-		 * a CORER and we need to re-init hw
++		return 0;
++	} else if (err == -ENODEV) {
++		/* If we failed to re-initialize the device, we can no longer
++		 * continue loading.
+ 		 */
+-		ice_deinit_hw(hw);
+-		err = ice_init_hw(hw);
+-
++		dev_warn(dev, "Failed to initialize hardware after applying Tx scheduling configuration.\n");
+ 		return err;
+ 	} else if (err == -EIO) {
+ 		dev_info(dev, "DDP package does not support Tx scheduling layers switching feature - please update to the latest DDP package and try again\n");
++		return 0;
++	} else if (err == -EEXIST) {
++		return 0;
+ 	}
+ 
++	/* Do not treat this as a fatal error. */
++	dev_info(dev, "Failed to apply Tx scheduling configuration, err %pe\n",
++		 ERR_PTR(err));
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 0e5107fe62ad5b..c50cf3ad190e9a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -1295,7 +1295,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 			skb = ice_construct_skb(rx_ring, xdp);
+ 		/* exit if we failed to retrieve a buffer */
+ 		if (!skb) {
+-			rx_ring->ring_stats->rx_stats.alloc_page_failed++;
++			rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
+ 			xdp_verdict = ICE_XDP_CONSUMED;
+ 		}
+ 		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+index 993c354aa27adf..bf9b820c833031 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+@@ -179,6 +179,58 @@ static int idpf_tx_singleq_csum(struct sk_buff *skb,
+ 	return 1;
+ }
+ 
++/**
++ * idpf_tx_singleq_dma_map_error - handle TX DMA map errors
++ * @txq: queue to send buffer on
++ * @skb: send buffer
++ * @first: original first buffer info buffer for packet
++ * @idx: starting point on ring to unwind
++ */
++static void idpf_tx_singleq_dma_map_error(struct idpf_tx_queue *txq,
++					  struct sk_buff *skb,
++					  struct idpf_tx_buf *first, u16 idx)
++{
++	struct libeth_sq_napi_stats ss = { };
++	struct libeth_cq_pp cp = {
++		.dev	= txq->dev,
++		.ss	= &ss,
++	};
++
++	u64_stats_update_begin(&txq->stats_sync);
++	u64_stats_inc(&txq->q_stats.dma_map_errs);
++	u64_stats_update_end(&txq->stats_sync);
++
++	/* clear dma mappings for failed tx_buf map */
++	for (;;) {
++		struct idpf_tx_buf *tx_buf;
++
++		tx_buf = &txq->tx_buf[idx];
++		libeth_tx_complete(tx_buf, &cp);
++		if (tx_buf == first)
++			break;
++		if (idx == 0)
++			idx = txq->desc_count;
++		idx--;
++	}
++
++	if (skb_is_gso(skb)) {
++		union idpf_tx_flex_desc *tx_desc;
++
++		/* If we failed a DMA mapping for a TSO packet, we will have
++		 * used one additional descriptor for a context
++		 * descriptor. Reset that here.
++		 */
++		tx_desc = &txq->flex_tx[idx];
++		memset(tx_desc, 0, sizeof(*tx_desc));
++		if (idx == 0)
++			idx = txq->desc_count;
++		idx--;
++	}
++
++	/* Update tail in case netdev_xmit_more was previously true */
++	idpf_tx_buf_hw_update(txq, idx, false);
++}
++
+ /**
+  * idpf_tx_singleq_map - Build the Tx base descriptor
+  * @tx_q: queue to send buffer on
+@@ -219,8 +271,9 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q,
+ 	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ 		unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
+ 
+-		if (dma_mapping_error(tx_q->dev, dma))
+-			return idpf_tx_dma_map_error(tx_q, skb, first, i);
++		if (unlikely(dma_mapping_error(tx_q->dev, dma)))
++			return idpf_tx_singleq_dma_map_error(tx_q, skb,
++							     first, i);
+ 
+ 		/* record length, and DMA address */
+ 		dma_unmap_len_set(tx_buf, len, size);
+@@ -362,11 +415,11 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ {
+ 	struct idpf_tx_offload_params offload = { };
+ 	struct idpf_tx_buf *first;
++	u32 count, buf_count = 1;
+ 	int csum, tso, needed;
+-	unsigned int count;
+ 	__be16 protocol;
+ 
+-	count = idpf_tx_desc_count_required(tx_q, skb);
++	count = idpf_tx_res_count_required(tx_q, skb, &buf_count);
+ 	if (unlikely(!count))
+ 		return idpf_tx_drop_skb(tx_q, skb);
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 89185d1b8485ee..cd83243e7c765c 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -13,6 +13,7 @@ struct idpf_tx_stash {
+ 	struct libeth_sqe buf;
+ };
+ 
++#define idpf_tx_buf_next(buf)		(*(u32 *)&(buf)->priv)
+ #define idpf_tx_buf_compl_tag(buf)	(*(u32 *)&(buf)->priv)
+ LIBETH_SQE_CHECK_PRIV(u32);
+ 
+@@ -91,7 +92,7 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq)
+ 		return;
+ 
+ 	/* Free all the Tx buffer sk_buffs */
+-	for (i = 0; i < txq->desc_count; i++)
++	for (i = 0; i < txq->buf_pool_size; i++)
+ 		libeth_tx_complete(&txq->tx_buf[i], &cp);
+ 
+ 	kfree(txq->tx_buf);
+@@ -139,6 +140,9 @@ static void idpf_tx_desc_rel(struct idpf_tx_queue *txq)
+ 	if (!txq->desc_ring)
+ 		return;
+ 
++	if (txq->refillq)
++		kfree(txq->refillq->ring);
++
+ 	dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma);
+ 	txq->desc_ring = NULL;
+ 	txq->next_to_use = 0;
+@@ -196,14 +200,17 @@ static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
+ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q)
+ {
+ 	struct idpf_buf_lifo *buf_stack;
+-	int buf_size;
+ 	int i;
+ 
+ 	/* Allocate book keeping buffers only. Buffers to be supplied to HW
+ 	 * are allocated by kernel network stack and received as part of skb
+ 	 */
+-	buf_size = sizeof(struct idpf_tx_buf) * tx_q->desc_count;
+-	tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL);
++	if (idpf_queue_has(FLOW_SCH_EN, tx_q))
++		tx_q->buf_pool_size = U16_MAX;
++	else
++		tx_q->buf_pool_size = tx_q->desc_count;
++	tx_q->tx_buf = kcalloc(tx_q->buf_pool_size, sizeof(*tx_q->tx_buf),
++			       GFP_KERNEL);
+ 	if (!tx_q->tx_buf)
+ 		return -ENOMEM;
+ 
+@@ -244,6 +251,7 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
+ 			      struct idpf_tx_queue *tx_q)
+ {
+ 	struct device *dev = tx_q->dev;
++	struct idpf_sw_queue *refillq;
+ 	int err;
+ 
+ 	err = idpf_tx_buf_alloc_all(tx_q);
+@@ -267,6 +275,29 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
+ 	tx_q->next_to_clean = 0;
+ 	idpf_queue_set(GEN_CHK, tx_q);
+ 
++	if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
++		return 0;
++
++	refillq = tx_q->refillq;
++	refillq->desc_count = tx_q->buf_pool_size;
++	refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
++				GFP_KERNEL);
++	if (!refillq->ring) {
++		err = -ENOMEM;
++		goto err_alloc;
++	}
++
++	for (unsigned int i = 0; i < refillq->desc_count; i++)
++		refillq->ring[i] =
++			FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) |
++			FIELD_PREP(IDPF_RFL_BI_GEN_M,
++				   idpf_queue_has(GEN_CHK, refillq));
++
++	/* Go ahead and flip the GEN bit since this counts as filling
++	 * up the ring, i.e. the ring has already wrapped.
++	 */
++	idpf_queue_change(GEN_CHK, refillq);
++
+ 	return 0;
+ 
+ err_alloc:
+@@ -603,18 +634,18 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
+ }
+ 
+ /**
+- * idpf_rx_post_buf_refill - Post buffer id to refill queue
++ * idpf_post_buf_refill - Post buffer id to refill queue
+  * @refillq: refill queue to post to
+  * @buf_id: buffer id to post
+  */
+-static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
++static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
+ {
+ 	u32 nta = refillq->next_to_use;
+ 
+ 	/* store the buffer ID and the SW maintained GEN bit to the refillq */
+ 	refillq->ring[nta] =
+-		FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) |
+-		FIELD_PREP(IDPF_RX_BI_GEN_M,
++		FIELD_PREP(IDPF_RFL_BI_BUFID_M, buf_id) |
++		FIELD_PREP(IDPF_RFL_BI_GEN_M,
+ 			   idpf_queue_has(GEN_CHK, refillq));
+ 
+ 	if (unlikely(++nta == refillq->desc_count)) {
+@@ -995,6 +1026,11 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
+ 		struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+ 
+ 		for (j = 0; j < txq_grp->num_txq; j++) {
++			if (flow_sch_en) {
++				kfree(txq_grp->txqs[j]->refillq);
++				txq_grp->txqs[j]->refillq = NULL;
++			}
++
+ 			kfree(txq_grp->txqs[j]);
+ 			txq_grp->txqs[j] = NULL;
+ 		}
+@@ -1414,6 +1450,13 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+ 			}
+ 
+ 			idpf_queue_set(FLOW_SCH_EN, q);
++
++			q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL);
++			if (!q->refillq)
++				goto err_alloc;
++
++			idpf_queue_set(GEN_CHK, q->refillq);
++			idpf_queue_set(RFL_GEN_CHK, q->refillq);
+ 		}
+ 
+ 		if (!split)
+@@ -1828,6 +1871,12 @@ static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
+ 	struct idpf_tx_buf *tx_buf;
+ 	bool clean_complete = true;
+ 
++	if (descs_only) {
++		/* Bump ring index to mark as cleaned. */
++		tx_q->next_to_clean = end;
++		return true;
++	}
++
+ 	tx_desc = &tx_q->flex_tx[ntc];
+ 	next_pending_desc = &tx_q->flex_tx[end];
+ 	tx_buf = &tx_q->tx_buf[ntc];
+@@ -1894,87 +1943,43 @@ do {							\
+ } while (0)
+ 
+ /**
+- * idpf_tx_clean_buf_ring - clean flow scheduling TX queue buffers
++ * idpf_tx_clean_bufs - clean flow scheduling TX queue buffers
+  * @txq: queue to clean
+- * @compl_tag: completion tag of packet to clean (from completion descriptor)
++ * @buf_id: packet's starting buffer ID, from completion descriptor
+  * @cleaned: pointer to stats struct to track cleaned packets/bytes
+  * @budget: Used to determine if we are in netpoll
+  *
+- * Cleans all buffers associated with the input completion tag either from the
+- * TX buffer ring or from the hash table if the buffers were previously
+- * stashed. Returns the byte/segment count for the cleaned packet associated
+- * this completion tag.
++ * Clean all buffers associated with the packet starting at buf_id. Returns the
++ * byte/segment count for the cleaned packet.
+  */
+-static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
+-				   struct libeth_sq_napi_stats *cleaned,
+-				   int budget)
++static bool idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
++			       struct libeth_sq_napi_stats *cleaned,
++			       int budget)
+ {
+-	u16 idx = compl_tag & txq->compl_tag_bufid_m;
+ 	struct idpf_tx_buf *tx_buf = NULL;
+ 	struct libeth_cq_pp cp = {
+ 		.dev	= txq->dev,
+ 		.ss	= cleaned,
+ 		.napi	= budget,
+ 	};
+-	u16 ntc, orig_idx = idx;
+-
+-	tx_buf = &txq->tx_buf[idx];
+-
+-	if (unlikely(tx_buf->type <= LIBETH_SQE_CTX ||
+-		     idpf_tx_buf_compl_tag(tx_buf) != compl_tag))
+-		return false;
+ 
++	tx_buf = &txq->tx_buf[buf_id];
+ 	if (tx_buf->type == LIBETH_SQE_SKB) {
+ 		if (skb_shinfo(tx_buf->skb)->tx_flags & SKBTX_IN_PROGRESS)
+ 			idpf_tx_read_tstamp(txq, tx_buf->skb);
+ 
+ 		libeth_tx_complete(tx_buf, &cp);
++		idpf_post_buf_refill(txq->refillq, buf_id);
+ 	}
+ 
+-	idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
++	while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
++		buf_id = idpf_tx_buf_next(tx_buf);
+ 
+-	while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) {
++		tx_buf = &txq->tx_buf[buf_id];
+ 		libeth_tx_complete(tx_buf, &cp);
+-		idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
++		idpf_post_buf_refill(txq->refillq, buf_id);
+ 	}
+ 
+-	/*
+-	 * It's possible the packet we just cleaned was an out of order
+-	 * completion, which means we can stash the buffers starting from
+-	 * the original next_to_clean and reuse the descriptors. We need
+-	 * to compare the descriptor ring next_to_clean packet's "first" buffer
+-	 * to the "first" buffer of the packet we just cleaned to determine if
+-	 * this is the case. Howevever, next_to_clean can point to either a
+-	 * reserved buffer that corresponds to a context descriptor used for the
+-	 * next_to_clean packet (TSO packet) or the "first" buffer (single
+-	 * packet). The orig_idx from the packet we just cleaned will always
+-	 * point to the "first" buffer. If next_to_clean points to a reserved
+-	 * buffer, let's bump ntc once and start the comparison from there.
+-	 */
+-	ntc = txq->next_to_clean;
+-	tx_buf = &txq->tx_buf[ntc];
+-
+-	if (tx_buf->type == LIBETH_SQE_CTX)
+-		idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, tx_buf);
+-
+-	/*
+-	 * If ntc still points to a different "first" buffer, clean the
+-	 * descriptor ring and stash all of the buffers for later cleaning. If
+-	 * we cannot stash all of the buffers, next_to_clean will point to the
+-	 * "first" buffer of the packet that could not be stashed and cleaning
+-	 * will start there next time.
+-	 */
+-	if (unlikely(tx_buf != &txq->tx_buf[orig_idx] &&
+-		     !idpf_tx_splitq_clean(txq, orig_idx, budget, cleaned,
+-					   true)))
+-		return true;
+-
+-	/*
+-	 * Otherwise, update next_to_clean to reflect the cleaning that was
+-	 * done above.
+-	 */
+-	txq->next_to_clean = idx;
+-
+ 	return true;
+ }
+ 
+@@ -2008,7 +2013,7 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
+ 	/* If we didn't clean anything on the ring, this packet must be
+ 	 * in the hash table. Go clean it there.
+ 	 */
+-	if (!idpf_tx_clean_buf_ring(txq, compl_tag, cleaned, budget))
++	if (!idpf_tx_clean_bufs(txq, compl_tag, cleaned, budget))
+ 		idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget);
+ }
+ 
+@@ -2184,15 +2189,22 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ 	desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+ }
+ 
+-/* Global conditions to tell whether the txq (and related resources)
+- * has room to allow the use of "size" descriptors.
++/**
++ * idpf_tx_splitq_has_room - check if enough Tx splitq resources are available
++ * @tx_q: the queue to be checked
++ * @descs_needed: number of descriptors required for this packet
++ * @bufs_needed: number of Tx buffers required for this packet
++ *
++ * Return: 0 if no room available, 1 otherwise
+  */
+-static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size)
++static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 descs_needed,
++			     u32 bufs_needed)
+ {
+-	if (IDPF_DESC_UNUSED(tx_q) < size ||
++	if (IDPF_DESC_UNUSED(tx_q) < descs_needed ||
+ 	    IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
+ 		IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) ||
+-	    IDPF_TX_BUF_RSV_LOW(tx_q))
++	    IDPF_TX_BUF_RSV_LOW(tx_q) ||
++	    idpf_tx_splitq_get_free_bufs(tx_q->refillq) < bufs_needed)
+ 		return 0;
+ 	return 1;
+ }
+@@ -2201,14 +2213,21 @@ static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size)
+  * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
+  * @tx_q: the queue to be checked
+  * @descs_needed: number of descriptors required for this packet
++ * @bufs_needed: number of buffers needed for this packet
+  *
+- * Returns 0 if stop is not needed
++ * Return: 0 if stop is not needed
+  */
+ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
+-				     unsigned int descs_needed)
++				     u32 descs_needed,
++				     u32 bufs_needed)
+ {
++	/* Since we have multiple resources to check for splitq, our
++	 * start/stop thresholds become a boolean check instead of a
++	 * count threshold.
++	 */
+ 	if (netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
+-				      idpf_txq_has_room(tx_q, descs_needed),
++				      idpf_txq_has_room(tx_q, descs_needed,
++							bufs_needed),
+ 				      1, 1))
+ 		return 0;
+ 
+@@ -2250,14 +2269,16 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ }
+ 
+ /**
+- * idpf_tx_desc_count_required - calculate number of Tx descriptors needed
++ * idpf_tx_res_count_required - get number of Tx resources needed for this pkt
+  * @txq: queue to send buffer on
+  * @skb: send buffer
++ * @bufs_needed: (output) number of buffers needed for this skb.
+  *
+- * Returns number of data descriptors needed for this skb.
++ * Return: number of data descriptors and buffers needed for this skb.
+  */
+-unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+-					 struct sk_buff *skb)
++unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq,
++					struct sk_buff *skb,
++					u32 *bufs_needed)
+ {
+ 	const struct skb_shared_info *shinfo;
+ 	unsigned int count = 0, i;
+@@ -2268,6 +2289,7 @@ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+ 		return count;
+ 
+ 	shinfo = skb_shinfo(skb);
++	*bufs_needed += shinfo->nr_frags;
+ 	for (i = 0; i < shinfo->nr_frags; i++) {
+ 		unsigned int size;
+ 
+@@ -2297,71 +2319,91 @@ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+ }
+ 
+ /**
+- * idpf_tx_dma_map_error - handle TX DMA map errors
+- * @txq: queue to send buffer on
+- * @skb: send buffer
+- * @first: original first buffer info buffer for packet
+- * @idx: starting point on ring to unwind
++ * idpf_tx_splitq_bump_ntu - adjust NTU and generation
++ * @txq: the tx ring to wrap
++ * @ntu: ring index to bump
+  */
+-void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
+-			   struct idpf_tx_buf *first, u16 idx)
++static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
+ {
+-	struct libeth_sq_napi_stats ss = { };
+-	struct libeth_cq_pp cp = {
+-		.dev	= txq->dev,
+-		.ss	= &ss,
+-	};
++	ntu++;
+ 
+-	u64_stats_update_begin(&txq->stats_sync);
+-	u64_stats_inc(&txq->q_stats.dma_map_errs);
+-	u64_stats_update_end(&txq->stats_sync);
++	if (ntu == txq->desc_count) {
++		ntu = 0;
++		txq->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(txq);
++	}
+ 
+-	/* clear dma mappings for failed tx_buf map */
+-	for (;;) {
+-		struct idpf_tx_buf *tx_buf;
++	return ntu;
++}
+ 
+-		tx_buf = &txq->tx_buf[idx];
+-		libeth_tx_complete(tx_buf, &cp);
+-		if (tx_buf == first)
+-			break;
+-		if (idx == 0)
+-			idx = txq->desc_count;
+-		idx--;
+-	}
++/**
++ * idpf_tx_get_free_buf_id - get a free buffer ID from the refill queue
++ * @refillq: refill queue to get buffer ID from
++ * @buf_id: return buffer ID
++ *
++ * Return: true if a buffer ID was found, false if not
++ */
++static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq,
++				    u32 *buf_id)
++{
++	u32 ntc = refillq->next_to_clean;
++	u32 refill_desc;
+ 
+-	if (skb_is_gso(skb)) {
+-		union idpf_tx_flex_desc *tx_desc;
++	refill_desc = refillq->ring[ntc];
+ 
+-		/* If we failed a DMA mapping for a TSO packet, we will have
+-		 * used one additional descriptor for a context
+-		 * descriptor. Reset that here.
+-		 */
+-		tx_desc = &txq->flex_tx[idx];
+-		memset(tx_desc, 0, sizeof(*tx_desc));
+-		if (idx == 0)
+-			idx = txq->desc_count;
+-		idx--;
++	if (unlikely(idpf_queue_has(RFL_GEN_CHK, refillq) !=
++		     !!(refill_desc & IDPF_RFL_BI_GEN_M)))
++		return false;
++
++	*buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
++
++	if (unlikely(++ntc == refillq->desc_count)) {
++		idpf_queue_change(RFL_GEN_CHK, refillq);
++		ntc = 0;
+ 	}
+ 
+-	/* Update tail in case netdev_xmit_more was previously true */
+-	idpf_tx_buf_hw_update(txq, idx, false);
++	refillq->next_to_clean = ntc;
++
++	return true;
+ }
+ 
+ /**
+- * idpf_tx_splitq_bump_ntu - adjust NTU and generation
+- * @txq: the tx ring to wrap
+- * @ntu: ring index to bump
++ * idpf_tx_splitq_pkt_err_unmap - Unmap buffers and bump tail in case of error
++ * @txq: Tx queue to unwind
++ * @params: pointer to splitq params struct
++ * @first: starting buffer for packet to unmap
+  */
+-static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
++static void idpf_tx_splitq_pkt_err_unmap(struct idpf_tx_queue *txq,
++					 struct idpf_tx_splitq_params *params,
++					 struct idpf_tx_buf *first)
+ {
+-	ntu++;
++	struct idpf_sw_queue *refillq = txq->refillq;
++	struct libeth_sq_napi_stats ss = { };
++	struct idpf_tx_buf *tx_buf = first;
++	struct libeth_cq_pp cp = {
++		.dev    = txq->dev,
++		.ss     = &ss,
++	};
+ 
+-	if (ntu == txq->desc_count) {
+-		ntu = 0;
+-		txq->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(txq);
++	u64_stats_update_begin(&txq->stats_sync);
++	u64_stats_inc(&txq->q_stats.dma_map_errs);
++	u64_stats_update_end(&txq->stats_sync);
++
++	libeth_tx_complete(tx_buf, &cp);
++	while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
++		tx_buf = &txq->tx_buf[idpf_tx_buf_next(tx_buf)];
++		libeth_tx_complete(tx_buf, &cp);
+ 	}
+ 
+-	return ntu;
++	/* Update tail in case netdev_xmit_more was previously true. */
++	idpf_tx_buf_hw_update(txq, params->prev_ntu, false);
++
++	if (!refillq)
++		return;
++
++	/* Restore refillq state to avoid leaking tags. */
++	if (params->prev_refill_gen != idpf_queue_has(RFL_GEN_CHK, refillq))
++		idpf_queue_change(RFL_GEN_CHK, refillq);
++	refillq->next_to_clean = params->prev_refill_ntc;
+ }
+ 
+ /**
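
idpf_tx_get_free_buf_id() above is an instance of the generation-bit ring idiom: rather than tracking a fill count, each side flips a generation flag every time it wraps, and a slot whose stored GEN bit disagrees with the consumer's expected generation has not been refilled yet. The bit layout (bit 16 for GEN, bits 15:0 for the buffer ID) matches the IDPF_RFL_BI_GEN_M/IDPF_RFL_BI_BUFID_M masks renamed in the header hunk further down. A generic sketch of the same consumer, with illustrative names rather than the driver's:

	struct gen_ring {
		u32 *ring;		/* bit 16: generation, bits 15:0: buf id */
		u32 desc_count;
		u32 next_to_clean;
		bool expect_gen;	/* flips on every consumer wrap */
	};

	static bool gen_ring_pop(struct gen_ring *r, u32 *buf_id)
	{
		u32 desc = r->ring[r->next_to_clean];

		if (!!(desc & BIT(16)) != r->expect_gen)
			return false;	/* producer has not refilled this slot */

		*buf_id = desc & GENMASK(15, 0);
		if (++r->next_to_clean == r->desc_count) {
			r->next_to_clean = 0;
			r->expect_gen = !r->expect_gen;
		}
		return true;
	}
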
+@@ -2385,6 +2427,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ 	struct netdev_queue *nq;
+ 	struct sk_buff *skb;
+ 	skb_frag_t *frag;
++	u32 next_buf_id;
+ 	u16 td_cmd = 0;
+ 	dma_addr_t dma;
+ 
+@@ -2402,17 +2445,16 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ 	tx_buf = first;
+ 	first->nr_frags = 0;
+ 
+-	params->compl_tag =
+-		(tx_q->compl_tag_cur_gen << tx_q->compl_tag_gen_s) | i;
+-
+ 	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ 		unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
+ 
+-		if (dma_mapping_error(tx_q->dev, dma))
+-			return idpf_tx_dma_map_error(tx_q, skb, first, i);
++		if (unlikely(dma_mapping_error(tx_q->dev, dma))) {
++			idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL;
++			return idpf_tx_splitq_pkt_err_unmap(tx_q, params,
++							    first);
++		}
+ 
+ 		first->nr_frags++;
+-		idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
+ 		tx_buf->type = LIBETH_SQE_FRAG;
+ 
+ 		/* record length, and DMA address */
+@@ -2468,29 +2510,14 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ 						  max_data);
+ 
+ 			if (unlikely(++i == tx_q->desc_count)) {
+-				tx_buf = tx_q->tx_buf;
+ 				tx_desc = &tx_q->flex_tx[0];
+ 				i = 0;
+ 				tx_q->compl_tag_cur_gen =
+ 					IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+ 			} else {
+-				tx_buf++;
+ 				tx_desc++;
+ 			}
+ 
+-			/* Since this packet has a buffer that is going to span
+-			 * multiple descriptors, it's going to leave holes in
+-			 * to the TX buffer ring. To ensure these holes do not
+-			 * cause issues in the cleaning routines, we will clear
+-			 * them of any stale data and assign them the same
+-			 * completion tag as the current packet. Then when the
+-			 * packet is being cleaned, the cleaning routines will
+-			 * simply pass over these holes and finish cleaning the
+-			 * rest of the packet.
+-			 */
+-			tx_buf->type = LIBETH_SQE_EMPTY;
+-			idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
+-
+ 			/* Adjust the DMA offset and the remaining size of the
+ 			 * fragment.  On the first iteration of this loop,
+ 			 * max_data will be >= 12K and <= 16K-1.  On any
+@@ -2515,15 +2542,26 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ 		idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
+ 
+ 		if (unlikely(++i == tx_q->desc_count)) {
+-			tx_buf = tx_q->tx_buf;
+ 			tx_desc = &tx_q->flex_tx[0];
+ 			i = 0;
+ 			tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+ 		} else {
+-			tx_buf++;
+ 			tx_desc++;
+ 		}
+ 
++		if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
++			if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
++							      &next_buf_id))) {
++				idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL;
++				return idpf_tx_splitq_pkt_err_unmap(tx_q, params,
++								    first);
++			}
++		} else {
++			next_buf_id = i;
++		}
++		idpf_tx_buf_next(tx_buf) = next_buf_id;
++		tx_buf = &tx_q->tx_buf[next_buf_id];
++
+ 		size = skb_frag_size(frag);
+ 		data_len -= size;
+ 
+@@ -2538,6 +2576,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ 
+ 	/* write last descriptor with RS and EOP bits */
+ 	first->rs_idx = i;
++	idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL;
+ 	td_cmd |= params->eop_cmd;
+ 	idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
+ 	i = idpf_tx_splitq_bump_ntu(tx_q, i);
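
The mapping loop above no longer walks tx_buf entries in lockstep with the ring index: with flow scheduling enabled, each next buffer ID is popped from the refillq and can be arbitrary, so the buffers of one packet are threaded into an explicit singly linked list via idpf_tx_buf_next() (an accessor whose definition is outside this hunk) and terminated with IDPF_TXBUF_NULL. That list is what lets idpf_tx_splitq_pkt_err_unmap() free exactly the buffers the packet touched, where the old code decremented ring indices and needed LIBETH_SQE_EMPTY placeholder entries to paper over the holes. Roughly:

	first -> tx_buf[id1] -> tx_buf[id2] -> ... -> IDPF_TXBUF_NULL

so the unwind is a plain list walk regardless of whether the IDs are contiguous.
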
+@@ -2746,8 +2785,6 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq)
+ 	union idpf_flex_tx_ctx_desc *desc;
+ 	int i = txq->next_to_use;
+ 
+-	txq->tx_buf[i].type = LIBETH_SQE_CTX;
+-
+ 	/* grab the next descriptor */
+ 	desc = &txq->flex_ctx[i];
+ 	txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i);
+@@ -2850,13 +2887,16 @@ static void idpf_tx_set_tstamp_desc(union idpf_flex_tx_ctx_desc *ctx_desc,
+ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 					struct idpf_tx_queue *tx_q)
+ {
+-	struct idpf_tx_splitq_params tx_params = { };
++	struct idpf_tx_splitq_params tx_params = {
++		.prev_ntu = tx_q->next_to_use,
++	};
+ 	union idpf_flex_tx_ctx_desc *ctx_desc;
+ 	struct idpf_tx_buf *first;
+-	unsigned int count;
++	u32 count, buf_count = 1;
+ 	int tso, idx;
++	u32 buf_id;
+ 
+-	count = idpf_tx_desc_count_required(tx_q, skb);
++	count = idpf_tx_res_count_required(tx_q, skb, &buf_count);
+ 	if (unlikely(!count))
+ 		return idpf_tx_drop_skb(tx_q, skb);
+ 
+@@ -2866,7 +2906,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 
+ 	/* Check for splitq specific TX resources */
+ 	count += (IDPF_TX_DESCS_PER_CACHE_LINE + tso);
+-	if (idpf_tx_maybe_stop_splitq(tx_q, count)) {
++	if (idpf_tx_maybe_stop_splitq(tx_q, count, buf_count)) {
+ 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+ 
+ 		return NETDEV_TX_BUSY;
+@@ -2898,20 +2938,29 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 		idpf_tx_set_tstamp_desc(ctx_desc, idx);
+ 	}
+ 
+-	/* record the location of the first descriptor for this packet */
+-	first = &tx_q->tx_buf[tx_q->next_to_use];
+-	first->skb = skb;
++	if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
++		struct idpf_sw_queue *refillq = tx_q->refillq;
+ 
+-	if (tso) {
+-		first->packets = tx_params.offload.tso_segs;
+-		first->bytes = skb->len +
+-			((first->packets - 1) * tx_params.offload.tso_hdr_len);
+-	} else {
+-		first->packets = 1;
+-		first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
+-	}
++		/* Save refillq state in case of a packet rollback.  Otherwise,
++		 * the tags will be leaked since they will be popped from the
++		 * refillq but never reposted during cleaning.
++		 */
++		tx_params.prev_refill_gen =
++			idpf_queue_has(RFL_GEN_CHK, refillq);
++		tx_params.prev_refill_ntc = refillq->next_to_clean;
++
++		if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
++						      &buf_id))) {
++			if (tx_params.prev_refill_gen !=
++			    idpf_queue_has(RFL_GEN_CHK, refillq))
++				idpf_queue_change(RFL_GEN_CHK, refillq);
++			refillq->next_to_clean = tx_params.prev_refill_ntc;
++
++			tx_q->next_to_use = tx_params.prev_ntu;
++			return idpf_tx_drop_skb(tx_q, skb);
++		}
++		tx_params.compl_tag = buf_id;
+ 
+-	if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
+ 		tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
+ 		tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
+ 		/* Set the RE bit to catch any packets that may have not been
+@@ -2928,6 +2977,8 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 			tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+ 
+ 	} else {
++		buf_id = tx_q->next_to_use;
++
+ 		tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2;
+ 		tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD;
+ 
+@@ -2935,6 +2986,18 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ 			tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+ 	}
+ 
++	first = &tx_q->tx_buf[buf_id];
++	first->skb = skb;
++
++	if (tso) {
++		first->packets = tx_params.offload.tso_segs;
++		first->bytes = skb->len +
++			((first->packets - 1) * tx_params.offload.tso_hdr_len);
++	} else {
++		first->packets = 1;
++		first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
++	}
++
+ 	idpf_tx_splitq_map(tx_q, &tx_params, first);
+ 
+ 	return NETDEV_TX_OK;
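
Note the transactional shape of the flow-scheduling branch above: buffer IDs are consumed from the refillq before any descriptor is written, so a later failure would leak them; they were popped, but the completion path will never repost them. Hence the up-front snapshot of the refillq generation bit and next_to_clean (plus the ring's next_to_use) into tx_params, restored on both failure paths: the pop failure here and the DMA mapping error inside idpf_tx_splitq_map(). A minimal sketch of the save/restore pair, reusing the illustrative gen_ring type from the earlier sketch:

	struct refill_snapshot {
		u32 ntc;
		bool gen;
	};

	static void refill_save(const struct gen_ring *r,
				struct refill_snapshot *s)
	{
		s->ntc = r->next_to_clean;
		s->gen = r->expect_gen;
	}

	static void refill_restore(struct gen_ring *r,
				   const struct refill_snapshot *s)
	{
		r->next_to_clean = s->ntc;	/* un-pops every consumed id */
		r->expect_gen = s->gen;
	}
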
+@@ -3464,7 +3527,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
+ skip_data:
+ 		rx_buf->page = NULL;
+ 
+-		idpf_rx_post_buf_refill(refillq, buf_id);
++		idpf_post_buf_refill(refillq, buf_id);
+ 		IDPF_RX_BUMP_NTC(rxq, ntc);
+ 
+ 		/* skip if it is non EOP desc */
+@@ -3572,10 +3635,10 @@ static void idpf_rx_clean_refillq(struct idpf_buf_queue *bufq,
+ 		bool failure;
+ 
+ 		if (idpf_queue_has(RFL_GEN_CHK, refillq) !=
+-		    !!(refill_desc & IDPF_RX_BI_GEN_M))
++		    !!(refill_desc & IDPF_RFL_BI_GEN_M))
+ 			break;
+ 
+-		buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
++		buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
+ 		failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc);
+ 		if (failure)
+ 			break;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+index 36a0f828a6f80c..54b314ceee7386 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+@@ -107,8 +107,8 @@ do {								\
+  */
+ #define IDPF_TX_SPLITQ_RE_MIN_GAP	64
+ 
+-#define IDPF_RX_BI_GEN_M		BIT(16)
+-#define IDPF_RX_BI_BUFID_M		GENMASK(15, 0)
++#define IDPF_RFL_BI_GEN_M		BIT(16)
++#define IDPF_RFL_BI_BUFID_M		GENMASK(15, 0)
+ 
+ #define IDPF_RXD_EOF_SPLITQ		VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
+ #define IDPF_RXD_EOF_SINGLEQ		VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
+@@ -136,6 +136,8 @@ do {								\
+ 	((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \
+ 	0 : (txq)->compl_tag_cur_gen)
+ 
++#define IDPF_TXBUF_NULL			U32_MAX
++
+ #define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS)
+ 
+ #define IDPF_TX_FLAGS_TSO		BIT(0)
+@@ -195,6 +197,9 @@ struct idpf_tx_offload_params {
+  * @compl_tag: Associated tag for completion
+  * @td_tag: Descriptor tunneling tag
+  * @offload: Offload parameters
++ * @prev_ntu: stored TxQ next_to_use in case of rollback
++ * @prev_refill_ntc: stored refillq next_to_clean in case of packet rollback
++ * @prev_refill_gen: stored refillq generation bit in case of packet rollback
+  */
+ struct idpf_tx_splitq_params {
+ 	enum idpf_tx_desc_dtype_value dtype;
+@@ -205,6 +210,10 @@ struct idpf_tx_splitq_params {
+ 	};
+ 
+ 	struct idpf_tx_offload_params offload;
++
++	u16 prev_ntu;
++	u16 prev_refill_ntc;
++	bool prev_refill_gen;
+ };
+ 
+ enum idpf_tx_ctx_desc_eipt_offload {
+@@ -621,6 +630,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
+  * @cleaned_pkts: Number of packets cleaned for the above said case
+  * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
+  * @stash: Tx buffer stash for Flow-based scheduling mode
++ * @refillq: Pointer to refill queue
+  * @compl_tag_bufid_m: Completion tag buffer id mask
+  * @compl_tag_cur_gen: Used to keep track of current completion tag generation
+  * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
+@@ -632,6 +642,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
+  * @size: Length of descriptor ring in bytes
+  * @dma: Physical address of ring
+  * @q_vector: Backreference to associated vector
++ * @buf_pool_size: Total number of idpf_tx_buf
+  */
+ struct idpf_tx_queue {
+ 	__cacheline_group_begin_aligned(read_mostly);
+@@ -670,6 +681,7 @@ struct idpf_tx_queue {
+ 
+ 	u16 tx_max_bufs;
+ 	struct idpf_txq_stash *stash;
++	struct idpf_sw_queue *refillq;
+ 
+ 	u16 compl_tag_bufid_m;
+ 	u16 compl_tag_cur_gen;
+@@ -688,11 +700,12 @@ struct idpf_tx_queue {
+ 	dma_addr_t dma;
+ 
+ 	struct idpf_q_vector *q_vector;
++	u32 buf_pool_size;
+ 	__cacheline_group_end_aligned(cold);
+ };
+ libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
+-			    112 + sizeof(struct u64_stats_sync),
+-			    24);
++			    120 + sizeof(struct u64_stats_sync),
++			    32);
+ 
+ /**
+  * struct idpf_buf_queue - software structure representing a buffer queue
+@@ -1010,6 +1023,17 @@ static inline void idpf_vport_intr_set_wb_on_itr(struct idpf_q_vector *q_vector)
+ 	       reg->dyn_ctl);
+ }
+ 
++/**
++ * idpf_tx_splitq_get_free_bufs - get number of free buf_ids in refillq
++ * @refillq: pointer to refillq containing buf_ids
++ */
++static inline u32 idpf_tx_splitq_get_free_bufs(struct idpf_sw_queue *refillq)
++{
++	return (refillq->next_to_use > refillq->next_to_clean ?
++		0 : refillq->desc_count) +
++	       refillq->next_to_use - refillq->next_to_clean - 1;
++}
++
+ int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
+ void idpf_vport_init_num_qs(struct idpf_vport *vport,
+ 			    struct virtchnl2_create_vport *vport_msg);
+@@ -1037,10 +1061,8 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ 			   bool xmit_more);
+ unsigned int idpf_size_to_txd_count(unsigned int size);
+ netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb);
+-void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
+-			   struct idpf_tx_buf *first, u16 ring_idx);
+-unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
+-					 struct sk_buff *skb);
++unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq,
++					struct sk_buff *skb, u32 *buf_count);
+ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
+ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ 				  struct idpf_tx_queue *tx_q);
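
idpf_tx_splitq_get_free_bufs() in the header hunk above computes (next_to_use - next_to_clean - 1) modulo desc_count, written without a % operator. The -1 keeps one slot in reserve, and because the wrap term is added when ntu <= ntc rather than strictly less, equality reads as a full queue, consistent with a refill queue that starts out full of buffer IDs. Worked through with desc_count = 8:

	ntu = 5, ntc = 2:  (5 > 2 ? 0 : 8) + 5 - 2 - 1 = 2
	ntu = 2, ntc = 5:  (2 > 5 ? 0 : 8) + 2 - 5 - 1 = 4   (wrapped)
	ntu = 3, ntc = 3:  (3 > 3 ? 0 : 8) + 3 - 3 - 1 = 7   (full, one slot reserved)
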
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
+index 71ea25de1bac7a..754c176fd4a7a1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
+@@ -3123,7 +3123,7 @@ static int ixgbe_get_orom_ver_info(struct ixgbe_hw *hw,
+ 	if (err)
+ 		return err;
+ 
+-	combo_ver = le32_to_cpu(civd.combo_ver);
++	combo_ver = get_unaligned_le32(&civd.combo_ver);
+ 
+ 	orom->major = (u8)FIELD_GET(IXGBE_OROM_VER_MASK, combo_ver);
+ 	orom->patch = (u8)FIELD_GET(IXGBE_OROM_VER_PATCH_MASK, combo_ver);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
+index 09df67f03cf470..38a41d81de0fa4 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type_e610.h
+@@ -1150,7 +1150,7 @@ struct ixgbe_orom_civd_info {
+ 	__le32 combo_ver;	/* Combo Image Version number */
+ 	u8 combo_name_len;	/* Length of the unicode combo image version string, max of 32 */
+ 	__le16 combo_name[32];	/* Unicode string representing the Combo Image version */
+-};
++} __packed;
+ 
+ /* Function specific capabilities */
+ struct ixgbe_hw_func_caps {
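
The two ixgbe hunks above belong together. Adding __packed makes struct ixgbe_orom_civd_info match the on-flash byte layout (a __le32 followed by a u8 and a __le16 array would otherwise pick up compiler padding), and once packed, combo_ver may sit at an unaligned address, where the plain load behind le32_to_cpu() can fault on strict-alignment CPUs. get_unaligned_le32() handles both the byte order and the alignment. A generic illustration of the pattern, not the driver's code:

	#include <linux/unaligned.h>

	struct fw_blob_hdr {
		__le32 version;		/* may be misaligned inside a byte blob */
		u8 name_len;
		__le16 name[32];
	} __packed;			/* match the external layout exactly */

	static u32 fw_blob_version(const struct fw_blob_hdr *hdr)
	{
		/* byte-wise load: safe even when &hdr->version is odd */
		return get_unaligned_le32(&hdr->version);
	}
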
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 971993586fb49d..442305463cc0ae 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -1940,6 +1940,13 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		goto err_release_regions;
+ 	}
+ 
++	if (!is_cn20k(pdev) &&
++	    !is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) {
++		dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n",
++			   cgx->cgx_id);
++		goto err_release_regions;
++	}
++
+ 	cgx->lmac_count = cgx->mac_ops->get_nr_lmacs(cgx);
+ 	if (!cgx->lmac_count) {
+ 		dev_notice(dev, "CGX %d LMAC count is zero, skipping probe\n", cgx->cgx_id);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index 0277d226293e9c..d7030dfa5dad24 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -97,7 +97,7 @@ int mcs_add_intr_wq_entry(struct mcs *mcs, struct mcs_intr_event *event)
+ 	if (pcifunc & RVU_PFVF_FUNC_MASK)
+ 		pfvf = &mcs->vf[rvu_get_hwvf(rvu, pcifunc)];
+ 	else
+-		pfvf = &mcs->pf[rvu_get_pf(pcifunc)];
++		pfvf = &mcs->pf[rvu_get_pf(rvu->pdev, pcifunc)];
+ 
+ 	event->intr_mask &= pfvf->intr_mask;
+ 
+@@ -123,7 +123,7 @@ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ 	struct mcs_intr_info *req;
+ 	int pf;
+ 
+-	pf = rvu_get_pf(event->pcifunc);
++	pf = rvu_get_pf(rvu->pdev, event->pcifunc);
+ 
+ 	mutex_lock(&rvu->mbox_lock);
+ 
+@@ -193,7 +193,7 @@ int rvu_mbox_handler_mcs_intr_cfg(struct rvu *rvu,
+ 	if (pcifunc & RVU_PFVF_FUNC_MASK)
+ 		pfvf = &mcs->vf[rvu_get_hwvf(rvu, pcifunc)];
+ 	else
+-		pfvf = &mcs->pf[rvu_get_pf(pcifunc)];
++		pfvf = &mcs->pf[rvu_get_pf(rvu->pdev, pcifunc)];
+ 
+ 	mcs->pf_map[0] = pcifunc;
+ 	pfvf->intr_mask = req->intr_mask;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index a8025f0486c9f1..39f664d60ecffa 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -294,7 +294,7 @@ int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc)
+ 		devnum = rvu_get_hwvf(rvu, pcifunc);
+ 	} else {
+ 		is_pf = true;
+-		devnum = rvu_get_pf(pcifunc);
++		devnum = rvu_get_pf(rvu->pdev, pcifunc);
+ 	}
+ 
+ 	/* Check if the 'pcifunc' has a NIX LF from 'BLKADDR_NIX0' or
+@@ -359,7 +359,7 @@ static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
+ 		devnum = rvu_get_hwvf(rvu, pcifunc);
+ 	} else {
+ 		is_pf = true;
+-		devnum = rvu_get_pf(pcifunc);
++		devnum = rvu_get_pf(rvu->pdev, pcifunc);
+ 	}
+ 
+ 	block->fn_map[lf] = attach ? pcifunc : 0;
+@@ -400,11 +400,6 @@ static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
+ 	rvu_write64(rvu, BLKADDR_RVUM, reg | (devnum << 16), num_lfs);
+ }
+ 
+-inline int rvu_get_pf(u16 pcifunc)
+-{
+-	return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
+-}
+-
+ void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf)
+ {
+ 	u64 cfg;
+@@ -422,7 +417,7 @@ int rvu_get_hwvf(struct rvu *rvu, int pcifunc)
+ 	int pf, func;
+ 	u64 cfg;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	func = pcifunc & RVU_PFVF_FUNC_MASK;
+ 
+ 	/* Get first HWVF attached to this PF */
+@@ -437,7 +432,7 @@ struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc)
+ 	if (pcifunc & RVU_PFVF_FUNC_MASK)
+ 		return &rvu->hwvf[rvu_get_hwvf(rvu, pcifunc)];
+ 	else
+-		return &rvu->pf[rvu_get_pf(pcifunc)];
++		return &rvu->pf[rvu_get_pf(rvu->pdev, pcifunc)];
+ }
+ 
+ static bool is_pf_func_valid(struct rvu *rvu, u16 pcifunc)
+@@ -445,7 +440,7 @@ static bool is_pf_func_valid(struct rvu *rvu, u16 pcifunc)
+ 	int pf, vf, nvfs;
+ 	u64 cfg;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	if (pf >= rvu->hw->total_pfs)
+ 		return false;
+ 
+@@ -1487,7 +1482,7 @@ int rvu_get_nix_blkaddr(struct rvu *rvu, u16 pcifunc)
+ 	pf = rvu_get_pfvf(rvu, pcifunc & ~RVU_PFVF_FUNC_MASK);
+ 
+ 	/* All CGX mapped PFs are set with assigned NIX block during init */
+-	if (is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) {
++	if (is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, pcifunc))) {
+ 		blkaddr = pf->nix_blkaddr;
+ 	} else if (is_lbk_vf(rvu, pcifunc)) {
+ 		vf = pcifunc - 1;
+@@ -1501,7 +1496,7 @@ int rvu_get_nix_blkaddr(struct rvu *rvu, u16 pcifunc)
+ 	}
+ 
+ 	/* if SDP1 then the blkaddr is NIX1 */
+-	if (is_sdp_pfvf(pcifunc) && pf->sdp_info->node_id == 1)
++	if (is_sdp_pfvf(rvu, pcifunc) && pf->sdp_info->node_id == 1)
+ 		blkaddr = BLKADDR_NIX1;
+ 
+ 	switch (blkaddr) {
+@@ -2006,7 +2001,7 @@ int rvu_mbox_handler_vf_flr(struct rvu *rvu, struct msg_req *req,
+ 
+ 	vf = pcifunc & RVU_PFVF_FUNC_MASK;
+ 	cfg = rvu_read64(rvu, BLKADDR_RVUM,
+-			 RVU_PRIV_PFX_CFG(rvu_get_pf(pcifunc)));
++			 RVU_PRIV_PFX_CFG(rvu_get_pf(rvu->pdev, pcifunc)));
+ 	numvfs = (cfg >> 12) & 0xFF;
+ 
+ 	if (vf && vf <= numvfs)
+@@ -2229,9 +2224,8 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)
+ 		/* Set which PF/VF sent this message based on mbox IRQ */
+ 		switch (type) {
+ 		case TYPE_AFPF:
+-			msg->pcifunc &=
+-				~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT);
+-			msg->pcifunc |= (devid << RVU_PFVF_PF_SHIFT);
++			msg->pcifunc &= rvu_pcifunc_pf_mask(rvu->pdev);
++			msg->pcifunc |= rvu_make_pcifunc(rvu->pdev, devid, 0);
+ 			break;
+ 		case TYPE_AFVF:
+ 			msg->pcifunc &=
+@@ -2249,7 +2243,7 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)
+ 		if (msg->pcifunc & RVU_PFVF_FUNC_MASK)
+ 			dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d:VF%d\n",
+ 				 err, otx2_mbox_id2name(msg->id),
+-				 msg->id, rvu_get_pf(msg->pcifunc),
++				 msg->id, rvu_get_pf(rvu->pdev, msg->pcifunc),
+ 				 (msg->pcifunc & RVU_PFVF_FUNC_MASK) - 1);
+ 		else
+ 			dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d\n",
+@@ -2773,7 +2767,7 @@ static void rvu_flr_handler(struct work_struct *work)
+ 
+ 	cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+ 	numvfs = (cfg >> 12) & 0xFF;
+-	pcifunc  = pf << RVU_PFVF_PF_SHIFT;
++	pcifunc  = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 
+ 	for (vf = 0; vf < numvfs; vf++)
+ 		__rvu_flr_handler(rvu, (pcifunc | (vf + 1)));
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 48f66292ad5c5f..9cdb7431f558c8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -10,6 +10,7 @@
+ 
+ #include <linux/pci.h>
+ #include <net/devlink.h>
++#include <linux/soc/marvell/silicons.h>
+ 
+ #include "rvu_struct.h"
+ #include "rvu_devlink.h"
+@@ -43,10 +44,34 @@
+ #define MAX_CPT_BLKS				2
+ 
+ /* PF_FUNC */
+-#define RVU_PFVF_PF_SHIFT	10
+-#define RVU_PFVF_PF_MASK	0x3F
+-#define RVU_PFVF_FUNC_SHIFT	0
+-#define RVU_PFVF_FUNC_MASK	0x3FF
++#define RVU_OTX2_PFVF_PF_SHIFT			10
++#define RVU_OTX2_PFVF_PF_MASK			0x3F
++#define RVU_PFVF_FUNC_SHIFT			0
++#define RVU_PFVF_FUNC_MASK			0x3FF
++#define RVU_CN20K_PFVF_PF_SHIFT			9
++#define RVU_CN20K_PFVF_PF_MASK			0x7F
++
++static inline u16 rvu_make_pcifunc(struct pci_dev *pdev, int pf, int func)
++{
++	if (is_cn20k(pdev))
++		return ((pf & RVU_CN20K_PFVF_PF_MASK) <<
++			RVU_CN20K_PFVF_PF_SHIFT) |
++			((func & RVU_PFVF_FUNC_MASK) <<
++			RVU_PFVF_FUNC_SHIFT);
++	else
++		return ((pf & RVU_OTX2_PFVF_PF_MASK) <<
++			RVU_OTX2_PFVF_PF_SHIFT) |
++			((func & RVU_PFVF_FUNC_MASK) <<
++			RVU_PFVF_FUNC_SHIFT);
++}
++
++static inline int rvu_pcifunc_pf_mask(struct pci_dev *pdev)
++{
++	if (is_cn20k(pdev))
++		return ~(RVU_CN20K_PFVF_PF_MASK << RVU_CN20K_PFVF_PF_SHIFT);
++	else
++		return ~(RVU_OTX2_PFVF_PF_MASK << RVU_OTX2_PFVF_PF_SHIFT);
++}
+ 
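
These helpers are the pivot of the whole octeontx2 series: on CN20K the PF field of a pcifunc moves from bits 15:10 (6 bits) to bits 15:9 (7 bits), so the PF/FUNC split can no longer be decoded from a bare u16. Every caller must supply the pci_dev so is_cn20k() can pick the layout, which is why rvu_get_pf() grows a pdev argument throughout every file below. A worked example with illustrative values pf = 5, func = 3:

	OTX2:   (5 & 0x3f) << 10 | (3 & 0x3ff) = 0x1403
	CN20K:  (5 & 0x7f) <<  9 | (3 & 0x3ff) = 0x0a03

	and the inverse in rvu_get_pf():

	OTX2:   (0x1403 >> 10) & 0x3f = 5
	CN20K:  (0x0a03 >>  9) & 0x7f = 5
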
+ #ifdef CONFIG_DEBUG_FS
+ struct dump_ctx {
+@@ -736,6 +761,20 @@ static inline bool is_cn10kb(struct rvu *rvu)
+ 	return false;
+ }
+ 
++static inline bool is_cgx_mapped_to_nix(unsigned short id, u8 cgx_id)
++{
++	/* On CNF10KA and CNF10KB silicons only two CGX blocks are connected
++	 * to NIX.
++	 */
++	if (id == PCI_SUBSYS_DEVID_CNF10K_A || id == PCI_SUBSYS_DEVID_CNF10K_B)
++		return cgx_id <= 1;
++
++	return !(cgx_id && !(id == PCI_SUBSYS_DEVID_96XX ||
++			     id == PCI_SUBSYS_DEVID_98XX ||
++			     id == PCI_SUBSYS_DEVID_CN10K_A ||
++			     id == PCI_SUBSYS_DEVID_CN10K_B));
++}
++
+ static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu)
+ {
+ 	u64 npc_const3;
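
is_cgx_mapped_to_nix() above reads more easily after De Morgan, as in this hedged restatement (macro names abbreviated):

	/* Equivalent form:
	 *   mapped = (cgx_id == 0) ||
	 *            id in { 96XX, 98XX, CN10K_A, CN10K_B };
	 * with the CNF10K_A/CNF10K_B case (cgx_id <= 1) taken first.
	 */

That is, CGX0 is always NIX-mapped, higher CGX IDs are mapped only on the four listed silicons, and on CNF10K-A/B exactly CGX0 and CGX1 qualify. This is the predicate behind the new early-exit in cgx_probe() further up, which skips probing CGX blocks that have no NIX path.
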
+@@ -836,7 +875,6 @@ int rvu_alloc_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc);
+ void rvu_free_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc, int start);
+ bool rvu_rsrc_check_contig(struct rsrc_bmap *rsrc, int nrsrc);
+ u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blkaddr);
+-int rvu_get_pf(u16 pcifunc);
+ struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
+ void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
+ bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr);
+@@ -865,8 +903,8 @@ void rvu_aq_free(struct rvu *rvu, struct admin_queue *aq);
+ 
+ /* SDP APIs */
+ int rvu_sdp_init(struct rvu *rvu);
+-bool is_sdp_pfvf(u16 pcifunc);
+-bool is_sdp_pf(u16 pcifunc);
++bool is_sdp_pfvf(struct rvu *rvu, u16 pcifunc);
++bool is_sdp_pf(struct rvu *rvu, u16 pcifunc);
+ bool is_sdp_vf(struct rvu *rvu, u16 pcifunc);
+ 
+ static inline bool is_rep_dev(struct rvu *rvu, u16 pcifunc)
+@@ -877,11 +915,21 @@ static inline bool is_rep_dev(struct rvu *rvu, u16 pcifunc)
+ 	return false;
+ }
+ 
++static inline int rvu_get_pf(struct pci_dev *pdev, u16 pcifunc)
++{
++	if (is_cn20k(pdev))
++		return (pcifunc >> RVU_CN20K_PFVF_PF_SHIFT) &
++			RVU_CN20K_PFVF_PF_MASK;
++	else
++		return (pcifunc >> RVU_OTX2_PFVF_PF_SHIFT) &
++			RVU_OTX2_PFVF_PF_MASK;
++}
++
+ /* CGX APIs */
+ static inline bool is_pf_cgxmapped(struct rvu *rvu, u8 pf)
+ {
+ 	return (pf >= PF_CGXMAP_BASE && pf <= rvu->cgx_mapped_pfs) &&
+-		!is_sdp_pf(pf << RVU_PFVF_PF_SHIFT);
++		!is_sdp_pf(rvu, rvu_make_pcifunc(rvu->pdev, pf, 0));
+ }
+ 
+ static inline void rvu_get_cgx_lmac_id(u8 map, u8 *cgx_id, u8 *lmac_id)
+@@ -893,7 +941,7 @@ static inline void rvu_get_cgx_lmac_id(u8 map, u8 *cgx_id, u8 *lmac_id)
+ static inline bool is_cgx_vf(struct rvu *rvu, u16 pcifunc)
+ {
+ 	return ((pcifunc & RVU_PFVF_FUNC_MASK) &&
+-		is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc)));
++		is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, pcifunc)));
+ }
+ 
+ #define M(_name, _id, fn_name, req, rsp)				\
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index d0331b0e0bfd4a..b79db887ab9b21 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -457,7 +457,7 @@ int rvu_cgx_exit(struct rvu *rvu)
+ inline bool is_cgx_config_permitted(struct rvu *rvu, u16 pcifunc)
+ {
+ 	if ((pcifunc & RVU_PFVF_FUNC_MASK) ||
+-	    !is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc)))
++	    !is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, pcifunc)))
+ 		return false;
+ 	return true;
+ }
+@@ -484,7 +484,7 @@ void rvu_cgx_enadis_rx_bp(struct rvu *rvu, int pf, bool enable)
+ 
+ int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 	void *cgxd;
+@@ -501,7 +501,7 @@ int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start)
+ 
+ int rvu_cgx_tx_enable(struct rvu *rvu, u16 pcifunc, bool enable)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 	void *cgxd;
+@@ -526,7 +526,7 @@ int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable)
+ 
+ void rvu_cgx_disable_dmac_entries(struct rvu *rvu, u16 pcifunc)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	int i = 0, lmac_count = 0;
+ 	struct mac_ops *mac_ops;
+ 	u8 max_dmac_filters;
+@@ -577,7 +577,7 @@ int rvu_mbox_handler_cgx_stop_rxtx(struct rvu *rvu, struct msg_req *req,
+ static int rvu_lmac_get_stats(struct rvu *rvu, struct msg_req *req,
+ 			      void *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	int stat = 0, err = 0;
+ 	u64 tx_stat, rx_stat;
+@@ -633,7 +633,7 @@ int rvu_mbox_handler_rpm_stats(struct rvu *rvu, struct msg_req *req,
+ int rvu_mbox_handler_cgx_stats_rst(struct rvu *rvu, struct msg_req *req,
+ 				   struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct rvu_pfvf	*parent_pf;
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_idx, lmac;
+@@ -663,7 +663,7 @@ int rvu_mbox_handler_cgx_fec_stats(struct rvu *rvu,
+ 				   struct msg_req *req,
+ 				   struct cgx_fec_stats_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_idx, lmac;
+ 	void *cgxd;
+@@ -681,7 +681,7 @@ int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu,
+ 				      struct cgx_mac_addr_set_or_get *req,
+ 				      struct cgx_mac_addr_set_or_get *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+@@ -701,7 +701,7 @@ int rvu_mbox_handler_cgx_mac_addr_add(struct rvu *rvu,
+ 				      struct cgx_mac_addr_add_req *req,
+ 				      struct cgx_mac_addr_add_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 	int rc = 0;
+ 
+@@ -725,7 +725,7 @@ int rvu_mbox_handler_cgx_mac_addr_del(struct rvu *rvu,
+ 				      struct cgx_mac_addr_del_req *req,
+ 				      struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+@@ -743,7 +743,7 @@ int rvu_mbox_handler_cgx_mac_max_entries_get(struct rvu *rvu,
+ 					     struct cgx_max_dmac_entries_get_rsp
+ 					     *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	/* If msg is received from PFs(which are not mapped to CGX LMACs)
+@@ -769,7 +769,7 @@ int rvu_mbox_handler_cgx_mac_addr_get(struct rvu *rvu,
+ 				      struct cgx_mac_addr_set_or_get *req,
+ 				      struct cgx_mac_addr_set_or_get *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 	int rc = 0;
+ 	u64 cfg;
+@@ -790,7 +790,7 @@ int rvu_mbox_handler_cgx_promisc_enable(struct rvu *rvu, struct msg_req *req,
+ 					struct msg_rsp *rsp)
+ {
+ 	u16 pcifunc = req->hdr.pcifunc;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+@@ -809,7 +809,7 @@ int rvu_mbox_handler_cgx_promisc_enable(struct rvu *rvu, struct msg_req *req,
+ int rvu_mbox_handler_cgx_promisc_disable(struct rvu *rvu, struct msg_req *req,
+ 					 struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+@@ -828,7 +828,7 @@ int rvu_mbox_handler_cgx_promisc_disable(struct rvu *rvu, struct msg_req *req,
+ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
+ {
+ 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 	void *cgxd;
+@@ -864,7 +864,7 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
+ int rvu_mbox_handler_cgx_ptp_rx_enable(struct rvu *rvu, struct msg_req *req,
+ 				       struct msg_rsp *rsp)
+ {
+-	if (!is_pf_cgxmapped(rvu, rvu_get_pf(req->hdr.pcifunc)))
++	if (!is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, req->hdr.pcifunc)))
+ 		return -EPERM;
+ 
+ 	return rvu_cgx_ptp_rx_cfg(rvu, req->hdr.pcifunc, true);
+@@ -878,7 +878,7 @@ int rvu_mbox_handler_cgx_ptp_rx_disable(struct rvu *rvu, struct msg_req *req,
+ 
+ static int rvu_cgx_config_linkevents(struct rvu *rvu, u16 pcifunc, bool en)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, pcifunc))
+@@ -917,7 +917,7 @@ int rvu_mbox_handler_cgx_get_linkinfo(struct rvu *rvu, struct msg_req *req,
+ 	u8 cgx_id, lmac_id;
+ 	int pf, err;
+ 
+-	pf = rvu_get_pf(req->hdr.pcifunc);
++	pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 
+ 	if (!is_pf_cgxmapped(rvu, pf))
+ 		return -ENODEV;
+@@ -933,7 +933,7 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
+ 				      struct msg_req *req,
+ 				      struct cgx_features_info_msg *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_idx, lmac;
+ 	void *cgxd;
+ 
+@@ -975,7 +975,7 @@ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)
+ 
+ static int rvu_cgx_config_intlbk(struct rvu *rvu, u16 pcifunc, bool en)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 
+@@ -1005,7 +1005,7 @@ int rvu_mbox_handler_cgx_intlbk_disable(struct rvu *rvu, struct msg_req *req,
+ 
+ int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 rx_pfc = 0, tx_pfc = 0;
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+@@ -1046,7 +1046,7 @@ int rvu_mbox_handler_cgx_cfg_pause_frm(struct rvu *rvu,
+ 				       struct cgx_pause_frm_cfg *req,
+ 				       struct cgx_pause_frm_cfg *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 	int err = 0;
+@@ -1073,7 +1073,7 @@ int rvu_mbox_handler_cgx_cfg_pause_frm(struct rvu *rvu,
+ int rvu_mbox_handler_cgx_get_phy_fec_stats(struct rvu *rvu, struct msg_req *req,
+ 					   struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_pf_cgxmapped(rvu, pf))
+@@ -1106,7 +1106,7 @@ int rvu_cgx_nix_cuml_stats(struct rvu *rvu, void *cgxd, int lmac_id,
+ 	/* Assumes LF of a PF and all of its VF belongs to the same
+ 	 * NIX block
+ 	 */
+-	pcifunc = pf << RVU_PFVF_PF_SHIFT;
++	pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+ 	if (blkaddr < 0)
+ 		return 0;
+@@ -1133,10 +1133,10 @@ int rvu_cgx_start_stop_io(struct rvu *rvu, u16 pcifunc, bool start)
+ 	struct rvu_pfvf *parent_pf, *pfvf;
+ 	int cgx_users, err = 0;
+ 
+-	if (!is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc)))
++	if (!is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, pcifunc)))
+ 		return 0;
+ 
+-	parent_pf = &rvu->pf[rvu_get_pf(pcifunc)];
++	parent_pf = &rvu->pf[rvu_get_pf(rvu->pdev, pcifunc)];
+ 	pfvf = rvu_get_pfvf(rvu, pcifunc);
+ 
+ 	mutex_lock(&rvu->cgx_cfg_lock);
+@@ -1179,7 +1179,7 @@ int rvu_mbox_handler_cgx_set_fec_param(struct rvu *rvu,
+ 				       struct fec_mode *req,
+ 				       struct fec_mode *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_pf_cgxmapped(rvu, pf))
+@@ -1195,7 +1195,7 @@ int rvu_mbox_handler_cgx_set_fec_param(struct rvu *rvu,
+ int rvu_mbox_handler_cgx_get_aux_link_info(struct rvu *rvu, struct msg_req *req,
+ 					   struct cgx_fw_data *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!rvu->fwdata)
+@@ -1222,7 +1222,7 @@ int rvu_mbox_handler_cgx_set_link_mode(struct rvu *rvu,
+ 				       struct cgx_set_link_mode_req *req,
+ 				       struct cgx_set_link_mode_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_idx, lmac;
+ 	void *cgxd;
+ 
+@@ -1238,7 +1238,7 @@ int rvu_mbox_handler_cgx_set_link_mode(struct rvu *rvu,
+ int rvu_mbox_handler_cgx_mac_addr_reset(struct rvu *rvu, struct cgx_mac_addr_reset_req *req,
+ 					struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+@@ -1256,7 +1256,7 @@ int rvu_mbox_handler_cgx_mac_addr_update(struct rvu *rvu,
+ 					 struct cgx_mac_addr_update_req *req,
+ 					 struct cgx_mac_addr_update_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 
+ 	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+@@ -1272,7 +1272,7 @@ int rvu_mbox_handler_cgx_mac_addr_update(struct rvu *rvu,
+ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause,
+ 			       u8 rx_pause, u16 pfc_en)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 rx_8023 = 0, tx_8023 = 0;
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+@@ -1310,7 +1310,7 @@ int rvu_mbox_handler_cgx_prio_flow_ctrl_cfg(struct rvu *rvu,
+ 					    struct cgx_pfc_cfg *req,
+ 					    struct cgx_pfc_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 	void *cgxd;
+@@ -1335,7 +1335,7 @@ int rvu_mbox_handler_cgx_prio_flow_ctrl_cfg(struct rvu *rvu,
+ 
+ void rvu_mac_reset(struct rvu *rvu, u16 pcifunc)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	struct cgx *cgxd;
+ 	u8 cgx, lmac;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+index 4a3370a40dd887..05adc54535eb39 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+@@ -66,7 +66,7 @@ static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val,
+ #define LMT_MAP_TBL_W1_OFF  8
+ static u32 rvu_get_lmtst_tbl_index(struct rvu *rvu, u16 pcifunc)
+ {
+-	return ((rvu_get_pf(pcifunc) * LMT_MAX_VFS) +
++	return ((rvu_get_pf(rvu->pdev, pcifunc) * LMT_MAX_VFS) +
+ 		(pcifunc & RVU_PFVF_FUNC_MASK)) * LMT_MAPTBL_ENTRY_SIZE;
+ }
+ 
+@@ -83,7 +83,7 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+ 
+ 	mutex_lock(&rvu->rsrc_lock);
+ 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova);
+-	pf = rvu_get_pf(pcifunc) & RVU_PFVF_PF_MASK;
++	pf = rvu_get_pf(rvu->pdev, pcifunc) & RVU_OTX2_PFVF_PF_MASK;
+ 	val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 |
+ 	      ((pcifunc & RVU_PFVF_FUNC_MASK) & 0xFF);
+ 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TXN_REQ, val);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 3c5bbaf12e594c..f404117bf6c8c7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -410,7 +410,7 @@ static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc)
+ {
+ 	int cpt_pf_num = rvu->cpt_pf_num;
+ 
+-	if (rvu_get_pf(pcifunc) != cpt_pf_num)
++	if (rvu_get_pf(rvu->pdev, pcifunc) != cpt_pf_num)
+ 		return false;
+ 	if (pcifunc & RVU_PFVF_FUNC_MASK)
+ 		return false;
+@@ -422,7 +422,7 @@ static bool is_cpt_vf(struct rvu *rvu, u16 pcifunc)
+ {
+ 	int cpt_pf_num = rvu->cpt_pf_num;
+ 
+-	if (rvu_get_pf(pcifunc) != cpt_pf_num)
++	if (rvu_get_pf(rvu->pdev, pcifunc) != cpt_pf_num)
+ 		return false;
+ 	if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+ 		return false;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+index c827da62647126..0c20642f81b9b7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+@@ -688,7 +688,7 @@ static int get_max_column_width(struct rvu *rvu)
+ 
+ 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ 		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
+-			pcifunc = pf << 10 | vf;
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, vf);
+ 			if (!pcifunc)
+ 				continue;
+ 
+@@ -759,7 +759,7 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp,
+ 		for (vf = 0; vf <= rvu->hw->total_vfs; vf++) {
+ 			off = 0;
+ 			flag = 0;
+-			pcifunc = pf << 10 | vf;
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, vf);
+ 			if (!pcifunc)
+ 				continue;
+ 
+@@ -842,7 +842,7 @@ static int rvu_dbg_rvu_pf_cgx_map_display(struct seq_file *filp, void *unused)
+ 
+ 		cgx[0] = 0;
+ 		lmac[0] = 0;
+-		pcifunc = pf << 10;
++		pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 		pfvf = rvu_get_pfvf(rvu, pcifunc);
+ 
+ 		if (pfvf->nix_blkaddr == BLKADDR_NIX0)
+@@ -2623,10 +2623,10 @@ static int rvu_dbg_nix_band_prof_ctx_display(struct seq_file *m, void *unused)
+ 			pcifunc = ipolicer->pfvf_map[idx];
+ 			if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+ 				seq_printf(m, "Allocated to :: PF %d\n",
+-					   rvu_get_pf(pcifunc));
++					   rvu_get_pf(rvu->pdev, pcifunc));
+ 			else
+ 				seq_printf(m, "Allocated to :: PF %d VF %d\n",
+-					   rvu_get_pf(pcifunc),
++					   rvu_get_pf(rvu->pdev, pcifunc),
+ 					   (pcifunc & RVU_PFVF_FUNC_MASK) - 1);
+ 			print_band_prof_ctx(m, &aq_rsp.prof);
+ 		}
+@@ -2983,10 +2983,10 @@ static void rvu_print_npc_mcam_info(struct seq_file *s,
+ 
+ 	if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+ 		seq_printf(s, "\n\t\t Device \t\t: PF%d\n",
+-			   rvu_get_pf(pcifunc));
++			   rvu_get_pf(rvu->pdev, pcifunc));
+ 	else
+ 		seq_printf(s, "\n\t\t Device \t\t: PF%d VF%d\n",
+-			   rvu_get_pf(pcifunc),
++			   rvu_get_pf(rvu->pdev, pcifunc),
+ 			   (pcifunc & RVU_PFVF_FUNC_MASK) - 1);
+ 
+ 	if (entry_acnt) {
+@@ -3049,13 +3049,13 @@ static int rvu_dbg_npc_mcam_info_display(struct seq_file *filp, void *unsued)
+ 	seq_puts(filp, "\n\t\t Current allocation\n");
+ 	seq_puts(filp, "\t\t====================\n");
+ 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+-		pcifunc = (pf << RVU_PFVF_PF_SHIFT);
++		pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 		rvu_print_npc_mcam_info(filp, pcifunc, blkaddr);
+ 
+ 		cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
+ 		numvfs = (cfg >> 12) & 0xFF;
+ 		for (vf = 0; vf < numvfs; vf++) {
+-			pcifunc = (pf << RVU_PFVF_PF_SHIFT) | (vf + 1);
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, (vf + 1));
+ 			rvu_print_npc_mcam_info(filp, pcifunc, blkaddr);
+ 		}
+ 	}
+@@ -3326,7 +3326,7 @@ static int rvu_dbg_npc_mcam_show_rules(struct seq_file *s, void *unused)
+ 
+ 	mutex_lock(&mcam->lock);
+ 	list_for_each_entry(iter, &mcam->mcam_rules, list) {
+-		pf = (iter->owner >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
++		pf = rvu_get_pf(rvu->pdev, iter->owner);
+ 		seq_printf(s, "\n\tInstalled by: PF%d ", pf);
+ 
+ 		if (iter->owner & RVU_PFVF_FUNC_MASK) {
+@@ -3344,7 +3344,7 @@ static int rvu_dbg_npc_mcam_show_rules(struct seq_file *s, void *unused)
+ 		rvu_dbg_npc_mcam_show_flows(s, iter);
+ 		if (is_npc_intf_rx(iter->intf)) {
+ 			target = iter->rx_action.pf_func;
+-			pf = (target >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
++			pf = rvu_get_pf(rvu->pdev, target);
+ 			seq_printf(s, "\tForward to: PF%d ", pf);
+ 
+ 			if (target & RVU_PFVF_FUNC_MASK) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 613655fcd34f48..bdf4d852c15de2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -315,7 +315,8 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
+ 	if (lvl >= hw->cap.nix_tx_aggr_lvl) {
+ 		if ((nix_get_tx_link(rvu, map_func) !=
+ 		     nix_get_tx_link(rvu, pcifunc)) &&
+-		     (rvu_get_pf(map_func) != rvu_get_pf(pcifunc)))
++		     (rvu_get_pf(rvu->pdev, map_func) !=
++				rvu_get_pf(rvu->pdev, pcifunc)))
+ 			return false;
+ 		else
+ 			return true;
+@@ -339,7 +340,7 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf,
+ 	bool from_vf;
+ 	int err;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	if (!is_pf_cgxmapped(rvu, pf) && type != NIX_INTF_TYPE_LBK &&
+ 	    type != NIX_INTF_TYPE_SDP)
+ 		return 0;
+@@ -416,7 +417,7 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf,
+ 		break;
+ 	case NIX_INTF_TYPE_SDP:
+ 		from_vf = !!(pcifunc & RVU_PFVF_FUNC_MASK);
+-		parent_pf = &rvu->pf[rvu_get_pf(pcifunc)];
++		parent_pf = &rvu->pf[rvu_get_pf(rvu->pdev, pcifunc)];
+ 		sdp_info = parent_pf->sdp_info;
+ 		if (!sdp_info) {
+ 			dev_err(rvu->dev, "Invalid sdp_info pointer\n");
+@@ -590,12 +591,12 @@ static int nix_bp_disable(struct rvu *rvu,
+ 	u16 chan_v;
+ 	u64 cfg;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	type = is_lbk_vf(rvu, pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
+ 	if (!is_pf_cgxmapped(rvu, pf) && type != NIX_INTF_TYPE_LBK)
+ 		return 0;
+ 
+-	if (is_sdp_pfvf(pcifunc))
++	if (is_sdp_pfvf(rvu, pcifunc))
+ 		type = NIX_INTF_TYPE_SDP;
+ 
+ 	if (cpt_link && !rvu->hw->cpt_links)
+@@ -736,9 +737,9 @@ static int nix_bp_enable(struct rvu *rvu,
+ 	u16 chan_v;
+ 	u64 cfg;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	type = is_lbk_vf(rvu, pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
+-	if (is_sdp_pfvf(pcifunc))
++	if (is_sdp_pfvf(rvu, pcifunc))
+ 		type = NIX_INTF_TYPE_SDP;
+ 
+ 	/* Enable backpressure only for CGX mapped PFs and LBK/SDP interface */
+@@ -1674,7 +1675,7 @@ int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu,
+ 	}
+ 
+ 	intf = is_lbk_vf(rvu, pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
+-	if (is_sdp_pfvf(pcifunc))
++	if (is_sdp_pfvf(rvu, pcifunc))
+ 		intf = NIX_INTF_TYPE_SDP;
+ 
+ 	err = nix_interface_init(rvu, pcifunc, intf, nixlf, rsp,
+@@ -1798,7 +1799,8 @@ int rvu_mbox_handler_nix_mark_format_cfg(struct rvu *rvu,
+ 	rc = rvu_nix_reserve_mark_format(rvu, nix_hw, blkaddr, cfg);
+ 	if (rc < 0) {
+ 		dev_err(rvu->dev, "No mark_format_ctl for (pf:%d, vf:%d)",
+-			rvu_get_pf(pcifunc), pcifunc & RVU_PFVF_FUNC_MASK);
++			rvu_get_pf(rvu->pdev, pcifunc),
++			pcifunc & RVU_PFVF_FUNC_MASK);
+ 		return NIX_AF_ERR_MARK_CFG_FAIL;
+ 	}
+ 
+@@ -2050,7 +2052,7 @@ static void nix_clear_tx_xoff(struct rvu *rvu, int blkaddr,
+ static int nix_get_tx_link(struct rvu *rvu, u16 pcifunc)
+ {
+ 	struct rvu_hwinfo *hw = rvu->hw;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 cgx_id = 0, lmac_id = 0;
+ 
+ 	if (is_lbk_vf(rvu, pcifunc)) {/* LBK links */
+@@ -2068,7 +2070,7 @@ static void nix_get_txschq_range(struct rvu *rvu, u16 pcifunc,
+ 				 int link, int *start, int *end)
+ {
+ 	struct rvu_hwinfo *hw = rvu->hw;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 
+ 	/* LBK links */
+ 	if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc)) {
+@@ -2426,7 +2428,7 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
+ {
+ 	struct nix_smq_flush_ctx *smq_flush_ctx;
+ 	int err, restore_tx_en = 0, i;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 cgx_id = 0, lmac_id = 0;
+ 	u16 tl2_tl3_link_schq;
+ 	u8 link, link_level;
+@@ -2820,7 +2822,7 @@ void rvu_nix_tx_tl2_cfg(struct rvu *rvu, int blkaddr, u16 pcifunc,
+ {
+ 	struct rvu_hwinfo *hw = rvu->hw;
+ 	int lbk_link_start, lbk_links;
+-	u8 pf = rvu_get_pf(pcifunc);
++	u8 pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	int schq;
+ 	u64 cfg;
+ 
+@@ -3190,7 +3192,8 @@ static int nix_blk_setup_mce(struct rvu *rvu, struct nix_hw *nix_hw,
+ 	err = rvu_nix_blk_aq_enq_inst(rvu, nix_hw, &aq_req, NULL);
+ 	if (err) {
+ 		dev_err(rvu->dev, "Failed to setup Bcast MCE for PF%d:VF%d\n",
+-			rvu_get_pf(pcifunc), pcifunc & RVU_PFVF_FUNC_MASK);
++			rvu_get_pf(rvu->pdev, pcifunc),
++				pcifunc & RVU_PFVF_FUNC_MASK);
+ 		return err;
+ 	}
+ 	return 0;
+@@ -3458,7 +3461,7 @@ int nix_update_mce_list(struct rvu *rvu, u16 pcifunc,
+ 		dev_err(rvu->dev,
+ 			"%s: Idx %d > max MCE idx %d, for PF%d bcast list\n",
+ 			__func__, idx, mce_list->max,
+-			pcifunc >> RVU_PFVF_PF_SHIFT);
++			rvu_get_pf(rvu->pdev, pcifunc));
+ 		return -EINVAL;
+ 	}
+ 
+@@ -3510,7 +3513,8 @@ void nix_get_mce_list(struct rvu *rvu, u16 pcifunc, int type,
+ 	struct rvu_pfvf *pfvf;
+ 
+ 	if (!hw->cap.nix_rx_multicast ||
+-	    !is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc & ~RVU_PFVF_FUNC_MASK))) {
++	    !is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev,
++			     pcifunc & ~RVU_PFVF_FUNC_MASK))) {
+ 		*mce_list = NULL;
+ 		*mce_idx = 0;
+ 		return;
+@@ -3544,13 +3548,13 @@ static int nix_update_mce_rule(struct rvu *rvu, u16 pcifunc,
+ 	int pf;
+ 
+ 	/* skip multicast pkt replication for AF's VFs & SDP links */
+-	if (is_lbk_vf(rvu, pcifunc) || is_sdp_pfvf(pcifunc))
++	if (is_lbk_vf(rvu, pcifunc) || is_sdp_pfvf(rvu, pcifunc))
+ 		return 0;
+ 
+ 	if (!hw->cap.nix_rx_multicast)
+ 		return 0;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	if (!is_pf_cgxmapped(rvu, pf))
+ 		return 0;
+ 
+@@ -3619,7 +3623,7 @@ static int nix_setup_mce_tables(struct rvu *rvu, struct nix_hw *nix_hw)
+ 
+ 		for (idx = 0; idx < (numvfs + 1); idx++) {
+ 			/* idx-0 is for PF, followed by VFs */
+-			pcifunc = (pf << RVU_PFVF_PF_SHIFT);
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 			pcifunc |= idx;
+ 			/* Add dummy entries now, so that we don't have to check
+ 			 * for whether AQ_OP should be INIT/WRITE later on.
+@@ -4554,7 +4558,7 @@ int rvu_mbox_handler_nix_set_rx_mode(struct rvu *rvu, struct nix_rx_mode *req,
+ static void nix_find_link_frs(struct rvu *rvu,
+ 			      struct nix_frs_cfg *req, u16 pcifunc)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct rvu_pfvf *pfvf;
+ 	int maxlen, minlen;
+ 	int numvfs, hwvf;
+@@ -4601,7 +4605,7 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req,
+ {
+ 	struct rvu_hwinfo *hw = rvu->hw;
+ 	u16 pcifunc = req->hdr.pcifunc;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	int blkaddr, link = -1;
+ 	struct nix_hw *nix_hw;
+ 	struct rvu_pfvf *pfvf;
+@@ -5251,7 +5255,7 @@ int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req,
+ 
+ 	rvu_switch_update_rules(rvu, pcifunc, true);
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode)
+ 		rvu_rep_notify_pfvf_state(rvu, pcifunc, true);
+ 
+@@ -5284,7 +5288,7 @@ int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req,
+ 	rvu_switch_update_rules(rvu, pcifunc, false);
+ 	rvu_cgx_tx_enable(rvu, pcifunc, true);
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode)
+ 		rvu_rep_notify_pfvf_state(rvu, pcifunc, false);
+ 	return 0;
+@@ -5296,7 +5300,7 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
+ {
+ 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+ 	struct hwctx_disable_req ctx_req;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	struct mac_ops *mac_ops;
+ 	u8 cgx_id, lmac_id;
+ 	u64 sa_base;
+@@ -5385,7 +5389,7 @@ static int rvu_nix_lf_ptp_tx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
+ 	int nixlf;
+ 	u64 cfg;
+ 
+-	pf = rvu_get_pf(pcifunc);
++	pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	if (!is_mac_feature_supported(rvu, pf, RVU_LMAC_FEAT_PTP))
+ 		return 0;
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index da15bb4511788d..c7c70429eb6c10 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -147,7 +147,9 @@ static int npc_get_ucast_mcam_index(struct npc_mcam *mcam, u16 pcifunc,
+ int npc_get_nixlf_mcam_index(struct npc_mcam *mcam,
+ 			     u16 pcifunc, int nixlf, int type)
+ {
+-	int pf = rvu_get_pf(pcifunc);
++	struct rvu_hwinfo *hw = container_of(mcam, struct rvu_hwinfo, mcam);
++	struct rvu *rvu = hw->rvu;
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	int index;
+ 
+ 	/* Check if this is for a PF */
+@@ -698,7 +700,7 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
+ 
+ 	/* RX_ACTION set to MCAST for CGX PF's */
+ 	if (hw->cap.nix_rx_multicast && pfvf->use_mce_list &&
+-	    is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) {
++	    is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, pcifunc))) {
+ 		*(u64 *)&action = 0;
+ 		action.op = NIX_RX_ACTIONOP_MCAST;
+ 		pfvf = rvu_get_pfvf(rvu, pcifunc & ~RVU_PFVF_FUNC_MASK);
+@@ -3434,7 +3436,7 @@ int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir,
+ {
+ 	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+ 	int blkaddr, nixlf, rc, intf_mode;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u64 rxpkind, txpkind;
+ 	u8 cgx_id, lmac_id;
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+index d2661e7fabdb49..999f6d93c7fe8d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
+@@ -1465,7 +1465,7 @@ static int rvu_npc_exact_update_table_entry(struct rvu *rvu, u8 cgx_id, u8 lmac_
+ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
+ {
+ 	struct npc_exact_table *table;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 	u32 drop_mcam_idx;
+ 	bool *promisc;
+@@ -1512,7 +1512,7 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
+ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
+ {
+ 	struct npc_exact_table *table;
+-	int pf = rvu_get_pf(pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u8 cgx_id, lmac_id;
+ 	u32 drop_mcam_idx;
+ 	bool *promisc;
+@@ -1560,7 +1560,7 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
+ int rvu_npc_exact_mac_addr_reset(struct rvu *rvu, struct cgx_mac_addr_reset_req *req,
+ 				 struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u32 seq_id = req->index;
+ 	struct rvu_pfvf *pfvf;
+ 	u8 cgx_id, lmac_id;
+@@ -1593,7 +1593,7 @@ int rvu_npc_exact_mac_addr_update(struct rvu *rvu,
+ 				  struct cgx_mac_addr_update_req *req,
+ 				  struct cgx_mac_addr_update_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct npc_exact_table_entry *entry;
+ 	struct npc_exact_table *table;
+ 	struct rvu_pfvf *pfvf;
+@@ -1675,7 +1675,7 @@ int rvu_npc_exact_mac_addr_add(struct rvu *rvu,
+ 			       struct cgx_mac_addr_add_req *req,
+ 			       struct cgx_mac_addr_add_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	struct rvu_pfvf *pfvf;
+ 	u8 cgx_id, lmac_id;
+ 	int rc = 0;
+@@ -1711,7 +1711,7 @@ int rvu_npc_exact_mac_addr_del(struct rvu *rvu,
+ 			       struct cgx_mac_addr_del_req *req,
+ 			       struct msg_rsp *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	int rc;
+ 
+ 	rc = rvu_npc_exact_del_table_entry_by_id(rvu, req->index);
+@@ -1736,7 +1736,7 @@ int rvu_npc_exact_mac_addr_del(struct rvu *rvu,
+ int rvu_npc_exact_mac_addr_set(struct rvu *rvu, struct cgx_mac_addr_set_or_get *req,
+ 			       struct cgx_mac_addr_set_or_get *rsp)
+ {
+-	int pf = rvu_get_pf(req->hdr.pcifunc);
++	int pf = rvu_get_pf(rvu->pdev, req->hdr.pcifunc);
+ 	u32 seq_id = req->index;
+ 	struct rvu_pfvf *pfvf;
+ 	u8 cgx_id, lmac_id;
+@@ -2001,7 +2001,7 @@ int rvu_npc_exact_init(struct rvu *rvu)
+ 		}
+ 
+ 		/* Filter rules are only for PF */
+-		pcifunc = RVU_PFFUNC(i, 0);
++		pcifunc = RVU_PFFUNC(rvu->pdev, i, 0);
+ 
+ 		dev_dbg(rvu->dev,
+ 			"%s:Drop rule cgx=%d lmac=%d chan(val=0x%llx, mask=0x%llx)\n",
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
+index 57a09328d46b5a..cb25cf478f1fdd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.h
+@@ -139,9 +139,7 @@ static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = {
+ #define NPC_MCAM_DROP_RULE_MAX 30
+ #define NPC_MCAM_SDP_DROP_RULE_IDX 0
+ 
+-#define RVU_PFFUNC(pf, func)	\
+-	((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \
+-	(((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT))
++#define RVU_PFFUNC(pdev, pf, func) rvu_make_pcifunc(pdev, pf, func)
+ 
+ enum npc_exact_opc_type {
+ 	NPC_EXACT_OPC_MEM,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+index 32953cca108c80..03099bc570bd50 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+@@ -39,7 +39,7 @@ static int rvu_rep_up_notify(struct rvu *rvu, struct rep_event *event)
+ 	struct rep_event *msg;
+ 	int pf;
+ 
+-	pf = rvu_get_pf(event->pcifunc);
++	pf = rvu_get_pf(rvu->pdev, event->pcifunc);
+ 
+ 	if (event->event & RVU_EVENT_MAC_ADDR_CHANGE)
+ 		ether_addr_copy(pfvf->mac_addr, event->evt_data.mac);
+@@ -114,10 +114,10 @@ int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable)
+ 	struct rep_event *req;
+ 	int pf;
+ 
+-	if (!is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc)))
++	if (!is_pf_cgxmapped(rvu, rvu_get_pf(rvu->pdev, pcifunc)))
+ 		return 0;
+ 
+-	pf = rvu_get_pf(rvu->rep_pcifunc);
++	pf = rvu_get_pf(rvu->pdev, rvu->rep_pcifunc);
+ 
+ 	mutex_lock(&rvu->mbox_lock);
+ 	req = otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf);
+@@ -325,7 +325,7 @@ int rvu_rep_install_mcam_rules(struct rvu *rvu)
+ 		if (!is_pf_cgxmapped(rvu, pf))
+ 			continue;
+ 
+-		pcifunc = pf << RVU_PFVF_PF_SHIFT;
++		pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 		rvu_get_nix_blkaddr(rvu, pcifunc);
+ 		rep = true;
+ 		for (i = 0; i < 2; i++) {
+@@ -345,8 +345,7 @@ int rvu_rep_install_mcam_rules(struct rvu *rvu)
+ 
+ 		rvu_get_pf_numvfs(rvu, pf, &numvfs, NULL);
+ 		for (vf = 0; vf < numvfs; vf++) {
+-			pcifunc = pf << RVU_PFVF_PF_SHIFT |
+-				  ((vf + 1) & RVU_PFVF_FUNC_MASK);
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, vf + 1);
+ 			rvu_get_nix_blkaddr(rvu, pcifunc);
+ 
+ 			/* Skip installing rules if nixlf is not attached */
+@@ -454,7 +453,7 @@ int rvu_mbox_handler_get_rep_cnt(struct rvu *rvu, struct msg_req *req,
+ 	for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ 		if (!is_pf_cgxmapped(rvu, pf))
+ 			continue;
+-		pcifunc = pf << RVU_PFVF_PF_SHIFT;
++		pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 		rvu->rep2pfvf_map[rep] = pcifunc;
+ 		rsp->rep_pf_map[rep] = pcifunc;
+ 		rep++;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_sdp.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_sdp.c
+index 38cfe148f4b748..e4a5f9fa6fd46d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_sdp.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_sdp.c
+@@ -17,9 +17,9 @@
+ /* SDP PF number */
+ static int sdp_pf_num[MAX_SDP] = {-1, -1};
+ 
+-bool is_sdp_pfvf(u16 pcifunc)
++bool is_sdp_pfvf(struct rvu *rvu, u16 pcifunc)
+ {
+-	u16 pf = rvu_get_pf(pcifunc);
++	u16 pf = rvu_get_pf(rvu->pdev, pcifunc);
+ 	u32 found = 0, i = 0;
+ 
+ 	while (i < MAX_SDP) {
+@@ -34,9 +34,9 @@ bool is_sdp_pfvf(u16 pcifunc)
+ 	return true;
+ }
+ 
+-bool is_sdp_pf(u16 pcifunc)
++bool is_sdp_pf(struct rvu *rvu, u16 pcifunc)
+ {
+-	return (is_sdp_pfvf(pcifunc) &&
++	return (is_sdp_pfvf(rvu, pcifunc) &&
+ 		!(pcifunc & RVU_PFVF_FUNC_MASK));
+ }
+ 
+@@ -46,7 +46,7 @@ bool is_sdp_vf(struct rvu *rvu, u16 pcifunc)
+ 	if (!(pcifunc & ~RVU_PFVF_FUNC_MASK))
+ 		return (rvu->vf_devid == RVU_SDP_VF_DEVID);
+ 
+-	return (is_sdp_pfvf(pcifunc) &&
++	return (is_sdp_pfvf(rvu, pcifunc) &&
+ 		!!(pcifunc & RVU_PFVF_FUNC_MASK));
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
+index 268efb7c1c15d2..49ce38685a7e6b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
+@@ -93,7 +93,7 @@ static int rvu_switch_install_rules(struct rvu *rvu)
+ 		if (!is_pf_cgxmapped(rvu, pf))
+ 			continue;
+ 
+-		pcifunc = pf << 10;
++		pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 		/* rvu_get_nix_blkaddr sets up the corresponding NIX block
+ 		 * address and NIX RX and TX interfaces for a pcifunc.
+ 		 * Generally it is called during attach call of a pcifunc but it
+@@ -126,7 +126,7 @@ static int rvu_switch_install_rules(struct rvu *rvu)
+ 
+ 		rvu_get_pf_numvfs(rvu, pf, &numvfs, NULL);
+ 		for (vf = 0; vf < numvfs; vf++) {
+-			pcifunc = pf << 10 | ((vf + 1) & 0x3FF);
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, (vf + 1));
+ 			rvu_get_nix_blkaddr(rvu, pcifunc);
+ 
+ 			err = rvu_switch_install_rx_rule(rvu, pcifunc, 0x0);
+@@ -236,7 +236,7 @@ void rvu_switch_disable(struct rvu *rvu)
+ 		if (!is_pf_cgxmapped(rvu, pf))
+ 			continue;
+ 
+-		pcifunc = pf << 10;
++		pcifunc = rvu_make_pcifunc(rvu->pdev, pf, 0);
+ 		err = rvu_switch_install_rx_rule(rvu, pcifunc, 0xFFF);
+ 		if (err)
+ 			dev_err(rvu->dev,
+@@ -248,7 +248,7 @@ void rvu_switch_disable(struct rvu *rvu)
+ 
+ 		rvu_get_pf_numvfs(rvu, pf, &numvfs, NULL);
+ 		for (vf = 0; vf < numvfs; vf++) {
+-			pcifunc = pf << 10 | ((vf + 1) & 0x3FF);
++			pcifunc = rvu_make_pcifunc(rvu->pdev, pf, (vf + 1));
+ 			err = rvu_switch_install_rx_rule(rvu, pcifunc, 0xFFF);
+ 			if (err)
+ 				dev_err(rvu->dev,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+index a6500e3673f248..c691f072215400 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+@@ -481,7 +481,7 @@ static int cn10k_outb_write_sa(struct otx2_nic *pf, struct qmem *sa_info)
+ 		goto set_available;
+ 
+ 	/* Trigger CTX flush to write dirty data back to DRAM */
+-	reg_val = FIELD_PREP(CPT_LF_CTX_FLUSH, sa_iova >> 7);
++	reg_val = FIELD_PREP(CPT_LF_CTX_FLUSH_CPTR, sa_iova >> 7);
+ 	otx2_write64(pf, CN10K_CPT_LF_CTX_FLUSH, reg_val);
+ 
+ set_available:
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+index 9965df0faa3e7d..43fbce0d603907 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+@@ -220,7 +220,7 @@ struct cpt_sg_s {
+ #define CPT_LF_Q_SIZE_DIV40 GENMASK_ULL(14, 0)
+ 
+ /* CPT LF CTX Flush Register */
+-#define CPT_LF_CTX_FLUSH GENMASK_ULL(45, 0)
++#define CPT_LF_CTX_FLUSH_CPTR GENMASK_ULL(45, 0)
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+ int cn10k_ipsec_init(struct net_device *netdev);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 6b5c9536d26d3f..6f7b608261d9c8 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -124,7 +124,9 @@ void otx2_get_dev_stats(struct otx2_nic *pfvf)
+ 			       dev_stats->rx_ucast_frames;
+ 
+ 	dev_stats->tx_bytes = OTX2_GET_TX_STATS(TX_OCTS);
+-	dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP);
++	dev_stats->tx_drops = OTX2_GET_TX_STATS(TX_DROP) +
++			       (unsigned long)atomic_long_read(&dev_stats->tx_discards);
++
+ 	dev_stats->tx_bcast_frames = OTX2_GET_TX_STATS(TX_BCAST);
+ 	dev_stats->tx_mcast_frames = OTX2_GET_TX_STATS(TX_MCAST);
+ 	dev_stats->tx_ucast_frames = OTX2_GET_TX_STATS(TX_UCAST);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index ca0e6ab12cebeb..f4fc915a0b5f11 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -28,6 +28,7 @@
+ #include "otx2_reg.h"
+ #include "otx2_txrx.h"
+ #include "otx2_devlink.h"
++#include <rvu.h>
+ #include <rvu_trace.h>
+ #include "qos.h"
+ #include "rep.h"
+@@ -149,6 +150,7 @@ struct otx2_dev_stats {
+ 	u64 tx_bcast_frames;
+ 	u64 tx_mcast_frames;
+ 	u64 tx_drops;
++	atomic_long_t tx_discards;
+ };
+ 
+ /* Driver counted stats */
+@@ -899,21 +901,11 @@ MBOX_UP_MCS_MESSAGES
+ /* Time to wait before watchdog kicks off */
+ #define OTX2_TX_TIMEOUT		(100 * HZ)
+ 
+-#define	RVU_PFVF_PF_SHIFT	10
+-#define	RVU_PFVF_PF_MASK	0x3F
+-#define	RVU_PFVF_FUNC_SHIFT	0
+-#define	RVU_PFVF_FUNC_MASK	0x3FF
+-
+ static inline bool is_otx2_vf(u16 pcifunc)
+ {
+ 	return !!(pcifunc & RVU_PFVF_FUNC_MASK);
+ }
+ 
+-static inline int rvu_get_pf(u16 pcifunc)
+-{
+-	return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
+-}
+-
+ static inline dma_addr_t otx2_dma_map_page(struct otx2_nic *pfvf,
+ 					   struct page *page,
+ 					   size_t offset, size_t size,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index db7c466fdc39ee..c6d2f2249cc35e 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -206,7 +206,8 @@ static int otx2_register_flr_me_intr(struct otx2_nic *pf, int numvfs)
+ 
+ 	/* Register ME interrupt handler*/
+ 	irq_name = &hw->irq_name[RVU_PF_INT_VEC_VFME0 * NAME_SIZE];
+-	snprintf(irq_name, NAME_SIZE, "RVUPF%d_ME0", rvu_get_pf(pf->pcifunc));
++	snprintf(irq_name, NAME_SIZE, "RVUPF%d_ME0",
++		 rvu_get_pf(pf->pdev, pf->pcifunc));
+ 	ret = request_irq(pci_irq_vector(pf->pdev, RVU_PF_INT_VEC_VFME0),
+ 			  otx2_pf_me_intr_handler, 0, irq_name, pf);
+ 	if (ret) {
+@@ -216,7 +217,8 @@ static int otx2_register_flr_me_intr(struct otx2_nic *pf, int numvfs)
+ 
+ 	/* Register FLR interrupt handler */
+ 	irq_name = &hw->irq_name[RVU_PF_INT_VEC_VFFLR0 * NAME_SIZE];
+-	snprintf(irq_name, NAME_SIZE, "RVUPF%d_FLR0", rvu_get_pf(pf->pcifunc));
++	snprintf(irq_name, NAME_SIZE, "RVUPF%d_FLR0",
++		 rvu_get_pf(pf->pdev, pf->pcifunc));
+ 	ret = request_irq(pci_irq_vector(pf->pdev, RVU_PF_INT_VEC_VFFLR0),
+ 			  otx2_pf_flr_intr_handler, 0, irq_name, pf);
+ 	if (ret) {
+@@ -228,7 +230,7 @@ static int otx2_register_flr_me_intr(struct otx2_nic *pf, int numvfs)
+ 	if (numvfs > 64) {
+ 		irq_name = &hw->irq_name[RVU_PF_INT_VEC_VFME1 * NAME_SIZE];
+ 		snprintf(irq_name, NAME_SIZE, "RVUPF%d_ME1",
+-			 rvu_get_pf(pf->pcifunc));
++			 rvu_get_pf(pf->pdev, pf->pcifunc));
+ 		ret = request_irq(pci_irq_vector
+ 				  (pf->pdev, RVU_PF_INT_VEC_VFME1),
+ 				  otx2_pf_me_intr_handler, 0, irq_name, pf);
+@@ -238,7 +240,7 @@ static int otx2_register_flr_me_intr(struct otx2_nic *pf, int numvfs)
+ 		}
+ 		irq_name = &hw->irq_name[RVU_PF_INT_VEC_VFFLR1 * NAME_SIZE];
+ 		snprintf(irq_name, NAME_SIZE, "RVUPF%d_FLR1",
+-			 rvu_get_pf(pf->pcifunc));
++			 rvu_get_pf(pf->pdev, pf->pcifunc));
+ 		ret = request_irq(pci_irq_vector
+ 				  (pf->pdev, RVU_PF_INT_VEC_VFFLR1),
+ 				  otx2_pf_flr_intr_handler, 0, irq_name, pf);
+@@ -701,7 +703,7 @@ static int otx2_register_pfvf_mbox_intr(struct otx2_nic *pf, int numvfs)
+ 	irq_name = &hw->irq_name[RVU_PF_INT_VEC_VFPF_MBOX0 * NAME_SIZE];
+ 	if (pf->pcifunc)
+ 		snprintf(irq_name, NAME_SIZE,
+-			 "RVUPF%d_VF Mbox0", rvu_get_pf(pf->pcifunc));
++			 "RVUPF%d_VF Mbox0", rvu_get_pf(pf->pdev, pf->pcifunc));
+ 	else
+ 		snprintf(irq_name, NAME_SIZE, "RVUPF_VF Mbox0");
+ 	err = request_irq(pci_irq_vector(pf->pdev, RVU_PF_INT_VEC_VFPF_MBOX0),
+@@ -717,7 +719,8 @@ static int otx2_register_pfvf_mbox_intr(struct otx2_nic *pf, int numvfs)
+ 		irq_name = &hw->irq_name[RVU_PF_INT_VEC_VFPF_MBOX1 * NAME_SIZE];
+ 		if (pf->pcifunc)
+ 			snprintf(irq_name, NAME_SIZE,
+-				 "RVUPF%d_VF Mbox1", rvu_get_pf(pf->pcifunc));
++				 "RVUPF%d_VF Mbox1",
++				 rvu_get_pf(pf->pdev, pf->pcifunc));
+ 		else
+ 			snprintf(irq_name, NAME_SIZE, "RVUPF_VF Mbox1");
+ 		err = request_irq(pci_irq_vector(pf->pdev,
+@@ -1972,7 +1975,7 @@ int otx2_open(struct net_device *netdev)
+ 	if (err) {
+ 		dev_err(pf->dev,
+ 			"RVUPF%d: IRQ registration failed for QERR\n",
+-			rvu_get_pf(pf->pcifunc));
++			rvu_get_pf(pf->pdev, pf->pcifunc));
+ 		goto err_disable_napi;
+ 	}
+ 
+@@ -1990,7 +1993,7 @@ int otx2_open(struct net_device *netdev)
+ 		if (name_len >= NAME_SIZE) {
+ 			dev_err(pf->dev,
+ 				"RVUPF%d: IRQ registration failed for CQ%d, irq name is too long\n",
+-				rvu_get_pf(pf->pcifunc), qidx);
++				rvu_get_pf(pf->pdev, pf->pcifunc), qidx);
+ 			err = -EINVAL;
+ 			goto err_free_cints;
+ 		}
+@@ -2001,7 +2004,7 @@ int otx2_open(struct net_device *netdev)
+ 		if (err) {
+ 			dev_err(pf->dev,
+ 				"RVUPF%d: IRQ registration failed for CQ%d\n",
+-				rvu_get_pf(pf->pcifunc), qidx);
++				rvu_get_pf(pf->pdev, pf->pcifunc), qidx);
+ 			goto err_free_cints;
+ 		}
+ 		vec++;
+@@ -2153,6 +2156,7 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
+ {
+ 	struct otx2_nic *pf = netdev_priv(netdev);
+ 	int qidx = skb_get_queue_mapping(skb);
++	struct otx2_dev_stats *dev_stats;
+ 	struct otx2_snd_queue *sq;
+ 	struct netdev_queue *txq;
+ 	int sq_idx;
+@@ -2165,6 +2169,8 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	/* Check for minimum and maximum packet length */
+ 	if (skb->len <= ETH_HLEN ||
+ 	    (!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) {
++		dev_stats = &pf->hw.dev_stats;
++		atomic_long_inc(&dev_stats->tx_discards);
+ 		dev_kfree_skb(skb);
+ 		return NETDEV_TX_OK;
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
+index e3aee6e3621517..858f084b9d4740 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
+@@ -138,36 +138,6 @@
+ #define	NIX_LF_CINTX_ENA_W1S(a)		(NIX_LFBASE | 0xD40 | (a) << 12)
+ #define	NIX_LF_CINTX_ENA_W1C(a)		(NIX_LFBASE | 0xD50 | (a) << 12)
+ 
+-/* NIX AF transmit scheduler registers */
+-#define NIX_AF_SMQX_CFG(a)		(0x700 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_SDP_LINK_CFG(a)	(0xB10 | (u64)(a) << 16)
+-#define NIX_AF_TL1X_SCHEDULE(a)		(0xC00 | (u64)(a) << 16)
+-#define NIX_AF_TL1X_CIR(a)		(0xC20 | (u64)(a) << 16)
+-#define NIX_AF_TL1X_TOPOLOGY(a)		(0xC80 | (u64)(a) << 16)
+-#define NIX_AF_TL2X_PARENT(a)		(0xE88 | (u64)(a) << 16)
+-#define NIX_AF_TL2X_SCHEDULE(a)		(0xE00 | (u64)(a) << 16)
+-#define NIX_AF_TL2X_TOPOLOGY(a)		(0xE80 | (u64)(a) << 16)
+-#define NIX_AF_TL2X_CIR(a)		(0xE20 | (u64)(a) << 16)
+-#define NIX_AF_TL2X_PIR(a)		(0xE30 | (u64)(a) << 16)
+-#define NIX_AF_TL3X_PARENT(a)		(0x1088 | (u64)(a) << 16)
+-#define NIX_AF_TL3X_SCHEDULE(a)		(0x1000 | (u64)(a) << 16)
+-#define NIX_AF_TL3X_SHAPE(a)		(0x1010 | (u64)(a) << 16)
+-#define NIX_AF_TL3X_CIR(a)		(0x1020 | (u64)(a) << 16)
+-#define NIX_AF_TL3X_PIR(a)		(0x1030 | (u64)(a) << 16)
+-#define NIX_AF_TL3X_TOPOLOGY(a)		(0x1080 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_PARENT(a)		(0x1288 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_SCHEDULE(a)		(0x1200 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_SHAPE(a)		(0x1210 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_CIR(a)		(0x1220 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_PIR(a)		(0x1230 | (u64)(a) << 16)
+-#define NIX_AF_TL4X_TOPOLOGY(a)		(0x1280 | (u64)(a) << 16)
+-#define NIX_AF_MDQX_SCHEDULE(a)		(0x1400 | (u64)(a) << 16)
+-#define NIX_AF_MDQX_SHAPE(a)		(0x1410 | (u64)(a) << 16)
+-#define NIX_AF_MDQX_CIR(a)		(0x1420 | (u64)(a) << 16)
+-#define NIX_AF_MDQX_PIR(a)		(0x1430 | (u64)(a) << 16)
+-#define NIX_AF_MDQX_PARENT(a)		(0x1480 | (u64)(a) << 16)
+-#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b)	(0x1700 | (u64)(a) << 16 | (b) << 3)
+-
+ /* LMT LF registers */
+ #define LMT_LFBASE			BIT_ULL(RVU_FUNC_BLKADDR_SHIFT)
+ #define LMT_LF_LMTLINEX(a)		(LMT_LFBASE | 0x000 | (a) << 12)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 9a226ca7442546..5f80b23c5335cd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -467,7 +467,8 @@ static int otx2_tc_parse_actions(struct otx2_nic *nic,
+ 			target = act->dev;
+ 			if (target->dev.parent) {
+ 				priv = netdev_priv(target);
+-				if (rvu_get_pf(nic->pcifunc) != rvu_get_pf(priv->pcifunc)) {
++				if (rvu_get_pf(nic->pdev, nic->pcifunc) !=
++					rvu_get_pf(nic->pdev, priv->pcifunc)) {
+ 					NL_SET_ERR_MSG_MOD(extack,
+ 							   "can't redirect to other pf/vf");
+ 					return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 8a8b598bd389b9..76dd2e965cf035 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -391,9 +391,19 @@ static netdev_tx_t otx2vf_xmit(struct sk_buff *skb, struct net_device *netdev)
+ {
+ 	struct otx2_nic *vf = netdev_priv(netdev);
+ 	int qidx = skb_get_queue_mapping(skb);
++	struct otx2_dev_stats *dev_stats;
+ 	struct otx2_snd_queue *sq;
+ 	struct netdev_queue *txq;
+ 
++	/* Check for minimum and maximum packet length */
++	if (skb->len <= ETH_HLEN ||
++	    (!skb_shinfo(skb)->gso_size && skb->len > vf->tx_max_pktlen)) {
++		dev_stats = &vf->hw.dev_stats;
++		atomic_long_inc(&dev_stats->tx_discards);
++		dev_kfree_skb(skb);
++		return NETDEV_TX_OK;
++	}
++
+ 	sq = &vf->qset.sq[qidx];
+ 	txq = netdev_get_tx_queue(netdev, qidx);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+index 2cd3da3b684320..b476733a023455 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+@@ -244,10 +244,10 @@ static int rvu_rep_devlink_port_register(struct rep_dev *rep)
+ 
+ 	if (!(rep->pcifunc & RVU_PFVF_FUNC_MASK)) {
+ 		attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+-		attrs.phys.port_number = rvu_get_pf(rep->pcifunc);
++		attrs.phys.port_number = rvu_get_pf(priv->pdev, rep->pcifunc);
+ 	} else {
+ 		attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF;
+-		attrs.pci_vf.pf = rvu_get_pf(rep->pcifunc);
++		attrs.pci_vf.pf = rvu_get_pf(priv->pdev, rep->pcifunc);
+ 		attrs.pci_vf.vf = rep->pcifunc & RVU_PFVF_FUNC_MASK;
+ 	}
+ 
+@@ -371,7 +371,8 @@ static void rvu_rep_get_stats(struct work_struct *work)
+ 	stats->rx_mcast_frames = rsp->rx.mcast;
+ 	stats->tx_bytes = rsp->tx.octs;
+ 	stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast;
+-	stats->tx_drops = rsp->tx.drop;
++	stats->tx_drops = rsp->tx.drop +
++			  (unsigned long)atomic_long_read(&stats->tx_discards);
+ exit:
+ 	mutex_unlock(&priv->mbox.lock);
+ }
+@@ -418,6 +419,16 @@ static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct otx2_nic *pf = rep->mdev;
+ 	struct otx2_snd_queue *sq;
+ 	struct netdev_queue *txq;
++	struct rep_stats *stats;
++
++	/* Check for minimum and maximum packet length */
++	if (skb->len <= ETH_HLEN ||
++	    (!skb_shinfo(skb)->gso_size && skb->len > pf->tx_max_pktlen)) {
++		stats = &rep->stats;
++		atomic_long_inc(&stats->tx_discards);
++		dev_kfree_skb(skb);
++		return NETDEV_TX_OK;
++	}
+ 
+ 	sq = &pf->qset.sq[rep->rep_id];
+ 	txq = netdev_get_tx_queue(dev, 0);
+@@ -672,7 +683,8 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack)
+ 		rep->pcifunc = pcifunc;
+ 
+ 		snprintf(ndev->name, sizeof(ndev->name), "Rpf%dvf%d",
+-			 rvu_get_pf(pcifunc), (pcifunc & RVU_PFVF_FUNC_MASK));
++			 rvu_get_pf(priv->pdev, pcifunc),
++			 (pcifunc & RVU_PFVF_FUNC_MASK));
+ 
+ 		ndev->hw_features = (NETIF_F_RXCSUM | NETIF_F_IP_CSUM |
+ 			       NETIF_F_IPV6_CSUM | NETIF_F_RXHASH |
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
+index 38446b3e4f13cc..5bc9e2c7d800b4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
+@@ -27,6 +27,7 @@ struct rep_stats {
+ 	u64 tx_bytes;
+ 	u64 tx_frames;
+ 	u64 tx_drops;
++	atomic_long_t tx_discards;
+ };
+ 
+ struct rep_dev {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 73cd7464437889..ca03dbcb07da0d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -107,7 +107,7 @@ static int mlx5_devlink_reload_fw_activate(struct devlink *devlink, struct netli
+ 	if (err)
+ 		return err;
+ 
+-	mlx5_unload_one_devl_locked(dev, true);
++	mlx5_sync_reset_unload_flow(dev, true);
+ 	err = mlx5_health_wait_pci_up(dev);
+ 	if (err)
+ 		NL_SET_ERR_MSG_MOD(extack, "FW activate aborted, PCI reads fail after reset");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 3efa8bf1d14ef4..4720523813b976 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -575,7 +575,6 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 		if (err)
+ 			return err;
+ 	}
+-	priv->dcbx.xoff = xoff;
+ 
+ 	/* Apply the settings */
+ 	if (update_buffer) {
+@@ -584,6 +583,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 			return err;
+ 	}
+ 
++	priv->dcbx.xoff = xoff;
++
+ 	if (update_prio2buffer)
+ 		err = mlx5e_port_set_priority2buffer(priv->mdev, prio2buffer);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
+index f4a19ffbb641c0..66d276a1be836a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
+@@ -66,11 +66,23 @@ struct mlx5e_port_buffer {
+ 	struct mlx5e_bufferx_reg  buffer[MLX5E_MAX_NETWORK_BUFFER];
+ };
+ 
++#ifdef CONFIG_MLX5_CORE_EN_DCB
+ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 				    u32 change, unsigned int mtu,
+ 				    struct ieee_pfc *pfc,
+ 				    u32 *buffer_size,
+ 				    u8 *prio2buffer);
++#else
++static inline int
++mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
++				u32 change, unsigned int mtu,
++				void *pfc,
++				u32 *buffer_size,
++				u8 *prio2buffer)
++{
++	return 0;
++}
++#endif
+ 
+ int mlx5e_port_query_buffer(struct mlx5e_priv *priv,
+ 			    struct mlx5e_port_buffer *port_buffer);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 16d818943487b2..e39c51cfc8e6c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -48,6 +48,7 @@
+ #include "en.h"
+ #include "en/dim.h"
+ #include "en/txrx.h"
++#include "en/port_buffer.h"
+ #include "en_tc.h"
+ #include "en_rep.h"
+ #include "en_accel/ipsec.h"
+@@ -135,6 +136,8 @@ void mlx5e_update_carrier(struct mlx5e_priv *priv)
+ 	if (up) {
+ 		netdev_info(priv->netdev, "Link up\n");
+ 		netif_carrier_on(priv->netdev);
++		mlx5e_port_manual_buffer_config(priv, 0, priv->netdev->mtu,
++						NULL, NULL, NULL);
+ 	} else {
+ 		netdev_info(priv->netdev, "Link down\n");
+ 		netif_carrier_off(priv->netdev);
+@@ -2985,9 +2988,11 @@ int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv)
+ 	struct mlx5e_params *params = &priv->channels.params;
+ 	struct net_device *netdev = priv->netdev;
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+-	u16 mtu;
++	u16 mtu, prev_mtu;
+ 	int err;
+ 
++	mlx5e_query_mtu(mdev, params, &prev_mtu);
++
+ 	err = mlx5e_set_mtu(mdev, params, params->sw_mtu);
+ 	if (err)
+ 		return err;
+@@ -2997,6 +3002,18 @@ int mlx5e_set_dev_port_mtu(struct mlx5e_priv *priv)
+ 		netdev_warn(netdev, "%s: VPort MTU %d is different than netdev mtu %d\n",
+ 			    __func__, mtu, params->sw_mtu);
+ 
++	if (mtu != prev_mtu && MLX5_BUFFER_SUPPORTED(mdev)) {
++		err = mlx5e_port_manual_buffer_config(priv, 0, mtu,
++						      NULL, NULL, NULL);
++		if (err) {
++			netdev_warn(netdev, "%s: Failed to set Xon/Xoff values with MTU %d (err %d), setting back to previous MTU %d\n",
++				    __func__, mtu, err, prev_mtu);
++
++			mlx5e_set_mtu(mdev, params, prev_mtu);
++			return err;
++		}
++	}
++
+ 	params->sw_mtu = mtu;
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 3dd9a6f407092f..29ce09af59aef0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -3706,6 +3706,13 @@ static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
+ 	char *value = val.vstr;
+ 	u8 eswitch_mode;
+ 
++	eswitch_mode = mlx5_eswitch_mode(dev);
++	if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
++		NL_SET_ERR_MSG_FMT_MOD(extack,
++				       "Changing fs mode is not supported when eswitch offloads enabled.");
++		return -EOPNOTSUPP;
++	}
++
+ 	if (!strcmp(value, "dmfs"))
+ 		return 0;
+ 
+@@ -3731,14 +3738,6 @@ static int mlx5_fs_mode_validate(struct devlink *devlink, u32 id,
+ 		return -EINVAL;
+ 	}
+ 
+-	eswitch_mode = mlx5_eswitch_mode(dev);
+-	if (eswitch_mode == MLX5_ESWITCH_OFFLOADS) {
+-		NL_SET_ERR_MSG_FMT_MOD(extack,
+-				       "Moving to %s is not supported when eswitch offloads enabled.",
+-				       value);
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 69933addd921e6..22995131824a03 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -6,13 +6,15 @@
+ #include "fw_reset.h"
+ #include "diag/fw_tracer.h"
+ #include "lib/tout.h"
++#include "sf/sf.h"
+ 
+ enum {
+ 	MLX5_FW_RESET_FLAGS_RESET_REQUESTED,
+ 	MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST,
+ 	MLX5_FW_RESET_FLAGS_PENDING_COMP,
+ 	MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS,
+-	MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED
++	MLX5_FW_RESET_FLAGS_RELOAD_REQUIRED,
++	MLX5_FW_RESET_FLAGS_UNLOAD_EVENT,
+ };
+ 
+ struct mlx5_fw_reset {
+@@ -219,7 +221,7 @@ int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev)
+ 	return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL0, 0, 0, false);
+ }
+ 
+-static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unloaded)
++static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
+ {
+ 	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+ 	struct devlink *devlink = priv_to_devlink(dev);
+@@ -228,8 +230,7 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev, bool unload
+ 	if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
+ 		complete(&fw_reset->done);
+ 	} else {
+-		if (!unloaded)
+-			mlx5_unload_one(dev, false);
++		mlx5_sync_reset_unload_flow(dev, false);
+ 		if (mlx5_health_wait_pci_up(dev))
+ 			mlx5_core_err(dev, "reset reload flow aborted, PCI reads still not working\n");
+ 		else
+@@ -272,7 +273,7 @@ static void mlx5_sync_reset_reload_work(struct work_struct *work)
+ 
+ 	mlx5_sync_reset_clear_reset_requested(dev, false);
+ 	mlx5_enter_error_state(dev, true);
+-	mlx5_fw_reset_complete_reload(dev, false);
++	mlx5_fw_reset_complete_reload(dev);
+ }
+ 
+ #define MLX5_RESET_POLL_INTERVAL	(HZ / 10)
+@@ -428,6 +429,11 @@ static bool mlx5_is_reset_now_capable(struct mlx5_core_dev *dev,
+ 		return false;
+ 	}
+ 
++	if (!mlx5_core_is_ecpf(dev) && !mlx5_sf_table_empty(dev)) {
++		mlx5_core_warn(dev, "SFs should be removed before reset\n");
++		return false;
++	}
++
+ #if IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)
+ 	if (reset_method != MLX5_MFRL_REG_PCI_RESET_METHOD_HOT_RESET) {
+ 		err = mlx5_check_hotplug_interrupt(dev, bridge);
+@@ -586,6 +592,65 @@ static int mlx5_sync_pci_reset(struct mlx5_core_dev *dev, u8 reset_method)
+ 	return err;
+ }
+ 
++void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked)
++{
++	struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
++	unsigned long timeout;
++	int poll_freq = 20;
++	bool reset_action;
++	u8 rst_state;
++	int err;
++
++	if (locked)
++		mlx5_unload_one_devl_locked(dev, false);
++	else
++		mlx5_unload_one(dev, false);
++
++	if (!test_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags))
++		return;
++
++	mlx5_set_fw_rst_ack(dev);
++	mlx5_core_warn(dev, "Sync Reset Unload done, device reset expected\n");
++
++	reset_action = false;
++	timeout = jiffies + msecs_to_jiffies(mlx5_tout_ms(dev, RESET_UNLOAD));
++	do {
++		rst_state = mlx5_get_fw_rst_state(dev);
++		if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ ||
++		    rst_state == MLX5_FW_RST_STATE_IDLE) {
++			reset_action = true;
++			break;
++		}
++		if (rst_state == MLX5_FW_RST_STATE_DROP_MODE) {
++			mlx5_core_info(dev, "Sync Reset Drop mode ack\n");
++			mlx5_set_fw_rst_ack(dev);
++			poll_freq = 1000;
++		}
++		msleep(poll_freq);
++	} while (!time_after(jiffies, timeout));
++
++	if (!reset_action) {
++		mlx5_core_err(dev, "Got timeout waiting for sync reset action, state = %u\n",
++			      rst_state);
++		fw_reset->ret = -ETIMEDOUT;
++		goto done;
++	}
++
++	mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n",
++		       rst_state);
++	if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ) {
++		err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
++		if (err) {
++			mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n",
++				       err);
++			fw_reset->ret = err;
++		}
++	}
++
++done:
++	clear_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags);
++}
++
+ static void mlx5_sync_reset_now_event(struct work_struct *work)
+ {
+ 	struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset,
+@@ -613,17 +678,13 @@ static void mlx5_sync_reset_now_event(struct work_struct *work)
+ 	mlx5_enter_error_state(dev, true);
+ done:
+ 	fw_reset->ret = err;
+-	mlx5_fw_reset_complete_reload(dev, false);
++	mlx5_fw_reset_complete_reload(dev);
+ }
+ 
+ static void mlx5_sync_reset_unload_event(struct work_struct *work)
+ {
+ 	struct mlx5_fw_reset *fw_reset;
+ 	struct mlx5_core_dev *dev;
+-	unsigned long timeout;
+-	int poll_freq = 20;
+-	bool reset_action;
+-	u8 rst_state;
+ 	int err;
+ 
+ 	fw_reset = container_of(work, struct mlx5_fw_reset, reset_unload_work);
+@@ -632,6 +693,7 @@ static void mlx5_sync_reset_unload_event(struct work_struct *work)
+ 	if (mlx5_sync_reset_clear_reset_requested(dev, false))
+ 		return;
+ 
++	set_bit(MLX5_FW_RESET_FLAGS_UNLOAD_EVENT, &fw_reset->reset_flags);
+ 	mlx5_core_warn(dev, "Sync Reset Unload. Function is forced down.\n");
+ 
+ 	err = mlx5_cmd_fast_teardown_hca(dev);
+@@ -640,49 +702,7 @@ static void mlx5_sync_reset_unload_event(struct work_struct *work)
+ 	else
+ 		mlx5_enter_error_state(dev, true);
+ 
+-	if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags))
+-		mlx5_unload_one_devl_locked(dev, false);
+-	else
+-		mlx5_unload_one(dev, false);
+-
+-	mlx5_set_fw_rst_ack(dev);
+-	mlx5_core_warn(dev, "Sync Reset Unload done, device reset expected\n");
+-
+-	reset_action = false;
+-	timeout = jiffies + msecs_to_jiffies(mlx5_tout_ms(dev, RESET_UNLOAD));
+-	do {
+-		rst_state = mlx5_get_fw_rst_state(dev);
+-		if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ ||
+-		    rst_state == MLX5_FW_RST_STATE_IDLE) {
+-			reset_action = true;
+-			break;
+-		}
+-		if (rst_state == MLX5_FW_RST_STATE_DROP_MODE) {
+-			mlx5_core_info(dev, "Sync Reset Drop mode ack\n");
+-			mlx5_set_fw_rst_ack(dev);
+-			poll_freq = 1000;
+-		}
+-		msleep(poll_freq);
+-	} while (!time_after(jiffies, timeout));
+-
+-	if (!reset_action) {
+-		mlx5_core_err(dev, "Got timeout waiting for sync reset action, state = %u\n",
+-			      rst_state);
+-		fw_reset->ret = -ETIMEDOUT;
+-		goto done;
+-	}
+-
+-	mlx5_core_warn(dev, "Sync Reset, got reset action. rst_state = %u\n", rst_state);
+-	if (rst_state == MLX5_FW_RST_STATE_TOGGLE_REQ) {
+-		err = mlx5_sync_pci_reset(dev, fw_reset->reset_method);
+-		if (err) {
+-			mlx5_core_warn(dev, "mlx5_sync_pci_reset failed, err %d\n", err);
+-			fw_reset->ret = err;
+-		}
+-	}
+-
+-done:
+-	mlx5_fw_reset_complete_reload(dev, true);
++	mlx5_fw_reset_complete_reload(dev);
+ }
+ 
+ static void mlx5_sync_reset_abort_event(struct work_struct *work)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
+index ea527d06a85f07..d5b28525c960dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
+@@ -12,6 +12,7 @@ int mlx5_fw_reset_set_reset_sync(struct mlx5_core_dev *dev, u8 reset_type_sel,
+ int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev);
+ 
+ int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev);
++void mlx5_sync_reset_unload_flow(struct mlx5_core_dev *dev, bool locked);
+ int mlx5_fw_reset_verify_fw_complete(struct mlx5_core_dev *dev,
+ 				     struct netlink_ext_ack *extack);
+ void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+index 0864ba625c07d7..3304f25cc8055c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+@@ -518,3 +518,13 @@ void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev)
+ 	WARN_ON(!xa_empty(&table->function_ids));
+ 	kfree(table);
+ }
++
++bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev)
++{
++	struct mlx5_sf_table *table = dev->priv.sf_table;
++
++	if (!table)
++		return true;
++
++	return xa_empty(&table->function_ids);
++}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/sf.h b/drivers/net/ethernet/mellanox/mlx5/core/sf/sf.h
+index 860f9ddb7107b8..89559a37997ad6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/sf.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/sf.h
+@@ -17,6 +17,7 @@ void mlx5_sf_hw_table_destroy(struct mlx5_core_dev *dev);
+ 
+ int mlx5_sf_table_init(struct mlx5_core_dev *dev);
+ void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev);
++bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev);
+ 
+ int mlx5_devlink_sf_port_new(struct devlink *devlink,
+ 			     const struct devlink_port_new_attrs *add_attr,
+@@ -61,6 +62,11 @@ static inline void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev)
+ {
+ }
+ 
++static inline bool mlx5_sf_table_empty(const struct mlx5_core_dev *dev)
++{
++	return true;
++}
++
+ #endif
+ 
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+index 447ea3f8722ce4..8e4a085f4a2ec9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+@@ -117,7 +117,7 @@ static int hws_action_get_shared_stc_nic(struct mlx5hws_context *ctx,
+ 		mlx5hws_err(ctx, "No such stc_type: %d\n", stc_type);
+ 		pr_warn("HWS: Invalid stc_type: %d\n", stc_type);
+ 		ret = -EINVAL;
+-		goto unlock_and_out;
++		goto free_shared_stc;
+ 	}
+ 
+ 	ret = mlx5hws_action_alloc_single_stc(ctx, &stc_attr, tbl_type,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pat_arg.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pat_arg.c
+index 51e4c551e0efd6..d56271a9e4f014 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pat_arg.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pat_arg.c
+@@ -279,7 +279,7 @@ int mlx5hws_pat_get_pattern(struct mlx5hws_context *ctx,
+ 	return ret;
+ 
+ clean_pattern:
+-	mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, *pattern_id);
++	mlx5hws_cmd_header_modify_pattern_destroy(ctx->mdev, ptrn_id);
+ out_unlock:
+ 	mutex_unlock(&ctx->pattern_cache->lock);
+ 	return ret;
+@@ -527,7 +527,6 @@ int mlx5hws_pat_calc_nop(__be64 *pattern, size_t num_actions,
+ 			 u32 *nop_locations, __be64 *new_pat)
+ {
+ 	u16 prev_src_field = INVALID_FIELD, prev_dst_field = INVALID_FIELD;
+-	u16 src_field, dst_field;
+ 	u8 action_type;
+ 	bool dependent;
+ 	size_t i, j;
+@@ -539,6 +538,9 @@ int mlx5hws_pat_calc_nop(__be64 *pattern, size_t num_actions,
+ 		return 0;
+ 
+ 	for (i = 0, j = 0; i < num_actions; i++, j++) {
++		u16 src_field = INVALID_FIELD;
++		u16 dst_field = INVALID_FIELD;
++
+ 		if (j >= max_actions)
+ 			return -EINVAL;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c
+index 7e37d6e9eb8361..7b5071c3df368b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/pool.c
+@@ -124,6 +124,7 @@ static int hws_pool_buddy_init(struct mlx5hws_pool *pool)
+ 		mlx5hws_err(pool->ctx, "Failed to create resource type: %d size %zu\n",
+ 			    pool->type, pool->alloc_log_sz);
+ 		mlx5hws_buddy_cleanup(buddy);
++		kfree(buddy);
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+index 553bd8b8bb0562..d3d1003df83145 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+@@ -52,6 +52,8 @@ int __fbnic_open(struct fbnic_net *fbn)
+ 	fbnic_bmc_rpc_init(fbd);
+ 	fbnic_rss_reinit(fbd, fbn);
+ 
++	phylink_resume(fbn->phylink);
++
+ 	return 0;
+ time_stop:
+ 	fbnic_time_stop(fbn);
+@@ -84,6 +86,8 @@ static int fbnic_stop(struct net_device *netdev)
+ {
+ 	struct fbnic_net *fbn = netdev_priv(netdev);
+ 
++	phylink_suspend(fbn->phylink, fbnic_bmc_present(fbn->fbd));
++
+ 	fbnic_down(fbn);
+ 	fbnic_pcs_free_irq(fbn->fbd);
+ 
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+index 249d3ef862d5a9..38045cce380123 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+@@ -118,14 +118,12 @@ static void fbnic_service_task_start(struct fbnic_net *fbn)
+ 	struct fbnic_dev *fbd = fbn->fbd;
+ 
+ 	schedule_delayed_work(&fbd->service_task, HZ);
+-	phylink_resume(fbn->phylink);
+ }
+ 
+ static void fbnic_service_task_stop(struct fbnic_net *fbn)
+ {
+ 	struct fbnic_dev *fbd = fbn->fbd;
+ 
+-	phylink_suspend(fbn->phylink, fbnic_bmc_present(fbd));
+ 	cancel_delayed_work(&fbd->service_task);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index 6cadf8de4fdfdb..00e929bf280bae 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -49,6 +49,14 @@ static void dwxgmac2_core_init(struct mac_device_info *hw,
+ 	writel(XGMAC_INT_DEFAULT_EN, ioaddr + XGMAC_INT_EN);
+ }
+ 
++static void dwxgmac2_update_caps(struct stmmac_priv *priv)
++{
++	if (!priv->dma_cap.mbps_10_100)
++		priv->hw->link.caps &= ~(MAC_10 | MAC_100);
++	else if (!priv->dma_cap.half_duplex)
++		priv->hw->link.caps &= ~(MAC_10HD | MAC_100HD);
++}
++
+ static void dwxgmac2_set_mac(void __iomem *ioaddr, bool enable)
+ {
+ 	u32 tx = readl(ioaddr + XGMAC_TX_CONFIG);
+@@ -1424,6 +1432,7 @@ static void dwxgmac2_set_arp_offload(struct mac_device_info *hw, bool en,
+ 
+ const struct stmmac_ops dwxgmac210_ops = {
+ 	.core_init = dwxgmac2_core_init,
++	.update_caps = dwxgmac2_update_caps,
+ 	.set_mac = dwxgmac2_set_mac,
+ 	.rx_ipc = dwxgmac2_rx_ipc,
+ 	.rx_queue_enable = dwxgmac2_rx_queue_enable,
+@@ -1532,8 +1541,8 @@ int dwxgmac2_setup(struct stmmac_priv *priv)
+ 		mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins);
+ 
+ 	mac->link.caps = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
+-			 MAC_1000FD | MAC_2500FD | MAC_5000FD |
+-			 MAC_10000FD;
++			 MAC_10 | MAC_100 | MAC_1000FD |
++			 MAC_2500FD | MAC_5000FD | MAC_10000FD;
+ 	mac->link.duplex = 0;
+ 	mac->link.speed10 = XGMAC_CONFIG_SS_10_MII;
+ 	mac->link.speed100 = XGMAC_CONFIG_SS_100_MII;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+index 5dcc95bc0ad28b..4d6bb995d8d84c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -203,10 +203,6 @@ static void dwxgmac2_dma_rx_mode(struct stmmac_priv *priv, void __iomem *ioaddr,
+ 	}
+ 
+ 	writel(value, ioaddr + XGMAC_MTL_RXQ_OPMODE(channel));
+-
+-	/* Enable MTL RX overflow */
+-	value = readl(ioaddr + XGMAC_MTL_QINTEN(channel));
+-	writel(value | XGMAC_RXOIE, ioaddr + XGMAC_MTL_QINTEN(channel));
+ }
+ 
+ static void dwxgmac2_dma_tx_mode(struct stmmac_priv *priv, void __iomem *ioaddr,
+@@ -386,8 +382,11 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
+ static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+ 				   struct dma_features *dma_cap)
+ {
++	struct stmmac_priv *priv;
+ 	u32 hw_cap;
+ 
++	priv = container_of(dma_cap, struct stmmac_priv, dma_cap);
++
+ 	/* MAC HW feature 0 */
+ 	hw_cap = readl(ioaddr + XGMAC_HW_FEATURE0);
+ 	dma_cap->edma = (hw_cap & XGMAC_HWFEAT_EDMA) >> 31;
+@@ -410,6 +409,8 @@ static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
+ 	dma_cap->vlhash = (hw_cap & XGMAC_HWFEAT_VLHASH) >> 4;
+ 	dma_cap->half_duplex = (hw_cap & XGMAC_HWFEAT_HDSEL) >> 3;
+ 	dma_cap->mbps_1000 = (hw_cap & XGMAC_HWFEAT_GMIISEL) >> 1;
++	if (dma_cap->mbps_1000 && priv->synopsys_id >= DWXGMAC_CORE_2_20)
++		dma_cap->mbps_10_100 = 1;
+ 
+ 	/* MAC HW feature 1 */
+ 	hw_cap = readl(ioaddr + XGMAC_HW_FEATURE1);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index e0fb06af1f940d..36082d4917bcc9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2584,6 +2584,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+ 	struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue);
+ 	struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
+ 	struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
++	bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported;
+ 	struct xsk_buff_pool *pool = tx_q->xsk_pool;
+ 	unsigned int entry = tx_q->cur_tx;
+ 	struct dma_desc *tx_desc = NULL;
+@@ -2671,7 +2672,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+ 		}
+ 
+ 		stmmac_prepare_tx_desc(priv, tx_desc, 1, xdp_desc.len,
+-				       true, priv->mode, true, true,
++				       csum, priv->mode, true, true,
+ 				       xdp_desc.len);
+ 
+ 		stmmac_enable_dma_transmission(priv, priv->ioaddr, queue);
+@@ -4983,6 +4984,7 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
+ {
+ 	struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
+ 	struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
++	bool csum = !priv->plat->tx_queues_cfg[queue].coe_unsupported;
+ 	unsigned int entry = tx_q->cur_tx;
+ 	struct dma_desc *tx_desc;
+ 	dma_addr_t dma_addr;
+@@ -5034,7 +5036,7 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
+ 	stmmac_set_desc_addr(priv, tx_desc, dma_addr);
+ 
+ 	stmmac_prepare_tx_desc(priv, tx_desc, 1, xdpf->len,
+-			       true, priv->mode, true, true,
++			       csum, priv->mode, true, true,
+ 			       xdpf->len);
+ 
+ 	tx_q->tx_count_frames++;
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 720104661d7f24..60a4629fe6ba7a 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -1812,6 +1812,11 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
+ 
+ 	/* Enable NAPI handler before init callbacks */
+ 	netif_napi_add(ndev, &net_device->chan_table[0].napi, netvsc_poll);
++	napi_enable(&net_device->chan_table[0].napi);
++	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX,
++			     &net_device->chan_table[0].napi);
++	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX,
++			     &net_device->chan_table[0].napi);
+ 
+ 	/* Open the channel */
+ 	device->channel->next_request_id_callback = vmbus_next_request_id;
+@@ -1831,12 +1836,6 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
+ 	/* Channel is opened */
+ 	netdev_dbg(ndev, "hv_netvsc channel opened successfully\n");
+ 
+-	napi_enable(&net_device->chan_table[0].napi);
+-	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX,
+-			     &net_device->chan_table[0].napi);
+-	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX,
+-			     &net_device->chan_table[0].napi);
+-
+ 	/* Connect with the NetVsp */
+ 	ret = netvsc_connect_vsp(device, net_device, device_info);
+ 	if (ret != 0) {
+@@ -1854,14 +1853,14 @@ struct netvsc_device *netvsc_device_add(struct hv_device *device,
+ 
+ close:
+ 	RCU_INIT_POINTER(net_device_ctx->nvdev, NULL);
+-	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL);
+-	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL);
+-	napi_disable(&net_device->chan_table[0].napi);
+ 
+ 	/* Now, we can close the channel safely */
+ 	vmbus_close(device->channel);
+ 
+ cleanup:
++	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_TX, NULL);
++	netif_queue_set_napi(ndev, 0, NETDEV_QUEUE_TYPE_RX, NULL);
++	napi_disable(&net_device->chan_table[0].napi);
+ 	netif_napi_del(&net_device->chan_table[0].napi);
+ 
+ cleanup2:
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index 9e73959e61ee0b..c35f9685b6bf04 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1252,17 +1252,26 @@ static void netvsc_sc_open(struct vmbus_channel *new_sc)
+ 	new_sc->rqstor_size = netvsc_rqstor_size(netvsc_ring_bytes);
+ 	new_sc->max_pkt_size = NETVSC_MAX_PKT_SIZE;
+ 
++	/* Enable napi before opening the vmbus channel to avoid races:
++	 * data the host places on the host->guest ring could otherwise
++	 * be missed if napi were not yet enabled.
++	 */
++	napi_enable(&nvchan->napi);
++	netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX,
++			     &nvchan->napi);
++	netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX,
++			     &nvchan->napi);
++
+ 	ret = vmbus_open(new_sc, netvsc_ring_bytes,
+ 			 netvsc_ring_bytes, NULL, 0,
+ 			 netvsc_channel_cb, nvchan);
+-	if (ret == 0) {
+-		napi_enable(&nvchan->napi);
+-		netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX,
+-				     &nvchan->napi);
+-		netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX,
+-				     &nvchan->napi);
+-	} else {
++	if (ret != 0) {
+ 		netdev_notice(ndev, "sub channel open failed: %d\n", ret);
++		netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_TX,
++				     NULL);
++		netif_queue_set_napi(ndev, chn_index, NETDEV_QUEUE_TYPE_RX,
++				     NULL);
++		napi_disable(&nvchan->napi);
+ 	}
+ 
+ 	if (atomic_inc_return(&nvscdev->open_chn) == nvscdev->num_chn)
+diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h
+index 58c6d47fbe046d..2bfe314ef881c3 100644
+--- a/drivers/net/phy/mscc/mscc.h
++++ b/drivers/net/phy/mscc/mscc.h
+@@ -481,6 +481,7 @@ static inline void vsc8584_config_macsec_intr(struct phy_device *phydev)
+ void vsc85xx_link_change_notify(struct phy_device *phydev);
+ void vsc8584_config_ts_intr(struct phy_device *phydev);
+ int vsc8584_ptp_init(struct phy_device *phydev);
++void vsc8584_ptp_deinit(struct phy_device *phydev);
+ int vsc8584_ptp_probe_once(struct phy_device *phydev);
+ int vsc8584_ptp_probe(struct phy_device *phydev);
+ irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev);
+@@ -495,6 +496,9 @@ static inline int vsc8584_ptp_init(struct phy_device *phydev)
+ {
+ 	return 0;
+ }
++static inline void vsc8584_ptp_deinit(struct phy_device *phydev)
++{
++}
+ static inline int vsc8584_ptp_probe_once(struct phy_device *phydev)
+ {
+ 	return 0;
+diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c
+index c3209cf00e9607..ac9b6130ef5021 100644
+--- a/drivers/net/phy/mscc/mscc_main.c
++++ b/drivers/net/phy/mscc/mscc_main.c
+@@ -2338,9 +2338,7 @@ static int vsc85xx_probe(struct phy_device *phydev)
+ 
+ static void vsc85xx_remove(struct phy_device *phydev)
+ {
+-	struct vsc8531_private *priv = phydev->priv;
+-
+-	skb_queue_purge(&priv->rx_skbs_list);
++	vsc8584_ptp_deinit(phydev);
+ }
+ 
+ /* Microsemi VSC85xx PHYs */
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index de6c7312e8f290..72847320cb652d 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -1298,7 +1298,6 @@ static void vsc8584_set_input_clk_configured(struct phy_device *phydev)
+ 
+ static int __vsc8584_init_ptp(struct phy_device *phydev)
+ {
+-	struct vsc8531_private *vsc8531 = phydev->priv;
+ 	static const u32 ltc_seq_e[] = { 0, 400000, 0, 0, 0 };
+ 	static const u8  ltc_seq_a[] = { 8, 6, 5, 4, 2 };
+ 	u32 val;
+@@ -1515,17 +1514,7 @@ static int __vsc8584_init_ptp(struct phy_device *phydev)
+ 
+ 	vsc85xx_ts_eth_cmp1_sig(phydev);
+ 
+-	vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp;
+-	vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp;
+-	vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp;
+-	vsc8531->mii_ts.ts_info  = vsc85xx_ts_info;
+-	phydev->mii_ts = &vsc8531->mii_ts;
+-
+-	memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps));
+-
+-	vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps,
+-						     &phydev->mdio.dev);
+-	return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock);
++	return 0;
+ }
+ 
+ void vsc8584_config_ts_intr(struct phy_device *phydev)
+@@ -1552,6 +1541,16 @@ int vsc8584_ptp_init(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
++void vsc8584_ptp_deinit(struct phy_device *phydev)
++{
++	struct vsc8531_private *vsc8531 = phydev->priv;
++
++	if (vsc8531->ptp->ptp_clock) {
++		ptp_clock_unregister(vsc8531->ptp->ptp_clock);
++		skb_queue_purge(&vsc8531->rx_skbs_list);
++	}
++}
++
+ irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev)
+ {
+ 	struct vsc8531_private *priv = phydev->priv;
+@@ -1612,7 +1611,16 @@ int vsc8584_ptp_probe(struct phy_device *phydev)
+ 
+ 	vsc8531->ptp->phydev = phydev;
+ 
+-	return 0;
++	vsc8531->mii_ts.rxtstamp = vsc85xx_rxtstamp;
++	vsc8531->mii_ts.txtstamp = vsc85xx_txtstamp;
++	vsc8531->mii_ts.hwtstamp = vsc85xx_hwtstamp;
++	vsc8531->mii_ts.ts_info  = vsc85xx_ts_info;
++	phydev->mii_ts = &vsc8531->mii_ts;
++
++	memcpy(&vsc8531->ptp->caps, &vsc85xx_clk_caps, sizeof(vsc85xx_clk_caps));
++	vsc8531->ptp->ptp_clock = ptp_clock_register(&vsc8531->ptp->caps,
++						     &phydev->mdio.dev);
++	return PTR_ERR_OR_ZERO(vsc8531->ptp->ptp_clock);
+ }
+ 
+ int vsc8584_ptp_probe_once(struct phy_device *phydev)
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index e56901bb6ebc43..11352d85475ae2 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1355,6 +1355,9 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
+ 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1034, 2)}, /* Telit LE910C4-WWX */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1037, 4)}, /* Telit LE910C4-WWX */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1038, 3)}, /* Telit LE910C4-WWX */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x103a, 0)}, /* Telit LE910C4-WWX */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
+diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c
+index 0aba760f7577ef..2eaaddcb0ec4e2 100644
+--- a/drivers/of/dynamic.c
++++ b/drivers/of/dynamic.c
+@@ -935,10 +935,15 @@ static int of_changeset_add_prop_helper(struct of_changeset *ocs,
+ 		return -ENOMEM;
+ 
+ 	ret = of_changeset_add_property(ocs, np, new_pp);
+-	if (ret)
++	if (ret) {
+ 		__of_prop_free(new_pp);
++		return ret;
++	}
+ 
+-	return ret;
++	new_pp->next = np->deadprops;
++	np->deadprops = new_pp;
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 77016c0cc296e5..2e9ea751ed2df0 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -25,6 +25,7 @@
+ #include <linux/memblock.h>
+ #include <linux/kmemleak.h>
+ #include <linux/cma.h>
++#include <linux/dma-map-ops.h>
+ 
+ #include "of_private.h"
+ 
+@@ -175,13 +176,17 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ 		base = dt_mem_next_cell(dt_root_addr_cells, &prop);
+ 		size = dt_mem_next_cell(dt_root_size_cells, &prop);
+ 
+-		if (size &&
+-		    early_init_dt_reserve_memory(base, size, nomap) == 0)
++		if (size && early_init_dt_reserve_memory(base, size, nomap) == 0) {
++			/* Architecture specific contiguous memory fixup. */
++			if (of_flat_dt_is_compatible(node, "shared-dma-pool") &&
++			    of_get_flat_dt_prop(node, "reusable", NULL))
++				dma_contiguous_early_fixup(base, size);
+ 			pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n",
+ 				uname, &base, (unsigned long)(size / SZ_1M));
+-		else
++		} else {
+ 			pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n",
+ 			       uname, &base, (unsigned long)(size / SZ_1M));
++		}
+ 
+ 		len -= t_len;
+ 	}
+@@ -472,7 +477,10 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
+ 		       uname, (unsigned long)(size / SZ_1M));
+ 		return -ENOMEM;
+ 	}
+-
++	/* Architecture specific contiguous memory fixup. */
++	if (of_flat_dt_is_compatible(node, "shared-dma-pool") &&
++	    of_get_flat_dt_prop(node, "reusable", NULL))
++		dma_contiguous_early_fixup(base, size);
+ 	/* Save region in the reserved_mem array */
+ 	fdt_reserved_mem_save_node(node, uname, base, size);
+ 	return 0;
+@@ -771,6 +779,7 @@ int of_reserved_mem_region_to_resource(const struct device_node *np,
+ 		return -EINVAL;
+ 
+ 	resource_set_range(res, rmem->base, rmem->size);
++	res->flags = IORESOURCE_MEM;
+ 	res->name = rmem->name;
+ 	return 0;
+ }
+diff --git a/drivers/pinctrl/Kconfig b/drivers/pinctrl/Kconfig
+index 33db9104df178e..3952f77081a35c 100644
+--- a/drivers/pinctrl/Kconfig
++++ b/drivers/pinctrl/Kconfig
+@@ -527,6 +527,7 @@ config PINCTRL_STMFX
+ 	tristate "STMicroelectronics STMFX GPIO expander pinctrl driver"
+ 	depends on I2C
+ 	depends on OF_GPIO
++	depends on HAS_IOMEM
+ 	select GENERIC_PINCONF
+ 	select GPIOLIB_IRQCHIP
+ 	select MFD_STMFX
+diff --git a/drivers/pinctrl/mediatek/pinctrl-airoha.c b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+index b97b28ebb37a6e..3fa5131d81e525 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-airoha.c
++++ b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+@@ -2696,7 +2696,7 @@ static int airoha_pinconf_get(struct pinctrl_dev *pctrl_dev,
+ 		arg = 1;
+ 		break;
+ 	default:
+-		return -EOPNOTSUPP;
++		return -ENOTSUPP;
+ 	}
+ 
+ 	*config = pinconf_to_config_packed(param, arg);
+@@ -2788,7 +2788,7 @@ static int airoha_pinconf_set(struct pinctrl_dev *pctrl_dev,
+ 			break;
+ 		}
+ 		default:
+-			return -EOPNOTSUPP;
++			return -ENOTSUPP;
+ 		}
+ 	}
+ 
+@@ -2805,10 +2805,10 @@ static int airoha_pinconf_group_get(struct pinctrl_dev *pctrl_dev,
+ 		if (airoha_pinconf_get(pctrl_dev,
+ 				       airoha_pinctrl_groups[group].pins[i],
+ 				       config))
+-			return -EOPNOTSUPP;
++			return -ENOTSUPP;
+ 
+ 		if (i && cur_config != *config)
+-			return -EOPNOTSUPP;
++			return -ENOTSUPP;
+ 
+ 		cur_config = *config;
+ 	}
+diff --git a/drivers/platform/x86/intel/int3472/discrete.c b/drivers/platform/x86/intel/int3472/discrete.c
+index 4c0aed6e626f6b..bdfb8a800c5489 100644
+--- a/drivers/platform/x86/intel/int3472/discrete.c
++++ b/drivers/platform/x86/intel/int3472/discrete.c
+@@ -193,6 +193,10 @@ static void int3472_get_con_id_and_polarity(struct int3472_discrete_device *int3
+ 		*con_id = "privacy-led";
+ 		*gpio_flags = GPIO_ACTIVE_HIGH;
+ 		break;
++	case INT3472_GPIO_TYPE_HOTPLUG_DETECT:
++		*con_id = "hpd";
++		*gpio_flags = GPIO_ACTIVE_HIGH;
++		break;
+ 	case INT3472_GPIO_TYPE_POWER_ENABLE:
+ 		*con_id = "avdd";
+ 		*gpio_flags = GPIO_ACTIVE_HIGH;
+@@ -223,6 +227,7 @@ static void int3472_get_con_id_and_polarity(struct int3472_discrete_device *int3
+  * 0x0b Power enable
+  * 0x0c Clock enable
+  * 0x0d Privacy LED
++ * 0x13 Hotplug detect
+  *
+  * There are some known platform specific quirks where that does not quite
+  * hold up; for example where a pin with type 0x01 (Power down) is mapped to
+@@ -292,6 +297,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares,
+ 	switch (type) {
+ 	case INT3472_GPIO_TYPE_RESET:
+ 	case INT3472_GPIO_TYPE_POWERDOWN:
++	case INT3472_GPIO_TYPE_HOTPLUG_DETECT:
+ 		ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, con_id, gpio_flags);
+ 		if (ret)
+ 			err_msg = "Failed to map GPIO pin to sensor\n";
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index d772258e29ad25..e6464b99896090 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -265,7 +265,7 @@ show_shost_supported_mode(struct device *dev, struct device_attribute *attr,
+ 	return show_shost_mode(supported_mode, buf);
+ }
+ 
+-static DEVICE_ATTR(supported_mode, S_IRUGO | S_IWUSR, show_shost_supported_mode, NULL);
++static DEVICE_ATTR(supported_mode, S_IRUGO, show_shost_supported_mode, NULL);
+ 
+ static ssize_t
+ show_shost_active_mode(struct device *dev,
+@@ -279,7 +279,7 @@ show_shost_active_mode(struct device *dev,
+ 		return show_shost_mode(shost->active_mode, buf);
+ }
+ 
+-static DEVICE_ATTR(active_mode, S_IRUGO | S_IWUSR, show_shost_active_mode, NULL);
++static DEVICE_ATTR(active_mode, S_IRUGO, show_shost_active_mode, NULL);
+ 
+ static int check_reset_type(const char *str)
+ {
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 985925147ac068..4d49482f0c5799 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -125,7 +125,11 @@ struct lvts_ctrl_data {
+ 
+ struct lvts_data {
+ 	const struct lvts_ctrl_data *lvts_ctrl;
++	const u32 *conn_cmd;
++	const u32 *init_cmd;
+ 	int num_lvts_ctrl;
++	int num_conn_cmd;
++	int num_init_cmd;
+ 	int temp_factor;
+ 	int temp_offset;
+ 	int gt_calib_bit_offset;
+@@ -902,7 +906,7 @@ static void lvts_ctrl_monitor_enable(struct device *dev, struct lvts_ctrl *lvts_
+  * each write in the configuration register must be separated by a
+  * delay of 2 us.
+  */
+-static void lvts_write_config(struct lvts_ctrl *lvts_ctrl, u32 *cmds, int nr_cmds)
++static void lvts_write_config(struct lvts_ctrl *lvts_ctrl, const u32 *cmds, int nr_cmds)
+ {
+ 	int i;
+ 
+@@ -985,9 +989,10 @@ static int lvts_ctrl_set_enable(struct lvts_ctrl *lvts_ctrl, int enable)
+ 
+ static int lvts_ctrl_connect(struct device *dev, struct lvts_ctrl *lvts_ctrl)
+ {
+-	u32 id, cmds[] = { 0xC103FFFF, 0xC502FF55 };
++	const struct lvts_data *lvts_data = lvts_ctrl->lvts_data;
++	u32 id;
+ 
+-	lvts_write_config(lvts_ctrl, cmds, ARRAY_SIZE(cmds));
++	lvts_write_config(lvts_ctrl, lvts_data->conn_cmd, lvts_data->num_conn_cmd);
+ 
+ 	/*
+ 	 * LVTS_ID : Get ID and status of the thermal controller
+@@ -1006,17 +1011,9 @@ static int lvts_ctrl_connect(struct device *dev, struct lvts_ctrl *lvts_ctrl)
+ 
+ static int lvts_ctrl_initialize(struct device *dev, struct lvts_ctrl *lvts_ctrl)
+ {
+-	/*
+-	 * Write device mask: 0xC1030000
+-	 */
+-	u32 cmds[] = {
+-		0xC1030E01, 0xC1030CFC, 0xC1030A8C, 0xC103098D, 0xC10308F1,
+-		0xC10307A6, 0xC10306B8, 0xC1030500, 0xC1030420, 0xC1030300,
+-		0xC1030030, 0xC10300F6, 0xC1030050, 0xC1030060, 0xC10300AC,
+-		0xC10300FC, 0xC103009D, 0xC10300F1, 0xC10300E1
+-	};
++	const struct lvts_data *lvts_data = lvts_ctrl->lvts_data;
+ 
+-	lvts_write_config(lvts_ctrl, cmds, ARRAY_SIZE(cmds));
++	lvts_write_config(lvts_ctrl, lvts_data->init_cmd, lvts_data->num_init_cmd);
+ 
+ 	return 0;
+ }
+@@ -1445,6 +1442,25 @@ static int lvts_resume(struct device *dev)
+ 	return 0;
+ }
+ 
++static const u32 default_conn_cmds[] = { 0xC103FFFF, 0xC502FF55 };
++static const u32 mt7988_conn_cmds[] = { 0xC103FFFF, 0xC502FC55 };
++
++/*
++ * Write device mask: 0xC1030000
++ */
++static const u32 default_init_cmds[] = {
++	0xC1030E01, 0xC1030CFC, 0xC1030A8C, 0xC103098D, 0xC10308F1,
++	0xC10307A6, 0xC10306B8, 0xC1030500, 0xC1030420, 0xC1030300,
++	0xC1030030, 0xC10300F6, 0xC1030050, 0xC1030060, 0xC10300AC,
++	0xC10300FC, 0xC103009D, 0xC10300F1, 0xC10300E1
++};
++
++static const u32 mt7988_init_cmds[] = {
++	0xC1030300, 0xC1030420, 0xC1030500, 0xC10307A6, 0xC1030CFC,
++	0xC1030A8C, 0xC103098D, 0xC10308F1, 0xC1030B04, 0xC1030E01,
++	0xC10306B8
++};
++
+ /*
+  * The MT8186 calibration data is stored as packed 3-byte little-endian
+  * values using a weird layout that makes sense only when viewed as a 32-bit
+@@ -1739,7 +1755,11 @@ static const struct lvts_ctrl_data mt8195_lvts_ap_data_ctrl[] = {
+ 
+ static const struct lvts_data mt7988_lvts_ap_data = {
+ 	.lvts_ctrl	= mt7988_lvts_ap_data_ctrl,
++	.conn_cmd	= mt7988_conn_cmds,
++	.init_cmd	= mt7988_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt7988_lvts_ap_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(mt7988_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(mt7988_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT7988,
+ 	.temp_offset	= LVTS_COEFF_B_MT7988,
+ 	.gt_calib_bit_offset = 24,
+@@ -1747,7 +1767,11 @@ static const struct lvts_data mt7988_lvts_ap_data = {
+ 
+ static const struct lvts_data mt8186_lvts_data = {
+ 	.lvts_ctrl	= mt8186_lvts_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8186_lvts_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT7988,
+ 	.temp_offset	= LVTS_COEFF_B_MT7988,
+ 	.gt_calib_bit_offset = 24,
+@@ -1756,7 +1780,11 @@ static const struct lvts_data mt8186_lvts_data = {
+ 
+ static const struct lvts_data mt8188_lvts_mcu_data = {
+ 	.lvts_ctrl	= mt8188_lvts_mcu_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8188_lvts_mcu_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT8195,
+ 	.temp_offset	= LVTS_COEFF_B_MT8195,
+ 	.gt_calib_bit_offset = 20,
+@@ -1765,7 +1793,11 @@ static const struct lvts_data mt8188_lvts_mcu_data = {
+ 
+ static const struct lvts_data mt8188_lvts_ap_data = {
+ 	.lvts_ctrl	= mt8188_lvts_ap_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8188_lvts_ap_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT8195,
+ 	.temp_offset	= LVTS_COEFF_B_MT8195,
+ 	.gt_calib_bit_offset = 20,
+@@ -1774,7 +1806,11 @@ static const struct lvts_data mt8188_lvts_ap_data = {
+ 
+ static const struct lvts_data mt8192_lvts_mcu_data = {
+ 	.lvts_ctrl	= mt8192_lvts_mcu_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8192_lvts_mcu_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT8195,
+ 	.temp_offset	= LVTS_COEFF_B_MT8195,
+ 	.gt_calib_bit_offset = 24,
+@@ -1783,7 +1819,11 @@ static const struct lvts_data mt8192_lvts_mcu_data = {
+ 
+ static const struct lvts_data mt8192_lvts_ap_data = {
+ 	.lvts_ctrl	= mt8192_lvts_ap_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8192_lvts_ap_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT8195,
+ 	.temp_offset	= LVTS_COEFF_B_MT8195,
+ 	.gt_calib_bit_offset = 24,
+@@ -1792,7 +1832,11 @@ static const struct lvts_data mt8192_lvts_ap_data = {
+ 
+ static const struct lvts_data mt8195_lvts_mcu_data = {
+ 	.lvts_ctrl	= mt8195_lvts_mcu_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8195_lvts_mcu_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT8195,
+ 	.temp_offset	= LVTS_COEFF_B_MT8195,
+ 	.gt_calib_bit_offset = 24,
+@@ -1801,7 +1845,11 @@ static const struct lvts_data mt8195_lvts_mcu_data = {
+ 
+ static const struct lvts_data mt8195_lvts_ap_data = {
+ 	.lvts_ctrl	= mt8195_lvts_ap_data_ctrl,
++	.conn_cmd	= default_conn_cmds,
++	.init_cmd	= default_init_cmds,
+ 	.num_lvts_ctrl	= ARRAY_SIZE(mt8195_lvts_ap_data_ctrl),
++	.num_conn_cmd	= ARRAY_SIZE(default_conn_cmds),
++	.num_init_cmd	= ARRAY_SIZE(default_init_cmds),
+ 	.temp_factor	= LVTS_COEFF_A_MT8195,
+ 	.temp_offset	= LVTS_COEFF_B_MT8195,
+ 	.gt_calib_bit_offset = 24,
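
The LVTS hunks above replace the hard-coded connect/init command arrays with per-SoC const tables referenced from struct lvts_data, so MT7988 can carry its own sequences while the other SoCs share the defaults. A minimal sketch of that per-variant match-data pattern, using hypothetical names rather than the driver's own:

	#include <linux/kernel.h>	/* ARRAY_SIZE() */
	#include <linux/types.h>	/* u32 */

	struct variant_data {
		const u32 *init_cmd;	/* per-SoC command sequence */
		int num_init_cmd;
	};

	static const u32 variant_a_init_cmds[] = { 0xC1030E01, 0xC1030CFC };

	static const struct variant_data variant_a = {
		.init_cmd	= variant_a_init_cmds,
		.num_init_cmd	= ARRAY_SIZE(variant_a_init_cmds),
	};

	/* The init path consumes whichever table the match data supplies. */
	static void variant_init(const struct variant_data *data,
				 void (*write_cmd)(u32 cmd))
	{
		int i;

		for (i = 0; i < data->num_init_cmd; i++)
			write_cmd(data->init_cmd[i]);
	}
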
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 7cbfc7d718b3fd..53551aafd47027 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -96,6 +96,7 @@ struct vhost_net_ubuf_ref {
+ 	atomic_t refcount;
+ 	wait_queue_head_t wait;
+ 	struct vhost_virtqueue *vq;
++	struct rcu_head rcu;
+ };
+ 
+ #define VHOST_NET_BATCH 64
+@@ -247,9 +248,13 @@ vhost_net_ubuf_alloc(struct vhost_virtqueue *vq, bool zcopy)
+ 
+ static int vhost_net_ubuf_put(struct vhost_net_ubuf_ref *ubufs)
+ {
+-	int r = atomic_sub_return(1, &ubufs->refcount);
++	int r;
++
++	rcu_read_lock();
++	r = atomic_sub_return(1, &ubufs->refcount);
+ 	if (unlikely(!r))
+ 		wake_up(&ubufs->wait);
++	rcu_read_unlock();
+ 	return r;
+ }
+ 
+@@ -262,7 +267,7 @@ static void vhost_net_ubuf_put_and_wait(struct vhost_net_ubuf_ref *ubufs)
+ static void vhost_net_ubuf_put_wait_and_free(struct vhost_net_ubuf_ref *ubufs)
+ {
+ 	vhost_net_ubuf_put_and_wait(ubufs);
+-	kfree(ubufs);
++	kfree_rcu(ubufs, rcu);
+ }
+ 
+ static void vhost_net_clear_ubuf_info(struct vhost_net *n)
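
The vhost/net.c hunks close a race between the final put and a concurrent waiter: the decrement-and-wake now runs inside rcu_read_lock(), and the object is released with kfree_rcu(), so the wait queue cannot be freed under a CPU still executing wake_up(). A sketch of the pattern with a simplified object, not the actual vhost structures:

	#include <linux/atomic.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/wait.h>

	struct obj {
		atomic_t refcount;
		wait_queue_head_t wait;
		struct rcu_head rcu;
	};

	static void obj_put(struct obj *o)
	{
		/* The read-side section keeps 'o' alive across wake_up():
		 * kfree_rcu() below cannot fire until we unlock.
		 */
		rcu_read_lock();
		if (atomic_dec_and_test(&o->refcount))
			wake_up(&o->wait);
		rcu_read_unlock();
	}

	static void obj_put_wait_and_free(struct obj *o)
	{
		obj_put(o);
		wait_event(o->wait, !atomic_read(&o->refcount));
		kfree_rcu(o, rcu);	/* defers kfree() past RCU readers */
	}
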
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index 284d6dbba2ece8..5c0d45cccc10e0 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -152,6 +152,10 @@ static int efivarfs_d_compare(const struct dentry *dentry,
+ {
+ 	int guid = len - EFI_VARIABLE_GUID_LEN;
+ 
++	/* Parallel lookups may produce a temporarily invalid filename */
++	if (guid <= 0)
++		return 1;
++
+ 	if (name->len != len)
+ 		return 1;
+ 
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 799fef437aa8e7..cad87e4d669432 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -174,6 +174,11 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
+ 		if (!erofs_is_fileio_mode(sbi)) {
+ 			dif->dax_dev = fs_dax_get_by_bdev(file_bdev(file),
+ 					&dif->dax_part_off, NULL, NULL);
++			if (!dif->dax_dev && test_opt(&sbi->opt, DAX_ALWAYS)) {
++				erofs_info(sb, "DAX unsupported by %s. Turning off DAX.",
++					   dif->path);
++				clear_opt(&sbi->opt, DAX_ALWAYS);
++			}
+ 		} else if (!S_ISREG(file_inode(file)->i_mode)) {
+ 			fput(file);
+ 			return -EINVAL;
+@@ -210,8 +215,13 @@ static int erofs_scan_devices(struct super_block *sb,
+ 			  ondisk_extradevs, sbi->devs->extra_devices);
+ 		return -EINVAL;
+ 	}
+-	if (!ondisk_extradevs)
++	if (!ondisk_extradevs) {
++		if (test_opt(&sbi->opt, DAX_ALWAYS) && !sbi->dif0.dax_dev) {
++			erofs_info(sb, "DAX unsupported by block device. Turning off DAX.");
++			clear_opt(&sbi->opt, DAX_ALWAYS);
++		}
+ 		return 0;
++	}
+ 
+ 	if (!sbi->devs->extra_devices && !erofs_is_fscache_mode(sb))
+ 		sbi->devs->flatdev = true;
+@@ -330,7 +340,6 @@ static int erofs_read_superblock(struct super_block *sb)
+ 	if (ret < 0)
+ 		goto out;
+ 
+-	/* handle multiple devices */
+ 	ret = erofs_scan_devices(sb, dsb);
+ 
+ 	if (erofs_sb_has_48bit(sbi))
+@@ -661,14 +670,9 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ 			return invalfc(fc, "cannot use fsoffset in fscache mode");
+ 	}
+ 
+-	if (test_opt(&sbi->opt, DAX_ALWAYS)) {
+-		if (!sbi->dif0.dax_dev) {
+-			errorfc(fc, "DAX unsupported by block device. Turning off DAX.");
+-			clear_opt(&sbi->opt, DAX_ALWAYS);
+-		} else if (sbi->blkszbits != PAGE_SHIFT) {
+-			errorfc(fc, "unsupported blocksize for DAX");
+-			clear_opt(&sbi->opt, DAX_ALWAYS);
+-		}
++	if (test_opt(&sbi->opt, DAX_ALWAYS) && sbi->blkszbits != PAGE_SHIFT) {
++		erofs_info(sb, "unsupported blocksize for DAX");
++		clear_opt(&sbi->opt, DAX_ALWAYS);
+ 	}
+ 
+ 	sb->s_time_gran = 1;
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index e3f28a1bb9452e..9bb53f00c2c629 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1430,6 +1430,16 @@ static void z_erofs_decompressqueue_kthread_work(struct kthread_work *work)
+ }
+ #endif
+ 
++/* Use (kthread_)work in atomic contexts to minimize scheduling overhead */
++static inline bool z_erofs_in_atomic(void)
++{
++	if (IS_ENABLED(CONFIG_PREEMPTION) && rcu_preempt_depth())
++		return true;
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return true;
++	return !preemptible();
++}
++
+ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
+ 				       int bios)
+ {
+@@ -1444,8 +1454,7 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
+ 
+ 	if (atomic_add_return(bios, &io->pending_bios))
+ 		return;
+-	/* Use (kthread_)work and sync decompression for atomic contexts only */
+-	if (!in_task() || irqs_disabled() || rcu_read_lock_any_held()) {
++	if (z_erofs_in_atomic()) {
+ #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
+ 		struct kthread_worker *worker;
+ 
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 697badd0445a0d..0375250ccb2b0e 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -1358,6 +1358,20 @@ static loff_t cifs_remap_file_range(struct file *src_file, loff_t off,
+ 			truncate_setsize(target_inode, new_size);
+ 			fscache_resize_cookie(cifs_inode_cookie(target_inode),
+ 					      new_size);
++		} else if (rc == -EOPNOTSUPP) {
++			/*
++			 * copy_file_range syscall man page indicates EINVAL
++			 * is returned e.g. when "fd_in and fd_out refer to the
++			 * same file and the source and target ranges overlap."
++			 * Test generic/157 showed the cases where we need to
++			 * remap EOPNOTSUPP to EINVAL.
++			 */
++			if (off >= src_inode->i_size) {
++				rc = -EINVAL;
++			} else if (src_inode == target_inode) {
++				if (off + len > destoff)
++					rc = -EINVAL;
++			}
+ 		}
+ 		if (rc == 0 && new_size > target_cifsi->netfs.zero_point)
+ 			target_cifsi->netfs.zero_point = new_size;
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 75be4b46bc6f18..fe453a4b3dc831 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1943,15 +1943,24 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ 	struct tcon_link *tlink;
+ 	struct cifs_tcon *tcon;
++	__u32 dosattr = 0, origattr = 0;
+ 	struct TCP_Server_Info *server;
+ 	struct iattr *attrs = NULL;
+-	__u32 dosattr = 0, origattr = 0;
++	bool rehash = false;
+ 
+ 	cifs_dbg(FYI, "cifs_unlink, dir=0x%p, dentry=0x%p\n", dir, dentry);
+ 
+ 	if (unlikely(cifs_forced_shutdown(cifs_sb)))
+ 		return -EIO;
+ 
++	/* Unhash dentry in advance to prevent any concurrent opens */
++	spin_lock(&dentry->d_lock);
++	if (!d_unhashed(dentry)) {
++		__d_drop(dentry);
++		rehash = true;
++	}
++	spin_unlock(&dentry->d_lock);
++
+ 	tlink = cifs_sb_tlink(cifs_sb);
+ 	if (IS_ERR(tlink))
+ 		return PTR_ERR(tlink);
+@@ -2003,7 +2012,8 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 			cifs_drop_nlink(inode);
+ 		}
+ 	} else if (rc == -ENOENT) {
+-		d_drop(dentry);
++		if (simple_positive(dentry))
++			d_delete(dentry);
+ 	} else if (rc == -EBUSY) {
+ 		if (server->ops->rename_pending_delete) {
+ 			rc = server->ops->rename_pending_delete(full_path,
+@@ -2056,6 +2066,8 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 	kfree(attrs);
+ 	free_xid(xid);
+ 	cifs_put_tlink(tlink);
++	if (rehash)
++		d_rehash(dentry);
+ 	return rc;
+ }
+ 
+@@ -2462,6 +2474,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct tcon_link *tlink;
+ 	struct cifs_tcon *tcon;
++	bool rehash = false;
+ 	unsigned int xid;
+ 	int rc, tmprc;
+ 	int retry_count = 0;
+@@ -2477,6 +2490,17 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 	if (unlikely(cifs_forced_shutdown(cifs_sb)))
+ 		return -EIO;
+ 
++	/*
++	 * Prevent any concurrent opens on the target by unhashing the dentry.
++	 * VFS already unhashes the target when renaming directories.
++	 */
++	if (d_is_positive(target_dentry) && !d_is_dir(target_dentry)) {
++		if (!d_unhashed(target_dentry)) {
++			d_drop(target_dentry);
++			rehash = true;
++		}
++	}
++
+ 	tlink = cifs_sb_tlink(cifs_sb);
+ 	if (IS_ERR(tlink))
+ 		return PTR_ERR(tlink);
+@@ -2518,6 +2542,8 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 		}
+ 	}
+ 
++	if (!rc)
++		rehash = false;
+ 	/*
+ 	 * No-replace is the natural behavior for CIFS, so skip unlink hacks.
+ 	 */
+@@ -2576,12 +2602,16 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 			goto cifs_rename_exit;
+ 		rc = cifs_do_rename(xid, source_dentry, from_name,
+ 				    target_dentry, to_name);
++		if (!rc)
++			rehash = false;
+ 	}
+ 
+ 	/* force revalidate to go get info when needed */
+ 	CIFS_I(source_dir)->time = CIFS_I(target_dir)->time = 0;
+ 
+ cifs_rename_exit:
++	if (rehash)
++		d_rehash(target_dentry);
+ 	kfree(info_buf_source);
+ 	free_dentry_path(page2);
+ 	free_dentry_path(page1);
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index a11a2a693c5194..8b271bbe41c471 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -207,8 +207,10 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	server = cifs_pick_channel(ses);
+ 
+ 	vars = kzalloc(sizeof(*vars), GFP_ATOMIC);
+-	if (vars == NULL)
+-		return -ENOMEM;
++	if (vars == NULL) {
++		rc = -ENOMEM;
++		goto out;
++	}
+ 	rqst = &vars->rqst[0];
+ 	rsp_iov = &vars->rsp_iov[0];
+ 
+@@ -864,6 +866,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	    smb2_should_replay(tcon, &retries, &cur_sleep))
+ 		goto replay_again;
+ 
++out:
+ 	if (cfile)
+ 		cifsFileInfo_put(cfile);
+ 
+diff --git a/fs/xfs/libxfs/xfs_attr_remote.c b/fs/xfs/libxfs/xfs_attr_remote.c
+index 4c44ce1c8a644b..bff3dc226f8128 100644
+--- a/fs/xfs/libxfs/xfs_attr_remote.c
++++ b/fs/xfs/libxfs/xfs_attr_remote.c
+@@ -435,6 +435,13 @@ xfs_attr_rmtval_get(
+ 					0, &bp, &xfs_attr3_rmt_buf_ops);
+ 			if (xfs_metadata_is_sick(error))
+ 				xfs_dirattr_mark_sick(args->dp, XFS_ATTR_FORK);
++			/*
++			 * ENODATA from disk implies a disk medium failure;
++			 * ENODATA for xattrs means attribute not found, so
++			 * disambiguate that here.
++			 */
++			if (error == -ENODATA)
++				error = -EIO;
+ 			if (error)
+ 				return error;
+ 
+diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c
+index 17d9e6154f1978..723a0643b8386c 100644
+--- a/fs/xfs/libxfs/xfs_da_btree.c
++++ b/fs/xfs/libxfs/xfs_da_btree.c
+@@ -2833,6 +2833,12 @@ xfs_da_read_buf(
+ 			&bp, ops);
+ 	if (xfs_metadata_is_sick(error))
+ 		xfs_dirattr_mark_sick(dp, whichfork);
++	/*
++	 * ENODATA from disk implies a disk medium failure; ENODATA for
++	 * xattrs means attribute not found, so disambiguate that here.
++	 */
++	if (error == -ENODATA && whichfork == XFS_ATTR_FORK)
++		error = -EIO;
+ 	if (error)
+ 		goto out_free;
+ 
+diff --git a/include/linux/atmdev.h b/include/linux/atmdev.h
+index 45f2f278b50a8a..70807c679f1abc 100644
+--- a/include/linux/atmdev.h
++++ b/include/linux/atmdev.h
+@@ -185,6 +185,7 @@ struct atmdev_ops { /* only send is required */
+ 	int (*compat_ioctl)(struct atm_dev *dev,unsigned int cmd,
+ 			    void __user *arg);
+ #endif
++	int (*pre_send)(struct atm_vcc *vcc, struct sk_buff *skb);
+ 	int (*send)(struct atm_vcc *vcc,struct sk_buff *skb);
+ 	int (*send_bh)(struct atm_vcc *vcc, struct sk_buff *skb);
+ 	int (*send_oam)(struct atm_vcc *vcc,void *cell,int flags);
+diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
+index f48e5fb88bd5dd..332b80c42b6f32 100644
+--- a/include/linux/dma-map-ops.h
++++ b/include/linux/dma-map-ops.h
+@@ -153,6 +153,9 @@ static inline void dma_free_contiguous(struct device *dev, struct page *page,
+ {
+ 	__free_pages(page, get_order(size));
+ }
++static inline void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
++{
++}
+ #endif /* CONFIG_DMA_CMA*/
+ 
+ #ifdef CONFIG_DMA_DECLARE_COHERENT
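
The empty inline added above is the usual config-stub idiom: with CONFIG_DMA_CMA disabled, callers such as the of/reserved-mem hunk earlier in this patch compile to nothing instead of needing #ifdef guards at every call site. The general shape, with a hypothetical feature as a stand-in:

	/* In a header; CONFIG_FEATURE_X is an illustrative config symbol. */
	#ifdef CONFIG_FEATURE_X
	void feature_x_early_fixup(phys_addr_t base, unsigned long size);
	#else
	static inline void feature_x_early_fixup(phys_addr_t base,
						 unsigned long size)
	{
		/* no-op when the feature is compiled out */
	}
	#endif
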
+diff --git a/include/linux/firmware/qcom/qcom_scm.h b/include/linux/firmware/qcom/qcom_scm.h
+index 983e1591bbba75..0f667bf1d4d9d8 100644
+--- a/include/linux/firmware/qcom/qcom_scm.h
++++ b/include/linux/firmware/qcom/qcom_scm.h
+@@ -148,11 +148,10 @@ bool qcom_scm_lmh_dcvsh_available(void);
+ 
+ int qcom_scm_gpu_init_regs(u32 gpu_req);
+ 
+-int qcom_scm_shm_bridge_enable(void);
+-int qcom_scm_shm_bridge_create(struct device *dev, u64 pfn_and_ns_perm_flags,
++int qcom_scm_shm_bridge_create(u64 pfn_and_ns_perm_flags,
+ 			       u64 ipfn_and_s_perm_flags, u64 size_and_flags,
+ 			       u64 ns_vmids, u64 *handle);
+-int qcom_scm_shm_bridge_delete(struct device *dev, u64 handle);
++int qcom_scm_shm_bridge_delete(u64 handle);
+ 
+ #ifdef CONFIG_QCOM_QSEECOM
+ 
+diff --git a/include/linux/platform_data/x86/int3472.h b/include/linux/platform_data/x86/int3472.h
+index 78276a11c48d6c..1571e9157fa50b 100644
+--- a/include/linux/platform_data/x86/int3472.h
++++ b/include/linux/platform_data/x86/int3472.h
+@@ -27,6 +27,7 @@
+ #define INT3472_GPIO_TYPE_CLK_ENABLE				0x0c
+ #define INT3472_GPIO_TYPE_PRIVACY_LED				0x0d
+ #define INT3472_GPIO_TYPE_HANDSHAKE				0x12
++#define INT3472_GPIO_TYPE_HOTPLUG_DETECT			0x13
+ 
+ #define INT3472_PDEV_MAX_NAME_LEN				23
+ #define INT3472_MAX_SENSOR_GPIOS				3
+diff --git a/include/linux/soc/marvell/silicons.h b/include/linux/soc/marvell/silicons.h
+new file mode 100644
+index 00000000000000..66bb9bfaf17d50
+--- /dev/null
++++ b/include/linux/soc/marvell/silicons.h
+@@ -0,0 +1,25 @@
++/* SPDX-License-Identifier: GPL-2.0-only
++ * Copyright (C) 2024 Marvell.
++ */
++
++#ifndef __SOC_SILICON_H
++#define __SOC_SILICON_H
++
++#include <linux/types.h>
++#include <linux/pci.h>
++
++#if defined(CONFIG_ARM64)
++
++#define CN20K_CHIPID	0x20
++/*
++ * Silicon check for CN20K family
++ */
++static inline bool is_cn20k(struct pci_dev *pdev)
++{
++	return (pdev->subsystem_device & 0xFF) == CN20K_CHIPID;
++}
++#else
++#define is_cn20k(pdev)		((void)(pdev), 0)
++#endif
++
++#endif /* __SOC_SILICON_H */
+diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
+index b3e1d30c765bc7..169c7d367facb3 100644
+--- a/include/linux/virtio_config.h
++++ b/include/linux/virtio_config.h
+@@ -329,8 +329,6 @@ static inline
+ bool virtio_get_shm_region(struct virtio_device *vdev,
+ 			   struct virtio_shm_region *region, u8 id)
+ {
+-	if (!region->len)
+-		return false;
+ 	if (!vdev->config->get_shm_region)
+ 		return false;
+ 	return vdev->config->get_shm_region(vdev, region, id);
+diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h
+index 5224f57f6af2c4..e352a4e0ef8d76 100644
+--- a/include/net/bluetooth/hci_sync.h
++++ b/include/net/bluetooth/hci_sync.h
+@@ -93,7 +93,7 @@ int hci_update_class_sync(struct hci_dev *hdev);
+ 
+ int hci_update_eir_sync(struct hci_dev *hdev);
+ int hci_update_class_sync(struct hci_dev *hdev);
+-int hci_update_name_sync(struct hci_dev *hdev);
++int hci_update_name_sync(struct hci_dev *hdev, const u8 *name);
+ int hci_write_ssp_mode_sync(struct hci_dev *hdev, u8 mode);
+ 
+ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+diff --git a/include/net/rose.h b/include/net/rose.h
+index 23267b4efcfa32..2b5491bbf39ab5 100644
+--- a/include/net/rose.h
++++ b/include/net/rose.h
+@@ -8,6 +8,7 @@
+ #ifndef _ROSE_H
+ #define _ROSE_H 
+ 
++#include <linux/refcount.h>
+ #include <linux/rose.h>
+ #include <net/ax25.h>
+ #include <net/sock.h>
+@@ -96,7 +97,7 @@ struct rose_neigh {
+ 	ax25_cb			*ax25;
+ 	struct net_device		*dev;
+ 	unsigned short		count;
+-	unsigned short		use;
++	refcount_t		use;
+ 	unsigned int		number;
+ 	char			restarted;
+ 	char			dce_mode;
+@@ -151,6 +152,21 @@ struct rose_sock {
+ 
+ #define rose_sk(sk) ((struct rose_sock *)(sk))
+ 
++static inline void rose_neigh_hold(struct rose_neigh *rose_neigh)
++{
++	refcount_inc(&rose_neigh->use);
++}
++
++static inline void rose_neigh_put(struct rose_neigh *rose_neigh)
++{
++	if (refcount_dec_and_test(&rose_neigh->use)) {
++		if (rose_neigh->ax25)
++			ax25_cb_put(rose_neigh->ax25);
++		kfree(rose_neigh->digipeat);
++		kfree(rose_neigh);
++	}
++}
++
+ /* af_rose.c */
+ extern ax25_address rose_callsign;
+ extern int  sysctl_rose_restart_request_timeout;
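
The rose.h hunk converts the bare `use` counter into a refcount_t with hold/put helpers, so the final put performs the teardown instead of callers polling for use == 0; refcount_t also saturates on overflow and warns on underflow, unlike a plain integer. The standard conversion, sketched on a hypothetical object:

	#include <linux/refcount.h>
	#include <linux/slab.h>

	struct node {
		refcount_t use;
		/* ... payload ... */
	};

	static struct node *node_alloc(void)
	{
		struct node *n = kzalloc(sizeof(*n), GFP_KERNEL);

		if (n)
			refcount_set(&n->use, 1);	/* owner's reference */
		return n;
	}

	static void node_hold(struct node *n)
	{
		refcount_inc(&n->use);
	}

	static void node_put(struct node *n)
	{
		if (refcount_dec_and_test(&n->use))
			kfree(n);	/* the last put frees */
	}
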
+diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
+index e72f2655459e45..d284a35d8104b5 100644
+--- a/include/uapi/linux/vhost.h
++++ b/include/uapi/linux/vhost.h
+@@ -254,7 +254,7 @@
+  * When fork_owner is set to VHOST_FORK_OWNER_KTHREAD:
+  *   - Vhost will create vhost workers as kernel threads.
+  */
+-#define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x83, __u8)
++#define VHOST_SET_FORK_FROM_OWNER _IOW(VHOST_VIRTIO, 0x84, __u8)
+ 
+ /**
+  * VHOST_GET_FORK_OWNER - Get the current fork_owner flag for the vhost device.
+@@ -262,6 +262,6 @@
+  *
+  * @return: An 8-bit value indicating the current thread mode.
+  */
+-#define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x84, __u8)
++#define VHOST_GET_FORK_FROM_OWNER _IOR(VHOST_VIRTIO, 0x85, __u8)
+ 
+ #endif
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index be91edf34f0137..17dfaa0395c46b 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -357,6 +357,13 @@ static void create_worker_cb(struct callback_head *cb)
+ 	worker = container_of(cb, struct io_worker, create_work);
+ 	wq = worker->wq;
+ 	acct = worker->acct;
++
++	rcu_read_lock();
++	do_create = !io_acct_activate_free_worker(acct);
++	rcu_read_unlock();
++	if (!do_create)
++		goto no_need_create;
++
+ 	raw_spin_lock(&acct->workers_lock);
+ 
+ 	if (acct->nr_workers < acct->max_workers) {
+@@ -367,6 +374,7 @@ static void create_worker_cb(struct callback_head *cb)
+ 	if (do_create) {
+ 		create_io_worker(wq, acct);
+ 	} else {
++no_need_create:
+ 		atomic_dec(&acct->nr_running);
+ 		io_worker_ref_put(wq);
+ 	}
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index f2d2cc319faac5..19a8bde5e1e1c3 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -36,15 +36,19 @@ static bool io_kbuf_inc_commit(struct io_buffer_list *bl, int len)
+ {
+ 	while (len) {
+ 		struct io_uring_buf *buf;
+-		u32 this_len;
++		u32 buf_len, this_len;
+ 
+ 		buf = io_ring_head_to_buf(bl->buf_ring, bl->head, bl->mask);
+-		this_len = min_t(int, len, buf->len);
+-		buf->len -= this_len;
+-		if (buf->len) {
++		buf_len = READ_ONCE(buf->len);
++		this_len = min_t(u32, len, buf_len);
++		buf_len -= this_len;
++		/* Stop looping on an invalid buffer length of 0 */
++		if (buf_len || !this_len) {
+ 			buf->addr += this_len;
++			buf->len = buf_len;
+ 			return false;
+ 		}
++		buf->len = 0;
+ 		bl->head++;
+ 		len -= this_len;
+ 	}
+@@ -159,6 +163,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ 	__u16 tail, head = bl->head;
+ 	struct io_uring_buf *buf;
+ 	void __user *ret;
++	u32 buf_len;
+ 
+ 	tail = smp_load_acquire(&br->tail);
+ 	if (unlikely(tail == head))
+@@ -168,8 +173,9 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ 		req->flags |= REQ_F_BL_EMPTY;
+ 
+ 	buf = io_ring_head_to_buf(br, head, bl->mask);
+-	if (*len == 0 || *len > buf->len)
+-		*len = buf->len;
++	buf_len = READ_ONCE(buf->len);
++	if (*len == 0 || *len > buf_len)
++		*len = buf_len;
+ 	req->flags |= REQ_F_BUFFER_RING | REQ_F_BUFFERS_COMMIT;
+ 	req->buf_list = bl;
+ 	req->buf_index = buf->bid;
+@@ -265,7 +271,7 @@ static int io_ring_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg,
+ 
+ 	req->buf_index = buf->bid;
+ 	do {
+-		u32 len = buf->len;
++		u32 len = READ_ONCE(buf->len);
+ 
+ 		/* truncate end piece, if needed, for non partial buffers */
+ 		if (len > arg->max_len) {
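
Both kbuf hunks apply the same discipline to buf->len, which lives in a ring mapped into userspace: read it exactly once with READ_ONCE() into a local, decide everything on that snapshot, and write back only a derived value. A compact sketch of the read-once idiom on shared memory (simplified, not the real io_uring layout):

	#include <linux/compiler.h>	/* READ_ONCE()/WRITE_ONCE() */
	#include <linux/types.h>

	struct shared_buf {
		u32 len;	/* userspace may rewrite this concurrently */
	};

	static u32 take_len(struct shared_buf *shared, u32 want)
	{
		u32 len = READ_ONCE(shared->len);	/* single racy read */

		if (want > len)
			want = len;			/* decide on the copy */
		WRITE_ONCE(shared->len, len - want);	/* publish once */
		return want;
	}
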
+diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
+index 67af8a55185d99..d9b9dcba6ff7cf 100644
+--- a/kernel/dma/contiguous.c
++++ b/kernel/dma/contiguous.c
+@@ -483,8 +483,6 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
+ 		pr_err("Reserved memory: unable to setup CMA region\n");
+ 		return err;
+ 	}
+-	/* Architecture specific contiguous memory fixup. */
+-	dma_contiguous_early_fixup(rmem->base, rmem->size);
+ 
+ 	if (default_cma)
+ 		dma_contiguous_default_area = cma;
+diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
+index 7b04f7575796b8..ee45dee33d4916 100644
+--- a/kernel/dma/pool.c
++++ b/kernel/dma/pool.c
+@@ -102,8 +102,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
+ 
+ #ifdef CONFIG_DMA_DIRECT_REMAP
+ 	addr = dma_common_contiguous_remap(page, pool_size,
+-					   pgprot_dmacoherent(PAGE_KERNEL),
+-					   __builtin_return_address(0));
++			pgprot_decrypted(pgprot_dmacoherent(PAGE_KERNEL)),
++			__builtin_return_address(0));
+ 	if (!addr)
+ 		goto free_page;
+ #else
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 8060c2857bb2b3..872122e074e5fe 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2665,6 +2665,9 @@ static void perf_log_itrace_start(struct perf_event *event);
+ 
+ static void perf_event_unthrottle(struct perf_event *event, bool start)
+ {
++	if (event->state != PERF_EVENT_STATE_ACTIVE)
++		return;
++
+ 	event->hw.interrupts = 0;
+ 	if (start)
+ 		event->pmu->start(event, 0);
+@@ -2674,6 +2677,9 @@ static void perf_event_unthrottle(struct perf_event *event, bool start)
+ 
+ static void perf_event_throttle(struct perf_event *event)
+ {
++	if (event->state != PERF_EVENT_STATE_ACTIVE)
++		return;
++
+ 	event->hw.interrupts = MAX_INTERRUPTS;
+ 	event->pmu->stop(event, 0);
+ 	if (event == event->group_leader)
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index c5b207992fb49e..dac2d58f39490b 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1393,6 +1393,7 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ 		ftrace_graph_active--;
+ 		gops->saved_func = NULL;
+ 		fgraph_lru_release_index(i);
++		unregister_pm_notifier(&ftrace_suspend_notifier);
+ 	}
+ 	return ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 8ea6ada38c40ec..b91fa02cc54a6a 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -10700,10 +10700,10 @@ static void ftrace_dump_one(struct trace_array *tr, enum ftrace_dump_mode dump_m
+ 			ret = print_trace_line(&iter);
+ 			if (ret != TRACE_TYPE_NO_CONSUME)
+ 				trace_consume(&iter);
++
++			trace_printk_seq(&iter.seq);
+ 		}
+ 		touch_nmi_watchdog();
+-
+-		trace_printk_seq(&iter.seq);
+ 	}
+ 
+ 	if (!cnt)
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index 14d74a7491b8cb..97fb32aaebeb9d 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -27,14 +27,21 @@ struct fgraph_cpu_data {
+ 	unsigned long	enter_funcs[FTRACE_RETFUNC_DEPTH];
+ };
+ 
++struct fgraph_ent_args {
++	struct ftrace_graph_ent_entry	ent;
++	/* Size args[] so the struct can hold FTRACE_REGS_MAX_ARGS entries */
++	unsigned long			args[FTRACE_REGS_MAX_ARGS];
++};
++
+ struct fgraph_data {
+ 	struct fgraph_cpu_data __percpu *cpu_data;
+ 
+ 	/* Place to preserve last processed entry. */
+ 	union {
+-		struct ftrace_graph_ent_entry	ent;
++		struct fgraph_ent_args		ent;
++		/* TODO allow retaddr to have args */
+ 		struct fgraph_retaddr_ent_entry	rent;
+-	} ent;
++	};
+ 	struct ftrace_graph_ret_entry	ret;
+ 	int				failed;
+ 	int				cpu;
+@@ -627,10 +634,13 @@ get_return_for_leaf(struct trace_iterator *iter,
+ 			 * Save current and next entries for later reference
+ 			 * if the output fails.
+ 			 */
+-			if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT))
+-				data->ent.rent = *(struct fgraph_retaddr_ent_entry *)curr;
+-			else
+-				data->ent.ent = *curr;
++			if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) {
++				data->rent = *(struct fgraph_retaddr_ent_entry *)curr;
++			} else {
++				int size = min((int)sizeof(data->ent), (int)iter->ent_size);
++
++				memcpy(&data->ent, curr, size);
++			}
+ 			/*
+ 			 * If the next event is not a return type, then
+ 			 * we only care about what type it is. Otherwise we can
+diff --git a/net/atm/common.c b/net/atm/common.c
+index d7f7976ea13ac6..881c7f259dbd46 100644
+--- a/net/atm/common.c
++++ b/net/atm/common.c
+@@ -635,18 +635,27 @@ int vcc_sendmsg(struct socket *sock, struct msghdr *m, size_t size)
+ 
+ 	skb->dev = NULL; /* for paths shared with net_device interfaces */
+ 	if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) {
+-		atm_return_tx(vcc, skb);
+-		kfree_skb(skb);
+ 		error = -EFAULT;
+-		goto out;
++		goto free_skb;
+ 	}
+ 	if (eff != size)
+ 		memset(skb->data + size, 0, eff-size);
++
++	if (vcc->dev->ops->pre_send) {
++		error = vcc->dev->ops->pre_send(vcc, skb);
++		if (error)
++			goto free_skb;
++	}
++
+ 	error = vcc->dev->ops->send(vcc, skb);
+ 	error = error ? error : size;
+ out:
+ 	release_sock(sk);
+ 	return error;
++free_skb:
++	atm_return_tx(vcc, skb);
++	kfree_skb(skb);
++	goto out;
+ }
+ 
+ __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait)
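
vcc_sendmsg() now consults an optional ->pre_send() hook before handing the skb to ->send(), so a driver can veto the frame while the socket layer still owns cleanup (the new free_skb label). The optional-hook shape, with hypothetical ops:

	#include <linux/skbuff.h>

	struct xmit_ops {
		int (*pre_send)(struct sk_buff *skb);	/* optional veto */
		int (*send)(struct sk_buff *skb);	/* required */
	};

	static int xmit(const struct xmit_ops *ops, struct sk_buff *skb)
	{
		int err;

		if (ops->pre_send) {
			err = ops->pre_send(skb);
			if (err) {
				kfree_skb(skb);	/* core cleans up on veto */
				return err;
			}
		}
		return ops->send(skb);	/* driver owns the skb from here */
	}
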
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 6a064a6b0e4319..ad5574e9a93ee9 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -149,8 +149,6 @@ static void hci_conn_cleanup(struct hci_conn *conn)
+ 
+ 	hci_chan_list_flush(conn);
+ 
+-	hci_conn_hash_del(hdev, conn);
+-
+ 	if (HCI_CONN_HANDLE_UNSET(conn->handle))
+ 		ida_free(&hdev->unset_handle_ida, conn->handle);
+ 
+@@ -1142,28 +1140,54 @@ void hci_conn_del(struct hci_conn *conn)
+ 	disable_delayed_work_sync(&conn->auto_accept_work);
+ 	disable_delayed_work_sync(&conn->idle_work);
+ 
+-	if (conn->type == ACL_LINK) {
+-		/* Unacked frames */
+-		hdev->acl_cnt += conn->sent;
+-	} else if (conn->type == LE_LINK) {
+-		cancel_delayed_work(&conn->le_conn_timeout);
++	/* Remove the connection from the list so the unacked logic can detect
++	 * when a certain pool is no longer in use.
++	 */
++	hci_conn_hash_del(hdev, conn);
+ 
+-		if (hdev->le_pkts)
+-			hdev->le_cnt += conn->sent;
++	/* Handle unacked frames:
++	 *
++	 * - In case there are no connections, or if restoring the buffers
++	 *   considered in transit would overflow, restore all buffers to the
++	 *   pool.
++	 * - Otherwise restore just the buffers considered in transit for the
++	 *   hci_conn.
++	 */
++	switch (conn->type) {
++	case ACL_LINK:
++		if (!hci_conn_num(hdev, ACL_LINK) ||
++		    hdev->acl_cnt + conn->sent > hdev->acl_pkts)
++			hdev->acl_cnt = hdev->acl_pkts;
+ 		else
+ 			hdev->acl_cnt += conn->sent;
+-	} else {
+-		/* Unacked ISO frames */
+-		if (conn->type == CIS_LINK ||
+-		    conn->type == BIS_LINK ||
+-		    conn->type == PA_LINK) {
+-			if (hdev->iso_pkts)
+-				hdev->iso_cnt += conn->sent;
+-			else if (hdev->le_pkts)
++		break;
++	case LE_LINK:
++		cancel_delayed_work(&conn->le_conn_timeout);
++
++		if (hdev->le_pkts) {
++			if (!hci_conn_num(hdev, LE_LINK) ||
++			    hdev->le_cnt + conn->sent > hdev->le_pkts)
++				hdev->le_cnt = hdev->le_pkts;
++			else
+ 				hdev->le_cnt += conn->sent;
++		} else {
++			if ((!hci_conn_num(hdev, LE_LINK) &&
++			     !hci_conn_num(hdev, ACL_LINK)) ||
++			    hdev->acl_cnt + conn->sent > hdev->acl_pkts)
++				hdev->acl_cnt = hdev->acl_pkts;
+ 			else
+ 				hdev->acl_cnt += conn->sent;
+ 		}
++		break;
++	case CIS_LINK:
++	case BIS_LINK:
++	case PA_LINK:
++		if (!hci_iso_count(hdev) ||
++		    hdev->iso_cnt + conn->sent > hdev->iso_pkts)
++			hdev->iso_cnt = hdev->iso_pkts;
++		else
++			hdev->iso_cnt += conn->sent;
++		break;
+ 	}
+ 
+ 	skb_queue_purge(&conn->data_q);
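
The rewritten accounting in hci_conn_del() clamps every restore: when no connection of that type remains, or adding conn->sent would exceed the controller's advertised pool, the counter snaps to the full pool size rather than drifting past it. The clamp in isolation, with hypothetical names:

	/* Return in-flight credits to a pool without overshooting its size. */
	static void restore_credits(unsigned int *cnt, unsigned int sent,
				    unsigned int pool_size, bool pool_idle)
	{
		if (pool_idle || *cnt + sent > pool_size)
			*cnt = pool_size;	/* nothing else in flight */
		else
			*cnt += sent;
	}
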
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 5ef54853bc5eb1..0ffdbe249f5d3d 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -2703,7 +2703,7 @@ static void hci_cs_disconnect(struct hci_dev *hdev, u8 status)
+ 	if (!conn)
+ 		goto unlock;
+ 
+-	if (status) {
++	if (status && status != HCI_ERROR_UNKNOWN_CONN_ID) {
+ 		mgmt_disconnect_failed(hdev, &conn->dst, conn->type,
+ 				       conn->dst_type, status);
+ 
+@@ -2718,6 +2718,12 @@ static void hci_cs_disconnect(struct hci_dev *hdev, u8 status)
+ 		goto done;
+ 	}
+ 
++	/* During suspend, mark the connection as closed immediately
++	 * since we might not receive HCI_EV_DISCONN_COMPLETE.
++	 */
++	if (hdev->suspended)
++		conn->state = BT_CLOSED;
++
+ 	mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags);
+ 
+ 	if (conn->type == ACL_LINK) {
+@@ -4398,7 +4404,17 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 		if (!conn)
+ 			continue;
+ 
+-		conn->sent -= count;
++		/* Check if there is really enough packets outstanding before
++		/* Check if there are really enough packets outstanding before
++		 * attempting to decrease the sent counter, otherwise it could
++		 * underflow.
++		if (conn->sent >= count) {
++			conn->sent -= count;
++		} else {
++			bt_dev_warn(hdev, "hcon %p sent %u < count %u",
++				    conn, conn->sent, count);
++			conn->sent = 0;
++		}
+ 
+ 		for (i = 0; i < count; ++i)
+ 			hci_conn_tx_dequeue(conn);
+@@ -7003,6 +7019,7 @@ static void hci_le_big_sync_lost_evt(struct hci_dev *hdev, void *data,
+ {
+ 	struct hci_evt_le_big_sync_lost *ev = data;
+ 	struct hci_conn *bis, *conn;
++	bool mgmt_conn;
+ 
+ 	bt_dev_dbg(hdev, "big handle 0x%2.2x", ev->handle);
+ 
+@@ -7021,6 +7038,10 @@ static void hci_le_big_sync_lost_evt(struct hci_dev *hdev, void *data,
+ 	while ((bis = hci_conn_hash_lookup_big_state(hdev, ev->handle,
+ 						     BT_CONNECTED,
+ 						     HCI_ROLE_SLAVE))) {
++		mgmt_conn = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &bis->flags);
++		mgmt_device_disconnected(hdev, &bis->dst, bis->type, bis->dst_type,
++					 ev->reason, mgmt_conn);
++
+ 		clear_bit(HCI_CONN_BIG_SYNC, &bis->flags);
+ 		hci_disconn_cfm(bis, ev->reason);
+ 		hci_conn_del(bis);
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 115dc1cd99ce40..749bba1512eb12 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -3481,13 +3481,13 @@ int hci_update_scan_sync(struct hci_dev *hdev)
+ 	return hci_write_scan_enable_sync(hdev, scan);
+ }
+ 
+-int hci_update_name_sync(struct hci_dev *hdev)
++int hci_update_name_sync(struct hci_dev *hdev, const u8 *name)
+ {
+ 	struct hci_cp_write_local_name cp;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 
+-	memcpy(cp.name, hdev->dev_name, sizeof(cp.name));
++	memcpy(cp.name, name, sizeof(cp.name));
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_WRITE_LOCAL_NAME,
+ 					    sizeof(cp), &cp,
+@@ -3540,7 +3540,7 @@ int hci_powered_update_sync(struct hci_dev *hdev)
+ 			hci_write_fast_connectable_sync(hdev, false);
+ 		hci_update_scan_sync(hdev);
+ 		hci_update_class_sync(hdev);
+-		hci_update_name_sync(hdev);
++		hci_update_name_sync(hdev, hdev->dev_name);
+ 		hci_update_eir_sync(hdev);
+ 	}
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 3166f5fb876b11..50634ef5c8b707 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -3892,8 +3892,11 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int set_name_sync(struct hci_dev *hdev, void *data)
+ {
++	struct mgmt_pending_cmd *cmd = data;
++	struct mgmt_cp_set_local_name *cp = cmd->param;
++
+ 	if (lmp_bredr_capable(hdev)) {
+-		hci_update_name_sync(hdev);
++		hci_update_name_sync(hdev, cp->name);
+ 		hci_update_eir_sync(hdev);
+ 	}
+ 
+@@ -9705,7 +9708,9 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	if (!mgmt_connected)
+ 		return;
+ 
+-	if (link_type != ACL_LINK && link_type != LE_LINK)
++	if (link_type != ACL_LINK &&
++	    link_type != LE_LINK  &&
++	    link_type != BIS_LINK)
+ 		return;
+ 
+ 	bacpy(&ev.addr.bdaddr, bdaddr);
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 368412baad2649..e14d743554ec1c 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -287,8 +287,10 @@ static int page_pool_init(struct page_pool *pool,
+ 	}
+ 
+ 	if (pool->mp_ops) {
+-		if (!pool->dma_map || !pool->dma_sync)
+-			return -EOPNOTSUPP;
++		if (!pool->dma_map || !pool->dma_sync) {
++			err = -EOPNOTSUPP;
++			goto free_ptr_ring;
++		}
+ 
+ 		if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
+ 			err = -EFAULT;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 7b8e80c4f1d98c..bf82434c3541b0 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2573,12 +2573,16 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
+ 		    !netif_is_l3_master(dev_out))
+ 			return ERR_PTR(-EINVAL);
+ 
+-	if (ipv4_is_lbcast(fl4->daddr))
++	if (ipv4_is_lbcast(fl4->daddr)) {
+ 		type = RTN_BROADCAST;
+-	else if (ipv4_is_multicast(fl4->daddr))
++
++		/* reset fi to prevent gateway resolution */
++		fi = NULL;
++	} else if (ipv4_is_multicast(fl4->daddr)) {
+ 		type = RTN_MULTICAST;
+-	else if (ipv4_is_zeronet(fl4->daddr))
++	} else if (ipv4_is_zeronet(fl4->daddr)) {
+ 		return ERR_PTR(-EINVAL);
++	}
+ 
+ 	if (dev_out->flags & IFF_LOOPBACK)
+ 		flags |= RTCF_LOCAL;
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index fc5c2fd8f34c7e..5e12e7ce17d8a7 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -129,22 +129,12 @@ static const struct ppp_channel_ops pppol2tp_chan_ops = {
+ 
+ static const struct proto_ops pppol2tp_ops;
+ 
+-/* Retrieves the pppol2tp socket associated to a session.
+- * A reference is held on the returned socket, so this function must be paired
+- * with sock_put().
+- */
++/* Retrieves the pppol2tp socket associated to a session. */
+ static struct sock *pppol2tp_session_get_sock(struct l2tp_session *session)
+ {
+ 	struct pppol2tp_session *ps = l2tp_session_priv(session);
+-	struct sock *sk;
+-
+-	rcu_read_lock();
+-	sk = rcu_dereference(ps->sk);
+-	if (sk)
+-		sock_hold(sk);
+-	rcu_read_unlock();
+ 
+-	return sk;
++	return rcu_dereference(ps->sk);
+ }
+ 
+ /* Helpers to obtain tunnel/session contexts from sockets.
+@@ -206,14 +196,13 @@ static int pppol2tp_recvmsg(struct socket *sock, struct msghdr *msg,
+ 
+ static void pppol2tp_recv(struct l2tp_session *session, struct sk_buff *skb, int data_len)
+ {
+-	struct pppol2tp_session *ps = l2tp_session_priv(session);
+-	struct sock *sk = NULL;
++	struct sock *sk;
+ 
+ 	/* If the socket is bound, send it in to PPP's input queue. Otherwise
+ 	 * queue it on the session socket.
+ 	 */
+ 	rcu_read_lock();
+-	sk = rcu_dereference(ps->sk);
++	sk = pppol2tp_session_get_sock(session);
+ 	if (!sk)
+ 		goto no_sock;
+ 
+@@ -510,13 +499,14 @@ static void pppol2tp_show(struct seq_file *m, void *arg)
+ 	struct l2tp_session *session = arg;
+ 	struct sock *sk;
+ 
++	rcu_read_lock();
+ 	sk = pppol2tp_session_get_sock(session);
+ 	if (sk) {
+ 		struct pppox_sock *po = pppox_sk(sk);
+ 
+ 		seq_printf(m, "   interface %s\n", ppp_dev_name(&po->chan));
+-		sock_put(sk);
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static void pppol2tp_session_init(struct l2tp_session *session)
+@@ -1530,6 +1520,7 @@ static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
+ 		port = ntohs(inet->inet_sport);
+ 	}
+ 
++	rcu_read_lock();
+ 	sk = pppol2tp_session_get_sock(session);
+ 	if (sk) {
+ 		state = sk->sk_state;
+@@ -1565,8 +1556,8 @@ static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
+ 		struct pppox_sock *po = pppox_sk(sk);
+ 
+ 		seq_printf(m, "   interface %s\n", ppp_dev_name(&po->chan));
+-		sock_put(sk);
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static int pppol2tp_seq_show(struct seq_file *m, void *v)
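
pppol2tp_session_get_sock() changes contract in these hunks: it returns a bare rcu_dereference()d pointer without taking a socket reference, so every caller must bracket the use with rcu_read_lock()/rcu_read_unlock(), as the seq-file paths above now do. The contract in miniature, with hypothetical names:

	#include <linux/rcupdate.h>
	#include <linux/seq_file.h>
	#include <net/sock.h>

	struct my_session {
		struct sock __rcu *sk;
	};

	/* Caller must hold rcu_read_lock(); the pointer is valid only
	 * inside that read-side section.
	 */
	static struct sock *session_sk(struct my_session *s)
	{
		return rcu_dereference(s->sk);
	}

	static void session_show(struct seq_file *m, struct my_session *s)
	{
		struct sock *sk;

		rcu_read_lock();
		sk = session_sk(s);
		if (sk)
			seq_printf(m, "   state %u\n", sk->sk_state);
		rcu_read_unlock();
	}
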
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 4e72b636a46a5f..543f9e8ebb6937 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -170,7 +170,7 @@ void rose_kill_by_neigh(struct rose_neigh *neigh)
+ 
+ 		if (rose->neighbour == neigh) {
+ 			rose_disconnect(s, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
+-			rose->neighbour->use--;
++			rose_neigh_put(rose->neighbour);
+ 			rose->neighbour = NULL;
+ 		}
+ 	}
+@@ -212,7 +212,7 @@ static void rose_kill_by_device(struct net_device *dev)
+ 		if (rose->device == dev) {
+ 			rose_disconnect(sk, ENETUNREACH, ROSE_OUT_OF_ORDER, 0);
+ 			if (rose->neighbour)
+-				rose->neighbour->use--;
++				rose_neigh_put(rose->neighbour);
+ 			netdev_put(rose->device, &rose->dev_tracker);
+ 			rose->device = NULL;
+ 		}
+@@ -655,7 +655,7 @@ static int rose_release(struct socket *sock)
+ 		break;
+ 
+ 	case ROSE_STATE_2:
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		release_sock(sk);
+ 		rose_disconnect(sk, 0, -1, -1);
+ 		lock_sock(sk);
+@@ -823,6 +823,7 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ 	rose->lci = rose_new_lci(rose->neighbour);
+ 	if (!rose->lci) {
+ 		err = -ENETUNREACH;
++		rose_neigh_put(rose->neighbour);
+ 		goto out_release;
+ 	}
+ 
+@@ -834,12 +835,14 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ 		dev = rose_dev_first();
+ 		if (!dev) {
+ 			err = -ENETUNREACH;
++			rose_neigh_put(rose->neighbour);
+ 			goto out_release;
+ 		}
+ 
+ 		user = ax25_findbyuid(current_euid());
+ 		if (!user) {
+ 			err = -EINVAL;
++			rose_neigh_put(rose->neighbour);
+ 			dev_put(dev);
+ 			goto out_release;
+ 		}
+@@ -874,8 +877,6 @@ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_le
+ 
+ 	rose->state = ROSE_STATE_1;
+ 
+-	rose->neighbour->use++;
+-
+ 	rose_write_internal(sk, ROSE_CALL_REQUEST);
+ 	rose_start_heartbeat(sk);
+ 	rose_start_t1timer(sk);
+@@ -1077,7 +1078,7 @@ int rose_rx_call_request(struct sk_buff *skb, struct net_device *dev, struct ros
+ 			     GFP_ATOMIC);
+ 	make_rose->facilities    = facilities;
+ 
+-	make_rose->neighbour->use++;
++	rose_neigh_hold(make_rose->neighbour);
+ 
+ 	if (rose_sk(sk)->defer) {
+ 		make_rose->state = ROSE_STATE_5;
+diff --git a/net/rose/rose_in.c b/net/rose/rose_in.c
+index 4d67f36dce1b49..7caae93937ee9b 100644
+--- a/net/rose/rose_in.c
++++ b/net/rose/rose_in.c
+@@ -56,7 +56,7 @@ static int rose_state1_machine(struct sock *sk, struct sk_buff *skb, int framety
+ 	case ROSE_CLEAR_REQUEST:
+ 		rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
+ 		rose_disconnect(sk, ECONNREFUSED, skb->data[3], skb->data[4]);
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		break;
+ 
+ 	default:
+@@ -79,12 +79,12 @@ static int rose_state2_machine(struct sock *sk, struct sk_buff *skb, int framety
+ 	case ROSE_CLEAR_REQUEST:
+ 		rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
+ 		rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		break;
+ 
+ 	case ROSE_CLEAR_CONFIRMATION:
+ 		rose_disconnect(sk, 0, -1, -1);
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		break;
+ 
+ 	default:
+@@ -120,7 +120,7 @@ static int rose_state3_machine(struct sock *sk, struct sk_buff *skb, int framety
+ 	case ROSE_CLEAR_REQUEST:
+ 		rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
+ 		rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		break;
+ 
+ 	case ROSE_RR:
+@@ -233,7 +233,7 @@ static int rose_state4_machine(struct sock *sk, struct sk_buff *skb, int framety
+ 	case ROSE_CLEAR_REQUEST:
+ 		rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
+ 		rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		break;
+ 
+ 	default:
+@@ -253,7 +253,7 @@ static int rose_state5_machine(struct sock *sk, struct sk_buff *skb, int framety
+ 	if (frametype == ROSE_CLEAR_REQUEST) {
+ 		rose_write_internal(sk, ROSE_CLEAR_CONFIRMATION);
+ 		rose_disconnect(sk, 0, skb->data[3], skb->data[4]);
+-		rose_sk(sk)->neighbour->use--;
++		rose_neigh_put(rose_sk(sk)->neighbour);
+ 	}
+ 
+ 	return 0;
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index b72bf8a08d489f..a1e9b05ef6f5f6 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -93,11 +93,11 @@ static int __must_check rose_add_node(struct rose_route_struct *rose_route,
+ 		rose_neigh->ax25      = NULL;
+ 		rose_neigh->dev       = dev;
+ 		rose_neigh->count     = 0;
+-		rose_neigh->use       = 0;
+ 		rose_neigh->dce_mode  = 0;
+ 		rose_neigh->loopback  = 0;
+ 		rose_neigh->number    = rose_neigh_no++;
+ 		rose_neigh->restarted = 0;
++		refcount_set(&rose_neigh->use, 1);
+ 
+ 		skb_queue_head_init(&rose_neigh->queue);
+ 
+@@ -178,6 +178,7 @@ static int __must_check rose_add_node(struct rose_route_struct *rose_route,
+ 			}
+ 		}
+ 		rose_neigh->count++;
++		rose_neigh_hold(rose_neigh);
+ 
+ 		goto out;
+ 	}
+@@ -187,6 +188,7 @@ static int __must_check rose_add_node(struct rose_route_struct *rose_route,
+ 		rose_node->neighbour[rose_node->count] = rose_neigh;
+ 		rose_node->count++;
+ 		rose_neigh->count++;
++		rose_neigh_hold(rose_neigh);
+ 	}
+ 
+ out:
+@@ -234,20 +236,12 @@ static void rose_remove_neigh(struct rose_neigh *rose_neigh)
+ 
+ 	if ((s = rose_neigh_list) == rose_neigh) {
+ 		rose_neigh_list = rose_neigh->next;
+-		if (rose_neigh->ax25)
+-			ax25_cb_put(rose_neigh->ax25);
+-		kfree(rose_neigh->digipeat);
+-		kfree(rose_neigh);
+ 		return;
+ 	}
+ 
+ 	while (s != NULL && s->next != NULL) {
+ 		if (s->next == rose_neigh) {
+ 			s->next = rose_neigh->next;
+-			if (rose_neigh->ax25)
+-				ax25_cb_put(rose_neigh->ax25);
+-			kfree(rose_neigh->digipeat);
+-			kfree(rose_neigh);
+ 			return;
+ 		}
+ 
+@@ -263,10 +257,10 @@ static void rose_remove_route(struct rose_route *rose_route)
+ 	struct rose_route *s;
+ 
+ 	if (rose_route->neigh1 != NULL)
+-		rose_route->neigh1->use--;
++		rose_neigh_put(rose_route->neigh1);
+ 
+ 	if (rose_route->neigh2 != NULL)
+-		rose_route->neigh2->use--;
++		rose_neigh_put(rose_route->neigh2);
+ 
+ 	if ((s = rose_route_list) == rose_route) {
+ 		rose_route_list = rose_route->next;
+@@ -330,9 +324,12 @@ static int rose_del_node(struct rose_route_struct *rose_route,
+ 	for (i = 0; i < rose_node->count; i++) {
+ 		if (rose_node->neighbour[i] == rose_neigh) {
+ 			rose_neigh->count--;
++			rose_neigh_put(rose_neigh);
+ 
+-			if (rose_neigh->count == 0 && rose_neigh->use == 0)
++			if (rose_neigh->count == 0) {
+ 				rose_remove_neigh(rose_neigh);
++				rose_neigh_put(rose_neigh);
++			}
+ 
+ 			rose_node->count--;
+ 
+@@ -381,11 +378,11 @@ void rose_add_loopback_neigh(void)
+ 	sn->ax25      = NULL;
+ 	sn->dev       = NULL;
+ 	sn->count     = 0;
+-	sn->use       = 0;
+ 	sn->dce_mode  = 1;
+ 	sn->loopback  = 1;
+ 	sn->number    = rose_neigh_no++;
+ 	sn->restarted = 1;
++	refcount_set(&sn->use, 1);
+ 
+ 	skb_queue_head_init(&sn->queue);
+ 
+@@ -436,6 +433,7 @@ int rose_add_loopback_node(const rose_address *address)
+ 	rose_node_list  = rose_node;
+ 
+ 	rose_loopback_neigh->count++;
++	rose_neigh_hold(rose_loopback_neigh);
+ 
+ out:
+ 	spin_unlock_bh(&rose_node_list_lock);
+@@ -467,6 +465,7 @@ void rose_del_loopback_node(const rose_address *address)
+ 	rose_remove_node(rose_node);
+ 
+ 	rose_loopback_neigh->count--;
++	rose_neigh_put(rose_loopback_neigh);
+ 
+ out:
+ 	spin_unlock_bh(&rose_node_list_lock);
+@@ -506,6 +505,7 @@ void rose_rt_device_down(struct net_device *dev)
+ 				memmove(&t->neighbour[i], &t->neighbour[i + 1],
+ 					sizeof(t->neighbour[0]) *
+ 						(t->count - i));
++				rose_neigh_put(s);
+ 			}
+ 
+ 			if (t->count <= 0)
+@@ -513,6 +513,7 @@ void rose_rt_device_down(struct net_device *dev)
+ 		}
+ 
+ 		rose_remove_neigh(s);
++		rose_neigh_put(s);
+ 	}
+ 	spin_unlock_bh(&rose_neigh_list_lock);
+ 	spin_unlock_bh(&rose_node_list_lock);
+@@ -548,6 +549,7 @@ static int rose_clear_routes(void)
+ {
+ 	struct rose_neigh *s, *rose_neigh;
+ 	struct rose_node  *t, *rose_node;
++	int i;
+ 
+ 	spin_lock_bh(&rose_node_list_lock);
+ 	spin_lock_bh(&rose_neigh_list_lock);
+@@ -558,17 +560,21 @@ static int rose_clear_routes(void)
+ 	while (rose_node != NULL) {
+ 		t         = rose_node;
+ 		rose_node = rose_node->next;
+-		if (!t->loopback)
++
++		if (!t->loopback) {
++			for (i = 0; i < t->count; i++)
++				rose_neigh_put(t->neighbour[i]);
+ 			rose_remove_node(t);
++		}
+ 	}
+ 
+ 	while (rose_neigh != NULL) {
+ 		s          = rose_neigh;
+ 		rose_neigh = rose_neigh->next;
+ 
+-		if (s->use == 0 && !s->loopback) {
+-			s->count = 0;
++		if (!s->loopback) {
+ 			rose_remove_neigh(s);
++			rose_neigh_put(s);
+ 		}
+ 	}
+ 
+@@ -684,6 +690,7 @@ struct rose_neigh *rose_get_neigh(rose_address *addr, unsigned char *cause,
+ 			for (i = 0; i < node->count; i++) {
+ 				if (node->neighbour[i]->restarted) {
+ 					res = node->neighbour[i];
++					rose_neigh_hold(node->neighbour[i]);
+ 					goto out;
+ 				}
+ 			}
+@@ -695,6 +702,7 @@ struct rose_neigh *rose_get_neigh(rose_address *addr, unsigned char *cause,
+ 				for (i = 0; i < node->count; i++) {
+ 					if (!rose_ftimer_running(node->neighbour[i])) {
+ 						res = node->neighbour[i];
++						rose_neigh_hold(node->neighbour[i]);
+ 						goto out;
+ 					}
+ 					failed = 1;
+@@ -784,13 +792,13 @@ static void rose_del_route_by_neigh(struct rose_neigh *rose_neigh)
+ 		}
+ 
+ 		if (rose_route->neigh1 == rose_neigh) {
+-			rose_route->neigh1->use--;
++			rose_neigh_put(rose_route->neigh1);
+ 			rose_route->neigh1 = NULL;
+ 			rose_transmit_clear_request(rose_route->neigh2, rose_route->lci2, ROSE_OUT_OF_ORDER, 0);
+ 		}
+ 
+ 		if (rose_route->neigh2 == rose_neigh) {
+-			rose_route->neigh2->use--;
++			rose_neigh_put(rose_route->neigh2);
+ 			rose_route->neigh2 = NULL;
+ 			rose_transmit_clear_request(rose_route->neigh1, rose_route->lci1, ROSE_OUT_OF_ORDER, 0);
+ 		}
+@@ -919,7 +927,7 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 			rose_clear_queues(sk);
+ 			rose->cause	 = ROSE_NETWORK_CONGESTION;
+ 			rose->diagnostic = 0;
+-			rose->neighbour->use--;
++			rose_neigh_put(rose->neighbour);
+ 			rose->neighbour	 = NULL;
+ 			rose->lci	 = 0;
+ 			rose->state	 = ROSE_STATE_0;
+@@ -1044,12 +1052,12 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 
+ 	if ((new_lci = rose_new_lci(new_neigh)) == 0) {
+ 		rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 71);
+-		goto out;
++		goto put_neigh;
+ 	}
+ 
+ 	if ((rose_route = kmalloc(sizeof(*rose_route), GFP_ATOMIC)) == NULL) {
+ 		rose_transmit_clear_request(rose_neigh, lci, ROSE_NETWORK_CONGESTION, 120);
+-		goto out;
++		goto put_neigh;
+ 	}
+ 
+ 	rose_route->lci1      = lci;
+@@ -1062,8 +1070,8 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 	rose_route->lci2      = new_lci;
+ 	rose_route->neigh2    = new_neigh;
+ 
+-	rose_route->neigh1->use++;
+-	rose_route->neigh2->use++;
++	rose_neigh_hold(rose_route->neigh1);
++	rose_neigh_hold(rose_route->neigh2);
+ 
+ 	rose_route->next = rose_route_list;
+ 	rose_route_list  = rose_route;
+@@ -1075,6 +1083,8 @@ int rose_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ 	rose_transmit_link(skb, rose_route->neigh2);
+ 	res = 1;
+ 
++put_neigh:
++	rose_neigh_put(new_neigh);
+ out:
+ 	spin_unlock_bh(&rose_route_list_lock);
+ 	spin_unlock_bh(&rose_neigh_list_lock);
+@@ -1190,7 +1200,7 @@ static int rose_neigh_show(struct seq_file *seq, void *v)
+ 			   (rose_neigh->loopback) ? "RSLOOP-0" : ax2asc(buf, &rose_neigh->callsign),
+ 			   rose_neigh->dev ? rose_neigh->dev->name : "???",
+ 			   rose_neigh->count,
+-			   rose_neigh->use,
++			   refcount_read(&rose_neigh->use) - rose_neigh->count - 1,
+ 			   (rose_neigh->dce_mode) ? "DCE" : "DTE",
+ 			   (rose_neigh->restarted) ? "yes" : "no",
+ 			   ax25_display_timer(&rose_neigh->t0timer) / HZ,
+@@ -1295,18 +1305,22 @@ void __exit rose_rt_free(void)
+ 	struct rose_neigh *s, *rose_neigh = rose_neigh_list;
+ 	struct rose_node  *t, *rose_node  = rose_node_list;
+ 	struct rose_route *u, *rose_route = rose_route_list;
++	int i;
+ 
+ 	while (rose_neigh != NULL) {
+ 		s          = rose_neigh;
+ 		rose_neigh = rose_neigh->next;
+ 
+ 		rose_remove_neigh(s);
++		rose_neigh_put(s);
+ 	}
+ 
+ 	while (rose_node != NULL) {
+ 		t         = rose_node;
+ 		rose_node = rose_node->next;
+ 
++		for (i = 0; i < t->count; i++)
++			rose_neigh_put(t->neighbour[i]);
+ 		rose_remove_node(t);
+ 	}
+ 
+diff --git a/net/rose/rose_timer.c b/net/rose/rose_timer.c
+index 020369c49587f1..bb60a1654d6125 100644
+--- a/net/rose/rose_timer.c
++++ b/net/rose/rose_timer.c
+@@ -180,7 +180,7 @@ static void rose_timer_expiry(struct timer_list *t)
+ 		break;
+ 
+ 	case ROSE_STATE_2:	/* T3 */
+-		rose->neighbour->use--;
++		rose_neigh_put(rose->neighbour);
+ 		rose_disconnect(sk, ETIMEDOUT, -1, -1);
+ 		break;
+ 
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index a9ed2ccab1bdb0..2bb5e19e10caa8 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -546,7 +546,9 @@ static void sctp_v6_from_sk(union sctp_addr *addr, struct sock *sk)
+ {
+ 	addr->v6.sin6_family = AF_INET6;
+ 	addr->v6.sin6_port = 0;
++	addr->v6.sin6_flowinfo = 0;
+ 	addr->v6.sin6_addr = sk->sk_v6_rcv_saddr;
++	addr->v6.sin6_scope_id = 0;
+ }
+ 
+ /* Initialize sk->sk_rcv_saddr from sctp_addr. */
+diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
+index 27bae58f407253..fe000ff522d245 100644
+--- a/sound/soc/codecs/lpass-tx-macro.c
++++ b/sound/soc/codecs/lpass-tx-macro.c
+@@ -2230,7 +2230,7 @@ static int tx_macro_register_mclk_output(struct tx_macro *tx)
+ }
+ 
+ static const struct snd_soc_component_driver tx_macro_component_drv = {
+-	.name = "RX-MACRO",
++	.name = "TX-MACRO",
+ 	.probe = tx_macro_component_probe,
+ 	.controls = tx_macro_snd_controls,
+ 	.num_controls = ARRAY_SIZE(tx_macro_snd_controls),
+diff --git a/sound/soc/codecs/rt1320-sdw.c b/sound/soc/codecs/rt1320-sdw.c
+index 015cc710e6dc08..d6d54168cccd09 100644
+--- a/sound/soc/codecs/rt1320-sdw.c
++++ b/sound/soc/codecs/rt1320-sdw.c
+@@ -109,6 +109,7 @@ static const struct reg_sequence rt1320_blind_write[] = {
+ 	{ 0x0000d540, 0x01 },
+ 	{ 0xd172, 0x2a },
+ 	{ 0xc5d6, 0x01 },
++	{ 0xd478, 0xff },
+ };
+ 
+ static const struct reg_sequence rt1320_vc_blind_write[] = {
+@@ -159,7 +160,7 @@ static const struct reg_sequence rt1320_vc_blind_write[] = {
+ 	{ 0xd471, 0x3a },
+ 	{ 0xd474, 0x11 },
+ 	{ 0xd475, 0x32 },
+-	{ 0xd478, 0x64 },
++	{ 0xd478, 0xff },
+ 	{ 0xd479, 0x20 },
+ 	{ 0xd47a, 0x10 },
+ 	{ 0xd47c, 0xff },
+diff --git a/sound/soc/codecs/rt721-sdca.c b/sound/soc/codecs/rt721-sdca.c
+index ba080957e93361..98d8ebc6607ff1 100644
+--- a/sound/soc/codecs/rt721-sdca.c
++++ b/sound/soc/codecs/rt721-sdca.c
+@@ -278,6 +278,8 @@ static void rt721_sdca_jack_preset(struct rt721_sdca_priv *rt721)
+ 		RT721_ENT_FLOAT_CTL1, 0x4040);
+ 	rt_sdca_index_write(rt721->mbq_regmap, RT721_HDA_SDCA_FLOAT,
+ 		RT721_ENT_FLOAT_CTL4, 0x1201);
++	rt_sdca_index_write(rt721->mbq_regmap, RT721_BOOST_CTRL,
++		RT721_BST_4CH_TOP_GATING_CTRL1, 0x002a);
+ 	regmap_write(rt721->regmap, 0x2f58, 0x07);
+ }
+ 
+diff --git a/sound/soc/codecs/rt721-sdca.h b/sound/soc/codecs/rt721-sdca.h
+index 0a82c107b19a20..71fac9cd87394e 100644
+--- a/sound/soc/codecs/rt721-sdca.h
++++ b/sound/soc/codecs/rt721-sdca.h
+@@ -56,6 +56,7 @@ struct rt721_sdca_dmic_kctrl_priv {
+ #define RT721_CBJ_CTRL				0x0a
+ #define RT721_CAP_PORT_CTRL			0x0c
+ #define RT721_CLASD_AMP_CTRL			0x0d
++#define RT721_BOOST_CTRL			0x0f
+ #define RT721_VENDOR_REG			0x20
+ #define RT721_RC_CALIB_CTRL			0x40
+ #define RT721_VENDOR_EQ_L			0x53
+@@ -93,6 +94,9 @@ struct rt721_sdca_dmic_kctrl_priv {
+ /* Index (NID:0dh) */
+ #define RT721_CLASD_AMP_2CH_CAL			0x14
+ 
++/* Index (NID:0fh) */
++#define RT721_BST_4CH_TOP_GATING_CTRL1		0x05
++
+ /* Index (NID:20h) */
+ #define RT721_JD_PRODUCT_NUM			0x00
+ #define RT721_ANALOG_BIAS_CTL3			0x04
+diff --git a/tools/perf/util/symbol-minimal.c b/tools/perf/util/symbol-minimal.c
+index c73fe2e09fe91a..fc9c6f39d5dd39 100644
+--- a/tools/perf/util/symbol-minimal.c
++++ b/tools/perf/util/symbol-minimal.c
+@@ -4,7 +4,6 @@
+ 
+ #include <errno.h>
+ #include <unistd.h>
+-#include <stdio.h>
+ #include <fcntl.h>
+ #include <string.h>
+ #include <stdlib.h>
+@@ -88,11 +87,8 @@ int filename__read_debuglink(const char *filename __maybe_unused,
+  */
+ int filename__read_build_id(const char *filename, struct build_id *bid)
+ {
+-	FILE *fp;
+-	int ret = -1;
++	int fd, ret = -1;
+ 	bool need_swap = false, elf32;
+-	u8 e_ident[EI_NIDENT];
+-	int i;
+ 	union {
+ 		struct {
+ 			Elf32_Ehdr ehdr32;
+@@ -103,28 +99,27 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 			Elf64_Phdr *phdr64;
+ 		};
+ 	} hdrs;
+-	void *phdr;
+-	size_t phdr_size;
+-	void *buf = NULL;
+-	size_t buf_size = 0;
++	void *phdr, *buf = NULL;
++	ssize_t phdr_size, ehdr_size, buf_size = 0;
+ 
+-	fp = fopen(filename, "r");
+-	if (fp == NULL)
++	fd = open(filename, O_RDONLY);
++	if (fd < 0)
+ 		return -1;
+ 
+-	if (fread(e_ident, sizeof(e_ident), 1, fp) != 1)
++	if (read(fd, hdrs.ehdr32.e_ident, EI_NIDENT) != EI_NIDENT)
+ 		goto out;
+ 
+-	if (memcmp(e_ident, ELFMAG, SELFMAG) ||
+-	    e_ident[EI_VERSION] != EV_CURRENT)
++	if (memcmp(hdrs.ehdr32.e_ident, ELFMAG, SELFMAG) ||
++	    hdrs.ehdr32.e_ident[EI_VERSION] != EV_CURRENT)
+ 		goto out;
+ 
+-	need_swap = check_need_swap(e_ident[EI_DATA]);
+-	elf32 = e_ident[EI_CLASS] == ELFCLASS32;
++	need_swap = check_need_swap(hdrs.ehdr32.e_ident[EI_DATA]);
++	elf32 = hdrs.ehdr32.e_ident[EI_CLASS] == ELFCLASS32;
++	ehdr_size = (elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64)) - EI_NIDENT;
+ 
+-	if (fread(elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64,
+-		  elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64),
+-		  1, fp) != 1)
++	if (read(fd,
++		 (elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64) + EI_NIDENT,
++		 ehdr_size) != ehdr_size)
+ 		goto out;
+ 
+ 	if (need_swap) {
+@@ -138,14 +133,18 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 			hdrs.ehdr64.e_phnum = bswap_16(hdrs.ehdr64.e_phnum);
+ 		}
+ 	}
+-	phdr_size = elf32 ? hdrs.ehdr32.e_phentsize * hdrs.ehdr32.e_phnum
+-			  : hdrs.ehdr64.e_phentsize * hdrs.ehdr64.e_phnum;
++	if ((elf32 && hdrs.ehdr32.e_phentsize != sizeof(Elf32_Phdr)) ||
++	    (!elf32 && hdrs.ehdr64.e_phentsize != sizeof(Elf64_Phdr)))
++		goto out;
++
++	phdr_size = elf32 ? sizeof(Elf32_Phdr) * hdrs.ehdr32.e_phnum
++			  : sizeof(Elf64_Phdr) * hdrs.ehdr64.e_phnum;
+ 	phdr = malloc(phdr_size);
+ 	if (phdr == NULL)
+ 		goto out;
+ 
+-	fseek(fp, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET);
+-	if (fread(phdr, phdr_size, 1, fp) != 1)
++	lseek(fd, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET);
++	if (read(fd, phdr, phdr_size) != phdr_size)
+ 		goto out_free;
+ 
+ 	if (elf32)
+@@ -153,8 +152,8 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 	else
+ 		hdrs.phdr64 = phdr;
+ 
+-	for (i = 0; i < elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum; i++) {
+-		size_t p_filesz;
++	for (int i = 0; i < (elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum); i++) {
++		ssize_t p_filesz;
+ 
+ 		if (need_swap) {
+ 			if (elf32) {
+@@ -180,8 +179,8 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 				goto out_free;
+ 			buf = tmp;
+ 		}
+-		fseek(fp, elf32 ? hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET);
+-		if (fread(buf, p_filesz, 1, fp) != 1)
++		lseek(fd, elf32 ? hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET);
++		if (read(fd, buf, p_filesz) != p_filesz)
+ 			goto out_free;
+ 
+ 		ret = read_build_id(buf, p_filesz, bid, need_swap);
+@@ -194,7 +193,7 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 	free(buf);
+ 	free(phdr);
+ out:
+-	fclose(fp);
++	close(fd);
+ 	return ret;
+ }
+ 
+diff --git a/tools/tracing/latency/Makefile.config b/tools/tracing/latency/Makefile.config
+index 0fe6b50f029bf7..6efa13e3ca93fd 100644
+--- a/tools/tracing/latency/Makefile.config
++++ b/tools/tracing/latency/Makefile.config
+@@ -1,7 +1,15 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
++include $(srctree)/tools/scripts/utilities.mak
++
+ STOP_ERROR :=
+ 
++ifndef ($(NO_LIBTRACEEVENT),1)
++  ifeq ($(call get-executable,$(PKG_CONFIG)),)
++    $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it)
++  endif
++endif
++
+ define lib_setup
+   $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))
+   $(eval LDFLAGS += $(shell sh -c "$(PKG_CONFIG) --libs-only-L lib$(1)"))
+diff --git a/tools/tracing/rtla/Makefile.config b/tools/tracing/rtla/Makefile.config
+index 5f2231d8d62666..07ff5e8f3006e6 100644
+--- a/tools/tracing/rtla/Makefile.config
++++ b/tools/tracing/rtla/Makefile.config
+@@ -1,10 +1,18 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ 
++include $(srctree)/tools/scripts/utilities.mak
++
+ STOP_ERROR :=
+ 
+ LIBTRACEEVENT_MIN_VERSION = 1.5
+ LIBTRACEFS_MIN_VERSION = 1.6
+ 
++ifndef ($(NO_LIBTRACEEVENT),1)
++  ifeq ($(call get-executable,$(PKG_CONFIG)),)
++    $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it)
++  endif
++endif
++
+ define lib_setup
+   $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))
+   $(eval LDFLAGS += $(shell sh -c "$(PKG_CONFIG) --libs-only-L lib$(1)"))


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-04 15:46 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-04 15:46 UTC (permalink / raw
  To: gentoo-commits

commit:     1ddd61645e329899fb37bd1bb4ef8ffae948c8b7
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Sep  4 15:46:24 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Sep  4 15:46:24 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1ddd6164

Remove net: ipv4: fix regression in local-broadcast routes

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |   4 -
 ..._fix_regression_in_local-broadcast_routes.patch | 134 ---------------------
 2 files changed, 138 deletions(-)

diff --git a/0000_README b/0000_README
index 69964e75..df581c25 100644
--- a/0000_README
+++ b/0000_README
@@ -83,10 +83,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2010_ipv4_fix_regression_in_local-broadcast_routes.patch
-From:   https://lore.kernel.org/regressions/20250826121750.8451-1-oscmaes92@gmail.com/
-Desc:   net: ipv4: fix regression in local-broadcast routes
-
 Patch:  2700_asus-wmi_fix_racy_registrations.patch
 From:   https://lore.kernel.org/all/20250827052441.23382-1-tiwai@suse.de/#Z31drivers:platform:x86:asus-wmi.c
 Desc:   platform/x86: asus-wmi: Fix racy registrations

diff --git a/2010_ipv4_fix_regression_in_local-broadcast_routes.patch b/2010_ipv4_fix_regression_in_local-broadcast_routes.patch
deleted file mode 100644
index a306132d..00000000
--- a/2010_ipv4_fix_regression_in_local-broadcast_routes.patch
+++ /dev/null
@@ -1,134 +0,0 @@
-From: Oscar Maes <oscmaes92@gmail.com>
-To: bacs@librecast.net,
-	brett@librecast.net,
-	kuba@kernel.org
-Cc: davem@davemloft.net,
-	dsahern@kernel.org,
-	netdev@vger.kernel.org,
-	regressions@lists.linux.dev,
-	stable@vger.kernel.org,
-	Oscar Maes <oscmaes92@gmail.com>
-Subject: [PATCH net v2 1/2] net: ipv4: fix regression in local-broadcast routes
-Date: Tue, 26 Aug 2025 14:17:49 +0200
-Message-Id: <20250826121750.8451-1-oscmaes92@gmail.com>
-X-Mailer: git-send-email 2.39.5
-In-Reply-To: <20250826121126-oscmaes92@gmail.com>
-References: <20250826121126-oscmaes92@gmail.com>
-
-Commit 9e30ecf23b1b ("net: ipv4: fix incorrect MTU in broadcast routes")
-introduced a regression where local-broadcast packets would have their
-gateway set in __mkroute_output, which was caused by fi = NULL being
-removed.
-
-Fix this by resetting the fib_info for local-broadcast packets. This
-preserves the intended changes for directed-broadcast packets.
-
-Cc: stable@vger.kernel.org
-Fixes: 9e30ecf23b1b ("net: ipv4: fix incorrect MTU in broadcast routes")
-Reported-by: Brett A C Sheffield <bacs@librecast.net>
-Closes: https://lore.kernel.org/regressions/20250822165231.4353-4-bacs@librecast.net
-Signed-off-by: Oscar Maes <oscmaes92@gmail.com>
----
-
-Thanks to Brett Sheffield for finding the regression and writing
-the initial fix!
----
- net/ipv4/route.c | 10 +++++++---
- 1 file changed, 7 insertions(+), 3 deletions(-)
-
-diff --git a/net/ipv4/route.c b/net/ipv4/route.c
-index 1f212b2ce4c6..24c898b7654f 100644
---- a/net/ipv4/route.c
-+++ b/net/ipv4/route.c
-@@ -2575,12 +2575,16 @@ static struct rtable *__mkroute_output(const struct fib_result *res,
- 		    !netif_is_l3_master(dev_out))
- 			return ERR_PTR(-EINVAL);
- 
--	if (ipv4_is_lbcast(fl4->daddr))
-+	if (ipv4_is_lbcast(fl4->daddr)) {
- 		type = RTN_BROADCAST;
--	else if (ipv4_is_multicast(fl4->daddr))
-+
-+		/* reset fi to prevent gateway resolution */
-+		fi = NULL;
-+	} else if (ipv4_is_multicast(fl4->daddr)) {
- 		type = RTN_MULTICAST;
--	else if (ipv4_is_zeronet(fl4->daddr))
-+	} else if (ipv4_is_zeronet(fl4->daddr)) {
- 		return ERR_PTR(-EINVAL);
-+	}
- 
- 	if (dev_out->flags & IFF_LOOPBACK)
- 		flags |= RTCF_LOCAL;
--- 
-2.39.5
-
-

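For reference, the net effect of the patch being dropped here, on the address
classification block in __mkroute_output(), was the following (a condensed
sketch reconstructed from the hunk above, not a verbatim copy of
net/ipv4/route.c):

	if (ipv4_is_lbcast(fl4->daddr)) {
		type = RTN_BROADCAST;

		/* local broadcast (255.255.255.255): reset the fib_info
		 * so that no gateway is resolved for this route */
		fi = NULL;
	} else if (ipv4_is_multicast(fl4->daddr)) {
		type = RTN_MULTICAST;
	} else if (ipv4_is_zeronet(fl4->daddr)) {
		return ERR_PTR(-EINVAL);
	}

Directed broadcasts (e.g. 192.168.1.255 on a /24) keep their fib_info, and
with it the MTU fix from commit 9e30ecf23b1b, while the all-ones destination
once again bypasses gateway resolution.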

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-05 14:01 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-05 14:01 UTC (permalink / raw
  To: gentoo-commits

commit:     59f69be672470056d48c84c2dc4a1ebec7dfbad9
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  5 12:45:29 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Sep  5 12:45:29 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=59f69be6

Add 1801_proc_fix_type_confusion_in_pde_set_flags.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 +++
 ..._proc_fix_type_confusion_in_pde_set_flags.patch | 40 ++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/0000_README b/0000_README
index df581c25..1902d1c9 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
 From:   https://lore.kernel.org/all/20250821105806.1453833-1-wangzijie1@honor.com/
 Desc:   proc: fix missing pde_set_flags() for net proc files
 
+Patch:  1801_proc_fix_type_confusion_in_pde_set_flags.patch
+From:   https://lore.kernel.org/linux-fsdevel/20250904135715.3972782-1-wangzijie1@honor.com/
+Desc:   proc: fix type confusion in pde_set_flags()
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1801_proc_fix_type_confusion_in_pde_set_flags.patch b/1801_proc_fix_type_confusion_in_pde_set_flags.patch
new file mode 100644
index 00000000..4777dbdc
--- /dev/null
+++ b/1801_proc_fix_type_confusion_in_pde_set_flags.patch
@@ -0,0 +1,40 @@
+Subject: [PATCH] proc: fix type confusion in pde_set_flags()
+
+Commit 2ce3d282bd50 ("proc: fix missing pde_set_flags() for net proc files")
+missed a key part in the definition of proc_dir_entry:
+
+union {
+	const struct proc_ops *proc_ops;
+	const struct file_operations *proc_dir_ops;
+};
+
+So dereferencing ->proc_ops as if it were a proc_ops structure results in
+type confusion and makes the NULL check on 'proc_ops' ineffective for proc dirs.
+
+Add a !S_ISDIR(dp->mode) check before calling pde_set_flags() to fix it.
+
+Fixes: 2ce3d282bd50 ("proc: fix missing pde_set_flags() for net proc files")
+Reported-by: Brad Spengler <spender@grsecurity.net>
+Signed-off-by: wangzijie <wangzijie1@honor.com>
+---
+ fs/proc/generic.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index bd0c099cf..176281112 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -393,7 +393,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 	if (proc_alloc_inum(&dp->low_ino))
+ 		goto out_free_entry;
+ 
+-	pde_set_flags(dp);
++	if (!S_ISDIR(dp->mode))
++		pde_set_flags(dp);
+ 
+ 	write_lock(&proc_subdir_lock);
+ 	dp->parent = dir;
+-- 
+2.25.1
+
+
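
To see the type confusion concretely, here is a minimal sketch (the union
mirrors the proc_dir_entry definition quoted above; pde_set_flags_sketch()
is a hypothetical stand-in for the real pde_set_flags(), shown only to
illustrate why the guard is needed):

	struct pde_sketch {
		union {
			const struct proc_ops *proc_ops;            /* regular files */
			const struct file_operations *proc_dir_ops; /* directories */
		};
		umode_t mode;
	};

	static void pde_set_flags_sketch(struct pde_sketch *dp)
	{
		/* For a directory this reads proc_dir_ops through the
		 * proc_ops member of the union: type confusion, so the
		 * NULL check does not guard what it appears to guard. */
		if (dp->proc_ops)
			; /* set PROC_ENTRY_* flags from dp->proc_ops */
	}

	/* The fix: only non-directories go through pde_set_flags() */
	if (!S_ISDIR(dp->mode))
		pde_set_flags_sketch(dp);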


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-10  5:30 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-10  5:30 UTC (permalink / raw
  To: gentoo-commits

commit:     03624f3721d6c142c333f0737ac770a718cb9004
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 10 05:30:41 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Sep 10 05:30:41 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=03624f37

Linux patch 6.16.6

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-6.16.6.patch | 7524 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7528 insertions(+)

diff --git a/0000_README b/0000_README
index 1902d1c9..29987f33 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-6.16.5.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.5
 
+Patch:  1005_linux-6.16.6.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.6
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1005_linux-6.16.6.patch b/1005_linux-6.16.6.patch
new file mode 100644
index 00000000..4d7b7190
--- /dev/null
+++ b/1005_linux-6.16.6.patch
@@ -0,0 +1,7524 @@
+diff --git a/Makefile b/Makefile
+index 58a78d21557742..0200497da26cd0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/microchip/at91-sama7d65_curiosity.dts b/arch/arm/boot/dts/microchip/at91-sama7d65_curiosity.dts
+index 53a657cf4efba3..dfe1f0616a8100 100644
+--- a/arch/arm/boot/dts/microchip/at91-sama7d65_curiosity.dts
++++ b/arch/arm/boot/dts/microchip/at91-sama7d65_curiosity.dts
+@@ -352,6 +352,8 @@ &rtt {
+ 
+ &sdmmc1 {
+ 	bus-width = <4>;
++	no-1-8-v;
++	sdhci-caps-mask = <0x0 0x00200000>;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_sdmmc1_default>;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
+index d0fc5977258fbf..16078ff60ef08b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-data-modul-edm-sbc.dts
+@@ -555,6 +555,7 @@ &usdhc2 {
+ 	pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>;
+ 	cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>;
+ 	vmmc-supply = <&reg_usdhc2_vmmc>;
++	vqmmc-supply = <&ldo5>;
+ 	bus-width = <4>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
+index 7f754e0a5d693f..68c2e0156a5c81 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
+@@ -609,6 +609,7 @@ &usdhc2 {
+ 	pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_gpio>;
+ 	cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>;
+ 	vmmc-supply = <&reg_usdhc2_vmmc>;
++	vqmmc-supply = <&ldo5>;
+ 	bus-width = <4>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
+index d7fd9d36f8240e..f7346b3d35fe53 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
+@@ -467,6 +467,10 @@ &pwm4 {
+ 	status = "okay";
+ };
+ 
++&reg_usdhc2_vqmmc {
++	status = "okay";
++};
++
+ &sai5 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_sai5>;
+@@ -876,8 +880,7 @@ pinctrl_usdhc2: usdhc2grp {
+ 			   <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0	0x1d2>,
+ 			   <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d2>,
+ 			   <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d2>,
+-			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d2>,
+-			   <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0>;
++			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d2>;
+ 	};
+ 
+ 	pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+@@ -886,8 +889,7 @@ pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+ 			   <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4>,
+-			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>,
+-			   <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0>;
++			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>;
+ 	};
+ 
+ 	pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+@@ -896,8 +898,7 @@ pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+ 			   <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4>,
+-			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>,
+-			   <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0>;
++			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>;
+ 	};
+ 
+ 	pinctrl_usdhc2_gpio: usdhc2-gpiogrp {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
+index 23c612e80dd383..092b1b65a88c03 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
+@@ -603,6 +603,10 @@ &pwm3 {
+ 	status = "okay";
+ };
+ 
++&reg_usdhc2_vqmmc {
++	status = "okay";
++};
++
+ &sai3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_sai3>;
+@@ -982,8 +986,7 @@ pinctrl_usdhc2: usdhc2grp {
+ 			   <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0	0x1d2>,
+ 			   <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d2>,
+ 			   <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d2>,
+-			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d2>,
+-			   <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0>;
++			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d2>;
+ 	};
+ 
+ 	pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+@@ -992,8 +995,7 @@ pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp {
+ 			   <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4>,
+-			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>,
+-			   <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0>;
++			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>;
+ 	};
+ 
+ 	pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+@@ -1002,8 +1004,7 @@ pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp {
+ 			   <MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4>,
+ 			   <MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4>,
+-			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>,
+-			   <MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0>;
++			   <MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4>;
+ 	};
+ 
+ 	pinctrl_usdhc2_gpio: usdhc2-gpiogrp {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
+index 6067ca3be814e1..0a592fa2d8bc7b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
+@@ -24,6 +24,20 @@ reg_vcc3v3: regulator-vcc3v3 {
+ 		regulator-max-microvolt = <3300000>;
+ 		regulator-always-on;
+ 	};
++
++	reg_usdhc2_vqmmc: regulator-usdhc2-vqmmc {
++		compatible = "regulator-gpio";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_reg_usdhc2_vqmmc>;
++		regulator-name = "V_SD2";
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <3300000>;
++		gpios = <&gpio1 4 GPIO_ACTIVE_HIGH>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
++		vin-supply = <&ldo5_reg>;
++		status = "disabled";
++	};
+ };
+ 
+ &A53_0 {
+@@ -180,6 +194,10 @@ m24c64: eeprom@57 {
+ 	};
+ };
+ 
++&usdhc2 {
++	vqmmc-supply = <&reg_usdhc2_vqmmc>;
++};
++
+ &usdhc3 {
+ 	pinctrl-names = "default", "state_100mhz", "state_200mhz";
+ 	pinctrl-0 = <&pinctrl_usdhc3>;
+@@ -229,6 +247,10 @@ pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
+ 		fsl,pins = <MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x10>;
+ 	};
+ 
++	pinctrl_reg_usdhc2_vqmmc: regusdhc2vqmmcgrp {
++		fsl,pins = <MX8MP_IOMUXC_GPIO1_IO04__GPIO1_IO04		0xc0>;
++	};
++
+ 	pinctrl_usdhc3: usdhc3grp {
+ 		fsl,pins = <MX8MP_IOMUXC_NAND_WE_B__USDHC3_CLK		0x194>,
+ 			   <MX8MP_IOMUXC_NAND_WP_B__USDHC3_CMD		0x1d4>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 5473070823cb12..6cadde440fff48 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -966,6 +966,7 @@ spiflash: flash@0 {
+ 		reg = <0>;
+ 		m25p,fast-read;
+ 		spi-max-frequency = <10000000>;
++		vcc-supply = <&vcc_3v0>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3582-radxa-e52c.dts b/arch/arm64/boot/dts/rockchip/rk3582-radxa-e52c.dts
+index e04f21d8c831eb..431ff77d451803 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3582-radxa-e52c.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3582-radxa-e52c.dts
+@@ -250,6 +250,7 @@ eeprom@50 {
+ 		compatible = "belling,bl24c16a", "atmel,24c16";
+ 		reg = <0x50>;
+ 		pagesize = <16>;
++		read-only;
+ 		vcc-supply = <&vcc_3v3_pmu>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-plus.dts b/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-plus.dts
+index 121e4d1c3fa5da..8222f1fae8fadc 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-plus.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-plus.dts
+@@ -77,7 +77,7 @@ &analog_sound {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&hp_detect>;
+ 	simple-audio-card,aux-devs = <&speaker_amp>, <&headphone_amp>;
+-	simple-audio-card,hp-det-gpios = <&gpio1 RK_PD3 GPIO_ACTIVE_LOW>;
++	simple-audio-card,hp-det-gpios = <&gpio1 RK_PD3 GPIO_ACTIVE_HIGH>;
+ 	simple-audio-card,widgets =
+ 		"Microphone", "Onboard Microphone",
+ 		"Microphone", "Microphone Jack",
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5.dtsi
+index 91d56c34a1e456..8a8f3b26754d74 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-orangepi-5.dtsi
+@@ -365,6 +365,8 @@ &sdhci {
+ 	max-frequency = <200000000>;
+ 	mmc-hs400-1_8v;
+ 	mmc-hs400-enhanced-strobe;
++	vmmc-supply = <&vcc_3v3_s3>;
++	vqmmc-supply = <&vcc_1v8_s3>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
+index 79550b22ba19ce..fb9b88eebeb15a 100644
+--- a/arch/arm64/include/asm/module.h
++++ b/arch/arm64/include/asm/module.h
+@@ -19,6 +19,7 @@ struct mod_arch_specific {
+ 
+ 	/* for CONFIG_DYNAMIC_FTRACE */
+ 	struct plt_entry	*ftrace_trampolines;
++	struct plt_entry	*init_ftrace_trampolines;
+ };
+ 
+ u64 module_emit_plt_entry(struct module *mod, Elf64_Shdr *sechdrs,
+diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h
+index b9ae8349e35dbb..fb944b46846dae 100644
+--- a/arch/arm64/include/asm/module.lds.h
++++ b/arch/arm64/include/asm/module.lds.h
+@@ -2,6 +2,7 @@ SECTIONS {
+ 	.plt 0 : { BYTE(0) }
+ 	.init.plt 0 : { BYTE(0) }
+ 	.text.ftrace_trampoline 0 : { BYTE(0) }
++	.init.text.ftrace_trampoline 0 : { BYTE(0) }
+ 
+ #ifdef CONFIG_KASAN_SW_TAGS
+ 	/*
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 5a890714ee2e98..5adad37ab4faff 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -258,10 +258,17 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
+ 	return ftrace_modify_code(pc, 0, new, false);
+ }
+ 
+-static struct plt_entry *get_ftrace_plt(struct module *mod)
++static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
+ {
+ #ifdef CONFIG_MODULES
+-	struct plt_entry *plt = mod->arch.ftrace_trampolines;
++	struct plt_entry *plt = NULL;
++
++	if (within_module_mem_type(addr, mod, MOD_INIT_TEXT))
++		plt = mod->arch.init_ftrace_trampolines;
++	else if (within_module_mem_type(addr, mod, MOD_TEXT))
++		plt = mod->arch.ftrace_trampolines;
++	else
++		return NULL;
+ 
+ 	return &plt[FTRACE_PLT_IDX];
+ #else
+@@ -332,7 +339,7 @@ static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
+ 	if (WARN_ON(!mod))
+ 		return false;
+ 
+-	plt = get_ftrace_plt(mod);
++	plt = get_ftrace_plt(mod, pc);
+ 	if (!plt) {
+ 		pr_err("ftrace: no module PLT for %ps\n", (void *)*addr);
+ 		return false;
+diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
+index bde32979c06afc..7afd370da9f48f 100644
+--- a/arch/arm64/kernel/module-plts.c
++++ b/arch/arm64/kernel/module-plts.c
+@@ -283,7 +283,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 	unsigned long core_plts = 0;
+ 	unsigned long init_plts = 0;
+ 	Elf64_Sym *syms = NULL;
+-	Elf_Shdr *pltsec, *tramp = NULL;
++	Elf_Shdr *pltsec, *tramp = NULL, *init_tramp = NULL;
+ 	int i;
+ 
+ 	/*
+@@ -298,6 +298,9 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 		else if (!strcmp(secstrings + sechdrs[i].sh_name,
+ 				 ".text.ftrace_trampoline"))
+ 			tramp = sechdrs + i;
++		else if (!strcmp(secstrings + sechdrs[i].sh_name,
++				 ".init.text.ftrace_trampoline"))
++			init_tramp = sechdrs + i;
+ 		else if (sechdrs[i].sh_type == SHT_SYMTAB)
+ 			syms = (Elf64_Sym *)sechdrs[i].sh_addr;
+ 	}
+@@ -363,5 +366,12 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+ 		tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry);
+ 	}
+ 
++	if (init_tramp) {
++		init_tramp->sh_type = SHT_NOBITS;
++		init_tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
++		init_tramp->sh_addralign = __alignof__(struct plt_entry);
++		init_tramp->sh_size = NR_FTRACE_PLTS * sizeof(struct plt_entry);
++	}
++
+ 	return 0;
+ }
+diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
+index 06bb680bfe975c..8b6f5cd58883af 100644
+--- a/arch/arm64/kernel/module.c
++++ b/arch/arm64/kernel/module.c
+@@ -453,6 +453,17 @@ static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
+ 	__init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR);
+ 
+ 	mod->arch.ftrace_trampolines = plts;
++
++	s = find_section(hdr, sechdrs, ".init.text.ftrace_trampoline");
++	if (!s)
++		return -ENOEXEC;
++
++	plts = (void *)s->sh_addr;
++
++	__init_plt(&plts[FTRACE_PLT_IDX], FTRACE_ADDR);
++
++	mod->arch.init_ftrace_trampolines = plts;
++
+ #endif
+ 	return 0;
+ }
+diff --git a/arch/loongarch/kernel/signal.c b/arch/loongarch/kernel/signal.c
+index 4740cb5b238898..c9f7ca778364ed 100644
+--- a/arch/loongarch/kernel/signal.c
++++ b/arch/loongarch/kernel/signal.c
+@@ -677,6 +677,11 @@ static int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc,
+ 	for (i = 1; i < 32; i++)
+ 		err |= __put_user(regs->regs[i], &sc->sc_regs[i]);
+ 
++#ifdef CONFIG_CPU_HAS_LBT
++	if (extctx->lbt.addr)
++		err |= protected_save_lbt_context(extctx);
++#endif
++
+ 	if (extctx->lasx.addr)
+ 		err |= protected_save_lasx_context(extctx);
+ 	else if (extctx->lsx.addr)
+@@ -684,11 +689,6 @@ static int setup_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc,
+ 	else if (extctx->fpu.addr)
+ 		err |= protected_save_fpu_context(extctx);
+ 
+-#ifdef CONFIG_CPU_HAS_LBT
+-	if (extctx->lbt.addr)
+-		err |= protected_save_lbt_context(extctx);
+-#endif
+-
+ 	/* Set the "end" magic */
+ 	info = (struct sctx_info *)extctx->end.addr;
+ 	err |= __put_user(0, &info->magic);
+diff --git a/arch/loongarch/kernel/time.c b/arch/loongarch/kernel/time.c
+index 367906b10f810a..f3092f2de8b501 100644
+--- a/arch/loongarch/kernel/time.c
++++ b/arch/loongarch/kernel/time.c
+@@ -5,6 +5,7 @@
+  * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+  */
+ #include <linux/clockchips.h>
++#include <linux/cpuhotplug.h>
+ #include <linux/delay.h>
+ #include <linux/export.h>
+ #include <linux/init.h>
+@@ -102,6 +103,23 @@ static int constant_timer_next_event(unsigned long delta, struct clock_event_dev
+ 	return 0;
+ }
+ 
++static int arch_timer_starting(unsigned int cpu)
++{
++	set_csr_ecfg(ECFGF_TIMER);
++
++	return 0;
++}
++
++static int arch_timer_dying(unsigned int cpu)
++{
++	constant_set_state_shutdown(this_cpu_ptr(&constant_clockevent_device));
++
++	/* Clear Timer Interrupt */
++	write_csr_tintclear(CSR_TINTCLR_TI);
++
++	return 0;
++}
++
+ static unsigned long get_loops_per_jiffy(void)
+ {
+ 	unsigned long lpj = (unsigned long)const_clock_freq;
+@@ -172,6 +190,10 @@ int constant_clockevent_init(void)
+ 	lpj_fine = get_loops_per_jiffy();
+ 	pr_info("Constant clock event device register\n");
+ 
++	cpuhp_setup_state(CPUHP_AP_LOONGARCH_ARCH_TIMER_STARTING,
++			  "clockevents/loongarch/timer:starting",
++			  arch_timer_starting, arch_timer_dying);
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 1c5544401530fe..b29eff4ace2da0 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -69,7 +69,7 @@ config RISCV
+ 	select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
+ 	select ARCH_SUPPORTS_HUGETLBFS if MMU
+ 	# LLD >= 14: https://github.com/llvm/llvm-project/issues/50505
+-	select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000
++	select ARCH_SUPPORTS_LTO_CLANG if LLD_VERSION >= 140000 && CMODEL_MEDANY
+ 	select ARCH_SUPPORTS_LTO_CLANG_THIN if LLD_VERSION >= 140000
+ 	select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS if 64BIT && MMU
+ 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
+diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
+index a8a2af6dfe9d24..2a16e88e13deda 100644
+--- a/arch/riscv/include/asm/asm.h
++++ b/arch/riscv/include/asm/asm.h
+@@ -91,7 +91,7 @@
+ #endif
+ 
+ .macro asm_per_cpu dst sym tmp
+-	REG_L \tmp, TASK_TI_CPU_NUM(tp)
++	lw    \tmp, TASK_TI_CPU_NUM(tp)
+ 	slli  \tmp, \tmp, PER_CPU_OFFSET_SHIFT
+ 	la    \dst, __per_cpu_offset
+ 	add   \dst, \dst, \tmp
+diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
+index b88a6218b7f243..f5f4f7f85543f2 100644
+--- a/arch/riscv/include/asm/uaccess.h
++++ b/arch/riscv/include/asm/uaccess.h
+@@ -209,7 +209,7 @@ do {									\
+ 		err = 0;						\
+ 		break;							\
+ __gu_failed:								\
+-		x = 0;							\
++		x = (__typeof__(x))0;					\
+ 		err = -EFAULT;						\
+ } while (0)
+ 
+@@ -311,7 +311,7 @@ do {								\
+ do {								\
+ 	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&	\
+ 	    !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) {	\
+-		__inttype(x) ___val = (__inttype(x))x;			\
++		__typeof__(*(__gu_ptr)) ___val = (x);		\
+ 		if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(___val), sizeof(*__gu_ptr))) \
+ 			goto label;				\
+ 		break;						\
+@@ -438,10 +438,10 @@ unsigned long __must_check clear_user(void __user *to, unsigned long n)
+ }
+ 
+ #define __get_kernel_nofault(dst, src, type, err_label)			\
+-	__get_user_nocheck(*((type *)(dst)), (type *)(src), err_label)
++	__get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label)
+ 
+ #define __put_kernel_nofault(dst, src, type, err_label)			\
+-	__put_user_nocheck(*((type *)(src)), (type *)(dst), err_label)
++	__put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label)
+ 
+ static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
+ {
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index 75656afa2d6be8..4fdf187a62bfd5 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -46,7 +46,7 @@
+ 	 * a0 = &new_vmalloc[BIT_WORD(cpu)]
+ 	 * a1 = BIT_MASK(cpu)
+ 	 */
+-	REG_L 	a2, TASK_TI_CPU(tp)
++	lw	a2, TASK_TI_CPU(tp)
+ 	/*
+ 	 * Compute the new_vmalloc element position:
+ 	 * (cpu / 64) * 8 = (cpu >> 6) << 3
+diff --git a/arch/riscv/kernel/kexec_elf.c b/arch/riscv/kernel/kexec_elf.c
+index f4755d49b89eda..a00aa92c824f89 100644
+--- a/arch/riscv/kernel/kexec_elf.c
++++ b/arch/riscv/kernel/kexec_elf.c
+@@ -28,7 +28,7 @@ static int riscv_kexec_elf_load(struct kimage *image, struct elfhdr *ehdr,
+ 	int i;
+ 	int ret = 0;
+ 	size_t size;
+-	struct kexec_buf kbuf;
++	struct kexec_buf kbuf = {};
+ 	const struct elf_phdr *phdr;
+ 
+ 	kbuf.image = image;
+@@ -66,7 +66,7 @@ static int elf_find_pbase(struct kimage *image, unsigned long kernel_len,
+ {
+ 	int i;
+ 	int ret;
+-	struct kexec_buf kbuf;
++	struct kexec_buf kbuf = {};
+ 	const struct elf_phdr *phdr;
+ 	unsigned long lowest_paddr = ULONG_MAX;
+ 	unsigned long lowest_vaddr = ULONG_MAX;
+diff --git a/arch/riscv/kernel/kexec_image.c b/arch/riscv/kernel/kexec_image.c
+index 26a81774a78a36..8f2eb900910b14 100644
+--- a/arch/riscv/kernel/kexec_image.c
++++ b/arch/riscv/kernel/kexec_image.c
+@@ -41,7 +41,7 @@ static void *image_load(struct kimage *image,
+ 	struct riscv_image_header *h;
+ 	u64 flags;
+ 	bool be_image, be_kernel;
+-	struct kexec_buf kbuf;
++	struct kexec_buf kbuf = {};
+ 	int ret;
+ 
+ 	/* Check Image header */
+diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
+index e36104af2e247f..b9eb41b0a97519 100644
+--- a/arch/riscv/kernel/machine_kexec_file.c
++++ b/arch/riscv/kernel/machine_kexec_file.c
+@@ -261,7 +261,7 @@ int load_extra_segments(struct kimage *image, unsigned long kernel_start,
+ 	int ret;
+ 	void *fdt;
+ 	unsigned long initrd_pbase = 0UL;
+-	struct kexec_buf kbuf;
++	struct kexec_buf kbuf = {};
+ 	char *modified_cmdline = NULL;
+ 
+ 	kbuf.image = image;
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 10e01ff06312d9..9883a55d61b5b9 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -1356,7 +1356,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 				emit_mv(rd, rs, ctx);
+ #ifdef CONFIG_SMP
+ 			/* Load current CPU number in T1 */
+-			emit_ld(RV_REG_T1, offsetof(struct thread_info, cpu),
++			emit_lw(RV_REG_T1, offsetof(struct thread_info, cpu),
+ 				RV_REG_TP, ctx);
+ 			/* Load address of __per_cpu_offset array in T2 */
+ 			emit_addr(RV_REG_T2, (u64)&__per_cpu_offset, extra_pass, ctx);
+@@ -1763,7 +1763,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 		 */
+ 		if (insn->src_reg == 0 && insn->imm == BPF_FUNC_get_smp_processor_id) {
+ 			/* Load current CPU number in R0 */
+-			emit_ld(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
++			emit_lw(bpf_to_rv_reg(BPF_REG_0, ctx), offsetof(struct thread_info, cpu),
+ 				RV_REG_TP, ctx);
+ 			break;
+ 		}
+diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
+index 4604f924d8b86a..7eb61ef6a185f6 100644
+--- a/arch/x86/include/asm/pgtable_64_types.h
++++ b/arch/x86/include/asm/pgtable_64_types.h
+@@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
+ #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
+ #endif /* USE_EARLY_PGTABLE_L5 */
+ 
++#define ARCH_PAGE_TABLE_SYNC_MASK \
++	(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
++
+ extern unsigned int pgdir_shift;
+ extern unsigned int ptrs_per_p4d;
+ 
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index fdb6cab524f08d..458e5148d69e81 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -223,6 +223,24 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
+ 		sync_global_pgds_l4(start, end);
+ }
+ 
++/*
++ * Make kernel mappings visible in all page tables in the system.
++ * This is necessary except when the init task populates kernel mappings
++ * during the boot process. In that case, all processes originating from
++ * the init task copy the kernel mappings, so there is no issue.
++ * Otherwise, missing synchronization could lead to kernel crashes due
++ * to missing page table entries for certain kernel mappings.
++ *
++ * Synchronization is performed at the top level, which is the PGD in
++ * 5-level paging systems. But in 4-level paging systems, however,
++ * pgd_populate() is a no-op, so synchronization is done at the P4D level.
++ * sync_global_pgds() handles this difference between paging levels.
++ */
++void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
++{
++	sync_global_pgds(start, end);
++}
++
+ /*
+  * NOTE: This function is marked __ref because it calls __init function
+  * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index 0e7748c5e11796..83a9f99ea7f95a 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -677,7 +677,7 @@ static void ivpu_bo_unbind_all_user_contexts(struct ivpu_device *vdev)
+ static void ivpu_dev_fini(struct ivpu_device *vdev)
+ {
+ 	ivpu_jobs_abort_all(vdev);
+-	ivpu_pm_cancel_recovery(vdev);
++	ivpu_pm_disable_recovery(vdev);
+ 	ivpu_pm_disable(vdev);
+ 	ivpu_prepare_for_reset(vdev);
+ 	ivpu_shutdown(vdev);
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index ea30db181cd75e..d2c6cebdc901b3 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -408,10 +408,10 @@ void ivpu_pm_init(struct ivpu_device *vdev)
+ 	ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay);
+ }
+ 
+-void ivpu_pm_cancel_recovery(struct ivpu_device *vdev)
++void ivpu_pm_disable_recovery(struct ivpu_device *vdev)
+ {
+ 	drm_WARN_ON(&vdev->drm, delayed_work_pending(&vdev->pm->job_timeout_work));
+-	cancel_work_sync(&vdev->pm->recovery_work);
++	disable_work_sync(&vdev->pm->recovery_work);
+ }
+ 
+ void ivpu_pm_enable(struct ivpu_device *vdev)
+diff --git a/drivers/accel/ivpu/ivpu_pm.h b/drivers/accel/ivpu/ivpu_pm.h
+index 89b264cc0e3e78..a2aa7a27f32ef8 100644
+--- a/drivers/accel/ivpu/ivpu_pm.h
++++ b/drivers/accel/ivpu/ivpu_pm.h
+@@ -25,7 +25,7 @@ struct ivpu_pm_info {
+ void ivpu_pm_init(struct ivpu_device *vdev);
+ void ivpu_pm_enable(struct ivpu_device *vdev);
+ void ivpu_pm_disable(struct ivpu_device *vdev);
+-void ivpu_pm_cancel_recovery(struct ivpu_device *vdev);
++void ivpu_pm_disable_recovery(struct ivpu_device *vdev);
+ 
+ int ivpu_pm_suspend_cb(struct device *dev);
+ int ivpu_pm_resume_cb(struct device *dev);
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 98759d6199d3ac..65f0f56ad7538a 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -937,8 +937,10 @@ static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start,
+ 
+ 	new_sids = krealloc_array(sids, count + new_count,
+ 				  sizeof(*new_sids), GFP_KERNEL);
+-	if (!new_sids)
++	if (!new_sids) {
++		kfree(sids);
+ 		return NULL;
++	}
+ 
+ 	for (i = count; i < total_count; i++)
+ 		new_sids[i] = id_start++;
+diff --git a/drivers/acpi/riscv/cppc.c b/drivers/acpi/riscv/cppc.c
+index 440cf9fb91aabb..42c1a90524707f 100644
+--- a/drivers/acpi/riscv/cppc.c
++++ b/drivers/acpi/riscv/cppc.c
+@@ -119,7 +119,7 @@ int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
+ 
+ 		*val = data.ret.value;
+ 
+-		return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0;
++		return data.ret.error;
+ 	}
+ 
+ 	return -EINVAL;
+@@ -148,7 +148,7 @@ int cpc_write_ffh(int cpu, struct cpc_reg *reg, u64 val)
+ 
+ 		smp_call_function_single(cpu, cppc_ffh_csr_write, &data, 1);
+ 
+-		return (data.ret.error) ? sbi_err_map_linux_errno(data.ret.error) : 0;
++		return data.ret.error;
+ 	}
+ 
+ 	return -EINVAL;
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index f7d8c3c00655a8..2fef08254d78d9 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -380,6 +380,28 @@ static const struct file_operations force_devcoredump_fops = {
+ 	.write		= force_devcd_write,
+ };
+ 
++static void vhci_debugfs_init(struct vhci_data *data)
++{
++	struct hci_dev *hdev = data->hdev;
++
++	debugfs_create_file("force_suspend", 0644, hdev->debugfs, data,
++			    &force_suspend_fops);
++
++	debugfs_create_file("force_wakeup", 0644, hdev->debugfs, data,
++			    &force_wakeup_fops);
++
++	if (IS_ENABLED(CONFIG_BT_MSFTEXT))
++		debugfs_create_file("msft_opcode", 0644, hdev->debugfs, data,
++				    &msft_opcode_fops);
++
++	if (IS_ENABLED(CONFIG_BT_AOSPEXT))
++		debugfs_create_file("aosp_capable", 0644, hdev->debugfs, data,
++				    &aosp_capable_fops);
++
++	debugfs_create_file("force_devcoredump", 0644, hdev->debugfs, data,
++			    &force_devcoredump_fops);
++}
++
+ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
+ {
+ 	struct hci_dev *hdev;
+@@ -434,22 +456,8 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
+ 		return -EBUSY;
+ 	}
+ 
+-	debugfs_create_file("force_suspend", 0644, hdev->debugfs, data,
+-			    &force_suspend_fops);
+-
+-	debugfs_create_file("force_wakeup", 0644, hdev->debugfs, data,
+-			    &force_wakeup_fops);
+-
+-	if (IS_ENABLED(CONFIG_BT_MSFTEXT))
+-		debugfs_create_file("msft_opcode", 0644, hdev->debugfs, data,
+-				    &msft_opcode_fops);
+-
+-	if (IS_ENABLED(CONFIG_BT_AOSPEXT))
+-		debugfs_create_file("aosp_capable", 0644, hdev->debugfs, data,
+-				    &aosp_capable_fops);
+-
+-	debugfs_create_file("force_devcoredump", 0644, hdev->debugfs, data,
+-			    &force_devcoredump_fops);
++	if (!IS_ERR_OR_NULL(hdev->debugfs))
++		vhci_debugfs_init(data);
+ 
+ 	hci_skb_pkt_type(skb) = HCI_VENDOR_PKT;
+ 
+@@ -651,6 +659,21 @@ static int vhci_open(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
++static void vhci_debugfs_remove(struct hci_dev *hdev)
++{
++	debugfs_lookup_and_remove("force_suspend", hdev->debugfs);
++
++	debugfs_lookup_and_remove("force_wakeup", hdev->debugfs);
++
++	if (IS_ENABLED(CONFIG_BT_MSFTEXT))
++		debugfs_lookup_and_remove("msft_opcode", hdev->debugfs);
++
++	if (IS_ENABLED(CONFIG_BT_AOSPEXT))
++		debugfs_lookup_and_remove("aosp_capable", hdev->debugfs);
++
++	debugfs_lookup_and_remove("force_devcoredump", hdev->debugfs);
++}
++
+ static int vhci_release(struct inode *inode, struct file *file)
+ {
+ 	struct vhci_data *data = file->private_data;
+@@ -662,6 +685,8 @@ static int vhci_release(struct inode *inode, struct file *file)
+ 	hdev = data->hdev;
+ 
+ 	if (hdev) {
++		if (!IS_ERR_OR_NULL(hdev->debugfs))
++			vhci_debugfs_remove(hdev);
+ 		hci_unregister_dev(hdev);
+ 		hci_free_dev(hdev);
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 7d8b98aa5271c2..f9ceda7861f1b1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -447,7 +447,7 @@ static int psp_sw_init(struct amdgpu_ip_block *ip_block)
+ 	psp->cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
+ 	if (!psp->cmd) {
+ 		dev_err(adev->dev, "Failed to allocate memory to command buffer!\n");
+-		ret = -ENOMEM;
++		return -ENOMEM;
+ 	}
+ 
+ 	adev->psp.xgmi_context.supports_extended_data =
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+index bf7c22f81cda34..ba73518f5cdf36 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+@@ -1462,17 +1462,12 @@ static int dce_v10_0_audio_init(struct amdgpu_device *adev)
+ 
+ static void dce_v10_0_audio_fini(struct amdgpu_device *adev)
+ {
+-	int i;
+-
+ 	if (!amdgpu_audio)
+ 		return;
+ 
+ 	if (!adev->mode_info.audio.enabled)
+ 		return;
+ 
+-	for (i = 0; i < adev->mode_info.audio.num_pins; i++)
+-		dce_v10_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
+-
+ 	adev->mode_info.audio.enabled = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+index 47e05783c4a0e3..b01d88d078fa2b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+@@ -1511,17 +1511,12 @@ static int dce_v11_0_audio_init(struct amdgpu_device *adev)
+ 
+ static void dce_v11_0_audio_fini(struct amdgpu_device *adev)
+ {
+-	int i;
+-
+ 	if (!amdgpu_audio)
+ 		return;
+ 
+ 	if (!adev->mode_info.audio.enabled)
+ 		return;
+ 
+-	for (i = 0; i < adev->mode_info.audio.num_pins; i++)
+-		dce_v11_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
+-
+ 	adev->mode_info.audio.enabled = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
+index 276c025c4c03db..81760a26f2ffca 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
+@@ -1451,17 +1451,12 @@ static int dce_v6_0_audio_init(struct amdgpu_device *adev)
+ 
+ static void dce_v6_0_audio_fini(struct amdgpu_device *adev)
+ {
+-	int i;
+-
+ 	if (!amdgpu_audio)
+ 		return;
+ 
+ 	if (!adev->mode_info.audio.enabled)
+ 		return;
+ 
+-	for (i = 0; i < adev->mode_info.audio.num_pins; i++)
+-		dce_v6_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
+-
+ 	adev->mode_info.audio.enabled = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+index e62ccf9eb73de5..19a265bd4d1966 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+@@ -1443,17 +1443,12 @@ static int dce_v8_0_audio_init(struct amdgpu_device *adev)
+ 
+ static void dce_v8_0_audio_fini(struct amdgpu_device *adev)
+ {
+-	int i;
+-
+ 	if (!amdgpu_audio)
+ 		return;
+ 
+ 	if (!adev->mode_info.audio.enabled)
+ 		return;
+ 
+-	for (i = 0; i < adev->mode_info.audio.num_pins; i++)
+-		dce_v8_0_audio_enable(adev, &adev->mode_info.audio.pin[i], false);
+-
+ 	adev->mode_info.audio.enabled = false;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 28eb846280dd40..3f6a828cad8ad8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -641,8 +641,9 @@ static int mes_v11_0_misc_op(struct amdgpu_mes *mes,
+ 		break;
+ 	case MES_MISC_OP_CHANGE_CONFIG:
+ 		if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) < 0x63) {
+-			dev_err(mes->adev->dev, "MES FW version must be larger than 0x63 to support limit single process feature.\n");
+-			return -EINVAL;
++			dev_warn_once(mes->adev->dev,
++				      "MES FW version must be larger than 0x63 to support limit single process feature.\n");
++			return 0;
+ 		}
+ 		misc_pkt.opcode = MESAPI_MISC__CHANGE_CONFIG;
+ 		misc_pkt.change_config.opcode =
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+index 041bca58add556..9ac52eb6f59360 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+@@ -1376,15 +1376,15 @@ static int sdma_v6_0_sw_init(struct amdgpu_ip_block *ip_block)
+ 
+ 	switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
+ 	case IP_VERSION(6, 0, 0):
+-		if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq)
++		if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq)
+ 			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+ 		break;
+ 	case IP_VERSION(6, 0, 2):
+-		if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq)
++		if ((adev->sdma.instance[0].fw_version >= 23) && !adev->sdma.disable_uq)
+ 			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+ 		break;
+ 	case IP_VERSION(6, 0, 3):
+-		if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq)
++		if ((adev->sdma.instance[0].fw_version >= 27) && !adev->sdma.disable_uq)
+ 			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+ 		break;
+ 	default:
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+index 4a9d07c31bc5b1..0c50fe266c8a16 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+@@ -896,13 +896,13 @@ void dce110_link_encoder_construct(
+ 						enc110->base.id, &bp_cap_info);
+ 
+ 	/* Override features with DCE-specific values */
+-	if (BP_RESULT_OK == result) {
++	if (result == BP_RESULT_OK) {
+ 		enc110->base.features.flags.bits.IS_HBR2_CAPABLE =
+ 				bp_cap_info.DP_HBR2_EN;
+ 		enc110->base.features.flags.bits.IS_HBR3_CAPABLE =
+ 				bp_cap_info.DP_HBR3_EN;
+ 		enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
+-	} else {
++	} else if (result != BP_RESULT_NORECORD) {
+ 		DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
+ 				__func__,
+ 				result);
+@@ -1798,13 +1798,13 @@ void dce60_link_encoder_construct(
+ 						enc110->base.id, &bp_cap_info);
+ 
+ 	/* Override features with DCE-specific values */
+-	if (BP_RESULT_OK == result) {
++	if (result == BP_RESULT_OK) {
+ 		enc110->base.features.flags.bits.IS_HBR2_CAPABLE =
+ 				bp_cap_info.DP_HBR2_EN;
+ 		enc110->base.features.flags.bits.IS_HBR3_CAPABLE =
+ 				bp_cap_info.DP_HBR3_EN;
+ 		enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
+-	} else {
++	} else if (result != BP_RESULT_NORECORD) {
+ 		DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
+ 				__func__,
+ 				result);
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+index 75fb77bca83ba2..01480a04f85ef5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+@@ -520,6 +520,15 @@ void dpp1_dppclk_control(
+ 		REG_UPDATE(DPP_CONTROL, DPP_CLOCK_ENABLE, 0);
+ }
+ 
++void dpp_force_disable_cursor(struct dpp *dpp_base)
++{
++	struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base);
++
++	/* Force disable cursor */
++	REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, 0);
++	dpp_base->pos.cur0_ctl.bits.cur0_enable = 0;
++}
++
+ static const struct dpp_funcs dcn10_dpp_funcs = {
+ 		.dpp_read_state = dpp_read_state,
+ 		.dpp_reset = dpp_reset,
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.h b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.h
+index c48139bed11f51..f466182963f756 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.h
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.h
+@@ -1525,4 +1525,6 @@ void dpp1_construct(struct dcn10_dpp *dpp1,
+ 
+ void dpp1_cm_get_gamut_remap(struct dpp *dpp_base,
+ 			     struct dpp_grph_csc_adjustment *adjust);
++void dpp_force_disable_cursor(struct dpp *dpp_base);
++
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
+index 2d70586cef4027..09be2a90cc79dc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
+@@ -1494,6 +1494,7 @@ static struct dpp_funcs dcn30_dpp_funcs = {
+ 	.dpp_dppclk_control		= dpp1_dppclk_control,
+ 	.dpp_set_hdr_multiplier		= dpp3_set_hdr_multiplier,
+ 	.dpp_get_gamut_remap		= dpp3_cm_get_gamut_remap,
++	.dpp_force_disable_cursor	= dpp_force_disable_cursor,
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+index e68f21fd5f0fb4..56098453395055 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+@@ -528,3 +528,75 @@ void dcn314_disable_link_output(struct dc_link *link,
+ 
+ 	apply_symclk_on_tx_off_wa(link);
+ }
++
++/**
++ * dcn314_dpp_pg_control - DPP power gate control.
++ *
++ * @hws: dce_hwseq reference.
++ * @dpp_inst: DPP instance reference.
++ * @power_on: true to power the DPP instance on (ungate it), false to gate it.
++ *
++ * Enable or disable the power gate of the given DPP instance. If power
++ * gating is disabled, a power-off request force-disables the cursor instead.
++ */
++void dcn314_dpp_pg_control(
++		struct dce_hwseq *hws,
++		unsigned int dpp_inst,
++		bool power_on)
++{
++	uint32_t power_gate = power_on ? 0 : 1;
++	uint32_t pwr_status = power_on ? 0 : 2;
++
++
++	if (hws->ctx->dc->debug.disable_dpp_power_gate) {
++		/* Workaround for DCN314 with disabled power gating */
++		if (!power_on) {
++
++			/* Force disable cursor if power gating is disabled */
++			struct dpp *dpp = hws->ctx->dc->res_pool->dpps[dpp_inst];
++			if (dpp && dpp->funcs->dpp_force_disable_cursor)
++				dpp->funcs->dpp_force_disable_cursor(dpp);
++		}
++		return;
++	}
++	if (REG(DOMAIN1_PG_CONFIG) == 0)
++		return;
++
++	switch (dpp_inst) {
++	case 0: /* DPP0 */
++		REG_UPDATE(DOMAIN1_PG_CONFIG,
++				DOMAIN1_POWER_GATE, power_gate);
++
++		REG_WAIT(DOMAIN1_PG_STATUS,
++				DOMAIN1_PGFSM_PWR_STATUS, pwr_status,
++				1, 1000);
++		break;
++	case 1: /* DPP1 */
++		REG_UPDATE(DOMAIN3_PG_CONFIG,
++				DOMAIN3_POWER_GATE, power_gate);
++
++		REG_WAIT(DOMAIN3_PG_STATUS,
++				DOMAIN3_PGFSM_PWR_STATUS, pwr_status,
++				1, 1000);
++		break;
++	case 2: /* DPP2 */
++		REG_UPDATE(DOMAIN5_PG_CONFIG,
++				DOMAIN5_POWER_GATE, power_gate);
++
++		REG_WAIT(DOMAIN5_PG_STATUS,
++				DOMAIN5_PGFSM_PWR_STATUS, pwr_status,
++				1, 1000);
++		break;
++	case 3: /* DPP3 */
++		REG_UPDATE(DOMAIN7_PG_CONFIG,
++				DOMAIN7_POWER_GATE, power_gate);
++
++		REG_WAIT(DOMAIN7_PG_STATUS,
++				DOMAIN7_PGFSM_PWR_STATUS, pwr_status,
++				1, 1000);
++		break;
++	default:
++		BREAK_TO_DEBUGGER();
++		break;
++	}
++}
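
Later in this patch the function above is wired into dcn314_private_funcs as
.dpp_pg_control; a hypothetical call through that hook (a sketch, assuming the
usual dce_hwseq private-funcs layout, not code from this patch) looks like:

	/* hws and inst come from the display-core context; call site is
	 * hypothetical, the hook name is the one registered below */
	if (hws->funcs.dpp_pg_control)
		hws->funcs.dpp_pg_control(hws, inst, false);	/* gate DPP "inst" */

With the disable_dpp_power_gate debug option set, the power-off request cannot
actually gate the domain, so the function instead force-disables the cursor,
presumably to keep a stale cursor from persisting on the always-on DPP.
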
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.h
+index 2305ad282f218b..6c072d0274ea37 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.h
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.h
+@@ -47,4 +47,6 @@ void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst,
+ 
+ void dcn314_disable_link_output(struct dc_link *link, const struct link_resource *link_res, enum signal_type signal);
+ 
++void dcn314_dpp_pg_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool power_on);
++
+ #endif /* __DC_HWSS_DCN314_H__ */
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
+index f5112742edf9b4..9f454fa90e65db 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
+@@ -141,6 +141,7 @@ static const struct hwseq_private_funcs dcn314_private_funcs = {
+ 	.enable_power_gating_plane = dcn314_enable_power_gating_plane,
+ 	.dpp_root_clock_control = dcn314_dpp_root_clock_control,
+ 	.hubp_pg_control = dcn31_hubp_pg_control,
++	.dpp_pg_control = dcn314_dpp_pg_control,
+ 	.program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
+ 	.update_odm = dcn314_update_odm,
+ 	.dsc_pg_control = dcn314_dsc_pg_control,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
+index 0c5675d1c59368..1b7c085dc2cc1e 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h
+@@ -349,6 +349,9 @@ struct dpp_funcs {
+ 		struct dpp *dpp_base,
+ 		enum dc_color_space color_space,
+ 		struct dc_csc_transform cursor_csc_color_matrix);
++
++	void (*dpp_force_disable_cursor)(struct dpp *dpp_base);
++
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 834b42a4d31f8d..4545ac42778abc 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -392,6 +392,17 @@ static int __maybe_unused ti_sn65dsi86_resume(struct device *dev)
+ 
+ 	gpiod_set_value_cansleep(pdata->enable_gpio, 1);
+ 
++	/*
++	 * After EN is deasserted and an external clock is detected, the bridge
++	 * will sample GPIO3:1 to determine its frequency. The driver will
++	 * overwrite this setting in ti_sn_bridge_set_refclk_freq(). But this is
++	 * racy. Thus we have to wait a couple of microseconds. According to the
++	 * datasheet the GPIO lines have to be stable for at least 5 us (td5), but
++	 * that seems insufficient: the refclk frequency value is still lost or
++	 * overwritten by the bridge itself. Waiting for 20us seems to work.
++	 */
++	usleep_range(20, 30);
++
+ 	/*
+ 	 * If we have a reference clock we can enable communication w/ the
+ 	 * panel (including the aux channel) w/out any need for an input clock
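
The comment documents a wait deliberately longer than the datasheet's td5
minimum. A condensed ordering sketch, using only names that appear in this
hunk, of where the delay has to sit:

	gpiod_set_value_cansleep(pdata->enable_gpio, 1);	/* deassert EN */
	usleep_range(20, 30);		/* let the bridge sample GPIO3:1 (>= td5, with margin) */
	/* only now is it safe to program the refclk via ti_sn_bridge_set_refclk_freq() */
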
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index f2a6559a271008..ea78c6c8ca7a63 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -725,7 +725,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ 	 * monitor doesn't power down exactly after the throw away read.
+ 	 */
+ 	if (!aux->is_remote) {
+-		ret = drm_dp_dpcd_probe(aux, DP_DPCD_REV);
++		ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/gv100_fence.c b/drivers/gpu/drm/nouveau/gv100_fence.c
+index cccdeca72002e0..317e516c4ec729 100644
+--- a/drivers/gpu/drm/nouveau/gv100_fence.c
++++ b/drivers/gpu/drm/nouveau/gv100_fence.c
+@@ -18,7 +18,7 @@ gv100_fence_emit32(struct nouveau_channel *chan, u64 virtual, u32 sequence)
+ 	struct nvif_push *push = &chan->chan.push;
+ 	int ret;
+ 
+-	ret = PUSH_WAIT(push, 8);
++	ret = PUSH_WAIT(push, 13);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -32,6 +32,11 @@ gv100_fence_emit32(struct nouveau_channel *chan, u64 virtual, u32 sequence)
+ 		  NVDEF(NVC36F, SEM_EXECUTE, PAYLOAD_SIZE, 32BIT) |
+ 		  NVDEF(NVC36F, SEM_EXECUTE, RELEASE_TIMESTAMP, DIS));
+ 
++	PUSH_MTHD(push, NVC36F, MEM_OP_A, 0,
++				MEM_OP_B, 0,
++				MEM_OP_C, NVDEF(NVC36F, MEM_OP_C, MEMBAR_TYPE, SYS_MEMBAR),
++				MEM_OP_D, NVDEF(NVC36F, MEM_OP_D, OPERATION, MEMBAR));
++
+ 	PUSH_MTHD(push, NVC36F, NON_STALL_INTERRUPT, 0);
+ 
+ 	PUSH_KICK(push);
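
The PUSH_WAIT budget grows from 8 to 13 dwords, which matches the new method:
one header plus four data words (MEM_OP_A through MEM_OP_D) presumably costs
five dwords in the pushbuffer:

	original gv100_fence_emit32 body	 8 dwords
	MEM_OP_A..D method header		 1 dword
	MEM_OP_A..D payload			 4 dwords
	new total				13 dwords

The MEMBAR itself appears intended to order the semaphore release against
system memory before the non-stall interrupt is raised.
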
+diff --git a/drivers/gpu/drm/nouveau/include/nvhw/class/clc36f.h b/drivers/gpu/drm/nouveau/include/nvhw/class/clc36f.h
+index 8735dda4c8a714..338f74b9f501ea 100644
+--- a/drivers/gpu/drm/nouveau/include/nvhw/class/clc36f.h
++++ b/drivers/gpu/drm/nouveau/include/nvhw/class/clc36f.h
+@@ -7,6 +7,91 @@
+ 
+ #define NVC36F_NON_STALL_INTERRUPT                                 (0x00000020)
+ #define NVC36F_NON_STALL_INTERRUPT_HANDLE                                 31:0
++// NOTE - MEM_OP_A and MEM_OP_B have been replaced in gp100 with methods for
++// specifying the page address for a targeted TLB invalidate and the uTLB for
++// a targeted REPLAY_CANCEL for UVM.
++// The previous MEM_OP_A/B functionality is in MEM_OP_C/D, with slightly
++// rearranged fields.
++#define NVC36F_MEM_OP_A                                            (0x00000028)
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_CLIENT_UNIT_ID        5:0  // only relevant for REPLAY_CANCEL_TARGETED
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_INVALIDATION_SIZE                   5:0  // Used to specify size of invalidate, used for invalidates which are not of the REPLAY_CANCEL_TARGETED type
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_TARGET_GPC_ID               10:6  // only relevant for REPLAY_CANCEL_TARGETED
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_CANCEL_MMU_ENGINE_ID                6:0  // only relevant for REPLAY_CANCEL_VA_GLOBAL
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR                         11:11
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_EN                 0x00000001
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_SYSMEMBAR_DIS                0x00000000
++#define NVC36F_MEM_OP_A_TLB_INVALIDATE_TARGET_ADDR_LO                    31:12
++#define NVC36F_MEM_OP_B                                            (0x0000002c)
++#define NVC36F_MEM_OP_B_TLB_INVALIDATE_TARGET_ADDR_HI                     31:0
++#define NVC36F_MEM_OP_C                                            (0x00000030)
++#define NVC36F_MEM_OP_C_MEMBAR_TYPE                                        2:0
++#define NVC36F_MEM_OP_C_MEMBAR_TYPE_SYS_MEMBAR                      0x00000000
++#define NVC36F_MEM_OP_C_MEMBAR_TYPE_MEMBAR                          0x00000001
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB                                 0:0
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ONE                      0x00000000
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ALL                      0x00000001  // Probably nonsensical for MMU_TLB_INVALIDATE_TARGETED
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC                                 1:1
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_ENABLE                   0x00000000
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_GPC_DISABLE                  0x00000001
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY                              4:2  // only relevant if GPC ENABLE
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_NONE                  0x00000000
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START                 0x00000001
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_START_ACK_ALL         0x00000002
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_TARGETED       0x00000003
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_GLOBAL         0x00000004
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_REPLAY_CANCEL_VA_GLOBAL      0x00000005
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE                            6:5  // only relevant if GPC ENABLE
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_NONE                0x00000000
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_GLOBALLY            0x00000001
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACK_TYPE_INTRANODE           0x00000002
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE                         9:7 //only relevant for REPLAY_CANCEL_VA_GLOBAL
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_READ                 0
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE                1
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_STRONG        2
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_RSVRVD               3
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_WEAK          4
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ATOMIC_ALL           5
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_WRITE_AND_ATOMIC     6
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_ACCESS_TYPE_VIRT_ALL                  7
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL                    9:7  // Invalidate affects this level and all below
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_ALL         0x00000000  // Invalidate tlb caches at all levels of the page table
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_PTE_ONLY    0x00000001
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE0  0x00000002
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE1  0x00000003
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE2  0x00000004
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE3  0x00000005
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE4  0x00000006
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PAGE_TABLE_LEVEL_UP_TO_PDE5  0x00000007
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE                          11:10  // only relevant if PDB_ONE
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_VID_MEM             0x00000000
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_COHERENT    0x00000002
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_APERTURE_SYS_MEM_NONCOHERENT 0x00000003
++#define NVC36F_MEM_OP_C_TLB_INVALIDATE_PDB_ADDR_LO                       31:12  // only relevant if PDB_ONE
++#define NVC36F_MEM_OP_C_ACCESS_COUNTER_CLR_TARGETED_NOTIFY_TAG            19:0
++// MEM_OP_D MUST be preceded by MEM_OPs A-C.
++#define NVC36F_MEM_OP_D                                            (0x00000034)
++#define NVC36F_MEM_OP_D_TLB_INVALIDATE_PDB_ADDR_HI                        26:0  // only relevant if PDB_ONE
++#define NVC36F_MEM_OP_D_OPERATION                                        31:27
++#define NVC36F_MEM_OP_D_OPERATION_MEMBAR                            0x00000005
++#define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE                0x00000009
++#define NVC36F_MEM_OP_D_OPERATION_MMU_TLB_INVALIDATE_TARGETED       0x0000000a
++#define NVC36F_MEM_OP_D_OPERATION_L2_PEERMEM_INVALIDATE             0x0000000d
++#define NVC36F_MEM_OP_D_OPERATION_L2_SYSMEM_INVALIDATE              0x0000000e
++// CLEAN_LINES is an alias for Tegra/GPU IP usage
++#define NVC36F_MEM_OP_B_OPERATION_L2_INVALIDATE_CLEAN_LINES         0x0000000e
++#define NVC36F_MEM_OP_D_OPERATION_L2_CLEAN_COMPTAGS                 0x0000000f
++#define NVC36F_MEM_OP_D_OPERATION_L2_FLUSH_DIRTY                    0x00000010
++#define NVC36F_MEM_OP_D_OPERATION_L2_WAIT_FOR_SYS_PENDING_READS     0x00000015
++#define NVC36F_MEM_OP_D_OPERATION_ACCESS_COUNTER_CLR                0x00000016
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE                            1:0
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MIMC                0x00000000
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_MOMC                0x00000001
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_ALL                 0x00000002
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TYPE_TARGETED            0x00000003
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE                   2:2
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MIMC       0x00000000
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_TYPE_MOMC       0x00000001
++#define NVC36F_MEM_OP_D_ACCESS_COUNTER_CLR_TARGETED_BANK                   6:3
+ #define NVC36F_SEM_ADDR_LO                                         (0x0000005c)
+ #define NVC36F_SEM_ADDR_LO_OFFSET                                         31:2
+ #define NVC36F_SEM_ADDR_HI                                         (0x00000060)
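
In this header, pairs like 31:27 or 9:7 name bit ranges (high:low) within a
32-bit method word; the 0x... values to their right are field values, not
masks. A runnable sketch of decoding one field under that reading:

	#include <stdint.h>
	#include <stdio.h>

	/* hi:lo extraction; widths in this header are all well below 32 bits */
	static uint32_t field_get(uint32_t word, unsigned int hi, unsigned int lo)
	{
		return (word >> lo) & ((1u << (hi - lo + 1)) - 1);
	}

	int main(void)
	{
		uint32_t mem_op_d = 0x5u << 27;	/* OPERATION (31:27) = MEMBAR */

		printf("operation = 0x%x\n", field_get(mem_op_d, 31, 27));
		return 0;
	}
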
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
+index fdffa0391b31c0..6fd4e60634fbe4 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
+@@ -350,6 +350,8 @@ nvkm_fifo_dtor(struct nvkm_engine *engine)
+ 	nvkm_chid_unref(&fifo->chid);
+ 
+ 	nvkm_event_fini(&fifo->nonstall.event);
++	if (fifo->func->nonstall_dtor)
++		fifo->func->nonstall_dtor(fifo);
+ 	mutex_destroy(&fifo->mutex);
+ 
+ 	if (fifo->func->dtor)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
+index e74493a4569edb..6848a56f20c076 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.c
+@@ -517,19 +517,11 @@ ga100_fifo_nonstall_intr(struct nvkm_inth *inth)
+ static void
+ ga100_fifo_nonstall_block(struct nvkm_event *event, int type, int index)
+ {
+-	struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event);
+-	struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0);
+-
+-	nvkm_inth_block(&runl->nonstall.inth);
+ }
+ 
+ static void
+ ga100_fifo_nonstall_allow(struct nvkm_event *event, int type, int index)
+ {
+-	struct nvkm_fifo *fifo = container_of(event, typeof(*fifo), nonstall.event);
+-	struct nvkm_runl *runl = nvkm_runl_get(fifo, index, 0);
+-
+-	nvkm_inth_allow(&runl->nonstall.inth);
+ }
+ 
+ const struct nvkm_event_func
+@@ -564,12 +556,26 @@ ga100_fifo_nonstall_ctor(struct nvkm_fifo *fifo)
+ 		if (ret)
+ 			return ret;
+ 
++		nvkm_inth_allow(&runl->nonstall.inth);
++
+ 		nr = max(nr, runl->id + 1);
+ 	}
+ 
+ 	return nr;
+ }
+ 
++void
++ga100_fifo_nonstall_dtor(struct nvkm_fifo *fifo)
++{
++	struct nvkm_runl *runl;
++
++	nvkm_runl_foreach(runl, fifo) {
++		if (runl->nonstall.vector < 0)
++			continue;
++		nvkm_inth_block(&runl->nonstall.inth);
++	}
++}
++
+ int
+ ga100_fifo_runl_ctor(struct nvkm_fifo *fifo)
+ {
+@@ -599,6 +605,7 @@ ga100_fifo = {
+ 	.runl_ctor = ga100_fifo_runl_ctor,
+ 	.mmu_fault = &tu102_fifo_mmu_fault,
+ 	.nonstall_ctor = ga100_fifo_nonstall_ctor,
++	.nonstall_dtor = ga100_fifo_nonstall_dtor,
+ 	.nonstall = &ga100_fifo_nonstall,
+ 	.runl = &ga100_runl,
+ 	.runq = &ga100_runq,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
+index 755235f55b3aca..18a0b1f4eab76a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.c
+@@ -30,6 +30,7 @@ ga102_fifo = {
+ 	.runl_ctor = ga100_fifo_runl_ctor,
+ 	.mmu_fault = &tu102_fifo_mmu_fault,
+ 	.nonstall_ctor = ga100_fifo_nonstall_ctor,
++	.nonstall_dtor = ga100_fifo_nonstall_dtor,
+ 	.nonstall = &ga100_fifo_nonstall,
+ 	.runl = &ga100_runl,
+ 	.runq = &ga100_runq,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h
+index 5e81ae1953290d..fff1428ef267ba 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/priv.h
+@@ -41,6 +41,7 @@ struct nvkm_fifo_func {
+ 	void (*start)(struct nvkm_fifo *, unsigned long *);
+ 
+ 	int (*nonstall_ctor)(struct nvkm_fifo *);
++	void (*nonstall_dtor)(struct nvkm_fifo *);
+ 	const struct nvkm_event_func *nonstall;
+ 
+ 	const struct nvkm_runl_func *runl;
+@@ -200,6 +201,7 @@ u32 tu102_chan_doorbell_handle(struct nvkm_chan *);
+ 
+ int ga100_fifo_runl_ctor(struct nvkm_fifo *);
+ int ga100_fifo_nonstall_ctor(struct nvkm_fifo *);
++void ga100_fifo_nonstall_dtor(struct nvkm_fifo *);
+ extern const struct nvkm_event_func ga100_fifo_nonstall;
+ extern const struct nvkm_runl_func ga100_runl;
+ extern const struct nvkm_runq_func ga100_runq;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fifo.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fifo.c
+index 1ac5628c5140e6..4ed54b386a60f5 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fifo.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fifo.c
+@@ -601,6 +601,7 @@ r535_fifo_new(const struct nvkm_fifo_func *hw, struct nvkm_device *device,
+ 	rm->chan.func = &r535_chan;
+ 	rm->nonstall = &ga100_fifo_nonstall;
+ 	rm->nonstall_ctor = ga100_fifo_nonstall_ctor;
++	rm->nonstall_dtor = ga100_fifo_nonstall_dtor;
+ 
+ 	return nvkm_fifo_new_(rm, device, type, inst, pfifo);
+ }
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index 186f6452a7d359..b50927a824b402 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -2579,12 +2579,13 @@ static int vop2_win_init(struct vop2 *vop2)
+ }
+ 
+ /*
+- * The window registers are only updated when config done is written.
+- * Until that they read back the old value. As we read-modify-write
+- * these registers mark them as non-volatile. This makes sure we read
+- * the new values from the regmap register cache.
++ * The window and video port registers are only updated when config
++ * done is written. Until that they read back the old value. As we
++ * read-modify-write these registers mark them as non-volatile. This
++ * makes sure we read the new values from the regmap register cache.
+  */
+ static const struct regmap_range vop2_nonvolatile_range[] = {
++	regmap_reg_range(RK3568_VP0_CTRL_BASE, RK3588_VP3_CTRL_BASE + 255),
+ 	regmap_reg_range(0x1000, 0x23ff),
+ };
+ 
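
With the VP control registers added to the non-volatile (cached) range,
read-modify-write cycles observe pending writes instead of stale hardware
values. A minimal sketch of the pattern this enables, with placeholder
arguments ("map", "reg", "mask", "val" are not from this driver):

	/* The read half of the update is served from the regmap cache, so
	 * two updates to the same register before "config done" compose
	 * instead of the second one clobbering the first. */
	regmap_update_bits(map, reg, mask, val);
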
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 74635b444122d7..50326e756f8975 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -803,8 +803,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ 		return ret;
+ 	}
+ 
+-	tt_has_data = ttm && (ttm_tt_is_populated(ttm) ||
+-			      (ttm->page_flags & TTM_TT_FLAG_SWAPPED));
++	tt_has_data = ttm && (ttm_tt_is_populated(ttm) || ttm_tt_is_swapped(ttm));
+ 
+ 	move_lacks_source = !old_mem || (handle_system_ccs ? (!bo->ccs_cleared) :
+ 					 (!mem_type_is_vram(old_mem_type) && !tt_has_data));
+diff --git a/drivers/hwmon/ina238.c b/drivers/hwmon/ina238.c
+index 9a5fd16a4ec2a6..5c90c3e59f80c1 100644
+--- a/drivers/hwmon/ina238.c
++++ b/drivers/hwmon/ina238.c
+@@ -300,7 +300,7 @@ static int ina238_write_in(struct device *dev, u32 attr, int channel,
+ 		regval = clamp_val(val, -163, 163);
+ 		regval = (regval * 1000 * 4) /
+ 			 (INA238_SHUNT_VOLTAGE_LSB * data->gain);
+-		regval = clamp_val(regval, S16_MIN, S16_MAX);
++		regval = clamp_val(regval, S16_MIN, S16_MAX) & 0xffff;
+ 
+ 		switch (attr) {
+ 		case hwmon_in_max:
+@@ -426,9 +426,10 @@ static int ina238_write_power(struct device *dev, u32 attr, long val)
+ 	 * Unsigned postive values. Compared against the 24-bit power register,
+ 	 * lower 8-bits are truncated. Same conversion to/from uW as POWER
+ 	 * register.
++	 * The first clamp_val() is to establish a baseline to avoid overflows.
+ 	 */
+-	regval = clamp_val(val, 0, LONG_MAX);
+-	regval = div_u64(val * 4 * 100 * data->rshunt, data->config->power_calculate_factor *
++	regval = clamp_val(val, 0, LONG_MAX / 2);
++	regval = div_u64(regval * 4 * 100 * data->rshunt, data->config->power_calculate_factor *
+ 			1000ULL * INA238_FIXED_SHUNT * data->gain);
+ 	regval = clamp_val(regval >> 8, 0, U16_MAX);
+ 
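
The comment's "baseline" clamp matters twice over: it bounds hostile input
before any multiplication, and, the actual bug here, the multiply now uses the
clamped regval where it previously used the raw val, which made the clamp dead
code. A runnable userspace sketch of clamp-then-use (constants illustrative):

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		long val = LONG_MAX;	/* worst-case userspace input */
		long regval = val;

		if (regval < 0)
			regval = 0;
		else if (regval > LONG_MAX / 2)
			regval = LONG_MAX / 2;	/* headroom for later scaling */

		/* the buggy form multiplied val; the fixed form multiplies regval */
		printf("scale %ld, not %ld\n", regval, val);
		return 0;
	}
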
+@@ -481,7 +482,7 @@ static int ina238_write_temp(struct device *dev, u32 attr, long val)
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Signed */
+-	regval = clamp_val(val, -40000, 125000);
++	val = clamp_val(val, -40000, 125000);
+ 	regval = div_s64(val * 10000, data->config->temp_lsb) << data->config->temp_shift;
+ 	regval = clamp_val(regval, S16_MIN, S16_MAX) & (0xffff << data->config->temp_shift);
+ 
+diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
+index a5f89aab3fb4d2..c25a54d5b39ad5 100644
+--- a/drivers/hwmon/mlxreg-fan.c
++++ b/drivers/hwmon/mlxreg-fan.c
+@@ -561,15 +561,14 @@ static int mlxreg_fan_cooling_config(struct device *dev, struct mlxreg_fan *fan)
+ 		if (!pwm->connected)
+ 			continue;
+ 		pwm->fan = fan;
++		/* Set minimal PWM speed. */
++		pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY);
+ 		pwm->cdev = devm_thermal_of_cooling_device_register(dev, NULL, mlxreg_fan_name[i],
+ 								    pwm, &mlxreg_fan_cooling_ops);
+ 		if (IS_ERR(pwm->cdev)) {
+ 			dev_err(dev, "Failed to register cooling device\n");
+ 			return PTR_ERR(pwm->cdev);
+ 		}
+-
+-		/* Set minimal PWM speed. */
+-		pwm->last_hwmon_state = MLXREG_FAN_PWM_DUTY2STATE(MLXREG_FAN_MIN_DUTY);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/isdn/mISDN/dsp_hwec.c b/drivers/isdn/mISDN/dsp_hwec.c
+index 0b3f29195330ac..0cd216e28f0090 100644
+--- a/drivers/isdn/mISDN/dsp_hwec.c
++++ b/drivers/isdn/mISDN/dsp_hwec.c
+@@ -51,14 +51,14 @@ void dsp_hwec_enable(struct dsp *dsp, const char *arg)
+ 		goto _do;
+ 
+ 	{
+-		char *dup, *tok, *name, *val;
++		char *dup, *next, *tok, *name, *val;
+ 		int tmp;
+ 
+-		dup = kstrdup(arg, GFP_ATOMIC);
++		dup = next = kstrdup(arg, GFP_ATOMIC);
+ 		if (!dup)
+ 			return;
+ 
+-		while ((tok = strsep(&dup, ","))) {
++		while ((tok = strsep(&next, ","))) {
+ 			if (!strlen(tok))
+ 				continue;
+ 			name = strsep(&tok, "=");
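
The fix keeps a separate cursor for strsep() so the original allocation can
still be freed. A runnable userspace demonstration of the bug class:

	#define _DEFAULT_SOURCE
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(void)
	{
		char *dup, *next, *tok;

		dup = next = strdup("deftaps=32,nlp");
		if (!dup)
			return 1;
		while ((tok = strsep(&next, ",")))	/* advances next, not dup */
			printf("token: %s\n", tok);
		free(dup);	/* dup still points at the start of the allocation */
		return 0;
	}

Passing &dup itself to strsep() leaves dup pointing past the consumed tokens
(or NULL), so the eventual kfree() of dup (outside this hunk's context) would
otherwise receive a moved or NULL pointer.
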
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 8746b22060a7c2..3f355bb85797f8 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9089,6 +9089,11 @@ void md_do_sync(struct md_thread *thread)
+ 	}
+ 
+ 	action = md_sync_action(mddev);
++	if (action == ACTION_FROZEN || action == ACTION_IDLE) {
++		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
++		goto skip;
++	}
++
+ 	desc = md_sync_action_name(action);
+ 	mddev->last_sync_action = action;
+ 
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 6cee738a645ff2..6d52336057946e 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1226,7 +1226,7 @@ static void alloc_behind_master_bio(struct r1bio *r1_bio,
+ 	int i = 0;
+ 	struct bio *behind_bio = NULL;
+ 
+-	behind_bio = bio_alloc_bioset(NULL, vcnt, 0, GFP_NOIO,
++	behind_bio = bio_alloc_bioset(NULL, vcnt, bio->bi_opf, GFP_NOIO,
+ 				      &r1_bio->mddev->bio_set);
+ 
+ 	/* discard op, we don't support writezero/writesame yet */
+diff --git a/drivers/net/dsa/mv88e6xxx/leds.c b/drivers/net/dsa/mv88e6xxx/leds.c
+index 1c88bfaea46ba6..ab3bc645da5665 100644
+--- a/drivers/net/dsa/mv88e6xxx/leds.c
++++ b/drivers/net/dsa/mv88e6xxx/leds.c
+@@ -779,7 +779,8 @@ int mv88e6xxx_port_setup_leds(struct mv88e6xxx_chip *chip, int port)
+ 			continue;
+ 		if (led_num > 1) {
+ 			dev_err(dev, "invalid LED specified port %d\n", port);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto err_put_led;
+ 		}
+ 
+ 		if (led_num == 0)
+@@ -823,17 +824,25 @@ int mv88e6xxx_port_setup_leds(struct mv88e6xxx_chip *chip, int port)
+ 		init_data.devname_mandatory = true;
+ 		init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d:0%d", chip->info->name,
+ 						 port, led_num);
+-		if (!init_data.devicename)
+-			return -ENOMEM;
++		if (!init_data.devicename) {
++			ret = -ENOMEM;
++			goto err_put_led;
++		}
+ 
+ 		ret = devm_led_classdev_register_ext(dev, l, &init_data);
+ 		kfree(init_data.devicename);
+ 
+ 		if (ret) {
+ 			dev_err(dev, "Failed to init LED %d for port %d", led_num, port);
+-			return ret;
++			goto err_put_led;
+ 		}
+ 	}
+ 
++	fwnode_handle_put(leds);
+ 	return 0;
++
++err_put_led:
++	fwnode_handle_put(led);
++	fwnode_handle_put(leds);
++	return ret;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index cb76ab78904fc1..d47c1d81c49b82 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4390,7 +4390,7 @@ static void bnxt_alloc_one_rx_ring_netmem(struct bnxt *bp,
+ 	for (i = 0; i < bp->rx_agg_ring_size; i++) {
+ 		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) {
+ 			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
+-				    ring_nr, i, bp->rx_ring_size);
++				    ring_nr, i, bp->rx_agg_ring_size);
+ 			break;
+ 		}
+ 		prod = NEXT_RX_AGG(prod);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index d949d2ba6cb9fa..f5d7556afb97ef 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1223,12 +1223,13 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
+ {
+ 	struct macb *bp = queue->bp;
+ 	u16 queue_index = queue - bp->queues;
++	unsigned long flags;
+ 	unsigned int tail;
+ 	unsigned int head;
+ 	int packets = 0;
+ 	u32 bytes = 0;
+ 
+-	spin_lock(&queue->tx_ptr_lock);
++	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
+ 	head = queue->tx_head;
+ 	for (tail = queue->tx_tail; tail != head && packets < budget; tail++) {
+ 		struct macb_tx_skb	*tx_skb;
+@@ -1291,7 +1292,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
+ 	    CIRC_CNT(queue->tx_head, queue->tx_tail,
+ 		     bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
+ 		netif_wake_subqueue(bp->dev, queue_index);
+-	spin_unlock(&queue->tx_ptr_lock);
++	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
+ 
+ 	return packets;
+ }
+@@ -1707,8 +1708,9 @@ static void macb_tx_restart(struct macb_queue *queue)
+ {
+ 	struct macb *bp = queue->bp;
+ 	unsigned int head_idx, tbqp;
++	unsigned long flags;
+ 
+-	spin_lock(&queue->tx_ptr_lock);
++	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
+ 
+ 	if (queue->tx_head == queue->tx_tail)
+ 		goto out_tx_ptr_unlock;
+@@ -1720,19 +1722,20 @@ static void macb_tx_restart(struct macb_queue *queue)
+ 	if (tbqp == head_idx)
+ 		goto out_tx_ptr_unlock;
+ 
+-	spin_lock_irq(&bp->lock);
++	spin_lock(&bp->lock);
+ 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
+-	spin_unlock_irq(&bp->lock);
++	spin_unlock(&bp->lock);
+ 
+ out_tx_ptr_unlock:
+-	spin_unlock(&queue->tx_ptr_lock);
++	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
+ }
+ 
+ static bool macb_tx_complete_pending(struct macb_queue *queue)
+ {
+ 	bool retval = false;
++	unsigned long flags;
+ 
+-	spin_lock(&queue->tx_ptr_lock);
++	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
+ 	if (queue->tx_head != queue->tx_tail) {
+ 		/* Make hw descriptor updates visible to CPU */
+ 		rmb();
+@@ -1740,7 +1743,7 @@ static bool macb_tx_complete_pending(struct macb_queue *queue)
+ 		if (macb_tx_desc(queue, queue->tx_tail)->ctrl & MACB_BIT(TX_USED))
+ 			retval = true;
+ 	}
+-	spin_unlock(&queue->tx_ptr_lock);
++	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
+ 	return retval;
+ }
+ 
+@@ -2308,6 +2311,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct macb_queue *queue = &bp->queues[queue_index];
+ 	unsigned int desc_cnt, nr_frags, frag_size, f;
+ 	unsigned int hdrlen;
++	unsigned long flags;
+ 	bool is_lso;
+ 	netdev_tx_t ret = NETDEV_TX_OK;
+ 
+@@ -2368,7 +2372,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		desc_cnt += DIV_ROUND_UP(frag_size, bp->max_tx_length);
+ 	}
+ 
+-	spin_lock_bh(&queue->tx_ptr_lock);
++	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
+ 
+ 	/* This is a hard error, log it. */
+ 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail,
+@@ -2392,15 +2396,15 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index),
+ 			     skb->len);
+ 
+-	spin_lock_irq(&bp->lock);
++	spin_lock(&bp->lock);
+ 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
+-	spin_unlock_irq(&bp->lock);
++	spin_unlock(&bp->lock);
+ 
+ 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
+ 		netif_stop_subqueue(dev, queue_index);
+ 
+ unlock:
+-	spin_unlock_bh(&queue->tx_ptr_lock);
++	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
+ 
+ 	return ret;
+ }
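
These conversions follow a single rule: the outer tx_ptr_lock is now taken
with irqsave everywhere, so interrupts are already disabled inside the section
and the nested bp->lock drops its _irq suffix (spin_unlock_irq() would
unconditionally re-enable interrupts and corrupt the outer critical section).
The resulting nesting, as a sketch:

	unsigned long flags;

	spin_lock_irqsave(&queue->tx_ptr_lock, flags);	/* IRQs off, state saved */
	spin_lock(&bp->lock);				/* plain lock suffices here */
	/* ... kick the hardware ... */
	spin_unlock(&bp->lock);
	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
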
+diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+index 21495b5dce254d..9efb60842ad1fe 100644
+--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
++++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+@@ -1493,13 +1493,17 @@ static int bgx_init_of_phy(struct bgx *bgx)
+ 		 * this cortina phy, for which there is no driver
+ 		 * support, ignore it.
+ 		 */
+-		if (phy_np &&
+-		    !of_device_is_compatible(phy_np, "cortina,cs4223-slice")) {
+-			/* Wait until the phy drivers are available */
+-			pd = of_phy_find_device(phy_np);
+-			if (!pd)
+-				goto defer;
+-			bgx->lmac[lmac].phydev = pd;
++		if (phy_np) {
++			if (!of_device_is_compatible(phy_np, "cortina,cs4223-slice")) {
++				/* Wait until the phy drivers are available */
++				pd = of_phy_find_device(phy_np);
++				if (!pd) {
++					of_node_put(phy_np);
++					goto defer;
++				}
++				bgx->lmac[lmac].phydev = pd;
++			}
++			of_node_put(phy_np);
+ 		}
+ 
+ 		lmac++;
+@@ -1515,11 +1519,11 @@ static int bgx_init_of_phy(struct bgx *bgx)
+ 	 * for phy devices we may have already found.
+ 	 */
+ 	while (lmac) {
++		lmac--;
+ 		if (bgx->lmac[lmac].phydev) {
+ 			put_device(&bgx->lmac[lmac].phydev->mdio.dev);
+ 			bgx->lmac[lmac].phydev = NULL;
+ 		}
+-		lmac--;
+ 	}
+ 	of_node_put(node);
+ 	return -EPROBE_DEFER;
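
The unwind loop above also gains a subtle index fix: decrementing before the
body visits entries lmac-1 down to 0, whereas the old decrement-after version
touched the never-initialised entry [lmac] and skipped entry [0]. The shape of
the corrected loop, with a hypothetical cleanup helper:

	while (lmac) {
		lmac--;			/* first: step down to a valid index */
		put_entry(lmac);	/* hypothetical per-entry cleanup */
	}
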
+diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
+index 9364bc2b4eb155..641a36dd0e604a 100644
+--- a/drivers/net/ethernet/intel/e1000e/ethtool.c
++++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
+@@ -549,12 +549,12 @@ static int e1000_set_eeprom(struct net_device *netdev,
+ {
+ 	struct e1000_adapter *adapter = netdev_priv(netdev);
+ 	struct e1000_hw *hw = &adapter->hw;
++	size_t total_len, max_len;
+ 	u16 *eeprom_buff;
+-	void *ptr;
+-	int max_len;
++	int ret_val = 0;
+ 	int first_word;
+ 	int last_word;
+-	int ret_val = 0;
++	void *ptr;
+ 	u16 i;
+ 
+ 	if (eeprom->len == 0)
+@@ -569,6 +569,10 @@ static int e1000_set_eeprom(struct net_device *netdev,
+ 
+ 	max_len = hw->nvm.word_size * 2;
+ 
++	if (check_add_overflow(eeprom->offset, eeprom->len, &total_len) ||
++	    total_len > max_len)
++		return -EFBIG;
++
+ 	first_word = eeprom->offset >> 1;
+ 	last_word = (eeprom->offset + eeprom->len - 1) >> 1;
+ 	eeprom_buff = kmalloc(max_len, GFP_KERNEL);
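
check_add_overflow() from <linux/overflow.h> stores the sum and returns true
if it wrapped; combined with the max_len comparison it rejects requests whose
offset plus length cannot be represented or exceeds the NVM. A runnable
userspace analogue built on the same compiler primitive:

	#include <stdio.h>

	int main(void)
	{
		unsigned long offset = (unsigned long)-16, len = 32, total;

		/* check_add_overflow() is built on __builtin_add_overflow() */
		if (__builtin_add_overflow(offset, len, &total) || total > 4096)
			printf("rejected: offset + len overflows or exceeds the device\n");
		return 0;
	}
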
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
+index 59263551c3838f..0b099e5f48163d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
+@@ -359,8 +359,8 @@ static void i40e_client_add_instance(struct i40e_pf *pf)
+ 	if (i40e_client_get_params(vsi, &cdev->lan_info.params))
+ 		goto free_cdev;
+ 
+-	mac = list_first_entry(&cdev->lan_info.netdev->dev_addrs.list,
+-			       struct netdev_hw_addr, list);
++	mac = list_first_entry_or_null(&cdev->lan_info.netdev->dev_addrs.list,
++				       struct netdev_hw_addr, list);
+ 	if (mac)
+ 		ether_addr_copy(cdev->lan_info.lanmac, mac->addr);
+ 	else
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+index 6cd9da662ae112..a5c794371dfe60 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+@@ -40,48 +40,6 @@ static struct i40e_vsi *i40e_dbg_find_vsi(struct i40e_pf *pf, int seid)
+  * setup, adding or removing filters, or other things.  Many of
+  * these will be useful for some forms of unit testing.
+  **************************************************************/
+-static char i40e_dbg_command_buf[256] = "";
+-
+-/**
+- * i40e_dbg_command_read - read for command datum
+- * @filp: the opened file
+- * @buffer: where to write the data for the user to read
+- * @count: the size of the user's buffer
+- * @ppos: file position offset
+- **/
+-static ssize_t i40e_dbg_command_read(struct file *filp, char __user *buffer,
+-				     size_t count, loff_t *ppos)
+-{
+-	struct i40e_pf *pf = filp->private_data;
+-	struct i40e_vsi *main_vsi;
+-	int bytes_not_copied;
+-	int buf_size = 256;
+-	char *buf;
+-	int len;
+-
+-	/* don't allow partial reads */
+-	if (*ppos != 0)
+-		return 0;
+-	if (count < buf_size)
+-		return -ENOSPC;
+-
+-	buf = kzalloc(buf_size, GFP_KERNEL);
+-	if (!buf)
+-		return -ENOSPC;
+-
+-	main_vsi = i40e_pf_get_main_vsi(pf);
+-	len = snprintf(buf, buf_size, "%s: %s\n", main_vsi->netdev->name,
+-		       i40e_dbg_command_buf);
+-
+-	bytes_not_copied = copy_to_user(buffer, buf, len);
+-	kfree(buf);
+-
+-	if (bytes_not_copied)
+-		return -EFAULT;
+-
+-	*ppos = len;
+-	return len;
+-}
+ 
+ static char *i40e_filter_state_string[] = {
+ 	"INVALID",
+@@ -1621,7 +1579,6 @@ static ssize_t i40e_dbg_command_write(struct file *filp,
+ static const struct file_operations i40e_dbg_command_fops = {
+ 	.owner = THIS_MODULE,
+ 	.open =  simple_open,
+-	.read =  i40e_dbg_command_read,
+ 	.write = i40e_dbg_command_write,
+ };
+ 
+@@ -1630,48 +1587,6 @@ static const struct file_operations i40e_dbg_command_fops = {
+  * The netdev_ops entry in debugfs is for giving the driver commands
+  * to be executed from the netdev operations.
+  **************************************************************/
+-static char i40e_dbg_netdev_ops_buf[256] = "";
+-
+-/**
+- * i40e_dbg_netdev_ops_read - read for netdev_ops datum
+- * @filp: the opened file
+- * @buffer: where to write the data for the user to read
+- * @count: the size of the user's buffer
+- * @ppos: file position offset
+- **/
+-static ssize_t i40e_dbg_netdev_ops_read(struct file *filp, char __user *buffer,
+-					size_t count, loff_t *ppos)
+-{
+-	struct i40e_pf *pf = filp->private_data;
+-	struct i40e_vsi *main_vsi;
+-	int bytes_not_copied;
+-	int buf_size = 256;
+-	char *buf;
+-	int len;
+-
+-	/* don't allow partal reads */
+-	if (*ppos != 0)
+-		return 0;
+-	if (count < buf_size)
+-		return -ENOSPC;
+-
+-	buf = kzalloc(buf_size, GFP_KERNEL);
+-	if (!buf)
+-		return -ENOSPC;
+-
+-	main_vsi = i40e_pf_get_main_vsi(pf);
+-	len = snprintf(buf, buf_size, "%s: %s\n", main_vsi->netdev->name,
+-		       i40e_dbg_netdev_ops_buf);
+-
+-	bytes_not_copied = copy_to_user(buffer, buf, len);
+-	kfree(buf);
+-
+-	if (bytes_not_copied)
+-		return -EFAULT;
+-
+-	*ppos = len;
+-	return len;
+-}
+ 
+ /**
+  * i40e_dbg_netdev_ops_write - write into netdev_ops datum
+@@ -1685,35 +1600,36 @@ static ssize_t i40e_dbg_netdev_ops_write(struct file *filp,
+ 					 size_t count, loff_t *ppos)
+ {
+ 	struct i40e_pf *pf = filp->private_data;
++	char *cmd_buf, *buf_tmp;
+ 	int bytes_not_copied;
+ 	struct i40e_vsi *vsi;
+-	char *buf_tmp;
+ 	int vsi_seid;
+ 	int i, cnt;
+ 
+ 	/* don't allow partial writes */
+ 	if (*ppos != 0)
+ 		return 0;
+-	if (count >= sizeof(i40e_dbg_netdev_ops_buf))
+-		return -ENOSPC;
+ 
+-	memset(i40e_dbg_netdev_ops_buf, 0, sizeof(i40e_dbg_netdev_ops_buf));
+-	bytes_not_copied = copy_from_user(i40e_dbg_netdev_ops_buf,
+-					  buffer, count);
+-	if (bytes_not_copied)
++	cmd_buf = kzalloc(count + 1, GFP_KERNEL);
++	if (!cmd_buf)
++		return count;
++	bytes_not_copied = copy_from_user(cmd_buf, buffer, count);
++	if (bytes_not_copied) {
++		kfree(cmd_buf);
+ 		return -EFAULT;
+-	i40e_dbg_netdev_ops_buf[count] = '\0';
++	}
++	cmd_buf[count] = '\0';
+ 
+-	buf_tmp = strchr(i40e_dbg_netdev_ops_buf, '\n');
++	buf_tmp = strchr(cmd_buf, '\n');
+ 	if (buf_tmp) {
+ 		*buf_tmp = '\0';
+-		count = buf_tmp - i40e_dbg_netdev_ops_buf + 1;
++		count = buf_tmp - cmd_buf + 1;
+ 	}
+ 
+-	if (strncmp(i40e_dbg_netdev_ops_buf, "change_mtu", 10) == 0) {
++	if (strncmp(cmd_buf, "change_mtu", 10) == 0) {
+ 		int mtu;
+ 
+-		cnt = sscanf(&i40e_dbg_netdev_ops_buf[11], "%i %i",
++		cnt = sscanf(&cmd_buf[11], "%i %i",
+ 			     &vsi_seid, &mtu);
+ 		if (cnt != 2) {
+ 			dev_info(&pf->pdev->dev, "change_mtu <vsi_seid> <mtu>\n");
+@@ -1735,8 +1651,8 @@ static ssize_t i40e_dbg_netdev_ops_write(struct file *filp,
+ 			dev_info(&pf->pdev->dev, "Could not acquire RTNL - please try again\n");
+ 		}
+ 
+-	} else if (strncmp(i40e_dbg_netdev_ops_buf, "set_rx_mode", 11) == 0) {
+-		cnt = sscanf(&i40e_dbg_netdev_ops_buf[11], "%i", &vsi_seid);
++	} else if (strncmp(cmd_buf, "set_rx_mode", 11) == 0) {
++		cnt = sscanf(&cmd_buf[11], "%i", &vsi_seid);
+ 		if (cnt != 1) {
+ 			dev_info(&pf->pdev->dev, "set_rx_mode <vsi_seid>\n");
+ 			goto netdev_ops_write_done;
+@@ -1756,8 +1672,8 @@ static ssize_t i40e_dbg_netdev_ops_write(struct file *filp,
+ 			dev_info(&pf->pdev->dev, "Could not acquire RTNL - please try again\n");
+ 		}
+ 
+-	} else if (strncmp(i40e_dbg_netdev_ops_buf, "napi", 4) == 0) {
+-		cnt = sscanf(&i40e_dbg_netdev_ops_buf[4], "%i", &vsi_seid);
++	} else if (strncmp(cmd_buf, "napi", 4) == 0) {
++		cnt = sscanf(&cmd_buf[4], "%i", &vsi_seid);
+ 		if (cnt != 1) {
+ 			dev_info(&pf->pdev->dev, "napi <vsi_seid>\n");
+ 			goto netdev_ops_write_done;
+@@ -1775,21 +1691,20 @@ static ssize_t i40e_dbg_netdev_ops_write(struct file *filp,
+ 			dev_info(&pf->pdev->dev, "napi called\n");
+ 		}
+ 	} else {
+-		dev_info(&pf->pdev->dev, "unknown command '%s'\n",
+-			 i40e_dbg_netdev_ops_buf);
++		dev_info(&pf->pdev->dev, "unknown command '%s'\n", cmd_buf);
+ 		dev_info(&pf->pdev->dev, "available commands\n");
+ 		dev_info(&pf->pdev->dev, "  change_mtu <vsi_seid> <mtu>\n");
+ 		dev_info(&pf->pdev->dev, "  set_rx_mode <vsi_seid>\n");
+ 		dev_info(&pf->pdev->dev, "  napi <vsi_seid>\n");
+ 	}
+ netdev_ops_write_done:
++	kfree(cmd_buf);
+ 	return count;
+ }
+ 
+ static const struct file_operations i40e_dbg_netdev_ops_fops = {
+ 	.owner = THIS_MODULE,
+ 	.open = simple_open,
+-	.read = i40e_dbg_netdev_ops_read,
+ 	.write = i40e_dbg_netdev_ops_write,
+ };
+ 
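
The rewrite above replaces a 256-byte static buffer, shared by every PF and
silently capping command length, with a per-write heap buffer. The core of the
pattern, trimmed from the hunk:

	cmd_buf = kzalloc(count + 1, GFP_KERNEL);	/* +1 for the NUL */
	if (!cmd_buf)
		return count;
	if (copy_from_user(cmd_buf, buffer, count)) {
		kfree(cmd_buf);
		return -EFAULT;
	}
	cmd_buf[count] = '\0';
	/* ... parse, then kfree(cmd_buf) on every exit path ... */
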
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index d42892c8c5a12c..a34faa4894739a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3172,12 +3172,14 @@ static irqreturn_t ice_ll_ts_intr(int __always_unused irq, void *data)
+ 	hw = &pf->hw;
+ 	tx = &pf->ptp.port.tx;
+ 	spin_lock_irqsave(&tx->lock, flags);
+-	ice_ptp_complete_tx_single_tstamp(tx);
++	if (tx->init) {
++		ice_ptp_complete_tx_single_tstamp(tx);
+ 
+-	idx = find_next_bit_wrap(tx->in_use, tx->len,
+-				 tx->last_ll_ts_idx_read + 1);
+-	if (idx != tx->len)
+-		ice_ptp_req_tx_single_tstamp(tx, idx);
++		idx = find_next_bit_wrap(tx->in_use, tx->len,
++					 tx->last_ll_ts_idx_read + 1);
++		if (idx != tx->len)
++			ice_ptp_req_tx_single_tstamp(tx, idx);
++	}
+ 	spin_unlock_irqrestore(&tx->lock, flags);
+ 
+ 	val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 55cad824c5b9f4..69e05bafb1e371 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -2877,16 +2877,19 @@ irqreturn_t ice_ptp_ts_irq(struct ice_pf *pf)
+ 		 */
+ 		if (hw->dev_caps.ts_dev_info.ts_ll_int_read) {
+ 			struct ice_ptp_tx *tx = &pf->ptp.port.tx;
+-			u8 idx;
++			u8 idx, last;
+ 
+ 			if (!ice_pf_state_is_nominal(pf))
+ 				return IRQ_HANDLED;
+ 
+ 			spin_lock(&tx->lock);
+-			idx = find_next_bit_wrap(tx->in_use, tx->len,
+-						 tx->last_ll_ts_idx_read + 1);
+-			if (idx != tx->len)
+-				ice_ptp_req_tx_single_tstamp(tx, idx);
++			if (tx->init) {
++				last = tx->last_ll_ts_idx_read + 1;
++				idx = find_next_bit_wrap(tx->in_use, tx->len,
++							 last);
++				if (idx != tx->len)
++					ice_ptp_req_tx_single_tstamp(tx, idx);
++			}
+ 			spin_unlock(&tx->lock);
+ 
+ 			return IRQ_HANDLED;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index e9f8da9f7979be..5fc6147ecd93cc 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -2277,6 +2277,7 @@ static int idpf_set_mac(struct net_device *netdev, void *p)
+ 	struct idpf_netdev_priv *np = netdev_priv(netdev);
+ 	struct idpf_vport_config *vport_config;
+ 	struct sockaddr *addr = p;
++	u8 old_mac_addr[ETH_ALEN];
+ 	struct idpf_vport *vport;
+ 	int err = 0;
+ 
+@@ -2300,17 +2301,19 @@ static int idpf_set_mac(struct net_device *netdev, void *p)
+ 	if (ether_addr_equal(netdev->dev_addr, addr->sa_data))
+ 		goto unlock_mutex;
+ 
++	ether_addr_copy(old_mac_addr, vport->default_mac_addr);
++	ether_addr_copy(vport->default_mac_addr, addr->sa_data);
+ 	vport_config = vport->adapter->vport_config[vport->idx];
+ 	err = idpf_add_mac_filter(vport, np, addr->sa_data, false);
+ 	if (err) {
+ 		__idpf_del_mac_filter(vport_config, addr->sa_data);
++		ether_addr_copy(vport->default_mac_addr, netdev->dev_addr);
+ 		goto unlock_mutex;
+ 	}
+ 
+-	if (is_valid_ether_addr(vport->default_mac_addr))
+-		idpf_del_mac_filter(vport, np, vport->default_mac_addr, false);
++	if (is_valid_ether_addr(old_mac_addr))
++		__idpf_del_mac_filter(vport_config, old_mac_addr);
+ 
+-	ether_addr_copy(vport->default_mac_addr, addr->sa_data);
+ 	eth_hw_addr_set(netdev, addr->sa_data);
+ 
+ unlock_mutex:
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index 24febaaa8fbb83..cb9a27307670ee 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -3507,6 +3507,16 @@ u32 idpf_get_vport_id(struct idpf_vport *vport)
+ 	return le32_to_cpu(vport_msg->vport_id);
+ }
+ 
++static void idpf_set_mac_type(struct idpf_vport *vport,
++			      struct virtchnl2_mac_addr *mac_addr)
++{
++	bool is_primary;
++
++	is_primary = ether_addr_equal(vport->default_mac_addr, mac_addr->addr);
++	mac_addr->type = is_primary ? VIRTCHNL2_MAC_ADDR_PRIMARY :
++				      VIRTCHNL2_MAC_ADDR_EXTRA;
++}
++
+ /**
+  * idpf_mac_filter_async_handler - Async callback for mac filters
+  * @adapter: private data struct
+@@ -3636,6 +3646,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
+ 			    list) {
+ 		if (add && f->add) {
+ 			ether_addr_copy(mac_addr[i].addr, f->macaddr);
++			idpf_set_mac_type(vport, &mac_addr[i]);
+ 			i++;
+ 			f->add = false;
+ 			if (i == total_filters)
+@@ -3643,6 +3654,7 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
+ 		}
+ 		if (!add && f->remove) {
+ 			ether_addr_copy(mac_addr[i].addr, f->macaddr);
++			idpf_set_mac_type(vport, &mac_addr[i]);
+ 			i++;
+ 			f->remove = false;
+ 			if (i == total_filters)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+index d8a919ab7027aa..05a1f9f5914fda 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+@@ -3565,13 +3565,13 @@ ixgbe_get_eee_fw(struct ixgbe_adapter *adapter, struct ethtool_keee *edata)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ixgbe_ls_map); ++i) {
+ 		if (hw->phy.eee_speeds_supported & ixgbe_ls_map[i].mac_speed)
+-			linkmode_set_bit(ixgbe_lp_map[i].link_mode,
++			linkmode_set_bit(ixgbe_ls_map[i].link_mode,
+ 					 edata->supported);
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ixgbe_ls_map); ++i) {
+ 		if (hw->phy.eee_speeds_advertised & ixgbe_ls_map[i].mac_speed)
+-			linkmode_set_bit(ixgbe_lp_map[i].link_mode,
++			linkmode_set_bit(ixgbe_ls_map[i].link_mode,
+ 					 edata->advertised);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index b38e4f2de6748e..880f27ca84d42e 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1737,6 +1737,13 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	bool gso = false;
+ 	int tx_num;
+ 
++	if (skb_vlan_tag_present(skb) &&
++	    !eth_proto_is_802_3(eth_hdr(skb)->h_proto)) {
++		skb = __vlan_hwaccel_push_inside(skb);
++		if (!skb)
++			goto dropped;
++	}
++
+ 	/* normally we can rely on the stack not calling this more than once,
+ 	 * however we have 2 queues running on the same ring so we need to lock
+ 	 * the ring access
+@@ -1782,8 +1789,9 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ drop:
+ 	spin_unlock(&eth->page_lock);
+-	stats->tx_dropped++;
+ 	dev_kfree_skb_any(skb);
++dropped:
++	stats->tx_dropped++;
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+index b33285d755b907..a626fd0d20735f 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+@@ -267,8 +267,10 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
+ 	pp.dma_dir = priv->dma_dir;
+ 
+ 	ring->pp = page_pool_create(&pp);
+-	if (!ring->pp)
++	if (IS_ERR(ring->pp)) {
++		err = PTR_ERR(ring->pp);
+ 		goto err_ring;
++	}
+ 
+ 	if (xdp_rxq_info_reg(&ring->xdp_rxq, priv->dev, queue_index, 0) < 0)
+ 		goto err_pp;
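
page_pool_create() signals failure through ERR_PTR(), never NULL, so the old
NULL check could not trigger and the error pointer would later be
dereferenced. The canonical test for APIs of this kind:

	ring->pp = page_pool_create(&pp);
	if (IS_ERR(ring->pp)) {
		err = PTR_ERR(ring->pp);	/* recover the negative errno */
		goto err_ring;
	}
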
+diff --git a/drivers/net/ethernet/microchip/lan865x/lan865x.c b/drivers/net/ethernet/microchip/lan865x/lan865x.c
+index 84c41f19356126..79b800d2b72c28 100644
+--- a/drivers/net/ethernet/microchip/lan865x/lan865x.c
++++ b/drivers/net/ethernet/microchip/lan865x/lan865x.c
+@@ -423,13 +423,16 @@ static void lan865x_remove(struct spi_device *spi)
+ 	free_netdev(priv->netdev);
+ }
+ 
+-static const struct spi_device_id spidev_spi_ids[] = {
++static const struct spi_device_id lan865x_ids[] = {
+ 	{ .name = "lan8650" },
++	{ .name = "lan8651" },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(spi, lan865x_ids);
+ 
+ static const struct of_device_id lan865x_dt_ids[] = {
+ 	{ .compatible = "microchip,lan8650" },
++	{ .compatible = "microchip,lan8651" },
+ 	{ /* Sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, lan865x_dt_ids);
+@@ -441,7 +444,7 @@ static struct spi_driver lan865x_driver = {
+ 	 },
+ 	.probe = lan865x_probe,
+ 	.remove = lan865x_remove,
+-	.id_table = spidev_spi_ids,
++	.id_table = lan865x_ids,
+ };
+ module_spi_driver(lan865x_driver);
+ 
+diff --git a/drivers/net/ethernet/oa_tc6.c b/drivers/net/ethernet/oa_tc6.c
+index db200e4ec284d7..91a906a7918a25 100644
+--- a/drivers/net/ethernet/oa_tc6.c
++++ b/drivers/net/ethernet/oa_tc6.c
+@@ -1249,7 +1249,8 @@ struct oa_tc6 *oa_tc6_init(struct spi_device *spi, struct net_device *netdev)
+ 
+ 	/* Set the SPI controller to pump at realtime priority */
+ 	tc6->spi->rt = true;
+-	spi_setup(tc6->spi);
++	if (spi_setup(tc6->spi) < 0)
++		return NULL;
+ 
+ 	tc6->spi_ctrl_tx_buf = devm_kzalloc(&tc6->spi->dev,
+ 					    OA_TC6_CTRL_SPI_BUF_SIZE,
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 231ca141331f51..dbdbc40109c51d 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1522,7 +1522,7 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
+ 		}
+ 	}
+ 
+-	if (single_port) {
++	if (single_port && num_tx) {
+ 		netif_txq = netdev_get_tx_queue(ndev, chn);
+ 		netdev_tx_completed_queue(netif_txq, num_tx, total_bytes);
+ 		am65_cpsw_nuss_tx_wake(tx_chn, ndev, netif_txq);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 0d8a05fe541afb..ec6d47dc984aa8 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -1168,6 +1168,15 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
+ 						       &meta_max_len);
+ 	dma_unmap_single(lp->dev, skbuf_dma->dma_address, lp->max_frm_size,
+ 			 DMA_FROM_DEVICE);
++
++	if (IS_ERR(app_metadata)) {
++		if (net_ratelimit())
++			netdev_err(lp->ndev, "Failed to get RX metadata pointer\n");
++		dev_kfree_skb_any(skb);
++		lp->ndev->stats.rx_dropped++;
++		goto rx_submit;
++	}
++
+ 	/* TODO: Derive app word index programmatically */
+ 	rx_len = (app_metadata[LEN_APP] & 0xFFFF);
+ 	skb_put(skb, rx_len);
+@@ -1180,6 +1189,7 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
+ 	u64_stats_add(&lp->rx_bytes, rx_len);
+ 	u64_stats_update_end(&lp->rx_stat_sync);
+ 
++rx_submit:
+ 	for (i = 0; i < CIRC_SPACE(lp->rx_ring_head, lp->rx_ring_tail,
+ 				   RX_BUF_NUM_DEFAULT); i++)
+ 		axienet_rx_submit_desc(lp->ndev);
+diff --git a/drivers/net/ethernet/xircom/xirc2ps_cs.c b/drivers/net/ethernet/xircom/xirc2ps_cs.c
+index a31d5d5e65936d..97e88886253f54 100644
+--- a/drivers/net/ethernet/xircom/xirc2ps_cs.c
++++ b/drivers/net/ethernet/xircom/xirc2ps_cs.c
+@@ -1576,7 +1576,7 @@ do_reset(struct net_device *dev, int full)
+ 	    msleep(40);			/* wait 40 msec to let it complete */
+ 	}
+ 	if (full_duplex)
+-	    PutByte(XIRCREG1_ECR, GetByte(XIRCREG1_ECR | FullDuplex));
++	    PutByte(XIRCREG1_ECR, GetByte(XIRCREG1_ECR) | FullDuplex);
+     } else {  /* No MII */
+ 	SelectPage(0);
+ 	value = GetByte(XIRCREG_ESR);	 /* read the ESR */
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 4c75d1fea55271..01329fe7451a12 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1844,7 +1844,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (tb_sa[MACSEC_SA_ATTR_PN]) {
+ 		spin_lock_bh(&rx_sa->lock);
+-		rx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]);
++		rx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]);
+ 		spin_unlock_bh(&rx_sa->lock);
+ 	}
+ 
+@@ -2086,7 +2086,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	spin_lock_bh(&tx_sa->lock);
+-	tx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]);
++	tx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]);
+ 	spin_unlock_bh(&tx_sa->lock);
+ 
+ 	if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
+@@ -2398,7 +2398,7 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		spin_lock_bh(&tx_sa->lock);
+ 		prev_pn = tx_sa->next_pn_halves;
+-		tx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]);
++		tx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]);
+ 		spin_unlock_bh(&tx_sa->lock);
+ 	}
+ 
+@@ -2496,7 +2496,7 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		spin_lock_bh(&rx_sa->lock);
+ 		prev_pn = rx_sa->next_pn_halves;
+-		rx_sa->next_pn = nla_get_u64(tb_sa[MACSEC_SA_ATTR_PN]);
++		rx_sa->next_pn = nla_get_uint(tb_sa[MACSEC_SA_ATTR_PN]);
+ 		spin_unlock_bh(&rx_sa->lock);
+ 	}
+ 
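
nla_get_u64() unconditionally reads 8 bytes, while the nla_get_uint() calls
substituted above size the read from the attribute itself, so a 4-byte
MACSEC_SA_ATTR_PN from userspace is handled rather than over-read. An
approximation of the helper's behaviour (a sketch, not the netlink source):

	#include <net/netlink.h>

	/* width comes from nla_len(), not from an assumed 64-bit payload */
	static u64 sketch_get_uint(const struct nlattr *nla)
	{
		if (nla_len(nla) == sizeof(u32))
			return nla_get_u32(nla);
		return nla_get_u64(nla);
	}
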
+diff --git a/drivers/net/mctp/mctp-usb.c b/drivers/net/mctp/mctp-usb.c
+index 775a386d0aca12..36ccc53b179759 100644
+--- a/drivers/net/mctp/mctp-usb.c
++++ b/drivers/net/mctp/mctp-usb.c
+@@ -183,6 +183,7 @@ static void mctp_usb_in_complete(struct urb *urb)
+ 		struct mctp_usb_hdr *hdr;
+ 		u8 pkt_len; /* length of MCTP packet, no USB header */
+ 
++		skb_reset_mac_header(skb);
+ 		hdr = skb_pull_data(skb, sizeof(*hdr));
+ 		if (!hdr)
+ 			break;
+diff --git a/drivers/net/pcs/pcs-rzn1-miic.c b/drivers/net/pcs/pcs-rzn1-miic.c
+index d79bb9b06cd22f..ce73d9474d5bfd 100644
+--- a/drivers/net/pcs/pcs-rzn1-miic.c
++++ b/drivers/net/pcs/pcs-rzn1-miic.c
+@@ -19,7 +19,7 @@
+ #define MIIC_PRCMD			0x0
+ #define MIIC_ESID_CODE			0x4
+ 
+-#define MIIC_MODCTRL			0x20
++#define MIIC_MODCTRL			0x8
+ #define MIIC_MODCTRL_SW_MODE		GENMASK(4, 0)
+ 
+ #define MIIC_CONVCTRL(port)		(0x100 + (port) * 4)
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index 72847320cb652d..d692df7d975c70 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -456,12 +456,12 @@ static void vsc85xx_dequeue_skb(struct vsc85xx_ptp *ptp)
+ 		*p++ = (reg >> 24) & 0xff;
+ 	}
+ 
+-	len = skb_queue_len(&ptp->tx_queue);
++	len = skb_queue_len_lockless(&ptp->tx_queue);
+ 	if (len < 1)
+ 		return;
+ 
+ 	while (len--) {
+-		skb = __skb_dequeue(&ptp->tx_queue);
++		skb = skb_dequeue(&ptp->tx_queue);
+ 		if (!skb)
+ 			return;
+ 
+@@ -486,7 +486,7 @@ static void vsc85xx_dequeue_skb(struct vsc85xx_ptp *ptp)
+ 		 * packet in the FIFO right now, reschedule it for later
+ 		 * packets.
+ 		 */
+-		__skb_queue_tail(&ptp->tx_queue, skb);
++		skb_queue_tail(&ptp->tx_queue, skb);
+ 	}
+ }
+ 
+@@ -1068,6 +1068,7 @@ static int vsc85xx_hwtstamp(struct mii_timestamper *mii_ts,
+ 	case HWTSTAMP_TX_ON:
+ 		break;
+ 	case HWTSTAMP_TX_OFF:
++		skb_queue_purge(&vsc8531->ptp->tx_queue);
+ 		break;
+ 	default:
+ 		return -ERANGE;
+@@ -1092,9 +1093,6 @@ static int vsc85xx_hwtstamp(struct mii_timestamper *mii_ts,
+ 
+ 	mutex_lock(&vsc8531->ts_lock);
+ 
+-	__skb_queue_purge(&vsc8531->ptp->tx_queue);
+-	__skb_queue_head_init(&vsc8531->ptp->tx_queue);
+-
+ 	/* Disable predictor while configuring the 1588 block */
+ 	val = vsc85xx_ts_read_csr(phydev, PROCESSOR,
+ 				  MSCC_PHY_PTP_INGR_PREDICTOR);
+@@ -1180,9 +1178,7 @@ static void vsc85xx_txtstamp(struct mii_timestamper *mii_ts,
+ 
+ 	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 
+-	mutex_lock(&vsc8531->ts_lock);
+-	__skb_queue_tail(&vsc8531->ptp->tx_queue, skb);
+-	mutex_unlock(&vsc8531->ts_lock);
++	skb_queue_tail(&vsc8531->ptp->tx_queue, skb);
+ 	return;
+ 
+ out:
+@@ -1548,6 +1544,7 @@ void vsc8584_ptp_deinit(struct phy_device *phydev)
+ 	if (vsc8531->ptp->ptp_clock) {
+ 		ptp_clock_unregister(vsc8531->ptp->ptp_clock);
+ 		skb_queue_purge(&vsc8531->rx_skbs_list);
++		skb_queue_purge(&vsc8531->ptp->tx_queue);
+ 	}
+ }
+ 
+@@ -1571,7 +1568,7 @@ irqreturn_t vsc8584_handle_ts_interrupt(struct phy_device *phydev)
+ 	if (rc & VSC85XX_1588_INT_FIFO_ADD) {
+ 		vsc85xx_get_tx_ts(priv->ptp);
+ 	} else if (rc & VSC85XX_1588_INT_FIFO_OVERFLOW) {
+-		__skb_queue_purge(&priv->ptp->tx_queue);
++		skb_queue_purge(&priv->ptp->tx_queue);
+ 		vsc85xx_ts_reset_fifo(phydev);
+ 	}
+ 
+@@ -1591,6 +1588,7 @@ int vsc8584_ptp_probe(struct phy_device *phydev)
+ 	mutex_init(&vsc8531->phc_lock);
+ 	mutex_init(&vsc8531->ts_lock);
+ 	skb_queue_head_init(&vsc8531->rx_skbs_list);
++	skb_queue_head_init(&vsc8531->ptp->tx_queue);
+ 
+ 	/* Retrieve the shared load/save GPIO. Request it as non exclusive as
+ 	 * the same GPIO can be requested by all the PHYs of the same package.
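
Background on the mscc_ptp conversion above: the double-underscore skb queue helpers (__skb_dequeue(), __skb_queue_tail(), __skb_queue_purge()) are lockless and require the caller to hold the queue's spinlock, while the plain variants take that lock themselves, roughly like this (a paraphrased sketch of the convention, not the net/core/skbuff.c source):

    struct sk_buff *skb_dequeue(struct sk_buff_head *list)
    {
        unsigned long flags;
        struct sk_buff *skb;

        /* the locked helper wraps the lockless one in the queue's lock */
        spin_lock_irqsave(&list->lock, flags);
        skb = __skb_dequeue(list);
        spin_unlock_irqrestore(&list->lock, flags);
        return skb;
    }

Because the patch stops taking the driver-private ts_lock around tx_queue (see the vsc85xx_txtstamp() hunk), every queue access has to switch to the internally locked variants, which is what the mechanical __skb_* to skb_* substitutions do.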
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 5e7672d2022c92..bb5343c0392590 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1752,7 +1752,6 @@ pad_compress_skb(struct ppp *ppp, struct sk_buff *skb)
+ 		 */
+ 		if (net_ratelimit())
+ 			netdev_err(ppp->dev, "ppp: compressor dropped pkt\n");
+-		kfree_skb(skb);
+ 		consume_skb(new_skb);
+ 		new_skb = NULL;
+ 	}
+@@ -1854,9 +1853,10 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+ 					   "down - pkt dropped.\n");
+ 			goto drop;
+ 		}
+-		skb = pad_compress_skb(ppp, skb);
+-		if (!skb)
++		new_skb = pad_compress_skb(ppp, skb);
++		if (!new_skb)
+ 			goto drop;
++		skb = new_skb;
+ 	}
+ 
+ 	/*
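
The ppp_generic change above resolves a double free: on a compressor error, pad_compress_skb() freed the skb it was handed, and the caller's drop path then freed the same skb again. After the fix the callee only consumes its input on success, and the caller keeps ownership until a new buffer is actually returned. The rule in miniature, as plain userspace C with invented names:

    #include <stdlib.h>

    /* Returns a new buffer and takes ownership of 'in' on success;
     * returns NULL and leaves 'in' untouched on failure. */
    static char *transform(char *in)
    {
        char *out = malloc(64);
        if (!out) {
            /* BUG in the old code: freeing 'in' here, while the caller's
             * error path frees it again. */
            return NULL;
        }
        free(in);          /* success: ownership transferred */
        return out;
    }

    int main(void)
    {
        char *buf = malloc(32);
        if (!buf)
            return 1;

        char *out = transform(buf);
        if (!out) {
            free(buf);     /* the single cleanup point, as in the fix */
            return 1;
        }
        free(out);
        return 0;
    }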
+diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
+index ea0e5e276cd6d1..5d123df0a866b8 100644
+--- a/drivers/net/usb/cdc_ncm.c
++++ b/drivers/net/usb/cdc_ncm.c
+@@ -2087,6 +2087,13 @@ static const struct usb_device_id cdc_devs[] = {
+ 	  .driver_info = (unsigned long)&wwan_info,
+ 	},
+ 
++	/* Intel modem (label from OEM reads Fibocom L850-GL) */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x8087, 0x095a,
++		USB_CLASS_COMM,
++		USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE),
++	  .driver_info = (unsigned long)&wwan_info,
++	},
++
+ 	/* DisplayLink docking stations */
+ 	{ .match_flags = USB_DEVICE_ID_MATCH_INT_INFO
+ 		| USB_DEVICE_ID_MATCH_VENDOR,
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 97792de896b72f..45cec14d76f62a 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -1445,6 +1445,10 @@ static enum skb_drop_reason vxlan_snoop(struct net_device *dev,
+ 		if (READ_ONCE(f->updated) != now)
+ 			WRITE_ONCE(f->updated, now);
+ 
++		/* Don't override an fdb that uses a nexthop with a learnt entry */
++		if (rcu_access_pointer(f->nh))
++			return SKB_DROP_REASON_VXLAN_ENTRY_EXISTS;
++
+ 		if (likely(vxlan_addr_equal(&rdst->remote_ip, src_ip) &&
+ 			   rdst->remote_ifindex == ifindex))
+ 			return SKB_NOT_DROPPED_YET;
+@@ -1453,10 +1457,6 @@ static enum skb_drop_reason vxlan_snoop(struct net_device *dev,
+ 		if (f->state & (NUD_PERMANENT | NUD_NOARP))
+ 			return SKB_DROP_REASON_VXLAN_ENTRY_EXISTS;
+ 
+-		/* Don't override an fdb with nexthop with a learnt entry */
+-		if (rcu_access_pointer(f->nh))
+-			return SKB_DROP_REASON_VXLAN_ENTRY_EXISTS;
+-
+ 		if (net_ratelimit())
+ 			netdev_info(dev,
+ 				    "%pM migrated from %pIS to %pIS\n",
+@@ -1880,6 +1880,7 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 	n = neigh_lookup(&arp_tbl, &tip, dev);
+ 
+ 	if (n) {
++		struct vxlan_rdst *rdst = NULL;
+ 		struct vxlan_fdb *f;
+ 		struct sk_buff	*reply;
+ 
+@@ -1890,7 +1891,9 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 
+ 		rcu_read_lock();
+ 		f = vxlan_find_mac_tx(vxlan, n->ha, vni);
+-		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
++		if (f)
++			rdst = first_remote_rcu(f);
++		if (rdst && vxlan_addr_any(&rdst->remote_ip)) {
+ 			/* bridge-local neighbor */
+ 			neigh_release(n);
+ 			rcu_read_unlock();
+@@ -2047,6 +2050,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 	n = neigh_lookup(ipv6_stub->nd_tbl, &msg->target, dev);
+ 
+ 	if (n) {
++		struct vxlan_rdst *rdst = NULL;
+ 		struct vxlan_fdb *f;
+ 		struct sk_buff *reply;
+ 
+@@ -2056,7 +2060,9 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 		}
+ 
+ 		f = vxlan_find_mac_tx(vxlan, n->ha, vni);
+-		if (f && vxlan_addr_any(&(first_remote_rcu(f)->remote_ip))) {
++		if (f)
++			rdst = first_remote_rcu(f);
++		if (rdst && vxlan_addr_any(&rdst->remote_ip)) {
+ 			/* bridge-local neighbor */
+ 			neigh_release(n);
+ 			goto out;
+diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
+index d328aed9feefdd..55b84c0cbd65e6 100644
+--- a/drivers/net/vxlan/vxlan_private.h
++++ b/drivers/net/vxlan/vxlan_private.h
+@@ -61,9 +61,7 @@ static inline struct hlist_head *vs_head(struct net *net, __be16 port)
+ 	return &vn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
+ }
+ 
+-/* First remote destination for a forwarding entry.
+- * Guaranteed to be non-NULL because remotes are never deleted.
+- */
++/* First remote destination for a forwarding entry. */
+ static inline struct vxlan_rdst *first_remote_rcu(struct vxlan_fdb *fdb)
+ {
+ 	if (rcu_access_pointer(fdb->nh))
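
With the comment change above, the old promise that first_remote_rcu() never returns NULL is withdrawn: a nexthop-backed FDB entry has no remote to return. Callers must now separate the lookup from the dereference, which is exactly the shape of the arp_reduce() and neigh_reduce() hunks; schematically:

    struct vxlan_rdst *rdst = NULL;

    if (f)                           /* FDB entry found */
        rdst = first_remote_rcu(f);  /* may be NULL for nexthop entries */
    if (rdst && vxlan_addr_any(&rdst->remote_ip)) {
        /* only dereference after the NULL check */
    }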
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 6b2f207975e33a..5d0210953fa307 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -410,6 +410,8 @@ struct ath11k_vif {
+ 	bool do_not_send_tmpl;
+ 	struct ath11k_arp_ns_offload arp_ns_offload;
+ 	struct ath11k_rekey_data rekey_data;
++	u32 num_stations;
++	bool reinstall_group_keys;
+ 
+ 	struct ath11k_reg_tpc_power_info reg_tpc_info;
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 977f370fd6de42..5f6cc763c86acc 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -4317,6 +4317,40 @@ static int ath11k_clear_peer_keys(struct ath11k_vif *arvif,
+ 	return first_errno;
+ }
+ 
++static int ath11k_set_group_keys(struct ath11k_vif *arvif)
++{
++	struct ath11k *ar = arvif->ar;
++	struct ath11k_base *ab = ar->ab;
++	const u8 *addr = arvif->bssid;
++	int i, ret, first_errno = 0;
++	struct ath11k_peer *peer;
++
++	spin_lock_bh(&ab->base_lock);
++	peer = ath11k_peer_find(ab, arvif->vdev_id, addr);
++	spin_unlock_bh(&ab->base_lock);
++
++	if (!peer)
++		return -ENOENT;
++
++	for (i = 0; i < ARRAY_SIZE(peer->keys); i++) {
++		struct ieee80211_key_conf *key = peer->keys[i];
++
++		if (!key || (key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
++			continue;
++
++		ret = ath11k_install_key(arvif, key, SET_KEY, addr,
++					 WMI_KEY_GROUP);
++		if (ret < 0 && first_errno == 0)
++			first_errno = ret;
++
++		if (ret < 0)
++			ath11k_warn(ab, "failed to set group key of idx %d for vdev %d: %d\n",
++				    i, arvif->vdev_id, ret);
++	}
++
++	return first_errno;
++}
++
+ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 				 struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ 				 struct ieee80211_key_conf *key)
+@@ -4326,6 +4360,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
+ 	struct ath11k_peer *peer;
+ 	struct ath11k_sta *arsta;
++	bool is_ap_with_no_sta;
+ 	const u8 *peer_addr;
+ 	int ret = 0;
+ 	u32 flags = 0;
+@@ -4386,16 +4421,57 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	else
+ 		flags |= WMI_KEY_GROUP;
+ 
+-	ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags);
+-	if (ret) {
+-		ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret);
+-		goto exit;
+-	}
++	ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
++		   "%s for peer %pM on vdev %d flags 0x%X, type = %d, num_sta %d\n",
++		   cmd == SET_KEY ? "SET_KEY" : "DEL_KEY", peer_addr, arvif->vdev_id,
++		   flags, arvif->vdev_type, arvif->num_stations);
++
++	/* Allow group key clearing only in AP mode when no stations are
++	 * associated. There is a known race condition in firmware where
++	 * group addressed packets may be dropped if the key is cleared
++	 * and immediately set again during rekey.
++	 *
++	 * During GTK rekey, mac80211 issues a clear-key operation (if the
++	 * old key exists) followed by an install-key operation for the
++	 * same key index. This causes ath11k to send two WMI commands in
++	 * quick succession: one to clear the old key and another to
++	 * install the new key in the same slot.
++	 *
++	 * Under certain conditions, especially under high load or in
++	 * time-sensitive scenarios, firmware may process these commands
++	 * asynchronously such that it assumes the key is cleared while
++	 * hardware still holds a valid key. This inconsistency between
++	 * hardware and firmware leads to group addressed packet drops
++	 * after rekey; only setting the same key again can restore a
++	 * valid key in firmware and allow packets to be transmitted.
++	 *
++	 * There is one use case where an AP can transition from secure
++	 * mode to open mode without a vdev restart, by deleting all
++	 * associated peers and clearing the key; hence allow the clear
++	 * operation for that case alone. Mark arvif->reinstall_group_keys
++	 * in such cases and reinstall the same key when the first peer is
++	 * added, allowing firmware to recover from the race if it had
++	 * occurred.
++	 */
+ 
+-	ret = ath11k_dp_peer_rx_pn_replay_config(arvif, peer_addr, cmd, key);
+-	if (ret) {
+-		ath11k_warn(ab, "failed to offload PN replay detection %d\n", ret);
+-		goto exit;
++	is_ap_with_no_sta = (vif->type == NL80211_IFTYPE_AP &&
++			     !arvif->num_stations);
++	if ((flags & WMI_KEY_PAIRWISE) || cmd == SET_KEY || is_ap_with_no_sta) {
++		ret = ath11k_install_key(arvif, key, cmd, peer_addr, flags);
++		if (ret) {
++			ath11k_warn(ab, "ath11k_install_key failed (%d)\n", ret);
++			goto exit;
++		}
++
++		ret = ath11k_dp_peer_rx_pn_replay_config(arvif, peer_addr, cmd, key);
++		if (ret) {
++			ath11k_warn(ab, "failed to offload PN replay detection %d\n",
++				    ret);
++			goto exit;
++		}
++
++		if ((flags & WMI_KEY_GROUP) && cmd == SET_KEY && is_ap_with_no_sta)
++			arvif->reinstall_group_keys = true;
+ 	}
+ 
+ 	spin_lock_bh(&ab->base_lock);
+@@ -4994,6 +5070,7 @@ static int ath11k_mac_inc_num_stations(struct ath11k_vif *arvif,
+ 		return -ENOBUFS;
+ 
+ 	ar->num_stations++;
++	arvif->num_stations++;
+ 
+ 	return 0;
+ }
+@@ -5009,6 +5086,7 @@ static void ath11k_mac_dec_num_stations(struct ath11k_vif *arvif,
+ 		return;
+ 
+ 	ar->num_stations--;
++	arvif->num_stations--;
+ }
+ 
+ static u32 ath11k_mac_ieee80211_sta_bw_to_wmi(struct ath11k *ar,
+@@ -9536,6 +9614,21 @@ static int ath11k_mac_station_add(struct ath11k *ar,
+ 		goto exit;
+ 	}
+ 
++	/* The driver allows the DEL_KEY followed by SET_KEY sequence for
++	 * group keys only when no clients are associated. If firmware
++	 * entered the race during that window, reinstalling the same key
++	 * when the first station connects allows firmware to recover
++	 * from the race.
++	 */
++	if (arvif->num_stations == 1 && arvif->reinstall_group_keys) {
++		ath11k_dbg(ab, ATH11K_DBG_MAC, "set group keys on 1st station add for vdev %d\n",
++			   arvif->vdev_id);
++		ret = ath11k_set_group_keys(arvif);
++		if (ret)
++			goto dec_num_station;
++		arvif->reinstall_group_keys = false;
++	}
++
+ 	arsta->rx_stats = kzalloc(sizeof(*arsta->rx_stats), GFP_KERNEL);
+ 	if (!arsta->rx_stats) {
+ 		ret = -ENOMEM;
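
Read together, the ath11k hunks above reduce to a short decision rule; as a condensed reading aid (editorial summary, not code from the patch):

    /* Forward the key command to firmware only when one of these holds:
     *   - the key is pairwise            (unaffected by the race)
     *   - cmd == SET_KEY                 (an install cannot lose the key)
     *   - AP mode with zero stations     (the only permitted group clear)
     *
     * If a group key is set while the AP has no stations, mark
     * reinstall_group_keys; ath11k_set_group_keys() then reprograms the
     * same keys when the first station associates, recovering from any
     * firmware-side key loss.
     */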
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 1d0d4a66894641..eac5d48cade663 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -2370,6 +2370,7 @@ int ath12k_wmi_send_peer_assoc_cmd(struct ath12k *ar,
+ 
+ 	eml_cap = arg->ml.eml_cap;
+ 	if (u16_get_bits(eml_cap, IEEE80211_EML_CAP_EMLSR_SUPP)) {
++		ml_params->flags |= cpu_to_le32(ATH12K_WMI_FLAG_MLO_EMLSR_SUPPORT);
+ 		/* Padding delay */
+ 		eml_pad_delay = ieee80211_emlsr_pad_delay_in_us(eml_cap);
+ 		ml_params->emlsr_padding_delay_us = cpu_to_le32(eml_pad_delay);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+index 69ef8cf203d24f..67c0c5a92f9985 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/btcoex.c
+@@ -393,10 +393,8 @@ void brcmf_btcoex_detach(struct brcmf_cfg80211_info *cfg)
+ 	if (!cfg->btcoex)
+ 		return;
+ 
+-	if (cfg->btcoex->timer_on) {
+-		cfg->btcoex->timer_on = false;
+-		timer_shutdown_sync(&cfg->btcoex->timer);
+-	}
++	timer_shutdown_sync(&cfg->btcoex->timer);
++	cfg->btcoex->timer_on = false;
+ 
+ 	cancel_work_sync(&cfg->btcoex->work);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index bee7d92293b8d6..7ec22738b5d650 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -169,7 +169,7 @@ int iwl_acpi_get_dsm(struct iwl_fw_runtime *fwrt,
+ 
+ 	BUILD_BUG_ON(ARRAY_SIZE(acpi_dsm_size) != DSM_FUNC_NUM_FUNCS);
+ 
+-	if (WARN_ON(func >= ARRAY_SIZE(acpi_dsm_size)))
++	if (WARN_ON(func >= ARRAY_SIZE(acpi_dsm_size) || !func))
+ 		return -EINVAL;
+ 
+ 	expected_size = acpi_dsm_size[func];
+@@ -178,6 +178,29 @@ int iwl_acpi_get_dsm(struct iwl_fw_runtime *fwrt,
+ 	if (expected_size != sizeof(u8) && expected_size != sizeof(u32))
+ 		return -EOPNOTSUPP;
+ 
++	if (!fwrt->acpi_dsm_funcs_valid) {
++		ret = iwl_acpi_get_dsm_integer(fwrt->dev, ACPI_DSM_REV,
++					       DSM_FUNC_QUERY,
++					       &iwl_guid, &tmp,
++					       acpi_dsm_size[DSM_FUNC_QUERY]);
++		if (ret) {
++			/* always indicate BIT(0) to avoid re-reading */
++			fwrt->acpi_dsm_funcs_valid = BIT(0);
++			return ret;
++		}
++
++		IWL_DEBUG_RADIO(fwrt, "ACPI DSM validity bitmap 0x%x\n",
++				(u32)tmp);
++		/* always indicate BIT(0) to avoid re-reading */
++		fwrt->acpi_dsm_funcs_valid = tmp | BIT(0);
++	}
++
++	if (!(fwrt->acpi_dsm_funcs_valid & BIT(func))) {
++		IWL_DEBUG_RADIO(fwrt, "ACPI DSM %d not indicated as valid\n",
++				func);
++		return -ENODATA;
++	}
++
+ 	ret = iwl_acpi_get_dsm_integer(fwrt->dev, ACPI_DSM_REV, func,
+ 				       &iwl_guid, &tmp, expected_size);
+ 	if (ret)
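
The acpi.c hunk above caches the DSM validity bitmap lazily with a sentinel: a zero field means "never queried", and BIT(0) is always ORed in after the first query, since function 0 (the query itself) is always supported, so even an all-invalid answer stays nonzero and is never re-read. A hedged userspace model of the pattern (stub query function and invented names):

    #include <stdint.h>
    #include <stdio.h>

    #define BIT(n) (1u << (n))

    static uint32_t cached;  /* 0 = not read yet */

    /* Stub standing in for the firmware query; pretend funcs 1-2 are valid. */
    static int read_validity_bitmap(uint32_t *out) { *out = 0x06; return 0; }

    static int func_valid(unsigned int func)
    {
        if (!cached) {                /* first use: populate the cache */
            uint32_t tmp;

            if (read_validity_bitmap(&tmp)) {
                cached = BIT(0);      /* remember the failure, don't retry */
                return 0;
            }
            cached = tmp | BIT(0);    /* sentinel keeps the cache nonzero */
        }
        return !!(cached & BIT(func));
    }

    int main(void)
    {
        printf("func 1 valid: %d\n", func_valid(1));  /* 1 */
        printf("func 3 valid: %d\n", func_valid(3));  /* 0 */
        return 0;
    }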
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+index 0444a736c2b206..bd3bc2846cfa49 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+@@ -113,6 +113,10 @@ struct iwl_txf_iter_data {
+  * @phy_filters: specific phy filters as read from WPFC BIOS table
+  * @ppag_bios_rev: PPAG BIOS revision
+  * @ppag_bios_source: see &enum bios_source
++ * @acpi_dsm_funcs_valid: bitmap indicating which DSM functions are valid;
++ *	zero (the default initialization) means it hasn't been read yet,
++ *	and BIT(0) is set once it has been read, since function 0 (the
++ *	query itself) is always supported
+  */
+ struct iwl_fw_runtime {
+ 	struct iwl_trans *trans;
+@@ -189,6 +193,10 @@ struct iwl_fw_runtime {
+ 	bool uats_valid;
+ 	u8 uefi_tables_lock_status;
+ 	struct iwl_phy_specific_cfg phy_filters;
++
++#ifdef CONFIG_ACPI
++	u32 acpi_dsm_funcs_valid;
++#endif
+ };
+ 
+ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+index 48126ec6b94bfd..99a17b9323e9b7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+@@ -747,6 +747,12 @@ int iwl_uefi_get_dsm(struct iwl_fw_runtime *fwrt, enum iwl_dsm_funcs func,
+ 		goto out;
+ 	}
+ 
++	if (!(data->functions[DSM_FUNC_QUERY] & BIT(func))) {
++		IWL_DEBUG_RADIO(fwrt, "DSM func %d not in 0x%x\n",
++				func, data->functions[DSM_FUNC_QUERY]);
++		goto out;
++	}
++
+ 	*value = data->functions[func];
+ 
+ 	IWL_DEBUG_RADIO(fwrt,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 0a9e0dbb58fbf7..4e47ccb43bd86c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -670,6 +670,8 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ 
+ 	IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_sff_name,
+ 		     DEVICE(0x0082), SUBDEV_MASKED(0xC000, 0xF000)),
++	IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_sff_name,
++		     DEVICE(0x0085), SUBDEV_MASKED(0xC000, 0xF000)),
+ 	IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_d_name,
+ 		     DEVICE(0x0082), SUBDEV(0x4820)),
+ 	IWL_DEV_INFO(iwl6005_n_cfg, iwl6005_2agn_mow1_name,
+@@ -726,10 +728,10 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		     DEVICE(0x0083), SUBDEV_MASKED(0x5, 0xF)),
+ 	IWL_DEV_INFO(iwl1000_bg_cfg, iwl1000_bg_name,
+ 		     DEVICE(0x0083), SUBDEV_MASKED(0x6, 0xF)),
++	IWL_DEV_INFO(iwl1000_bgn_cfg, iwl1000_bgn_name,
++		     DEVICE(0x0084), SUBDEV_MASKED(0x5, 0xF)),
+ 	IWL_DEV_INFO(iwl1000_bg_cfg, iwl1000_bg_name,
+-		     DEVICE(0x0084), SUBDEV(0x1216)),
+-	IWL_DEV_INFO(iwl1000_bg_cfg, iwl1000_bg_name,
+-		     DEVICE(0x0084), SUBDEV(0x1316)),
++		     DEVICE(0x0084), SUBDEV_MASKED(0x6, 0xF)),
+ 
+ /* 100 Series WiFi */
+ 	IWL_DEV_INFO(iwl100_bgn_cfg, iwl100_bgn_name,
+@@ -961,6 +963,12 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		     DEVICE(0x24F3), SUBDEV(0x0004)),
+ 	IWL_DEV_INFO(iwl8260_cfg, iwl8260_2n_name,
+ 		     DEVICE(0x24F3), SUBDEV(0x0044)),
++	IWL_DEV_INFO(iwl8260_cfg, iwl8260_2ac_name,
++		     DEVICE(0x24F4)),
++	IWL_DEV_INFO(iwl8260_cfg, iwl4165_2ac_name,
++		     DEVICE(0x24F5)),
++	IWL_DEV_INFO(iwl8260_cfg, iwl4165_2ac_name,
++		     DEVICE(0x24F6)),
+ 	IWL_DEV_INFO(iwl8265_cfg, iwl8265_2ac_name,
+ 		     DEVICE(0x24FD)),
+ 	IWL_DEV_INFO(iwl8265_cfg, iwl8275_2ac_name,
+@@ -1503,11 +1511,15 @@ static int _iwl_pci_resume(struct device *device, bool restore)
+ 	 * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
+ 	 * but not bits [15:8]. So if we have bits set in lower word, assume
+ 	 * the device is alive.
++	 * Alternatively, if the scratch value is 0xFFFFFFFF, then we no longer
++	 * have access to the device and consider it powered off.
+ 	 * For older devices, just try silently to grab the NIC.
+ 	 */
+ 	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
+-		if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) &
+-		      CSR_FUNC_SCRATCH_POWER_OFF_MASK))
++		u32 scratch = iwl_read32(trans, CSR_FUNC_SCRATCH);
++
++		if (!(scratch & CSR_FUNC_SCRATCH_POWER_OFF_MASK) ||
++		    scratch == ~0U)
+ 			device_was_powered_off = true;
+ 	} else {
+ 		/*
+diff --git a/drivers/net/wireless/marvell/libertas/cfg.c b/drivers/net/wireless/marvell/libertas/cfg.c
+index 2e2c193716d96a..309556541a83ed 100644
+--- a/drivers/net/wireless/marvell/libertas/cfg.c
++++ b/drivers/net/wireless/marvell/libertas/cfg.c
+@@ -1151,10 +1151,13 @@ static int lbs_associate(struct lbs_private *priv,
+ 	/* add SSID TLV */
+ 	rcu_read_lock();
+ 	ssid_eid = ieee80211_bss_get_ie(bss, WLAN_EID_SSID);
+-	if (ssid_eid)
+-		pos += lbs_add_ssid_tlv(pos, ssid_eid + 2, ssid_eid[1]);
+-	else
++	if (ssid_eid) {
++		u32 ssid_len = min(ssid_eid[1], IEEE80211_MAX_SSID_LEN);
++
++		pos += lbs_add_ssid_tlv(pos, ssid_eid + 2, ssid_len);
++	} else {
+ 		lbs_deb_assoc("no SSID\n");
++	}
+ 	rcu_read_unlock();
+ 
+ 	/* add DS param TLV */
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 60c12328c2f315..8085a1ae4bdbb3 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4668,8 +4668,9 @@ int mwifiex_init_channel_scan_gap(struct mwifiex_adapter *adapter)
+ 	 * additional active scan request for hidden SSIDs on passive channels.
+ 	 */
+ 	adapter->num_in_chan_stats = 2 * (n_channels_bg + n_channels_a);
+-	adapter->chan_stats = vmalloc(array_size(sizeof(*adapter->chan_stats),
+-						 adapter->num_in_chan_stats));
++	adapter->chan_stats = kcalloc(adapter->num_in_chan_stats,
++				      sizeof(*adapter->chan_stats),
++				      GFP_KERNEL);
+ 
+ 	if (!adapter->chan_stats)
+ 		return -ENOMEM;
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 7b50a88a18e573..1ec069bc8ea1e3 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -642,7 +642,7 @@ static int _mwifiex_fw_dpc(const struct firmware *firmware, void *context)
+ 	goto done;
+ 
+ err_add_intf:
+-	vfree(adapter->chan_stats);
++	kfree(adapter->chan_stats);
+ err_init_chan_scan:
+ 	wiphy_unregister(adapter->wiphy);
+ 	wiphy_free(adapter->wiphy);
+@@ -1485,7 +1485,7 @@ static void mwifiex_uninit_sw(struct mwifiex_adapter *adapter)
+ 	wiphy_free(adapter->wiphy);
+ 	adapter->wiphy = NULL;
+ 
+-	vfree(adapter->chan_stats);
++	kfree(adapter->chan_stats);
+ 	mwifiex_free_cmd_buffers(adapter);
+ }
+ 
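
The mwifiex changes above are two halves of one fix: cfg80211.c now allocates chan_stats with kcalloc(), so main.c must release it with kfree(). Kernel allocator families have to be paired with their own free routine; mixing them corrupts allocator state:

    kmalloc() / kcalloc() / kzalloc()  ->  kfree()
    vmalloc() / vzalloc()              ->  vfree()
    kvmalloc() / kvcalloc()            ->  kvfree()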
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 45c8db939d5546..8e6ce16ab5b88f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -818,6 +818,43 @@ void mt76_free_device(struct mt76_dev *dev)
+ }
+ EXPORT_SYMBOL_GPL(mt76_free_device);
+ 
++static void mt76_reset_phy(struct mt76_phy *phy)
++{
++	if (!phy)
++		return;
++
++	INIT_LIST_HEAD(&phy->tx_list);
++}
++
++void mt76_reset_device(struct mt76_dev *dev)
++{
++	int i;
++
++	rcu_read_lock();
++	for (i = 0; i < ARRAY_SIZE(dev->wcid); i++) {
++		struct mt76_wcid *wcid;
++
++		wcid = rcu_dereference(dev->wcid[i]);
++		if (!wcid)
++			continue;
++
++		wcid->sta = 0;
++		mt76_wcid_cleanup(dev, wcid);
++		rcu_assign_pointer(dev->wcid[i], NULL);
++	}
++	rcu_read_unlock();
++
++	INIT_LIST_HEAD(&dev->wcid_list);
++	INIT_LIST_HEAD(&dev->sta_poll_list);
++	dev->vif_mask = 0;
++	memset(dev->wcid_mask, 0, sizeof(dev->wcid_mask));
++
++	mt76_reset_phy(&dev->phy);
++	for (i = 0; i < ARRAY_SIZE(dev->phys); i++)
++		mt76_reset_phy(dev->phys[i]);
++}
++EXPORT_SYMBOL_GPL(mt76_reset_device);
++
+ struct mt76_phy *mt76_vif_phy(struct ieee80211_hw *hw,
+ 			      struct ieee80211_vif *vif)
+ {
+@@ -1679,6 +1716,10 @@ void mt76_wcid_cleanup(struct mt76_dev *dev, struct mt76_wcid *wcid)
+ 	skb_queue_splice_tail_init(&wcid->tx_pending, &list);
+ 	spin_unlock(&wcid->tx_pending.lock);
+ 
++	spin_lock(&wcid->tx_offchannel.lock);
++	skb_queue_splice_tail_init(&wcid->tx_offchannel, &list);
++	spin_unlock(&wcid->tx_offchannel.lock);
++
+ 	spin_unlock_bh(&phy->tx_lock);
+ 
+ 	while ((skb = __skb_dequeue(&list)) != NULL) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 0ecf77fcbe3d07..0290ddbb2424e6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -1241,6 +1241,7 @@ int mt76_register_device(struct mt76_dev *dev, bool vht,
+ 			 struct ieee80211_rate *rates, int n_rates);
+ void mt76_unregister_device(struct mt76_dev *dev);
+ void mt76_free_device(struct mt76_dev *dev);
++void mt76_reset_device(struct mt76_dev *dev);
+ void mt76_unregister_phy(struct mt76_phy *phy);
+ 
+ struct mt76_phy *mt76_alloc_radio_phy(struct mt76_dev *dev, unsigned int size,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 6639976afcee6a..1c0d310146d63b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1460,17 +1460,15 @@ mt7915_mac_full_reset(struct mt7915_dev *dev)
+ 	if (i == 10)
+ 		dev_err(dev->mt76.dev, "chip full reset failed\n");
+ 
+-	spin_lock_bh(&dev->mt76.sta_poll_lock);
+-	while (!list_empty(&dev->mt76.sta_poll_list))
+-		list_del_init(dev->mt76.sta_poll_list.next);
+-	spin_unlock_bh(&dev->mt76.sta_poll_lock);
+-
+-	memset(dev->mt76.wcid_mask, 0, sizeof(dev->mt76.wcid_mask));
+-	dev->mt76.vif_mask = 0;
+ 	dev->phy.omac_mask = 0;
+ 	if (phy2)
+ 		phy2->omac_mask = 0;
+ 
++	mt76_reset_device(&dev->mt76);
++
++	INIT_LIST_HEAD(&dev->sta_rc_list);
++	INIT_LIST_HEAD(&dev->twt_list);
++
+ 	i = mt76_wcid_alloc(dev->mt76.wcid_mask, MT7915_WTBL_STA);
+ 	dev->mt76.global_wcid.idx = i;
+ 	dev->recovery.hw_full_reset = false;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 77f73ae1d7ecce..f6b431c422ebc4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -1457,11 +1457,8 @@ static int mt7921_pre_channel_switch(struct ieee80211_hw *hw,
+ 	if (vif->type != NL80211_IFTYPE_STATION || !vif->cfg.assoc)
+ 		return -EOPNOTSUPP;
+ 
+-	/* Avoid beacon loss due to the CAC(Channel Availability Check) time
+-	 * of the AP.
+-	 */
+ 	if (!cfg80211_chandef_usable(hw->wiphy, &chsw->chandef,
+-				     IEEE80211_CHAN_RADAR))
++				     IEEE80211_CHAN_DISABLED))
+ 		return -EOPNOTSUPP;
+ 
+ 	return 0;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+index 75823c9fd3a10b..b581ab9427f22b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+@@ -1449,7 +1449,7 @@ void mt7925_usb_sdio_tx_complete_skb(struct mt76_dev *mdev,
+ 	sta = wcid_to_sta(wcid);
+ 
+ 	if (sta && likely(e->skb->protocol != cpu_to_be16(ETH_P_PAE)))
+-		mt76_connac2_tx_check_aggr(sta, txwi);
++		mt7925_tx_check_aggr(sta, e->skb, wcid);
+ 
+ 	skb_pull(e->skb, headroom);
+ 	mt76_tx_complete_skb(mdev, e->wcid, e->skb);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index 5b001548dffce7..5ea16a0eeea478 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -1191,6 +1191,9 @@ mt7925_mac_sta_remove_links(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ 		struct mt792x_bss_conf *mconf;
+ 		struct mt792x_link_sta *mlink;
+ 
++		if (vif->type == NL80211_IFTYPE_AP)
++			break;
++
+ 		link_sta = mt792x_sta_to_link_sta(vif, sta, link_id);
+ 		if (!link_sta)
+ 			continue;
+@@ -2067,8 +2070,10 @@ mt7925_change_vif_links(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 					     GFP_KERNEL);
+ 			mlink = devm_kzalloc(dev->mt76.dev, sizeof(*mlink),
+ 					     GFP_KERNEL);
+-			if (!mconf || !mlink)
++			if (!mconf || !mlink) {
++				mt792x_mutex_release(dev);
+ 				return -ENOMEM;
++			}
+ 		}
+ 
+ 		mconfs[link_id] = mconf;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 300c863f0e3e20..cd457be26523e6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -1834,13 +1834,13 @@ mt7925_mcu_sta_eht_mld_tlv(struct sk_buff *skb,
+ 	struct tlv *tlv;
+ 	u16 eml_cap;
+ 
++	if (!ieee80211_vif_is_mld(vif))
++		return;
++
+ 	tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_EHT_MLD, sizeof(*eht_mld));
+ 	eht_mld = (struct sta_rec_eht_mld *)tlv;
+ 	eht_mld->mld_type = 0xff;
+ 
+-	if (!ieee80211_vif_is_mld(vif))
+-		return;
+-
+ 	ext_capa = cfg80211_get_iftype_ext_capa(wiphy,
+ 						ieee80211_vif_type_p2p(vif));
+ 	if (!ext_capa)
+@@ -1912,6 +1912,7 @@ mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+ 	struct mt76_dev *dev = phy->dev;
+ 	struct mt792x_bss_conf *mconf;
+ 	struct sk_buff *skb;
++	int conn_state;
+ 
+ 	mconf = mt792x_vif_to_link(mvif, info->wcid->link_id);
+ 
+@@ -1920,10 +1921,13 @@ mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+ 	if (IS_ERR(skb))
+ 		return PTR_ERR(skb);
+ 
++	conn_state = info->enable ? CONN_STATE_PORT_SECURE :
++				    CONN_STATE_DISCONNECT;
++
+ 	if (info->enable && info->link_sta) {
+ 		mt76_connac_mcu_sta_basic_tlv(dev, skb, info->link_conf,
+ 					      info->link_sta,
+-					      info->enable, info->newly);
++					      conn_state, info->newly);
+ 		mt7925_mcu_sta_phy_tlv(skb, info->vif, info->link_sta);
+ 		mt7925_mcu_sta_ht_tlv(skb, info->link_sta);
+ 		mt7925_mcu_sta_vht_tlv(skb, info->link_sta);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 37b21ad828b966..a7a5ac8b7d2659 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -62,7 +62,7 @@ static struct mt76_wcid *mt7996_rx_get_wcid(struct mt7996_dev *dev,
+ 	int i;
+ 
+ 	wcid = mt76_wcid_ptr(dev, idx);
+-	if (!wcid)
++	if (!wcid || !wcid->sta)
+ 		return NULL;
+ 
+ 	if (!mt7996_band_valid(dev, band_idx))
+@@ -903,8 +903,12 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ 				       IEEE80211_TX_CTRL_MLO_LINK);
+ 
+ 	mvif = vif ? (struct mt7996_vif *)vif->drv_priv : NULL;
+-	if (mvif)
+-		mlink = rcu_dereference(mvif->mt76.link[link_id]);
++	if (mvif) {
++		if (wcid->offchannel)
++			mlink = rcu_dereference(mvif->mt76.offchannel_link);
++		if (!mlink)
++			mlink = rcu_dereference(mvif->mt76.link[link_id]);
++	}
+ 
+ 	if (mlink) {
+ 		omac_idx = mlink->omac_idx;
+@@ -1696,43 +1700,53 @@ mt7996_wait_reset_state(struct mt7996_dev *dev, u32 state)
+ static void
+ mt7996_update_vif_beacon(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+-	struct ieee80211_hw *hw = priv;
++	struct ieee80211_bss_conf *link_conf;
++	struct mt7996_phy *phy = priv;
++	struct mt7996_dev *dev = phy->dev;
++	unsigned int link_id;
+ 
+ 	switch (vif->type) {
+ 	case NL80211_IFTYPE_MESH_POINT:
+ 	case NL80211_IFTYPE_ADHOC:
+ 	case NL80211_IFTYPE_AP:
+-		mt7996_mcu_add_beacon(hw, vif, &vif->bss_conf);
+ 		break;
+ 	default:
+-		break;
++		return;
++	}
++
++	for_each_vif_active_link(vif, link_conf, link_id) {
++		struct mt7996_vif_link *link;
++
++		link = mt7996_vif_link(dev, vif, link_id);
++		if (!link || link->phy != phy)
++			continue;
++
++		mt7996_mcu_add_beacon(dev->mt76.hw, vif, link_conf);
+ 	}
+ }
+ 
++void mt7996_mac_update_beacons(struct mt7996_phy *phy)
++{
++	ieee80211_iterate_active_interfaces(phy->mt76->hw,
++					    IEEE80211_IFACE_ITER_RESUME_ALL,
++					    mt7996_update_vif_beacon, phy);
++}
++
+ static void
+ mt7996_update_beacons(struct mt7996_dev *dev)
+ {
+ 	struct mt76_phy *phy2, *phy3;
+ 
+-	ieee80211_iterate_active_interfaces(dev->mt76.hw,
+-					    IEEE80211_IFACE_ITER_RESUME_ALL,
+-					    mt7996_update_vif_beacon, dev->mt76.hw);
++	mt7996_mac_update_beacons(&dev->phy);
+ 
+ 	phy2 = dev->mt76.phys[MT_BAND1];
+-	if (!phy2)
+-		return;
+-
+-	ieee80211_iterate_active_interfaces(phy2->hw,
+-					    IEEE80211_IFACE_ITER_RESUME_ALL,
+-					    mt7996_update_vif_beacon, phy2->hw);
++	if (phy2)
++		mt7996_mac_update_beacons(phy2->priv);
+ 
+ 	phy3 = dev->mt76.phys[MT_BAND2];
+-	if (!phy3)
+-		return;
+-
+-	ieee80211_iterate_active_interfaces(phy3->hw,
+-					    IEEE80211_IFACE_ITER_RESUME_ALL,
+-					    mt7996_update_vif_beacon, phy3->hw);
++	if (phy3)
++		mt7996_mac_update_beacons(phy3->priv);
+ }
+ 
+ void mt7996_tx_token_put(struct mt7996_dev *dev)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index f41b2c98bc4518..f6590ef85c0d09 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -516,6 +516,9 @@ int mt7996_set_channel(struct mt76_phy *mphy)
+ 	struct mt7996_phy *phy = mphy->priv;
+ 	int ret;
+ 
++	if (mphy->offchannel)
++		mt7996_mac_update_beacons(phy);
++
+ 	ret = mt7996_mcu_set_chan_info(phy, UNI_CHANNEL_SWITCH);
+ 	if (ret)
+ 		goto out;
+@@ -533,6 +536,8 @@ int mt7996_set_channel(struct mt76_phy *mphy)
+ 
+ 	mt7996_mac_reset_counters(phy);
+ 	phy->noise = 0;
++	if (!mphy->offchannel)
++		mt7996_mac_update_beacons(phy);
+ 
+ out:
+ 	ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index dd4b7b8c34ea1f..a808218da394ce 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -1879,8 +1879,8 @@ mt7996_mcu_get_mmps_mode(enum ieee80211_smps_mode smps)
+ int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev,
+ 				   void *data, u16 version)
+ {
++	struct uni_header hdr = {};
+ 	struct ra_fixed_rate *req;
+-	struct uni_header hdr;
+ 	struct sk_buff *skb;
+ 	struct tlv *tlv;
+ 	int len;
+@@ -2755,13 +2755,15 @@ int mt7996_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 			  struct ieee80211_bss_conf *link_conf)
+ {
+ 	struct mt7996_dev *dev = mt7996_hw_dev(hw);
+-	struct mt76_vif_link *mlink = mt76_vif_conf_link(&dev->mt76, vif, link_conf);
++	struct mt7996_vif_link *link = mt7996_vif_conf_link(dev, vif, link_conf);
++	struct mt76_vif_link *mlink = link ? &link->mt76 : NULL;
+ 	struct ieee80211_mutable_offsets offs;
+ 	struct ieee80211_tx_info *info;
+ 	struct sk_buff *skb, *rskb;
+ 	struct tlv *tlv;
+ 	struct bss_bcn_content_tlv *bcn;
+ 	int len, extra_len = 0;
++	bool enabled = link_conf->enable_beacon;
+ 
+ 	if (link_conf->nontransmitted)
+ 		return 0;
+@@ -2769,13 +2771,16 @@ int mt7996_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	if (!mlink)
+ 		return -EINVAL;
+ 
++	if (link->phy && link->phy->mt76->offchannel)
++		enabled = false;
++
+ 	rskb = __mt7996_mcu_alloc_bss_req(&dev->mt76, mlink,
+ 					  MT7996_MAX_BSS_OFFLOAD_SIZE);
+ 	if (IS_ERR(rskb))
+ 		return PTR_ERR(rskb);
+ 
+ 	skb = ieee80211_beacon_get_template(hw, vif, &offs, link_conf->link_id);
+-	if (link_conf->enable_beacon && !skb) {
++	if (enabled && !skb) {
+ 		dev_kfree_skb(rskb);
+ 		return -EINVAL;
+ 	}
+@@ -2794,7 +2799,7 @@ int mt7996_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	len = ALIGN(sizeof(*bcn) + MT_TXD_SIZE + extra_len, 4);
+ 	tlv = mt7996_mcu_add_uni_tlv(rskb, UNI_BSS_INFO_BCN_CONTENT, len);
+ 	bcn = (struct bss_bcn_content_tlv *)tlv;
+-	bcn->enable = link_conf->enable_beacon;
++	bcn->enable = enabled;
+ 	if (!bcn->enable)
+ 		goto out;
+ 
+@@ -3372,7 +3377,7 @@ int mt7996_mcu_set_hdr_trans(struct mt7996_dev *dev, bool hdr_trans)
+ {
+ 	struct {
+ 		u8 __rsv[4];
+-	} __packed hdr;
++	} __packed hdr = {};
+ 	struct hdr_trans_blacklist *req_blacklist;
+ 	struct hdr_trans_en *req_en;
+ 	struct sk_buff *skb;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index 33ac16b64ef113..8509d508e1e19c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -732,6 +732,7 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ 			   struct sk_buff *skb, struct mt76_wcid *wcid,
+ 			   struct ieee80211_key_conf *key, int pid,
+ 			   enum mt76_txq_id qid, u32 changed);
++void mt7996_mac_update_beacons(struct mt7996_phy *phy);
+ void mt7996_mac_set_coverage_class(struct mt7996_phy *phy);
+ void mt7996_mac_work(struct work_struct *work);
+ void mt7996_mac_reset_work(struct work_struct *work);
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index e6cf16706667e7..8ab5840fee57f9 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -332,6 +332,7 @@ mt76_tx(struct mt76_phy *phy, struct ieee80211_sta *sta,
+ 	struct mt76_wcid *wcid, struct sk_buff *skb)
+ {
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++	struct ieee80211_hdr *hdr = (void *)skb->data;
+ 	struct sk_buff_head *head;
+ 
+ 	if (mt76_testmode_enabled(phy)) {
+@@ -349,7 +350,8 @@ mt76_tx(struct mt76_phy *phy, struct ieee80211_sta *sta,
+ 	info->hw_queue |= FIELD_PREP(MT_TX_HW_QUEUE_PHY, phy->band_idx);
+ 
+ 	if ((info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) ||
+-	    (info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK))
++	    ((info->control.flags & IEEE80211_TX_CTRL_DONT_USE_RATE_MASK) &&
++	     ieee80211_is_probe_req(hdr->frame_control)))
+ 		head = &wcid->tx_offchannel;
+ 	else
+ 		head = &wcid->tx_pending;
+@@ -644,6 +646,7 @@ mt76_txq_schedule_pending_wcid(struct mt76_phy *phy, struct mt76_wcid *wcid,
+ static void mt76_txq_schedule_pending(struct mt76_phy *phy)
+ {
+ 	LIST_HEAD(tx_list);
++	int ret = 0;
+ 
+ 	if (list_empty(&phy->tx_list))
+ 		return;
+@@ -655,13 +658,13 @@ static void mt76_txq_schedule_pending(struct mt76_phy *phy)
+ 	list_splice_init(&phy->tx_list, &tx_list);
+ 	while (!list_empty(&tx_list)) {
+ 		struct mt76_wcid *wcid;
+-		int ret;
+ 
+ 		wcid = list_first_entry(&tx_list, struct mt76_wcid, tx_list);
+ 		list_del_init(&wcid->tx_list);
+ 
+ 		spin_unlock(&phy->tx_lock);
+-		ret = mt76_txq_schedule_pending_wcid(phy, wcid, &wcid->tx_offchannel);
++		if (ret >= 0)
++			ret = mt76_txq_schedule_pending_wcid(phy, wcid, &wcid->tx_offchannel);
+ 		if (ret >= 0 && !phy->offchannel)
+ 			ret = mt76_txq_schedule_pending_wcid(phy, wcid, &wcid->tx_pending);
+ 		spin_lock(&phy->tx_lock);
+@@ -670,9 +673,6 @@ static void mt76_txq_schedule_pending(struct mt76_phy *phy)
+ 		    !skb_queue_empty(&wcid->tx_offchannel) &&
+ 		    list_empty(&wcid->tx_list))
+ 			list_add_tail(&wcid->tx_list, &phy->tx_list);
+-
+-		if (ret < 0)
+-			break;
+ 	}
+ 	spin_unlock(&phy->tx_lock);
+ 
+diff --git a/drivers/net/wireless/st/cw1200/sta.c b/drivers/net/wireless/st/cw1200/sta.c
+index 5dd7f6a389006d..cc56018b2e3274 100644
+--- a/drivers/net/wireless/st/cw1200/sta.c
++++ b/drivers/net/wireless/st/cw1200/sta.c
+@@ -1290,7 +1290,7 @@ static void cw1200_do_join(struct cw1200_common *priv)
+ 		rcu_read_lock();
+ 		ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID);
+ 		if (ssidie) {
+-			join.ssid_len = ssidie[1];
++			join.ssid_len = min(ssidie[1], IEEE80211_MAX_SSID_LEN);
+ 			memcpy(join.ssid, &ssidie[2], join.ssid_len);
+ 		}
+ 		rcu_read_unlock();
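
The libertas and cw1200 fixes above close the same bug class: the SSID element's length byte arrives over the air, and copying that many bytes into a fixed 32-byte field can overflow it, so both now clamp to IEEE80211_MAX_SSID_LEN before the copy. A minimal userspace model (invented struct and helper; only the constant matches the kernel's):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define IEEE80211_MAX_SSID_LEN 32  /* fixed SSID field size, per 802.11 */

    struct join_req {
        uint8_t ssid_len;
        uint8_t ssid[IEEE80211_MAX_SSID_LEN];
    };

    /* ie[0] = element ID, ie[1] = length, ie[2..] = payload */
    static void copy_ssid(struct join_req *join, const uint8_t *ie)
    {
        uint8_t len = ie[1] < IEEE80211_MAX_SSID_LEN ?
                      ie[1] : IEEE80211_MAX_SSID_LEN;

        join->ssid_len = len;           /* an unclamped ie[1] would overflow */
        memcpy(join->ssid, &ie[2], len);
    }

    int main(void)
    {
        uint8_t ie[2 + 40] = { 0 /* WLAN_EID_SSID */, 40 /* oversized */ };
        struct join_req join;

        copy_ssid(&join, ie);
        printf("copied %u bytes\n", join.ssid_len);  /* 32, not 40 */
        return 0;
    }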
+diff --git a/drivers/of/of_numa.c b/drivers/of/of_numa.c
+index 230d5f628c1b47..cd2dc8e825c92b 100644
+--- a/drivers/of/of_numa.c
++++ b/drivers/of/of_numa.c
+@@ -59,8 +59,11 @@ static int __init of_numa_parse_memory_nodes(void)
+ 			r = -EINVAL;
+ 		}
+ 
+-		for (i = 0; !r && !of_address_to_resource(np, i, &rsrc); i++)
++		for (i = 0; !r && !of_address_to_resource(np, i, &rsrc); i++) {
+ 			r = numa_add_memblk(nid, rsrc.start, rsrc.end + 1);
++			if (!r)
++				node_set(nid, numa_nodes_parsed);
++		}
+ 
+ 		if (!i || r) {
+ 			of_node_put(np);
+diff --git a/drivers/pcmcia/omap_cf.c b/drivers/pcmcia/omap_cf.c
+index 1b1dff56ec7b11..441cdf83f5a449 100644
+--- a/drivers/pcmcia/omap_cf.c
++++ b/drivers/pcmcia/omap_cf.c
+@@ -215,6 +215,8 @@ static int __init omap_cf_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 
+ 	cf = kzalloc(sizeof *cf, GFP_KERNEL);
+ 	if (!cf)
+diff --git a/drivers/pcmcia/rsrc_iodyn.c b/drivers/pcmcia/rsrc_iodyn.c
+index b04b16496b0c4b..2677b577c1f858 100644
+--- a/drivers/pcmcia/rsrc_iodyn.c
++++ b/drivers/pcmcia/rsrc_iodyn.c
+@@ -62,6 +62,9 @@ static struct resource *__iodyn_find_io_region(struct pcmcia_socket *s,
+ 	unsigned long min = base;
+ 	int ret;
+ 
++	if (!res)
++		return NULL;
++
+ 	data.mask = align - 1;
+ 	data.offset = base & data.mask;
+ 
+diff --git a/drivers/pcmcia/rsrc_nonstatic.c b/drivers/pcmcia/rsrc_nonstatic.c
+index bf9d070a44966d..da494fe451baf0 100644
+--- a/drivers/pcmcia/rsrc_nonstatic.c
++++ b/drivers/pcmcia/rsrc_nonstatic.c
+@@ -375,7 +375,9 @@ static int do_validate_mem(struct pcmcia_socket *s,
+ 
+ 	if (validate && !s->fake_cis) {
+ 		/* move it to the validated data set */
+-		add_interval(&s_data->mem_db_valid, base, size);
++		ret = add_interval(&s_data->mem_db_valid, base, size);
++		if (ret)
++			return ret;
+ 		sub_interval(&s_data->mem_db, base, size);
+ 	}
+ 
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 69336bd778eead..13eb22b35aa8f8 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -129,6 +129,7 @@ enum acer_wmi_predator_v4_oc {
+ enum acer_wmi_gaming_misc_setting {
+ 	ACER_WMID_MISC_SETTING_OC_1			= 0x0005,
+ 	ACER_WMID_MISC_SETTING_OC_2			= 0x0007,
++	/* Unreliable on some models */
+ 	ACER_WMID_MISC_SETTING_SUPPORTED_PROFILES	= 0x000A,
+ 	ACER_WMID_MISC_SETTING_PLATFORM_PROFILE		= 0x000B,
+ };
+@@ -794,9 +795,6 @@ static bool platform_profile_support;
+  */
+ static int last_non_turbo_profile = INT_MIN;
+ 
+-/* The most performant supported profile */
+-static int acer_predator_v4_max_perf;
+-
+ enum acer_predator_v4_thermal_profile {
+ 	ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET		= 0x00,
+ 	ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED	= 0x01,
+@@ -2014,7 +2012,7 @@ acer_predator_v4_platform_profile_set(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	if (tp != acer_predator_v4_max_perf)
++	if (tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO)
+ 		last_non_turbo_profile = tp;
+ 
+ 	return 0;
+@@ -2023,55 +2021,14 @@ acer_predator_v4_platform_profile_set(struct device *dev,
+ static int
+ acer_predator_v4_platform_profile_probe(void *drvdata, unsigned long *choices)
+ {
+-	unsigned long supported_profiles;
+-	int err;
++	set_bit(PLATFORM_PROFILE_PERFORMANCE, choices);
++	set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices);
++	set_bit(PLATFORM_PROFILE_BALANCED, choices);
++	set_bit(PLATFORM_PROFILE_QUIET, choices);
++	set_bit(PLATFORM_PROFILE_LOW_POWER, choices);
+ 
+-	err = WMID_gaming_get_misc_setting(ACER_WMID_MISC_SETTING_SUPPORTED_PROFILES,
+-					   (u8 *)&supported_profiles);
+-	if (err)
+-		return err;
+-
+-	/* Iterate through supported profiles in order of increasing performance */
+-	if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_ECO, &supported_profiles)) {
+-		set_bit(PLATFORM_PROFILE_LOW_POWER, choices);
+-		acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO;
+-		last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_ECO;
+-	}
+-
+-	if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET, &supported_profiles)) {
+-		set_bit(PLATFORM_PROFILE_QUIET, choices);
+-		acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET;
+-		last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_QUIET;
+-	}
+-
+-	if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED, &supported_profiles)) {
+-		set_bit(PLATFORM_PROFILE_BALANCED, choices);
+-		acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED;
+-		last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED;
+-	}
+-
+-	if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE, &supported_profiles)) {
+-		set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices);
+-		acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE;
+-
+-		/* We only use this profile as a fallback option in case no prior
+-		 * profile is supported.
+-		 */
+-		if (last_non_turbo_profile < 0)
+-			last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_PERFORMANCE;
+-	}
+-
+-	if (test_bit(ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO, &supported_profiles)) {
+-		set_bit(PLATFORM_PROFILE_PERFORMANCE, choices);
+-		acer_predator_v4_max_perf = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO;
+-
+-		/* We need to handle the hypothetical case where only the turbo profile
+-		 * is supported. In this case the turbo toggle will essentially be a
+-		 * no-op.
+-		 */
+-		if (last_non_turbo_profile < 0)
+-			last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO;
+-	}
++	/* Set default non-turbo profile */
++	last_non_turbo_profile = ACER_PREDATOR_V4_THERMAL_PROFILE_BALANCED;
+ 
+ 	return 0;
+ }
+@@ -2108,19 +2065,15 @@ static int acer_thermal_profile_change(void)
+ 		if (cycle_gaming_thermal_profile) {
+ 			platform_profile_cycle();
+ 		} else {
+-			/* Do nothing if no suitable platform profiles where found */
+-			if (last_non_turbo_profile < 0)
+-				return 0;
+-
+ 			err = WMID_gaming_get_misc_setting(
+ 				ACER_WMID_MISC_SETTING_PLATFORM_PROFILE, &current_tp);
+ 			if (err)
+ 				return err;
+ 
+-			if (current_tp == acer_predator_v4_max_perf)
++			if (current_tp == ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO)
+ 				tp = last_non_turbo_profile;
+ 			else
+-				tp = acer_predator_v4_max_perf;
++				tp = ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO;
+ 
+ 			err = WMID_gaming_set_misc_setting(
+ 				ACER_WMID_MISC_SETTING_PLATFORM_PROFILE, tp);
+@@ -2128,7 +2081,7 @@ static int acer_thermal_profile_change(void)
+ 				return err;
+ 
+ 			/* Store last profile for toggle */
+-			if (current_tp != acer_predator_v4_max_perf)
++			if (current_tp != ACER_PREDATOR_V4_THERMAL_PROFILE_TURBO)
+ 				last_non_turbo_profile = current_tp;
+ 
+ 			platform_profile_notify(platform_profile_device);
+diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+index ded4c84f5ed149..18fb44139de251 100644
+--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c
++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c
+@@ -28,10 +28,15 @@ static struct quirk_entry quirk_spurious_8042 = {
+ 	.spurious_8042 = true,
+ };
+ 
++static struct quirk_entry quirk_s2idle_spurious_8042 = {
++	.s2idle_bug_mmio = FCH_PM_BASE + FCH_PM_SCRATCH,
++	.spurious_8042 = true,
++};
++
+ static const struct dmi_system_id fwbug_list[] = {
+ 	{
+ 		.ident = "L14 Gen2 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20X5"),
+@@ -39,7 +44,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "T14s Gen2 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20XF"),
+@@ -47,7 +52,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "X13 Gen2 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20XH"),
+@@ -55,7 +60,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "T14 Gen2 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20XK"),
+@@ -63,7 +68,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "T14 Gen1 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20UD"),
+@@ -71,7 +76,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "T14 Gen1 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20UE"),
+@@ -79,7 +84,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "T14s Gen1 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20UH"),
+@@ -87,7 +92,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "T14s Gen1 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20UJ"),
+@@ -95,7 +100,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "P14s Gen1 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "20Y1"),
+@@ -103,7 +108,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "P14s Gen2 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21A0"),
+@@ -111,7 +116,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "P14s Gen2 AMD",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21A1"),
+@@ -152,7 +157,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "IdeaPad 1 14AMN7",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82VF"),
+@@ -160,7 +165,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "IdeaPad 1 15AMN7",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82VG"),
+@@ -168,7 +173,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "IdeaPad 1 15AMN7",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82X5"),
+@@ -176,7 +181,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "IdeaPad Slim 3 14AMN8",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82XN"),
+@@ -184,7 +189,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	},
+ 	{
+ 		.ident = "IdeaPad Slim 3 15AMN8",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82XQ"),
+@@ -193,7 +198,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	/* https://gitlab.freedesktop.org/drm/amd/-/issues/4434 */
+ 	{
+ 		.ident = "Lenovo Yoga 6 13ALC6",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82ND"),
+@@ -202,7 +207,7 @@ static const struct dmi_system_id fwbug_list[] = {
+ 	/* https://gitlab.freedesktop.org/drm/amd/-/issues/2684 */
+ 	{
+ 		.ident = "HP Laptop 15s-eq2xxx",
+-		.driver_data = &quirk_s2idle_bug,
++		.driver_data = &quirk_s2idle_spurious_8042,
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "HP Laptop 15s-eq2xxx"),
+@@ -243,6 +248,20 @@ static const struct dmi_system_id fwbug_list[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"),
+ 		}
+ 	},
++	{
++		.ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10",
++		.driver_data = &quirk_spurious_8042,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"),
++		}
++	},
++	{
++		.ident = "TUXEDO InfinityBook Pro 14/15 AMD Gen10",
++		.driver_data = &quirk_spurious_8042,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"),
++		}
++	},
+ 	{}
+ };
+ 
+@@ -285,6 +304,16 @@ void amd_pmc_quirks_init(struct amd_pmc_dev *dev)
+ {
+ 	const struct dmi_system_id *dmi_id;
+ 
++	/*
++	 * IRQ1 may cause an interrupt during resume even without a
++	 * keyboard press.
++	 *
++	 * This affects the Renoir, Cezanne and Barcelo SoCs.
++	 *
++	 * A fix is available in PMFW 64.66.0, but it must be activated by
++	 * the SBIOS. If the SBIOS is known to carry the fix, a quirk can
++	 * be added for a given system to avoid the workaround.
++	 */
+ 	if (dev->cpu_id == AMD_CPU_ID_CZN)
+ 		dev->disable_8042_wakeup = true;
+ 
+@@ -295,6 +324,5 @@ void amd_pmc_quirks_init(struct amd_pmc_dev *dev)
+ 	if (dev->quirks->s2idle_bug_mmio)
+ 		pr_info("Using s2idle quirk to avoid %s platform firmware bug\n",
+ 			dmi_id->ident);
+-	if (dev->quirks->spurious_8042)
+-		dev->disable_8042_wakeup = true;
++	dev->disable_8042_wakeup = dev->quirks->spurious_8042;
+ }
+diff --git a/drivers/platform/x86/amd/pmc/pmc.c b/drivers/platform/x86/amd/pmc/pmc.c
+index 0b9b23eb7c2c30..bd318fd02ccf4a 100644
+--- a/drivers/platform/x86/amd/pmc/pmc.c
++++ b/drivers/platform/x86/amd/pmc/pmc.c
+@@ -530,19 +530,6 @@ static int amd_pmc_get_os_hint(struct amd_pmc_dev *dev)
+ static int amd_pmc_wa_irq1(struct amd_pmc_dev *pdev)
+ {
+ 	struct device *d;
+-	int rc;
+-
+-	/* cezanne platform firmware has a fix in 64.66.0 */
+-	if (pdev->cpu_id == AMD_CPU_ID_CZN) {
+-		if (!pdev->major) {
+-			rc = amd_pmc_get_smu_version(pdev);
+-			if (rc)
+-				return rc;
+-		}
+-
+-		if (pdev->major > 64 || (pdev->major == 64 && pdev->minor > 65))
+-			return 0;
+-	}
+ 
+ 	d = bus_find_device_by_name(&serio_bus, NULL, "serio0");
+ 	if (!d)
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index f84c3d03c1de78..e6726be5890e7f 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -655,8 +655,6 @@ static void asus_nb_wmi_key_filter(struct asus_wmi_driver *asus_wmi, int *code,
+ 		if (atkbd_reports_vol_keys)
+ 			*code = ASUS_WMI_KEY_IGNORE;
+ 		break;
+-	case 0x5D: /* Wireless console Toggle */
+-	case 0x5E: /* Wireless console Enable */
+ 	case 0x5F: /* Wireless console Disable */
+ 		if (quirks->ignore_key_wlan)
+ 			*code = ASUS_WMI_KEY_IGNORE;
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index f7191fdded14d2..e72a2b5d158e9c 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -5088,16 +5088,22 @@ static int asus_wmi_probe(struct platform_device *pdev)
+ 
+ 	asus_s2idle_check_register();
+ 
+-	return asus_wmi_add(pdev);
++	ret = asus_wmi_add(pdev);
++	if (ret)
++		asus_s2idle_check_unregister();
++
++	return ret;
+ }
+ 
+ static bool used;
++static DEFINE_MUTEX(register_mutex);
+ 
+ int __init_or_module asus_wmi_register_driver(struct asus_wmi_driver *driver)
+ {
+ 	struct platform_driver *platform_driver;
+ 	struct platform_device *platform_device;
+ 
++	guard(mutex)(&register_mutex);
+ 	if (used)
+ 		return -EBUSY;
+ 
+@@ -5120,6 +5126,7 @@ EXPORT_SYMBOL_GPL(asus_wmi_register_driver);
+ 
+ void asus_wmi_unregister_driver(struct asus_wmi_driver *driver)
+ {
++	guard(mutex)(&register_mutex);
+ 	asus_s2idle_check_unregister();
+ 
+ 	platform_device_unregister(driver->platform_device);
+diff --git a/drivers/platform/x86/intel/tpmi_power_domains.c b/drivers/platform/x86/intel/tpmi_power_domains.c
+index 9d8247bb9cfa57..8641353b2e0617 100644
+--- a/drivers/platform/x86/intel/tpmi_power_domains.c
++++ b/drivers/platform/x86/intel/tpmi_power_domains.c
+@@ -178,7 +178,7 @@ static int tpmi_get_logical_id(unsigned int cpu, struct tpmi_cpu_info *info)
+ 
+ 	info->punit_thread_id = FIELD_GET(LP_ID_MASK, data);
+ 	info->punit_core_id = FIELD_GET(MODULE_ID_MASK, data);
+-	info->pkg_id = topology_physical_package_id(cpu);
++	info->pkg_id = topology_logical_package_id(cpu);
+ 	info->linux_cpu = cpu;
+ 
+ 	return 0;
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 1e7f72e5755760..53882854759527 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -4557,8 +4557,7 @@ ptp_ocp_detach(struct ptp_ocp *bp)
+ 	ptp_ocp_debugfs_remove_device(bp);
+ 	ptp_ocp_detach_sysfs(bp);
+ 	ptp_ocp_attr_group_del(bp);
+-	if (timer_pending(&bp->watchdog))
+-		timer_delete_sync(&bp->watchdog);
++	timer_delete_sync(&bp->watchdog);
+ 	if (bp->ts0)
+ 		ptp_ocp_unregister_ext(bp->ts0);
+ 	if (bp->ts1)
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index fba2e62027b719..4cfc928bcf2d23 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -1243,7 +1243,7 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+ 	struct lpfc_nvmet_tgtport *tgtp;
+ 	struct lpfc_async_xchg_ctx *ctxp =
+ 		container_of(rsp, struct lpfc_async_xchg_ctx, hdlrctx.fcp_req);
+-	struct rqb_dmabuf *nvmebuf = ctxp->rqb_buffer;
++	struct rqb_dmabuf *nvmebuf;
+ 	struct lpfc_hba *phba = ctxp->phba;
+ 	unsigned long iflag;
+ 
+@@ -1251,13 +1251,18 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+ 	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
+ 			 ctxp->oxid, ctxp->size, raw_smp_processor_id());
+ 
++	spin_lock_irqsave(&ctxp->ctxlock, iflag);
++	nvmebuf = ctxp->rqb_buffer;
+ 	if (!nvmebuf) {
++		spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
+ 		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
+ 				"6425 Defer rcv: no buffer oxid x%x: "
+ 				"flg %x ste %x\n",
+ 				ctxp->oxid, ctxp->flag, ctxp->state);
+ 		return;
+ 	}
++	ctxp->rqb_buffer = NULL;
++	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
+ 
+ 	tgtp = phba->targetport->private;
+ 	if (tgtp)
+@@ -1265,9 +1270,6 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+ 
+ 	/* Free the nvmebuf since a new buffer already replaced it */
+ 	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+-	spin_lock_irqsave(&ctxp->ctxlock, iflag);
+-	ctxp->rqb_buffer = NULL;
+-	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);
+ }
+ 
+ /**
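
The reordering above is the usual claim-under-lock pattern: the buffer pointer
is read and cleared inside ctxlock, so exactly one path can own and free the
buffer, and the actual free runs after the lock is dropped. Schematically (a
sketch; free_buffer() stands in for the rqb_free_buffer callback):

	spin_lock_irqsave(&ctxp->ctxlock, iflag);
	nvmebuf = ctxp->rqb_buffer;	/* claim ownership */
	ctxp->rqb_buffer = NULL;	/* other paths now see no buffer */
	spin_unlock_irqrestore(&ctxp->ctxlock, iflag);

	if (nvmebuf)
		free_buffer(nvmebuf);	/* safe: we own it exclusively */
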
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index b17796d5ee6652..add13e30689838 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -475,13 +475,21 @@ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt)
+ 
+ static int sr_revalidate_disk(struct scsi_cd *cd)
+ {
++	struct request_queue *q = cd->device->request_queue;
+ 	struct scsi_sense_hdr sshdr;
++	struct queue_limits lim;
++	int sector_size;
+ 
+ 	/* if the unit is not ready, nothing more to do */
+ 	if (scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr))
+ 		return 0;
+ 	sr_cd_check(&cd->cdi);
+-	return get_sectorsize(cd);
++	sector_size = get_sectorsize(cd);
++
++	lim = queue_limits_start_update(q);
++	lim.logical_block_size = sector_size;
++	lim.features |= BLK_FEAT_ROTATIONAL;
++	return queue_limits_commit_update_frozen(q, &lim);
+ }
+ 
+ static int sr_block_open(struct gendisk *disk, blk_mode_t mode)
+@@ -721,10 +729,8 @@ static int sr_probe(struct device *dev)
+ 
+ static int get_sectorsize(struct scsi_cd *cd)
+ {
+-	struct request_queue *q = cd->device->request_queue;
+ 	static const u8 cmd[10] = { READ_CAPACITY };
+ 	unsigned char buffer[8] = { };
+-	struct queue_limits lim;
+ 	int err;
+ 	int sector_size;
+ 	struct scsi_failure failure_defs[] = {
+@@ -795,9 +801,7 @@ static int get_sectorsize(struct scsi_cd *cd)
+ 		set_capacity(cd->disk, cd->capacity);
+ 	}
+ 
+-	lim = queue_limits_start_update(q);
+-	lim.logical_block_size = sector_size;
+-	return queue_limits_commit_update_frozen(q, &lim);
++	return sector_size;
+ }
+ 
+ static int get_capabilities(struct scsi_cd *cd)
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index 64e0facc392e5d..29124aa6d03b5e 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -39,12 +39,14 @@ static bool mdt_header_valid(const struct firmware *fw)
+ 	if (phend > fw->size)
+ 		return false;
+ 
+-	if (ehdr->e_shentsize != sizeof(struct elf32_shdr))
+-		return false;
++	if (ehdr->e_shentsize || ehdr->e_shnum) {
++		if (ehdr->e_shentsize != sizeof(struct elf32_shdr))
++			return false;
+ 
+-	shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff);
+-	if (shend > fw->size)
+-		return false;
++		shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff);
++		if (shend > fw->size)
++			return false;
++	}
+ 
+ 	return true;
+ }
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 1a22d356a73d95..90e4028ca14fcd 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -3,8 +3,9 @@
+ // Freescale i.MX7ULP LPSPI driver
+ //
+ // Copyright 2016 Freescale Semiconductor, Inc.
+-// Copyright 2018 NXP Semiconductors
++// Copyright 2018, 2023, 2025 NXP
+ 
++#include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/completion.h>
+ #include <linux/delay.h>
+@@ -70,7 +71,7 @@
+ #define DER_TDDE	BIT(0)
+ #define CFGR1_PCSCFG	BIT(27)
+ #define CFGR1_PINCFG	(BIT(24)|BIT(25))
+-#define CFGR1_PCSPOL	BIT(8)
++#define CFGR1_PCSPOL_MASK	GENMASK(11, 8)
+ #define CFGR1_NOSTALL	BIT(3)
+ #define CFGR1_HOST	BIT(0)
+ #define FSR_TXCOUNT	(0xFF)
+@@ -82,6 +83,8 @@
+ #define TCR_RXMSK	BIT(19)
+ #define TCR_TXMSK	BIT(18)
+ 
++#define SR_CLEAR_MASK	GENMASK(13, 8)
++
+ struct fsl_lpspi_devtype_data {
+ 	u8 prescale_max;
+ };
+@@ -424,7 +427,9 @@ static int fsl_lpspi_config(struct fsl_lpspi_data *fsl_lpspi)
+ 	else
+ 		temp = CFGR1_PINCFG;
+ 	if (fsl_lpspi->config.mode & SPI_CS_HIGH)
+-		temp |= CFGR1_PCSPOL;
++		temp |= FIELD_PREP(CFGR1_PCSPOL_MASK,
++				   BIT(fsl_lpspi->config.chip_select));
++
+ 	writel(temp, fsl_lpspi->base + IMX7ULP_CFGR1);
+ 
+ 	temp = readl(fsl_lpspi->base + IMX7ULP_CR);
+@@ -533,14 +538,13 @@ static int fsl_lpspi_reset(struct fsl_lpspi_data *fsl_lpspi)
+ 		fsl_lpspi_intctrl(fsl_lpspi, 0);
+ 	}
+ 
+-	/* W1C for all flags in SR */
+-	temp = 0x3F << 8;
+-	writel(temp, fsl_lpspi->base + IMX7ULP_SR);
+-
+ 	/* Clear FIFO and disable module */
+ 	temp = CR_RRF | CR_RTF;
+ 	writel(temp, fsl_lpspi->base + IMX7ULP_CR);
+ 
++	/* W1C for all flags in SR */
++	writel(SR_CLEAR_MASK, fsl_lpspi->base + IMX7ULP_SR);
++
+ 	return 0;
+ }
+ 
+@@ -731,12 +735,10 @@ static int fsl_lpspi_pio_transfer(struct spi_controller *controller,
+ 	fsl_lpspi_write_tx_fifo(fsl_lpspi);
+ 
+ 	ret = fsl_lpspi_wait_for_completion(controller);
+-	if (ret)
+-		return ret;
+ 
+ 	fsl_lpspi_reset(fsl_lpspi);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int fsl_lpspi_transfer_one(struct spi_controller *controller,
+@@ -786,7 +788,7 @@ static irqreturn_t fsl_lpspi_isr(int irq, void *dev_id)
+ 	if (temp_SR & SR_MBF ||
+ 	    readl(fsl_lpspi->base + IMX7ULP_FSR) & FSR_TXCOUNT) {
+ 		writel(SR_FCF, fsl_lpspi->base + IMX7ULP_SR);
+-		fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE);
++		fsl_lpspi_intctrl(fsl_lpspi, IER_FCIE | (temp_IER & IER_TDIE));
+ 		return IRQ_HANDLED;
+ 	}
+ 
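
The CFGR1_PCSPOL change widens a single polarity bit into a four-bit field so
that only the active chip select gets active-high polarity. FIELD_PREP() from
<linux/bitfield.h> shifts a value into the field described by the GENMASK();
for example (values illustrative):

	#define CFGR1_PCSPOL_MASK	GENMASK(11, 8)	/* one bit per chip select */

	/* active-high polarity for chip select 2 only: */
	u32 temp = FIELD_PREP(CFGR1_PCSPOL_MASK, BIT(2));	/* == 0x00000400 */
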
+diff --git a/drivers/spi/spi-microchip-core-qspi.c b/drivers/spi/spi-microchip-core-qspi.c
+index fa828fcaaef2d4..073607fb51cb8b 100644
+--- a/drivers/spi/spi-microchip-core-qspi.c
++++ b/drivers/spi/spi-microchip-core-qspi.c
+@@ -458,10 +458,6 @@ static int mchp_coreqspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *o
+ 
+ static bool mchp_coreqspi_supports_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ {
+-	struct mchp_coreqspi *qspi = spi_controller_get_devdata(mem->spi->controller);
+-	unsigned long clk_hz;
+-	u32 baud_rate_val;
+-
+ 	if (!spi_mem_default_supports_op(mem, op))
+ 		return false;
+ 
+@@ -484,14 +480,6 @@ static bool mchp_coreqspi_supports_op(struct spi_mem *mem, const struct spi_mem_
+ 			return false;
+ 	}
+ 
+-	clk_hz = clk_get_rate(qspi->clk);
+-	if (!clk_hz)
+-		return false;
+-
+-	baud_rate_val = DIV_ROUND_UP(clk_hz, 2 * op->max_freq);
+-	if (baud_rate_val > MAX_DIVIDER || baud_rate_val < MIN_DIVIDER)
+-		return false;
+-
+ 	return true;
+ }
+ 
+diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c
+index e98e997680c754..cfc81327f7a441 100644
+--- a/drivers/spi/spi-qpic-snand.c
++++ b/drivers/spi/spi-qpic-snand.c
+@@ -1615,11 +1615,13 @@ static int qcom_spi_probe(struct platform_device *pdev)
+ 	ret = spi_register_controller(ctlr);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi_register_controller failed.\n");
+-		goto err_spi_init;
++		goto err_register_controller;
+ 	}
+ 
+ 	return 0;
+ 
++err_register_controller:
++	nand_ecc_unregister_on_host_hw_engine(&snandc->qspi->ecc_eng);
+ err_spi_init:
+ 	qcom_nandc_unalloc(snandc);
+ err_snand_alloc:
+@@ -1641,7 +1643,7 @@ static void qcom_spi_remove(struct platform_device *pdev)
+ 	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 
+ 	spi_unregister_controller(ctlr);
+-
++	nand_ecc_unregister_on_host_hw_engine(&snandc->qspi->ecc_eng);
+ 	qcom_nandc_unalloc(snandc);
+ 
+ 	clk_disable_unprepare(snandc->aon_clk);
+diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
+index f9ef7d94cebd7a..a963eed70c1d4c 100644
+--- a/drivers/tee/optee/ffa_abi.c
++++ b/drivers/tee/optee/ffa_abi.c
+@@ -657,7 +657,7 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx,
+  * with a matching configuration.
+  */
+ 
+-static bool optee_ffa_api_is_compatbile(struct ffa_device *ffa_dev,
++static bool optee_ffa_api_is_compatible(struct ffa_device *ffa_dev,
+ 					const struct ffa_ops *ops)
+ {
+ 	const struct ffa_msg_ops *msg_ops = ops->msg_ops;
+@@ -908,7 +908,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
+ 	ffa_ops = ffa_dev->ops;
+ 	notif_ops = ffa_ops->notifier_ops;
+ 
+-	if (!optee_ffa_api_is_compatbile(ffa_dev, ffa_ops))
++	if (!optee_ffa_api_is_compatible(ffa_dev, ffa_ops))
+ 		return -EINVAL;
+ 
+ 	if (!optee_ffa_exchange_caps(ffa_dev, ffa_ops, &sec_caps,
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index daf6e5cfd59ae2..2a7d253d9c554c 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -230,7 +230,7 @@ int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align,
+ 	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
+ 	if (!pages) {
+ 		rc = -ENOMEM;
+-		goto err;
++		goto err_pages;
+ 	}
+ 
+ 	for (i = 0; i < nr_pages; i++)
+@@ -243,11 +243,13 @@ int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align,
+ 		rc = shm_register(shm->ctx, shm, pages, nr_pages,
+ 				  (unsigned long)shm->kaddr);
+ 		if (rc)
+-			goto err;
++			goto err_kfree;
+ 	}
+ 
+ 	return 0;
+-err:
++err_kfree:
++	kfree(pages);
++err_pages:
+ 	free_pages_exact(shm->kaddr, shm->size);
+ 	shm->kaddr = NULL;
+ 	return rc;
+@@ -560,9 +562,13 @@ EXPORT_SYMBOL_GPL(tee_shm_get_from_id);
+  */
+ void tee_shm_put(struct tee_shm *shm)
+ {
+-	struct tee_device *teedev = shm->ctx->teedev;
++	struct tee_device *teedev;
+ 	bool do_release = false;
+ 
++	if (!shm || !shm->ctx || !shm->ctx->teedev)
++		return;
++
++	teedev = shm->ctx->teedev;
+ 	mutex_lock(&teedev->mutex);
+ 	if (refcount_dec_and_test(&shm->refcount)) {
+ 		/*
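
The relabelled error path above follows the kernel's usual unwind convention:
each label undoes exactly what succeeded before the jump, so a failing
shm_register() now frees the pages array it would previously leak. The shape
of the pattern, reduced to a sketch with illustrative names:

	a = alloc_a();
	if (!a)
		return -ENOMEM;
	b = alloc_b();
	if (!b) {
		rc = -ENOMEM;
		goto err_free_a;	/* only 'a' exists here */
	}
	rc = register_thing(a, b);
	if (rc)
		goto err_free_b;	/* both exist here */
	return 0;

	err_free_b:
		free_b(b);
	err_free_a:
		free_a(a);
		return rc;
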
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index a79fa0726f1d9c..216eff293ffec2 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -248,7 +248,7 @@ struct btrfs_inode {
+ 		u64 new_delalloc_bytes;
+ 		/*
+ 		 * The offset of the last dir index key that was logged.
+-		 * This is used only for directories.
++		 * This is used only for directories. Protected by 'log_mutex'.
+ 		 */
+ 		u64 last_dir_index_offset;
+ 	};
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 3711a5d073423d..fac4000a5bcaef 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -1483,7 +1483,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ 
+ /*
+  * Return 0 if we have submitted or queued the sector for submission.
+- * Return <0 for critical errors.
++ * Return <0 for critical errors; in that case the sector will have its dirty flag cleared.
+  *
+  * Caller should make sure filepos < i_size and handle filepos >= i_size case.
+  */
+@@ -1506,8 +1506,17 @@ static int submit_one_sector(struct btrfs_inode *inode,
+ 	ASSERT(filepos < i_size);
+ 
+ 	em = btrfs_get_extent(inode, NULL, filepos, sectorsize);
+-	if (IS_ERR(em))
++	if (IS_ERR(em)) {
++		/*
++		 * When submission fails, we should still clear the folio's
++		 * dirty flag, otherwise the folio will be written back again
++		 * without any ordered extent.
++		 */
++		btrfs_folio_clear_dirty(fs_info, folio, filepos, sectorsize);
++		btrfs_folio_set_writeback(fs_info, folio, filepos, sectorsize);
++		btrfs_folio_clear_writeback(fs_info, folio, filepos, sectorsize);
+ 		return PTR_ERR(em);
++	}
+ 
+ 	extent_offset = filepos - em->start;
+ 	em_end = btrfs_extent_map_end(em);
+@@ -1637,8 +1646,8 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ 	 * Here we set writeback and clear for the range. If the full folio
+ 	 * is no longer dirty then we clear the PAGECACHE_TAG_DIRTY tag.
+ 	 *
+-	 * If we hit any error, the corresponding sector will still be dirty
+-	 * thus no need to clear PAGECACHE_TAG_DIRTY.
++	 * If we hit any error, the corresponding sector will have its dirty
++	 * flag cleared and writeback finished, thus no need to handle the error case.
+ 	 */
+ 	if (!submitted_io && !error) {
+ 		btrfs_folio_set_writeback(fs_info, folio, start, len);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index df4c8312aae39d..ffa5d6c1594050 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7827,6 +7827,7 @@ struct inode *btrfs_alloc_inode(struct super_block *sb)
+ 	ei->last_sub_trans = 0;
+ 	ei->logged_trans = 0;
+ 	ei->delalloc_bytes = 0;
++	/* new_delalloc_bytes and last_dir_index_offset are in a union. */
+ 	ei->new_delalloc_bytes = 0;
+ 	ei->defrag_bytes = 0;
+ 	ei->disk_i_size = 0;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index afc05e406689ae..56d30ec0f52fca 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3309,6 +3309,31 @@ int btrfs_free_log_root_tree(struct btrfs_trans_handle *trans,
+ 	return 0;
+ }
+ 
++static bool mark_inode_as_not_logged(const struct btrfs_trans_handle *trans,
++				     struct btrfs_inode *inode)
++{
++	bool ret = false;
++
++	/*
++	 * Do this only if ->logged_trans is still 0 to prevent races with
++	 * concurrent logging: we may see the inode as not logged when
++	 * inode_logged() is called, but it gets logged after inode_logged()
++	 * failed to find it in the log tree, and we would end up setting
++	 * ->logged_trans to a value less than trans->transid after the
++	 * concurrent logging task has set it to trans->transid. As a
++	 * consequence, subsequent rename, unlink and link operations may end
++	 * up neither logging new names nor removing old names from the log.
++	 */
++	spin_lock(&inode->lock);
++	if (inode->logged_trans == 0)
++		inode->logged_trans = trans->transid - 1;
++	else if (inode->logged_trans == trans->transid)
++		ret = true;
++	spin_unlock(&inode->lock);
++
++	return ret;
++}
++
+ /*
+  * Check if an inode was logged in the current transaction. This correctly deals
+  * with the case where the inode was logged but has a logged_trans of 0, which
+@@ -3326,15 +3351,32 @@ static int inode_logged(const struct btrfs_trans_handle *trans,
+ 	struct btrfs_key key;
+ 	int ret;
+ 
+-	if (inode->logged_trans == trans->transid)
++	/*
++	 * Quick lockless call, since once ->logged_trans is set to the current
++	 * transaction, we never set it to a lower value anywhere else.
++	 */
++	if (data_race(inode->logged_trans) == trans->transid)
+ 		return 1;
+ 
+ 	/*
+-	 * If logged_trans is not 0, then we know the inode logged was not logged
+-	 * in this transaction, so we can return false right away.
++	 * If logged_trans is not 0 and not trans->transid, then we know the
++	 * inode was not logged in this transaction, so we can return false
++	 * right away. We take the lock to avoid a race caused by load/store
++	 * tearing with a concurrent btrfs_log_inode() call or a concurrent task
++	 * in this function further below - an update to trans->transid can be
++	 * torn into two 32-bit updates, for example, in which case we could
++	 * see a positive value that is not trans->transid and assume the inode
++	 * was not logged when it was.
+ 	 */
+-	if (inode->logged_trans > 0)
++	spin_lock(&inode->lock);
++	if (inode->logged_trans == trans->transid) {
++		spin_unlock(&inode->lock);
++		return 1;
++	} else if (inode->logged_trans > 0) {
++		spin_unlock(&inode->lock);
+ 		return 0;
++	}
++	spin_unlock(&inode->lock);
+ 
+ 	/*
+ 	 * If no log tree was created for this root in this transaction, then
+@@ -3343,10 +3385,8 @@ static int inode_logged(const struct btrfs_trans_handle *trans,
+ 	 * transaction's ID, to avoid the search below in a future call in case
+ 	 * a log tree gets created after this.
+ 	 */
+-	if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &inode->root->state)) {
+-		inode->logged_trans = trans->transid - 1;
+-		return 0;
+-	}
++	if (!test_bit(BTRFS_ROOT_HAS_LOG_TREE, &inode->root->state))
++		return mark_inode_as_not_logged(trans, inode);
+ 
+ 	/*
+ 	 * We have a log tree and the inode's logged_trans is 0. We can't tell
+@@ -3400,8 +3440,7 @@ static int inode_logged(const struct btrfs_trans_handle *trans,
+ 		 * Set logged_trans to a value greater than 0 and less then the
+ 		 * current transaction to avoid doing the search in future calls.
+ 		 */
+-		inode->logged_trans = trans->transid - 1;
+-		return 0;
++		return mark_inode_as_not_logged(trans, inode);
+ 	}
+ 
+ 	/*
+@@ -3409,20 +3448,9 @@ static int inode_logged(const struct btrfs_trans_handle *trans,
+ 	 * the current transacion's ID, to avoid future tree searches as long as
+ 	 * the inode is not evicted again.
+ 	 */
++	spin_lock(&inode->lock);
+ 	inode->logged_trans = trans->transid;
+-
+-	/*
+-	 * If it's a directory, then we must set last_dir_index_offset to the
+-	 * maximum possible value, so that the next attempt to log the inode does
+-	 * not skip checking if dir index keys found in modified subvolume tree
+-	 * leaves have been logged before, otherwise it would result in attempts
+-	 * to insert duplicate dir index keys in the log tree. This must be done
+-	 * because last_dir_index_offset is an in-memory only field, not persisted
+-	 * in the inode item or any other on-disk structure, so its value is lost
+-	 * once the inode is evicted.
+-	 */
+-	if (S_ISDIR(inode->vfs_inode.i_mode))
+-		inode->last_dir_index_offset = (u64)-1;
++	spin_unlock(&inode->lock);
+ 
+ 	return 1;
+ }
+@@ -4014,7 +4042,7 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ 
+ /*
+  * If the inode was logged before and it was evicted, then its
+- * last_dir_index_offset is (u64)-1, so we don't the value of the last index
++ * last_dir_index_offset is 0, so we don't know the value of the last index
+  * key offset. If that's the case, search for it and update the inode. This
+  * is to avoid lookups in the log tree every time we try to insert a dir index
+  * key from a leaf changed in the current transaction, and to allow us to always
+@@ -4030,7 +4058,7 @@ static int update_last_dir_index_offset(struct btrfs_inode *inode,
+ 
+ 	lockdep_assert_held(&inode->log_mutex);
+ 
+-	if (inode->last_dir_index_offset != (u64)-1)
++	if (inode->last_dir_index_offset != 0)
+ 		return 0;
+ 
+ 	if (!ctx->logged_before) {
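
The inode_logged() rework pairs a lockless fast path with a locked slow path:
->logged_trans only ever moves up to the current transaction id, so the
equality test can be done with data_race(), while every other transition is
read and written under inode->lock to rule out load/store tearing. Distilled
to a sketch:

	/* fast path: the value never moves past trans->transid */
	if (data_race(inode->logged_trans) == trans->transid)
		return 1;

	/* slow path: lock out torn reads and concurrent loggers */
	spin_lock(&inode->lock);
	/* ...decide, possibly setting inode->logged_trans... */
	spin_unlock(&inode->lock);
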
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index af5ba3ad2eb833..d7a1193332d941 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -2252,6 +2252,40 @@ static void wait_eb_writebacks(struct btrfs_block_group *block_group)
+ 	rcu_read_unlock();
+ }
+ 
++static int call_zone_finish(struct btrfs_block_group *block_group,
++			    struct btrfs_io_stripe *stripe)
++{
++	struct btrfs_device *device = stripe->dev;
++	const u64 physical = stripe->physical;
++	struct btrfs_zoned_device_info *zinfo = device->zone_info;
++	int ret;
++
++	if (!device->bdev)
++		return 0;
++
++	if (zinfo->max_active_zones == 0)
++		return 0;
++
++	if (btrfs_dev_is_sequential(device, physical)) {
++		unsigned int nofs_flags;
++
++		nofs_flags = memalloc_nofs_save();
++		ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH,
++				       physical >> SECTOR_SHIFT,
++				       zinfo->zone_size >> SECTOR_SHIFT);
++		memalloc_nofs_restore(nofs_flags);
++
++		if (ret)
++			return ret;
++	}
++
++	if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA))
++		zinfo->reserved_active_zones++;
++	btrfs_dev_clear_active_zone(device, physical);
++
++	return 0;
++}
++
+ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written)
+ {
+ 	struct btrfs_fs_info *fs_info = block_group->fs_info;
+@@ -2336,31 +2370,12 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ 	down_read(&dev_replace->rwsem);
+ 	map = block_group->physical_map;
+ 	for (i = 0; i < map->num_stripes; i++) {
+-		struct btrfs_device *device = map->stripes[i].dev;
+-		const u64 physical = map->stripes[i].physical;
+-		struct btrfs_zoned_device_info *zinfo = device->zone_info;
+-		unsigned int nofs_flags;
+-
+-		if (!device->bdev)
+-			continue;
+-
+-		if (zinfo->max_active_zones == 0)
+-			continue;
+-
+-		nofs_flags = memalloc_nofs_save();
+-		ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH,
+-				       physical >> SECTOR_SHIFT,
+-				       zinfo->zone_size >> SECTOR_SHIFT);
+-		memalloc_nofs_restore(nofs_flags);
+ 
++		ret = call_zone_finish(block_group, &map->stripes[i]);
+ 		if (ret) {
+ 			up_read(&dev_replace->rwsem);
+ 			return ret;
+ 		}
+-
+-		if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA))
+-			zinfo->reserved_active_zones++;
+-		btrfs_dev_clear_active_zone(device, physical);
+ 	}
+ 	up_read(&dev_replace->rwsem);
+ 
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index cc57367fb641d7..a07b8cf73ae271 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -2608,10 +2608,6 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ 			wakeup_bdi = inode_io_list_move_locked(inode, wb,
+ 							       dirty_list);
+ 
+-			spin_unlock(&wb->list_lock);
+-			spin_unlock(&inode->i_lock);
+-			trace_writeback_dirty_inode_enqueue(inode);
+-
+ 			/*
+ 			 * If this is the first dirty inode for this bdi,
+ 			 * we have to wake-up the corresponding bdi thread
+@@ -2621,6 +2617,11 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ 			if (wakeup_bdi &&
+ 			    (wb->bdi->capabilities & BDI_CAP_WRITEBACK))
+ 				wb_wakeup_delayed(wb);
++
++			spin_unlock(&wb->list_lock);
++			spin_unlock(&inode->i_lock);
++			trace_writeback_dirty_inode_enqueue(inode);
++
+ 			return;
+ 		}
+ 	}
+diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c
+index 12e5d1f733256a..561d3b755fe8dd 100644
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -1219,6 +1219,9 @@ static void ocfs2_clear_inode(struct inode *inode)
+ 	 * the journal is flushed before journal shutdown. Thus it is safe to
+ 	 * have inodes get cleaned up after journal shutdown.
+ 	 */
++	if (!osb->journal)
++		return;
++
+ 	jbd2_journal_release_jbd_inode(osb->journal->j_journal,
+ 				       &oi->ip_jinode);
+ }
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index e0e50914ab25f2..409bc1d11eca39 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -364,6 +364,25 @@ static const struct inode_operations proc_dir_inode_operations = {
+ 	.setattr	= proc_notify_change,
+ };
+ 
++static void pde_set_flags(struct proc_dir_entry *pde)
++{
++	const struct proc_ops *proc_ops = pde->proc_ops;
++
++	if (!proc_ops)
++		return;
++
++	if (proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
++		pde->flags |= PROC_ENTRY_PERMANENT;
++	if (proc_ops->proc_read_iter)
++		pde->flags |= PROC_ENTRY_proc_read_iter;
++#ifdef CONFIG_COMPAT
++	if (proc_ops->proc_compat_ioctl)
++		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
++#endif
++	if (proc_ops->proc_lseek)
++		pde->flags |= PROC_ENTRY_proc_lseek;
++}
++
+ /* returns the registered entry, or frees dp and returns NULL on failure */
+ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 		struct proc_dir_entry *dp)
+@@ -371,6 +390,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 	if (proc_alloc_inum(&dp->low_ino))
+ 		goto out_free_entry;
+ 
++	pde_set_flags(dp);
++
+ 	write_lock(&proc_subdir_lock);
+ 	dp->parent = dir;
+ 	if (pde_subdir_insert(dir, dp) == false) {
+@@ -559,20 +580,6 @@ struct proc_dir_entry *proc_create_reg(const char *name, umode_t mode,
+ 	return p;
+ }
+ 
+-static void pde_set_flags(struct proc_dir_entry *pde)
+-{
+-	if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
+-		pde->flags |= PROC_ENTRY_PERMANENT;
+-	if (pde->proc_ops->proc_read_iter)
+-		pde->flags |= PROC_ENTRY_proc_read_iter;
+-#ifdef CONFIG_COMPAT
+-	if (pde->proc_ops->proc_compat_ioctl)
+-		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
+-#endif
+-	if (pde->proc_ops->proc_lseek)
+-		pde->flags |= PROC_ENTRY_proc_lseek;
+-}
+-
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+ 		struct proc_dir_entry *parent,
+ 		const struct proc_ops *proc_ops, void *data)
+@@ -583,7 +590,6 @@ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+ 	if (!p)
+ 		return NULL;
+ 	p->proc_ops = proc_ops;
+-	pde_set_flags(p);
+ 	return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_data);
+@@ -634,7 +640,6 @@ struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode,
+ 	p->proc_ops = &proc_seq_ops;
+ 	p->seq_ops = ops;
+ 	p->state_size = state_size;
+-	pde_set_flags(p);
+ 	return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_seq_private);
+@@ -665,7 +670,6 @@ struct proc_dir_entry *proc_create_single_data(const char *name, umode_t mode,
+ 		return NULL;
+ 	p->proc_ops = &proc_single_ops;
+ 	p->single_show = show;
+-	pde_set_flags(p);
+ 	return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_single_data);
+diff --git a/fs/smb/client/cifs_unicode.c b/fs/smb/client/cifs_unicode.c
+index 4cc6e0896fad37..f8659d36793f17 100644
+--- a/fs/smb/client/cifs_unicode.c
++++ b/fs/smb/client/cifs_unicode.c
+@@ -629,6 +629,9 @@ cifs_strndup_to_utf16(const char *src, const int maxlen, int *utf16_len,
+ 	int len;
+ 	__le16 *dst;
+ 
++	if (!src)
++		return NULL;
++
+ 	len = cifs_local_to_utf16_bytes(src, maxlen, cp);
+ 	len += 2; /* NULL */
+ 	dst = kmalloc(len, GFP_KERNEL);
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index df366ee15456bb..e62064cb9e08a4 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -169,6 +169,7 @@ enum cpuhp_state {
+ 	CPUHP_AP_QCOM_TIMER_STARTING,
+ 	CPUHP_AP_TEGRA_TIMER_STARTING,
+ 	CPUHP_AP_ARMADA_TIMER_STARTING,
++	CPUHP_AP_LOONGARCH_ARCH_TIMER_STARTING,
+ 	CPUHP_AP_MIPS_GIC_TIMER_STARTING,
+ 	CPUHP_AP_ARC_TIMER_STARTING,
+ 	CPUHP_AP_REALTEK_TIMER_STARTING,
+diff --git a/include/linux/pgalloc.h b/include/linux/pgalloc.h
+new file mode 100644
+index 00000000000000..9174fa59bbc54d
+--- /dev/null
++++ b/include/linux/pgalloc.h
+@@ -0,0 +1,29 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _LINUX_PGALLOC_H
++#define _LINUX_PGALLOC_H
++
++#include <linux/pgtable.h>
++#include <asm/pgalloc.h>
++
++/*
++ * {pgd,p4d}_populate_kernel() are defined as macros to allow
++ * compile-time optimization based on the configured page table levels.
++ * Without this, linking may fail because callers (e.g., KASAN) may rely
++ * on calls to these functions being optimized away when passing symbols
++ * that exist only for certain page table levels.
++ */
++#define pgd_populate_kernel(addr, pgd, p4d)				\
++	do {								\
++		pgd_populate(&init_mm, pgd, p4d);			\
++		if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)	\
++			arch_sync_kernel_mappings(addr, addr);		\
++	} while (0)
++
++#define p4d_populate_kernel(addr, p4d, pud)				\
++	do {								\
++		p4d_populate(&init_mm, p4d, pud);			\
++		if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)	\
++			arch_sync_kernel_mappings(addr, addr);		\
++	} while (0)
++
++#endif /* _LINUX_PGALLOC_H */
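
Callers use these macros exactly like pgd_populate()/p4d_populate(), passing
the virtual address so that architectures which set ARCH_PAGE_TABLE_SYNC_MASK
get their kernel mappings synchronized. A sketch mirroring the
pcpu_populate_pte() conversion later in this patch:

	pgd_t *pgd = pgd_offset_k(addr);

	if (pgd_none(*pgd)) {
		p4d_t *p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
		pgd_populate_kernel(addr, pgd, p4d);	/* populate + arch sync */
	}
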
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 0b6e1f781d86d7..6dbf8303bd0e11 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -1695,6 +1695,22 @@ static inline int pmd_protnone(pmd_t pmd)
+ }
+ #endif /* CONFIG_NUMA_BALANCING */
+ 
++/*
++ * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
++ * and let generic vmalloc, ioremap and page table update code know when
++ * arch_sync_kernel_mappings() needs to be called.
++ */
++#ifndef ARCH_PAGE_TABLE_SYNC_MASK
++#define ARCH_PAGE_TABLE_SYNC_MASK 0
++#endif
++
++/*
++ * There is no default implementation for arch_sync_kernel_mappings(). The
++ * compiler is relied upon to optimize the calls out if
++ * ARCH_PAGE_TABLE_SYNC_MASK is 0.
++ */
++void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
++
+ #endif /* CONFIG_MMU */
+ 
+ #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+@@ -1815,10 +1831,11 @@ static inline bool arch_has_pfn_modify_check(void)
+ /*
+  * Page Table Modification bits for pgtbl_mod_mask.
+  *
+- * These are used by the p?d_alloc_track*() set of functions an in the generic
+- * vmalloc/ioremap code to track at which page-table levels entries have been
+- * modified. Based on that the code can better decide when vmalloc and ioremap
+- * mapping changes need to be synchronized to other page-tables in the system.
++ * These are used by the p?d_alloc_track*() and p*d_populate_kernel()
++ * functions in the generic vmalloc, ioremap and page table update code
++ * to track at which page-table levels entries have been modified.
++ * Based on that the code can better decide when page table changes need
++ * to be synchronized to other page-tables in the system.
+  */
+ #define		__PGTBL_PGD_MODIFIED	0
+ #define		__PGTBL_P4D_MODIFIED	1
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index fdc9aeb74a446b..2759dac6be44ea 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+ int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
+ 		     struct page **pages, unsigned int page_shift);
+ 
+-/*
+- * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
+- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
+- * needs to be called.
+- */
+-#ifndef ARCH_PAGE_TABLE_SYNC_MASK
+-#define ARCH_PAGE_TABLE_SYNC_MASK 0
+-#endif
+-
+-/*
+- * There is no default implementation for arch_sync_kernel_mappings(). It is
+- * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
+- * is 0.
+- */
+-void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
+-
+ /*
+  *	Lowlevel-APIs (not for driver use!)
+  */
+diff --git a/include/net/sock.h b/include/net/sock.h
+index e3ab203456858a..a348ae145eda43 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -285,6 +285,7 @@ struct sk_filter;
+   *	@sk_ack_backlog: current listen backlog
+   *	@sk_max_ack_backlog: listen backlog set in listen()
+   *	@sk_uid: user id of owner
++  *	@sk_ino: inode number (zero if orphaned)
+   *	@sk_prefer_busy_poll: prefer busypolling over softirq processing
+   *	@sk_busy_poll_budget: napi processing budget when busypolling
+   *	@sk_priority: %SO_PRIORITY setting
+@@ -518,6 +519,7 @@ struct sock {
+ 	u32			sk_ack_backlog;
+ 	u32			sk_max_ack_backlog;
+ 	kuid_t			sk_uid;
++	unsigned long		sk_ino;
+ 	spinlock_t		sk_peer_lock;
+ 	int			sk_bind_phc;
+ 	struct pid		*sk_peer_pid;
+@@ -2056,6 +2058,10 @@ static inline int sk_rx_queue_get(const struct sock *sk)
+ static inline void sk_set_socket(struct sock *sk, struct socket *sock)
+ {
+ 	sk->sk_socket = sock;
++	if (sock) {
++		WRITE_ONCE(sk->sk_uid, SOCK_INODE(sock)->i_uid);
++		WRITE_ONCE(sk->sk_ino, SOCK_INODE(sock)->i_ino);
++	}
+ }
+ 
+ static inline wait_queue_head_t *sk_sleep(struct sock *sk)
+@@ -2077,6 +2083,7 @@ static inline void sock_orphan(struct sock *sk)
+ 	sk_set_socket(sk, NULL);
+ 	sk->sk_wq  = NULL;
+ 	/* Note: sk_uid is unchanged. */
++	WRITE_ONCE(sk->sk_ino, 0);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+@@ -2087,12 +2094,15 @@ static inline void sock_graft(struct sock *sk, struct socket *parent)
+ 	rcu_assign_pointer(sk->sk_wq, &parent->wq);
+ 	parent->sk = sk;
+ 	sk_set_socket(sk, parent);
+-	WRITE_ONCE(sk->sk_uid, SOCK_INODE(parent)->i_uid);
+ 	security_sock_graft(sk, parent);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+-kuid_t sock_i_uid(struct sock *sk);
++static inline unsigned long sock_i_ino(const struct sock *sk)
++{
++	/* Paired with WRITE_ONCE() in sock_graft() and sock_orphan() */
++	return READ_ONCE(sk->sk_ino);
++}
+ 
+ static inline kuid_t sk_uid(const struct sock *sk)
+ {
+@@ -2100,9 +2110,6 @@ static inline kuid_t sk_uid(const struct sock *sk)
+ 	return READ_ONCE(sk->sk_uid);
+ }
+ 
+-unsigned long __sock_i_ino(struct sock *sk);
+-unsigned long sock_i_ino(struct sock *sk);
+-
+ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
+ {
+ 	return sk ? sk_uid(sk) : make_kuid(net->user_ns, 0);
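
Caching the inode number in the sock itself turns sock_i_ino() into a single
annotated load: the WRITE_ONCE() in sk_set_socket()/sock_orphan() pairs with
the READ_ONCE() in the new inline reader, so sk_callback_lock is no longer
needed. The pairing, in isolation:

	/* writer, under sk_callback_lock: */
	WRITE_ONCE(sk->sk_ino, SOCK_INODE(sock)->i_ino);

	/* lockless reader: */
	return READ_ONCE(sk->sk_ino);	/* pairs with the writer above */
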
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 2beb30be2c5f8e..8e0eb832bc01ec 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -1784,10 +1784,12 @@ enum nft_synproxy_attributes {
+  * enum nft_device_attributes - nf_tables device netlink attributes
+  *
+  * @NFTA_DEVICE_NAME: name of this device (NLA_STRING)
++ * @NFTA_DEVICE_PREFIX: device name prefix, a simple wildcard (NLA_STRING)
+  */
+ enum nft_devices_attributes {
+ 	NFTA_DEVICE_UNSPEC,
+ 	NFTA_DEVICE_NAME,
++	NFTA_DEVICE_PREFIX,
+ 	__NFTA_DEVICE_MAX
+ };
+ #define NFTA_DEVICE_MAX		(__NFTA_DEVICE_MAX - 1)
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index e3f42018ed46fa..f7708fe2c45722 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -1326,7 +1326,7 @@ int audit_compare_dname_path(const struct qstr *dname, const char *path, int par
+ 
+ 	/* handle trailing slashes */
+ 	pathlen -= parentlen;
+-	while (p[pathlen - 1] == '/')
++	while (pathlen > 0 && p[pathlen - 1] == '/')
+ 		pathlen--;
+ 
+ 	if (pathlen != dlen)
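
The added pathlen > 0 check stops the trailing-slash loop from reading one
byte before the buffer when nothing remains after the parent portion, e.g.:

	const char *p = "///";
	int pathlen = 3, parentlen = 3;

	pathlen -= parentlen;				/* 0 */
	while (pathlen > 0 && p[pathlen - 1] == '/')	/* guard: no p[-1] read */
		pathlen--;
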
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index b958fe48e02051..ac36069bd25217 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -2212,6 +2212,8 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
+ 		goto unlock;
+ 
+ 	hop_masks = bsearch(&k, k.masks, sched_domains_numa_levels, sizeof(k.masks[0]), hop_cmp);
++	if (!hop_masks)
++		goto unlock;
+ 	hop = hop_masks	- k.masks;
+ 
+ 	ret = hop ?
+diff --git a/mm/kasan/init.c b/mm/kasan/init.c
+index ced6b29fcf763f..8fce3370c84ea6 100644
+--- a/mm/kasan/init.c
++++ b/mm/kasan/init.c
+@@ -13,9 +13,9 @@
+ #include <linux/mm.h>
+ #include <linux/pfn.h>
+ #include <linux/slab.h>
++#include <linux/pgalloc.h>
+ 
+ #include <asm/page.h>
+-#include <asm/pgalloc.h>
+ 
+ #include "kasan.h"
+ 
+@@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
+ 			pud_t *pud;
+ 			pmd_t *pmd;
+ 
+-			p4d_populate(&init_mm, p4d,
++			p4d_populate_kernel(addr, p4d,
+ 					lm_alias(kasan_early_shadow_pud));
+ 			pud = pud_offset(p4d, addr);
+ 			pud_populate(&init_mm, pud,
+@@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
+ 			} else {
+ 				p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
+ 				pud_init(p);
+-				p4d_populate(&init_mm, p4d, p);
++				p4d_populate_kernel(addr, p4d, p);
+ 			}
+ 		}
+ 		zero_pud_populate(p4d, addr, next);
+@@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
+ 			 * puds,pmds, so pgd_populate(), pud_populate()
+ 			 * is noops.
+ 			 */
+-			pgd_populate(&init_mm, pgd,
++			pgd_populate_kernel(addr, pgd,
+ 					lm_alias(kasan_early_shadow_p4d));
+ 			p4d = p4d_offset(pgd, addr);
+-			p4d_populate(&init_mm, p4d,
++			p4d_populate_kernel(addr, p4d,
+ 					lm_alias(kasan_early_shadow_pud));
+ 			pud = pud_offset(p4d, addr);
+ 			pud_populate(&init_mm, pud,
+@@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
+ 				if (!p)
+ 					return -ENOMEM;
+ 			} else {
+-				pgd_populate(&init_mm, pgd,
++				pgd_populate_kernel(addr, pgd,
+ 					early_alloc(PAGE_SIZE, NUMA_NO_NODE));
+ 			}
+ 		}
+diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
+index c9cdafdde13234..00ca3c4def3799 100644
+--- a/mm/kasan/kasan_test_c.c
++++ b/mm/kasan/kasan_test_c.c
+@@ -1578,9 +1578,11 @@ static void kasan_strings(struct kunit *test)
+ 
+ 	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
++	OPTIMIZER_HIDE_VAR(ptr);
+ 
+ 	src = kmalloc(KASAN_GRANULE_SIZE, GFP_KERNEL | __GFP_ZERO);
+ 	strscpy(src, "f0cacc1a0000000", KASAN_GRANULE_SIZE);
++	OPTIMIZER_HIDE_VAR(src);
+ 
+ 	/*
+ 	 * Make sure that strscpy() does not trigger KASAN if it overreads into
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 84265983f239c1..1ac56ceb29b6b1 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -437,9 +437,15 @@ static struct kmemleak_object *__lookup_object(unsigned long ptr, int alias,
+ 		else if (untagged_objp == untagged_ptr || alias)
+ 			return object;
+ 		else {
++			/*
++			 * Defer printk while the kmemleak_lock is held to
++			 * avoid deadlock.
++			 */
++			printk_deferred_enter();
+ 			kmemleak_warn("Found object by alias at 0x%08lx\n",
+ 				      ptr);
+ 			dump_object_info(object);
++			printk_deferred_exit();
+ 			break;
+ 		}
+ 	}
+@@ -736,6 +742,11 @@ static int __link_object(struct kmemleak_object *object, unsigned long ptr,
+ 		else if (untagged_objp + parent->size <= untagged_ptr)
+ 			link = &parent->rb_node.rb_right;
+ 		else {
++			/*
++			 * Defer printk while the kmemleak_lock is held to
++			 * avoid deadlock.
++			 */
++			printk_deferred_enter();
+ 			kmemleak_stop("Cannot insert 0x%lx into the object search tree (overlaps existing)\n",
+ 				      ptr);
+ 			/*
+@@ -743,6 +754,7 @@ static int __link_object(struct kmemleak_object *object, unsigned long ptr,
+ 			 * be freed while the kmemleak_lock is held.
+ 			 */
+ 			dump_object_info(parent);
++			printk_deferred_exit();
+ 			return -EEXIST;
+ 		}
+ 	}
+@@ -856,13 +868,8 @@ static void delete_object_part(unsigned long ptr, size_t size,
+ 
+ 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
+ 	object = __find_and_remove_object(ptr, 1, objflags);
+-	if (!object) {
+-#ifdef DEBUG
+-		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
+-			      ptr, size);
+-#endif
++	if (!object)
+ 		goto unlock;
+-	}
+ 
+ 	/*
+ 	 * Create one or two objects that may result from the memory block
+@@ -882,8 +889,14 @@ static void delete_object_part(unsigned long ptr, size_t size,
+ 
+ unlock:
+ 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
+-	if (object)
++	if (object) {
+ 		__delete_object(object);
++	} else {
++#ifdef DEBUG
++		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
++			      ptr, size);
++#endif
++	}
+ 
+ out:
+ 	if (object_l)
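
printk_deferred_enter()/printk_deferred_exit() make the warnings above safe
to emit under kmemleak_lock: console output is buffered and flushed later
rather than printed synchronously, presumably because a synchronous printk
can re-enter paths that take the same lock. The shape of the critical
section:

	raw_spin_lock_irqsave(&kmemleak_lock, flags);
	...
	printk_deferred_enter();	/* buffer, don't hit the console here */
	kmemleak_warn("...");
	printk_deferred_exit();
	...
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
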
+diff --git a/mm/percpu.c b/mm/percpu.c
+index b35494c8ede280..ce460711fa4e7a 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -3108,7 +3108,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
+ #endif /* BUILD_EMBED_FIRST_CHUNK */
+ 
+ #ifdef BUILD_PAGE_FIRST_CHUNK
+-#include <asm/pgalloc.h>
++#include <linux/pgalloc.h>
+ 
+ #ifndef P4D_TABLE_SIZE
+ #define P4D_TABLE_SIZE PAGE_SIZE
+@@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
+ 
+ 	if (pgd_none(*pgd)) {
+ 		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
+-		pgd_populate(&init_mm, pgd, p4d);
++		pgd_populate_kernel(addr, pgd, p4d);
+ 	}
+ 
+ 	p4d = p4d_offset(pgd, addr);
+ 	if (p4d_none(*p4d)) {
+ 		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
+-		p4d_populate(&init_mm, p4d, pud);
++		p4d_populate_kernel(addr, p4d, pud);
+ 	}
+ 
+ 	pud = pud_offset(p4d, addr);
+diff --git a/mm/slub.c b/mm/slub.c
+index 394646988b1c2a..09b6404ac57529 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -926,19 +926,19 @@ static struct track *get_track(struct kmem_cache *s, void *object,
+ }
+ 
+ #ifdef CONFIG_STACKDEPOT
+-static noinline depot_stack_handle_t set_track_prepare(void)
++static noinline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags)
+ {
+ 	depot_stack_handle_t handle;
+ 	unsigned long entries[TRACK_ADDRS_COUNT];
+ 	unsigned int nr_entries;
+ 
+ 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
+-	handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
++	handle = stack_depot_save(entries, nr_entries, gfp_flags);
+ 
+ 	return handle;
+ }
+ #else
+-static inline depot_stack_handle_t set_track_prepare(void)
++static inline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags)
+ {
+ 	return 0;
+ }
+@@ -960,9 +960,9 @@ static void set_track_update(struct kmem_cache *s, void *object,
+ }
+ 
+ static __always_inline void set_track(struct kmem_cache *s, void *object,
+-				      enum track_item alloc, unsigned long addr)
++				      enum track_item alloc, unsigned long addr, gfp_t gfp_flags)
+ {
+-	depot_stack_handle_t handle = set_track_prepare();
++	depot_stack_handle_t handle = set_track_prepare(gfp_flags);
+ 
+ 	set_track_update(s, object, alloc, addr, handle);
+ }
+@@ -1104,7 +1104,12 @@ static void object_err(struct kmem_cache *s, struct slab *slab,
+ 		return;
+ 
+ 	slab_bug(s, reason);
+-	print_trailer(s, slab, object);
++	if (!object || !check_valid_pointer(s, slab, object)) {
++		print_slab_info(slab);
++		pr_err("Invalid pointer 0x%p\n", object);
++	} else {
++		print_trailer(s, slab, object);
++	}
+ 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+ 
+ 	WARN_ON(1);
+@@ -1885,9 +1890,9 @@ static inline bool free_debug_processing(struct kmem_cache *s,
+ static inline void slab_pad_check(struct kmem_cache *s, struct slab *slab) {}
+ static inline int check_object(struct kmem_cache *s, struct slab *slab,
+ 			void *object, u8 val) { return 1; }
+-static inline depot_stack_handle_t set_track_prepare(void) { return 0; }
++static inline depot_stack_handle_t set_track_prepare(gfp_t gfp_flags) { return 0; }
+ static inline void set_track(struct kmem_cache *s, void *object,
+-			     enum track_item alloc, unsigned long addr) {}
++			     enum track_item alloc, unsigned long addr, gfp_t gfp_flags) {}
+ static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
+ 					struct slab *slab) {}
+ static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
+@@ -3844,9 +3849,14 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+ 			 * For debug caches here we had to go through
+ 			 * alloc_single_from_partial() so just store the
+ 			 * tracking info and return the object.
++			 *
++			 * Due to disabled preemption we need to disallow
++			 * blocking. The flags are further adjusted by
++			 * gfp_nested_mask() in stack_depot itself.
+ 			 */
+ 			if (s->flags & SLAB_STORE_USER)
+-				set_track(s, freelist, TRACK_ALLOC, addr);
++				set_track(s, freelist, TRACK_ALLOC, addr,
++					  gfpflags & ~(__GFP_DIRECT_RECLAIM));
+ 
+ 			return freelist;
+ 		}
+@@ -3878,7 +3888,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+ 			goto new_objects;
+ 
+ 		if (s->flags & SLAB_STORE_USER)
+-			set_track(s, freelist, TRACK_ALLOC, addr);
++			set_track(s, freelist, TRACK_ALLOC, addr,
++				  gfpflags & ~(__GFP_DIRECT_RECLAIM));
+ 
+ 		return freelist;
+ 	}
+@@ -4389,8 +4400,12 @@ static noinline void free_to_partial_list(
+ 	unsigned long flags;
+ 	depot_stack_handle_t handle = 0;
+ 
++	/*
++	 * We cannot use GFP_NOWAIT as there are callsites where waking up
++	 * kswapd could deadlock.
++	 */
+ 	if (s->flags & SLAB_STORE_USER)
+-		handle = set_track_prepare();
++		handle = set_track_prepare(__GFP_NOWARN);
+ 
+ 	spin_lock_irqsave(&n->list_lock, flags);
+ 
+diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
+index fd2ab5118e13df..dbd8daccade28c 100644
+--- a/mm/sparse-vmemmap.c
++++ b/mm/sparse-vmemmap.c
+@@ -27,9 +27,9 @@
+ #include <linux/spinlock.h>
+ #include <linux/vmalloc.h>
+ #include <linux/sched.h>
++#include <linux/pgalloc.h>
+ 
+ #include <asm/dma.h>
+-#include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
+ 
+ #include "hugetlb_vmemmap.h"
+@@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
+ 		if (!p)
+ 			return NULL;
+ 		pud_init(p);
+-		p4d_populate(&init_mm, p4d, p);
++		p4d_populate_kernel(addr, p4d, p);
+ 	}
+ 	return p4d;
+ }
+@@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
+ 		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
+ 		if (!p)
+ 			return NULL;
+-		pgd_populate(&init_mm, pgd, p);
++		pgd_populate_kernel(addr, pgd, p);
+ 	}
+ 	return pgd;
+ }
+@@ -578,11 +578,6 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn,
+ 	if (r < 0)
+ 		return NULL;
+ 
+-	if (system_state == SYSTEM_BOOTING)
+-		memmap_boot_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE));
+-	else
+-		memmap_pages_add(DIV_ROUND_UP(end - start, PAGE_SIZE));
+-
+ 	return pfn_to_page(pfn);
+ }
+ 
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 3c012cf83cc2b4..e6075b62240707 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -454,9 +454,6 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
+ 	 */
+ 	sparsemap_buf = memmap_alloc(size, section_map_size(), addr, nid, true);
+ 	sparsemap_buf_end = sparsemap_buf + size;
+-#ifndef CONFIG_SPARSEMEM_VMEMMAP
+-	memmap_boot_pages_add(DIV_ROUND_UP(size, PAGE_SIZE));
+-#endif
+ }
+ 
+ static void __init sparse_buffer_fini(void)
+@@ -567,6 +564,8 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
+ 				sparse_buffer_fini();
+ 				goto failed;
+ 			}
++			memmap_boot_pages_add(DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
++							   PAGE_SIZE));
+ 			sparse_init_early_section(nid, map, pnum, 0);
+ 		}
+ 	}
+@@ -680,7 +679,6 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
+ 	unsigned long start = (unsigned long) pfn_to_page(pfn);
+ 	unsigned long end = start + nr_pages * sizeof(struct page);
+ 
+-	memmap_pages_add(-1L * (DIV_ROUND_UP(end - start, PAGE_SIZE)));
+ 	vmemmap_free(start, end, altmap);
+ }
+ static void free_map_bootmem(struct page *memmap)
+@@ -856,10 +854,14 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+ 	 * The memmap of early sections is always fully populated. See
+ 	 * section_activate() and pfn_valid() .
+ 	 */
+-	if (!section_is_early)
++	if (!section_is_early) {
++		memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
+ 		depopulate_section_memmap(pfn, nr_pages, altmap);
+-	else if (memmap)
++	} else if (memmap) {
++		memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page),
++							  PAGE_SIZE)));
+ 		free_map_bootmem(memmap);
++	}
+ 
+ 	if (empty)
+ 		ms->section_mem_map = (unsigned long)NULL;
+@@ -904,6 +906,7 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
+ 		section_deactivate(pfn, nr_pages, altmap);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
++	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
+ 
+ 	return memmap;
+ }
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 2f4b98f2c8adf5..563449d7b092b9 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1453,10 +1453,15 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ 		folio_unlock(src_folio);
+ 		folio_put(src_folio);
+ 	}
+-	if (dst_pte)
+-		pte_unmap(dst_pte);
++	/*
++	 * Unmap in reverse order (LIFO) to maintain proper kmap_local
++	 * index ordering when CONFIG_HIGHPTE is enabled. We mapped dst_pte
++	 * first, then src_pte, so we must unmap src_pte first, then dst_pte.
++	 */
+ 	if (src_pte)
+ 		pte_unmap(src_pte);
++	if (dst_pte)
++		pte_unmap(dst_pte);
+ 	mmu_notifier_invalidate_range_end(&range);
+ 	if (si)
+ 		put_swap_device(si);
+diff --git a/net/appletalk/atalk_proc.c b/net/appletalk/atalk_proc.c
+index 9c1241292d1d2e..01787fb6a7bce2 100644
+--- a/net/appletalk/atalk_proc.c
++++ b/net/appletalk/atalk_proc.c
+@@ -181,7 +181,7 @@ static int atalk_seq_socket_show(struct seq_file *seq, void *v)
+ 		   sk_wmem_alloc_get(s),
+ 		   sk_rmem_alloc_get(s),
+ 		   s->sk_state,
+-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(s)));
++		   from_kuid_munged(seq_user_ns(seq), sk_uid(s)));
+ out:
+ 	return 0;
+ }
+diff --git a/net/atm/resources.c b/net/atm/resources.c
+index b19d851e1f4439..7c6fdedbcf4e5c 100644
+--- a/net/atm/resources.c
++++ b/net/atm/resources.c
+@@ -112,7 +112,9 @@ struct atm_dev *atm_dev_register(const char *type, struct device *parent,
+ 
+ 	if (atm_proc_dev_register(dev) < 0) {
+ 		pr_err("atm_proc_dev_register failed for dev %s\n", type);
+-		goto out_fail;
++		mutex_unlock(&atm_dev_mutex);
++		kfree(dev);
++		return NULL;
+ 	}
+ 
+ 	if (atm_register_sysfs(dev, parent) < 0) {
+@@ -128,7 +130,7 @@ struct atm_dev *atm_dev_register(const char *type, struct device *parent,
+ 	return dev;
+ 
+ out_fail:
+-	kfree(dev);
++	put_device(&dev->class_dev);
+ 	dev = NULL;
+ 	goto out;
+ }
+diff --git a/net/ax25/ax25_in.c b/net/ax25/ax25_in.c
+index 1cac25aca63784..f2d66af8635957 100644
+--- a/net/ax25/ax25_in.c
++++ b/net/ax25/ax25_in.c
+@@ -433,6 +433,10 @@ static int ax25_rcv(struct sk_buff *skb, struct net_device *dev,
+ int ax25_kiss_rcv(struct sk_buff *skb, struct net_device *dev,
+ 		  struct packet_type *ptype, struct net_device *orig_dev)
+ {
++	skb = skb_share_check(skb, GFP_ATOMIC);
++	if (!skb)
++		return NET_RX_DROP;
++
+ 	skb_orphan(skb);
+ 
+ 	if (!net_eq(dev_net(dev), &init_net)) {
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index 9f56308779cc3a..af97d077369f9b 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -1687,7 +1687,12 @@ batadv_nc_skb_decode_packet(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ 
+ 	coding_len = ntohs(coded_packet_tmp.coded_len);
+ 
+-	if (coding_len > skb->len)
++	/* ensure dst buffer is large enough (payload only) */
++	if (coding_len + h_size > skb->len)
++		return NULL;
++
++	/* ensure src buffer is large enough (payload only) */
++	if (coding_len + h_size > nc_packet->skb->len)
+ 		return NULL;
+ 
+ 	/* Here the magic is reversed:
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 6ad2f72f53f4e5..ee9bf84c88a70b 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -815,7 +815,7 @@ static int bt_seq_show(struct seq_file *seq, void *v)
+ 			   refcount_read(&sk->sk_refcnt),
+ 			   sk_rmem_alloc_get(sk),
+ 			   sk_wmem_alloc_get(sk),
+-			   from_kuid(seq_user_ns(seq), sock_i_uid(sk)),
++			   from_kuid(seq_user_ns(seq), sk_uid(sk)),
+ 			   sock_i_ino(sk),
+ 			   bt->parent ? sock_i_ino(bt->parent) : 0LU);
+ 
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 749bba1512eb12..a25439f1eeac28 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -3344,7 +3344,7 @@ static int hci_powered_update_adv_sync(struct hci_dev *hdev)
+ 	 * advertising data. This also applies to the case
+ 	 * where BR/EDR was toggled during the AUTO_OFF phase.
+ 	 */
+-	if (hci_dev_test_flag(hdev, HCI_ADVERTISING) ||
++	if (hci_dev_test_flag(hdev, HCI_ADVERTISING) &&
+ 	    list_empty(&hdev->adv_instances)) {
+ 		if (ext_adv_capable(hdev)) {
+ 			err = hci_setup_ext_adv_instance_sync(hdev, 0x00);
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 82d943c4cb5059..05b7480970f721 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1422,7 +1422,10 @@ static int l2cap_sock_release(struct socket *sock)
+ 	if (!sk)
+ 		return 0;
+ 
++	lock_sock_nested(sk, L2CAP_NESTING_PARENT);
+ 	l2cap_sock_cleanup_listen(sk);
++	release_sock(sk);
++
+ 	bt_sock_unlink(&l2cap_sk_list, sk);
+ 
+ 	err = l2cap_sock_shutdown(sock, SHUT_RDWR);
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 94cbe967d1c163..083e2fe96441d4 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -626,9 +626,6 @@ static unsigned int br_nf_local_in(void *priv,
+ 		break;
+ 	}
+ 
+-	ct = container_of(nfct, struct nf_conn, ct_general);
+-	WARN_ON_ONCE(!nf_ct_is_confirmed(ct));
+-
+ 	return ret;
+ }
+ #endif
+diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
+index 7d426a8e29f30b..f112156db587ba 100644
+--- a/net/core/gen_estimator.c
++++ b/net/core/gen_estimator.c
+@@ -90,10 +90,12 @@ static void est_timer(struct timer_list *t)
+ 	rate = (b_packets - est->last_packets) << (10 - est->intvl_log);
+ 	rate = (rate >> est->ewma_log) - (est->avpps >> est->ewma_log);
+ 
++	preempt_disable_nested();
+ 	write_seqcount_begin(&est->seq);
+ 	est->avbps += brate;
+ 	est->avpps += rate;
+ 	write_seqcount_end(&est->seq);
++	preempt_enable_nested();
+ 
+ 	est->last_bytes = b_bytes;
+ 	est->last_packets = b_packets;
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 9fae9239f93934..10c1df62338be0 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2788,39 +2788,6 @@ void sock_pfree(struct sk_buff *skb)
+ EXPORT_SYMBOL(sock_pfree);
+ #endif /* CONFIG_INET */
+ 
+-kuid_t sock_i_uid(struct sock *sk)
+-{
+-	kuid_t uid;
+-
+-	read_lock_bh(&sk->sk_callback_lock);
+-	uid = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_uid : GLOBAL_ROOT_UID;
+-	read_unlock_bh(&sk->sk_callback_lock);
+-	return uid;
+-}
+-EXPORT_SYMBOL(sock_i_uid);
+-
+-unsigned long __sock_i_ino(struct sock *sk)
+-{
+-	unsigned long ino;
+-
+-	read_lock(&sk->sk_callback_lock);
+-	ino = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_ino : 0;
+-	read_unlock(&sk->sk_callback_lock);
+-	return ino;
+-}
+-EXPORT_SYMBOL(__sock_i_ino);
+-
+-unsigned long sock_i_ino(struct sock *sk)
+-{
+-	unsigned long ino;
+-
+-	local_bh_disable();
+-	ino = __sock_i_ino(sk);
+-	local_bh_enable();
+-	return ino;
+-}
+-EXPORT_SYMBOL(sock_i_ino);
+-
+ /*
+  * Allocate a skb from the socket's send buffer.
+  */
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index c47d3828d4f656..942a887bf08930 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -340,14 +340,13 @@ static void inetdev_destroy(struct in_device *in_dev)
+ 
+ static int __init inet_blackhole_dev_init(void)
+ {
+-	int err = 0;
++	struct in_device *in_dev;
+ 
+ 	rtnl_lock();
+-	if (!inetdev_init(blackhole_netdev))
+-		err = -ENOMEM;
++	in_dev = inetdev_init(blackhole_netdev);
+ 	rtnl_unlock();
+ 
+-	return err;
++	return PTR_ERR_OR_ZERO(in_dev);
+ }
+ late_initcall(inet_blackhole_dev_init);
+ 
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 717cb7d3607a1c..14beae97f81b3a 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -797,11 +797,12 @@ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ 	struct sk_buff *cloned_skb = NULL;
+ 	struct ip_options opts = { 0 };
+ 	enum ip_conntrack_info ctinfo;
++	enum ip_conntrack_dir dir;
+ 	struct nf_conn *ct;
+ 	__be32 orig_ip;
+ 
+ 	ct = nf_ct_get(skb_in, &ctinfo);
+-	if (!ct || !(ct->status & IPS_SRC_NAT)) {
++	if (!ct || !(READ_ONCE(ct->status) & IPS_NAT_MASK)) {
+ 		__icmp_send(skb_in, type, code, info, &opts);
+ 		return;
+ 	}
+@@ -816,7 +817,8 @@ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+ 		goto out;
+ 
+ 	orig_ip = ip_hdr(skb_in)->saddr;
+-	ip_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.ip;
++	dir = CTINFO2DIR(ctinfo);
++	ip_hdr(skb_in)->saddr = ct->tuplehash[dir].tuple.src.u3.ip;
+ 	__icmp_send(skb_in, type, code, info, &opts);
+ 	ip_hdr(skb_in)->saddr = orig_ip;
+ out:
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 46750c96d08ea3..f4157d26ec9e41 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -168,7 +168,7 @@ static bool inet_use_bhash2_on_bind(const struct sock *sk)
+ }
+ 
+ static bool inet_bind_conflict(const struct sock *sk, struct sock *sk2,
+-			       kuid_t sk_uid, bool relax,
++			       kuid_t uid, bool relax,
+ 			       bool reuseport_cb_ok, bool reuseport_ok)
+ {
+ 	int bound_dev_if2;
+@@ -185,12 +185,12 @@ static bool inet_bind_conflict(const struct sock *sk, struct sock *sk2,
+ 			if (!relax || (!reuseport_ok && sk->sk_reuseport &&
+ 				       sk2->sk_reuseport && reuseport_cb_ok &&
+ 				       (sk2->sk_state == TCP_TIME_WAIT ||
+-					uid_eq(sk_uid, sock_i_uid(sk2)))))
++					uid_eq(uid, sk_uid(sk2)))))
+ 				return true;
+ 		} else if (!reuseport_ok || !sk->sk_reuseport ||
+ 			   !sk2->sk_reuseport || !reuseport_cb_ok ||
+ 			   (sk2->sk_state != TCP_TIME_WAIT &&
+-			    !uid_eq(sk_uid, sock_i_uid(sk2)))) {
++			    !uid_eq(uid, sk_uid(sk2)))) {
+ 			return true;
+ 		}
+ 	}
+@@ -198,7 +198,7 @@ static bool inet_bind_conflict(const struct sock *sk, struct sock *sk2,
+ }
+ 
+ static bool __inet_bhash2_conflict(const struct sock *sk, struct sock *sk2,
+-				   kuid_t sk_uid, bool relax,
++				   kuid_t uid, bool relax,
+ 				   bool reuseport_cb_ok, bool reuseport_ok)
+ {
+ 	if (ipv6_only_sock(sk2)) {
+@@ -211,20 +211,20 @@ static bool __inet_bhash2_conflict(const struct sock *sk, struct sock *sk2,
+ #endif
+ 	}
+ 
+-	return inet_bind_conflict(sk, sk2, sk_uid, relax,
++	return inet_bind_conflict(sk, sk2, uid, relax,
+ 				  reuseport_cb_ok, reuseport_ok);
+ }
+ 
+ static bool inet_bhash2_conflict(const struct sock *sk,
+ 				 const struct inet_bind2_bucket *tb2,
+-				 kuid_t sk_uid,
++				 kuid_t uid,
+ 				 bool relax, bool reuseport_cb_ok,
+ 				 bool reuseport_ok)
+ {
+ 	struct sock *sk2;
+ 
+ 	sk_for_each_bound(sk2, &tb2->owners) {
+-		if (__inet_bhash2_conflict(sk, sk2, sk_uid, relax,
++		if (__inet_bhash2_conflict(sk, sk2, uid, relax,
+ 					   reuseport_cb_ok, reuseport_ok))
+ 			return true;
+ 	}
+@@ -242,8 +242,8 @@ static int inet_csk_bind_conflict(const struct sock *sk,
+ 				  const struct inet_bind2_bucket *tb2, /* may be null */
+ 				  bool relax, bool reuseport_ok)
+ {
+-	kuid_t uid = sock_i_uid((struct sock *)sk);
+ 	struct sock_reuseport *reuseport_cb;
++	kuid_t uid = sk_uid(sk);
+ 	bool reuseport_cb_ok;
+ 	struct sock *sk2;
+ 
+@@ -287,11 +287,11 @@ static int inet_csk_bind_conflict(const struct sock *sk,
+ static bool inet_bhash2_addr_any_conflict(const struct sock *sk, int port, int l3mdev,
+ 					  bool relax, bool reuseport_ok)
+ {
+-	kuid_t uid = sock_i_uid((struct sock *)sk);
+ 	const struct net *net = sock_net(sk);
+ 	struct sock_reuseport *reuseport_cb;
+ 	struct inet_bind_hashbucket *head2;
+ 	struct inet_bind2_bucket *tb2;
++	kuid_t uid = sk_uid(sk);
+ 	bool conflict = false;
+ 	bool reuseport_cb_ok;
+ 
+@@ -425,15 +425,13 @@ inet_csk_find_open_port(const struct sock *sk, struct inet_bind_bucket **tb_ret,
+ static inline int sk_reuseport_match(struct inet_bind_bucket *tb,
+ 				     struct sock *sk)
+ {
+-	kuid_t uid = sock_i_uid(sk);
+-
+ 	if (tb->fastreuseport <= 0)
+ 		return 0;
+ 	if (!sk->sk_reuseport)
+ 		return 0;
+ 	if (rcu_access_pointer(sk->sk_reuseport_cb))
+ 		return 0;
+-	if (!uid_eq(tb->fastuid, uid))
++	if (!uid_eq(tb->fastuid, sk_uid(sk)))
+ 		return 0;
+ 	/* We only need to check the rcv_saddr if this tb was once marked
+ 	 * without fastreuseport and then was reset, as we can only know that
+@@ -458,14 +456,13 @@ static inline int sk_reuseport_match(struct inet_bind_bucket *tb,
+ void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
+ 			       struct sock *sk)
+ {
+-	kuid_t uid = sock_i_uid(sk);
+ 	bool reuse = sk->sk_reuse && sk->sk_state != TCP_LISTEN;
+ 
+ 	if (hlist_empty(&tb->bhash2)) {
+ 		tb->fastreuse = reuse;
+ 		if (sk->sk_reuseport) {
+ 			tb->fastreuseport = FASTREUSEPORT_ANY;
+-			tb->fastuid = uid;
++			tb->fastuid = sk_uid(sk);
+ 			tb->fast_rcv_saddr = sk->sk_rcv_saddr;
+ 			tb->fast_ipv6_only = ipv6_only_sock(sk);
+ 			tb->fast_sk_family = sk->sk_family;
+@@ -492,7 +489,7 @@ void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
+ 			 */
+ 			if (!sk_reuseport_match(tb, sk)) {
+ 				tb->fastreuseport = FASTREUSEPORT_STRICT;
+-				tb->fastuid = uid;
++				tb->fastuid = sk_uid(sk);
+ 				tb->fast_rcv_saddr = sk->sk_rcv_saddr;
+ 				tb->fast_ipv6_only = ipv6_only_sock(sk);
+ 				tb->fast_sk_family = sk->sk_family;
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index 1d1d6ad53f4c91..2fa53b16fe7788 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -181,7 +181,7 @@ int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
+ 		goto errout;
+ #endif
+ 
+-	r->idiag_uid = from_kuid_munged(user_ns, sock_i_uid(sk));
++	r->idiag_uid = from_kuid_munged(user_ns, sk_uid(sk));
+ 	r->idiag_inode = sock_i_ino(sk);
+ 
+ 	memset(&inet_sockopt, 0, sizeof(inet_sockopt));
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 77a0b52b2eabfc..ceeeec9b7290aa 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -721,8 +721,8 @@ static int inet_reuseport_add_sock(struct sock *sk,
+ {
+ 	struct inet_bind_bucket *tb = inet_csk(sk)->icsk_bind_hash;
+ 	const struct hlist_nulls_node *node;
++	kuid_t uid = sk_uid(sk);
+ 	struct sock *sk2;
+-	kuid_t uid = sock_i_uid(sk);
+ 
+ 	sk_nulls_for_each_rcu(sk2, node, &ilb->nulls_head) {
+ 		if (sk2 != sk &&
+@@ -730,7 +730,7 @@ static int inet_reuseport_add_sock(struct sock *sk,
+ 		    ipv6_only_sock(sk2) == ipv6_only_sock(sk) &&
+ 		    sk2->sk_bound_dev_if == sk->sk_bound_dev_if &&
+ 		    inet_csk(sk2)->icsk_bind_hash == tb &&
+-		    sk2->sk_reuseport && uid_eq(uid, sock_i_uid(sk2)) &&
++		    sk2->sk_reuseport && uid_eq(uid, sk_uid(sk2)) &&
+ 		    inet_rcv_saddr_equal(sk, sk2, false))
+ 			return reuseport_add_sock(sk, sk2,
+ 						  inet_rcv_saddr_any(sk));
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 4eacaf00e2e9b7..031df4c19fcc5c 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -1116,7 +1116,7 @@ static void ping_v4_format_sock(struct sock *sp, struct seq_file *f,
+ 		sk_wmem_alloc_get(sp),
+ 		sk_rmem_alloc_get(sp),
+ 		0, 0L, 0,
+-		from_kuid_munged(seq_user_ns(f), sock_i_uid(sp)),
++		from_kuid_munged(seq_user_ns(f), sk_uid(sp)),
+ 		0, sock_i_ino(sp),
+ 		refcount_read(&sp->sk_refcnt), sp,
+ 		atomic_read(&sp->sk_drops));
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 32f942d0f944cc..1d2c89d63cc71f 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -1043,7 +1043,7 @@ static void raw_sock_seq_show(struct seq_file *seq, struct sock *sp, int i)
+ 		sk_wmem_alloc_get(sp),
+ 		sk_rmem_alloc_get(sp),
+ 		0, 0L, 0,
+-		from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
++		from_kuid_munged(seq_user_ns(seq), sk_uid(sp)),
+ 		0, sock_i_ino(sp),
+ 		refcount_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
+ }
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 6a14f9e6fef645..429fb34b075e0b 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2896,7 +2896,7 @@ static void get_openreq4(const struct request_sock *req,
+ 		jiffies_delta_to_clock_t(delta),
+ 		req->num_timeout,
+ 		from_kuid_munged(seq_user_ns(f),
+-				 sock_i_uid(req->rsk_listener)),
++				 sk_uid(req->rsk_listener)),
+ 		0,  /* non standard timer */
+ 		0, /* open_requests have no inode */
+ 		0,
+@@ -2954,7 +2954,7 @@ static void get_tcp4_sock(struct sock *sk, struct seq_file *f, int i)
+ 		timer_active,
+ 		jiffies_delta_to_clock_t(timer_expires - jiffies),
+ 		icsk->icsk_retransmits,
+-		from_kuid_munged(seq_user_ns(f), sock_i_uid(sk)),
++		from_kuid_munged(seq_user_ns(f), sk_uid(sk)),
+ 		icsk->icsk_probes_out,
+ 		sock_i_ino(sk),
+ 		refcount_read(&sk->sk_refcnt), sk,
+@@ -3246,9 +3246,9 @@ static int bpf_iter_tcp_seq_show(struct seq_file *seq, void *v)
+ 		const struct request_sock *req = v;
+ 
+ 		uid = from_kuid_munged(seq_user_ns(seq),
+-				       sock_i_uid(req->rsk_listener));
++				       sk_uid(req->rsk_listener));
+ 	} else {
+-		uid = from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk));
++		uid = from_kuid_munged(seq_user_ns(seq), sk_uid(sk));
+ 	}
+ 
+ 	meta.seq = seq;
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index f94bb222aa2d49..19573ee64a0f18 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -145,8 +145,8 @@ static int udp_lib_lport_inuse(struct net *net, __u16 num,
+ 			       unsigned long *bitmap,
+ 			       struct sock *sk, unsigned int log)
+ {
++	kuid_t uid = sk_uid(sk);
+ 	struct sock *sk2;
+-	kuid_t uid = sock_i_uid(sk);
+ 
+ 	sk_for_each(sk2, &hslot->head) {
+ 		if (net_eq(sock_net(sk2), net) &&
+@@ -158,7 +158,7 @@ static int udp_lib_lport_inuse(struct net *net, __u16 num,
+ 		    inet_rcv_saddr_equal(sk, sk2, true)) {
+ 			if (sk2->sk_reuseport && sk->sk_reuseport &&
+ 			    !rcu_access_pointer(sk->sk_reuseport_cb) &&
+-			    uid_eq(uid, sock_i_uid(sk2))) {
++			    uid_eq(uid, sk_uid(sk2))) {
+ 				if (!bitmap)
+ 					return 0;
+ 			} else {
+@@ -180,8 +180,8 @@ static int udp_lib_lport_inuse2(struct net *net, __u16 num,
+ 				struct udp_hslot *hslot2,
+ 				struct sock *sk)
+ {
++	kuid_t uid = sk_uid(sk);
+ 	struct sock *sk2;
+-	kuid_t uid = sock_i_uid(sk);
+ 	int res = 0;
+ 
+ 	spin_lock(&hslot2->lock);
+@@ -195,7 +195,7 @@ static int udp_lib_lport_inuse2(struct net *net, __u16 num,
+ 		    inet_rcv_saddr_equal(sk, sk2, true)) {
+ 			if (sk2->sk_reuseport && sk->sk_reuseport &&
+ 			    !rcu_access_pointer(sk->sk_reuseport_cb) &&
+-			    uid_eq(uid, sock_i_uid(sk2))) {
++			    uid_eq(uid, sk_uid(sk2))) {
+ 				res = 0;
+ 			} else {
+ 				res = 1;
+@@ -210,7 +210,7 @@ static int udp_lib_lport_inuse2(struct net *net, __u16 num,
+ static int udp_reuseport_add_sock(struct sock *sk, struct udp_hslot *hslot)
+ {
+ 	struct net *net = sock_net(sk);
+-	kuid_t uid = sock_i_uid(sk);
++	kuid_t uid = sk_uid(sk);
+ 	struct sock *sk2;
+ 
+ 	sk_for_each(sk2, &hslot->head) {
+@@ -220,7 +220,7 @@ static int udp_reuseport_add_sock(struct sock *sk, struct udp_hslot *hslot)
+ 		    ipv6_only_sock(sk2) == ipv6_only_sock(sk) &&
+ 		    (udp_sk(sk2)->udp_port_hash == udp_sk(sk)->udp_port_hash) &&
+ 		    (sk2->sk_bound_dev_if == sk->sk_bound_dev_if) &&
+-		    sk2->sk_reuseport && uid_eq(uid, sock_i_uid(sk2)) &&
++		    sk2->sk_reuseport && uid_eq(uid, sk_uid(sk2)) &&
+ 		    inet_rcv_saddr_equal(sk, sk2, false)) {
+ 			return reuseport_add_sock(sk, sk2,
+ 						  inet_rcv_saddr_any(sk));
+@@ -3387,7 +3387,7 @@ static void udp4_format_sock(struct sock *sp, struct seq_file *f,
+ 		sk_wmem_alloc_get(sp),
+ 		udp_rqueue_get(sp),
+ 		0, 0L, 0,
+-		from_kuid_munged(seq_user_ns(f), sock_i_uid(sp)),
++		from_kuid_munged(seq_user_ns(f), sk_uid(sp)),
+ 		0, sock_i_ino(sp),
+ 		refcount_read(&sp->sk_refcnt), sp,
+ 		atomic_read(&sp->sk_drops));
+@@ -3630,7 +3630,7 @@ static int bpf_iter_udp_seq_show(struct seq_file *seq, void *v)
+ 		goto unlock;
+ 	}
+ 
+-	uid = from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk));
++	uid = from_kuid_munged(seq_user_ns(seq), sk_uid(sk));
+ 	meta.seq = seq;
+ 	prog = bpf_iter_get_info(&meta, false);
+ 	ret = udp_prog_seq_show(prog, &meta, v, uid, state->bucket);
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 83f5aa5e133ab2..281722817a65c4 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -1064,7 +1064,7 @@ void __ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
+ 		   sk_wmem_alloc_get(sp),
+ 		   rqueue,
+ 		   0, 0L, 0,
+-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
++		   from_kuid_munged(seq_user_ns(seq), sk_uid(sp)),
+ 		   0,
+ 		   sock_i_ino(sp),
+ 		   refcount_read(&sp->sk_refcnt), sp,
+diff --git a/net/ipv6/ip6_icmp.c b/net/ipv6/ip6_icmp.c
+index 9e3574880cb03e..233914b63bdb82 100644
+--- a/net/ipv6/ip6_icmp.c
++++ b/net/ipv6/ip6_icmp.c
+@@ -54,11 +54,12 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ 	struct inet6_skb_parm parm = { 0 };
+ 	struct sk_buff *cloned_skb = NULL;
+ 	enum ip_conntrack_info ctinfo;
++	enum ip_conntrack_dir dir;
+ 	struct in6_addr orig_ip;
+ 	struct nf_conn *ct;
+ 
+ 	ct = nf_ct_get(skb_in, &ctinfo);
+-	if (!ct || !(ct->status & IPS_SRC_NAT)) {
++	if (!ct || !(READ_ONCE(ct->status) & IPS_NAT_MASK)) {
+ 		__icmpv6_send(skb_in, type, code, info, &parm);
+ 		return;
+ 	}
+@@ -73,7 +74,8 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info)
+ 		goto out;
+ 
+ 	orig_ip = ipv6_hdr(skb_in)->saddr;
+-	ipv6_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.in6;
++	dir = CTINFO2DIR(ctinfo);
++	ipv6_hdr(skb_in)->saddr = ct->tuplehash[dir].tuple.src.u3.in6;
+ 	__icmpv6_send(skb_in, type, code, info, &parm);
+ 	ipv6_hdr(skb_in)->saddr = orig_ip;
+ out:
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index f61b0396ef6b18..5604ae6163f452 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1431,17 +1431,17 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 	ireq = inet_rsk(req);
+ 
+ 	if (sk_acceptq_is_full(sk))
+-		goto out_overflow;
++		goto exit_overflow;
+ 
+ 	if (!dst) {
+ 		dst = inet6_csk_route_req(sk, &fl6, req, IPPROTO_TCP);
+ 		if (!dst)
+-			goto out;
++			goto exit;
+ 	}
+ 
+ 	newsk = tcp_create_openreq_child(sk, req, skb);
+ 	if (!newsk)
+-		goto out_nonewsk;
++		goto exit_nonewsk;
+ 
+ 	/*
+ 	 * No need to charge this sock to the relevant IPv6 refcnt debug socks
+@@ -1525,25 +1525,19 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 			const union tcp_md5_addr *addr;
+ 
+ 			addr = (union tcp_md5_addr *)&newsk->sk_v6_daddr;
+-			if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key)) {
+-				inet_csk_prepare_forced_close(newsk);
+-				tcp_done(newsk);
+-				goto out;
+-			}
++			if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key))
++				goto put_and_exit;
+ 		}
+ 	}
+ #endif
+ #ifdef CONFIG_TCP_AO
+ 	/* Copy over tcp_ao_info if any */
+ 	if (tcp_ao_copy_all_matching(sk, newsk, req, skb, AF_INET6))
+-		goto out; /* OOM */
++		goto put_and_exit; /* OOM */
+ #endif
+ 
+-	if (__inet_inherit_port(sk, newsk) < 0) {
+-		inet_csk_prepare_forced_close(newsk);
+-		tcp_done(newsk);
+-		goto out;
+-	}
++	if (__inet_inherit_port(sk, newsk) < 0)
++		goto put_and_exit;
+ 	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash),
+ 				       &found_dup_sk);
+ 	if (*own_req) {
+@@ -1570,13 +1564,17 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 
+ 	return newsk;
+ 
+-out_overflow:
++exit_overflow:
+ 	__NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+-out_nonewsk:
++exit_nonewsk:
+ 	dst_release(dst);
+-out:
++exit:
+ 	tcp_listendrop(sk);
+ 	return NULL;
++put_and_exit:
++	inet_csk_prepare_forced_close(newsk);
++	tcp_done(newsk);
++	goto exit;
+ }
+ 
+ INDIRECT_CALLABLE_DECLARE(struct dst_entry *ipv4_dst_check(struct dst_entry *,
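
The tcp_v6_syn_recv_sock() restructuring above folds three copies of the "tear down
the half-built child socket" sequence into a single put_and_exit label that falls back
into the common exit path. The idiom in isolation (a self-contained sketch with
hypothetical helpers, not this function's code):

    #include <stdio.h>
    #include <stdlib.h>

    struct child { int a, b; };

    static int setup_a(struct child *c) { c->a = 1; return 0; }
    static int setup_b(struct child *c) { c->b = 1; return -1; /* forced failure */ }

    static struct child *make_child(void)
    {
        struct child *child = calloc(1, sizeof(*child));

        if (!child)
            goto exit_nochild;
        if (setup_a(child) < 0 || setup_b(child) < 0)
            goto put_and_exit;
        return child;

    put_and_exit:
        free(child);                              /* the only teardown site */
    exit_nochild:
        fprintf(stderr, "child setup failed\n");  /* shared failure tail */
        return NULL;
    }
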
+@@ -2168,7 +2166,7 @@ static void get_openreq6(struct seq_file *seq,
+ 		   jiffies_to_clock_t(ttd),
+ 		   req->num_timeout,
+ 		   from_kuid_munged(seq_user_ns(seq),
+-				    sock_i_uid(req->rsk_listener)),
++				    sk_uid(req->rsk_listener)),
+ 		   0,  /* non standard timer */
+ 		   0, /* open_requests have no inode */
+ 		   0, req);
+@@ -2234,7 +2232,7 @@ static void get_tcp6_sock(struct seq_file *seq, struct sock *sp, int i)
+ 		   timer_active,
+ 		   jiffies_delta_to_clock_t(timer_expires - jiffies),
+ 		   icsk->icsk_retransmits,
+-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sp)),
++		   from_kuid_munged(seq_user_ns(seq), sk_uid(sp)),
+ 		   icsk->icsk_probes_out,
+ 		   sock_i_ino(sp),
+ 		   refcount_read(&sp->sk_refcnt), sp,
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index b5d761700776a1..2ebde035224598 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -3788,7 +3788,7 @@ static int pfkey_seq_show(struct seq_file *f, void *v)
+ 			       refcount_read(&s->sk_refcnt),
+ 			       sk_rmem_alloc_get(s),
+ 			       sk_wmem_alloc_get(s),
+-			       from_kuid_munged(seq_user_ns(f), sock_i_uid(s)),
++			       from_kuid_munged(seq_user_ns(f), sk_uid(s)),
+ 			       sock_i_ino(s)
+ 			       );
+ 	return 0;
+diff --git a/net/llc/llc_proc.c b/net/llc/llc_proc.c
+index 07e9abb5978a71..aa81c67b24a156 100644
+--- a/net/llc/llc_proc.c
++++ b/net/llc/llc_proc.c
+@@ -151,7 +151,7 @@ static int llc_seq_socket_show(struct seq_file *seq, void *v)
+ 		   sk_wmem_alloc_get(sk),
+ 		   sk_rmem_alloc_get(sk) - llc->copied_seq,
+ 		   sk->sk_state,
+-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk)),
++		   from_kuid_munged(seq_user_ns(seq), sk_uid(sk)),
+ 		   llc->link);
+ out:
+ 	return 0;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 006d02dce94923..b73a6d7f3e933c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1193,6 +1193,14 @@ ieee80211_determine_chan_mode(struct ieee80211_sub_if_data *sdata,
+ 			     "required MCSes not supported, disabling EHT\n");
+ 	}
+ 
++	if (conn->mode >= IEEE80211_CONN_MODE_EHT &&
++	    channel->band != NL80211_BAND_2GHZ &&
++	    conn->bw_limit == IEEE80211_CONN_BW_LIMIT_40) {
++		conn->mode = IEEE80211_CONN_MODE_HE;
++		link_id_info(sdata, link_id,
++			     "required bandwidth not supported, disabling EHT\n");
++	}
++
+ 	/* the mode can only decrease, so this must terminate */
+ 	if (ap_mode != conn->mode) {
+ 		kfree(elems);
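
The new bandwidth check above complements the preceding basic-MCS check: when an EHT
association outside the 2.4 GHz band ends up limited to 40 MHz (for instance after
puncturing forces an 80 MHz channel down, which is exactly the scenario the KUnit case
added below constructs), the connection now falls back to HE instead of keeping an EHT
mode the remaining bandwidth cannot support. As the retained comment notes, the mode
only ever decreases, so the enclosing loop still terminates.
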
+diff --git a/net/mac80211/tests/chan-mode.c b/net/mac80211/tests/chan-mode.c
+index 96c7b3ab27444d..adc069065e73dd 100644
+--- a/net/mac80211/tests/chan-mode.c
++++ b/net/mac80211/tests/chan-mode.c
+@@ -2,7 +2,7 @@
+ /*
+  * KUnit tests for channel mode functions
+  *
+- * Copyright (C) 2024 Intel Corporation
++ * Copyright (C) 2024-2025 Intel Corporation
+  */
+ #include <net/cfg80211.h>
+ #include <kunit/test.h>
+@@ -28,6 +28,10 @@ static const struct determine_chan_mode_case {
+ 	u8 vht_basic_mcs_1_4, vht_basic_mcs_5_8;
+ 	u8 he_basic_mcs_1_4, he_basic_mcs_5_8;
+ 	u8 eht_mcs7_min_nss;
++	u16 eht_disabled_subchannels;
++	u8 eht_bw;
++	enum ieee80211_conn_bw_limit conn_bw_limit;
++	enum ieee80211_conn_bw_limit expected_bw_limit;
+ 	int error;
+ } determine_chan_mode_cases[] = {
+ 	{
+@@ -128,6 +132,14 @@ static const struct determine_chan_mode_case {
+ 		.conn_mode = IEEE80211_CONN_MODE_EHT,
+ 		.eht_mcs7_min_nss = 0x15,
+ 		.error = EINVAL,
++	}, {
++		.desc = "80 MHz EHT is downgraded to 40 MHz HE due to puncturing",
++		.conn_mode = IEEE80211_CONN_MODE_EHT,
++		.expected_mode = IEEE80211_CONN_MODE_HE,
++		.conn_bw_limit = IEEE80211_CONN_BW_LIMIT_80,
++		.expected_bw_limit = IEEE80211_CONN_BW_LIMIT_40,
++		.eht_disabled_subchannels = 0x08,
++		.eht_bw = IEEE80211_EHT_OPER_CHAN_WIDTH_80MHZ,
+ 	}
+ };
+ KUNIT_ARRAY_PARAM_DESC(determine_chan_mode, determine_chan_mode_cases, desc)
+@@ -138,7 +150,7 @@ static void test_determine_chan_mode(struct kunit *test)
+ 	struct t_sdata *t_sdata = T_SDATA(test);
+ 	struct ieee80211_conn_settings conn = {
+ 		.mode = params->conn_mode,
+-		.bw_limit = IEEE80211_CONN_BW_LIMIT_20,
++		.bw_limit = params->conn_bw_limit,
+ 	};
+ 	struct cfg80211_bss cbss = {
+ 		.channel = &t_sdata->band_5ghz.channels[0],
+@@ -191,14 +203,21 @@ static void test_determine_chan_mode(struct kunit *test)
+ 		0x7f, 0x01, 0x00, 0x88, 0x88, 0x88, 0x00, 0x00,
+ 		0x00,
+ 		/* EHT Operation */
+-		WLAN_EID_EXTENSION, 0x09, WLAN_EID_EXT_EHT_OPERATION,
+-		0x01, params->eht_mcs7_min_nss ? params->eht_mcs7_min_nss : 0x11,
+-		0x00, 0x00, 0x00, 0x00, 0x24, 0x00,
++		WLAN_EID_EXTENSION, 0x0b, WLAN_EID_EXT_EHT_OPERATION,
++		0x03, params->eht_mcs7_min_nss ? params->eht_mcs7_min_nss : 0x11,
++		0x00, 0x00, 0x00, params->eht_bw,
++		params->eht_bw == IEEE80211_EHT_OPER_CHAN_WIDTH_80MHZ ? 42 : 36,
++		0x00,
++		u16_get_bits(params->eht_disabled_subchannels, 0xff),
++		u16_get_bits(params->eht_disabled_subchannels, 0xff00),
+ 	};
+ 	struct ieee80211_chan_req chanreq = {};
+ 	struct cfg80211_chan_def ap_chandef = {};
+ 	struct ieee802_11_elems *elems;
+ 
++	/* To force EHT downgrade to HE on punctured 80 MHz downgraded to 40 MHz */
++	set_bit(IEEE80211_HW_DISALLOW_PUNCTURING, t_sdata->local.hw.flags);
++
+ 	if (params->strict)
+ 		set_bit(IEEE80211_HW_STRICT, t_sdata->local.hw.flags);
+ 	else
+@@ -237,6 +256,7 @@ static void test_determine_chan_mode(struct kunit *test)
+ 	} else {
+ 		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elems);
+ 		KUNIT_ASSERT_EQ(test, conn.mode, params->expected_mode);
++		KUNIT_ASSERT_EQ(test, conn.bw_limit, params->expected_bw_limit);
+ 	}
+ }
+ 
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index 9d5db3feedec57..4dee06171361ea 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -346,7 +346,7 @@ static int mctp_getsockopt(struct socket *sock, int level, int optname,
+ 		return 0;
+ 	}
+ 
+-	return -EINVAL;
++	return -ENOPROTOOPT;
+ }
+ 
+ /* helpers for reading/writing the tag ioc, handling compatibility across the
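
A note on the one-line af_mctp.c change above: returning -ENOPROTOOPT instead of
-EINVAL for an unrecognized option matches the usual getsockopt()/setsockopt()
contract, where -ENOPROTOOPT means "no such option at this level" and -EINVAL is kept
for malformed arguments to an option that does exist. Userspace that probes for
optional features relies on that distinction.
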
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index d9c8e5a5f9ce9a..19ff259d7bc437 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -325,6 +325,7 @@ static void mctp_skb_set_flow(struct sk_buff *skb, struct mctp_sk_key *key) {}
+ static void mctp_flow_prepare_output(struct sk_buff *skb, struct mctp_dev *dev) {}
+ #endif
+ 
++/* takes ownership of skb, both in success and failure cases */
+ static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb)
+ {
+ 	struct mctp_hdr *hdr = mctp_hdr(skb);
+@@ -334,8 +335,10 @@ static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb)
+ 		& MCTP_HDR_SEQ_MASK;
+ 
+ 	if (!key->reasm_head) {
+-		/* Since we're manipulating the shared frag_list, ensure it isn't
+-		 * shared with any other SKBs.
++		/* Since we're manipulating the shared frag_list, ensure it
++		 * isn't shared with any other SKBs. In the cloned case,
++		 * this will free the skb; callers can no longer access it
++		 * safely.
+ 		 */
+ 		key->reasm_head = skb_unshare(skb, GFP_ATOMIC);
+ 		if (!key->reasm_head)
+@@ -349,10 +352,10 @@ static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb)
+ 	exp_seq = (key->last_seq + 1) & MCTP_HDR_SEQ_MASK;
+ 
+ 	if (this_seq != exp_seq)
+-		return -EINVAL;
++		goto err_free;
+ 
+ 	if (key->reasm_head->len + skb->len > mctp_message_maxlen)
+-		return -EINVAL;
++		goto err_free;
+ 
+ 	skb->next = NULL;
+ 	skb->sk = NULL;
+@@ -366,6 +369,10 @@ static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb)
+ 	key->reasm_head->truesize += skb->truesize;
+ 
+ 	return 0;
++
++err_free:
++	kfree_skb(skb);
++	return -EINVAL;
+ }
+ 
+ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+@@ -476,18 +483,16 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ 			 * key isn't observable yet
+ 			 */
+ 			mctp_frag_queue(key, skb);
++			skb = NULL;
+ 
+ 			/* if the key_add fails, we've raced with another
+ 			 * SOM packet with the same src, dest and tag. There's
+ 			 * no way to distinguish future packets, so all we
+-			 * can do is drop; we'll free the skb on exit from
+-			 * this function.
++			 * can do is drop.
+ 			 */
+ 			rc = mctp_key_add(key, msk);
+-			if (!rc) {
++			if (!rc)
+ 				trace_mctp_key_acquire(key);
+-				skb = NULL;
+-			}
+ 
+ 			/* we don't need to release key->lock on exit, so
+ 			 * clean up here and suppress the unlock via
+@@ -505,8 +510,7 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ 				key = NULL;
+ 			} else {
+ 				rc = mctp_frag_queue(key, skb);
+-				if (!rc)
+-					skb = NULL;
++				skb = NULL;
+ 			}
+ 		}
+ 
+@@ -516,17 +520,16 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ 		 */
+ 
+ 		/* we need to be continuing an existing reassembly... */
+-		if (!key->reasm_head)
++		if (!key->reasm_head) {
+ 			rc = -EINVAL;
+-		else
++		} else {
+ 			rc = mctp_frag_queue(key, skb);
++			skb = NULL;
++		}
+ 
+ 		if (rc)
+ 			goto out_unlock;
+ 
+-		/* we've queued; the queue owns the skb now */
+-		skb = NULL;
+-
+ 		/* end of message? deliver to socket, and we're done with
+ 		 * the reassembly/response key
+ 		 */
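
The mctp route.c hunks tighten a fragile ownership rule: per its new comment,
mctp_frag_queue() now consumes the skb on every path (queued on success, kfree_skb()
on failure), so callers clear their pointer unconditionally right after the call
rather than only when it succeeded. That removes the window where a failed queue
attempt left an skb that was neither freed nor owned by anyone. The resulting calling
convention, as seen in each caller above:

    rc = mctp_frag_queue(key, skb);
    skb = NULL;          /* consumed by the callee, success or failure */
    if (rc)
        goto out_unlock;
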
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 76cb699885b388..1063c53850c057 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -3537,7 +3537,6 @@ void mptcp_sock_graft(struct sock *sk, struct socket *parent)
+ 	write_lock_bh(&sk->sk_callback_lock);
+ 	rcu_assign_pointer(sk->sk_wq, &parent->wq);
+ 	sk_set_socket(sk, parent);
+-	WRITE_ONCE(sk->sk_uid, SOCK_INODE(parent)->i_uid);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index 4ed5878cb25b16..ceb48c3ca0a439 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -368,7 +368,7 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
+ 			    (cur->tuple.src.l3num == NFPROTO_UNSPEC ||
+ 			     cur->tuple.src.l3num == me->tuple.src.l3num) &&
+ 			    cur->tuple.dst.protonum == me->tuple.dst.protonum) {
+-				ret = -EEXIST;
++				ret = -EBUSY;
+ 				goto out;
+ 			}
+ 		}
+@@ -379,7 +379,7 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
+ 		hlist_for_each_entry(cur, &nf_ct_helper_hash[h], hnode) {
+ 			if (nf_ct_tuple_src_mask_cmp(&cur->tuple, &me->tuple,
+ 						     &mask)) {
+-				ret = -EEXIST;
++				ret = -EBUSY;
+ 				goto out;
+ 			}
+ 		}
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 46ca725d653819..0e86434ca13b00 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1953,6 +1953,18 @@ static int nft_dump_stats(struct sk_buff *skb, struct nft_stats __percpu *stats)
+ 	return -ENOSPC;
+ }
+ 
++static bool hook_is_prefix(struct nft_hook *hook)
++{
++	return strlen(hook->ifname) >= hook->ifnamelen;
++}
++
++static int nft_nla_put_hook_dev(struct sk_buff *skb, struct nft_hook *hook)
++{
++	int attr = hook_is_prefix(hook) ? NFTA_DEVICE_PREFIX : NFTA_DEVICE_NAME;
++
++	return nla_put_string(skb, attr, hook->ifname);
++}
++
+ static int nft_dump_basechain_hook(struct sk_buff *skb,
+ 				   const struct net *net, int family,
+ 				   const struct nft_base_chain *basechain,
+@@ -1984,16 +1996,15 @@ static int nft_dump_basechain_hook(struct sk_buff *skb,
+ 			if (!first)
+ 				first = hook;
+ 
+-			if (nla_put(skb, NFTA_DEVICE_NAME,
+-				    hook->ifnamelen, hook->ifname))
++			if (nft_nla_put_hook_dev(skb, hook))
+ 				goto nla_put_failure;
+ 			n++;
+ 		}
+ 		nla_nest_end(skb, nest_devs);
+ 
+ 		if (n == 1 &&
+-		    nla_put(skb, NFTA_HOOK_DEV,
+-			    first->ifnamelen, first->ifname))
++		    !hook_is_prefix(first) &&
++		    nla_put_string(skb, NFTA_HOOK_DEV, first->ifname))
+ 			goto nla_put_failure;
+ 	}
+ 	nla_nest_end(skb, nest);
+@@ -2297,7 +2308,8 @@ void nf_tables_chain_destroy(struct nft_chain *chain)
+ }
+ 
+ static struct nft_hook *nft_netdev_hook_alloc(struct net *net,
+-					      const struct nlattr *attr)
++					      const struct nlattr *attr,
++					      bool prefix)
+ {
+ 	struct nf_hook_ops *ops;
+ 	struct net_device *dev;
+@@ -2314,7 +2326,8 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net,
+ 	if (err < 0)
+ 		goto err_hook_free;
+ 
+-	hook->ifnamelen = nla_len(attr);
++	/* include the terminating NUL-char when comparing non-prefixes */
++	hook->ifnamelen = strlen(hook->ifname) + !prefix;
+ 
+ 	/* nf_tables_netdev_event() is called under rtnl_mutex, this is
+ 	 * indirectly serializing all the other holders of the commit_mutex with
+@@ -2361,14 +2374,22 @@ static int nf_tables_parse_netdev_hooks(struct net *net,
+ 	struct nft_hook *hook, *next;
+ 	const struct nlattr *tmp;
+ 	int rem, n = 0, err;
++	bool prefix;
+ 
+ 	nla_for_each_nested(tmp, attr, rem) {
+-		if (nla_type(tmp) != NFTA_DEVICE_NAME) {
++		switch (nla_type(tmp)) {
++		case NFTA_DEVICE_NAME:
++			prefix = false;
++			break;
++		case NFTA_DEVICE_PREFIX:
++			prefix = true;
++			break;
++		default:
+ 			err = -EINVAL;
+ 			goto err_hook;
+ 		}
+ 
+-		hook = nft_netdev_hook_alloc(net, tmp);
++		hook = nft_netdev_hook_alloc(net, tmp, prefix);
+ 		if (IS_ERR(hook)) {
+ 			NL_SET_BAD_ATTR(extack, tmp);
+ 			err = PTR_ERR(hook);
+@@ -2414,7 +2435,7 @@ static int nft_chain_parse_netdev(struct net *net, struct nlattr *tb[],
+ 	int err;
+ 
+ 	if (tb[NFTA_HOOK_DEV]) {
+-		hook = nft_netdev_hook_alloc(net, tb[NFTA_HOOK_DEV]);
++		hook = nft_netdev_hook_alloc(net, tb[NFTA_HOOK_DEV], false);
+ 		if (IS_ERR(hook)) {
+ 			NL_SET_BAD_ATTR(extack, tb[NFTA_HOOK_DEV]);
+ 			return PTR_ERR(hook);
+@@ -9424,8 +9445,7 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 
+ 	list_for_each_entry_rcu(hook, hook_list, list,
+ 				lockdep_commit_lock_is_held(net)) {
+-		if (nla_put(skb, NFTA_DEVICE_NAME,
+-			    hook->ifnamelen, hook->ifname))
++		if (nft_nla_put_hook_dev(skb, hook))
+ 			goto nla_put_failure;
+ 	}
+ 	nla_nest_end(skb, nest_devs);
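
The NFTA_DEVICE_PREFIX support added above rests on one length convention, spelled out
in the nft_netdev_hook_alloc() comment: exact device names store strlen(name) + 1
(including the terminating NUL) in ifnamelen, while prefixes store strlen(name) alone.
hook_is_prefix() can then tell the two apart without an extra flag, and dumps emit
NFTA_DEVICE_PREFIX or NFTA_DEVICE_NAME accordingly. A self-contained illustration of
why the convention also makes matching uniform (the matcher below is hypothetical, not
the kernel's):

    #include <stdbool.h>
    #include <string.h>

    struct hook_like {
        char ifname[16];
        unsigned char ifnamelen;  /* strlen+1 if exact, strlen if prefix */
    };

    static bool hook_is_prefix(const struct hook_like *h)
    {
        return strlen(h->ifname) >= h->ifnamelen;
    }

    /* Device names are fixed-size, NUL-padded buffers. Comparing ifnamelen
     * bytes covers the NUL for exact names, so "eth0" rejects "eth00",
     * while a 3-byte prefix "eth" matches every ethN. */
    static bool hook_matches(const struct hook_like *h, const char dev[16])
    {
        return !memcmp(h->ifname, dev, h->ifnamelen);
    }
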
+diff --git a/net/netlink/diag.c b/net/netlink/diag.c
+index 61981e01fd6ff1..b8e58132e8af12 100644
+--- a/net/netlink/diag.c
++++ b/net/netlink/diag.c
+@@ -168,7 +168,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 				 NETLINK_CB(cb->skb).portid,
+ 				 cb->nlh->nlmsg_seq,
+ 				 NLM_F_MULTI,
+-				 __sock_i_ino(sk)) < 0) {
++				 sock_i_ino(sk)) < 0) {
+ 			ret = 1;
+ 			break;
+ 		}
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index c7c7de3403f76e..a7017d7f092720 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4782,7 +4782,7 @@ static int packet_seq_show(struct seq_file *seq, void *v)
+ 			   READ_ONCE(po->ifindex),
+ 			   packet_sock_flag(po, PACKET_SOCK_RUNNING),
+ 			   atomic_read(&s->sk_rmem_alloc),
+-			   from_kuid_munged(seq_user_ns(seq), sock_i_uid(s)),
++			   from_kuid_munged(seq_user_ns(seq), sk_uid(s)),
+ 			   sock_i_ino(s));
+ 	}
+ 
+diff --git a/net/packet/diag.c b/net/packet/diag.c
+index 47f69f3dbf73e9..6ce1dcc284d920 100644
+--- a/net/packet/diag.c
++++ b/net/packet/diag.c
+@@ -153,7 +153,7 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb,
+ 
+ 	if ((req->pdiag_show & PACKET_SHOW_INFO) &&
+ 	    nla_put_u32(skb, PACKET_DIAG_UID,
+-			from_kuid_munged(user_ns, sock_i_uid(sk))))
++			from_kuid_munged(user_ns, sk_uid(sk))))
+ 		goto out_nlmsg_trim;
+ 
+ 	if ((req->pdiag_show & PACKET_SHOW_MCLIST) &&
+diff --git a/net/phonet/socket.c b/net/phonet/socket.c
+index 5ce0b3ee5def84..ea4d5e6533dba7 100644
+--- a/net/phonet/socket.c
++++ b/net/phonet/socket.c
+@@ -584,7 +584,7 @@ static int pn_sock_seq_show(struct seq_file *seq, void *v)
+ 			sk->sk_protocol, pn->sobject, pn->dobject,
+ 			pn->resource, sk->sk_state,
+ 			sk_wmem_alloc_get(sk), sk_rmem_alloc_get(sk),
+-			from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk)),
++			from_kuid_munged(seq_user_ns(seq), sk_uid(sk)),
+ 			sock_i_ino(sk),
+ 			refcount_read(&sk->sk_refcnt), sk,
+ 			atomic_read(&sk->sk_drops));
+@@ -755,7 +755,7 @@ static int pn_res_seq_show(struct seq_file *seq, void *v)
+ 
+ 		seq_printf(seq, "%02X %5u %lu",
+ 			   (int) (psk - pnres.sk),
+-			   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk)),
++			   from_kuid_munged(seq_user_ns(seq), sk_uid(sk)),
+ 			   sock_i_ino(sk));
+ 	}
+ 	seq_pad(seq, '\n');
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 6fcdcaeed40e97..7e99894778d4fd 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -756,7 +756,7 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 			struct sock *sk2 = ep2->base.sk;
+ 
+ 			if (!net_eq(sock_net(sk2), net) || sk2 == sk ||
+-			    !uid_eq(sock_i_uid(sk2), sock_i_uid(sk)) ||
++			    !uid_eq(sk_uid(sk2), sk_uid(sk)) ||
+ 			    !sk2->sk_reuseport)
+ 				continue;
+ 
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index ec00ee75d59a65..74bff317e205c8 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -177,7 +177,7 @@ static int sctp_eps_seq_show(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "%8pK %8pK %-3d %-3d %-4d %-5d %5u %5lu ", ep, sk,
+ 			   sctp_sk(sk)->type, sk->sk_state, hash,
+ 			   ep->base.bind_addr.port,
+-			   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk)),
++			   from_kuid_munged(seq_user_ns(seq), sk_uid(sk)),
+ 			   sock_i_ino(sk));
+ 
+ 		sctp_seq_dump_local_addrs(seq, &ep->base);
+@@ -267,7 +267,7 @@ static int sctp_assocs_seq_show(struct seq_file *seq, void *v)
+ 		   assoc->assoc_id,
+ 		   assoc->sndbuf_used,
+ 		   atomic_read(&assoc->rmem_alloc),
+-		   from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk)),
++		   from_kuid_munged(seq_user_ns(seq), sk_uid(sk)),
+ 		   sock_i_ino(sk),
+ 		   epb->bind_addr.port,
+ 		   assoc->peer.port);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 1e5739858c2067..aa6400811018e0 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -8345,8 +8345,8 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
+ 	bool reuse = (sk->sk_reuse || sp->reuse);
+ 	struct sctp_bind_hashbucket *head; /* hash list */
+ 	struct net *net = sock_net(sk);
+-	kuid_t uid = sock_i_uid(sk);
+ 	struct sctp_bind_bucket *pp;
++	kuid_t uid = sk_uid(sk);
+ 	unsigned short snum;
+ 	int ret;
+ 
+@@ -8444,7 +8444,7 @@ static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr)
+ 			    (reuse && (sk2->sk_reuse || sp2->reuse) &&
+ 			     sk2->sk_state != SCTP_SS_LISTENING) ||
+ 			    (sk->sk_reuseport && sk2->sk_reuseport &&
+-			     uid_eq(uid, sock_i_uid(sk2))))
++			     uid_eq(uid, sk_uid(sk2))))
+ 				continue;
+ 
+ 			if ((!sk->sk_bound_dev_if || !bound_dev_if2 ||
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index 521f5df80e10ca..8a794333e9927c 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -426,8 +426,6 @@ smc_clc_msg_decl_valid(struct smc_clc_msg_decline *dclc)
+ {
+ 	struct smc_clc_msg_hdr *hdr = &dclc->hdr;
+ 
+-	if (hdr->typev1 != SMC_TYPE_R && hdr->typev1 != SMC_TYPE_D)
+-		return false;
+ 	if (hdr->version == SMC_V1) {
+ 		if (ntohs(hdr->length) != sizeof(struct smc_clc_msg_decline))
+ 			return false;
+diff --git a/net/smc/smc_diag.c b/net/smc/smc_diag.c
+index 6fdb2d96777ad7..8ed2f6689b0170 100644
+--- a/net/smc/smc_diag.c
++++ b/net/smc/smc_diag.c
+@@ -64,7 +64,7 @@ static int smc_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
+ 	if (nla_put_u8(skb, SMC_DIAG_SHUTDOWN, sk->sk_shutdown))
+ 		return 1;
+ 
+-	r->diag_uid = from_kuid_munged(user_ns, sock_i_uid(sk));
++	r->diag_uid = from_kuid_munged(user_ns, sk_uid(sk));
+ 	r->diag_inode = sock_i_ino(sk);
+ 	return 0;
+ }
+diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
+index 53828833a3f7f7..a42ef3f77b961a 100644
+--- a/net/smc/smc_ib.c
++++ b/net/smc/smc_ib.c
+@@ -742,6 +742,9 @@ bool smc_ib_is_sg_need_sync(struct smc_link *lnk,
+ 	unsigned int i;
+ 	bool ret = false;
+ 
++	if (!lnk->smcibdev->ibdev->dma_device)
++		return ret;
++
+ 	/* for now there is just one DMA address */
+ 	for_each_sg(buf_slot->sgt[lnk->link_idx].sgl, sg,
+ 		    buf_slot->sgt[lnk->link_idx].nents, i) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 7c61d47ea20860..e028bf6584992c 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3642,7 +3642,7 @@ int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb,
+ 	    nla_put_u32(skb, TIPC_NLA_SOCK_INO, sock_i_ino(sk)) ||
+ 	    nla_put_u32(skb, TIPC_NLA_SOCK_UID,
+ 			from_kuid_munged(sk_user_ns(NETLINK_CB(cb->skb).sk),
+-					 sock_i_uid(sk))) ||
++					 sk_uid(sk))) ||
+ 	    nla_put_u64_64bit(skb, TIPC_NLA_SOCK_COOKIE,
+ 			      tipc_diag_gen_cookie(sk),
+ 			      TIPC_NLA_SOCK_PAD))
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 52b155123985a1..564c970d97fffe 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -3697,7 +3697,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
+ 		goto unlock;
+ 	}
+ 
+-	uid = from_kuid_munged(seq_user_ns(seq), sock_i_uid(sk));
++	uid = from_kuid_munged(seq_user_ns(seq), sk_uid(sk));
+ 	meta.seq = seq;
+ 	prog = bpf_iter_get_info(&meta, false);
+ 	ret = unix_prog_seq_show(prog, &meta, v, uid);
+diff --git a/net/unix/diag.c b/net/unix/diag.c
+index 79b182d0e62ae4..ca34730261510c 100644
+--- a/net/unix/diag.c
++++ b/net/unix/diag.c
+@@ -106,7 +106,7 @@ static int sk_diag_show_rqlen(struct sock *sk, struct sk_buff *nlskb)
+ static int sk_diag_dump_uid(struct sock *sk, struct sk_buff *nlskb,
+ 			    struct user_namespace *user_ns)
+ {
+-	uid_t uid = from_kuid_munged(user_ns, sock_i_uid(sk));
++	uid_t uid = from_kuid_munged(user_ns, sk_uid(sk));
+ 	return nla_put(nlskb, UNIX_DIAG_UID, sizeof(uid_t), &uid);
+ }
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index e8a4fe44ec2d80..f2a66af385dcb3 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1905,7 +1905,8 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
+ 			 */
+ 
+ 			f = rcu_access_pointer(new->pub.beacon_ies);
+-			kfree_rcu((struct cfg80211_bss_ies *)f, rcu_head);
++			if (!new->pub.hidden_beacon_bss)
++				kfree_rcu((struct cfg80211_bss_ies *)f, rcu_head);
+ 			return false;
+ 		}
+ 
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index cf998500a96549..05d06512983c2e 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -901,13 +901,16 @@ void __cfg80211_connect_result(struct net_device *dev,
+ 	if (!wdev->u.client.ssid_len) {
+ 		rcu_read_lock();
+ 		for_each_valid_link(cr, link) {
++			u32 ssid_len;
++
+ 			ssid = ieee80211_bss_get_elem(cr->links[link].bss,
+ 						      WLAN_EID_SSID);
+ 
+ 			if (!ssid || !ssid->datalen)
+ 				continue;
+ 
+-			memcpy(wdev->u.client.ssid, ssid->data, ssid->datalen);
++			ssid_len = min(ssid->datalen, IEEE80211_MAX_SSID_LEN);
++			memcpy(wdev->u.client.ssid, ssid->data, ssid_len);
+ 			wdev->u.client.ssid_len = ssid->datalen;
+ 			break;
+ 		}
+diff --git a/net/xdp/xsk_diag.c b/net/xdp/xsk_diag.c
+index 09dcea0cbbed97..0e0bca031c0399 100644
+--- a/net/xdp/xsk_diag.c
++++ b/net/xdp/xsk_diag.c
+@@ -119,7 +119,7 @@ static int xsk_diag_fill(struct sock *sk, struct sk_buff *nlskb,
+ 
+ 	if ((req->xdiag_show & XDP_SHOW_INFO) &&
+ 	    nla_put_u32(nlskb, XDP_DIAG_UID,
+-			from_kuid_munged(user_ns, sock_i_uid(sk))))
++			from_kuid_munged(user_ns, sk_uid(sk))))
+ 		goto out_nlmsg_trim;
+ 
+ 	if ((req->xdiag_show & XDP_SHOW_RING_CFG) &&
+diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
+index 31803674aecc57..a1181cfae9620e 100644
+--- a/rust/kernel/mm/virt.rs
++++ b/rust/kernel/mm/virt.rs
+@@ -209,6 +209,7 @@ pub fn vm_insert_page(&self, address: usize, page: &Page) -> Result {
+ ///
+ /// For the duration of 'a, the referenced vma must be undergoing initialization in an
+ /// `f_ops->mmap()` hook.
++#[repr(transparent)]
+ pub struct VmaNew {
+     vma: VmaRef,
+ }
+diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
+index 693dbbebebba10..0ba2aac3b8dc00 100644
+--- a/scripts/Makefile.kasan
++++ b/scripts/Makefile.kasan
+@@ -86,10 +86,14 @@ kasan_params += hwasan-instrument-stack=$(stack_enable) \
+ 		hwasan-use-short-granules=0 \
+ 		hwasan-inline-all-checks=0
+ 
+-# Instrument memcpy/memset/memmove calls by using instrumented __hwasan_mem*().
+-ifeq ($(call clang-min-version, 150000)$(call gcc-min-version, 130000),y)
+-	kasan_params += hwasan-kernel-mem-intrinsic-prefix=1
+-endif
++# Instrument memcpy/memset/memmove calls by using instrumented __(hw)asan_mem*().
++ifdef CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX
++	ifdef CONFIG_CC_IS_GCC
++		kasan_params += asan-kernel-mem-intrinsic-prefix=1
++	else
++		kasan_params += hwasan-kernel-mem-intrinsic-prefix=1
++	endif
++endif # CONFIG_CC_HAS_KASAN_MEMINTRINSIC_PREFIX
+ 
+ endif # CONFIG_KASAN_SW_TAGS
+ 
+diff --git a/scripts/generate_rust_target.rs b/scripts/generate_rust_target.rs
+index 39c82908ff3a3f..38b3416bb9799e 100644
+--- a/scripts/generate_rust_target.rs
++++ b/scripts/generate_rust_target.rs
+@@ -225,7 +225,11 @@ fn main() {
+         ts.push("features", features);
+         ts.push("llvm-target", "x86_64-linux-gnu");
+         ts.push("supported-sanitizers", ["kcfi", "kernel-address"]);
+-        ts.push("target-pointer-width", "64");
++        if cfg.rustc_version_atleast(1, 91, 0) {
++            ts.push("target-pointer-width", 64);
++        } else {
++            ts.push("target-pointer-width", "64");
++        }
+     } else if cfg.has("X86_32") {
+         // This only works on UML, as i386 otherwise needs regparm support in rustc
+         if !cfg.has("UML") {
+@@ -245,7 +249,11 @@ fn main() {
+         }
+         ts.push("features", features);
+         ts.push("llvm-target", "i386-unknown-linux-gnu");
+-        ts.push("target-pointer-width", "32");
++        if cfg.rustc_version_atleast(1, 91, 0) {
++            ts.push("target-pointer-width", 32);
++        } else {
++            ts.push("target-pointer-width", "32");
++        }
+     } else if cfg.has("LOONGARCH") {
+         panic!("loongarch uses the builtin rustc loongarch64-unknown-none-softfloat target");
+     } else {
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 9a7793eb16e91e..bc43ea4d4e1612 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1991,6 +1991,7 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ static const struct snd_pci_quirk force_connect_list[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x83e2, "HP EliteDesk 800 G4", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x83ef, "HP MP9 G4 Retail System AMS", 1),
++	SND_PCI_QUIRK(0x103c, 0x845a, "HP EliteDesk 800 G4 DM 65W", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0ec98833e3d2e7..8458ca4d8d9dab 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11427,6 +11427,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d05, 0x1409, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1d05, 0x300f, "TongFang X6AR5xxY", ALC2XX_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1d05, 0x3019, "TongFang X6FR5xxY", ALC2XX_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c
+index 8c5ebbe8a1f315..eab99e9ba2c6f0 100644
+--- a/sound/pci/hda/tas2781_hda_i2c.c
++++ b/sound/pci/hda/tas2781_hda_i2c.c
+@@ -282,7 +282,7 @@ static int tas2563_save_calibration(struct tas2781_hda *h)
+ {
+ 	efi_guid_t efi_guid = tasdev_fct_efi_guid[LENOVO];
+ 	char *vars[TASDEV_CALIB_N] = {
+-		"R0_%d", "InvR0_%d", "R0_Low_%d", "Power_%d", "TLim_%d"
++		"R0_%d", "R0_Low_%d", "InvR0_%d", "Power_%d", "TLim_%d"
+ 	};
+ 	efi_char16_t efi_name[TAS2563_CAL_VAR_NAME_MAX];
+ 	unsigned long max_size = TAS2563_CAL_DATA_SIZE;
+@@ -292,6 +292,7 @@ static int tas2563_save_calibration(struct tas2781_hda *h)
+ 	struct cali_reg *r = &cd->cali_reg_array;
+ 	unsigned int offset = 0;
+ 	unsigned char *data;
++	__be32 bedata;
+ 	efi_status_t status;
+ 	unsigned int attr;
+ 	int ret, i, j, k;
+@@ -333,6 +334,8 @@ static int tas2563_save_calibration(struct tas2781_hda *h)
+ 					i, j, status);
+ 				return -EINVAL;
+ 			}
++			bedata = cpu_to_be32(*(uint32_t *)&data[offset]);
++			memcpy(&data[offset], &bedata, sizeof(bedata));
+ 			offset += TAS2563_CAL_DATA_SIZE;
+ 		}
+ 	}
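
Two separate fixes land in the tas2563 hunk: the EFI variable name table is reordered
(apparently so the parsed values line up with the order the calibration consumer
expects; an assumption from this excerpt alone), and each 32-bit calibration word is
converted to big-endian in place after it is read, since the buffer is evidently
consumed downstream as a raw big-endian stream. The in-place conversion in isolation:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>  /* htonl(): host to big-endian, like cpu_to_be32() */

    /* Convert one 32-bit word at buf+offset to big-endian in place; the
     * memcpy()s sidestep unaligned access, unlike a raw pointer cast. */
    static void cal_word_to_be32(uint8_t *buf, size_t offset)
    {
        uint32_t word;

        memcpy(&word, buf + offset, sizeof(word));
        word = htonl(word);
        memcpy(buf + offset, &word, sizeof(word));
    }
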
+diff --git a/sound/soc/renesas/rcar/core.c b/sound/soc/renesas/rcar/core.c
+index a72f36d3ca2cd2..4f4ed24cb36161 100644
+--- a/sound/soc/renesas/rcar/core.c
++++ b/sound/soc/renesas/rcar/core.c
+@@ -597,7 +597,7 @@ int rsnd_dai_connect(struct rsnd_mod *mod,
+ 
+ 	dev_dbg(dev, "%s is connected to io (%s)\n",
+ 		rsnd_mod_name(mod),
+-		snd_pcm_direction_name(io->substream->stream));
++		rsnd_io_is_play(io) ? "Playback" : "Capture");
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 16bbc074dc5f67..d31ee6e9abefc8 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -375,8 +375,9 @@ struct snd_soc_component
+ 	for_each_component(component) {
+ 		if ((dev == component->dev) &&
+ 		    (!driver_name ||
+-		     (driver_name == component->driver->name) ||
+-		     (strcmp(component->driver->name, driver_name) == 0))) {
++		     (component->driver->name &&
++		      ((component->driver->name == driver_name) ||
++		       (strcmp(component->driver->name, driver_name) == 0))))) {
+ 			found_component = component;
+ 			break;
+ 		}
+diff --git a/sound/soc/sof/intel/ptl.c b/sound/soc/sof/intel/ptl.c
+index 1bc1f54c470df7..4633cd01e7dd4b 100644
+--- a/sound/soc/sof/intel/ptl.c
++++ b/sound/soc/sof/intel/ptl.c
+@@ -143,6 +143,7 @@ const struct sof_intel_dsp_desc wcl_chip_info = {
+ 	.read_sdw_lcount =  hda_sdw_check_lcount_ext,
+ 	.check_sdw_irq = lnl_dsp_check_sdw_irq,
+ 	.check_sdw_wakeen_irq = lnl_sdw_check_wakeen_irq,
++	.sdw_process_wakeen = hda_sdw_process_wakeen_common,
+ 	.check_ipc_irq = mtl_dsp_check_ipc_irq,
+ 	.cl_init = mtl_dsp_cl_init,
+ 	.power_down_dsp = mtl_power_down_dsp,
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 0ee532acbb6034..ec95a063beb105 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -327,12 +327,16 @@ static bool focusrite_valid_sample_rate(struct snd_usb_audio *chip,
+ 		max_rate = combine_quad(&fmt[6]);
+ 
+ 		switch (max_rate) {
++		case 192000:
++			if (rate == 176400 || rate == 192000)
++				return true;
++			fallthrough;
++		case 96000:
++			if (rate == 88200 || rate == 96000)
++				return true;
++			fallthrough;
+ 		case 48000:
+ 			return (rate == 44100 || rate == 48000);
+-		case 96000:
+-			return (rate == 88200 || rate == 96000);
+-		case 192000:
+-			return (rate == 176400 || rate == 192000);
+ 		default:
+ 			usb_audio_info(chip,
+ 				"%u:%d : unexpected max rate: %u\n",
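
The focusrite_valid_sample_rate() reordering changes the semantics from "only the pair
matching the advertised maximum" to a cumulative check: with the highest tier first and
explicit fallthrough, a device reporting max_rate 192000 now also accepts 88200, 96000,
44100 and 48000, and one reporting 96000 also accepts 44100 and 48000. The cascade in
isolation:

    #include <stdbool.h>

    /* Cumulative rate validation via switch fallthrough. */
    static bool rate_ok(unsigned int max_rate, unsigned int rate)
    {
        switch (max_rate) {
        case 192000:
            if (rate == 176400 || rate == 192000)
                return true;
            /* fall through */
        case 96000:
            if (rate == 88200 || rate == 96000)
                return true;
            /* fall through */
        case 48000:
            return rate == 44100 || rate == 48000;
        default:
            return false;
        }
    }
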
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index d0efb3dd8675e4..9530c59b3cf4c8 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -4339,9 +4339,11 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ 			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
+ 	/* lowest playback value is muted on some devices */
++	case USB_ID(0x0572, 0x1b09): /* Conexant Systems (Rockwell), Inc. */
+ 	case USB_ID(0x0d8c, 0x000c): /* C-Media */
+ 	case USB_ID(0x0d8c, 0x0014): /* C-Media */
+ 	case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
++	case USB_ID(0x2d99, 0x0026): /* HECATE G2 GAMING HEADSET */
+ 		if (strstr(kctl->id.name, "Playback"))
+ 			cval->min_mute = 1;
+ 		break;
+diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile
+index ed565eb52275f1..342e056c8c665a 100644
+--- a/tools/gpio/Makefile
++++ b/tools/gpio/Makefile
+@@ -77,7 +77,7 @@ $(OUTPUT)gpio-watch: $(GPIO_WATCH_IN)
+ 
+ clean:
+ 	rm -f $(ALL_PROGRAMS)
+-	rm -f $(OUTPUT)include/linux/gpio.h
++	rm -rf $(OUTPUT)include
+ 	find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete
+ 
+ install: $(ALL_PROGRAMS)
+diff --git a/tools/net/ynl/pyynl/ynl_gen_c.py b/tools/net/ynl/pyynl/ynl_gen_c.py
+index 76032e01c2e759..0725a52b6ad7b0 100755
+--- a/tools/net/ynl/pyynl/ynl_gen_c.py
++++ b/tools/net/ynl/pyynl/ynl_gen_c.py
+@@ -830,7 +830,7 @@ class TypeArrayNest(Type):
+                      'ynl_attr_for_each_nested(attr2, attr) {',
+                      '\tif (ynl_attr_validate(yarg, attr2))',
+                      '\t\treturn YNL_PARSE_CB_ERROR;',
+-                     f'\t{var}->_count.{self.c_name}++;',
++                     f'\tn_{self.c_name}++;',
+                      '}']
+         return get_lines, None, local_vars
+ 
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index c81444059ad077..d9123c9637baaf 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -290,9 +290,15 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session,
+ 
+ 		info_node->info_linear = info_linear;
+ 		if (!perf_env__insert_bpf_prog_info(env, info_node)) {
+-			free(info_linear);
++			/*
++			 * Insert failed, likely because of a duplicate event
++			 * made by the sideband thread. Ignore synthesizing the
++			 * metadata.
++			 */
+ 			free(info_node);
++			goto out;
+ 		}
++		/* info_linear is now owned by info_node and shouldn't be freed below. */
+ 		info_linear = NULL;
+ 
+ 		/*
+@@ -451,18 +457,18 @@ int perf_event__synthesize_bpf_events(struct perf_session *session,
+ 	return err;
+ }
+ 
+-static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
++static int perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ {
+ 	struct bpf_prog_info_node *info_node;
+ 	struct perf_bpil *info_linear;
+ 	struct btf *btf = NULL;
+ 	u64 arrays;
+ 	u32 btf_id;
+-	int fd;
++	int fd, err = 0;
+ 
+ 	fd = bpf_prog_get_fd_by_id(id);
+ 	if (fd < 0)
+-		return;
++		return -EINVAL;
+ 
+ 	arrays = 1UL << PERF_BPIL_JITED_KSYMS;
+ 	arrays |= 1UL << PERF_BPIL_JITED_FUNC_LENS;
+@@ -475,6 +481,7 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ 	info_linear = get_bpf_prog_info_linear(fd, arrays);
+ 	if (IS_ERR_OR_NULL(info_linear)) {
+ 		pr_debug("%s: failed to get BPF program info. aborting\n", __func__);
++		err = PTR_ERR(info_linear);
+ 		goto out;
+ 	}
+ 
+@@ -484,38 +491,46 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ 	if (info_node) {
+ 		info_node->info_linear = info_linear;
+ 		if (!perf_env__insert_bpf_prog_info(env, info_node)) {
++			pr_debug("%s: duplicate add bpf info request for id %u\n",
++				 __func__, btf_id);
+ 			free(info_linear);
+ 			free(info_node);
++			goto out;
+ 		}
+-	} else
++	} else {
+ 		free(info_linear);
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	if (btf_id == 0)
+ 		goto out;
+ 
+ 	btf = btf__load_from_kernel_by_id(btf_id);
+-	if (libbpf_get_error(btf)) {
+-		pr_debug("%s: failed to get BTF of id %u, aborting\n",
+-			 __func__, btf_id);
+-		goto out;
++	if (!btf) {
++		err = -errno;
++		pr_debug("%s: failed to get BTF of id %u %d\n", __func__, btf_id, err);
++	} else {
++		perf_env__fetch_btf(env, btf_id, btf);
+ 	}
+-	perf_env__fetch_btf(env, btf_id, btf);
+ 
+ out:
+ 	btf__free(btf);
+ 	close(fd);
++	return err;
+ }
+ 
+ static int bpf_event__sb_cb(union perf_event *event, void *data)
+ {
+ 	struct perf_env *env = data;
++	int ret = 0;
+ 
+ 	if (event->header.type != PERF_RECORD_BPF_EVENT)
+ 		return -1;
+ 
+ 	switch (event->bpf.type) {
+ 	case PERF_BPF_EVENT_PROG_LOAD:
+-		perf_env__add_bpf_info(env, event->bpf.id);
++		ret = perf_env__add_bpf_info(env, event->bpf.id);
+ 
+ 	case PERF_BPF_EVENT_PROG_UNLOAD:
+ 		/*
+@@ -529,7 +544,7 @@ static int bpf_event__sb_cb(union perf_event *event, void *data)
+ 		break;
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int evlist__add_bpf_sb_event(struct evlist *evlist, struct perf_env *env)
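
Two details in the bpf-event changes deserve a note: errors now propagate out of
perf_env__add_bpf_info() (a duplicate insert from the sideband thread is logged and
treated as benign), and the BTF load check drops libbpf_get_error() for a plain NULL
test plus errno. The latter is the libbpf >= 1.0 error convention, where APIs return
NULL or a negative value and report the cause through errno rather than encoding it in
the returned pointer:

    /* libbpf >= 1.0 style error handling (outline): */
    struct btf *btf = btf__load_from_kernel_by_id(btf_id);
    if (!btf) {
        int err = -errno;  /* e.g. the id raced with a program unload */
        pr_debug("failed to get BTF of id %u: %d\n", btf_id, err);
    }
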
+diff --git a/tools/perf/util/bpf-utils.c b/tools/perf/util/bpf-utils.c
+index 80b1d2b3729ba4..5a66dc8594aa88 100644
+--- a/tools/perf/util/bpf-utils.c
++++ b/tools/perf/util/bpf-utils.c
+@@ -20,7 +20,7 @@ struct bpil_array_desc {
+ 				 */
+ };
+ 
+-static struct bpil_array_desc bpil_array_desc[] = {
++static const struct bpil_array_desc bpil_array_desc[] = {
+ 	[PERF_BPIL_JITED_INSNS] = {
+ 		offsetof(struct bpf_prog_info, jited_prog_insns),
+ 		offsetof(struct bpf_prog_info, jited_prog_len),
+@@ -115,7 +115,7 @@ get_bpf_prog_info_linear(int fd, __u64 arrays)
+ 	__u32 info_len = sizeof(info);
+ 	__u32 data_len = 0;
+ 	int i, err;
+-	void *ptr;
++	__u8 *ptr;
+ 
+ 	if (arrays >> PERF_BPIL_LAST_ARRAY)
+ 		return ERR_PTR(-EINVAL);
+@@ -126,15 +126,15 @@ get_bpf_prog_info_linear(int fd, __u64 arrays)
+ 		pr_debug("can't get prog info: %s", strerror(errno));
+ 		return ERR_PTR(-EFAULT);
+ 	}
++	if (info.type >= __MAX_BPF_PROG_TYPE)
++		pr_debug("%s:%d: unexpected program type %u\n", __func__, __LINE__, info.type);
+ 
+ 	/* step 2: calculate total size of all arrays */
+ 	for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) {
++		const struct bpil_array_desc *desc = &bpil_array_desc[i];
+ 		bool include_array = (arrays & (1UL << i)) > 0;
+-		struct bpil_array_desc *desc;
+ 		__u32 count, size;
+ 
+-		desc = bpil_array_desc + i;
+-
+ 		/* kernel is too old to support this field */
+ 		if (info_len < desc->array_offset + sizeof(__u32) ||
+ 		    info_len < desc->count_offset + sizeof(__u32) ||
+@@ -163,19 +163,20 @@ get_bpf_prog_info_linear(int fd, __u64 arrays)
+ 	ptr = info_linear->data;
+ 
+ 	for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) {
+-		struct bpil_array_desc *desc;
++		const struct bpil_array_desc *desc = &bpil_array_desc[i];
+ 		__u32 count, size;
+ 
+ 		if ((arrays & (1UL << i)) == 0)
+ 			continue;
+ 
+-		desc  = bpil_array_desc + i;
+ 		count = bpf_prog_info_read_offset_u32(&info, desc->count_offset);
+ 		size  = bpf_prog_info_read_offset_u32(&info, desc->size_offset);
+ 		bpf_prog_info_set_offset_u32(&info_linear->info,
+ 					     desc->count_offset, count);
+ 		bpf_prog_info_set_offset_u32(&info_linear->info,
+ 					     desc->size_offset, size);
++		assert(ptr >= info_linear->data);
++		assert(ptr < &info_linear->data[data_len]);
+ 		bpf_prog_info_set_offset_u64(&info_linear->info,
+ 					     desc->array_offset,
+ 					     ptr_to_u64(ptr));
+@@ -189,27 +190,45 @@ get_bpf_prog_info_linear(int fd, __u64 arrays)
+ 		free(info_linear);
+ 		return ERR_PTR(-EFAULT);
+ 	}
++	if (info_linear->info.type >= __MAX_BPF_PROG_TYPE) {
++		pr_debug("%s:%d: unexpected program type %u\n",
++			 __func__, __LINE__, info_linear->info.type);
++	}
+ 
+ 	/* step 6: verify the data */
++	ptr = info_linear->data;
+ 	for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) {
+-		struct bpil_array_desc *desc;
+-		__u32 v1, v2;
++		const struct bpil_array_desc *desc = &bpil_array_desc[i];
++		__u32 count1, count2, size1, size2;
++		__u64 ptr2;
+ 
+ 		if ((arrays & (1UL << i)) == 0)
+ 			continue;
+ 
+-		desc = bpil_array_desc + i;
+-		v1 = bpf_prog_info_read_offset_u32(&info, desc->count_offset);
+-		v2 = bpf_prog_info_read_offset_u32(&info_linear->info,
++		count1 = bpf_prog_info_read_offset_u32(&info, desc->count_offset);
++		count2 = bpf_prog_info_read_offset_u32(&info_linear->info,
+ 						   desc->count_offset);
+-		if (v1 != v2)
+-			pr_warning("%s: mismatch in element count\n", __func__);
++		if (count1 != count2) {
++			pr_warning("%s: mismatch in element count %u vs %u\n", __func__, count1, count2);
++			free(info_linear);
++			return ERR_PTR(-ERANGE);
++		}
+ 
+-		v1 = bpf_prog_info_read_offset_u32(&info, desc->size_offset);
+-		v2 = bpf_prog_info_read_offset_u32(&info_linear->info,
++		size1 = bpf_prog_info_read_offset_u32(&info, desc->size_offset);
++		size2 = bpf_prog_info_read_offset_u32(&info_linear->info,
+ 						   desc->size_offset);
+-		if (v1 != v2)
+-			pr_warning("%s: mismatch in rec size\n", __func__);
++		if (size1 != size2) {
++			pr_warning("%s: mismatch in rec size %u vs %u\n", __func__, size1, size2);
++			free(info_linear);
++			return ERR_PTR(-ERANGE);
++		}
++		ptr2 = bpf_prog_info_read_offset_u64(&info_linear->info, desc->array_offset);
++		if (ptr_to_u64(ptr) != ptr2) {
++			pr_warning("%s: mismatch in array %p vs %llx\n", __func__, ptr, ptr2);
++			free(info_linear);
++			return ERR_PTR(-ERANGE);
++		}
++		ptr += roundup(count1 * size1, sizeof(__u64));
+ 	}
+ 
+ 	/* step 7: update info_len and data_len */
+@@ -224,13 +243,12 @@ void bpil_addr_to_offs(struct perf_bpil *info_linear)
+ 	int i;
+ 
+ 	for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) {
+-		struct bpil_array_desc *desc;
++		const struct bpil_array_desc *desc = &bpil_array_desc[i];
+ 		__u64 addr, offs;
+ 
+ 		if ((info_linear->arrays & (1UL << i)) == 0)
+ 			continue;
+ 
+-		desc = bpil_array_desc + i;
+ 		addr = bpf_prog_info_read_offset_u64(&info_linear->info,
+ 						     desc->array_offset);
+ 		offs = addr - ptr_to_u64(info_linear->data);
+@@ -244,13 +262,12 @@ void bpil_offs_to_addr(struct perf_bpil *info_linear)
+ 	int i;
+ 
+ 	for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) {
+-		struct bpil_array_desc *desc;
++		const struct bpil_array_desc *desc = &bpil_array_desc[i];
+ 		__u64 addr, offs;
+ 
+ 		if ((info_linear->arrays & (1UL << i)) == 0)
+ 			continue;
+ 
+-		desc = bpil_array_desc + i;
+ 		offs = bpf_prog_info_read_offset_u64(&info_linear->info,
+ 						     desc->array_offset);
+ 		addr = offs + ptr_to_u64(info_linear->data);
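
The hardened verification in get_bpf_prog_info_linear() is worth unpacking: the count
and size mismatches that used to be warnings are now hard -ERANGE failures, and step 6
additionally re-derives where each array must start. Since the arrays are packed
back-to-back in info_linear->data with each one rounded up to 8 bytes (matching the
roundup in the earlier sizing pass), the expected pointer can be recomputed from the
counts and sizes alone. In outline, where array_ptr()/count()/size() stand in for the
bpf_prog_info_read_offset_* reads:

    __u8 *expect = info_linear->data;

    for (i = PERF_BPIL_FIRST_ARRAY; i < PERF_BPIL_LAST_ARRAY; ++i) {
        if (!(arrays & (1UL << i)))
            continue;
        /* the kernel must not have moved the array between the syscalls */
        if (array_ptr(i) != ptr_to_u64(expect))
            return ERR_PTR(-ERANGE);
        expect += roundup(count(i) * size(i), sizeof(__u64));
    }
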
+diff --git a/tools/power/cpupower/utils/cpupower-set.c b/tools/power/cpupower/utils/cpupower-set.c
+index 0677b58374abf1..59ace394cf3ef9 100644
+--- a/tools/power/cpupower/utils/cpupower-set.c
++++ b/tools/power/cpupower/utils/cpupower-set.c
+@@ -62,8 +62,8 @@ int cmd_set(int argc, char **argv)
+ 
+ 	params.params = 0;
+ 	/* parameter parsing */
+-	while ((ret = getopt_long(argc, argv, "b:e:m:",
+-						set_opts, NULL)) != -1) {
++	while ((ret = getopt_long(argc, argv, "b:e:m:t:",
++				  set_opts, NULL)) != -1) {
+ 		switch (ret) {
+ 		case 'b':
+ 			if (params.perf_bias)
+diff --git a/tools/testing/selftests/drivers/net/hw/csum.py b/tools/testing/selftests/drivers/net/hw/csum.py
+index cd23af87531708..3e3a89a34afe74 100755
+--- a/tools/testing/selftests/drivers/net/hw/csum.py
++++ b/tools/testing/selftests/drivers/net/hw/csum.py
+@@ -17,7 +17,7 @@ def test_receive(cfg, ipver="6", extra_args=None):
+     ip_args = f"-{ipver} -S {cfg.remote_addr_v[ipver]} -D {cfg.addr_v[ipver]}"
+ 
+     rx_cmd = f"{cfg.bin_local} -i {cfg.ifname} -n 100 {ip_args} -r 1 -R {extra_args}"
+-    tx_cmd = f"{cfg.bin_remote} -i {cfg.ifname} -n 100 {ip_args} -r 1 -T {extra_args}"
++    tx_cmd = f"{cfg.bin_remote} -i {cfg.remote_ifname} -n 100 {ip_args} -r 1 -T {extra_args}"
+ 
+     with bkg(rx_cmd, exit_wait=True):
+         wait_port_listen(34000, proto="udp")
+@@ -37,7 +37,7 @@ def test_transmit(cfg, ipver="6", extra_args=None):
+     if extra_args != "-U -Z":
+         extra_args += " -r 1"
+ 
+-    rx_cmd = f"{cfg.bin_remote} -i {cfg.ifname} -L 1 -n 100 {ip_args} -R {extra_args}"
++    rx_cmd = f"{cfg.bin_remote} -i {cfg.remote_ifname} -L 1 -n 100 {ip_args} -R {extra_args}"
+     tx_cmd = f"{cfg.bin_local} -i {cfg.ifname} -L 1 -n 100 {ip_args} -T {extra_args}"
+ 
+     with bkg(rx_cmd, host=cfg.remote, exit_wait=True):
+diff --git a/tools/testing/selftests/net/bind_bhash.c b/tools/testing/selftests/net/bind_bhash.c
+index 57ff67a3751eb3..da04b0b19b73ca 100644
+--- a/tools/testing/selftests/net/bind_bhash.c
++++ b/tools/testing/selftests/net/bind_bhash.c
+@@ -75,7 +75,7 @@ static void *setup(void *arg)
+ 	int *array = (int *)arg;
+ 
+ 	for (i = 0; i < MAX_CONNECTIONS; i++) {
+-		sock_fd = bind_socket(SO_REUSEADDR | SO_REUSEPORT, setup_addr);
++		sock_fd = bind_socket(SO_REUSEPORT, setup_addr);
+ 		if (sock_fd < 0) {
+ 			ret = sock_fd;
+ 			pthread_exit(&ret);
+@@ -103,7 +103,7 @@ int main(int argc, const char *argv[])
+ 
+ 	setup_addr = use_v6 ? setup_addr_v6 : setup_addr_v4;
+ 
+-	listener_fd = bind_socket(SO_REUSEADDR | SO_REUSEPORT, setup_addr);
++	listener_fd = bind_socket(SO_REUSEPORT, setup_addr);
+ 	if (listen(listener_fd, 100) < 0) {
+ 		perror("listen failed");
+ 		return -1;
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_clash.sh b/tools/testing/selftests/net/netfilter/conntrack_clash.sh
+index 606a43a60f7368..7fc6c5dbd5516e 100755
+--- a/tools/testing/selftests/net/netfilter/conntrack_clash.sh
++++ b/tools/testing/selftests/net/netfilter/conntrack_clash.sh
+@@ -99,7 +99,7 @@ run_one_clash_test()
+ 	local entries
+ 	local cre
+ 
+-	if ! ip netns exec "$ns" ./udpclash $daddr $dport;then
++	if ! ip netns exec "$ns" timeout 30 ./udpclash $daddr $dport;then
+ 		echo "INFO: did not receive expected number of replies for $daddr:$dport"
+ 		ip netns exec "$ctns" conntrack -S
+ 		# don't fail: check if clash resolution triggered after all.
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_resize.sh b/tools/testing/selftests/net/netfilter/conntrack_resize.sh
+index 788cd56ea4a0dc..615fe3c6f405da 100755
+--- a/tools/testing/selftests/net/netfilter/conntrack_resize.sh
++++ b/tools/testing/selftests/net/netfilter/conntrack_resize.sh
+@@ -187,7 +187,7 @@ ct_udpclash()
+ 	[ -x udpclash ] || return
+ 
+         while [ $now -lt $end ]; do
+-		ip netns exec "$ns" ./udpclash 127.0.0.1 $((RANDOM%65536)) > /dev/null 2>&1
++		ip netns exec "$ns" timeout 30 ./udpclash 127.0.0.1 $((RANDOM%65536)) > /dev/null 2>&1
+ 
+ 		now=$(date +%s)
+ 	done
+@@ -277,6 +277,7 @@ check_taint()
+ insert_flood()
+ {
+ 	local n="$1"
++	local timeout="$2"
+ 	local r=0
+ 
+ 	r=$((RANDOM%$insert_count))
+@@ -302,7 +303,7 @@ test_floodresize_all()
+ 	read tainted_then < /proc/sys/kernel/tainted
+ 
+ 	for n in "$nsclient1" "$nsclient2";do
+-		insert_flood "$n" &
++		insert_flood "$n" "$timeout" &
+ 	done
+ 
+ 	# resize table constantly while flood/insert/dump/flushs
+diff --git a/tools/testing/selftests/net/netfilter/nft_flowtable.sh b/tools/testing/selftests/net/netfilter/nft_flowtable.sh
+index a4ee5496f2a17c..45832df982950c 100755
+--- a/tools/testing/selftests/net/netfilter/nft_flowtable.sh
++++ b/tools/testing/selftests/net/netfilter/nft_flowtable.sh
+@@ -20,6 +20,7 @@ ret=0
+ SOCAT_TIMEOUT=60
+ 
+ nsin=""
++nsin_small=""
+ ns1out=""
+ ns2out=""
+ 
+@@ -36,7 +37,7 @@ cleanup() {
+ 
+ 	cleanup_all_ns
+ 
+-	rm -f "$nsin" "$ns1out" "$ns2out"
++	rm -f "$nsin" "$nsin_small" "$ns1out" "$ns2out"
+ 
+ 	[ "$log_netns" -eq 0 ] && sysctl -q net.netfilter.nf_log_all_netns="$log_netns"
+ }
+@@ -72,6 +73,7 @@ lmtu=1500
+ rmtu=2000
+ 
+ filesize=$((2 * 1024 * 1024))
++filesize_small=$((filesize / 16))
+ 
+ usage(){
+ 	echo "nft_flowtable.sh [OPTIONS]"
+@@ -89,7 +91,10 @@ do
+ 		o) omtu=$OPTARG;;
+ 		l) lmtu=$OPTARG;;
+ 		r) rmtu=$OPTARG;;
+-		s) filesize=$OPTARG;;
++		s)
++			filesize=$OPTARG
++			filesize_small=$((OPTARG / 16))
++		;;
+ 		*) usage;;
+ 	esac
+ done
+@@ -215,6 +220,7 @@ if ! ip netns exec "$ns2" ping -c 1 -q 10.0.1.99 > /dev/null; then
+ fi
+ 
+ nsin=$(mktemp)
++nsin_small=$(mktemp)
+ ns1out=$(mktemp)
+ ns2out=$(mktemp)
+ 
+@@ -265,6 +271,7 @@ check_counters()
+ check_dscp()
+ {
+ 	local what=$1
++	local pmtud="$2"
+ 	local ok=1
+ 
+ 	local counter
+@@ -277,37 +284,39 @@ check_dscp()
+ 	local pc4z=${counter%*bytes*}
+ 	local pc4z=${pc4z#*packets}
+ 
++	local failmsg="FAIL: pmtu $pmtu: $what counters do not match, expected"
++
+ 	case "$what" in
+ 	"dscp_none")
+ 		if [ "$pc4" -gt 0 ] || [ "$pc4z" -eq 0 ]; then
+-			echo "FAIL: dscp counters do not match, expected dscp3 == 0, dscp0 > 0, but got $pc4,$pc4z" 1>&2
++			echo "$failmsg dscp3 == 0, dscp0 > 0, but got $pc4,$pc4z" 1>&2
+ 			ret=1
+ 			ok=0
+ 		fi
+ 		;;
+ 	"dscp_fwd")
+ 		if [ "$pc4" -eq 0 ] || [ "$pc4z" -eq 0 ]; then
+-			echo "FAIL: dscp counters do not match, expected dscp3 and dscp0 > 0 but got $pc4,$pc4z" 1>&2
++			echo "$failmsg dscp3 and dscp0 > 0 but got $pc4,$pc4z" 1>&2
+ 			ret=1
+ 			ok=0
+ 		fi
+ 		;;
+ 	"dscp_ingress")
+ 		if [ "$pc4" -eq 0 ] || [ "$pc4z" -gt 0 ]; then
+-			echo "FAIL: dscp counters do not match, expected dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2
++			echo "$failmsg dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2
+ 			ret=1
+ 			ok=0
+ 		fi
+ 		;;
+ 	"dscp_egress")
+ 		if [ "$pc4" -eq 0 ] || [ "$pc4z" -gt 0 ]; then
+-			echo "FAIL: dscp counters do not match, expected dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2
++			echo "$failmsg dscp3 > 0, dscp0 == 0 but got $pc4,$pc4z" 1>&2
+ 			ret=1
+ 			ok=0
+ 		fi
+ 		;;
+ 	*)
+-		echo "FAIL: Unknown DSCP check" 1>&2
++		echo "$failmsg: Unknown DSCP check" 1>&2
+ 		ret=1
+ 		ok=0
+ 	esac
+@@ -319,9 +328,9 @@ check_dscp()
+ 
+ check_transfer()
+ {
+-	in=$1
+-	out=$2
+-	what=$3
++	local in=$1
++	local out=$2
++	local what=$3
+ 
+ 	if ! cmp "$in" "$out" > /dev/null 2>&1; then
+ 		echo "FAIL: file mismatch for $what" 1>&2
+@@ -342,25 +351,39 @@ test_tcp_forwarding_ip()
+ {
+ 	local nsa=$1
+ 	local nsb=$2
+-	local dstip=$3
+-	local dstport=$4
++	local pmtu=$3
++	local dstip=$4
++	local dstport=$5
+ 	local lret=0
++	local socatc
++	local socatl
++	local infile="$nsin"
++
++	if [ $pmtu -eq 0 ]; then
++		infile="$nsin_small"
++	fi
+ 
+-	timeout "$SOCAT_TIMEOUT" ip netns exec "$nsb" socat -4 TCP-LISTEN:12345,reuseaddr STDIO < "$nsin" > "$ns2out" &
++	timeout "$SOCAT_TIMEOUT" ip netns exec "$nsb" socat -4 TCP-LISTEN:12345,reuseaddr STDIO < "$infile" > "$ns2out" &
+ 	lpid=$!
+ 
+ 	busywait 1000 listener_ready
+ 
+-	timeout "$SOCAT_TIMEOUT" ip netns exec "$nsa" socat -4 TCP:"$dstip":"$dstport" STDIO < "$nsin" > "$ns1out"
++	timeout "$SOCAT_TIMEOUT" ip netns exec "$nsa" socat -4 TCP:"$dstip":"$dstport" STDIO < "$infile" > "$ns1out"
++	socatc=$?
+ 
+ 	wait $lpid
++	socatl=$?
+ 
+-	if ! check_transfer "$nsin" "$ns2out" "ns1 -> ns2"; then
++	if [ $socatl -ne 0 ] || [ $socatc -ne 0 ];then
++		rc=1
++	fi
++
++	if ! check_transfer "$infile" "$ns2out" "ns1 -> ns2"; then
+ 		lret=1
+ 		ret=1
+ 	fi
+ 
+-	if ! check_transfer "$nsin" "$ns1out" "ns1 <- ns2"; then
++	if ! check_transfer "$infile" "$ns1out" "ns1 <- ns2"; then
+ 		lret=1
+ 		ret=1
+ 	fi
+@@ -370,14 +393,16 @@ test_tcp_forwarding_ip()
+ 
+ test_tcp_forwarding()
+ {
+-	test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345
++	local pmtu="$3"
++
++	test_tcp_forwarding_ip "$1" "$2" "$pmtu" 10.0.2.99 12345
+ 
+ 	return $?
+ }
+ 
+ test_tcp_forwarding_set_dscp()
+ {
+-	check_dscp "dscp_none"
++	local pmtu="$3"
+ 
+ ip netns exec "$nsr1" nft -f - <<EOF
+ table netdev dscpmangle {
+@@ -388,8 +413,8 @@ table netdev dscpmangle {
+ }
+ EOF
+ if [ $? -eq 0 ]; then
+-	test_tcp_forwarding_ip "$1" "$2"  10.0.2.99 12345
+-	check_dscp "dscp_ingress"
++	test_tcp_forwarding_ip "$1" "$2" "$3" 10.0.2.99 12345
++	check_dscp "dscp_ingress" "$pmtu"
+ 
+ 	ip netns exec "$nsr1" nft delete table netdev dscpmangle
+ else
+@@ -405,10 +430,10 @@ table netdev dscpmangle {
+ }
+ EOF
+ if [ $? -eq 0 ]; then
+-	test_tcp_forwarding_ip "$1" "$2"  10.0.2.99 12345
+-	check_dscp "dscp_egress"
++	test_tcp_forwarding_ip "$1" "$2" "$pmtu"  10.0.2.99 12345
++	check_dscp "dscp_egress" "$pmtu"
+ 
+-	ip netns exec "$nsr1" nft flush table netdev dscpmangle
++	ip netns exec "$nsr1" nft delete table netdev dscpmangle
+ else
+ 	echo "SKIP: Could not load netdev:egress for veth1"
+ fi
+@@ -416,48 +441,53 @@ fi
+ 	# partial.  If flowtable really works, then both dscp-is-0 and dscp-is-cs3
+ 	# counters should have seen packets (before and after ft offload kicks in).
+ 	ip netns exec "$nsr1" nft -a insert rule inet filter forward ip dscp set cs3
+-	test_tcp_forwarding_ip "$1" "$2"  10.0.2.99 12345
+-	check_dscp "dscp_fwd"
++	test_tcp_forwarding_ip "$1" "$2" "$pmtu"  10.0.2.99 12345
++	check_dscp "dscp_fwd" "$pmtu"
+ }
+ 
+ test_tcp_forwarding_nat()
+ {
++	local nsa="$1"
++	local nsb="$2"
++	local pmtu="$3"
++	local what="$4"
+ 	local lret
+-	local pmtu
+ 
+-	test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345
+-	lret=$?
++	[ "$pmtu" -eq 0 ] && what="$what (pmtu disabled)"
+ 
+-	pmtu=$3
+-	what=$4
++	test_tcp_forwarding_ip "$nsa" "$nsb" "$pmtu" 10.0.2.99 12345
++	lret=$?
+ 
+ 	if [ "$lret" -eq 0 ] ; then
+ 		if [ "$pmtu" -eq 1 ] ;then
+-			check_counters "flow offload for ns1/ns2 with masquerade and pmtu discovery $what"
++			check_counters "flow offload for ns1/ns2 with masquerade $what"
+ 		else
+ 			echo "PASS: flow offload for ns1/ns2 with masquerade $what"
+ 		fi
+ 
+-		test_tcp_forwarding_ip "$1" "$2" 10.6.6.6 1666
++		test_tcp_forwarding_ip "$1" "$2" "$pmtu" 10.6.6.6 1666
+ 		lret=$?
+ 		if [ "$pmtu" -eq 1 ] ;then
+-			check_counters "flow offload for ns1/ns2 with dnat and pmtu discovery $what"
++			check_counters "flow offload for ns1/ns2 with dnat $what"
+ 		elif [ "$lret" -eq 0 ] ; then
+ 			echo "PASS: flow offload for ns1/ns2 with dnat $what"
+ 		fi
++	else
++		echo "FAIL: flow offload for ns1/ns2 with dnat $what"
+ 	fi
+ 
+ 	return $lret
+ }
+ 
+ make_file "$nsin" "$filesize"
++make_file "$nsin_small" "$filesize_small"
+ 
+ # First test:
+ # No PMTU discovery, nsr1 is expected to fragment packets from ns1 to ns2 as needed.
+ # Due to MTU mismatch in both directions, all packets (except small packets like pure
+ # acks) have to be handled by normal forwarding path.  Therefore, packet counters
+ # are not checked.
+-if test_tcp_forwarding "$ns1" "$ns2"; then
++if test_tcp_forwarding "$ns1" "$ns2" 0; then
+ 	echo "PASS: flow offloaded for ns1/ns2"
+ else
+ 	echo "FAIL: flow offload for ns1/ns2:" 1>&2
+@@ -489,8 +519,9 @@ table ip nat {
+ }
+ EOF
+ 
++check_dscp "dscp_none" "0"
+ if ! test_tcp_forwarding_set_dscp "$ns1" "$ns2" 0 ""; then
+-	echo "FAIL: flow offload for ns1/ns2 with dscp update" 1>&2
++	echo "FAIL: flow offload for ns1/ns2 with dscp update and no pmtu discovery" 1>&2
+ 	exit 0
+ fi
+ 
+@@ -512,6 +543,14 @@ ip netns exec "$ns2" sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null
+ # are lower than file size and packets were forwarded via flowtable layer.
+ # For earlier tests (large mtus), packets cannot be handled via flowtable
+ # (except pure acks and other small packets).
++ip netns exec "$nsr1" nft reset counters table inet filter >/dev/null
++ip netns exec "$ns2"  nft reset counters table inet filter >/dev/null
++
++if ! test_tcp_forwarding_set_dscp "$ns1" "$ns2" 1 ""; then
++	echo "FAIL: flow offload for ns1/ns2 with dscp update and pmtu discovery" 1>&2
++	exit 0
++fi
++
+ ip netns exec "$nsr1" nft reset counters table inet filter >/dev/null
+ 
+ if ! test_tcp_forwarding_nat "$ns1" "$ns2" 1 ""; then
+@@ -644,7 +683,7 @@ ip -net "$ns2" route del 192.168.10.1 via 10.0.2.1
+ ip -net "$ns2" route add default via 10.0.2.1
+ ip -net "$ns2" route add default via dead:2::1
+ 
+-if test_tcp_forwarding "$ns1" "$ns2"; then
++if test_tcp_forwarding "$ns1" "$ns2" 1; then
+ 	check_counters "ipsec tunnel mode for ns1/ns2"
+ else
+ 	echo "FAIL: ipsec tunnel mode for ns1/ns2"
+@@ -668,7 +707,7 @@ if [ "$1" = "" ]; then
+ 	fi
+ 
+ 	echo "re-run with random mtus and file size: -o $o -l $l -r $r -s $filesize"
+-	$0 -o "$o" -l "$l" -r "$r" -s "$filesize"
++	$0 -o "$o" -l "$l" -r "$r" -s "$filesize" || ret=1
+ fi
+ 
+ exit $ret
+diff --git a/tools/testing/selftests/net/netfilter/udpclash.c b/tools/testing/selftests/net/netfilter/udpclash.c
+index 85c7b906ad08f7..79de163d61ab79 100644
+--- a/tools/testing/selftests/net/netfilter/udpclash.c
++++ b/tools/testing/selftests/net/netfilter/udpclash.c
+@@ -29,7 +29,7 @@ struct thread_args {
+ 	int sockfd;
+ };
+ 
+-static int wait = 1;
++static volatile int wait = 1;
+ 
+ static void *thread_main(void *varg)
+ {

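As a rough standalone illustration of the invariant the bpil hunks above now enforce (the two bpf_prog_info copies must agree on every enabled array's element count and record size, and each kernel-written array pointer must land exactly at the next 8-byte-aligned slot of the linear buffer), consider the following sketch; struct array_desc and verify_layout() are invented stand-ins, not perf's types:

#include <stddef.h>
#include <stdint.h>

#define ROUNDUP8(x) (((x) + 7ULL) & ~7ULL)

struct array_desc {
	uint32_t count;	/* element count reported by the kernel */
	uint32_t size;	/* record size reported by the kernel */
	uint64_t addr;	/* where the kernel says it wrote the array */
};

/* Return 0 when both snapshots agree and the arrays sit back to back
 * in buf at 8-byte-aligned offsets; -1 on any mismatch. */
static int verify_layout(const struct array_desc *first,
			 const struct array_desc *second,
			 size_t n, const unsigned char *buf)
{
	const unsigned char *expect = buf;
	size_t i;

	for (i = 0; i < n; i++) {
		if (first[i].count != second[i].count ||
		    first[i].size != second[i].size)
			return -1;	/* kernel changed the layout between reads */
		if (second[i].addr != (uint64_t)(uintptr_t)expect)
			return -1;	/* hole or overlap in the linear buffer */
		expect += ROUNDUP8((uint64_t)first[i].count * first[i].size);
	}
	return 0;
}

On any such mismatch the patched code now frees the buffer and returns ERR_PTR(-ERANGE) instead of warning and carrying on with inconsistent data.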


* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-10  5:57 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-10  5:57 UTC (permalink / raw
  To: gentoo-commits

commit:     66f8aab3003ef63757d4df285dce35ca7f259504
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 10 05:56:17 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Sep 10 05:56:17 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=66f8aab3

remove patch 1800 proc: fix missing pde_set_flags() for net proc files

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |   4 -
 ..._missing_pde_set_flags_for_net_proc_files.patch | 129 ---------------------
 2 files changed, 133 deletions(-)

diff --git a/0000_README b/0000_README
index 29987f33..71e6b1a7 100644
--- a/0000_README
+++ b/0000_README
@@ -79,10 +79,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
-From:   https://lore.kernel.org/all/20250821105806.1453833-1-wangzijie1@honor.com/
-Desc:   proc: fix missing pde_set_flags() for net proc files
-
 Patch:  1801_proc_fix_type_confusion_in_pde_set_flags.patch
 From:   https://lore.kernel.org/linux-fsdevel/20250904135715.3972782-1-wangzijie1@honor.com/
 Desc:   proc: fix type confusion in pde_set_flags()

diff --git a/1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch b/1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
deleted file mode 100644
index 8632f53b..00000000
--- a/1800_proc_fix_missing_pde_set_flags_for_net_proc_files.patch
+++ /dev/null
@@ -1,129 +0,0 @@
-Subject: [PATCH v3] proc: fix missing pde_set_flags() for net proc files
-Date: Thu, 21 Aug 2025 18:58:06 +0800
-Message-ID: <20250821105806.1453833-1-wangzijie1@honor.com>
-X-Mailer: git-send-email 2.25.1
-Precedence: bulk
-X-Mailing-List: regressions@lists.linux.dev
-List-Id: <regressions.lists.linux.dev>
-List-Subscribe: <mailto:regressions+subscribe@lists.linux.dev>
-List-Unsubscribe: <mailto:regressions+unsubscribe@lists.linux.dev>
-MIME-Version: 1.0
-Content-Transfer-Encoding: 8bit
-Content-Type: text/plain
-X-ClientProxiedBy: w002.hihonor.com (10.68.28.120) To a011.hihonor.com
- (10.68.31.243)
-
-To avoid potential UAF issues during module removal races, we use pde_set_flags()
-to save the proc_ops flags in the PDE itself before proc_register(), and then use
-pde_has_proc_*() helpers instead of directly dereferencing pde->proc_ops->*.
-
-However, the pde_set_flags() call was missing when creating net-related proc files.
-This omission caused incorrect behavior in which FMODE_LSEEK was being cleared
-inappropriately in proc_reg_open() for net proc files. Lars reported it in this link[1].
-
-Fix this by ensuring pde_set_flags() is called when registering a proc entry, and add
-a NULL check for proc_ops in pde_set_flags().
-
-[1]: https://lore.kernel.org/all/20250815195616.64497967@chagall.paradoxon.rec/
-
-Fixes: ff7ec8dc1b64 ("proc: use the same treatment to check proc_lseek as ones for proc_read_iter et.al")
-Cc: stable@vger.kernel.org
-Reported-by: Lars Wendler <polynomial-c@gmx.de>
-Signed-off-by: wangzijie <wangzijie1@honor.com>
----
-v3:
-- followed by Christian's suggestion to stash pde->proc_ops in a local const variable
-v2:
-- followed by Jiri's suggestion to refactor code and reformat commit message
----
- fs/proc/generic.c | 38 +++++++++++++++++++++-----------------
- 1 file changed, 21 insertions(+), 17 deletions(-)
-
-diff --git a/fs/proc/generic.c b/fs/proc/generic.c
-index 76e800e38..bd0c099cf 100644
---- a/fs/proc/generic.c
-+++ b/fs/proc/generic.c
-@@ -367,6 +367,25 @@ static const struct inode_operations proc_dir_inode_operations = {
- 	.setattr	= proc_notify_change,
- };
- 
-+static void pde_set_flags(struct proc_dir_entry *pde)
-+{
-+	const struct proc_ops *proc_ops = pde->proc_ops;
-+
-+	if (!proc_ops)
-+		return;
-+
-+	if (proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
-+		pde->flags |= PROC_ENTRY_PERMANENT;
-+	if (proc_ops->proc_read_iter)
-+		pde->flags |= PROC_ENTRY_proc_read_iter;
-+#ifdef CONFIG_COMPAT
-+	if (proc_ops->proc_compat_ioctl)
-+		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
-+#endif
-+	if (proc_ops->proc_lseek)
-+		pde->flags |= PROC_ENTRY_proc_lseek;
-+}
-+
- /* returns the registered entry, or frees dp and returns NULL on failure */
- struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
- 		struct proc_dir_entry *dp)
-@@ -374,6 +393,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
- 	if (proc_alloc_inum(&dp->low_ino))
- 		goto out_free_entry;
- 
-+	pde_set_flags(dp);
-+
- 	write_lock(&proc_subdir_lock);
- 	dp->parent = dir;
- 	if (pde_subdir_insert(dir, dp) == false) {
-@@ -561,20 +582,6 @@ struct proc_dir_entry *proc_create_reg(const char *name, umode_t mode,
- 	return p;
- }
- 
--static void pde_set_flags(struct proc_dir_entry *pde)
--{
--	if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
--		pde->flags |= PROC_ENTRY_PERMANENT;
--	if (pde->proc_ops->proc_read_iter)
--		pde->flags |= PROC_ENTRY_proc_read_iter;
--#ifdef CONFIG_COMPAT
--	if (pde->proc_ops->proc_compat_ioctl)
--		pde->flags |= PROC_ENTRY_proc_compat_ioctl;
--#endif
--	if (pde->proc_ops->proc_lseek)
--		pde->flags |= PROC_ENTRY_proc_lseek;
--}
--
- struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
- 		struct proc_dir_entry *parent,
- 		const struct proc_ops *proc_ops, void *data)
-@@ -585,7 +592,6 @@ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
- 	if (!p)
- 		return NULL;
- 	p->proc_ops = proc_ops;
--	pde_set_flags(p);
- 	return proc_register(parent, p);
- }
- EXPORT_SYMBOL(proc_create_data);
-@@ -636,7 +642,6 @@ struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode,
- 	p->proc_ops = &proc_seq_ops;
- 	p->seq_ops = ops;
- 	p->state_size = state_size;
--	pde_set_flags(p);
- 	return proc_register(parent, p);
- }
- EXPORT_SYMBOL(proc_create_seq_private);
-@@ -667,7 +672,6 @@ struct proc_dir_entry *proc_create_single_data(const char *name, umode_t mode,
- 		return NULL;
- 	p->proc_ops = &proc_single_ops;
- 	p->single_show = show;
--	pde_set_flags(p);
- 	return proc_register(parent, p);
- }
- EXPORT_SYMBOL(proc_create_single_data);
--- 
-2.25.1
-
-

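For context, the scheme described in the changelog above reduces to a small pattern: copy the bits an entry will need out of its ops table before the entry is published, so readers never dereference an ops pointer that may already belong to unloaded module memory. A hedged userspace sketch of that shape follows; struct entry, entry_set_flags() and entry_register() are invented names, not the proc API:

#define F_PERMANENT	0x1u
#define F_HAS_LSEEK	0x2u

struct ops {
	unsigned int flags;
	long (*lseek)(void *file, long off, int whence);
};

struct entry {
	const struct ops *ops;		/* may outlive its module */
	unsigned int cached_flags;	/* safe to read after ops is gone */
};

static void entry_set_flags(struct entry *e)
{
	const struct ops *ops = e->ops;

	if (!ops)		/* the NULL check the v3 patch added */
		return;
	if (ops->flags & F_PERMANENT)
		e->cached_flags |= F_PERMANENT;
	if (ops->lseek)
		e->cached_flags |= F_HAS_LSEEK;
}

static int entry_register(struct entry *e)
{
	entry_set_flags(e);	/* before publication, as in proc_register() */
	/* ... insert e into the externally visible tree here ... */
	return 0;
}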


* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-10  6:18 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-10  6:18 UTC (permalink / raw
  To: gentoo-commits

commit:     cec9c56fcd21599c1c7887cc45690a0086abfd6d
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 10 06:17:48 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Sep 10 06:17:48 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cec9c56f

Remove 2700_asus-wmi_fix_racy_registrations.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                |  4 --
 2700_asus-wmi_fix_racy_registrations.patch | 67 ------------------------------
 2 files changed, 71 deletions(-)

diff --git a/0000_README b/0000_README
index 71e6b1a7..a3545e96 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_asus-wmi_fix_racy_registrations.patch
-From:   https://lore.kernel.org/all/20250827052441.23382-1-tiwai@suse.de/#Z31drivers:platform:x86:asus-wmi.c
-Desc:   platform/x86: asus-wmi: Fix racy registrations
-
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2700_asus-wmi_fix_racy_registrations.patch b/2700_asus-wmi_fix_racy_registrations.patch
deleted file mode 100644
index 0e012eae..00000000
--- a/2700_asus-wmi_fix_racy_registrations.patch
+++ /dev/null
@@ -1,67 +0,0 @@
-Subject: [PATCH] platform/x86: asus-wmi: Fix racy registrations
-Date: Wed, 27 Aug 2025 07:24:33 +0200
-Message-ID: <20250827052441.23382-1-tiwai@suse.de>
-X-Mailer: git-send-email 2.50.1
-X-Mailing-List: platform-driver-x86@vger.kernel.org
-List-Id: <platform-driver-x86.vger.kernel.org>
-List-Subscribe: <mailto:platform-driver-x86+subscribe@vger.kernel.org>
-List-Unsubscribe: <mailto:platform-driver-x86+unsubscribe@vger.kernel.org>
-
-asus_wmi_register_driver() may be called from multiple drivers
-concurrently, which can lead to racy list operations, eventually
-corrupting memory and hitting an Oops on some ASUS machines.
-Also, error handling is missing: the probe path forgot to unregister
-the ACPI lps0 dev ops in the error case.
-
-This patch covers those issues by introducing a simple mutex in
-asus_wmi_register_driver() & *_unregister_driver(), and adding the
-proper call of asus_s2idle_check_unregister() in the error path.
-
-Fixes: feea7bd6b02d ("platform/x86: asus-wmi: Refactor Ally suspend/resume")
-Link: https://bugzilla.suse.com/show_bug.cgi?id=1246924
-Link: https://lore.kernel.org/07815053-0e31-4e8e-8049-b652c929323b@kernel.org
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
----
- drivers/platform/x86/asus-wmi.c | 9 ++++++++-
- 1 file changed, 8 insertions(+), 1 deletion(-)
-
-diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
-index f7191fdded14..e72a2b5d158e 100644
---- a/drivers/platform/x86/asus-wmi.c
-+++ b/drivers/platform/x86/asus-wmi.c
-@@ -5088,16 +5088,22 @@ static int asus_wmi_probe(struct platform_device *pdev)
- 
- 	asus_s2idle_check_register();
- 
--	return asus_wmi_add(pdev);
-+	ret = asus_wmi_add(pdev);
-+	if (ret)
-+		asus_s2idle_check_unregister();
-+
-+	return ret;
- }
- 
- static bool used;
-+static DEFINE_MUTEX(register_mutex);
- 
- int __init_or_module asus_wmi_register_driver(struct asus_wmi_driver *driver)
- {
- 	struct platform_driver *platform_driver;
- 	struct platform_device *platform_device;
- 
-+	guard(mutex)(&register_mutex);
- 	if (used)
- 		return -EBUSY;
- 
-@@ -5120,6 +5126,7 @@ EXPORT_SYMBOL_GPL(asus_wmi_register_driver);
- 
- void asus_wmi_unregister_driver(struct asus_wmi_driver *driver)
- {
-+	guard(mutex)(&register_mutex);
- 	asus_s2idle_check_unregister();
- 
- 	platform_device_unregister(driver->platform_device);
--- 
-2.50.1
-
-

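The fix quoted above is, in shape, a serialized one-shot registration with error-path cleanup. A rough pthread rendering of the same idea follows; driver_register(), do_add() and the side_channel_*() helpers are invented stand-ins for the asus-wmi and s2idle calls, and the kernel version uses guard(mutex) rather than explicit lock/unlock:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t register_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool used;

static int side_channel_register(void) { return 0; }	/* s2idle stand-in */
static void side_channel_unregister(void) { }
static int do_add(void) { return 0; }			/* asus_wmi_add() stand-in */

int driver_register(void)
{
	int ret;

	pthread_mutex_lock(&register_mutex);	/* serializes concurrent callers */
	if (used) {
		pthread_mutex_unlock(&register_mutex);
		return -EBUSY;
	}
	used = true;

	side_channel_register();
	ret = do_add();
	if (ret)
		side_channel_unregister();	/* the cleanup the fix adds */

	pthread_mutex_unlock(&register_mutex);
	return ret;
}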


* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-12  3:56 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-12  3:56 UTC (permalink / raw
  To: gentoo-commits

commit:     812ae1b872f4f52b5f34a081e8dac0293458680f
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 12 03:56:05 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Sep 12 03:56:05 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=812ae1b8

Linux patch 6.16.7

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |   4 +
 1006_linux-6.16.7.patch | 792 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 796 insertions(+)

diff --git a/0000_README b/0000_README
index a3545e96..33049ae5 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-6.16.6.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.6
 
+Patch:  1006_linux-6.16.7.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.7
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1006_linux-6.16.7.patch b/1006_linux-6.16.7.patch
new file mode 100644
index 00000000..ec3abb37
--- /dev/null
+++ b/1006_linux-6.16.7.patch
@@ -0,0 +1,792 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index ab8cd337f43aad..8aed6d94c4cd0d 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -586,6 +586,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/srbds
+ 		/sys/devices/system/cpu/vulnerabilities/tsa
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
++		/sys/devices/system/cpu/vulnerabilities/vmscape
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 09890a8f3ee906..8e6130d21de131 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -25,3 +25,4 @@ are configurable at compile, boot or run time.
+    rsb
+    old_microcode
+    indirect-target-selection
++   vmscape
+diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
+new file mode 100644
+index 00000000000000..d9b9a2b6c114c0
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
+@@ -0,0 +1,110 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++VMSCAPE
++=======
++
++VMSCAPE is a vulnerability that may allow a guest to influence the branch
++prediction in host userspace. It particularly affects hypervisors like QEMU.
++
++Even if a hypervisor holds no sensitive data such as disk encryption keys,
++guest userspace may be able to attack the guest kernel by using the hypervisor
++as a confused deputy.
++
++Affected processors
++-------------------
++
++The following CPU families are affected by VMSCAPE:
++
++**Intel processors:**
++  - Skylake generation (Parts without Enhanced-IBRS)
++  - Cascade Lake generation - (Parts affected by ITS guest/host separation)
++  - Alder Lake and newer (Parts affected by BHI)
++
++Note that BHI-affected parts that use the BHB-clearing software mitigation,
++e.g. Icelake, are not vulnerable to VMSCAPE.
++
++**AMD processors:**
++  - Zen series (families 0x17, 0x19, 0x1a)
++
++**Hygon processors:**
++  - Family 0x18
++
++Mitigation
++----------
++
++Conditional IBPB
++----------------
++
++The kernel tracks when a CPU has run a potentially malicious guest and issues an
++IBPB before the first exit to userspace after VM-exit. If userspace did not run
++between VM-exit and the next VM-entry, no IBPB is issued.
++
++Note that the existing userspace mitigations against Spectre-v2 are effective in
++protecting userspace. They are insufficient to protect userspace VMMs
++from a malicious guest. This is because Spectre-v2 mitigations are applied at
++context switch time, while the userspace VMM can run after a VM-exit without a
++context switch.
++
++Vulnerability enumeration and mitigation are not applied inside a guest. This is
++because nested hypervisors should already be deploying IBPB to isolate
++themselves from nested guests.
++
++SMT considerations
++------------------
++
++When Simultaneous Multi-Threading (SMT) is enabled, hypervisors can be
++vulnerable to cross-thread attacks. For complete protection against VMSCAPE
++attacks in SMT environments, STIBP should be enabled.
++
++The kernel will issue a warning if SMT is enabled without adequate STIBP
++protection. The warning is not issued when:
++
++- SMT is disabled
++- STIBP is enabled system-wide
++- Intel eIBRS is enabled (which implies STIBP protection)
++
++System information and options
++------------------------------
++
++The sysfs file showing VMSCAPE mitigation status is:
++
++  /sys/devices/system/cpu/vulnerabilities/vmscape
++
++The possible values in this file are:
++
++ * 'Not affected':
++
++   The processor is not vulnerable to VMSCAPE attacks.
++
++ * 'Vulnerable':
++
++   The processor is vulnerable and no mitigation has been applied.
++
++ * 'Mitigation: IBPB before exit to userspace':
++
++   Conditional IBPB mitigation is enabled. The kernel tracks when a CPU has
++   run a potentially malicious guest and issues an IBPB before the first
++   exit to userspace after VM-exit.
++
++ * 'Mitigation: IBPB on VMEXIT':
++
++   IBPB is issued on every VM-exit. This occurs when other mitigations like
++   RETBLEED or SRSO are already issuing IBPB on VM-exit.
++
++Mitigation control on the kernel command line
++----------------------------------------------
++
++The mitigation can be controlled via the ``vmscape=`` command line parameter:
++
++ * ``vmscape=off``:
++
++   Disable the VMSCAPE mitigation.
++
++ * ``vmscape=ibpb``:
++
++   Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y).
++
++ * ``vmscape=force``:
++
++   Force vulnerability detection and mitigation even on processors that are
++   not known to be affected.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index f6d317e1674d6a..089e1a395178d3 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3774,6 +3774,7 @@
+ 					       srbds=off [X86,INTEL]
+ 					       ssbd=force-off [ARM64]
+ 					       tsx_async_abort=off [X86]
++					       vmscape=off [X86]
+ 
+ 				Exceptions:
+ 					       This does not have any effect on
+@@ -7937,6 +7938,16 @@
+ 	vmpoff=		[KNL,S390] Perform z/VM CP command after power off.
+ 			Format: <command>
+ 
++	vmscape=	[X86] Controls mitigation for VMscape attacks.
++			VMscape attacks can leak information from a userspace
++			hypervisor to a guest via speculative side-channels.
++
++			off		- disable the mitigation
++			ibpb		- use Indirect Branch Prediction Barrier
++					  (IBPB) mitigation (default)
++			force		- force vulnerability detection even on
++					  unaffected processors
++
+ 	vsyscall=	[X86-64,EARLY]
+ 			Controls the behavior of vsyscalls (i.e. calls to
+ 			fixed addresses of 0xffffffffff600x00 from legacy
+diff --git a/Makefile b/Makefile
+index 0200497da26cd0..86359283ccc9a9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 8bed9030ad4735..874c9b264d6f0c 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2704,6 +2704,15 @@ config MITIGATION_TSA
+ 	  security vulnerability on AMD CPUs which can lead to forwarding of
+ 	  invalid info to subsequent instructions and thus can affect their
+ 	  timing and thereby cause a leakage.
++
++config MITIGATION_VMSCAPE
++	bool "Mitigate VMSCAPE"
++	depends on KVM
++	default y
++	help
++	  Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security
++	  vulnerability on Intel and AMD CPUs that may allow a guest to do
++	  Spectre v2 style attacks on userspace hypervisor.
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 4597ef6621220e..48ffdbab914543 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -492,6 +492,7 @@
+ #define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
+ #define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
+ #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
++#define X86_FEATURE_IBPB_EXIT_TO_USER	(21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
+ 
+ /*
+  * BUG word(s)
+@@ -548,4 +549,5 @@
+ #define X86_BUG_ITS			X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
+ #define X86_BUG_ITS_NATIVE_ONLY		X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+ #define X86_BUG_TSA			X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
++#define X86_BUG_VMSCAPE			X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index d535a97c728422..ce3eb6d5fdf9f2 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -93,6 +93,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+ 	 * 8 (ia32) bits.
+ 	 */
+ 	choose_random_kstack_offset(rdtsc());
++
++	/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
++	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
++	    this_cpu_read(x86_ibpb_exit_to_user)) {
++		indirect_branch_prediction_barrier();
++		this_cpu_write(x86_ibpb_exit_to_user, false);
++	}
+ }
+ #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
+ 
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 10f261678749a7..e29f82466f4323 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -530,6 +530,8 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
+ 		: "memory");
+ }
+ 
++DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
++
+ static inline void indirect_branch_prediction_barrier(void)
+ {
+ 	asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index d19972d5d72955..65e253ef521843 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -96,6 +96,9 @@ static void __init its_update_mitigation(void);
+ static void __init its_apply_mitigation(void);
+ static void __init tsa_select_mitigation(void);
+ static void __init tsa_apply_mitigation(void);
++static void __init vmscape_select_mitigation(void);
++static void __init vmscape_update_mitigation(void);
++static void __init vmscape_apply_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -105,6 +108,14 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
+ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
+ EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
+ 
++/*
++ * Set when the CPU has run a potentially malicious guest. An IBPB will
++ * be needed before running userspace. That IBPB will flush the branch
++ * predictor content.
++ */
++DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
++EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
++
+ u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
+ 
+ static u64 __ro_after_init x86_arch_cap_msr;
+@@ -227,6 +238,7 @@ void __init cpu_select_mitigations(void)
+ 	its_select_mitigation();
+ 	bhi_select_mitigation();
+ 	tsa_select_mitigation();
++	vmscape_select_mitigation();
+ 
+ 	/*
+ 	 * After mitigations are selected, some may need to update their
+@@ -258,6 +270,7 @@ void __init cpu_select_mitigations(void)
+ 	bhi_update_mitigation();
+ 	/* srso_update_mitigation() depends on retbleed_update_mitigation(). */
+ 	srso_update_mitigation();
++	vmscape_update_mitigation();
+ 
+ 	spectre_v1_apply_mitigation();
+ 	spectre_v2_apply_mitigation();
+@@ -275,6 +288,7 @@ void __init cpu_select_mitigations(void)
+ 	its_apply_mitigation();
+ 	bhi_apply_mitigation();
+ 	tsa_apply_mitigation();
++	vmscape_apply_mitigation();
+ }
+ 
+ /*
+@@ -2355,88 +2369,6 @@ static void update_mds_branch_idle(void)
+ 	}
+ }
+ 
+-#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+-#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+-#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
+-
+-void cpu_bugs_smt_update(void)
+-{
+-	mutex_lock(&spec_ctrl_mutex);
+-
+-	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+-	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+-		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+-
+-	switch (spectre_v2_user_stibp) {
+-	case SPECTRE_V2_USER_NONE:
+-		break;
+-	case SPECTRE_V2_USER_STRICT:
+-	case SPECTRE_V2_USER_STRICT_PREFERRED:
+-		update_stibp_strict();
+-		break;
+-	case SPECTRE_V2_USER_PRCTL:
+-	case SPECTRE_V2_USER_SECCOMP:
+-		update_indir_branch_cond();
+-		break;
+-	}
+-
+-	switch (mds_mitigation) {
+-	case MDS_MITIGATION_FULL:
+-	case MDS_MITIGATION_AUTO:
+-	case MDS_MITIGATION_VMWERV:
+-		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
+-			pr_warn_once(MDS_MSG_SMT);
+-		update_mds_branch_idle();
+-		break;
+-	case MDS_MITIGATION_OFF:
+-		break;
+-	}
+-
+-	switch (taa_mitigation) {
+-	case TAA_MITIGATION_VERW:
+-	case TAA_MITIGATION_AUTO:
+-	case TAA_MITIGATION_UCODE_NEEDED:
+-		if (sched_smt_active())
+-			pr_warn_once(TAA_MSG_SMT);
+-		break;
+-	case TAA_MITIGATION_TSX_DISABLED:
+-	case TAA_MITIGATION_OFF:
+-		break;
+-	}
+-
+-	switch (mmio_mitigation) {
+-	case MMIO_MITIGATION_VERW:
+-	case MMIO_MITIGATION_AUTO:
+-	case MMIO_MITIGATION_UCODE_NEEDED:
+-		if (sched_smt_active())
+-			pr_warn_once(MMIO_MSG_SMT);
+-		break;
+-	case MMIO_MITIGATION_OFF:
+-		break;
+-	}
+-
+-	switch (tsa_mitigation) {
+-	case TSA_MITIGATION_USER_KERNEL:
+-	case TSA_MITIGATION_VM:
+-	case TSA_MITIGATION_AUTO:
+-	case TSA_MITIGATION_FULL:
+-		/*
+-		 * TSA-SQ can potentially lead to info leakage between
+-		 * SMT threads.
+-		 */
+-		if (sched_smt_active())
+-			static_branch_enable(&cpu_buf_idle_clear);
+-		else
+-			static_branch_disable(&cpu_buf_idle_clear);
+-		break;
+-	case TSA_MITIGATION_NONE:
+-	case TSA_MITIGATION_UCODE_NEEDED:
+-		break;
+-	}
+-
+-	mutex_unlock(&spec_ctrl_mutex);
+-}
+-
+ #undef pr_fmt
+ #define pr_fmt(fmt)	"Speculative Store Bypass: " fmt
+ 
+@@ -3137,9 +3069,185 @@ static void __init srso_apply_mitigation(void)
+ 	}
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"VMSCAPE: " fmt
++
++enum vmscape_mitigations {
++	VMSCAPE_MITIGATION_NONE,
++	VMSCAPE_MITIGATION_AUTO,
++	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
++	VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
++};
++
++static const char * const vmscape_strings[] = {
++	[VMSCAPE_MITIGATION_NONE]		= "Vulnerable",
++	/* [VMSCAPE_MITIGATION_AUTO] */
++	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]	= "Mitigation: IBPB before exit to userspace",
++	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT",
++};
++
++static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
++	IS_ENABLED(CONFIG_MITIGATION_VMSCAPE) ? VMSCAPE_MITIGATION_AUTO : VMSCAPE_MITIGATION_NONE;
++
++static int __init vmscape_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off")) {
++		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
++	} else if (!strcmp(str, "ibpb")) {
++		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
++	} else if (!strcmp(str, "force")) {
++		setup_force_cpu_bug(X86_BUG_VMSCAPE);
++		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
++	} else {
++		pr_err("Ignoring unknown vmscape=%s option.\n", str);
++	}
++
++	return 0;
++}
++early_param("vmscape", vmscape_parse_cmdline);
++
++static void __init vmscape_select_mitigation(void)
++{
++	if (cpu_mitigations_off() ||
++	    !boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
++	    !boot_cpu_has(X86_FEATURE_IBPB)) {
++		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
++		return;
++	}
++
++	if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO)
++		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
++}
++
++static void __init vmscape_update_mitigation(void)
++{
++	if (!boot_cpu_has_bug(X86_BUG_VMSCAPE))
++		return;
++
++	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB ||
++	    srso_mitigation == SRSO_MITIGATION_IBPB_ON_VMEXIT)
++		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_ON_VMEXIT;
++
++	pr_info("%s\n", vmscape_strings[vmscape_mitigation]);
++}
++
++static void __init vmscape_apply_mitigation(void)
++{
++	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
++		setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) fmt
+ 
++#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
++#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
++#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
++#define VMSCAPE_MSG_SMT "VMSCAPE: SMT on, STIBP is required for full protection. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/vmscape.html for more details.\n"
++
++void cpu_bugs_smt_update(void)
++{
++	mutex_lock(&spec_ctrl_mutex);
++
++	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
++	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
++		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
++
++	switch (spectre_v2_user_stibp) {
++	case SPECTRE_V2_USER_NONE:
++		break;
++	case SPECTRE_V2_USER_STRICT:
++	case SPECTRE_V2_USER_STRICT_PREFERRED:
++		update_stibp_strict();
++		break;
++	case SPECTRE_V2_USER_PRCTL:
++	case SPECTRE_V2_USER_SECCOMP:
++		update_indir_branch_cond();
++		break;
++	}
++
++	switch (mds_mitigation) {
++	case MDS_MITIGATION_FULL:
++	case MDS_MITIGATION_AUTO:
++	case MDS_MITIGATION_VMWERV:
++		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
++			pr_warn_once(MDS_MSG_SMT);
++		update_mds_branch_idle();
++		break;
++	case MDS_MITIGATION_OFF:
++		break;
++	}
++
++	switch (taa_mitigation) {
++	case TAA_MITIGATION_VERW:
++	case TAA_MITIGATION_AUTO:
++	case TAA_MITIGATION_UCODE_NEEDED:
++		if (sched_smt_active())
++			pr_warn_once(TAA_MSG_SMT);
++		break;
++	case TAA_MITIGATION_TSX_DISABLED:
++	case TAA_MITIGATION_OFF:
++		break;
++	}
++
++	switch (mmio_mitigation) {
++	case MMIO_MITIGATION_VERW:
++	case MMIO_MITIGATION_AUTO:
++	case MMIO_MITIGATION_UCODE_NEEDED:
++		if (sched_smt_active())
++			pr_warn_once(MMIO_MSG_SMT);
++		break;
++	case MMIO_MITIGATION_OFF:
++		break;
++	}
++
++	switch (tsa_mitigation) {
++	case TSA_MITIGATION_USER_KERNEL:
++	case TSA_MITIGATION_VM:
++	case TSA_MITIGATION_AUTO:
++	case TSA_MITIGATION_FULL:
++		/*
++		 * TSA-SQ can potentially lead to info leakage between
++		 * SMT threads.
++		 */
++		if (sched_smt_active())
++			static_branch_enable(&cpu_buf_idle_clear);
++		else
++			static_branch_disable(&cpu_buf_idle_clear);
++		break;
++	case TSA_MITIGATION_NONE:
++	case TSA_MITIGATION_UCODE_NEEDED:
++		break;
++	}
++
++	switch (vmscape_mitigation) {
++	case VMSCAPE_MITIGATION_NONE:
++	case VMSCAPE_MITIGATION_AUTO:
++		break;
++	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
++	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
++		/*
++		 * Hypervisors can be attacked across-threads, warn for SMT when
++		 * STIBP is not already enabled system-wide.
++		 *
++		 * Intel eIBRS (!AUTOIBRS) implies STIBP on.
++		 */
++		if (!sched_smt_active() ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
++		    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
++		     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
++			break;
++		pr_warn_once(VMSCAPE_MSG_SMT);
++		break;
++	}
++
++	mutex_unlock(&spec_ctrl_mutex);
++}
++
+ #ifdef CONFIG_SYSFS
+ 
+ #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
+@@ -3388,6 +3496,11 @@ static ssize_t tsa_show_state(char *buf)
+ 	return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
+ }
+ 
++static ssize_t vmscape_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", vmscape_strings[vmscape_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -3454,6 +3567,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_TSA:
+ 		return tsa_show_state(buf);
+ 
++	case X86_BUG_VMSCAPE:
++		return vmscape_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -3545,6 +3661,11 @@ ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *bu
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
+ }
++
++ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_VMSCAPE);
++}
+ #endif
+ 
+ void __warn_thunk(void)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index fb50c1dd53ef7f..bce82fa055e492 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1235,55 +1235,71 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define ITS_NATIVE_ONLY	BIT(9)
+ /* CPU is affected by Transient Scheduler Attacks */
+ #define TSA		BIT(10)
++/* CPU is affected by VMSCAPE */
++#define VMSCAPE		BIT(11)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+-	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE,	     X86_STEP_MAX,	SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_HASWELL,	     X86_STEP_MAX,	SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_HASWELL_L,	     X86_STEP_MAX,	SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_HASWELL_G,	     X86_STEP_MAX,	SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_HASWELL_X,	     X86_STEP_MAX,	MMIO),
+-	VULNBL_INTEL_STEPS(INTEL_BROADWELL_D,	     X86_STEP_MAX,	MMIO),
+-	VULNBL_INTEL_STEPS(INTEL_BROADWELL_G,	     X86_STEP_MAX,	SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_BROADWELL_X,	     X86_STEP_MAX,	MMIO),
+-	VULNBL_INTEL_STEPS(INTEL_BROADWELL,	     X86_STEP_MAX,	SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,		      0x5,	MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | ITS),
+-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_SKYLAKE,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,		      0xb,	MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS),
+-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,		      0xc,	MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS),
+-	VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L,	     X86_STEP_MAX,	RETBLEED),
++	VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE_X,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_SANDYBRIDGE,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE_X,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_IVYBRIDGE,	     X86_STEP_MAX,	SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_HASWELL,	     X86_STEP_MAX,	SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_HASWELL_L,	     X86_STEP_MAX,	SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_HASWELL_G,	     X86_STEP_MAX,	SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_HASWELL_X,	     X86_STEP_MAX,	MMIO | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_BROADWELL_D,	     X86_STEP_MAX,	MMIO | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_BROADWELL_X,	     X86_STEP_MAX,	MMIO | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_BROADWELL_G,	     X86_STEP_MAX,	SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_BROADWELL,	     X86_STEP_MAX,	SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,		      0x5,	MMIO | RETBLEED | GDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_X,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_SKYLAKE_L,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_SKYLAKE,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,		      0xb,	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_KABYLAKE_L,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,		      0xc,	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_KABYLAKE,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_CANNONLAKE_L,	     X86_STEP_MAX,	RETBLEED | VMSCAPE),
+ 	VULNBL_INTEL_STEPS(INTEL_ICELAKE_L,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPS(INTEL_ICELAKE_D,	     X86_STEP_MAX,	MMIO | GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPS(INTEL_ICELAKE_X,	     X86_STEP_MAX,	MMIO | GDS | ITS | ITS_NATIVE_ONLY),
+-	VULNBL_INTEL_STEPS(INTEL_COMETLAKE,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
+-	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,		      0x0,	MMIO | RETBLEED | ITS),
+-	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
++	VULNBL_INTEL_STEPS(INTEL_COMETLAKE,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,		      0x0,	MMIO | RETBLEED | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_COMETLAKE_L,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
+ 	VULNBL_INTEL_STEPS(INTEL_TIGERLAKE_L,	     X86_STEP_MAX,	GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPS(INTEL_TIGERLAKE,	     X86_STEP_MAX,	GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPS(INTEL_LAKEFIELD,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RETBLEED),
+ 	VULNBL_INTEL_STEPS(INTEL_ROCKETLAKE,	     X86_STEP_MAX,	MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
+-	VULNBL_INTEL_TYPE(INTEL_ALDERLAKE,		     ATOM,	RFDS),
+-	VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L,	     X86_STEP_MAX,	RFDS),
+-	VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE,		     ATOM,	RFDS),
+-	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P,	     X86_STEP_MAX,	RFDS),
+-	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S,	     X86_STEP_MAX,	RFDS),
+-	VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT,     X86_STEP_MAX,	RFDS),
++	VULNBL_INTEL_TYPE(INTEL_ALDERLAKE,		     ATOM,	RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_ALDERLAKE,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_ALDERLAKE_L,	     X86_STEP_MAX,	RFDS | VMSCAPE),
++	VULNBL_INTEL_TYPE(INTEL_RAPTORLAKE,		     ATOM,	RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_P,	     X86_STEP_MAX,	RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_RAPTORLAKE_S,	     X86_STEP_MAX,	RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_METEORLAKE_L,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_H,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_ARROWLAKE,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_ARROWLAKE_U,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_LUNARLAKE_M,	     X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_SAPPHIRERAPIDS_X,   X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_GRANITERAPIDS_X,    X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_EMERALDRAPIDS_X,    X86_STEP_MAX,	VMSCAPE),
++	VULNBL_INTEL_STEPS(INTEL_ATOM_GRACEMONT,     X86_STEP_MAX,	RFDS | VMSCAPE),
+ 	VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT,	     X86_STEP_MAX,	MMIO | MMIO_SBDS | RFDS),
+ 	VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_D,     X86_STEP_MAX,	MMIO | RFDS),
+ 	VULNBL_INTEL_STEPS(INTEL_ATOM_TREMONT_L,     X86_STEP_MAX,	MMIO | MMIO_SBDS | RFDS),
+ 	VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT,      X86_STEP_MAX,	RFDS),
+ 	VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_D,    X86_STEP_MAX,	RFDS),
+ 	VULNBL_INTEL_STEPS(INTEL_ATOM_GOLDMONT_PLUS, X86_STEP_MAX,	RFDS),
++	VULNBL_INTEL_STEPS(INTEL_ATOM_CRESTMONT_X,   X86_STEP_MAX,	VMSCAPE),
+ 
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+-	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
+-	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
+-	VULNBL_AMD(0x19, SRSO | TSA),
+-	VULNBL_AMD(0x1a, SRSO),
++	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
++	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
++	VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE),
++	VULNBL_AMD(0x1a, SRSO | VMSCAPE),
+ 	{}
+ };
+ 
+@@ -1542,6 +1558,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 		}
+ 	}
+ 
++	/*
++	 * Set the bug only on bare-metal. A nested hypervisor should already be
++	 * deploying IBPB to isolate itself from nested guests.
++	 */
++	if (cpu_matches(cpu_vuln_blacklist, VMSCAPE) &&
++	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
++		setup_force_cpu_bug(X86_BUG_VMSCAPE);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7d4cb1cbd629d3..6b3a64e73f21a7 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11145,6 +11145,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 	if (vcpu->arch.guest_fpu.xfd_err)
+ 		wrmsrq(MSR_IA32_XFD_ERR, 0);
+ 
++	/*
++	 * Mark this CPU as needing a branch predictor flush before running
++	 * userspace. Must be done before enabling preemption to ensure it gets
++	 * set for the CPU that actually ran the guest, and not the CPU that it
++	 * may migrate to.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
++		this_cpu_write(x86_ibpb_exit_to_user, true);
++
+ 	/*
+ 	 * Consume any pending interrupts, including the possible source of
+ 	 * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index efc575a00edda9..008da0354fba35 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -603,6 +603,7 @@ CPU_SHOW_VULN_FALLBACK(ghostwrite);
+ CPU_SHOW_VULN_FALLBACK(old_microcode);
+ CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
+ CPU_SHOW_VULN_FALLBACK(tsa);
++CPU_SHOW_VULN_FALLBACK(vmscape);
+ 
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+@@ -622,6 +623,7 @@ static DEVICE_ATTR(ghostwrite, 0444, cpu_show_ghostwrite, NULL);
+ static DEVICE_ATTR(old_microcode, 0444, cpu_show_old_microcode, NULL);
+ static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
+ static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
++static DEVICE_ATTR(vmscape, 0444, cpu_show_vmscape, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -642,6 +644,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_old_microcode.attr,
+ 	&dev_attr_indirect_target_selection.attr,
+ 	&dev_attr_tsa.attr,
++	&dev_attr_vmscape.attr,
+ 	NULL
+ };
+ 
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 6378370a952f65..9cc5472b87ea55 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -83,6 +83,7 @@ extern ssize_t cpu_show_old_microcode(struct device *dev,
+ extern ssize_t cpu_show_indirect_target_selection(struct device *dev,
+ 						  struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,


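A note for readers tracing the VMSCAPE change above: the KVM hunk does not issue the branch-predictor barrier on every VM-exit; it only records, per CPU, that one is owed, and the exit-to-user path pays for it once. Below is a minimal user-space model of that deferral, assuming nothing beyond standard C; names such as run_guest() and issue_barrier() are illustrative stand-ins, not kernel APIs (only x86_ibpb_exit_to_user appears in the patch itself).

#include <stdbool.h>
#include <stdio.h>

static _Thread_local bool ibpb_pending;	/* models x86_ibpb_exit_to_user */

static void issue_barrier(void)		/* stands in for the real IBPB */
{
	printf("barrier issued\n");
}

static void run_guest(void)
{
	/* VM-exit path: cheap -- just record that a flush is owed. */
	ibpb_pending = true;
}

static void return_to_user(void)
{
	/* Exit-to-user path: pay for the barrier only when owed. */
	if (ibpb_pending) {
		issue_barrier();
		ibpb_pending = false;
	}
}

int main(void)
{
	run_guest();
	run_guest();		/* two exits, but only one barrier below */
	return_to_user();
	return_to_user();	/* nothing pending: no second barrier */
	return 0;
}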

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-20  5:25 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-20  5:25 UTC (permalink / raw
  To: gentoo-commits

commit:     9ffd46d3a792ef5eb448d3a3fcf684e769b965d5
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 20 05:25:20 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Sep 20 05:25:20 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ffd46d3

Linux patch 6.16.8

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                         |     4 +
 1007_linux-6.16.8.patch             | 11381 ++++++++++++++++++++++++++++++++++
 2991_libbpf_add_WERROR_option.patch |    11 -
 3 files changed, 11385 insertions(+), 11 deletions(-)

diff --git a/0000_README b/0000_README
index 33049ae5..fb6f961a 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.16.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.7
 
+Patch:  1007_linux-6.16.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.8
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1007_linux-6.16.8.patch b/1007_linux-6.16.8.patch
new file mode 100644
index 00000000..81ef3369
--- /dev/null
+++ b/1007_linux-6.16.8.patch
@@ -0,0 +1,11381 @@
+diff --git a/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml b/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
+index 89c462653e2d33..8cc848ae11cb73 100644
+--- a/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
++++ b/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
+@@ -41,7 +41,7 @@ properties:
+           - const: dma_intr2
+ 
+   clocks:
+-    minItems: 1
++    maxItems: 1
+ 
+   clock-names:
+     const: sw_baud
+diff --git a/Documentation/netlink/specs/mptcp_pm.yaml b/Documentation/netlink/specs/mptcp_pm.yaml
+index fb57860fe778c6..ecfe5ee33de2d8 100644
+--- a/Documentation/netlink/specs/mptcp_pm.yaml
++++ b/Documentation/netlink/specs/mptcp_pm.yaml
+@@ -256,7 +256,7 @@ attribute-sets:
+         type: u32
+       -
+         name: if-idx
+-        type: u32
++        type: s32
+       -
+         name: reset-reason
+         type: u32
+diff --git a/Documentation/networking/can.rst b/Documentation/networking/can.rst
+index b018ce34639265..515a3876f58cfd 100644
+--- a/Documentation/networking/can.rst
++++ b/Documentation/networking/can.rst
+@@ -742,7 +742,7 @@ The broadcast manager sends responses to user space in the same form:
+             struct timeval ival1, ival2;    /* count and subsequent interval */
+             canid_t can_id;                 /* unique can_id for task */
+             __u32 nframes;                  /* number of can_frames following */
+-            struct can_frame frames[0];
++            struct can_frame frames[];
+     };
+ 
+ The aligned payload 'frames' uses the same basic CAN frame structure defined
+diff --git a/Documentation/networking/mptcp.rst b/Documentation/networking/mptcp.rst
+index 17f2bab6116447..2e31038d646205 100644
+--- a/Documentation/networking/mptcp.rst
++++ b/Documentation/networking/mptcp.rst
+@@ -60,10 +60,10 @@ address announcements. Typically, it is the client side that initiates subflows,
+ and the server side that announces additional addresses via the ``ADD_ADDR`` and
+ ``REMOVE_ADDR`` options.
+ 
+-Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see
+-mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the
+-same rules are applied for all the connections (see: ``ip mptcp``) ; and the
+-userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd
++Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob --
++see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where
++the same rules are applied for all the connections (see: ``ip mptcp``) ; and the
++userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd
+ <https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each
+ connection. The path managers can be controlled via a Netlink API; see
+ netlink_spec/mptcp_pm.rst.
+diff --git a/Makefile b/Makefile
+index 86359283ccc9a9..7594f35cbc2a5a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
+index af1ca875c52ce2..410060ebd86dfd 100644
+--- a/arch/arm64/kernel/machine_kexec_file.c
++++ b/arch/arm64/kernel/machine_kexec_file.c
+@@ -94,7 +94,7 @@ int load_other_segments(struct kimage *image,
+ 			char *initrd, unsigned long initrd_len,
+ 			char *cmdline)
+ {
+-	struct kexec_buf kbuf;
++	struct kexec_buf kbuf = {};
+ 	void *dtb = NULL;
+ 	unsigned long initrd_load_addr = 0, dtb_len,
+ 		      orig_segments = image->nr_segments;
+diff --git a/arch/s390/kernel/kexec_elf.c b/arch/s390/kernel/kexec_elf.c
+index 4d364de4379921..143e34a4eca57c 100644
+--- a/arch/s390/kernel/kexec_elf.c
++++ b/arch/s390/kernel/kexec_elf.c
+@@ -16,7 +16,7 @@
+ static int kexec_file_add_kernel_elf(struct kimage *image,
+ 				     struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	const Elf_Ehdr *ehdr;
+ 	const Elf_Phdr *phdr;
+ 	Elf_Addr entry;
+diff --git a/arch/s390/kernel/kexec_image.c b/arch/s390/kernel/kexec_image.c
+index a32ce8bea745cf..9a439175723cad 100644
+--- a/arch/s390/kernel/kexec_image.c
++++ b/arch/s390/kernel/kexec_image.c
+@@ -16,7 +16,7 @@
+ static int kexec_file_add_kernel_image(struct kimage *image,
+ 				       struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 
+ 	buf.image = image;
+ 
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index c2bac14dd668ae..a36d7311c6683b 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -129,7 +129,7 @@ static int kexec_file_update_purgatory(struct kimage *image,
+ static int kexec_file_add_purgatory(struct kimage *image,
+ 				    struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	int ret;
+ 
+ 	buf.image = image;
+@@ -152,7 +152,7 @@ static int kexec_file_add_purgatory(struct kimage *image,
+ static int kexec_file_add_initrd(struct kimage *image,
+ 				 struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	int ret;
+ 
+ 	buf.image = image;
+@@ -184,7 +184,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ {
+ 	__u32 *lc_ipl_parmblock_ptr;
+ 	unsigned int len, ncerts;
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	unsigned long addr;
+ 	void *ptr, *end;
+ 	int ret;
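/*
 * Aside on the kexec hunks above (arm64 and s390): each switches a
 * stack-allocated "struct kexec_buf buf;" to "struct kexec_buf buf = {};".
 * The empty initializer (a GNU extension, standardized in C23) zeroes
 * every member, so any field the setup code never assigns cannot leak
 * stack garbage into later checks. A stand-alone sketch with a cut-down
 * stand-in struct, not the kernel definition:
 */
#include <assert.h>
#include <stddef.h>

struct demo_kexec_buf {
	void *image;
	unsigned long mem;
	void *cma;	/* consulted later, but rarely assigned by callers */
};

int main(void)
{
	struct demo_kexec_buf buf = {};	/* every member starts at zero */

	buf.image = &buf;	/* callers set only the fields they use */
	buf.mem = 0x1000;

	assert(buf.cma == NULL);	/* guaranteed; a plain "buf;" gives garbage */
	return 0;
}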
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index 6a262e198e35ec..952cc8d103693f 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -761,8 +761,6 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
+ 		break;
+ 
+ 	case PERF_TYPE_HARDWARE:
+-		if (is_sampling_event(event))	/* No sampling support */
+-			return -ENOENT;
+ 		ev = attr->config;
+ 		if (!attr->exclude_user && attr->exclude_kernel) {
+ 			/*
+@@ -860,6 +858,8 @@ static int cpumf_pmu_event_init(struct perf_event *event)
+ 	unsigned int type = event->attr.type;
+ 	int err = -ENOENT;
+ 
++	if (is_sampling_event(event))	/* No sampling support */
++		return err;
+ 	if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
+ 		err = __hw_perf_event_init(event, type);
+ 	else if (event->pmu->type == type)
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index 63875270941bc4..01cc6493367a46 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -286,10 +286,10 @@ static int paicrypt_event_init(struct perf_event *event)
+ 	/* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
+ 	if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
+ 		return -ENOENT;
+-	/* PAI crypto event must be in valid range */
++	/* PAI crypto event must be in valid range, try others if not */
+ 	if (a->config < PAI_CRYPTO_BASE ||
+ 	    a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
+-		return -EINVAL;
++		return -ENOENT;
+ 	/* Allow only CRYPTO_ALL for sampling */
+ 	if (a->sample_period && a->config != PAI_CRYPTO_BASE)
+ 		return -EINVAL;
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index fd14d5ebccbca0..d65a9730753c55 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -266,7 +266,7 @@ static int paiext_event_valid(struct perf_event *event)
+ 		event->hw.config_base = offsetof(struct paiext_cb, acc);
+ 		return 0;
+ 	}
+-	return -EINVAL;
++	return -ENOENT;
+ }
+ 
+ /* Might be called on different CPU than the one the event is intended for. */
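/*
 * Context for the s390 PMU hunks above: in perf's event-init protocol,
 * -ENOENT means "not my event, let the core try the next PMU", while
 * other errors abort event creation outright. Changing -EINVAL to
 * -ENOENT therefore lets out-of-range configs fall through to another
 * PMU instead of failing hard. A simplified model of that dispatch
 * (structures and names here are illustrative, not the perf core):
 */
#include <errno.h>
#include <stdio.h>

struct pmu {
	int (*event_init)(int config);
	const char *name;
};

static int pmu_a_init(int config) { return config == 1 ? 0 : -ENOENT; }
static int pmu_b_init(int config) { return config == 2 ? 0 : -ENOENT; }

static const struct pmu pmus[] = {
	{ pmu_a_init, "pmu_a" },
	{ pmu_b_init, "pmu_b" },
};

static const char *find_pmu(int config)
{
	for (unsigned int i = 0; i < sizeof(pmus) / sizeof(pmus[0]); i++) {
		int err = pmus[i].event_init(config);

		if (!err)
			return pmus[i].name;
		if (err != -ENOENT)	/* hard error: stop the search */
			return NULL;
	}
	return NULL;
}

int main(void)
{
	printf("%s\n", find_pmu(2));	/* "pmu_b": reached only via -ENOENT */
	return 0;
}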
+diff --git a/arch/x86/kernel/cpu/topology_amd.c b/arch/x86/kernel/cpu/topology_amd.c
+index 827dd0dbb6e9d2..c79ebbb639cbff 100644
+--- a/arch/x86/kernel/cpu/topology_amd.c
++++ b/arch/x86/kernel/cpu/topology_amd.c
+@@ -175,27 +175,30 @@ static void topoext_fixup(struct topo_scan *tscan)
+ 
+ static void parse_topology_amd(struct topo_scan *tscan)
+ {
+-	bool has_topoext = false;
+-
+ 	/*
+-	 * If the extended topology leaf 0x8000_001e is available
+-	 * try to get SMT, CORE, TILE, and DIE shifts from extended
++	 * Try to get SMT, CORE, TILE, and DIE shifts from extended
+ 	 * CPUID leaf 0x8000_0026 on supported processors first. If
+ 	 * extended CPUID leaf 0x8000_0026 is not supported, try to
+-	 * get SMT and CORE shift from leaf 0xb first, then try to
+-	 * get the CORE shift from leaf 0x8000_0008.
++	 * get SMT and CORE shift from leaf 0xb. If either leaf is
++	 * available, cpu_parse_topology_ext() will return true.
+ 	 */
+-	if (cpu_feature_enabled(X86_FEATURE_TOPOEXT))
+-		has_topoext = cpu_parse_topology_ext(tscan);
++	bool has_xtopology = cpu_parse_topology_ext(tscan);
+ 
+ 	if (cpu_feature_enabled(X86_FEATURE_AMD_HTR_CORES))
+ 		tscan->c->topo.cpu_type = cpuid_ebx(0x80000026);
+ 
+-	if (!has_topoext && !parse_8000_0008(tscan))
++	/*
++	 * If XTOPOLOGY leaves (0x26/0xb) are not available, try to
++	 * get the CORE shift from leaf 0x8000_0008 first.
++	 */
++	if (!has_xtopology && !parse_8000_0008(tscan))
+ 		return;
+ 
+-	/* Prefer leaf 0x8000001e if available */
+-	if (parse_8000_001e(tscan, has_topoext))
++	/*
++	 * Prefer leaf 0x8000001e if available to get the SMT shift and
++	 * the initial APIC ID if XTOPOLOGY leaves are not available.
++	 */
++	if (parse_8000_001e(tscan, has_xtopology))
+ 		return;
+ 
+ 	/* Try the NODEID MSR */
+diff --git a/block/fops.c b/block/fops.c
+index 1309861d4c2c4b..d62fbefb2e6712 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -7,6 +7,7 @@
+ #include <linux/init.h>
+ #include <linux/mm.h>
+ #include <linux/blkdev.h>
++#include <linux/blk-integrity.h>
+ #include <linux/buffer_head.h>
+ #include <linux/mpage.h>
+ #include <linux/uio.h>
+@@ -54,7 +55,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
+ 	struct bio bio;
+ 	ssize_t ret;
+ 
+-	WARN_ON_ONCE(iocb->ki_flags & IOCB_HAS_METADATA);
+ 	if (nr_pages <= DIO_INLINE_BIO_VECS)
+ 		vecs = inline_vecs;
+ 	else {
+@@ -131,7 +131,7 @@ static void blkdev_bio_end_io(struct bio *bio)
+ 	if (bio->bi_status && !dio->bio.bi_status)
+ 		dio->bio.bi_status = bio->bi_status;
+ 
+-	if (!is_sync && (dio->iocb->ki_flags & IOCB_HAS_METADATA))
++	if (bio_integrity(bio))
+ 		bio_integrity_unmap_user(bio);
+ 
+ 	if (atomic_dec_and_test(&dio->ref)) {
+@@ -233,7 +233,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
+ 			}
+ 			bio->bi_opf |= REQ_NOWAIT;
+ 		}
+-		if (!is_sync && (iocb->ki_flags & IOCB_HAS_METADATA)) {
++		if (iocb->ki_flags & IOCB_HAS_METADATA) {
+ 			ret = bio_integrity_map_iter(bio, iocb->private);
+ 			if (unlikely(ret))
+ 				goto fail;
+@@ -301,7 +301,7 @@ static void blkdev_bio_end_io_async(struct bio *bio)
+ 		ret = blk_status_to_errno(bio->bi_status);
+ 	}
+ 
+-	if (iocb->ki_flags & IOCB_HAS_METADATA)
++	if (bio_integrity(bio))
+ 		bio_integrity_unmap_user(bio);
+ 
+ 	iocb->ki_complete(iocb, ret);
+@@ -422,7 +422,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	}
+ 
+ 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
+-	if (likely(nr_pages <= BIO_MAX_VECS)) {
++	if (likely(nr_pages <= BIO_MAX_VECS &&
++		   !(iocb->ki_flags & IOCB_HAS_METADATA))) {
+ 		if (is_sync_kiocb(iocb))
+ 			return __blkdev_direct_IO_simple(iocb, iter, bdev,
+ 							nr_pages);
+@@ -672,6 +673,8 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+ 
+ 	if (bdev_can_atomic_write(bdev))
+ 		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
++	if (blk_get_integrity(bdev->bd_disk))
++		filp->f_mode |= FMODE_HAS_METADATA;
+ 
+ 	ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
+ 	if (ret)
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index f3477ab377425f..e9aaf72502e51a 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1547,13 +1547,15 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+ 	pr_debug("CPU %d exiting\n", policy->cpu);
+ }
+ 
+-static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
++static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy, bool policy_change)
+ {
+ 	struct amd_cpudata *cpudata = policy->driver_data;
+ 	union perf_cached perf;
+ 	u8 epp;
+ 
+-	if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
++	if (policy_change ||
++	    policy->min != cpudata->min_limit_freq ||
++	    policy->max != cpudata->max_limit_freq)
+ 		amd_pstate_update_min_max_limit(policy);
+ 
+ 	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+@@ -1577,7 +1579,7 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+ 
+ 	cpudata->policy = policy->policy;
+ 
+-	ret = amd_pstate_epp_update_limit(policy);
++	ret = amd_pstate_epp_update_limit(policy, true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1619,13 +1621,14 @@ static int amd_pstate_suspend(struct cpufreq_policy *policy)
+ 	 * min_perf value across kexec reboots. If this CPU is just resumed back without kexec,
+ 	 * the limits, epp and desired perf will get reset to the cached values in cpudata struct
+ 	 */
+-	ret = amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);
++	ret = amd_pstate_update_perf(policy, perf.bios_min_perf,
++				     FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),
++				     FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
++				     FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),
++				     false);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* invalidate to ensure it's rewritten during resume */
+-	cpudata->cppc_req_cached = 0;
+-
+ 	/* set this flag to avoid setting core offline*/
+ 	cpudata->suspended = true;
+ 
+@@ -1651,7 +1654,7 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
+ 		int ret;
+ 
+ 		/* enable amd pstate from suspend state*/
+-		ret = amd_pstate_epp_update_limit(policy);
++		ret = amd_pstate_epp_update_limit(policy, false);
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 06a1c7dd081ffb..9a85c58922a0c8 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1034,8 +1034,8 @@ static bool hybrid_register_perf_domain(unsigned int cpu)
+ 	if (!cpu_dev)
+ 		return false;
+ 
+-	if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
+-					cpumask_of(cpu), false))
++	if (em_dev_register_pd_no_update(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
++					 cpumask_of(cpu), false))
+ 		return false;
+ 
+ 	cpudata->pd_registered = true;
+diff --git a/drivers/dma/dw/rzn1-dmamux.c b/drivers/dma/dw/rzn1-dmamux.c
+index 4fb8508419dbd8..deadf135681b67 100644
+--- a/drivers/dma/dw/rzn1-dmamux.c
++++ b/drivers/dma/dw/rzn1-dmamux.c
+@@ -48,12 +48,16 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ 	u32 mask;
+ 	int ret;
+ 
+-	if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS)
+-		return ERR_PTR(-EINVAL);
++	if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS) {
++		ret = -EINVAL;
++		goto put_device;
++	}
+ 
+ 	map = kzalloc(sizeof(*map), GFP_KERNEL);
+-	if (!map)
+-		return ERR_PTR(-ENOMEM);
++	if (!map) {
++		ret = -ENOMEM;
++		goto put_device;
++	}
+ 
+ 	chan = dma_spec->args[0];
+ 	map->req_idx = dma_spec->args[4];
+@@ -94,12 +98,15 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ 	if (ret)
+ 		goto clear_bitmap;
+ 
++	put_device(&pdev->dev);
+ 	return map;
+ 
+ clear_bitmap:
+ 	clear_bit(map->req_idx, dmamux->used_chans);
+ free_map:
+ 	kfree(map);
++put_device:
++	put_device(&pdev->dev);
+ 
+ 	return ERR_PTR(ret);
+ }
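/*
 * The rzn1-dmamux fix above is the classic goto-unwind shape: once a
 * device reference is taken, every failure path must drop it, and the
 * success path drops it too once the map no longer needs the device.
 * A self-contained model using a bare counter instead of a real
 * struct device (all names illustrative):
 */
#include <stdio.h>
#include <stdlib.h>

static int refcount;

static void get_ref(void) { refcount++; }
static void put_ref(void) { refcount--; }

static int route_allocate(int args_ok)
{
	char *map = NULL;
	int ret = 0;

	get_ref();			/* reference taken up front */

	if (!args_ok) {
		ret = -1;		/* was: an early return that leaked */
		goto put;
	}

	map = malloc(16);
	if (!map) {
		ret = -1;
		goto put;
	}
	/* ...success-path work elided... */
	free(map);
put:
	put_ref();			/* every path, success included, drops it */
	return ret;
}

int main(void)
{
	route_allocate(0);
	route_allocate(1);
	printf("refcount = %d\n", refcount);	/* 0: balanced on all paths */
	return 0;
}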
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 80355d03004dbd..b559b0e18809e4 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -189,27 +189,30 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 	idxd->wq_enable_map = bitmap_zalloc_node(idxd->max_wqs, GFP_KERNEL, dev_to_node(dev));
+ 	if (!idxd->wq_enable_map) {
+ 		rc = -ENOMEM;
+-		goto err_bitmap;
++		goto err_free_wqs;
+ 	}
+ 
+ 	for (i = 0; i < idxd->max_wqs; i++) {
+ 		wq = kzalloc_node(sizeof(*wq), GFP_KERNEL, dev_to_node(dev));
+ 		if (!wq) {
+ 			rc = -ENOMEM;
+-			goto err;
++			goto err_unwind;
+ 		}
+ 
+ 		idxd_dev_set_type(&wq->idxd_dev, IDXD_DEV_WQ);
+ 		conf_dev = wq_confdev(wq);
+ 		wq->id = i;
+ 		wq->idxd = idxd;
+-		device_initialize(wq_confdev(wq));
++		device_initialize(conf_dev);
+ 		conf_dev->parent = idxd_confdev(idxd);
+ 		conf_dev->bus = &dsa_bus_type;
+ 		conf_dev->type = &idxd_wq_device_type;
+ 		rc = dev_set_name(conf_dev, "wq%d.%d", idxd->id, wq->id);
+-		if (rc < 0)
+-			goto err;
++		if (rc < 0) {
++			put_device(conf_dev);
++			kfree(wq);
++			goto err_unwind;
++		}
+ 
+ 		mutex_init(&wq->wq_lock);
+ 		init_waitqueue_head(&wq->err_queue);
+@@ -220,15 +223,20 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 		wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;
+ 		wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
+ 		if (!wq->wqcfg) {
++			put_device(conf_dev);
++			kfree(wq);
+ 			rc = -ENOMEM;
+-			goto err;
++			goto err_unwind;
+ 		}
+ 
+ 		if (idxd->hw.wq_cap.op_config) {
+ 			wq->opcap_bmap = bitmap_zalloc(IDXD_MAX_OPCAP_BITS, GFP_KERNEL);
+ 			if (!wq->opcap_bmap) {
++				kfree(wq->wqcfg);
++				put_device(conf_dev);
++				kfree(wq);
+ 				rc = -ENOMEM;
+-				goto err_opcap_bmap;
++				goto err_unwind;
+ 			}
+ 			bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS);
+ 		}
+@@ -239,13 +247,7 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 
+ 	return 0;
+ 
+-err_opcap_bmap:
+-	kfree(wq->wqcfg);
+-
+-err:
+-	put_device(conf_dev);
+-	kfree(wq);
+-
++err_unwind:
+ 	while (--i >= 0) {
+ 		wq = idxd->wqs[i];
+ 		if (idxd->hw.wq_cap.op_config)
+@@ -254,11 +256,10 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 		conf_dev = wq_confdev(wq);
+ 		put_device(conf_dev);
+ 		kfree(wq);
+-
+ 	}
+ 	bitmap_free(idxd->wq_enable_map);
+ 
+-err_bitmap:
++err_free_wqs:
+ 	kfree(idxd->wqs);
+ 
+ 	return rc;
+@@ -1292,10 +1293,12 @@ static void idxd_remove(struct pci_dev *pdev)
+ 	device_unregister(idxd_confdev(idxd));
+ 	idxd_shutdown(pdev);
+ 	idxd_device_remove_debugfs(idxd);
+-	idxd_cleanup(idxd);
++	perfmon_pmu_remove(idxd);
++	idxd_cleanup_interrupts(idxd);
++	if (device_pasid_enabled(idxd))
++		idxd_disable_system_pasid(idxd);
+ 	pci_iounmap(pdev, idxd->reg_base);
+ 	put_device(idxd_confdev(idxd));
+-	idxd_free(idxd);
+ 	pci_disable_device(pdev);
+ }
+ 
+diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
+index bbc3276992bb01..2cf060174795fe 100644
+--- a/drivers/dma/qcom/bam_dma.c
++++ b/drivers/dma/qcom/bam_dma.c
+@@ -1283,13 +1283,17 @@ static int bam_dma_probe(struct platform_device *pdev)
+ 	if (!bdev->bamclk) {
+ 		ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
+ 					   &bdev->num_channels);
+-		if (ret)
++		if (ret) {
+ 			dev_err(bdev->dev, "num-channels unspecified in dt\n");
++			return ret;
++		}
+ 
+ 		ret = of_property_read_u32(pdev->dev.of_node, "qcom,num-ees",
+ 					   &bdev->num_ees);
+-		if (ret)
++		if (ret) {
+ 			dev_err(bdev->dev, "num-ees unspecified in dt\n");
++			return ret;
++		}
+ 	}
+ 
+ 	ret = clk_prepare_enable(bdev->bamclk);
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index 3ed406f08c442e..552be71db6c47b 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2064,8 +2064,8 @@ static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
+ 	 * priority. So Q0 is the highest priority queue and the last queue has
+ 	 * the lowest priority.
+ 	 */
+-	queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1, sizeof(s8),
+-					  GFP_KERNEL);
++	queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1,
++					  sizeof(*queue_priority_map), GFP_KERNEL);
+ 	if (!queue_priority_map)
+ 		return -ENOMEM;
+ 
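/*
 * The edma hunk above trades sizeof(s8) for sizeof(*queue_priority_map).
 * Both are one byte today, so behavior is unchanged; the point is that
 * tying the element size to the pointer keeps the allocation correct if
 * the pointee type ever changes. Minimal illustration:
 */
#include <stdlib.h>

int main(void)
{
	short *map;	/* imagine the element type grew from s8 to s16 */

	map = calloc(8, sizeof(*map));	/* still allocates the right size */
	/* calloc(8, sizeof(char)) would now silently under-allocate */
	free(map);
	return 0;
}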
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index cae52c654a15c6..7685a8550d4b1f 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -128,7 +128,6 @@ static ssize_t altr_sdr_mc_err_inject_write(struct file *file,
+ 
+ 	ptemp = dma_alloc_coherent(mci->pdev, 16, &dma_handle, GFP_KERNEL);
+ 	if (!ptemp) {
+-		dma_free_coherent(mci->pdev, 16, ptemp, dma_handle);
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "Inject: Buffer Allocation error\n");
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index f9ceda7861f1b1..cdafce9781ed32 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -596,10 +596,6 @@ int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
+ 		udelay(1);
+ 	}
+ 
+-	dev_err(adev->dev,
+-		"psp reg (0x%x) wait timed out, mask: %x, read: %x exp: %x",
+-		reg_index, mask, val, reg_val);
+-
+ 	return -ETIME;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index a4a00855d0b238..428adc7f741de3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -51,17 +51,6 @@
+ #define C2PMSG_CMD_SPI_GET_ROM_IMAGE_ADDR_HI 0x10
+ #define C2PMSG_CMD_SPI_GET_FLASH_IMAGE 0x11
+ 
+-/* Command register bit 31 set to indicate readiness */
+-#define MBOX_TOS_READY_FLAG (GFX_FLAG_RESPONSE)
+-#define MBOX_TOS_READY_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
+-
+-/* Values to check for a successful GFX_CMD response wait. Check against
+- * both status bits and response state - helps to detect a command failure
+- * or other unexpected cases like a device drop reading all 0xFFs
+- */
+-#define MBOX_TOS_RESP_FLAG (GFX_FLAG_RESPONSE)
+-#define MBOX_TOS_RESP_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
+-
+ extern const struct attribute_group amdgpu_flash_attr_group;
+ 
+ enum psp_shared_mem_size {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 7c5584742471e9..a0b7ac7486dc55 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -389,8 +389,6 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
+ 	dma_fence_put(ring->vmid_wait);
+ 	ring->vmid_wait = NULL;
+ 	ring->me = 0;
+-
+-	ring->adev->rings[ring->idx] = NULL;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c b/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
+index 574880d6700995..2ab6fa4fcf20b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
+@@ -29,6 +29,8 @@
+ #include "amdgpu.h"
+ #include "isp_v4_1_1.h"
+ 
++MODULE_FIRMWARE("amdgpu/isp_4_1_1.bin");
++
+ static const unsigned int isp_4_1_1_int_srcid[MAX_ISP411_INT_SRC] = {
+ 	ISP_4_1__SRCID__ISP_RINGBUFFER_WPT9,
+ 	ISP_4_1__SRCID__ISP_RINGBUFFER_WPT10,
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 2c4ebd98927ff3..145186a1e48f6b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -94,7 +94,7 @@ static int psp_v10_0_ring_create(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++			   0x80000000, 0x8000FFFF, false);
+ 
+ 	return ret;
+ }
+@@ -115,7 +115,7 @@ static int psp_v10_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++			   0x80000000, 0x80000000, false);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+index 1a4a26e6ffd24c..215543575f477c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+@@ -277,13 +277,11 @@ static int psp_v11_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) */
+ 	if (amdgpu_sriov_vf(adev))
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	else
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 
+ 	return ret;
+ }
+@@ -319,15 +317,13 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for sOS ready for ring creation\n");
+ 			return ret;
+@@ -351,9 +347,8 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+@@ -386,8 +381,7 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
+-			   MBOX_TOS_READY_MASK, false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -401,8 +395,7 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
+-			   false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+index 338d015c0f2ee2..5697760a819bc7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+@@ -41,9 +41,8 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_64,
+@@ -51,9 +50,8 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -89,15 +87,13 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -121,9 +117,8 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index d54b3e0fabaf40..80153f8374704a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -163,7 +163,7 @@ static int psp_v12_0_ring_create(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++			   0x80000000, 0x8000FFFF, false);
+ 
+ 	return ret;
+ }
+@@ -184,13 +184,11 @@ static int psp_v12_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) */
+ 	if (amdgpu_sriov_vf(adev))
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	else
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 
+ 	return ret;
+ }
+@@ -221,8 +219,7 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
+-			   MBOX_TOS_READY_MASK, false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -236,8 +233,7 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
+-			   false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index 58b6b64dcd683b..ead616c117057f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -384,9 +384,8 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -394,9 +393,8 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -432,15 +430,13 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -464,9 +460,8 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+index f65af52c1c1939..eaa5512a21dacd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+@@ -204,9 +204,8 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -214,9 +213,8 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -252,15 +250,13 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -284,9 +280,8 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+index b029f301aaccaf..30d8eecc567481 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+@@ -250,9 +250,8 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_64,
+@@ -260,9 +259,8 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -298,15 +296,13 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -330,9 +326,8 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
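/*
 * The psp_v10..v14 hunks above revert the MBOX_TOS_* names to the raw
 * constants 0x80000000 / 0x8000FFFF. The primitive underneath appears
 * to be a masked poll -- succeed once (reg & mask) == expected, else
 * time out -- so the wider 0x8000FFFF mask additionally requires the
 * low status bits to read back as zero. A simplified user-space model
 * under that assumption (the "register" is a plain variable and the
 * hardware is simulated):
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t fake_reg;

static int wait_for(uint32_t expected, uint32_t mask, int tries)
{
	while (tries--) {
		if ((fake_reg & mask) == expected)
			return 0;
		fake_reg |= 0x80000000u;	/* firmware sets the ready bit */
	}
	return -1;	/* stands in for -ETIME in the real driver */
}

int main(void)
{
	fake_reg = 0x00000005;	/* nonzero status bits keep the wide mask unhappy */
	printf("%d\n", wait_for(0x80000000u, 0x8000FFFFu, 4));	/* -1 */

	fake_reg = 0;
	printf("%d\n", wait_for(0x80000000u, 0x8000FFFFu, 4));	/* 0 */
	return 0;
}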
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 9fb0d53805892d..614e0886556271 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -1875,15 +1875,19 @@ static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
+ 				struct amdgpu_job *job)
+ {
+ 	struct drm_gpu_scheduler **scheds;
+-
+-	/* The create msg must be in the first IB submitted */
+-	if (atomic_read(&job->base.entity->fence_seq))
+-		return -EINVAL;
++	struct dma_fence *fence;
+ 
+ 	/* if VCN0 is harvested, we can't support AV1 */
+ 	if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+ 		return -EINVAL;
+ 
++	/* wait for all jobs to finish before switching to instance 0 */
++	fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
++	if (fence) {
++		dma_fence_wait(fence, false);
++		dma_fence_put(fence);
++	}
++
+ 	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
+ 		[AMDGPU_RING_PRIO_DEFAULT].sched;
+ 	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index 46c329a1b2f5f0..e77f2df1beb773 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -1807,15 +1807,19 @@ static int vcn_v4_0_limit_sched(struct amdgpu_cs_parser *p,
+ 				struct amdgpu_job *job)
+ {
+ 	struct drm_gpu_scheduler **scheds;
+-
+-	/* The create msg must be in the first IB submitted */
+-	if (atomic_read(&job->base.entity->fence_seq))
+-		return -EINVAL;
++	struct dma_fence *fence;
+ 
+ 	/* if VCN0 is harvested, we can't support AV1 */
+ 	if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+ 		return -EINVAL;
+ 
++	/* wait for all jobs to finish before switching to instance 0 */
++	fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
++	if (fence) {
++		dma_fence_wait(fence, false);
++		dma_fence_put(fence);
++	}
++
+ 	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
+ 		[AMDGPU_RING_PRIO_0].sched;
+ 	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+@@ -1906,22 +1910,16 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+ 
+ #define RADEON_VCN_ENGINE_TYPE_ENCODE			(0x00000002)
+ #define RADEON_VCN_ENGINE_TYPE_DECODE			(0x00000003)
+-
+ #define RADEON_VCN_ENGINE_INFO				(0x30000001)
+-#define RADEON_VCN_ENGINE_INFO_MAX_OFFSET		16
+-
+ #define RENCODE_ENCODE_STANDARD_AV1			2
+ #define RENCODE_IB_PARAM_SESSION_INIT			0x00000003
+-#define RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET	64
+ 
+-/* return the offset in ib if id is found, -1 otherwise
+- * to speed up the searching we only search upto max_offset
+- */
+-static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int max_offset)
++/* return the offset in ib if id is found, -1 otherwise */
++static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int start)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < ib->length_dw && i < max_offset && ib->ptr[i] >= 8; i += ib->ptr[i]/4) {
++	for (i = start; i < ib->length_dw && ib->ptr[i] >= 8; i += ib->ptr[i] / 4) {
+ 		if (ib->ptr[i + 1] == id)
+ 			return i;
+ 	}
+@@ -1936,33 +1934,29 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+ 	struct amdgpu_vcn_decode_buffer *decode_buffer;
+ 	uint64_t addr;
+ 	uint32_t val;
+-	int idx;
++	int idx = 0, sidx;
+ 
+ 	/* The first instance can decode anything */
+ 	if (!ring->me)
+ 		return 0;
+ 
+-	/* RADEON_VCN_ENGINE_INFO is at the top of ib block */
+-	idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO,
+-			RADEON_VCN_ENGINE_INFO_MAX_OFFSET);
+-	if (idx < 0) /* engine info is missing */
+-		return 0;
+-
+-	val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
+-	if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
+-		decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
+-
+-		if (!(decode_buffer->valid_buf_flag  & 0x1))
+-			return 0;
+-
+-		addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
+-			decode_buffer->msg_buffer_address_lo;
+-		return vcn_v4_0_dec_msg(p, job, addr);
+-	} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
+-		idx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT,
+-			RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET);
+-		if (idx >= 0 && ib->ptr[idx + 2] == RENCODE_ENCODE_STANDARD_AV1)
+-			return vcn_v4_0_limit_sched(p, job);
++	while ((idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO, idx)) >= 0) {
++		val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
++		if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
++			decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
++
++			if (!(decode_buffer->valid_buf_flag & 0x1))
++				return 0;
++
++			addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
++				decode_buffer->msg_buffer_address_lo;
++			return vcn_v4_0_dec_msg(p, job, addr);
++		} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
++			sidx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT, idx);
++			if (sidx >= 0 && ib->ptr[sidx + 2] == RENCODE_ENCODE_STANDARD_AV1)
++				return vcn_v4_0_limit_sched(p, job);
++		}
++		idx += ib->ptr[idx] / 4;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2d94fec5b545d7..312f6075e39d11 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2910,6 +2910,17 @@ static int dm_oem_i2c_hw_init(struct amdgpu_device *adev)
+ 	return 0;
+ }
+ 
++static void dm_oem_i2c_hw_fini(struct amdgpu_device *adev)
++{
++	struct amdgpu_display_manager *dm = &adev->dm;
++
++	if (dm->oem_i2c) {
++		i2c_del_adapter(&dm->oem_i2c->base);
++		kfree(dm->oem_i2c);
++		dm->oem_i2c = NULL;
++	}
++}
++
+ /**
+  * dm_hw_init() - Initialize DC device
+  * @ip_block: Pointer to the amdgpu_ip_block for this hw instance.
+@@ -2960,7 +2971,7 @@ static int dm_hw_fini(struct amdgpu_ip_block *ip_block)
+ {
+ 	struct amdgpu_device *adev = ip_block->adev;
+ 
+-	kfree(adev->dm.oem_i2c);
++	dm_oem_i2c_hw_fini(adev);
+ 
+ 	amdgpu_dm_hpd_fini(adev);
+ 
+@@ -3073,16 +3084,55 @@ static int dm_cache_state(struct amdgpu_device *adev)
+ 	return adev->dm.cached_state ? 0 : r;
+ }
+ 
+-static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
++static void dm_destroy_cached_state(struct amdgpu_device *adev)
+ {
+-	struct amdgpu_device *adev = ip_block->adev;
++	struct amdgpu_display_manager *dm = &adev->dm;
++	struct drm_device *ddev = adev_to_drm(adev);
++	struct dm_plane_state *dm_new_plane_state;
++	struct drm_plane_state *new_plane_state;
++	struct dm_crtc_state *dm_new_crtc_state;
++	struct drm_crtc_state *new_crtc_state;
++	struct drm_plane *plane;
++	struct drm_crtc *crtc;
++	int i;
+ 
+-	if (amdgpu_in_reset(adev))
+-		return 0;
++	if (!dm->cached_state)
++		return;
++
++	/* Force mode set in atomic commit */
++	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
++		new_crtc_state->active_changed = true;
++		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++		reset_freesync_config_for_crtc(dm_new_crtc_state);
++	}
++
++	/*
++	 * atomic_check is expected to create the dc states. We need to release
++	 * them here, since they were duplicated as part of the suspend
++	 * procedure.
++	 */
++	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
++		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++		if (dm_new_crtc_state->stream) {
++			WARN_ON(kref_read(&dm_new_crtc_state->stream->refcount) > 1);
++			dc_stream_release(dm_new_crtc_state->stream);
++			dm_new_crtc_state->stream = NULL;
++		}
++		dm_new_crtc_state->base.color_mgmt_changed = true;
++	}
++
++	for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
++		dm_new_plane_state = to_dm_plane_state(new_plane_state);
++		if (dm_new_plane_state->dc_state) {
++			WARN_ON(kref_read(&dm_new_plane_state->dc_state->refcount) > 1);
++			dc_plane_state_release(dm_new_plane_state->dc_state);
++			dm_new_plane_state->dc_state = NULL;
++		}
++	}
+ 
+-	WARN_ON(adev->dm.cached_state);
++	drm_atomic_helper_resume(ddev, dm->cached_state);
+ 
+-	return dm_cache_state(adev);
++	dm->cached_state = NULL;
+ }
+ 
+ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+@@ -3306,12 +3356,6 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 	struct amdgpu_dm_connector *aconnector;
+ 	struct drm_connector *connector;
+ 	struct drm_connector_list_iter iter;
+-	struct drm_crtc *crtc;
+-	struct drm_crtc_state *new_crtc_state;
+-	struct dm_crtc_state *dm_new_crtc_state;
+-	struct drm_plane *plane;
+-	struct drm_plane_state *new_plane_state;
+-	struct dm_plane_state *dm_new_plane_state;
+ 	struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state);
+ 	enum dc_connection_type new_connection_type = dc_connection_none;
+ 	struct dc_state *dc_state;
+@@ -3470,40 +3514,7 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 	}
+ 	drm_connector_list_iter_end(&iter);
+ 
+-	/* Force mode set in atomic commit */
+-	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+-		new_crtc_state->active_changed = true;
+-		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+-		reset_freesync_config_for_crtc(dm_new_crtc_state);
+-	}
+-
+-	/*
+-	 * atomic_check is expected to create the dc states. We need to release
+-	 * them here, since they were duplicated as part of the suspend
+-	 * procedure.
+-	 */
+-	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+-		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+-		if (dm_new_crtc_state->stream) {
+-			WARN_ON(kref_read(&dm_new_crtc_state->stream->refcount) > 1);
+-			dc_stream_release(dm_new_crtc_state->stream);
+-			dm_new_crtc_state->stream = NULL;
+-		}
+-		dm_new_crtc_state->base.color_mgmt_changed = true;
+-	}
+-
+-	for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
+-		dm_new_plane_state = to_dm_plane_state(new_plane_state);
+-		if (dm_new_plane_state->dc_state) {
+-			WARN_ON(kref_read(&dm_new_plane_state->dc_state->refcount) > 1);
+-			dc_plane_state_release(dm_new_plane_state->dc_state);
+-			dm_new_plane_state->dc_state = NULL;
+-		}
+-	}
+-
+-	drm_atomic_helper_resume(ddev, dm->cached_state);
+-
+-	dm->cached_state = NULL;
++	dm_destroy_cached_state(adev);
+ 
+ 	/* Do mst topology probing after resuming cached state*/
+ 	drm_connector_list_iter_begin(ddev, &iter);
+@@ -3549,7 +3560,6 @@ static const struct amd_ip_funcs amdgpu_dm_funcs = {
+ 	.early_fini = amdgpu_dm_early_fini,
+ 	.hw_init = dm_hw_init,
+ 	.hw_fini = dm_hw_fini,
+-	.prepare_suspend = dm_prepare_suspend,
+ 	.suspend = dm_suspend,
+ 	.resume = dm_resume,
+ 	.is_idle = dm_is_idle,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 25e8befbcc479a..99fd064324baa6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -809,6 +809,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
+ 	drm_dp_aux_init(&aconnector->dm_dp_aux.aux);
+ 	drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
+ 				      &aconnector->base);
++	drm_dp_dpcd_set_probe(&aconnector->dm_dp_aux.aux, false);
+ 
+ 	if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index f41073c0147e23..7dfbfb18593c12 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1095,6 +1095,7 @@ struct dc_debug_options {
+ 	bool enable_hblank_borrow;
+ 	bool force_subvp_df_throttle;
+ 	uint32_t acpi_transition_bitmasks[MAX_PIPES];
++	bool enable_pg_cntl_debug_logs;
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index 58c84f555c0fb8..0ce9489ac6b728 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -133,30 +133,34 @@ enum dsc_clk_source {
+ };
+ 
+ 
+-static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool enable)
++static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
+ {
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
+-	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && enable)
++	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && allow_rcg)
+ 		return;
+ 
+ 	switch (inst) {
+ 	case 0:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 1:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 2:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 3:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+ 		return;
+ 	}
++
++	/* Wait for clock to ramp */
++	if (!allow_rcg)
++		udelay(10);
+ }
+ 
+ static void dccg35_set_symclk32_se_rcg(
+@@ -385,35 +389,34 @@ static void dccg35_set_dtbclk_p_rcg(struct dccg *dccg, int inst, bool enable)
+ 	}
+ }
+ 
+-static void dccg35_set_dppclk_rcg(struct dccg *dccg,
+-												int inst, bool enable)
++static void dccg35_set_dppclk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
+ {
+-
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
+-
+-	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && enable)
++	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && allow_rcg)
+ 		return;
+ 
+ 	switch (inst) {
+ 	case 0:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 1:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 2:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 3:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	default:
+ 	BREAK_TO_DEBUGGER();
+ 		break;
+ 	}
+-	//DC_LOG_DEBUG("%s: inst(%d) DPPCLK rcg_disable: %d\n", __func__, inst, enable ? 0 : 1);
+ 
++	/* Wait for clock to ramp */
++	if (!allow_rcg)
++		udelay(10);
+ }
+ 
+ static void dccg35_set_dpstreamclk_rcg(
+@@ -1177,32 +1180,34 @@ static void dccg35_update_dpp_dto(struct dccg *dccg, int dpp_inst,
+ }
+ 
+ static void dccg35_set_dppclk_root_clock_gating(struct dccg *dccg,
+-		 uint32_t dpp_inst, uint32_t enable)
++		 uint32_t dpp_inst, uint32_t disallow_rcg)
+ {
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
+-	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
++	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && !disallow_rcg)
+ 		return;
+ 
+ 
+ 	switch (dpp_inst) {
+ 	case 0:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	case 1:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	case 2:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	case 3:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	default:
+ 		break;
+ 	}
+-	//DC_LOG_DEBUG("%s: dpp_inst(%d) rcg: %d\n", __func__, dpp_inst, enable);
+ 
++	/* Wait for clock to ramp */
++	if (disallow_rcg)
++		udelay(10);
+ }
+ 
+ static void dccg35_get_pixel_rate_div(
+@@ -1782,8 +1787,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 	//Disable DTO
+ 	switch (inst) {
+ 	case 0:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK0_DTO_PARAM,
+ 				DSCCLK0_DTO_PHASE, 0,
+@@ -1791,8 +1795,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		REG_UPDATE(DSCCLK_DTO_CTRL,	DSCCLK0_EN, 1);
+ 		break;
+ 	case 1:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK1_DTO_PARAM,
+ 				DSCCLK1_DTO_PHASE, 0,
+@@ -1800,8 +1803,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK1_EN, 1);
+ 		break;
+ 	case 2:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK2_DTO_PARAM,
+ 				DSCCLK2_DTO_PHASE, 0,
+@@ -1809,8 +1811,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK2_EN, 1);
+ 		break;
+ 	case 3:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK3_DTO_PARAM,
+ 				DSCCLK3_DTO_PHASE, 0,
+@@ -1821,6 +1822,9 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		BREAK_TO_DEBUGGER();
+ 		return;
+ 	}
++
++	/* Wait for clock to ramp */
++	udelay(10);
+ }
+ 
+ static void dccg35_disable_dscclk(struct dccg *dccg,
+@@ -1864,6 +1868,9 @@ static void dccg35_disable_dscclk(struct dccg *dccg,
+ 	default:
+ 		return;
+ 	}
++
++	/* Wait for clock to ramp */
++	udelay(10);
+ }
+ 
+ static void dccg35_enable_symclk_se(struct dccg *dccg, uint32_t stream_enc_inst, uint32_t link_enc_inst)
+@@ -2349,10 +2356,7 @@ static void dccg35_disable_symclk_se_cb(
+ 
+ void dccg35_root_gate_disable_control(struct dccg *dccg, uint32_t pipe_idx, uint32_t disable_clock_gating)
+ {
+-
+-	if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) {
+-		dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
+-	}
++	dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
+ }
+ 
+ static const struct dccg_funcs dccg35_funcs_new = {
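The dccg35 hunks above share one pattern: the boolean argument is renamed from enable to allow_rcg so the register write reads naturally (the *_ROOT_GATE_DISABLE field is 0 when root clock gating is allowed, 1 when it is blocked), and a 10 us delay is added after blocking gating so the ungated clock can ramp before it is used. A minimal, self-contained C sketch of that pattern follows; the register model, bit layout, and udelay() stub are illustrative assumptions, not the kernel's REG_UPDATE machinery.

/* Toy model of the allow_rcg gating pattern; names and layout are assumed. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t gate_disable_reg;	/* stands in for DCCG_GATE_DISABLE_CNTL6 */

static void udelay(unsigned int usecs)	/* stub for the kernel's udelay() */
{
	(void)usecs;			/* a real driver busy-waits here */
}

static void set_dscclk_rcg(int inst, bool allow_rcg)
{
	/* DISABLE bit: 0 = gating allowed, 1 = gating blocked */
	if (allow_rcg)
		gate_disable_reg &= ~(1u << inst);
	else
		gate_disable_reg |= 1u << inst;

	/* wait for the clock to ramp only when gating was just blocked */
	if (!allow_rcg)
		udelay(10);
}

int main(void)
{
	set_dscclk_rcg(0, false);	/* ungate DSC0 and wait 10 us */
	set_dscclk_rcg(0, true);	/* allow DSC0 root clock gating again */
	printf("gate_disable_reg = 0x%x\n", gate_disable_reg);
	return 0;
}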
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index cdb8685ae7d719..454e362ff096aa 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -955,7 +955,7 @@ enum dc_status dcn20_enable_stream_timing(
+ 		return DC_ERROR_UNEXPECTED;
+ 	}
+ 
+-	fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
++	udelay(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+ 
+ 	params.vertical_total_min = stream->adjust.v_total_min;
+ 	params.vertical_total_max = stream->adjust.v_total_max;
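The fsleep() -> udelay() swap above turns a sleeping wait into a busy wait; the argument is one frame time in microseconds, computed as v_total lines times the per-line time h_total * 10000 / pix_clk_100hz. A small sketch checking that arithmetic with assumed example timings (the numbers below are illustrative, not taken from the patch):

/* Hypothetical check of the one-frame delay arithmetic. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* assumed 1080p-style timing: 148.5 MHz in 100 Hz units */
	uint32_t h_total = 2200, v_total = 1125;
	uint32_t pix_clk_100hz = 1485000;

	/* per-line time in us is h_total * 10000 / pix_clk_100hz; the
	 * integer division truncates, so this slightly underestimates
	 * the ~16.7 ms frame, matching the expression passed to udelay() */
	uint32_t frame_us = v_total * (h_total * 10000u / pix_clk_100hz);

	printf("one frame ~ %u us\n", frame_us);	/* ~ 15750 us */
	return 0;
}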
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index a267f574b61937..764eff6a4ec6b7 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -113,6 +113,14 @@ static void enable_memory_low_power(struct dc *dc)
+ }
+ #endif
+ 
++static void print_pg_status(struct dc *dc, const char *debug_func, const char *debug_log)
++{
++	if (dc->debug.enable_pg_cntl_debug_logs && dc->res_pool->pg_cntl) {
++		if (dc->res_pool->pg_cntl->funcs->print_pg_status)
++			dc->res_pool->pg_cntl->funcs->print_pg_status(dc->res_pool->pg_cntl, debug_func, debug_log);
++	}
++}
++
+ void dcn35_set_dmu_fgcg(struct dce_hwseq *hws, bool enable)
+ {
+ 	REG_UPDATE_3(DMU_CLK_CNTL,
+@@ -137,6 +145,8 @@ void dcn35_init_hw(struct dc *dc)
+ 	uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+ 	int i;
+ 
++	print_pg_status(dc, __func__, ": start");
++
+ 	if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+ 		dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+ 
+@@ -200,10 +210,7 @@ void dcn35_init_hw(struct dc *dc)
+ 
+ 	/* we want to turn off all dp displays before doing detection */
+ 	dc->link_srv->blank_all_dp_displays(dc);
+-/*
+-	if (hws->funcs.enable_power_gating_plane)
+-		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+-*/
++
+ 	if (res_pool->hubbub && res_pool->hubbub->funcs->dchubbub_init)
+ 		res_pool->hubbub->funcs->dchubbub_init(dc->res_pool->hubbub);
+ 	/* If taking control over from VBIOS, we may want to optimize our first
+@@ -236,6 +243,8 @@ void dcn35_init_hw(struct dc *dc)
+ 		}
+ 
+ 		hws->funcs.init_pipes(dc, dc->current_state);
++		print_pg_status(dc, __func__, ": after init_pipes");
++
+ 		if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
+ 			!dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
+ 			dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+@@ -312,6 +321,7 @@ void dcn35_init_hw(struct dc *dc)
+ 		if (dc->res_pool->pg_cntl->funcs->init_pg_status)
+ 			dc->res_pool->pg_cntl->funcs->init_pg_status(dc->res_pool->pg_cntl);
+ 	}
++	print_pg_status(dc, __func__, ": after init_pg_status");
+ }
+ 
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+@@ -500,97 +510,6 @@ void dcn35_physymclk_root_clock_control(struct dce_hwseq *hws, unsigned int phy_
+ 	}
+ }
+ 
+-void dcn35_dsc_pg_control(
+-		struct dce_hwseq *hws,
+-		unsigned int dsc_inst,
+-		bool power_on)
+-{
+-	uint32_t power_gate = power_on ? 0 : 1;
+-	uint32_t pwr_status = power_on ? 0 : 2;
+-	uint32_t org_ip_request_cntl = 0;
+-
+-	if (hws->ctx->dc->debug.disable_dsc_power_gate)
+-		return;
+-	if (hws->ctx->dc->debug.ignore_pg)
+-		return;
+-	REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+-	if (org_ip_request_cntl == 0)
+-		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+-
+-	switch (dsc_inst) {
+-	case 0: /* DSC0 */
+-		REG_UPDATE(DOMAIN16_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN16_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	case 1: /* DSC1 */
+-		REG_UPDATE(DOMAIN17_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN17_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	case 2: /* DSC2 */
+-		REG_UPDATE(DOMAIN18_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN18_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	case 3: /* DSC3 */
+-		REG_UPDATE(DOMAIN19_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN19_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	default:
+-		BREAK_TO_DEBUGGER();
+-		break;
+-	}
+-
+-	if (org_ip_request_cntl == 0)
+-		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0);
+-}
+-
+-void dcn35_enable_power_gating_plane(struct dce_hwseq *hws, bool enable)
+-{
+-	bool force_on = true; /* disable power gating */
+-	uint32_t org_ip_request_cntl = 0;
+-
+-	if (hws->ctx->dc->debug.disable_hubp_power_gate)
+-		return;
+-	if (hws->ctx->dc->debug.ignore_pg)
+-		return;
+-	REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+-	if (org_ip_request_cntl == 0)
+-		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+-	/* DCHUBP0/1/2/3/4/5 */
+-	REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	/* DPP0/1/2/3/4/5 */
+-	REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-
+-	force_on = true; /* disable power gating */
+-	if (enable && !hws->ctx->dc->debug.disable_dsc_power_gate)
+-		force_on = false;
+-
+-	/* DCS0/1/2/3/4 */
+-	REG_UPDATE(DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-
+-
+-}
+-
+ /* In headless boot cases, DIG may be turned
+  * on which causes HW/SW discrepancies.
+  * To avoid this, power down hardware on boot
+@@ -1453,6 +1372,8 @@ void dcn35_prepare_bandwidth(
+ 	}
+ 
+ 	dcn20_prepare_bandwidth(dc, context);
++
++	print_pg_status(dc, __func__, ": after rcg and power up");
+ }
+ 
+ void dcn35_optimize_bandwidth(
+@@ -1461,6 +1382,8 @@ void dcn35_optimize_bandwidth(
+ {
+ 	struct pg_block_update pg_update_state;
+ 
++	print_pg_status(dc, __func__, ": before rcg and power up");
++
+ 	dcn20_optimize_bandwidth(dc, context);
+ 
+ 	if (dc->hwss.calc_blocks_to_gate) {
+@@ -1472,6 +1395,8 @@ void dcn35_optimize_bandwidth(
+ 		if (dc->hwss.root_clock_control)
+ 			dc->hwss.root_clock_control(dc, &pg_update_state, false);
+ 	}
++
++	print_pg_status(dc, __func__, ": after rcg and power up");
+ }
+ 
+ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+index a3ccf805bd16ae..aefb7c47374158 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+@@ -115,7 +115,6 @@ static const struct hw_sequencer_funcs dcn35_funcs = {
+ 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
+ 	.apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
+-	.update_dsc_pg = dcn32_update_dsc_pg,
+ 	.calc_blocks_to_gate = dcn35_calc_blocks_to_gate,
+ 	.calc_blocks_to_ungate = dcn35_calc_blocks_to_ungate,
+ 	.hw_block_power_up = dcn35_hw_block_power_up,
+@@ -150,7 +149,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
+ 	.plane_atomic_disable = dcn35_plane_atomic_disable,
+ 	//.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
+ 	//.hubp_pg_control = dcn35_hubp_pg_control,
+-	.enable_power_gating_plane = dcn35_enable_power_gating_plane,
+ 	.dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ 	.dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
+ 	.physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
+@@ -165,7 +163,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
+ 	.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
+ 	.resync_fifo_dccg_dio = dcn314_resync_fifo_dccg_dio,
+ 	.is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
+-	.dsc_pg_control = dcn35_dsc_pg_control,
+ 	.dsc_pg_status = dcn32_dsc_pg_status,
+ 	.enable_plane = dcn35_enable_plane,
+ 	.wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+index 58f2be2a326b89..a580a55695c3b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+@@ -114,7 +114,6 @@ static const struct hw_sequencer_funcs dcn351_funcs = {
+ 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
+ 	.apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
+-	.update_dsc_pg = dcn32_update_dsc_pg,
+ 	.calc_blocks_to_gate = dcn351_calc_blocks_to_gate,
+ 	.calc_blocks_to_ungate = dcn351_calc_blocks_to_ungate,
+ 	.hw_block_power_up = dcn351_hw_block_power_up,
+@@ -145,7 +144,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
+ 	.plane_atomic_disable = dcn35_plane_atomic_disable,
+ 	//.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
+ 	//.hubp_pg_control = dcn35_hubp_pg_control,
+-	.enable_power_gating_plane = dcn35_enable_power_gating_plane,
+ 	.dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ 	.dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
+ 	.physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
+@@ -159,7 +157,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
+ 	.setup_hpo_hw_control = dcn35_setup_hpo_hw_control,
+ 	.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
+ 	.is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
+-	.dsc_pg_control = dcn35_dsc_pg_control,
+ 	.dsc_pg_status = dcn32_dsc_pg_status,
+ 	.enable_plane = dcn35_enable_plane,
+ 	.wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h b/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
+index 00ea3864dd4df4..bcd0b0dd9c429a 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
+@@ -47,6 +47,7 @@ struct pg_cntl_funcs {
+ 	void (*optc_pg_control)(struct pg_cntl *pg_cntl, unsigned int optc_inst, bool power_on);
+ 	void (*dwb_pg_control)(struct pg_cntl *pg_cntl, bool power_on);
+ 	void (*init_pg_status)(struct pg_cntl *pg_cntl);
++	void (*print_pg_status)(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log);
+ };
+ 
+ #endif //__DC_PG_CNTL_H__
+diff --git a/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c b/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
+index af21c0a27f8657..72bd43f9bbe288 100644
+--- a/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
+@@ -79,16 +79,12 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 	uint32_t power_gate = power_on ? 0 : 1;
+ 	uint32_t pwr_status = power_on ? 0 : 2;
+ 	uint32_t org_ip_request_cntl = 0;
+-	bool block_enabled;
+-
+-	/*need to enable dscclk regardless DSC_PG*/
+-	if (pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc && power_on)
+-		pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc(
+-				pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
++	bool block_enabled = false;
++	bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
++		       pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
++		       pg_cntl->ctx->dc->idle_optimizations_allowed;
+ 
+-	if (pg_cntl->ctx->dc->debug.ignore_pg ||
+-		pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
+-		pg_cntl->ctx->dc->idle_optimizations_allowed)
++	if (skip_pg && !power_on)
+ 		return;
+ 
+ 	block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, dsc_inst);
+@@ -111,7 +107,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN16_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	case 1: /* DSC1 */
+ 		REG_UPDATE(DOMAIN17_PG_CONFIG,
+@@ -119,7 +115,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN17_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	case 2: /* DSC2 */
+ 		REG_UPDATE(DOMAIN18_PG_CONFIG,
+@@ -127,7 +123,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN18_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	case 3: /* DSC3 */
+ 		REG_UPDATE(DOMAIN19_PG_CONFIG,
+@@ -135,7 +131,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN19_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+@@ -144,12 +140,6 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 	if (dsc_inst < MAX_PIPES)
+ 		pg_cntl->pg_pipe_res_enable[PG_DSC][dsc_inst] = power_on;
+-
+-	if (pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc && !power_on) {
+-		/*this is to disable dscclk*/
+-		pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc(
+-			pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
+-	}
+ }
+ 
+ static bool pg_cntl35_hubp_dpp_pg_status(struct pg_cntl *pg_cntl, unsigned int hubp_dpp_inst)
+@@ -189,11 +179,12 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
+ 	uint32_t pwr_status = power_on ? 0 : 2;
+ 	uint32_t org_ip_request_cntl;
+ 	bool block_enabled;
++	bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
++		       pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
++		       pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
++		       pg_cntl->ctx->dc->idle_optimizations_allowed;
+ 
+-	if (pg_cntl->ctx->dc->debug.ignore_pg ||
+-		pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
+-		pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
+-		pg_cntl->ctx->dc->idle_optimizations_allowed)
++	if (skip_pg && !power_on)
+ 		return;
+ 
+ 	block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, hubp_dpp_inst);
+@@ -213,22 +204,22 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
+ 	case 0:
+ 		/* DPP0 & HUBP0 */
+ 		REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	case 1:
+ 		/* DPP1 & HUBP1 */
+ 		REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	case 2:
+ 		/* DPP2 & HUBP2 */
+ 		REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	case 3:
+ 		/* DPP3 & HUBP3 */
+ 		REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+@@ -501,6 +492,36 @@ void pg_cntl35_init_pg_status(struct pg_cntl *pg_cntl)
+ 	pg_cntl->pg_res_enable[PG_DWB] = block_enabled;
+ }
+ 
++static void pg_cntl35_print_pg_status(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log)
++{
++	int i = 0;
++	bool block_enabled = false;
++
++	DC_LOG_DEBUG("%s: %s\n", debug_func, debug_log);
++
++	DC_LOG_DEBUG("PG_CNTL status:\n");
++
++	block_enabled = pg_cntl35_io_clk_status(pg_cntl);
++	DC_LOG_DEBUG("ONO0=%d (DCCG, DIO, DCIO)\n", block_enabled ? 1 : 0);
++
++	block_enabled = pg_cntl35_mem_status(pg_cntl);
++	DC_LOG_DEBUG("ONO1=%d (DCHUBBUB, DCHVM, DCHUBBUBMEM)\n", block_enabled ? 1 : 0);
++
++	block_enabled = pg_cntl35_plane_otg_status(pg_cntl);
++	DC_LOG_DEBUG("ONO2=%d (MPC, OPP, OPTC, DWB)\n", block_enabled ? 1 : 0);
++
++	block_enabled = pg_cntl35_hpo_pg_status(pg_cntl);
++	DC_LOG_DEBUG("ONO3=%d (HPO)\n", block_enabled ? 1 : 0);
++
++	for (i = 0; i < pg_cntl->ctx->dc->res_pool->pipe_count; i++) {
++		block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, i);
++		DC_LOG_DEBUG("ONO%d=%d (DCHUBP%d, DPP%d)\n", 4 + i * 2, block_enabled ? 1 : 0, i, i);
++
++		block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, i);
++		DC_LOG_DEBUG("ONO%d=%d (DSC%d)\n", 5 + i * 2, block_enabled ? 1 : 0, i);
++	}
++}
++
+ static const struct pg_cntl_funcs pg_cntl35_funcs = {
+ 	.init_pg_status = pg_cntl35_init_pg_status,
+ 	.dsc_pg_control = pg_cntl35_dsc_pg_control,
+@@ -511,7 +532,8 @@ static const struct pg_cntl_funcs pg_cntl35_funcs = {
+ 	.mpcc_pg_control = pg_cntl35_mpcc_pg_control,
+ 	.opp_pg_control = pg_cntl35_opp_pg_control,
+ 	.optc_pg_control = pg_cntl35_optc_pg_control,
+-	.dwb_pg_control = pg_cntl35_dwb_pg_control
++	.dwb_pg_control = pg_cntl35_dwb_pg_control,
++	.print_pg_status = pg_cntl35_print_pg_status
+ };
+ 
+ struct pg_cntl *pg_cntl35_create(
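Two behavioral changes in the pg_cntl35 hunks above are worth noting: the debug/idle skip conditions are folded into skip_pg and now only suppress the power-down direction (skip_pg && !power_on), so power-up always proceeds, and every REG_WAIT poll budget grows from 1000 to 10000 iterations at a 1 us interval, giving the PGFSM roughly 10 ms instead of 1 ms to report the requested status. A hedged, self-contained sketch of that poll-with-timeout shape (read_status() is a stand-in, not a real register read):

/* Toy poll-with-timeout, modeling the REG_WAIT shape; not the DC macro. */
#include <stdbool.h>
#include <stdio.h>

static unsigned int read_status(void)	/* fake register that settles slowly */
{
	static unsigned int v;
	return v++;
}

static bool wait_for_status(unsigned int want, unsigned int delay_us,
			    unsigned int max_tries)
{
	for (unsigned int i = 0; i < max_tries; i++) {
		if (read_status() == want)
			return true;
		/* a real driver would udelay(delay_us) between polls */
		(void)delay_us;
	}
	return false;	/* timed out, as REG_WAIT would warn */
}

int main(void)
{
	/* the patch raises max_tries from 1000 to 10000 at a 1 us interval */
	printf("%s\n", wait_for_status(5, 1, 10000) ? "powered" : "timeout");
	return 0;
}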
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index ea78c6c8ca7a63..2d4e9368394641 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -691,6 +691,34 @@ void drm_dp_dpcd_set_powered(struct drm_dp_aux *aux, bool powered)
+ }
+ EXPORT_SYMBOL(drm_dp_dpcd_set_powered);
+ 
++/**
++ * drm_dp_dpcd_set_probe() - Set whether probing is done before DPCD access
++ * @aux: DisplayPort AUX channel
++ * @enable: Enable probing before access if required
++ */
++void drm_dp_dpcd_set_probe(struct drm_dp_aux *aux, bool enable)
++{
++	WRITE_ONCE(aux->dpcd_probe_disabled, !enable);
++}
++EXPORT_SYMBOL(drm_dp_dpcd_set_probe);
++
++static bool dpcd_access_needs_probe(struct drm_dp_aux *aux)
++{
++	/*
++	 * HP ZR24w corrupts the first DPCD access after entering power save
++	 * mode. Eg. on a read, the entire buffer will be filled with the same
++	 * mode. E.g. on a read, the entire buffer will be filled with the same
++	 * about. Afterwards things will work correctly until the monitor
++	 * gets woken up and subsequently re-enters power save mode.
++	 *
++	 * The user pressing any button on the monitor is enough to wake it
++	 * up, so there is no particularly good place to do the workaround.
++	 * We just have to do it before any DPCD access and hope that the
++	 * monitor doesn't power down exactly after the throw away read.
++	 */
++	return !aux->is_remote && !READ_ONCE(aux->dpcd_probe_disabled);
++}
++
+ /**
+  * drm_dp_dpcd_read() - read a series of bytes from the DPCD
+  * @aux: DisplayPort AUX channel (SST or MST)
+@@ -712,19 +740,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ {
+ 	int ret;
+ 
+-	/*
+-	 * HP ZR24w corrupts the first DPCD access after entering power save
+-	 * mode. Eg. on a read, the entire buffer will be filled with the same
+-	 * byte. Do a throw away read to avoid corrupting anything we care
+-	 * about. Afterwards things will work correctly until the monitor
+-	 * gets woken up and subsequently re-enters power save mode.
+-	 *
+-	 * The user pressing any button on the monitor is enough to wake it
+-	 * up, so there is no particularly good place to do the workaround.
+-	 * We just have to do it before any DPCD access and hope that the
+-	 * monitor doesn't power down exactly after the throw away read.
+-	 */
+-	if (!aux->is_remote) {
++	if (dpcd_access_needs_probe(aux)) {
+ 		ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET);
+ 		if (ret < 0)
+ 			return ret;
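The hunk above moves the long-standing HP ZR24w throw-away read into dpcd_access_needs_probe() and makes it opt-out per AUX channel through the new drm_dp_dpcd_set_probe(); WRITE_ONCE()/READ_ONCE() are used because the flag may be flipped while another thread is mid-access. A userspace analogue using C11 atomics (the struct and helpers below are assumptions for illustration, not the DRM API):

/* Toy analogue of the dpcd_probe_disabled flag. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct aux_channel {
	bool is_remote;
	atomic_bool probe_disabled;	/* plays the role of dpcd_probe_disabled */
};

static void set_probe(struct aux_channel *aux, bool enable)
{
	/* mirrors drm_dp_dpcd_set_probe(): stores the inverted sense */
	atomic_store(&aux->probe_disabled, !enable);
}

static bool access_needs_probe(struct aux_channel *aux)
{
	return !aux->is_remote && !atomic_load(&aux->probe_disabled);
}

int main(void)
{
	struct aux_channel aux = { .is_remote = false };

	printf("probe first: %d\n", access_needs_probe(&aux));	/* 1 */
	set_probe(&aux, false);	/* quirk path: skip the throw-away read */
	printf("probe first: %d\n", access_needs_probe(&aux));	/* 0 */
	return 0;
}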
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 74e77742b2bd4f..9c8822b337e2e4 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -66,34 +66,36 @@ static int oui(u8 first, u8 second, u8 third)
+  * on as many displays as possible).
+  */
+ 
+-/* First detailed mode wrong, use largest 60Hz mode */
+-#define EDID_QUIRK_PREFER_LARGE_60		(1 << 0)
+-/* Reported 135MHz pixel clock is too high, needs adjustment */
+-#define EDID_QUIRK_135_CLOCK_TOO_HIGH		(1 << 1)
+-/* Prefer the largest mode at 75 Hz */
+-#define EDID_QUIRK_PREFER_LARGE_75		(1 << 2)
+-/* Detail timing is in cm not mm */
+-#define EDID_QUIRK_DETAILED_IN_CM		(1 << 3)
+-/* Detailed timing descriptors have bogus size values, so just take the
+- * maximum size and use that.
+- */
+-#define EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE	(1 << 4)
+-/* use +hsync +vsync for detailed mode */
+-#define EDID_QUIRK_DETAILED_SYNC_PP		(1 << 6)
+-/* Force reduced-blanking timings for detailed modes */
+-#define EDID_QUIRK_FORCE_REDUCED_BLANKING	(1 << 7)
+-/* Force 8bpc */
+-#define EDID_QUIRK_FORCE_8BPC			(1 << 8)
+-/* Force 12bpc */
+-#define EDID_QUIRK_FORCE_12BPC			(1 << 9)
+-/* Force 6bpc */
+-#define EDID_QUIRK_FORCE_6BPC			(1 << 10)
+-/* Force 10bpc */
+-#define EDID_QUIRK_FORCE_10BPC			(1 << 11)
+-/* Non desktop display (i.e. HMD) */
+-#define EDID_QUIRK_NON_DESKTOP			(1 << 12)
+-/* Cap the DSC target bitrate to 15bpp */
+-#define EDID_QUIRK_CAP_DSC_15BPP		(1 << 13)
++enum drm_edid_internal_quirk {
++	/* First detailed mode wrong, use largest 60Hz mode */
++	EDID_QUIRK_PREFER_LARGE_60 = DRM_EDID_QUIRK_NUM,
++	/* Reported 135MHz pixel clock is too high, needs adjustment */
++	EDID_QUIRK_135_CLOCK_TOO_HIGH,
++	/* Prefer the largest mode at 75 Hz */
++	EDID_QUIRK_PREFER_LARGE_75,
++	/* Detail timing is in cm not mm */
++	EDID_QUIRK_DETAILED_IN_CM,
++	/* Detailed timing descriptors have bogus size values, so just take the
++	 * maximum size and use that.
++	 */
++	EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE,
++	/* use +hsync +vsync for detailed mode */
++	EDID_QUIRK_DETAILED_SYNC_PP,
++	/* Force reduced-blanking timings for detailed modes */
++	EDID_QUIRK_FORCE_REDUCED_BLANKING,
++	/* Force 8bpc */
++	EDID_QUIRK_FORCE_8BPC,
++	/* Force 12bpc */
++	EDID_QUIRK_FORCE_12BPC,
++	/* Force 6bpc */
++	EDID_QUIRK_FORCE_6BPC,
++	/* Force 10bpc */
++	EDID_QUIRK_FORCE_10BPC,
++	/* Non desktop display (i.e. HMD) */
++	EDID_QUIRK_NON_DESKTOP,
++	/* Cap the DSC target bitrate to 15bpp */
++	EDID_QUIRK_CAP_DSC_15BPP,
++};
+ 
+ #define MICROSOFT_IEEE_OUI	0xca125c
+ 
+@@ -128,124 +130,132 @@ static const struct edid_quirk {
+ 	u32 quirks;
+ } edid_quirk_list[] = {
+ 	/* Acer AL1706 */
+-	EDID_QUIRK('A', 'C', 'R', 44358, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('A', 'C', 'R', 44358, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 	/* Acer F51 */
+-	EDID_QUIRK('A', 'P', 'I', 0x7602, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('A', 'P', 'I', 0x7602, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('A', 'E', 'O', 0, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('A', 'E', 'O', 0, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* BenQ GW2765 */
+-	EDID_QUIRK('B', 'N', 'Q', 0x78d6, EDID_QUIRK_FORCE_8BPC),
++	EDID_QUIRK('B', 'N', 'Q', 0x78d6, BIT(EDID_QUIRK_FORCE_8BPC)),
+ 
+ 	/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('B', 'O', 'E', 0x78b, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('B', 'O', 'E', 0x78b, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('C', 'P', 'T', 0x17df, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('C', 'P', 'T', 0x17df, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* SDC panel of Lenovo B50-80 reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('S', 'D', 'C', 0x3652, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('S', 'D', 'C', 0x3652, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* BOE model 0x0771 reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('B', 'O', 'E', 0x0771, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('B', 'O', 'E', 0x0771, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* Belinea 10 15 55 */
+-	EDID_QUIRK('M', 'A', 'X', 1516, EDID_QUIRK_PREFER_LARGE_60),
+-	EDID_QUIRK('M', 'A', 'X', 0x77e, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('M', 'A', 'X', 1516, BIT(EDID_QUIRK_PREFER_LARGE_60)),
++	EDID_QUIRK('M', 'A', 'X', 0x77e, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* Envision Peripherals, Inc. EN-7100e */
+-	EDID_QUIRK('E', 'P', 'I', 59264, EDID_QUIRK_135_CLOCK_TOO_HIGH),
++	EDID_QUIRK('E', 'P', 'I', 59264, BIT(EDID_QUIRK_135_CLOCK_TOO_HIGH)),
+ 	/* Envision EN2028 */
+-	EDID_QUIRK('E', 'P', 'I', 8232, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('E', 'P', 'I', 8232, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* Funai Electronics PM36B */
+-	EDID_QUIRK('F', 'C', 'M', 13600, EDID_QUIRK_PREFER_LARGE_75 |
+-				       EDID_QUIRK_DETAILED_IN_CM),
++	EDID_QUIRK('F', 'C', 'M', 13600, BIT(EDID_QUIRK_PREFER_LARGE_75) |
++					 BIT(EDID_QUIRK_DETAILED_IN_CM)),
+ 
+ 	/* LG 27GP950 */
+-	EDID_QUIRK('G', 'S', 'M', 0x5bbf, EDID_QUIRK_CAP_DSC_15BPP),
++	EDID_QUIRK('G', 'S', 'M', 0x5bbf, BIT(EDID_QUIRK_CAP_DSC_15BPP)),
+ 
+ 	/* LG 27GN950 */
+-	EDID_QUIRK('G', 'S', 'M', 0x5b9a, EDID_QUIRK_CAP_DSC_15BPP),
++	EDID_QUIRK('G', 'S', 'M', 0x5b9a, BIT(EDID_QUIRK_CAP_DSC_15BPP)),
+ 
+ 	/* LGD panel of HP zBook 17 G2, eDP 10 bpc, but reports unknown bpc */
+-	EDID_QUIRK('L', 'G', 'D', 764, EDID_QUIRK_FORCE_10BPC),
++	EDID_QUIRK('L', 'G', 'D', 764, BIT(EDID_QUIRK_FORCE_10BPC)),
+ 
+ 	/* LG Philips LCD LP154W01-A5 */
+-	EDID_QUIRK('L', 'P', 'L', 0, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE),
+-	EDID_QUIRK('L', 'P', 'L', 0x2a00, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE),
++	EDID_QUIRK('L', 'P', 'L', 0, BIT(EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)),
++	EDID_QUIRK('L', 'P', 'L', 0x2a00, BIT(EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)),
+ 
+ 	/* Samsung SyncMaster 205BW.  Note: irony */
+-	EDID_QUIRK('S', 'A', 'M', 541, EDID_QUIRK_DETAILED_SYNC_PP),
++	EDID_QUIRK('S', 'A', 'M', 541, BIT(EDID_QUIRK_DETAILED_SYNC_PP)),
+ 	/* Samsung SyncMaster 22[5-6]BW */
+-	EDID_QUIRK('S', 'A', 'M', 596, EDID_QUIRK_PREFER_LARGE_60),
+-	EDID_QUIRK('S', 'A', 'M', 638, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('S', 'A', 'M', 596, BIT(EDID_QUIRK_PREFER_LARGE_60)),
++	EDID_QUIRK('S', 'A', 'M', 638, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* Sony PVM-2541A does up to 12 bpc, but only reports max 8 bpc */
+-	EDID_QUIRK('S', 'N', 'Y', 0x2541, EDID_QUIRK_FORCE_12BPC),
++	EDID_QUIRK('S', 'N', 'Y', 0x2541, BIT(EDID_QUIRK_FORCE_12BPC)),
+ 
+ 	/* ViewSonic VA2026w */
+-	EDID_QUIRK('V', 'S', 'C', 5020, EDID_QUIRK_FORCE_REDUCED_BLANKING),
++	EDID_QUIRK('V', 'S', 'C', 5020, BIT(EDID_QUIRK_FORCE_REDUCED_BLANKING)),
+ 
+ 	/* Medion MD 30217 PG */
+-	EDID_QUIRK('M', 'E', 'D', 0x7b8, EDID_QUIRK_PREFER_LARGE_75),
++	EDID_QUIRK('M', 'E', 'D', 0x7b8, BIT(EDID_QUIRK_PREFER_LARGE_75)),
+ 
+ 	/* Lenovo G50 */
+-	EDID_QUIRK('S', 'D', 'C', 18514, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('S', 'D', 'C', 18514, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+-	EDID_QUIRK('S', 'E', 'C', 0xd033, EDID_QUIRK_FORCE_8BPC),
++	EDID_QUIRK('S', 'E', 'C', 0xd033, BIT(EDID_QUIRK_FORCE_8BPC)),
+ 
+ 	/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+-	EDID_QUIRK('E', 'T', 'R', 13896, EDID_QUIRK_FORCE_8BPC),
++	EDID_QUIRK('E', 'T', 'R', 13896, BIT(EDID_QUIRK_FORCE_8BPC)),
+ 
+ 	/* Valve Index Headset */
+-	EDID_QUIRK('V', 'L', 'V', 0x91a8, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b0, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b1, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b2, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b3, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b4, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b5, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b6, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b7, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b8, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b9, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91ba, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bb, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bc, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bd, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91be, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bf, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('V', 'L', 'V', 0x91a8, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b0, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b1, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b2, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b3, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b4, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b5, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b6, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b7, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b8, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b9, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91ba, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bb, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bc, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bd, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91be, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bf, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* HTC Vive and Vive Pro VR Headsets */
+-	EDID_QUIRK('H', 'V', 'R', 0xaa01, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('H', 'V', 'R', 0xaa02, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('H', 'V', 'R', 0xaa01, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('H', 'V', 'R', 0xaa02, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Oculus Rift DK1, DK2, CV1 and Rift S VR Headsets */
+-	EDID_QUIRK('O', 'V', 'R', 0x0001, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('O', 'V', 'R', 0x0003, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('O', 'V', 'R', 0x0004, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('O', 'V', 'R', 0x0012, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('O', 'V', 'R', 0x0001, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('O', 'V', 'R', 0x0003, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('O', 'V', 'R', 0x0004, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('O', 'V', 'R', 0x0012, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Windows Mixed Reality Headsets */
+-	EDID_QUIRK('A', 'C', 'R', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('L', 'E', 'N', 0x0408, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('F', 'U', 'J', 0x1970, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('D', 'E', 'L', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('S', 'E', 'C', 0x144a, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('A', 'U', 'S', 0xc102, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('A', 'C', 'R', 0x7fce, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('L', 'E', 'N', 0x0408, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('F', 'U', 'J', 0x1970, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('D', 'E', 'L', 0x7fce, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('S', 'E', 'C', 0x144a, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('A', 'U', 'S', 0xc102, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Sony PlayStation VR Headset */
+-	EDID_QUIRK('S', 'N', 'Y', 0x0704, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('S', 'N', 'Y', 0x0704, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Sensics VR Headsets */
+-	EDID_QUIRK('S', 'E', 'N', 0x1019, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('S', 'E', 'N', 0x1019, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* OSVR HDK and HDK2 VR Headsets */
+-	EDID_QUIRK('S', 'V', 'R', 0x1019, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('A', 'U', 'O', 0x1111, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('S', 'V', 'R', 0x1019, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('A', 'U', 'O', 0x1111, BIT(EDID_QUIRK_NON_DESKTOP)),
++
++	/*
++	 * @drm_edid_internal_quirk entries end here, followed by the
++	 * @drm_edid_quirk entries.
++	 */
++
++	/* HP ZR24w DP AUX DPCD access requires probing to prevent corruption. */
++	EDID_QUIRK('H', 'W', 'P', 0x2869, BIT(DRM_EDID_QUIRK_DP_DPCD_PROBE)),
+ };
+ 
+ /*
+@@ -2951,6 +2961,18 @@ static u32 edid_get_quirks(const struct drm_edid *drm_edid)
+ 	return 0;
+ }
+ 
++static bool drm_edid_has_internal_quirk(struct drm_connector *connector,
++					enum drm_edid_internal_quirk quirk)
++{
++	return connector->display_info.quirks & BIT(quirk);
++}
++
++bool drm_edid_has_quirk(struct drm_connector *connector, enum drm_edid_quirk quirk)
++{
++	return connector->display_info.quirks & BIT(quirk);
++}
++EXPORT_SYMBOL(drm_edid_has_quirk);
++
+ #define MODE_SIZE(m) ((m)->hdisplay * (m)->vdisplay)
+ #define MODE_REFRESH_DIFF(c,t) (abs((c) - (t)))
+ 
+@@ -2960,7 +2982,6 @@ static u32 edid_get_quirks(const struct drm_edid *drm_edid)
+  */
+ static void edid_fixup_preferred(struct drm_connector *connector)
+ {
+-	const struct drm_display_info *info = &connector->display_info;
+ 	struct drm_display_mode *t, *cur_mode, *preferred_mode;
+ 	int target_refresh = 0;
+ 	int cur_vrefresh, preferred_vrefresh;
+@@ -2968,9 +2989,9 @@ static void edid_fixup_preferred(struct drm_connector *connector)
+ 	if (list_empty(&connector->probed_modes))
+ 		return;
+ 
+-	if (info->quirks & EDID_QUIRK_PREFER_LARGE_60)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_60))
+ 		target_refresh = 60;
+-	if (info->quirks & EDID_QUIRK_PREFER_LARGE_75)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_75))
+ 		target_refresh = 75;
+ 
+ 	preferred_mode = list_first_entry(&connector->probed_modes,
+@@ -3474,7 +3495,6 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 						  const struct drm_edid *drm_edid,
+ 						  const struct detailed_timing *timing)
+ {
+-	const struct drm_display_info *info = &connector->display_info;
+ 	struct drm_device *dev = connector->dev;
+ 	struct drm_display_mode *mode;
+ 	const struct detailed_pixel_timing *pt = &timing->data.pixel_data;
+@@ -3508,7 +3528,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 		return NULL;
+ 	}
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_REDUCED_BLANKING) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_REDUCED_BLANKING)) {
+ 		mode = drm_cvt_mode(dev, hactive, vactive, 60, true, false, false);
+ 		if (!mode)
+ 			return NULL;
+@@ -3520,7 +3540,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 	if (!mode)
+ 		return NULL;
+ 
+-	if (info->quirks & EDID_QUIRK_135_CLOCK_TOO_HIGH)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_135_CLOCK_TOO_HIGH))
+ 		mode->clock = 1088 * 10;
+ 	else
+ 		mode->clock = le16_to_cpu(timing->pixel_clock) * 10;
+@@ -3551,7 +3571,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 
+ 	drm_mode_do_interlace_quirk(mode, pt);
+ 
+-	if (info->quirks & EDID_QUIRK_DETAILED_SYNC_PP) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_SYNC_PP)) {
+ 		mode->flags |= DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC;
+ 	} else {
+ 		mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ?
+@@ -3564,12 +3584,12 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 	mode->width_mm = pt->width_mm_lo | (pt->width_height_mm_hi & 0xf0) << 4;
+ 	mode->height_mm = pt->height_mm_lo | (pt->width_height_mm_hi & 0xf) << 8;
+ 
+-	if (info->quirks & EDID_QUIRK_DETAILED_IN_CM) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_IN_CM)) {
+ 		mode->width_mm *= 10;
+ 		mode->height_mm *= 10;
+ 	}
+ 
+-	if (info->quirks & EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)) {
+ 		mode->width_mm = drm_edid->edid->width_cm * 10;
+ 		mode->height_mm = drm_edid->edid->height_cm * 10;
+ 	}
+@@ -6734,26 +6754,26 @@ static void update_display_info(struct drm_connector *connector,
+ 	drm_update_mso(connector, drm_edid);
+ 
+ out:
+-	if (info->quirks & EDID_QUIRK_NON_DESKTOP) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_NON_DESKTOP)) {
+ 		drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] Non-desktop display%s\n",
+ 			    connector->base.id, connector->name,
+ 			    info->non_desktop ? " (redundant quirk)" : "");
+ 		info->non_desktop = true;
+ 	}
+ 
+-	if (info->quirks & EDID_QUIRK_CAP_DSC_15BPP)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_CAP_DSC_15BPP))
+ 		info->max_dsc_bpp = 15;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_6BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_6BPC))
+ 		info->bpc = 6;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_8BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_8BPC))
+ 		info->bpc = 8;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_10BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_10BPC))
+ 		info->bpc = 10;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_12BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_12BPC))
+ 		info->bpc = 12;
+ 
+ 	/* Depends on info->cea_rev set by drm_parse_cea_ext() above */
+@@ -6918,7 +6938,6 @@ static int add_displayid_detailed_modes(struct drm_connector *connector,
+ static int _drm_edid_connector_add_modes(struct drm_connector *connector,
+ 					 const struct drm_edid *drm_edid)
+ {
+-	const struct drm_display_info *info = &connector->display_info;
+ 	int num_modes = 0;
+ 
+ 	if (!drm_edid)
+@@ -6948,7 +6967,8 @@ static int _drm_edid_connector_add_modes(struct drm_connector *connector,
+ 	if (drm_edid->edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ)
+ 		num_modes += add_inferred_modes(connector, drm_edid);
+ 
+-	if (info->quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75))
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_60) ||
++	    drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_75))
+ 		edid_fixup_preferred(connector);
+ 
+ 	return num_modes;
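The drm_edid.c changes above convert the quirk #defines into enum drm_edid_internal_quirk, whose first value continues numbering at DRM_EDID_QUIRK_NUM so public and internal quirks share one bit space, with table entries switching from raw masks to BIT(). A compact sketch of that split-namespace layout (the enum names below are invented for illustration):

/* Toy model of a public/internal quirk split sharing one bit space. */
#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1u << (n))

/* public quirks, as a driver-facing header might define them (assumed) */
enum public_quirk {
	PUBLIC_QUIRK_DP_DPCD_PROBE,
	PUBLIC_QUIRK_NUM,
};

/* internal quirks continue the same bit space after the public ones */
enum internal_quirk {
	INTERNAL_QUIRK_FORCE_6BPC = PUBLIC_QUIRK_NUM,
	INTERNAL_QUIRK_NON_DESKTOP,
};

int main(void)
{
	/* a quirk-table entry now ORs BIT(enum) values instead of raw masks */
	uint32_t quirks = BIT(INTERNAL_QUIRK_FORCE_6BPC) |
			  BIT(PUBLIC_QUIRK_DP_DPCD_PROBE);

	printf("has 6bpc quirk:  %d\n", !!(quirks & BIT(INTERNAL_QUIRK_FORCE_6BPC)));
	printf("has probe quirk: %d\n", !!(quirks & BIT(PUBLIC_QUIRK_DP_DPCD_PROBE)));
	return 0;
}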
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 16356523816fb8..068ed911e12458 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -1169,7 +1169,7 @@ static void icl_mbus_init(struct intel_display *display)
+ 	if (DISPLAY_VER(display) == 12)
+ 		abox_regs |= BIT(0);
+ 
+-	for_each_set_bit(i, &abox_regs, sizeof(abox_regs))
++	for_each_set_bit(i, &abox_regs, BITS_PER_TYPE(abox_regs))
+ 		intel_de_rmw(display, MBUS_ABOX_CTL(i), mask, val);
+ }
+ 
+@@ -1630,11 +1630,11 @@ static void tgl_bw_buddy_init(struct intel_display *display)
+ 	if (table[config].page_mask == 0) {
+ 		drm_dbg_kms(display->drm,
+ 			    "Unknown memory configuration; disabling address buddy logic.\n");
+-		for_each_set_bit(i, &abox_mask, sizeof(abox_mask))
++		for_each_set_bit(i, &abox_mask, BITS_PER_TYPE(abox_mask))
+ 			intel_de_write(display, BW_BUDDY_CTL(i),
+ 				       BW_BUDDY_DISABLE);
+ 	} else {
+-		for_each_set_bit(i, &abox_mask, sizeof(abox_mask)) {
++		for_each_set_bit(i, &abox_mask, BITS_PER_TYPE(abox_mask)) {
+ 			intel_de_write(display, BW_BUDDY_PAGE_MASK(i),
+ 				       table[config].page_mask);
+ 
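The i915 fix above corrects the for_each_set_bit() bound: sizeof() yields the size in bytes, so the old code scanned only the lowest 4 or 8 bits of the mask, while BITS_PER_TYPE() covers the full width. A trivial demonstration of the difference:

/* Why sizeof() was the wrong bit-scan bound. */
#include <stdio.h>

#define BITS_PER_TYPE(t) (sizeof(t) * 8)

int main(void)
{
	unsigned long mask = 1ul << 20;	/* only bit 20 set */

	/* a bound of sizeof(mask) (4 or 8) would stop the scan at bit 3
	 * or 7 and miss bit 20 entirely; BITS_PER_TYPE(mask) covers it */
	for (unsigned int i = 0; i < BITS_PER_TYPE(unsigned long); i++)
		if (mask & (1ul << i))
			printf("set bit: %u\n", i);
	return 0;
}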
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 34131ae2c207df..3b02ed0a16dab1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -388,11 +388,11 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ 
+ 		of_id = of_match_node(mtk_drm_of_ids, node);
+ 		if (!of_id)
+-			goto next_put_node;
++			continue;
+ 
+ 		pdev = of_find_device_by_node(node);
+ 		if (!pdev)
+-			goto next_put_node;
++			continue;
+ 
+ 		drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match);
+ 		if (!drm_dev)
+@@ -418,11 +418,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ next_put_device_pdev_dev:
+ 		put_device(&pdev->dev);
+ 
+-next_put_node:
+-		of_node_put(node);
+-
+-		if (cnt == MAX_CRTC)
++		if (cnt == MAX_CRTC) {
++			of_node_put(node);
+ 			break;
++		}
+ 	}
+ 
+ 	if (drm_priv->data->mmsys_dev_num == cnt) {
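The mediatek fix above removes the next_put_node label: a for_each_child_of_node()-style iterator already drops the current node's reference when it advances, so calling of_node_put() again on the continue paths dropped the refcount twice, while an explicit put is still required on the early break. A toy refcount trace of the corrected loop shape (get_next() models the iterator's own bookkeeping, not the OF API):

/* Refcount trace: 'continue' takes no manual put, early 'break' does. */
#include <stdio.h>

struct node { const char *name; int refs; };

static struct node pool[3] = { {"n0", 0}, {"n1", 0}, {"n2", 0} };

/* like for_each_child_of_node(): puts the previous node, gets the next */
static struct node *get_next(struct node *prev)
{
	if (prev) {
		prev->refs--;
		if (prev + 1 >= pool + 3)
			return NULL;
		prev++;
	} else {
		prev = pool;
	}
	prev->refs++;
	return prev;
}

int main(void)
{
	for (struct node *n = get_next(NULL); n; n = get_next(n)) {
		if (n == &pool[1])
			continue;	/* fixed path: no manual put here */
		if (n == &pool[2]) {
			n->refs--;	/* early break still needs a put */
			break;
		}
	}
	for (int i = 0; i < 3; i++)
		printf("%s refs: %d\n", pool[i].name, pool[i].refs);
	return 0;	/* all refs end at 0: balanced */
}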
+diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
+index 6200cad22563a3..0f4ab9e5ef95cb 100644
+--- a/drivers/gpu/drm/panthor/panthor_drv.c
++++ b/drivers/gpu/drm/panthor/panthor_drv.c
+@@ -1093,7 +1093,7 @@ static int panthor_ioctl_group_create(struct drm_device *ddev, void *data,
+ 	struct drm_panthor_queue_create *queue_args;
+ 	int ret;
+ 
+-	if (!args->queues.count)
++	if (!args->queues.count || args->queues.count > MAX_CS_PER_CSG)
+ 		return -EINVAL;
+ 
+ 	ret = PANTHOR_UOBJ_GET_ARRAY(queue_args, &args->queues);
+diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
+index 378dcd0fb41493..a34d1e2597b79f 100644
+--- a/drivers/gpu/drm/xe/tests/xe_bo.c
++++ b/drivers/gpu/drm/xe/tests/xe_bo.c
+@@ -236,7 +236,7 @@ static int evict_test_run_tile(struct xe_device *xe, struct xe_tile *tile, struc
+ 		}
+ 
+ 		xe_bo_lock(external, false);
+-		err = xe_bo_pin_external(external);
++		err = xe_bo_pin_external(external, false);
+ 		xe_bo_unlock(external);
+ 		if (err) {
+ 			KUNIT_FAIL(test, "external bo pin err=%pe\n",
+diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+index c53f67ce4b0aa2..121f17c112ec6a 100644
+--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+@@ -89,15 +89,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported,
+ 		return;
+ 	}
+ 
+-	/*
+-	 * If on different devices, the exporter is kept in system  if
+-	 * possible, saving a migration step as the transfer is just
+-	 * likely as fast from system memory.
+-	 */
+-	if (params->mem_mask & XE_BO_FLAG_SYSTEM)
+-		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, XE_PL_TT));
+-	else
+-		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
++	KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
+ 
+ 	if (params->force_different_devices)
+ 		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(imported, XE_PL_TT));
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 50326e756f8975..5390f535394695 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -184,6 +184,8 @@ static void try_add_system(struct xe_device *xe, struct xe_bo *bo,
+ 
+ 		bo->placements[*c] = (struct ttm_place) {
+ 			.mem_type = XE_PL_TT,
++			.flags = (bo_flags & XE_BO_FLAG_VRAM_MASK) ?
++			TTM_PL_FLAG_FALLBACK : 0,
+ 		};
+ 		*c += 1;
+ 	}
+@@ -2266,6 +2268,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
+ /**
+  * xe_bo_pin_external - pin an external BO
+  * @bo: buffer object to be pinned
++ * @in_place: Pin in current placement, don't attempt to migrate.
+  *
+  * Pin an external (not tied to a VM, can be exported via dma-buf / prime FD)
+  * BO. Unique call compared to xe_bo_pin as this function has it own set of
+@@ -2273,7 +2276,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
+  *
+  * Returns 0 for success, negative error code otherwise.
+  */
+-int xe_bo_pin_external(struct xe_bo *bo)
++int xe_bo_pin_external(struct xe_bo *bo, bool in_place)
+ {
+ 	struct xe_device *xe = xe_bo_device(bo);
+ 	int err;
+@@ -2282,9 +2285,11 @@ int xe_bo_pin_external(struct xe_bo *bo)
+ 	xe_assert(xe, xe_bo_is_user(bo));
+ 
+ 	if (!xe_bo_is_pinned(bo)) {
+-		err = xe_bo_validate(bo, NULL, false);
+-		if (err)
+-			return err;
++		if (!in_place) {
++			err = xe_bo_validate(bo, NULL, false);
++			if (err)
++				return err;
++		}
+ 
+ 		spin_lock(&xe->pinned.lock);
+ 		list_add_tail(&bo->pinned_link, &xe->pinned.late.external);
+@@ -2437,6 +2442,9 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 	};
+ 	int ret;
+ 
++	if (xe_bo_is_pinned(bo))
++		return 0;
++
+ 	if (vm) {
+ 		lockdep_assert_held(&vm->lock);
+ 		xe_vm_assert_held(vm);
+diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
+index 02ada1fb8a2359..bf0432c360bbba 100644
+--- a/drivers/gpu/drm/xe/xe_bo.h
++++ b/drivers/gpu/drm/xe/xe_bo.h
+@@ -201,7 +201,7 @@ static inline void xe_bo_unlock_vm_held(struct xe_bo *bo)
+ 	}
+ }
+ 
+-int xe_bo_pin_external(struct xe_bo *bo);
++int xe_bo_pin_external(struct xe_bo *bo, bool in_place);
+ int xe_bo_pin(struct xe_bo *bo);
+ void xe_bo_unpin_external(struct xe_bo *bo);
+ void xe_bo_unpin(struct xe_bo *bo);
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index 6383a1c0d47847..1db2aba4738c23 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -529,6 +529,12 @@ struct xe_device {
+ 
+ 	/** @pm_notifier: Our PM notifier to perform actions in response to various PM events. */
+ 	struct notifier_block pm_notifier;
++	/** @pm_block: Completion to block validating tasks on suspend / hibernate prepare */
++	struct completion pm_block;
++	/** @rebind_resume_list: List of wq items to kick on resume. */
++	struct list_head rebind_resume_list;
++	/** @rebind_resume_lock: Lock to protect the rebind_resume_list */
++	struct mutex rebind_resume_lock;
+ 
+ 	/** @pmt: Support the PMT driver callback interface */
+ 	struct {
+diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
+index 346f857f38374f..af64baf872ef7b 100644
+--- a/drivers/gpu/drm/xe/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/xe_dma_buf.c
+@@ -72,7 +72,7 @@ static int xe_dma_buf_pin(struct dma_buf_attachment *attach)
+ 		return ret;
+ 	}
+ 
+-	ret = xe_bo_pin_external(bo);
++	ret = xe_bo_pin_external(bo, true);
+ 	xe_assert(xe, !ret);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index 44364c042ad72d..374c831e691b2b 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -237,6 +237,15 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 		goto err_unlock_list;
+ 	}
+ 
++	/*
++	 * It's OK to block interruptible here with the vm lock held, since
++	 * on task freezing during suspend / hibernate, the call will
++	 * return -ERESTARTSYS and the IOCTL will be rerun.
++	 */
++	err = wait_for_completion_interruptible(&xe->pm_block);
++	if (err)
++		goto err_unlock_list;
++
+ 	vm_exec.vm = &vm->gpuvm;
+ 	vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
+ 	if (xe_vm_in_lr_mode(vm)) {
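The xe_exec hunk above gates new submissions on the device-wide pm_block completion: PM_SUSPEND_PREPARE re-initializes it so waiters block, PM_POST_SUSPEND completes it again, and -ERESTARTSYS from the interruptible wait causes the ioctl to be rerun after task thawing. A single-threaded model of that completion lifecycle (the struct below imitates, and is not, the kernel's struct completion):

/* Toy completion used as a suspend gate. */
#include <stdbool.h>
#include <stdio.h>

struct completion { bool done; };

static void reinit_completion(struct completion *c) { c->done = false; }
static void complete_all(struct completion *c)      { c->done = true; }
static bool try_wait_for_completion(struct completion *c) { return c->done; }

int main(void)
{
	struct completion pm_block;

	complete_all(&pm_block);	/* init: submissions allowed */
	printf("exec allowed: %d\n", try_wait_for_completion(&pm_block));

	reinit_completion(&pm_block);	/* PM_SUSPEND_PREPARE: block waiters */
	printf("exec allowed: %d\n", try_wait_for_completion(&pm_block));

	complete_all(&pm_block);	/* PM_POST_SUSPEND: release everyone */
	printf("exec allowed: %d\n", try_wait_for_completion(&pm_block));
	return 0;
}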
+diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
+index ad263de44111d4..375a197a86089b 100644
+--- a/drivers/gpu/drm/xe/xe_pm.c
++++ b/drivers/gpu/drm/xe/xe_pm.c
+@@ -23,6 +23,7 @@
+ #include "xe_pcode.h"
+ #include "xe_pxp.h"
+ #include "xe_trace.h"
++#include "xe_vm.h"
+ #include "xe_wa.h"
+ 
+ /**
+@@ -285,6 +286,19 @@ static u32 vram_threshold_value(struct xe_device *xe)
+ 	return DEFAULT_VRAM_THRESHOLD;
+ }
+ 
++static void xe_pm_wake_rebind_workers(struct xe_device *xe)
++{
++	struct xe_vm *vm, *next;
++
++	mutex_lock(&xe->rebind_resume_lock);
++	list_for_each_entry_safe(vm, next, &xe->rebind_resume_list,
++				 preempt.pm_activate_link) {
++		list_del_init(&vm->preempt.pm_activate_link);
++		xe_vm_resume_rebind_worker(vm);
++	}
++	mutex_unlock(&xe->rebind_resume_lock);
++}
++
+ static int xe_pm_notifier_callback(struct notifier_block *nb,
+ 				   unsigned long action, void *data)
+ {
+@@ -294,30 +308,30 @@ static int xe_pm_notifier_callback(struct notifier_block *nb,
+ 	switch (action) {
+ 	case PM_HIBERNATION_PREPARE:
+ 	case PM_SUSPEND_PREPARE:
++		reinit_completion(&xe->pm_block);
+ 		xe_pm_runtime_get(xe);
+ 		err = xe_bo_evict_all_user(xe);
+-		if (err) {
++		if (err)
+ 			drm_dbg(&xe->drm, "Notifier evict user failed (%d)\n", err);
+-			xe_pm_runtime_put(xe);
+-			break;
+-		}
+ 
+ 		err = xe_bo_notifier_prepare_all_pinned(xe);
+-		if (err) {
++		if (err)
+ 			drm_dbg(&xe->drm, "Notifier prepare pin failed (%d)\n", err);
+-			xe_pm_runtime_put(xe);
+-		}
++		/*
++		 * Keep the runtime pm reference until post hibernation / post suspend to
++		 * avoid a runtime suspend interfering with evicted objects or backup
++		 * allocations.
++		 */
+ 		break;
+ 	case PM_POST_HIBERNATION:
+ 	case PM_POST_SUSPEND:
++		complete_all(&xe->pm_block);
++		xe_pm_wake_rebind_workers(xe);
+ 		xe_bo_notifier_unprepare_all_pinned(xe);
+ 		xe_pm_runtime_put(xe);
+ 		break;
+ 	}
+ 
+-	if (err)
+-		return NOTIFY_BAD;
+-
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -339,6 +353,14 @@ int xe_pm_init(struct xe_device *xe)
+ 	if (err)
+ 		return err;
+ 
++	err = drmm_mutex_init(&xe->drm, &xe->rebind_resume_lock);
++	if (err)
++		goto err_unregister;
++
++	init_completion(&xe->pm_block);
++	complete_all(&xe->pm_block);
++	INIT_LIST_HEAD(&xe->rebind_resume_list);
++
+ 	/* For now suspend/resume is only allowed with GuC */
+ 	if (!xe_device_uc_enabled(xe))
+ 		return 0;
+diff --git a/drivers/gpu/drm/xe/xe_survivability_mode.c b/drivers/gpu/drm/xe/xe_survivability_mode.c
+index 1f710b3fc599b5..5ae3d70e45167f 100644
+--- a/drivers/gpu/drm/xe/xe_survivability_mode.c
++++ b/drivers/gpu/drm/xe/xe_survivability_mode.c
+@@ -40,6 +40,8 @@
+  *
+  *	# echo 1 > /sys/kernel/config/xe/0000:03:00.0/survivability_mode
+  *
++ * It is the responsibility of the user to clear the mode once the firmware flash is complete.
++ *
+  * Refer :ref:`xe_configfs` for more details on how to use configfs
+  *
+  * Survivability mode is indicated by the below admin-only readable sysfs which provides additional
+@@ -146,7 +148,6 @@ static void xe_survivability_mode_fini(void *arg)
+ 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
+ 	struct device *dev = &pdev->dev;
+ 
+-	xe_configfs_clear_survivability_mode(pdev);
+ 	sysfs_remove_file(&dev->kobj, &dev_attr_survivability_mode.attr);
+ }
+ 
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index e278aad1a6eb29..84052b98002d14 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -393,6 +393,9 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+ 		list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
+ 			       &vm->rebind_list);
+ 
++	if (!try_wait_for_completion(&vm->xe->pm_block))
++		return -EAGAIN;
++
+ 	ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false);
+ 	if (ret)
+ 		return ret;
+@@ -479,6 +482,33 @@ static int xe_preempt_work_begin(struct drm_exec *exec, struct xe_vm *vm,
+ 	return xe_vm_validate_rebind(vm, exec, vm->preempt.num_exec_queues);
+ }
+ 
++static bool vm_suspend_rebind_worker(struct xe_vm *vm)
++{
++	struct xe_device *xe = vm->xe;
++	bool ret = false;
++
++	mutex_lock(&xe->rebind_resume_lock);
++	if (!try_wait_for_completion(&vm->xe->pm_block)) {
++		ret = true;
++		list_move_tail(&vm->preempt.pm_activate_link, &xe->rebind_resume_list);
++	}
++	mutex_unlock(&xe->rebind_resume_lock);
++
++	return ret;
++}
++
++/**
++ * xe_vm_resume_rebind_worker() - Resume the rebind worker.
++ * @vm: The vm whose preempt worker to resume.
++ *
++ * Resume a preempt worker that was previously suspended by
++ * vm_suspend_rebind_worker().
++ */
++void xe_vm_resume_rebind_worker(struct xe_vm *vm)
++{
++	queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
++}
++
+ static void preempt_rebind_work_func(struct work_struct *w)
+ {
+ 	struct xe_vm *vm = container_of(w, struct xe_vm, preempt.rebind_work);
+@@ -502,6 +532,11 @@ static void preempt_rebind_work_func(struct work_struct *w)
+ 	}
+ 
+ retry:
++	if (!try_wait_for_completion(&vm->xe->pm_block) && vm_suspend_rebind_worker(vm)) {
++		up_write(&vm->lock);
++		return;
++	}
++
+ 	if (xe_vm_userptr_check_repin(vm)) {
+ 		err = xe_vm_userptr_pin(vm);
+ 		if (err)
+@@ -1686,6 +1721,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
+ 	if (flags & XE_VM_FLAG_LR_MODE) {
+ 		INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
+ 		xe_pm_runtime_get_noresume(xe);
++		INIT_LIST_HEAD(&vm->preempt.pm_activate_link);
+ 	}
+ 
+ 	if (flags & XE_VM_FLAG_FAULT_MODE) {
+@@ -1867,8 +1903,12 @@ void xe_vm_close_and_put(struct xe_vm *vm)
+ 	xe_assert(xe, !vm->preempt.num_exec_queues);
+ 
+ 	xe_vm_close(vm);
+-	if (xe_vm_in_preempt_fence_mode(vm))
++	if (xe_vm_in_preempt_fence_mode(vm)) {
++		mutex_lock(&xe->rebind_resume_lock);
++		list_del_init(&vm->preempt.pm_activate_link);
++		mutex_unlock(&xe->rebind_resume_lock);
+ 		flush_work(&vm->preempt.rebind_work);
++	}
+ 	if (xe_vm_in_fault_mode(vm))
+ 		xe_svm_close(vm);
+ 
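Rather than spinning while suspend is in progress, the rebind worker above parks its VM on rebind_resume_list under rebind_resume_lock and returns; xe_pm_wake_rebind_workers() then requeues each parked worker once on resume, and xe_vm_close_and_put() unlinks the VM first so a dying VM cannot be woken. A hedged sketch of that park-and-kick list pattern (single-threaded, so the mutex is omitted):

/* Toy park-and-kick list; 'next' plays the role of pm_activate_link. */
#include <stdio.h>

struct vm {
	const char *name;
	struct vm *next;
};

static struct vm *resume_list;	/* plays the role of rebind_resume_list */

/* worker sees pm_block not completed: park itself instead of spinning */
static void suspend_worker(struct vm *v)
{
	v->next = resume_list;
	resume_list = v;
}

/* PM_POST_SUSPEND: kick every parked worker exactly once */
static void wake_rebind_workers(void)
{
	while (resume_list) {
		struct vm *v = resume_list;
		resume_list = v->next;
		printf("requeue rebind worker for %s\n", v->name);
	}
}

int main(void)
{
	struct vm a = { "vm-a" }, b = { "vm-b" };

	suspend_worker(&a);
	suspend_worker(&b);
	wake_rebind_workers();
	return 0;
}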
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index e54ca835b58282..e493a17e0f19d9 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -268,6 +268,8 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
+ 				       struct xe_exec_queue *q, u64 addr,
+ 				       enum xe_cache_level cache_lvl);
+ 
++void xe_vm_resume_rebind_worker(struct xe_vm *vm);
++
+ /**
+  * xe_vm_resv() - Return's the vm's reservation object
+  * @vm: The vm
+diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
+index 1979e9bdbdf36b..4ebd3dc53f3c1a 100644
+--- a/drivers/gpu/drm/xe/xe_vm_types.h
++++ b/drivers/gpu/drm/xe/xe_vm_types.h
+@@ -286,6 +286,11 @@ struct xe_vm {
+ 		 * BOs
+ 		 */
+ 		struct work_struct rebind_work;
++		/**
++		 * @preempt.pm_activate_link: Link to list of rebind workers to be
++		 * kicked on resume.
++		 */
++		struct list_head pm_activate_link;
+ 	} preempt;
+ 
+ 	/** @um: unified memory state */
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index a7f89946dad418..e94ac746a741af 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1052,7 +1052,7 @@ static const struct pci_device_id i801_ids[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_P_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_SOC_S_SMBUS,	FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS,	FEATURES_ICH5 | FEATURE_TCO_CNL) },
+-	{ PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
++	{ PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS,		FEATURES_ICH5)			 },
+ 	{ PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+diff --git a/drivers/i2c/busses/i2c-rtl9300.c b/drivers/i2c/busses/i2c-rtl9300.c
+index cfafe089102aa2..9e1f71fed0feac 100644
+--- a/drivers/i2c/busses/i2c-rtl9300.c
++++ b/drivers/i2c/busses/i2c-rtl9300.c
+@@ -99,6 +99,9 @@ static int rtl9300_i2c_config_xfer(struct rtl9300_i2c *i2c, struct rtl9300_i2c_c
+ {
+ 	u32 val, mask;
+ 
++	if (len < 1 || len > 16)
++		return -EINVAL;
++
+ 	val = chan->bus_freq << RTL9300_I2C_MST_CTRL2_SCL_FREQ_OFS;
+ 	mask = RTL9300_I2C_MST_CTRL2_SCL_FREQ_MASK;
+ 
+@@ -222,15 +225,6 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+ 	}
+ 
+ 	switch (size) {
+-	case I2C_SMBUS_QUICK:
+-		ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
+-		if (ret)
+-			goto out_unlock;
+-		ret = rtl9300_i2c_reg_addr_set(i2c, 0, 0);
+-		if (ret)
+-			goto out_unlock;
+-		break;
+-
+ 	case I2C_SMBUS_BYTE:
+ 		if (read_write == I2C_SMBUS_WRITE) {
+ 			ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
+@@ -312,9 +306,9 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+ 
+ static u32 rtl9300_i2c_func(struct i2c_adapter *a)
+ {
+-	return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+-	       I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
+-	       I2C_FUNC_SMBUS_BLOCK_DATA;
++	return I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA |
++	       I2C_FUNC_SMBUS_WORD_DATA | I2C_FUNC_SMBUS_BLOCK_DATA |
++	       I2C_FUNC_SMBUS_I2C_BLOCK;
+ }
+ 
+ static const struct i2c_algorithm rtl9300_i2c_algo = {
+@@ -323,7 +317,7 @@ static const struct i2c_algorithm rtl9300_i2c_algo = {
+ };
+ 
+ static struct i2c_adapter_quirks rtl9300_i2c_quirks = {
+-	.flags		= I2C_AQ_NO_CLK_STRETCH,
++	.flags		= I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN,
+ 	.max_read_len	= 16,
+ 	.max_write_len	= 16,
+ };
+@@ -353,7 +347,7 @@ static int rtl9300_i2c_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, i2c);
+ 
+-	if (device_get_child_node_count(dev) >= RTL9300_I2C_MUX_NCHAN)
++	if (device_get_child_node_count(dev) > RTL9300_I2C_MUX_NCHAN)
+ 		return dev_err_probe(dev, -EINVAL, "Too many channels\n");
+ 
+ 	device_for_each_child_node(dev, child) {
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 1d8c579b54331e..4ee9e403d38501 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -422,6 +422,7 @@ static const struct xpad_device {
+ 	{ 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE },
+ 	{ 0x366c, 0x0005, "ByoWave Proteus Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE, FLAG_DELAY_INIT },
+ 	{ 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
++	{ 0x37d7, 0x2501, "Flydigi Apex 5", 0, XTYPE_XBOX360 },
+ 	{ 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 },
+ 	{ 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+ 	{ 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
+@@ -578,6 +579,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOX360_VENDOR(0x3537),		/* GameSir Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x3537),		/* GameSir Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x366c),		/* ByoWave controllers */
++	XPAD_XBOX360_VENDOR(0x37d7),		/* Flydigi Controllers */
+ 	XPAD_XBOX360_VENDOR(0x413d),		/* Black Shark Green Ghost Controller */
+ 	{ }
+ };
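
Both xpad hunks follow the driver's usual two-table pattern: xpad_table
carries the coarse vendor match that gets the driver bound, while
xpad_device supplies the per-product name and protocol type. A
self-contained sketch of the second lookup, with the IDs taken from the
hunk and everything else illustrative:

#include <stdint.h>
#include <stdio.h>

struct pad_id {
	uint16_t vendor, product;
	const char *name;
	const char *xtype;
};

static const struct pad_id pads[] = {
	{ 0x37d7, 0x2501, "Flydigi Apex 5", "XTYPE_XBOX360" },
	{ 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", "XTYPE_XBOX360" },
	{ 0x0000, 0x0000, NULL, NULL }	/* sentinel, as in the real table */
};

static const struct pad_id *pad_lookup(uint16_t vendor, uint16_t product)
{
	const struct pad_id *p;

	for (p = pads; p->name; p++)
		if (p->vendor == vendor && p->product == product)
			return p;
	return NULL;
}

int main(void)
{
	const struct pad_id *p = pad_lookup(0x37d7, 0x2501);

	if (p)
		printf("matched %s (%s)\n", p->name, p->xtype);
	return 0;
}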
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 6fac31c0d99f2b..ff23219a582ab8 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -2427,6 +2427,9 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222,
+ 		if (error)
+ 			return error;
+ 
++		if (!iqs7222->kp_type[chan_index][i])
++			continue;
++
+ 		if (!dev_desc->event_offset)
+ 			continue;
+ 
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 6ed9fc34948cbe..1caa6c4ca435c7 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1155,6 +1155,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+ 					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
+ 	/*
+ 	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+ 	 * after suspend fixable with the forcenorestore quirk.
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index c8b79de84d3fb9..071f78e67fcba0 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -370,7 +370,7 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
+ 	struct intel_iommu *iommu = tag->iommu;
+ 	u64 type = DMA_TLB_PSI_FLUSH;
+ 
+-	if (domain->use_first_level) {
++	if (intel_domain_is_fs_paging(domain)) {
+ 		qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid, addr,
+ 				    pages, ih, domain->qi_batch);
+ 		return;
+@@ -529,7 +529,8 @@ void cache_tag_flush_range_np(struct dmar_domain *domain, unsigned long start,
+ 			qi_batch_flush_descs(iommu, domain->qi_batch);
+ 		iommu = tag->iommu;
+ 
+-		if (!cap_caching_mode(iommu->cap) || domain->use_first_level) {
++		if (!cap_caching_mode(iommu->cap) ||
++		    intel_domain_is_fs_paging(domain)) {
+ 			iommu_flush_write_buffer(iommu);
+ 			continue;
+ 		}
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c239e280e43d91..34dd175a331dc7 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -57,6 +57,8 @@
+ static void __init check_tylersburg_isoch(void);
+ static int rwbf_quirk;
+ 
++#define rwbf_required(iommu)	(rwbf_quirk || cap_rwbf((iommu)->cap))
++
+ /*
+  * set to 1 to panic kernel if can't successfully enable VT-d
+  * (used when kernel is launched w/ TXT)
+@@ -1479,6 +1481,9 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+ 	struct context_entry *context;
+ 	int ret;
+ 
++	if (WARN_ON(!intel_domain_is_ss_paging(domain)))
++		return -EINVAL;
++
+ 	pr_debug("Set context mapping for %02x:%02x.%d\n",
+ 		bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+ 
+@@ -1795,18 +1800,6 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 					  (pgd_t *)pgd, flags, old);
+ }
+ 
+-static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
+-				       struct intel_iommu *iommu)
+-{
+-	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
+-		return true;
+-
+-	if (rwbf_quirk || cap_rwbf(iommu->cap))
+-		return true;
+-
+-	return false;
+-}
+-
+ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 				     struct device *dev)
+ {
+@@ -1830,12 +1823,14 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 
+ 	if (!sm_supported(iommu))
+ 		ret = domain_context_mapping(domain, dev);
+-	else if (domain->use_first_level)
++	else if (intel_domain_is_fs_paging(domain))
+ 		ret = domain_setup_first_level(iommu, domain, dev,
+ 					       IOMMU_NO_PASID, NULL);
+-	else
++	else if (intel_domain_is_ss_paging(domain))
+ 		ret = domain_setup_second_level(iommu, domain, dev,
+ 						IOMMU_NO_PASID, NULL);
++	else if (WARN_ON(true))
++		ret = -EINVAL;
+ 
+ 	if (ret)
+ 		goto out_block_translation;
+@@ -1844,8 +1839,6 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 	if (ret)
+ 		goto out_block_translation;
+ 
+-	domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain, iommu);
+-
+ 	return 0;
+ 
+ out_block_translation:
+@@ -3299,10 +3292,14 @@ static struct dmar_domain *paging_domain_alloc(struct device *dev, bool first_st
+ 	spin_lock_init(&domain->lock);
+ 	spin_lock_init(&domain->cache_lock);
+ 	xa_init(&domain->iommu_array);
++	INIT_LIST_HEAD(&domain->s1_domains);
++	spin_lock_init(&domain->s1_lock);
+ 
+ 	domain->nid = dev_to_node(dev);
+ 	domain->use_first_level = first_stage;
+ 
++	domain->domain.type = IOMMU_DOMAIN_UNMANAGED;
++
+ 	/* calculate the address width */
+ 	addr_width = agaw_to_width(iommu->agaw);
+ 	if (addr_width > cap_mgaw(iommu->cap))
+@@ -3344,62 +3341,92 @@ static struct dmar_domain *paging_domain_alloc(struct device *dev, bool first_st
+ }
+ 
+ static struct iommu_domain *
+-intel_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
+-				      const struct iommu_user_data *user_data)
++intel_iommu_domain_alloc_first_stage(struct device *dev,
++				     struct intel_iommu *iommu, u32 flags)
++{
++	struct dmar_domain *dmar_domain;
++
++	if (flags & ~IOMMU_HWPT_ALLOC_PASID)
++		return ERR_PTR(-EOPNOTSUPP);
++
++	/* Only SL is available in legacy mode */
++	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++		return ERR_PTR(-EOPNOTSUPP);
++
++	dmar_domain = paging_domain_alloc(dev, true);
++	if (IS_ERR(dmar_domain))
++		return ERR_CAST(dmar_domain);
++
++	dmar_domain->domain.ops = &intel_fs_paging_domain_ops;
++	/*
++	 * iotlb sync for map is only needed for legacy implementations that
++	 * explicitly require flushing internal write buffers to ensure memory
++	 * coherence.
++	 */
++	if (rwbf_required(iommu))
++		dmar_domain->iotlb_sync_map = true;
++
++	return &dmar_domain->domain;
++}
++
++static struct iommu_domain *
++intel_iommu_domain_alloc_second_stage(struct device *dev,
++				      struct intel_iommu *iommu, u32 flags)
+ {
+-	struct device_domain_info *info = dev_iommu_priv_get(dev);
+-	bool dirty_tracking = flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING;
+-	bool nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
+-	struct intel_iommu *iommu = info->iommu;
+ 	struct dmar_domain *dmar_domain;
+-	struct iommu_domain *domain;
+-	bool first_stage;
+ 
+ 	if (flags &
+ 	    (~(IOMMU_HWPT_ALLOC_NEST_PARENT | IOMMU_HWPT_ALLOC_DIRTY_TRACKING |
+ 	       IOMMU_HWPT_ALLOC_PASID)))
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (nested_parent && !nested_supported(iommu))
++
++	if (((flags & IOMMU_HWPT_ALLOC_NEST_PARENT) &&
++	     !nested_supported(iommu)) ||
++	    ((flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING) &&
++	     !ssads_supported(iommu)))
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (user_data || (dirty_tracking && !ssads_supported(iommu)))
++
++	/* Legacy mode always supports second stage */
++	if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 
++	dmar_domain = paging_domain_alloc(dev, false);
++	if (IS_ERR(dmar_domain))
++		return ERR_CAST(dmar_domain);
++
++	dmar_domain->domain.ops = &intel_ss_paging_domain_ops;
++	dmar_domain->nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
++
++	if (flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING)
++		dmar_domain->domain.dirty_ops = &intel_dirty_ops;
++
+ 	/*
+-	 * Always allocate the guest compatible page table unless
+-	 * IOMMU_HWPT_ALLOC_NEST_PARENT or IOMMU_HWPT_ALLOC_DIRTY_TRACKING
+-	 * is specified.
++	 * Besides the internal write buffer flush, the caching mode used for
++	 * legacy nested translation (which utilizes shadowing page tables)
++	 * also requires iotlb sync on map.
+ 	 */
+-	if (nested_parent || dirty_tracking) {
+-		if (!sm_supported(iommu) || !ecap_slts(iommu->ecap))
+-			return ERR_PTR(-EOPNOTSUPP);
+-		first_stage = false;
+-	} else {
+-		first_stage = first_level_by_default(iommu);
+-	}
++	if (rwbf_required(iommu) || cap_caching_mode(iommu->cap))
++		dmar_domain->iotlb_sync_map = true;
+ 
+-	dmar_domain = paging_domain_alloc(dev, first_stage);
+-	if (IS_ERR(dmar_domain))
+-		return ERR_CAST(dmar_domain);
+-	domain = &dmar_domain->domain;
+-	domain->type = IOMMU_DOMAIN_UNMANAGED;
+-	domain->owner = &intel_iommu_ops;
+-	domain->ops = intel_iommu_ops.default_domain_ops;
+-
+-	if (nested_parent) {
+-		dmar_domain->nested_parent = true;
+-		INIT_LIST_HEAD(&dmar_domain->s1_domains);
+-		spin_lock_init(&dmar_domain->s1_lock);
+-	}
++	return &dmar_domain->domain;
++}
+ 
+-	if (dirty_tracking) {
+-		if (dmar_domain->use_first_level) {
+-			iommu_domain_free(domain);
+-			return ERR_PTR(-EOPNOTSUPP);
+-		}
+-		domain->dirty_ops = &intel_dirty_ops;
+-	}
++static struct iommu_domain *
++intel_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
++				      const struct iommu_user_data *user_data)
++{
++	struct device_domain_info *info = dev_iommu_priv_get(dev);
++	struct intel_iommu *iommu = info->iommu;
++	struct iommu_domain *domain;
+ 
+-	return domain;
++	if (user_data)
++		return ERR_PTR(-EOPNOTSUPP);
++
++	/* Prefer first stage if possible by default. */
++	domain = intel_iommu_domain_alloc_first_stage(dev, iommu, flags);
++	if (domain != ERR_PTR(-EOPNOTSUPP))
++		return domain;
++	return intel_iommu_domain_alloc_second_stage(dev, iommu, flags);
+ }
+ 
+ static void intel_iommu_domain_free(struct iommu_domain *domain)
+@@ -3411,33 +3438,86 @@ static void intel_iommu_domain_free(struct iommu_domain *domain)
+ 	domain_exit(dmar_domain);
+ }
+ 
++static int paging_domain_compatible_first_stage(struct dmar_domain *dmar_domain,
++						struct intel_iommu *iommu)
++{
++	if (WARN_ON(dmar_domain->domain.dirty_ops ||
++		    dmar_domain->nested_parent))
++		return -EINVAL;
++
++	/* Only SL is available in legacy mode */
++	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++		return -EINVAL;
++
++	/* Same page size support */
++	if (!cap_fl1gp_support(iommu->cap) &&
++	    (dmar_domain->domain.pgsize_bitmap & SZ_1G))
++		return -EINVAL;
++
++	/* iotlb sync on map requirement */
++	if ((rwbf_required(iommu)) && !dmar_domain->iotlb_sync_map)
++		return -EINVAL;
++
++	return 0;
++}
++
++static int
++paging_domain_compatible_second_stage(struct dmar_domain *dmar_domain,
++				      struct intel_iommu *iommu)
++{
++	unsigned int sslps = cap_super_page_val(iommu->cap);
++
++	if (dmar_domain->domain.dirty_ops && !ssads_supported(iommu))
++		return -EINVAL;
++	if (dmar_domain->nested_parent && !nested_supported(iommu))
++		return -EINVAL;
++
++	/* Legacy mode always supports second stage */
++	if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
++		return -EINVAL;
++
++	/* Same page size support */
++	if (!(sslps & BIT(0)) && (dmar_domain->domain.pgsize_bitmap & SZ_2M))
++		return -EINVAL;
++	if (!(sslps & BIT(1)) && (dmar_domain->domain.pgsize_bitmap & SZ_1G))
++		return -EINVAL;
++
++	/* iotlb sync on map requirement */
++	if ((rwbf_required(iommu) || cap_caching_mode(iommu->cap)) &&
++	    !dmar_domain->iotlb_sync_map)
++		return -EINVAL;
++
++	return 0;
++}
++
+ int paging_domain_compatible(struct iommu_domain *domain, struct device *dev)
+ {
+ 	struct device_domain_info *info = dev_iommu_priv_get(dev);
+ 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+ 	struct intel_iommu *iommu = info->iommu;
++	int ret = -EINVAL;
+ 	int addr_width;
+ 
+-	if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING)))
+-		return -EPERM;
++	if (intel_domain_is_fs_paging(dmar_domain))
++		ret = paging_domain_compatible_first_stage(dmar_domain, iommu);
++	else if (intel_domain_is_ss_paging(dmar_domain))
++		ret = paging_domain_compatible_second_stage(dmar_domain, iommu);
++	else if (WARN_ON(true))
++		ret = -EINVAL;
++	if (ret)
++		return ret;
+ 
++	/*
++	 * FIXME this is locked wrong, it needs to be under the
++	 * dmar_domain->lock
++	 */
+ 	if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
+ 		return -EINVAL;
+ 
+-	if (domain->dirty_ops && !ssads_supported(iommu))
+-		return -EINVAL;
+-
+ 	if (dmar_domain->iommu_coherency !=
+ 			iommu_paging_structure_coherency(iommu))
+ 		return -EINVAL;
+ 
+-	if (dmar_domain->iommu_superpage !=
+-			iommu_superpage_capability(iommu, dmar_domain->use_first_level))
+-		return -EINVAL;
+-
+-	if (dmar_domain->use_first_level &&
+-	    (!sm_supported(iommu) || !ecap_flts(iommu->ecap)))
+-		return -EINVAL;
+ 
+ 	/* check if this iommu agaw is sufficient for max mapped address */
+ 	addr_width = agaw_to_width(iommu->agaw);
+@@ -4094,12 +4174,15 @@ static int intel_iommu_set_dev_pasid(struct iommu_domain *domain,
+ 	if (ret)
+ 		goto out_remove_dev_pasid;
+ 
+-	if (dmar_domain->use_first_level)
++	if (intel_domain_is_fs_paging(dmar_domain))
+ 		ret = domain_setup_first_level(iommu, dmar_domain,
+ 					       dev, pasid, old);
+-	else
++	else if (intel_domain_is_ss_paging(dmar_domain))
+ 		ret = domain_setup_second_level(iommu, dmar_domain,
+ 						dev, pasid, old);
++	else if (WARN_ON(true))
++		ret = -EINVAL;
++
+ 	if (ret)
+ 		goto out_unwind_iopf;
+ 
+@@ -4374,6 +4457,32 @@ static struct iommu_domain identity_domain = {
+ 	},
+ };
+ 
++const struct iommu_domain_ops intel_fs_paging_domain_ops = {
++	.attach_dev = intel_iommu_attach_device,
++	.set_dev_pasid = intel_iommu_set_dev_pasid,
++	.map_pages = intel_iommu_map_pages,
++	.unmap_pages = intel_iommu_unmap_pages,
++	.iotlb_sync_map = intel_iommu_iotlb_sync_map,
++	.flush_iotlb_all = intel_flush_iotlb_all,
++	.iotlb_sync = intel_iommu_tlb_sync,
++	.iova_to_phys = intel_iommu_iova_to_phys,
++	.free = intel_iommu_domain_free,
++	.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
++};
++
++const struct iommu_domain_ops intel_ss_paging_domain_ops = {
++	.attach_dev = intel_iommu_attach_device,
++	.set_dev_pasid = intel_iommu_set_dev_pasid,
++	.map_pages = intel_iommu_map_pages,
++	.unmap_pages = intel_iommu_unmap_pages,
++	.iotlb_sync_map = intel_iommu_iotlb_sync_map,
++	.flush_iotlb_all = intel_flush_iotlb_all,
++	.iotlb_sync = intel_iommu_tlb_sync,
++	.iova_to_phys = intel_iommu_iova_to_phys,
++	.free = intel_iommu_domain_free,
++	.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
++};
++
+ const struct iommu_ops intel_iommu_ops = {
+ 	.blocked_domain		= &blocking_domain,
+ 	.release_domain		= &blocking_domain,
+@@ -4391,18 +4500,6 @@ const struct iommu_ops intel_iommu_ops = {
+ 	.is_attach_deferred	= intel_iommu_is_attach_deferred,
+ 	.def_domain_type	= device_def_domain_type,
+ 	.page_response		= intel_iommu_page_response,
+-	.default_domain_ops = &(const struct iommu_domain_ops) {
+-		.attach_dev		= intel_iommu_attach_device,
+-		.set_dev_pasid		= intel_iommu_set_dev_pasid,
+-		.map_pages		= intel_iommu_map_pages,
+-		.unmap_pages		= intel_iommu_unmap_pages,
+-		.iotlb_sync_map		= intel_iommu_iotlb_sync_map,
+-		.flush_iotlb_all        = intel_flush_iotlb_all,
+-		.iotlb_sync		= intel_iommu_tlb_sync,
+-		.iova_to_phys		= intel_iommu_iova_to_phys,
+-		.free			= intel_iommu_domain_free,
+-		.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
+-	}
+ };
+ 
+ static void quirk_iommu_igfx(struct pci_dev *dev)
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index 61f42802fe9e95..c699ed8810f23c 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -1381,6 +1381,18 @@ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
+ 					 u8 devfn, int alloc);
+ 
+ extern const struct iommu_ops intel_iommu_ops;
++extern const struct iommu_domain_ops intel_fs_paging_domain_ops;
++extern const struct iommu_domain_ops intel_ss_paging_domain_ops;
++
++static inline bool intel_domain_is_fs_paging(struct dmar_domain *domain)
++{
++	return domain->domain.ops == &intel_fs_paging_domain_ops;
++}
++
++static inline bool intel_domain_is_ss_paging(struct dmar_domain *domain)
++{
++	return domain->domain.ops == &intel_ss_paging_domain_ops;
++}
+ 
+ #ifdef CONFIG_INTEL_IOMMU
+ extern int intel_iommu_sm;
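
With a dedicated ops table per paging flavour, a domain's type is now
encoded in which vtable its ops pointer refers to, which is exactly what
the two inline helpers above test. A standalone sketch of that
discrimination idiom (struct names abbreviated, contents elided):

#include <stdbool.h>
#include <stdio.h>

struct domain_ops { const char *flavour; };
struct domain { const struct domain_ops *ops; };

static const struct domain_ops fs_paging_ops = { "first-stage" };
static const struct domain_ops ss_paging_ops = { "second-stage" };

static bool is_fs_paging(const struct domain *d)
{
	return d->ops == &fs_paging_ops;	/* pointer identity, not contents */
}

static bool is_ss_paging(const struct domain *d)
{
	return d->ops == &ss_paging_ops;
}

int main(void)
{
	struct domain d = { .ops = &ss_paging_ops };

	printf("fs=%d ss=%d\n", is_fs_paging(&d), is_ss_paging(&d));
	return 0;
}

The check can never drift out of sync with the ops actually installed,
but a third flavour matches neither helper, which is why the iommu.c
hunks fall back to WARN_ON(true) plus -EINVAL rather than a bare else.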
+diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
+index fc312f649f9ef0..1b6ad9c900a5ad 100644
+--- a/drivers/iommu/intel/nested.c
++++ b/drivers/iommu/intel/nested.c
+@@ -216,8 +216,7 @@ intel_iommu_domain_alloc_nested(struct device *dev, struct iommu_domain *parent,
+ 	/* Must be nested domain */
+ 	if (user_data->type != IOMMU_HWPT_DATA_VTD_S1)
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (parent->ops != intel_iommu_ops.default_domain_ops ||
+-	    !s2_domain->nested_parent)
++	if (!intel_domain_is_ss_paging(s2_domain) || !s2_domain->nested_parent)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	ret = iommu_copy_struct_from_user(&vtd, user_data,
+@@ -229,7 +228,6 @@ intel_iommu_domain_alloc_nested(struct device *dev, struct iommu_domain *parent,
+ 	if (!domain)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	domain->use_first_level = true;
+ 	domain->s2_domain = s2_domain;
+ 	domain->s1_cfg = vtd;
+ 	domain->domain.ops = &intel_nested_domain_ops;
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index f3da596410b5e5..3994521f6ea488 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -214,7 +214,6 @@ struct iommu_domain *intel_svm_domain_alloc(struct device *dev,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	domain->domain.ops = &intel_svm_domain_ops;
+-	domain->use_first_level = true;
+ 	INIT_LIST_HEAD(&domain->dev_pasids);
+ 	INIT_LIST_HEAD(&domain->cache_tags);
+ 	spin_lock_init(&domain->cache_lock);
+diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
+index 54833717f8a70f..667bde3c651ff2 100644
+--- a/drivers/irqchip/irq-mvebu-gicp.c
++++ b/drivers/irqchip/irq-mvebu-gicp.c
+@@ -238,7 +238,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	base = ioremap(gicp->res->start, resource_size(gicp->res));
+-	if (IS_ERR(base)) {
++	if (!base) {
+ 		dev_err(&pdev->dev, "ioremap() failed. Unable to clear pending interrupts.\n");
+ 	} else {
+ 		for (i = 0; i < 64; i++)
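
The one-line fix above encodes a return convention worth spelling out:
ioremap() reports failure with NULL, while IS_ERR() only recognizes
pointers in the last page of the address space, so the old check could
never fire. A freestanding sketch of the two conventions, with the
kernel's <linux/err.h> macros reproduced in simplified form:

#include <stdio.h>

#define MAX_ERRNO	4095

/* Simplified ERR_PTR()/IS_ERR(), modeled on <linux/err.h>. */
#define ERR_PTR(err)	((void *)(long)(err))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

int main(void)
{
	void *from_ioremap = NULL;		/* how ioremap() fails */
	void *from_err_ptr = ERR_PTR(-12);	/* -ENOMEM encoded in a pointer */

	printf("IS_ERR(NULL)         = %d\n", IS_ERR(from_ioremap));	/* 0 */
	printf("from_ioremap == NULL = %d\n", from_ioremap == NULL);	/* 1 */
	printf("IS_ERR(ERR_PTR(-12)) = %d\n", IS_ERR(from_err_ptr));	/* 1 */
	return 0;
}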
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3f355bb85797f8..0f41573fa9f5ec 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1406,7 +1406,7 @@ static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, stru
+ 		else {
+ 			if (sb->events_hi == sb->cp_events_hi &&
+ 				sb->events_lo == sb->cp_events_lo) {
+-				mddev->resync_offset = sb->resync_offset;
++				mddev->resync_offset = sb->recovery_cp;
+ 			} else
+ 				mddev->resync_offset = 0;
+ 		}
+@@ -1534,13 +1534,13 @@ static void super_90_sync(struct mddev *mddev, struct md_rdev *rdev)
+ 	mddev->minor_version = sb->minor_version;
+ 	if (mddev->in_sync)
+ 	{
+-		sb->resync_offset = mddev->resync_offset;
++		sb->recovery_cp = mddev->resync_offset;
+ 		sb->cp_events_hi = (mddev->events>>32);
+ 		sb->cp_events_lo = (u32)mddev->events;
+ 		if (mddev->resync_offset == MaxSector)
+ 			sb->state = (1<< MD_SB_CLEAN);
+ 	} else
+-		sb->resync_offset = 0;
++		sb->recovery_cp = 0;
+ 
+ 	sb->layout = mddev->layout;
+ 	sb->chunk_size = mddev->chunk_sectors << 9;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 84ab4a83cbd686..db94d14a3807f5 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -1377,14 +1377,24 @@ static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Read setup timing depends on the operation done on the NAND:
++	 *
++	 * NRD_SETUP = max(tAR, tCLR)
++	 */
++	timeps = max(conf->timings.sdr.tAR_min, conf->timings.sdr.tCLR_min);
++	ncycles = DIV_ROUND_UP(timeps, mckperiodps);
++	totalcycles += ncycles;
++	ret = atmel_smc_cs_conf_set_setup(smcconf, ATMEL_SMC_NRD_SHIFT, ncycles);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * The read cycle timing is directly matching tRC, but is also
+ 	 * dependent on the setup and hold timings we calculated earlier,
+ 	 * which gives:
+ 	 *
+-	 * NRD_CYCLE = max(tRC, NRD_PULSE + NRD_HOLD)
+-	 *
+-	 * NRD_SETUP is always 0.
++	 * NRD_CYCLE = max(tRC, NRD_SETUP + NRD_PULSE + NRD_HOLD)
+ 	 */
+ 	ncycles = DIV_ROUND_UP(conf->timings.sdr.tRC_min, mckperiodps);
+ 	ncycles = max(totalcycles, ncycles);
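
The added NRD_SETUP computation is plain cycle rounding: take the larger
of tAR and tCLR in picoseconds, round up to whole master-clock cycles,
and fold the result into totalcycles so the NRD_CYCLE bound max(tRC,
NRD_SETUP + NRD_PULSE + NRD_HOLD) still holds. A worked sketch with
made-up datasheet numbers (the real inputs come from conf->timings.sdr):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define MAX(a, b)		((a) > (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical SDR timings (ps) and a ~133 MHz master clock. */
	unsigned long tAR_min = 10000, tCLR_min = 10000;
	unsigned long mckperiodps = 7519;
	unsigned long timeps, ncycles;

	timeps = MAX(tAR_min, tCLR_min);
	ncycles = DIV_ROUND_UP(timeps, mckperiodps);
	printf("NRD_SETUP = %lu cycles\n", ncycles);	/* 10000/7519 -> 2 */
	return 0;
}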
+diff --git a/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c b/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
+index c23b537948d5e6..1a285cd8fad62a 100644
+--- a/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
++++ b/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
+@@ -935,10 +935,10 @@ static void ma35_chips_cleanup(struct ma35_nand_info *nand)
+ 
+ static int ma35_nand_chips_init(struct device *dev, struct ma35_nand_info *nand)
+ {
+-	struct device_node *np = dev->of_node, *nand_np;
++	struct device_node *np = dev->of_node;
+ 	int ret;
+ 
+-	for_each_child_of_node(np, nand_np) {
++	for_each_child_of_node_scoped(np, nand_np) {
+ 		ret = ma35_nand_chip_init(dev, nand, nand_np);
+ 		if (ret) {
+ 			ma35_chips_cleanup(nand);
+diff --git a/drivers/mtd/nand/raw/stm32_fmc2_nand.c b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+index a960403081f110..d957327fb4fa04 100644
+--- a/drivers/mtd/nand/raw/stm32_fmc2_nand.c
++++ b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+@@ -272,6 +272,7 @@ struct stm32_fmc2_nfc {
+ 	struct sg_table dma_data_sg;
+ 	struct sg_table dma_ecc_sg;
+ 	u8 *ecc_buf;
++	dma_addr_t dma_ecc_addr;
+ 	int dma_ecc_len;
+ 	u32 tx_dma_max_burst;
+ 	u32 rx_dma_max_burst;
+@@ -902,17 +903,10 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 
+ 	if (!write_data && !raw) {
+ 		/* Configure DMA ECC status */
+-		p = nfc->ecc_buf;
+ 		for_each_sg(nfc->dma_ecc_sg.sgl, sg, eccsteps, s) {
+-			sg_set_buf(sg, p, nfc->dma_ecc_len);
+-			p += nfc->dma_ecc_len;
+-		}
+-
+-		ret = dma_map_sg(nfc->dev, nfc->dma_ecc_sg.sgl,
+-				 eccsteps, dma_data_dir);
+-		if (!ret) {
+-			ret = -EIO;
+-			goto err_unmap_data;
++			sg_dma_address(sg) = nfc->dma_ecc_addr +
++					     s * nfc->dma_ecc_len;
++			sg_dma_len(sg) = nfc->dma_ecc_len;
+ 		}
+ 
+ 		desc_ecc = dmaengine_prep_slave_sg(nfc->dma_ecc_ch,
+@@ -921,7 +915,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 						   DMA_PREP_INTERRUPT);
+ 		if (!desc_ecc) {
+ 			ret = -ENOMEM;
+-			goto err_unmap_ecc;
++			goto err_unmap_data;
+ 		}
+ 
+ 		reinit_completion(&nfc->dma_ecc_complete);
+@@ -929,7 +923,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 		desc_ecc->callback_param = &nfc->dma_ecc_complete;
+ 		ret = dma_submit_error(dmaengine_submit(desc_ecc));
+ 		if (ret)
+-			goto err_unmap_ecc;
++			goto err_unmap_data;
+ 
+ 		dma_async_issue_pending(nfc->dma_ecc_ch);
+ 	}
+@@ -949,7 +943,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 		if (!write_data && !raw)
+ 			dmaengine_terminate_all(nfc->dma_ecc_ch);
+ 		ret = -ETIMEDOUT;
+-		goto err_unmap_ecc;
++		goto err_unmap_data;
+ 	}
+ 
+ 	/* Wait DMA data transfer completion */
+@@ -969,11 +963,6 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 		}
+ 	}
+ 
+-err_unmap_ecc:
+-	if (!write_data && !raw)
+-		dma_unmap_sg(nfc->dev, nfc->dma_ecc_sg.sgl,
+-			     eccsteps, dma_data_dir);
+-
+ err_unmap_data:
+ 	dma_unmap_sg(nfc->dev, nfc->dma_data_sg.sgl, eccsteps, dma_data_dir);
+ 
+@@ -996,9 +985,21 @@ static int stm32_fmc2_nfc_seq_write(struct nand_chip *chip, const u8 *buf,
+ 
+ 	/* Write oob */
+ 	if (oob_required) {
+-		ret = nand_change_write_column_op(chip, mtd->writesize,
+-						  chip->oob_poi, mtd->oobsize,
+-						  false);
++		unsigned int offset_in_page = mtd->writesize;
++		const void *buf = chip->oob_poi;
++		unsigned int len = mtd->oobsize;
++
++		if (!raw) {
++			struct mtd_oob_region oob_free;
++
++			mtd_ooblayout_free(mtd, 0, &oob_free);
++			offset_in_page += oob_free.offset;
++			buf += oob_free.offset;
++			len = oob_free.length;
++		}
++
++		ret = nand_change_write_column_op(chip, offset_in_page,
++						  buf, len, false);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -1610,7 +1611,8 @@ static int stm32_fmc2_nfc_dma_setup(struct stm32_fmc2_nfc *nfc)
+ 		return ret;
+ 
+ 	/* Allocate a buffer to store ECC status registers */
+-	nfc->ecc_buf = devm_kzalloc(nfc->dev, FMC2_MAX_ECC_BUF_LEN, GFP_KERNEL);
++	nfc->ecc_buf = dmam_alloc_coherent(nfc->dev, FMC2_MAX_ECC_BUF_LEN,
++					   &nfc->dma_ecc_addr, GFP_KERNEL);
+ 	if (!nfc->ecc_buf)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index b90f15c986a317..aa6fb862451aa4 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -20,7 +20,7 @@
+ #include <linux/spi/spi.h>
+ #include <linux/spi/spi-mem.h>
+ 
+-static int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
++int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
+ {
+ 	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(reg,
+ 						      spinand->scratchbuf);
+@@ -1253,8 +1253,19 @@ static int spinand_id_detect(struct spinand_device *spinand)
+ 
+ static int spinand_manufacturer_init(struct spinand_device *spinand)
+ {
+-	if (spinand->manufacturer->ops->init)
+-		return spinand->manufacturer->ops->init(spinand);
++	int ret;
++
++	if (spinand->manufacturer->ops->init) {
++		ret = spinand->manufacturer->ops->init(spinand);
++		if (ret)
++			return ret;
++	}
++
++	if (spinand->configure_chip) {
++		ret = spinand->configure_chip(spinand);
++		if (ret)
++			return ret;
++	}
+ 
+ 	return 0;
+ }
+@@ -1349,6 +1360,7 @@ int spinand_match_and_init(struct spinand_device *spinand,
+ 		spinand->flags = table[i].flags;
+ 		spinand->id.len = 1 + table[i].devid.len;
+ 		spinand->select_target = table[i].select_target;
++		spinand->configure_chip = table[i].configure_chip;
+ 		spinand->set_cont_read = table[i].set_cont_read;
+ 		spinand->fact_otp = &table[i].fact_otp;
+ 		spinand->user_otp = &table[i].user_otp;
+diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
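
spinand_manufacturer_init() now chains two optional callbacks: the
manufacturer-wide init and the new per-chip configure_chip hook copied
out of the info table during match. The pattern is simply "call each
hook if set, propagate the first error"; a minimal sketch under that
reading, with all names illustrative:

#include <stdio.h>

struct chip {
	int (*manufacturer_init)(struct chip *c);
	int (*configure_chip)(struct chip *c);	/* per-chip hook, may be NULL */
};

static int chip_init(struct chip *c)
{
	int ret;

	if (c->manufacturer_init) {
		ret = c->manufacturer_init(c);
		if (ret)
			return ret;
	}
	if (c->configure_chip) {
		ret = c->configure_chip(c);
		if (ret)
			return ret;
	}
	return 0;
}

static int hs_cfg(struct chip *c)
{
	printf("tuning high-speed read mode\n");	/* cf. w25n0xjw_hs_cfg() */
	return 0;
}

int main(void)
{
	struct chip c = { .configure_chip = hs_cfg };

	return chip_init(&c);
}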
+index b7a28f001a387b..116ac17591a86b 100644
+--- a/drivers/mtd/nand/spi/winbond.c
++++ b/drivers/mtd/nand/spi/winbond.c
+@@ -18,6 +18,9 @@
+ 
+ #define W25N04KV_STATUS_ECC_5_8_BITFLIPS	(3 << 4)
+ 
++#define W25N0XJW_SR4			0xD0
++#define W25N0XJW_SR4_HS			BIT(2)
++
+ /*
+  * "X2" in the core is equivalent to "dual output" in the datasheets,
+  * "X4" in the core is equivalent to "quad output" in the datasheets.
+@@ -42,10 +45,12 @@ static SPINAND_OP_VARIANTS(update_cache_octal_variants,
+ static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants,
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+@@ -157,6 +162,36 @@ static const struct mtd_ooblayout_ops w25n02kv_ooblayout = {
+ 	.free = w25n02kv_ooblayout_free,
+ };
+ 
++static int w25n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
++				  struct mtd_oob_region *region)
++{
++	if (section > 3)
++		return -ERANGE;
++
++	region->offset = (16 * section) + 12;
++	region->length = 4;
++
++	return 0;
++}
++
++static int w25n01jw_ooblayout_free(struct mtd_info *mtd, int section,
++				   struct mtd_oob_region *region)
++{
++	if (section > 3)
++		return -ERANGE;
++
++	region->offset = (16 * section);
++	region->length = 12;
++
++	/* Extract BBM */
++	if (!section) {
++		region->offset += 2;
++		region->length -= 2;
++	}
++
++	return 0;
++}
++
+ static int w35n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
+ 				  struct mtd_oob_region *region)
+ {
+@@ -187,6 +222,11 @@ static int w35n01jw_ooblayout_free(struct mtd_info *mtd, int section,
+ 	return 0;
+ }
+ 
++static const struct mtd_ooblayout_ops w25n01jw_ooblayout = {
++	.ecc = w25n01jw_ooblayout_ecc,
++	.free = w25n01jw_ooblayout_free,
++};
++
+ static const struct mtd_ooblayout_ops w35n01jw_ooblayout = {
+ 	.ecc = w35n01jw_ooblayout_ecc,
+ 	.free = w35n01jw_ooblayout_free,
+@@ -230,6 +270,40 @@ static int w25n02kv_ecc_get_status(struct spinand_device *spinand,
+ 	return -EINVAL;
+ }
+ 
++static int w25n0xjw_hs_cfg(struct spinand_device *spinand)
++{
++	const struct spi_mem_op *op;
++	bool hs;
++	u8 sr4;
++	int ret;
++
++	op = spinand->op_templates.read_cache;
++	if (op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr)
++		hs = false;
++	else if (op->cmd.buswidth == 1 && op->addr.buswidth == 1 &&
++		 op->dummy.buswidth == 1 && op->data.buswidth == 1)
++		hs = false;
++	else if (!op->max_freq)
++		hs = true;
++	else
++		hs = false;
++
++	ret = spinand_read_reg_op(spinand, W25N0XJW_SR4, &sr4);
++	if (ret)
++		return ret;
++
++	if (hs)
++		sr4 |= W25N0XJW_SR4_HS;
++	else
++		sr4 &= ~W25N0XJW_SR4_HS;
++
++	ret = spinand_write_reg_op(spinand, W25N0XJW_SR4, sr4);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
+ static const struct spinand_info winbond_spinand_table[] = {
+ 	/* 512M-bit densities */
+ 	SPINAND_INFO("W25N512GW", /* 1.8V */
+@@ -268,7 +342,8 @@ static const struct spinand_info winbond_spinand_table[] = {
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     0,
+-		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
++		     SPINAND_ECCINFO(&w25n01jw_ooblayout, NULL),
++		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
+ 	SPINAND_INFO("W25N01KV", /* 3.3V */
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xae, 0x21),
+ 		     NAND_MEMORG(1, 2048, 96, 64, 1024, 20, 1, 1, 1),
+@@ -324,7 +399,8 @@ static const struct spinand_info winbond_spinand_table[] = {
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     0,
+-		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
++		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL),
++		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
+ 	SPINAND_INFO("W25N02KV", /* 3.3V */
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xaa, 0x22),
+ 		     NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
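
The new w25n01jw layout slices the 64-byte OOB area into four 16-byte
sections: bytes 12-15 of each section hold ECC, the rest is free, and
the first two bytes of section 0 are reserved for the bad-block marker.
A short sketch that prints the regions exactly as the two callbacks
above compute them:

#include <stdio.h>

int main(void)
{
	int section;

	for (section = 0; section < 4; section++) {
		int ecc_off = 16 * section + 12, ecc_len = 4;
		int free_off = 16 * section, free_len = 12;

		if (!section) {		/* skip the bad-block marker */
			free_off += 2;
			free_len -= 2;
		}
		printf("section %d: ecc %2d+%d free %2d+%d\n",
		       section, ecc_off, ecc_len, free_off, free_len);
	}
	return 0;
}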
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 3f2e378199abba..5abe4af61655cd 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -690,14 +690,6 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
+ 		dlc |= XCAN_DLCR_EDL_MASK;
+ 	}
+ 
+-	if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
+-	    (priv->devtype.flags & XCAN_FLAG_TXFEMP))
+-		can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
+-	else
+-		can_put_echo_skb(skb, ndev, 0, 0);
+-
+-	priv->tx_head++;
+-
+ 	priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
+ 	/* If the CAN frame is an RTR frame, this write triggers transmission
+ 	 * (not on CAN FD)
+@@ -730,6 +722,14 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
+ 					data[1]);
+ 		}
+ 	}
++
++	if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
++	    (priv->devtype.flags & XCAN_FLAG_TXFEMP))
++		can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
++	else
++		can_put_echo_skb(skb, ndev, 0, 0);
++
++	priv->tx_head++;
+ }
+ 
+ /**
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d15d912690c40e..073d20241a4c9c 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1229,9 +1229,15 @@ static int b53_setup(struct dsa_switch *ds)
+ 	 */
+ 	ds->untag_vlan_aware_bridge_pvid = true;
+ 
+-	/* Ageing time is set in seconds */
+-	ds->ageing_time_min = 1 * 1000;
+-	ds->ageing_time_max = AGE_TIME_MAX * 1000;
++	if (dev->chip_id == BCM53101_DEVICE_ID) {
++		/* BCM53101 uses 0.5 second increments */
++		ds->ageing_time_min = 1 * 500;
++		ds->ageing_time_max = AGE_TIME_MAX * 500;
++	} else {
++		/* Everything else uses 1 second increments */
++		ds->ageing_time_min = 1 * 1000;
++		ds->ageing_time_max = AGE_TIME_MAX * 1000;
++	}
+ 
+ 	ret = b53_reset_switch(dev);
+ 	if (ret) {
+@@ -2448,7 +2454,10 @@ int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
+ 	else
+ 		reg = B53_AGING_TIME_CONTROL;
+ 
+-	atc = DIV_ROUND_CLOSEST(msecs, 1000);
++	if (dev->chip_id == BCM53101_DEVICE_ID)
++		atc = DIV_ROUND_CLOSEST(msecs, 500);
++	else
++		atc = DIV_ROUND_CLOSEST(msecs, 1000);
+ 
+ 	if (!is5325(dev) && !is5365(dev))
+ 		atc |= AGE_CHANGE;
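
Both b53 hunks encode the same unit difference: BCM53101's age-time
register counts 0.5 s ticks while the other supported switches count
whole seconds, so the advertised min/max and the msec-to-register
conversion must agree on the divisor. A worked sketch, using a
simplified (unsigned-only) DIV_ROUND_CLOSEST:

#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

static unsigned int age_reg(unsigned int msecs, int is_bcm53101)
{
	return DIV_ROUND_CLOSEST(msecs, is_bcm53101 ? 500 : 1000);
}

int main(void)
{
	/* 300 s bridge default: 600 ticks on BCM53101, 300 elsewhere. */
	printf("BCM53101: %u\n", age_reg(300000, 1));
	printf("others:   %u\n", age_reg(300000, 0));
	return 0;
}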
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 651b73163b6ee9..5f15f42070c539 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2358,7 +2358,8 @@ static void fec_enet_phy_reset_after_clk_enable(struct net_device *ndev)
+ 		 */
+ 		phy_dev = of_phy_find_device(fep->phy_node);
+ 		phy_reset_after_clk_enable(phy_dev);
+-		put_device(&phy_dev->mdio.dev);
++		if (phy_dev)
++			put_device(&phy_dev->mdio.dev);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f1c9e575703eaa..26dcdceae741e4 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4182,7 +4182,7 @@ static int i40e_vsi_request_irq_msix(struct i40e_vsi *vsi, char *basename)
+ 		irq_num = pf->msix_entries[base + vector].vector;
+ 		irq_set_affinity_notifier(irq_num, NULL);
+ 		irq_update_affinity_hint(irq_num, NULL);
+-		free_irq(irq_num, &vsi->q_vectors[vector]);
++		free_irq(irq_num, vsi->q_vectors[vector]);
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index ca6ccbc139548b..6412c84e2d17db 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -2081,11 +2081,8 @@ static void igb_diag_test(struct net_device *netdev,
+ 	} else {
+ 		dev_info(&adapter->pdev->dev, "online testing starting\n");
+ 
+-		/* PHY is powered down when interface is down */
+-		if (if_running && igb_link_test(adapter, &data[TEST_LINK]))
++		if (igb_link_test(adapter, &data[TEST_LINK]))
+ 			eth_test->flags |= ETH_TEST_FL_FAILED;
+-		else
+-			data[TEST_LINK] = 0;
+ 
+ 		/* Online tests aren't run; pass by default */
+ 		data[TEST_REG] = 0;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b76a154e635e00..d87438bef6fba5 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4451,8 +4451,7 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+ 	if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
+ 		xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ 	res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+-			       rx_ring->queue_index,
+-			       rx_ring->q_vector->napi.napi_id);
++			       rx_ring->queue_index, 0);
+ 	if (res < 0) {
+ 		dev_err(dev, "Failed to register xdp_rxq index %u\n",
+ 			rx_ring->queue_index);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index f436d7cf565a14..1a9cc8206430b2 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -691,7 +691,7 @@ static void icssg_prueth_hsr_fdb_add_del(struct prueth_emac *emac,
+ 
+ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ {
+-	struct net_device *real_dev;
++	struct net_device *real_dev, *port_dev;
+ 	struct prueth_emac *emac;
+ 	u8 vlan_id, i;
+ 
+@@ -700,11 +700,15 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ 
+ 	if (is_hsr_master(real_dev)) {
+ 		for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
+-			emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
+-			if (!emac)
++			port_dev = hsr_get_port_ndev(real_dev, i);
++			emac = netdev_priv(port_dev);
++			if (!emac) {
++				dev_put(port_dev);
+ 				return -EINVAL;
++			}
+ 			icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
+ 						     true);
++			dev_put(port_dev);
+ 		}
+ 	} else {
+ 		emac = netdev_priv(real_dev);
+@@ -716,7 +720,7 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ 
+ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
+ {
+-	struct net_device *real_dev;
++	struct net_device *real_dev, *port_dev;
+ 	struct prueth_emac *emac;
+ 	u8 vlan_id, i;
+ 
+@@ -725,11 +729,15 @@ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
+ 
+ 	if (is_hsr_master(real_dev)) {
+ 		for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
+-			emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
+-			if (!emac)
++			port_dev = hsr_get_port_ndev(real_dev, i);
++			emac = netdev_priv(port_dev);
++			if (!emac) {
++				dev_put(port_dev);
+ 				return -EINVAL;
++			}
+ 			icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
+ 						     false);
++			dev_put(port_dev);
+ 		}
+ 	} else {
+ 		emac = netdev_priv(real_dev);
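
Both multicast paths now balance every hsr_get_port_ndev() call, which
by the logic of this fix returns the port device with a reference held,
with a dev_put(), including on the early -EINVAL exit. The rule is
mechanical; a minimal sketch with an explicit refcount (all names
illustrative):

#include <stdio.h>

struct dev { int refcnt; };

static struct dev *get_port_dev(struct dev *d)
{
	d->refcnt++;		/* like hsr_get_port_ndev() taking a ref */
	return d;
}

static void dev_put(struct dev *d)
{
	d->refcnt--;
}

static int add_mcast(struct dev *port, int valid)
{
	struct dev *d = get_port_dev(port);

	if (!valid) {
		dev_put(d);	/* the fix: drop the ref on the error path too */
		return -1;
	}
	/* ... program the FDB ... */
	dev_put(d);
	return 0;
}

int main(void)
{
	struct dev port = { .refcnt = 1 };

	add_mcast(&port, 0);
	add_mcast(&port, 1);
	printf("refcnt back to %d\n", port.refcnt);	/* 1: balanced */
	return 0;
}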
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+index f0823aa1ede607..bb1dcdf5fd0d58 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+@@ -2071,10 +2071,6 @@ static void wx_setup_mrqc(struct wx *wx)
+ {
+ 	u32 rss_field = 0;
+ 
+-	/* VT, and RSS do not coexist at the same time */
+-	if (test_bit(WX_FLAG_VMDQ_ENABLED, wx->flags))
+-		return;
+-
+ 	/* Disable indicating checksum in descriptor, enables RSS hash */
+ 	wr32m(wx, WX_PSR_CTL, WX_PSR_CTL_PCSD, WX_PSR_CTL_PCSD);
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 01329fe7451a12..0eca96eeed58ab 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -4286,6 +4286,7 @@ static int macsec_newlink(struct net_device *dev,
+ 	if (err < 0)
+ 		goto del_dev;
+ 
++	netdev_update_features(dev);
+ 	netif_stacked_transfer_operstate(real_dev, dev);
+ 	linkwatch_fire_event(dev);
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 13df28445f0201..c02da57a4da5e3 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -1065,23 +1065,19 @@ EXPORT_SYMBOL_GPL(phy_inband_caps);
+  */
+ int phy_config_inband(struct phy_device *phydev, unsigned int modes)
+ {
+-	int err;
++	lockdep_assert_held(&phydev->lock);
+ 
+ 	if (!!(modes & LINK_INBAND_DISABLE) +
+ 	    !!(modes & LINK_INBAND_ENABLE) +
+ 	    !!(modes & LINK_INBAND_BYPASS) != 1)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&phydev->lock);
+ 	if (!phydev->drv)
+-		err = -EIO;
++		return -EIO;
+ 	else if (!phydev->drv->config_inband)
+-		err = -EOPNOTSUPP;
+-	else
+-		err = phydev->drv->config_inband(phydev, modes);
+-	mutex_unlock(&phydev->lock);
++		return -EOPNOTSUPP;
+ 
+-	return err;
++	return phydev->drv->config_inband(phydev, modes);
+ }
+ EXPORT_SYMBOL(phy_config_inband);
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 0faa3d97e06b94..229a503d601eed 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -67,6 +67,8 @@ struct phylink {
+ 	struct timer_list link_poll;
+ 
+ 	struct mutex state_mutex;
++	/* Serialize updates to pl->phydev with phylink_resolve() */
++	struct mutex phydev_mutex;
+ 	struct phylink_link_state phy_state;
+ 	unsigned int phy_ib_mode;
+ 	struct work_struct resolve;
+@@ -1409,6 +1411,7 @@ static void phylink_get_fixed_state(struct phylink *pl,
+ static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
+ {
+ 	struct phylink_link_state link_state;
++	struct phy_device *phy = pl->phydev;
+ 
+ 	switch (pl->req_link_an_mode) {
+ 	case MLO_AN_PHY:
+@@ -1432,7 +1435,11 @@ static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
+ 	link_state.link = false;
+ 
+ 	phylink_apply_manual_flow(pl, &link_state);
++	if (phy)
++		mutex_lock(&phy->lock);
+ 	phylink_major_config(pl, force_restart, &link_state);
++	if (phy)
++		mutex_unlock(&phy->lock);
+ }
+ 
+ static const char *phylink_pause_to_str(int pause)
+@@ -1568,8 +1575,13 @@ static void phylink_resolve(struct work_struct *w)
+ 	struct phylink_link_state link_state;
+ 	bool mac_config = false;
+ 	bool retrigger = false;
++	struct phy_device *phy;
+ 	bool cur_link_state;
+ 
++	mutex_lock(&pl->phydev_mutex);
++	phy = pl->phydev;
++	if (phy)
++		mutex_lock(&phy->lock);
+ 	mutex_lock(&pl->state_mutex);
+ 	cur_link_state = phylink_link_is_up(pl);
+ 
+@@ -1603,11 +1615,11 @@ static void phylink_resolve(struct work_struct *w)
+ 		/* If we have a phy, the "up" state is the union of both the
+ 		 * PHY and the MAC
+ 		 */
+-		if (pl->phydev)
++		if (phy)
+ 			link_state.link &= pl->phy_state.link;
+ 
+ 		/* Only update if the PHY link is up */
+-		if (pl->phydev && pl->phy_state.link) {
++		if (phy && pl->phy_state.link) {
+ 			/* If the interface has changed, force a link down
+ 			 * event if the link isn't already down, and re-resolve.
+ 			 */
+@@ -1671,6 +1683,9 @@ static void phylink_resolve(struct work_struct *w)
+ 		queue_work(system_power_efficient_wq, &pl->resolve);
+ 	}
+ 	mutex_unlock(&pl->state_mutex);
++	if (phy)
++		mutex_unlock(&phy->lock);
++	mutex_unlock(&pl->phydev_mutex);
+ }
+ 
+ static void phylink_run_resolve(struct phylink *pl)
+@@ -1806,6 +1821,7 @@ struct phylink *phylink_create(struct phylink_config *config,
+ 	if (!pl)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	mutex_init(&pl->phydev_mutex);
+ 	mutex_init(&pl->state_mutex);
+ 	INIT_WORK(&pl->resolve, phylink_resolve);
+ 
+@@ -2066,6 +2082,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
+ 		     dev_name(&phy->mdio.dev), phy->drv->name, irq_str);
+ 	kfree(irq_str);
+ 
++	mutex_lock(&pl->phydev_mutex);
+ 	mutex_lock(&phy->lock);
+ 	mutex_lock(&pl->state_mutex);
+ 	pl->phydev = phy;
+@@ -2111,6 +2128,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
+ 
+ 	mutex_unlock(&pl->state_mutex);
+ 	mutex_unlock(&phy->lock);
++	mutex_unlock(&pl->phydev_mutex);
+ 
+ 	phylink_dbg(pl,
+ 		    "phy: %s setting supported %*pb advertising %*pb\n",
+@@ -2289,6 +2307,7 @@ void phylink_disconnect_phy(struct phylink *pl)
+ 
+ 	ASSERT_RTNL();
+ 
++	mutex_lock(&pl->phydev_mutex);
+ 	phy = pl->phydev;
+ 	if (phy) {
+ 		mutex_lock(&phy->lock);
+@@ -2298,8 +2317,11 @@ void phylink_disconnect_phy(struct phylink *pl)
+ 		pl->mac_tx_clk_stop = false;
+ 		mutex_unlock(&pl->state_mutex);
+ 		mutex_unlock(&phy->lock);
+-		flush_work(&pl->resolve);
++	}
++	mutex_unlock(&pl->phydev_mutex);
+ 
++	if (phy) {
++		flush_work(&pl->resolve);
+ 		phy_disconnect(phy);
+ 	}
+ }
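
The new phydev_mutex pins down one acquisition order, phydev_mutex ->
phy->lock -> state_mutex, so that resolve, PHY bringup and disconnect
cannot deadlock against one another, and pl->phydev cannot change while
phylink_resolve() holds the outermost lock. A pthread sketch of the
ordering discipline; the lock names mirror the patch, the rest is
illustrative:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t phydev_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t phy_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t state_mutex  = PTHREAD_MUTEX_INITIALIZER;
static int phydev_attached;		/* stands in for pl->phydev */

/* Every path takes the locks outer to inner, never the reverse. */
static void resolve(void)
{
	pthread_mutex_lock(&phydev_mutex);
	if (phydev_attached)
		pthread_mutex_lock(&phy_lock);
	pthread_mutex_lock(&state_mutex);

	printf("resolve: phydev is stable while we run\n");

	pthread_mutex_unlock(&state_mutex);
	if (phydev_attached)
		pthread_mutex_unlock(&phy_lock);
	pthread_mutex_unlock(&phydev_mutex);
}

static void disconnect_phy(void)
{
	pthread_mutex_lock(&phydev_mutex);
	pthread_mutex_lock(&phy_lock);
	pthread_mutex_lock(&state_mutex);
	phydev_attached = 0;
	pthread_mutex_unlock(&state_mutex);
	pthread_mutex_unlock(&phy_lock);
	pthread_mutex_unlock(&phydev_mutex);
}

int main(void)
{
	phydev_attached = 1;
	resolve();
	disconnect_phy();
	resolve();
	return 0;
}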
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 4bd286296da794..cebdf62ce3db9b 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -116,6 +116,7 @@ static inline u64 ath12k_le32hilo_to_u64(__le32 hi, __le32 lo)
+ enum ath12k_skb_flags {
+ 	ATH12K_SKB_HW_80211_ENCAP = BIT(0),
+ 	ATH12K_SKB_CIPHER_SET = BIT(1),
++	ATH12K_SKB_MLO_STA = BIT(2),
+ };
+ 
+ struct ath12k_skb_cb {
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 91f4e3aff74c38..6a0915a0c7aae3 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -3610,7 +3610,6 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ 				   struct hal_rx_mon_ppdu_info *ppdu_info,
+ 				   u32 uid)
+ {
+-	struct ath12k_sta *ahsta;
+ 	struct ath12k_link_sta *arsta;
+ 	struct ath12k_rx_peer_stats *rx_stats = NULL;
+ 	struct hal_rx_user_status *user_stats = &ppdu_info->userstats[uid];
+@@ -3628,8 +3627,13 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ 		return;
+ 	}
+ 
+-	ahsta = ath12k_sta_to_ahsta(peer->sta);
+-	arsta = &ahsta->deflink;
++	arsta = ath12k_peer_get_link_sta(ar->ab, peer);
++	if (!arsta) {
++		ath12k_warn(ar->ab, "link sta not found on peer %pM id %d\n",
++			    peer->addr, peer->peer_id);
++		return;
++	}
++
+ 	arsta->rssi_comb = ppdu_info->rssi_comb;
+ 	ewma_avg_rssi_add(&arsta->avg_rssi, ppdu_info->rssi_comb);
+ 	rx_stats = arsta->rx_stats;
+@@ -3742,7 +3746,6 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 	struct dp_srng *mon_dst_ring;
+ 	struct hal_srng *srng;
+ 	struct dp_rxdma_mon_ring *buf_ring;
+-	struct ath12k_sta *ahsta = NULL;
+ 	struct ath12k_link_sta *arsta;
+ 	struct ath12k_peer *peer;
+ 	struct sk_buff_head skb_list;
+@@ -3868,8 +3871,15 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 		}
+ 
+ 		if (ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_SU) {
+-			ahsta = ath12k_sta_to_ahsta(peer->sta);
+-			arsta = &ahsta->deflink;
++			arsta = ath12k_peer_get_link_sta(ar->ab, peer);
++			if (!arsta) {
++				ath12k_warn(ar->ab, "link sta not found on peer %pM id %d\n",
++					    peer->addr, peer->peer_id);
++				spin_unlock_bh(&ab->base_lock);
++				rcu_read_unlock();
++				dev_kfree_skb_any(skb);
++				continue;
++			}
+ 			ath12k_dp_mon_rx_update_peer_su_stats(ar, arsta,
+ 							      ppdu_info);
+ 		} else if ((ppdu_info->fc_valid) &&
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index bd95dc88f9b21f..e9137ffeb5ab48 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -1418,8 +1418,6 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ath12k_peer *peer;
+-	struct ieee80211_sta *sta;
+-	struct ath12k_sta *ahsta;
+ 	struct ath12k_link_sta *arsta;
+ 	struct htt_ppdu_stats_user_rate *user_rate;
+ 	struct ath12k_per_peer_tx_stats *peer_stats = &ar->peer_tx_stats;
+@@ -1500,9 +1498,12 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ 		return;
+ 	}
+ 
+-	sta = peer->sta;
+-	ahsta = ath12k_sta_to_ahsta(sta);
+-	arsta = &ahsta->deflink;
++	arsta = ath12k_peer_get_link_sta(ab, peer);
++	if (!arsta) {
++		spin_unlock_bh(&ab->base_lock);
++		rcu_read_unlock();
++		return;
++	}
+ 
+ 	memset(&arsta->txrate, 0, sizeof(arsta->txrate));
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index ec77ad498b33a2..6791ae1d64e50f 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -14,6 +14,7 @@
+ #include "hw.h"
+ #include "mhi.h"
+ #include "dp_rx.h"
++#include "peer.h"
+ 
+ static const guid_t wcn7850_uuid = GUID_INIT(0xf634f534, 0x6147, 0x11ec,
+ 					     0x90, 0xd6, 0x02, 0x42,
+@@ -49,6 +50,12 @@ static bool ath12k_dp_srng_is_comp_ring_qcn9274(int ring_num)
+ 	return false;
+ }
+ 
++static bool ath12k_is_frame_link_agnostic_qcn9274(struct ath12k_link_vif *arvif,
++						  struct ieee80211_mgmt *mgmt)
++{
++	return ieee80211_is_action(mgmt->frame_control);
++}
++
+ static int ath12k_hw_mac_id_to_pdev_id_wcn7850(const struct ath12k_hw_params *hw,
+ 					       int mac_id)
+ {
+@@ -74,6 +81,52 @@ static bool ath12k_dp_srng_is_comp_ring_wcn7850(int ring_num)
+ 	return false;
+ }
+ 
++static bool ath12k_is_addba_resp_action_code(struct ieee80211_mgmt *mgmt)
++{
++	if (!ieee80211_is_action(mgmt->frame_control))
++		return false;
++
++	if (mgmt->u.action.category != WLAN_CATEGORY_BACK)
++		return false;
++
++	if (mgmt->u.action.u.addba_resp.action_code != WLAN_ACTION_ADDBA_RESP)
++		return false;
++
++	return true;
++}
++
++static bool ath12k_is_frame_link_agnostic_wcn7850(struct ath12k_link_vif *arvif,
++						  struct ieee80211_mgmt *mgmt)
++{
++	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(arvif->ahvif);
++	struct ath12k_hw *ah = ath12k_ar_to_ah(arvif->ar);
++	struct ath12k_base *ab = arvif->ar->ab;
++	__le16 fc = mgmt->frame_control;
++
++	spin_lock_bh(&ab->base_lock);
++	if (!ath12k_peer_find_by_addr(ab, mgmt->da) &&
++	    !ath12k_peer_ml_find(ah, mgmt->da)) {
++		spin_unlock_bh(&ab->base_lock);
++		return false;
++	}
++	spin_unlock_bh(&ab->base_lock);
++
++	if (vif->type == NL80211_IFTYPE_STATION)
++		return arvif->is_up &&
++		       (vif->valid_links == vif->active_links) &&
++		       !ieee80211_is_probe_req(fc) &&
++		       !ieee80211_is_auth(fc) &&
++		       !ieee80211_is_deauth(fc) &&
++		       !ath12k_is_addba_resp_action_code(mgmt);
++
++	if (vif->type == NL80211_IFTYPE_AP)
++		return !(ieee80211_is_probe_resp(fc) || ieee80211_is_auth(fc) ||
++			 ieee80211_is_assoc_resp(fc) || ieee80211_is_reassoc_resp(fc) ||
++			 ath12k_is_addba_resp_action_code(mgmt));
++
++	return false;
++}
++
+ static const struct ath12k_hw_ops qcn9274_ops = {
+ 	.get_hw_mac_from_pdev_id = ath12k_hw_qcn9274_mac_from_pdev_id,
+ 	.mac_id_to_pdev_id = ath12k_hw_mac_id_to_pdev_id_qcn9274,
+@@ -81,6 +134,7 @@ static const struct ath12k_hw_ops qcn9274_ops = {
+ 	.rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_qcn9274,
+ 	.get_ring_selector = ath12k_hw_get_ring_selector_qcn9274,
+ 	.dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_qcn9274,
++	.is_frame_link_agnostic = ath12k_is_frame_link_agnostic_qcn9274,
+ };
+ 
+ static const struct ath12k_hw_ops wcn7850_ops = {
+@@ -90,6 +144,7 @@ static const struct ath12k_hw_ops wcn7850_ops = {
+ 	.rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_wcn7850,
+ 	.get_ring_selector = ath12k_hw_get_ring_selector_wcn7850,
+ 	.dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_wcn7850,
++	.is_frame_link_agnostic = ath12k_is_frame_link_agnostic_wcn7850,
+ };
+ 
+ #define ATH12K_TX_RING_MASK_0 0x1
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 0a75bc5abfa241..9c69dd5a22afa4 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -246,6 +246,8 @@ struct ath12k_hw_ops {
+ 	int (*rxdma_ring_sel_config)(struct ath12k_base *ab);
+ 	u8 (*get_ring_selector)(struct sk_buff *skb);
+ 	bool (*dp_srng_is_tx_comp_ring)(int ring_num);
++	bool (*is_frame_link_agnostic)(struct ath12k_link_vif *arvif,
++				       struct ieee80211_mgmt *mgmt);
+ };
+ 
+ static inline
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index a885dd168a372a..708dc3dd4347ad 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -3650,12 +3650,68 @@ static int ath12k_mac_fils_discovery(struct ath12k_link_vif *arvif,
+ 	return ret;
+ }
+ 
++static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
++{
++	struct ath12k *ar = arvif->ar;
++	struct ieee80211_vif *vif = arvif->ahvif->vif;
++	struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
++	enum wmi_sta_powersave_param param;
++	struct ieee80211_bss_conf *info;
++	enum wmi_sta_ps_mode psmode;
++	int ret;
++	int timeout;
++	bool enable_ps;
++
++	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
++
++	if (vif->type != NL80211_IFTYPE_STATION)
++		return;
++
++	enable_ps = arvif->ahvif->ps;
++	if (enable_ps) {
++		psmode = WMI_STA_PS_MODE_ENABLED;
++		param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
++
++		timeout = conf->dynamic_ps_timeout;
++		if (timeout == 0) {
++			info = ath12k_mac_get_link_bss_conf(arvif);
++			if (!info) {
++				ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
++					    vif->addr, arvif->link_id);
++				return;
++			}
++
++			/* firmware doesn't like 0 */
++			timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
++		}
++
++		ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
++						  timeout);
++		if (ret) {
++			ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
++				    arvif->vdev_id, ret);
++			return;
++		}
++	} else {
++		psmode = WMI_STA_PS_MODE_DISABLED;
++	}
++
++	ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
++		   arvif->vdev_id, psmode ? "enable" : "disable");
++
++	ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
++	if (ret)
++		ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
++			    psmode, arvif->vdev_id, ret);
++}
++
+ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ 					  struct ieee80211_vif *vif,
+ 					  u64 changed)
+ {
+ 	struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ 	unsigned long links = ahvif->links_map;
++	struct ieee80211_vif_cfg *vif_cfg;
+ 	struct ieee80211_bss_conf *info;
+ 	struct ath12k_link_vif *arvif;
+ 	struct ieee80211_sta *sta;
+@@ -3719,61 +3775,24 @@ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ 			}
+ 		}
+ 	}
+-}
+ 
+-static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
+-{
+-	struct ath12k *ar = arvif->ar;
+-	struct ieee80211_vif *vif = arvif->ahvif->vif;
+-	struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
+-	enum wmi_sta_powersave_param param;
+-	struct ieee80211_bss_conf *info;
+-	enum wmi_sta_ps_mode psmode;
+-	int ret;
+-	int timeout;
+-	bool enable_ps;
++	if (changed & BSS_CHANGED_PS) {
++		links = ahvif->links_map;
++		vif_cfg = &vif->cfg;
+ 
+-	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+-
+-	if (vif->type != NL80211_IFTYPE_STATION)
+-		return;
++		for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
++			arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
++			if (!arvif || !arvif->ar)
++				continue;
+ 
+-	enable_ps = arvif->ahvif->ps;
+-	if (enable_ps) {
+-		psmode = WMI_STA_PS_MODE_ENABLED;
+-		param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
++			ar = arvif->ar;
+ 
+-		timeout = conf->dynamic_ps_timeout;
+-		if (timeout == 0) {
+-			info = ath12k_mac_get_link_bss_conf(arvif);
+-			if (!info) {
+-				ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
+-					    vif->addr, arvif->link_id);
+-				return;
++			if (ar->ab->hw_params->supports_sta_ps) {
++				ahvif->ps = vif_cfg->ps;
++				ath12k_mac_vif_setup_ps(arvif);
+ 			}
+-
+-			/* firmware doesn't like 0 */
+-			timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
+-		}
+-
+-		ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
+-						  timeout);
+-		if (ret) {
+-			ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
+-				    arvif->vdev_id, ret);
+-			return;
+ 		}
+-	} else {
+-		psmode = WMI_STA_PS_MODE_DISABLED;
+ 	}
+-
+-	ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
+-		   arvif->vdev_id, psmode ? "enable" : "disable");
+-
+-	ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
+-	if (ret)
+-		ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
+-			    psmode, arvif->vdev_id, ret);
+ }
+ 
+ static bool ath12k_mac_supports_station_tpc(struct ath12k *ar,
+@@ -3795,7 +3814,6 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ {
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+ 	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+-	struct ieee80211_vif_cfg *vif_cfg = &vif->cfg;
+ 	struct cfg80211_chan_def def;
+ 	u32 param_id, param_value;
+ 	enum nl80211_band band;
+@@ -4069,12 +4087,6 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ 	}
+ 
+ 	ath12k_mac_fils_discovery(arvif, info);
+-
+-	if (changed & BSS_CHANGED_PS &&
+-	    ar->ab->hw_params->supports_sta_ps) {
+-		ahvif->ps = vif_cfg->ps;
+-		ath12k_mac_vif_setup_ps(arvif);
+-	}
+ }
+ 
+ static struct ath12k_vif_cache *ath12k_ahvif_get_link_cache(struct ath12k_vif *ahvif,
+@@ -7673,7 +7685,7 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+ 
+ 	skb_cb->paddr = paddr;
+ 
+-	ret = ath12k_wmi_mgmt_send(ar, arvif->vdev_id, buf_id, skb);
++	ret = ath12k_wmi_mgmt_send(arvif, buf_id, skb);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to send mgmt frame: %d\n", ret);
+ 		goto err_unmap_buf;
+@@ -7985,6 +7997,9 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ 
+ 		skb_cb->flags |= ATH12K_SKB_HW_80211_ENCAP;
+ 	} else if (ieee80211_is_mgmt(hdr->frame_control)) {
++		if (sta && sta->mlo)
++			skb_cb->flags |= ATH12K_SKB_MLO_STA;
++
+ 		ret = ath12k_mac_mgmt_tx(ar, skb, is_prb_rsp);
+ 		if (ret) {
+ 			ath12k_warn(ar->ab, "failed to queue management frame %d\n",
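The mac.c hunks above move power-save handling out of the per-link bss_info_changed path and into vif_cfg_changed, where it can walk every active link of an MLO interface. The iteration pattern, as a minimal sketch using the same names as the hunk:

    unsigned long links = ahvif->links_map;
    unsigned long link_id;

    for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
            /* wiphy_dereference() is valid here because mac80211 holds
             * the wiphy lock across vif_cfg_changed(). */
            arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
            if (!arvif || !arvif->ar)
                    continue;

            /* per-link work, e.g. ath12k_mac_vif_setup_ps(arvif) */
    }
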
+diff --git a/drivers/net/wireless/ath/ath12k/peer.c b/drivers/net/wireless/ath/ath12k/peer.c
+index ec7236bbccc0fe..eb7aeff0149038 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.c
++++ b/drivers/net/wireless/ath/ath12k/peer.c
+@@ -8,7 +8,7 @@
+ #include "peer.h"
+ #include "debug.h"
+ 
+-static struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah, const u8 *addr)
++struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah, const u8 *addr)
+ {
+ 	struct ath12k_ml_peer *ml_peer;
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
+index f3a5e054d2b556..44afc0b7dd53ea 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.h
++++ b/drivers/net/wireless/ath/ath12k/peer.h
+@@ -91,5 +91,33 @@ struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab, int ast_hash
+ int ath12k_peer_ml_create(struct ath12k_hw *ah, struct ieee80211_sta *sta);
+ int ath12k_peer_ml_delete(struct ath12k_hw *ah, struct ieee80211_sta *sta);
+ int ath12k_peer_mlo_link_peers_delete(struct ath12k_vif *ahvif, struct ath12k_sta *ahsta);
++struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah,
++					   const u8 *addr);
++static inline
++struct ath12k_link_sta *ath12k_peer_get_link_sta(struct ath12k_base *ab,
++						 struct ath12k_peer *peer)
++{
++	struct ath12k_sta *ahsta;
++	struct ath12k_link_sta *arsta;
++
++	if (!peer->sta)
++		return NULL;
++
++	ahsta = ath12k_sta_to_ahsta(peer->sta);
++	if (peer->ml_id & ATH12K_PEER_ML_ID_VALID) {
++		if (!(ahsta->links_map & BIT(peer->link_id))) {
++			ath12k_warn(ab, "peer %pM id %d link_id %d can't be found in STA links_map 0x%x\n",
++				    peer->addr, peer->peer_id, peer->link_id,
++				    ahsta->links_map);
++			return NULL;
++		}
++		arsta = rcu_dereference(ahsta->link[peer->link_id]);
++		if (!arsta)
++			return NULL;
++	} else {
++		arsta = &ahsta->deflink;
++	}
++	return arsta;
++}
+ 
+ #endif /* _PEER_H_ */
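The new ath12k_peer_get_link_sta() helper above uses rcu_dereference() on the per-link station pointer, so callers are expected to be inside an RCU read-side critical section. A minimal usage sketch under that assumption:

    struct ath12k_link_sta *arsta;

    rcu_read_lock();
    arsta = ath12k_peer_get_link_sta(ab, peer);
    if (arsta) {
            /* arsta is only guaranteed valid inside this section */
    }
    rcu_read_unlock();
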
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index eac5d48cade663..d740326079e1d7 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -782,20 +782,46 @@ struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_ab, u32 len)
+ 	return skb;
+ }
+ 
+-int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
++int ath12k_wmi_mgmt_send(struct ath12k_link_vif *arvif, u32 buf_id,
+ 			 struct sk_buff *frame)
+ {
++	struct ath12k *ar = arvif->ar;
+ 	struct ath12k_wmi_pdev *wmi = ar->wmi;
+ 	struct wmi_mgmt_send_cmd *cmd;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(frame);
+-	struct wmi_tlv *frame_tlv;
++	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)frame->data;
++	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(arvif->ahvif);
++	int cmd_len = sizeof(struct ath12k_wmi_mgmt_send_tx_params);
++	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)hdr;
++	struct ath12k_wmi_mlo_mgmt_send_params *ml_params;
++	struct ath12k_base *ab = ar->ab;
++	struct wmi_tlv *frame_tlv, *tlv;
++	struct ath12k_skb_cb *skb_cb;
++	u32 buf_len, buf_len_aligned;
++	u32 vdev_id = arvif->vdev_id;
++	bool link_agnostic = false;
+ 	struct sk_buff *skb;
+-	u32 buf_len;
+ 	int ret, len;
++	void *ptr;
+ 
+ 	buf_len = min_t(int, frame->len, WMI_MGMT_SEND_DOWNLD_LEN);
+ 
+-	len = sizeof(*cmd) + sizeof(*frame_tlv) + roundup(buf_len, 4);
++	buf_len_aligned = roundup(buf_len, sizeof(u32));
++
++	len = sizeof(*cmd) + sizeof(*frame_tlv) + buf_len_aligned;
++
++	if (ieee80211_vif_is_mld(vif)) {
++		skb_cb = ATH12K_SKB_CB(frame);
++		if ((skb_cb->flags & ATH12K_SKB_MLO_STA) &&
++		    ab->hw_params->hw_ops->is_frame_link_agnostic &&
++		    ab->hw_params->hw_ops->is_frame_link_agnostic(arvif, mgmt)) {
++			len += cmd_len + TLV_HDR_SIZE + sizeof(*ml_params);
++			ath12k_generic_dbg(ATH12K_DBG_MGMT,
++					   "Sending Mgmt Frame fc 0x%0x as link agnostic",
++					   mgmt->frame_control);
++			link_agnostic = true;
++		}
++	}
+ 
+ 	skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ 	if (!skb)
+@@ -814,10 +840,32 @@ int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
+ 	cmd->tx_params_valid = 0;
+ 
+ 	frame_tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+-	frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len);
++	frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len_aligned);
+ 
+ 	memcpy(frame_tlv->value, frame->data, buf_len);
+ 
++	if (!link_agnostic)
++		goto send;
++
++	ptr = skb->data + sizeof(*cmd) + sizeof(*frame_tlv) + buf_len_aligned;
++
++	tlv = ptr;
++
++	/* Tx params not used currently */
++	tlv->header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_TX_SEND_PARAMS, cmd_len);
++	ptr += cmd_len;
++
++	tlv = ptr;
++	tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, sizeof(*ml_params));
++	ptr += TLV_HDR_SIZE;
++
++	ml_params = ptr;
++	ml_params->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MLO_TX_SEND_PARAMS,
++						       sizeof(*ml_params));
++
++	ml_params->hw_link_id = cpu_to_le32(WMI_MGMT_LINK_AGNOSTIC_ID);
++
++send:
+ 	ret = ath12k_wmi_cmd_send(wmi, skb, WMI_MGMT_TX_SEND_CMDID);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab,
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 8627154f1680fa..6dbcedcf081759 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3963,6 +3963,7 @@ struct wmi_scan_chan_list_cmd {
+ } __packed;
+ 
+ #define WMI_MGMT_SEND_DOWNLD_LEN	64
++#define WMI_MGMT_LINK_AGNOSTIC_ID	0xFFFFFFFF
+ 
+ #define WMI_TX_PARAMS_DWORD0_POWER		GENMASK(7, 0)
+ #define WMI_TX_PARAMS_DWORD0_MCS_MASK		GENMASK(19, 8)
+@@ -3988,7 +3989,18 @@ struct wmi_mgmt_send_cmd {
+ 
+ 	/* This TLV is followed by struct wmi_mgmt_frame */
+ 
+-	/* Followed by struct wmi_mgmt_send_params */
++	/* Followed by struct ath12k_wmi_mlo_mgmt_send_params */
++} __packed;
++
++struct ath12k_wmi_mlo_mgmt_send_params {
++	__le32 tlv_header;
++	__le32 hw_link_id;
++} __packed;
++
++struct ath12k_wmi_mgmt_send_tx_params {
++	__le32 tlv_header;
++	__le32 tx_param_dword0;
++	__le32 tx_param_dword1;
+ } __packed;
+ 
+ struct wmi_sta_powersave_mode_cmd {
+@@ -6183,7 +6195,7 @@ void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
+ int ath12k_wmi_cmd_send(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
+ 			u32 cmd_id);
+ struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_sc, u32 len);
+-int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
++int ath12k_wmi_mgmt_send(struct ath12k_link_vif *arvif, u32 buf_id,
+ 			 struct sk_buff *frame);
+ int ath12k_wmi_p2p_go_bcn_ie(struct ath12k *ar, u32 vdev_id,
+ 			     const u8 *p2p_ie);
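For the link-agnostic case, ath12k_wmi_mgmt_send() above appends two extra TLVs after the frame payload. The resulting buffer layout, reconstructed from the hunks (offsets omitted):

    /*
     * WMI_MGMT_TX_SEND_CMDID buffer, link-agnostic path:
     *
     *   struct wmi_mgmt_send_cmd                 fixed command header
     *   struct wmi_tlv + frame bytes             WMI_TAG_ARRAY_BYTE,
     *                                            padded to buf_len_aligned
     *   struct ath12k_wmi_mgmt_send_tx_params    WMI_TAG_TX_SEND_PARAMS,
     *                                            currently unused (zeroed)
     *   tlv hdr + ath12k_wmi_mlo_mgmt_send_params
     *                                            WMI_TAG_MLO_TX_SEND_PARAMS,
     *                                            hw_link_id =
     *                                            WMI_MGMT_LINK_AGNOSTIC_ID
     */
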
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 4e47ccb43bd86c..edd99d71016cb1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -124,13 +124,13 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_mac_cfg)},/* low 5GHz active */
+ 	{IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_mac_cfg)},/* high 5GHz active */
+ 
+-/* 6x30 Series */
+-	{IWL_PCI_DEVICE(0x008A, 0x5305, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008A, 0x5307, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008A, 0x5325, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008A, 0x5327, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008B, 0x5315, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008B, 0x5317, iwl1000_mac_cfg)},
++/* 1030/6x30 Series */
++	{IWL_PCI_DEVICE(0x008A, 0x5305, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008A, 0x5307, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008A, 0x5325, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008A, 0x5327, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008B, 0x5315, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008B, 0x5317, iwl6030_mac_cfg)},
+ 	{IWL_PCI_DEVICE(0x0090, 0x5211, iwl6030_mac_cfg)},
+ 	{IWL_PCI_DEVICE(0x0090, 0x5215, iwl6030_mac_cfg)},
+ 	{IWL_PCI_DEVICE(0x0090, 0x5216, iwl6030_mac_cfg)},
+@@ -181,12 +181,12 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x08AE, 0x1027, iwl1000_mac_cfg)},
+ 
+ /* 130 Series WiFi */
+-	{IWL_PCI_DEVICE(0x0896, 0x5005, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0896, 0x5007, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0897, 0x5015, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0897, 0x5017, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0896, 0x5025, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0896, 0x5027, iwl1000_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5005, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5007, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0897, 0x5015, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0897, 0x5017, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5025, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5027, iwl6030_mac_cfg)},
+ 
+ /* 2x00 Series */
+ 	{IWL_PCI_DEVICE(0x0890, 0x4022, iwl2000_mac_cfg)},
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index a4a2bac4f4b279..2f8d0223c1a6de 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1168,12 +1168,6 @@ static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev,
+ 	return devm_ioremap_resource(&pdev->dev, &port->regs);
+ }
+ 
+-#define DT_FLAGS_TO_TYPE(flags)       (((flags) >> 24) & 0x03)
+-#define    DT_TYPE_IO                 0x1
+-#define    DT_TYPE_MEM32              0x2
+-#define DT_CPUADDR_TO_TARGET(cpuaddr) (((cpuaddr) >> 56) & 0xFF)
+-#define DT_CPUADDR_TO_ATTR(cpuaddr)   (((cpuaddr) >> 48) & 0xFF)
+-
+ static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
+ 			      unsigned long type,
+ 			      unsigned int *tgt,
+@@ -1189,19 +1183,12 @@ static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
+ 		return -EINVAL;
+ 
+ 	for_each_of_range(&parser, &range) {
+-		unsigned long rtype;
+ 		u32 slot = upper_32_bits(range.bus_addr);
+ 
+-		if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_IO)
+-			rtype = IORESOURCE_IO;
+-		else if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_MEM32)
+-			rtype = IORESOURCE_MEM;
+-		else
+-			continue;
+-
+-		if (slot == PCI_SLOT(devfn) && type == rtype) {
+-			*tgt = DT_CPUADDR_TO_TARGET(range.cpu_addr);
+-			*attr = DT_CPUADDR_TO_ATTR(range.cpu_addr);
++		if (slot == PCI_SLOT(devfn) &&
++		    type == (range.flags & IORESOURCE_TYPE_BITS)) {
++			*tgt = (range.parent_bus_addr >> 56) & 0xFF;
++			*attr = (range.parent_bus_addr >> 48) & 0xFF;
+ 			return 0;
+ 		}
+ 	}
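The eusb2-repeater hunk swaps two crossed devicetree lookups rather than changing any register values. After the fix, the property-to-register mapping reads:

    /*
     * qcom,tune-usb2-preem      -> EUSB2_TUNE_USB2_PREEM
     * qcom,tune-usb2-disc-thres -> EUSB2_TUNE_HSDISC     (unchanged)
     * qcom,tune-usb2-amplitude  -> EUSB2_TUNE_IUSB2
     */
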
+diff --git a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+index d7493c2294ef23..3709fba42ebd85 100644
+--- a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
++++ b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+@@ -127,13 +127,13 @@ static int eusb2_repeater_init(struct phy *phy)
+ 			     rptr->cfg->init_tbl[i].value);
+ 
+ 	/* Override registers from devicetree values */
+-	if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
++	if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
+ 		regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, val);
+ 
+ 	if (!of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &val))
+ 		regmap_write(regmap, base + EUSB2_TUNE_HSDISC, val);
+ 
+-	if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
++	if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
+ 		regmap_write(regmap, base + EUSB2_TUNE_IUSB2, val);
+ 
+ 	/* Wait for status OK */
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+index 461b9e0af610a1..498f23c43aa139 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+@@ -3064,6 +3064,14 @@ struct qmp_pcie {
+ 	struct clk_fixed_rate aux_clk_fixed;
+ };
+ 
++static bool qphy_checkbits(const void __iomem *base, u32 offset, u32 val)
++{
++	u32 reg;
++
++	reg = readl(base + offset);
++	return (reg & val) == val;
++}
++
+ static inline void qphy_setbits(void __iomem *base, u32 offset, u32 val)
+ {
+ 	u32 reg;
+@@ -4332,16 +4340,21 @@ static int qmp_pcie_init(struct phy *phy)
+ 	struct qmp_pcie *qmp = phy_get_drvdata(phy);
+ 	const struct qmp_phy_cfg *cfg = qmp->cfg;
+ 	void __iomem *pcs = qmp->pcs;
+-	bool phy_initialized = !!(readl(pcs + cfg->regs[QPHY_START_CTRL]));
+ 	int ret;
+ 
+-	qmp->skip_init = qmp->nocsr_reset && phy_initialized;
+ 	/*
+-	 * We need to check the existence of init sequences in two cases:
+-	 * 1. The PHY doesn't support no_csr reset.
+-	 * 2. The PHY supports no_csr reset but isn't initialized by bootloader.
+-	 * As we can't skip init in these two cases.
++	 * We can skip PHY initialization if all of the following conditions
++	 * are met:
++	 *  1. The PHY supports the nocsr_reset that preserves the PHY config.
++	 *  2. The PHY was started (and not powered down again) by the
++	 *     bootloader, with all of the expected bits set correctly.
++	 * In this case, we can continue without having the init sequence
++	 * defined in the driver.
+ 	 */
++	qmp->skip_init = qmp->nocsr_reset &&
++		qphy_checkbits(pcs, cfg->regs[QPHY_START_CTRL], SERDES_START | PCS_START) &&
++		qphy_checkbits(pcs, cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL], cfg->pwrdn_ctrl);
++
+ 	if (!qmp->skip_init && !cfg->tbls.serdes_num) {
+ 		dev_err(qmp->dev, "Init sequence not available\n");
+ 		return -ENODATA;
+diff --git a/drivers/phy/tegra/xusb-tegra210.c b/drivers/phy/tegra/xusb-tegra210.c
+index ebc8a7e21a3181..3409924498e9cf 100644
+--- a/drivers/phy/tegra/xusb-tegra210.c
++++ b/drivers/phy/tegra/xusb-tegra210.c
+@@ -3164,18 +3164,22 @@ tegra210_xusb_padctl_probe(struct device *dev,
+ 	}
+ 
+ 	pdev = of_find_device_by_node(np);
++	of_node_put(np);
+ 	if (!pdev) {
+ 		dev_warn(dev, "PMC device is not available\n");
+ 		goto out;
+ 	}
+ 
+-	if (!platform_get_drvdata(pdev))
++	if (!platform_get_drvdata(pdev)) {
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-EPROBE_DEFER);
++	}
+ 
+ 	padctl->regmap = dev_get_regmap(&pdev->dev, "usb_sleepwalk");
+ 	if (!padctl->regmap)
+ 		dev_info(dev, "failed to find PMC regmap\n");
+ 
++	put_device(&pdev->dev);
+ out:
+ 	return &padctl->base;
+ }
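The tegra210 hunk pairs every lookup with its put: of_node_put() releases the node reference taken earlier in the function, and of_find_device_by_node() returns pdev holding a reference on pdev->dev that all exit paths now drop. The resulting shape, as a sketch:

    pdev = of_find_device_by_node(np);   /* takes a ref on pdev->dev */
    of_node_put(np);                     /* node no longer needed */
    if (!pdev)
            goto out;                    /* no device ref to drop */

    if (!platform_get_drvdata(pdev)) {
            put_device(&pdev->dev);      /* early return drops it */
            return ERR_PTR(-EPROBE_DEFER);
    }

    /* ... dev_get_regmap(&pdev->dev, ...) ... */

    put_device(&pdev->dev);              /* normal path drops it too */
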
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index c1a0ef979142ce..c444bb2530ca29 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -363,6 +363,13 @@ static void omap_usb2_init_errata(struct omap_usb *phy)
+ 		phy->flags |= OMAP_USB2_DISABLE_CHRG_DET;
+ }
+ 
++static void omap_usb2_put_device(void *_dev)
++{
++	struct device *dev = _dev;
++
++	put_device(dev);
++}
++
+ static int omap_usb2_probe(struct platform_device *pdev)
+ {
+ 	struct omap_usb	*phy;
+@@ -373,6 +380,7 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ 	struct device_node *control_node;
+ 	struct platform_device *control_pdev;
+ 	const struct usb_phy_data *phy_data;
++	int ret;
+ 
+ 	phy_data = device_get_match_data(&pdev->dev);
+ 	if (!phy_data)
+@@ -423,6 +431,11 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ 			return -EINVAL;
+ 		}
+ 		phy->control_dev = &control_pdev->dev;
++
++		ret = devm_add_action_or_reset(&pdev->dev, omap_usb2_put_device,
++					       phy->control_dev);
++		if (ret)
++			return ret;
+ 	} else {
+ 		if (of_property_read_u32_index(node,
+ 					       "syscon-phy-power", 1,
+diff --git a/drivers/phy/ti/phy-ti-pipe3.c b/drivers/phy/ti/phy-ti-pipe3.c
+index da2cbacb982c6b..ae764d6524c99a 100644
+--- a/drivers/phy/ti/phy-ti-pipe3.c
++++ b/drivers/phy/ti/phy-ti-pipe3.c
+@@ -667,12 +667,20 @@ static int ti_pipe3_get_clk(struct ti_pipe3 *phy)
+ 	return 0;
+ }
+ 
++static void ti_pipe3_put_device(void *_dev)
++{
++	struct device *dev = _dev;
++
++	put_device(dev);
++}
++
+ static int ti_pipe3_get_sysctrl(struct ti_pipe3 *phy)
+ {
+ 	struct device *dev = phy->dev;
+ 	struct device_node *node = dev->of_node;
+ 	struct device_node *control_node;
+ 	struct platform_device *control_pdev;
++	int ret;
+ 
+ 	phy->phy_power_syscon = syscon_regmap_lookup_by_phandle(node,
+ 							"syscon-phy-power");
+@@ -704,6 +712,11 @@ static int ti_pipe3_get_sysctrl(struct ti_pipe3 *phy)
+ 		}
+ 
+ 		phy->control_dev = &control_pdev->dev;
++
++		ret = devm_add_action_or_reset(dev, ti_pipe3_put_device,
++					       phy->control_dev);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	if (phy->mode == PIPE3_MODE_PCIE) {
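omap-usb2 and ti-pipe3 above fix the same leak the same way: the reference that of_find_device_by_node() took on the control device is handed to a devm action, so it is released automatically on probe failure or driver unbind. The pattern, extracted as a sketch:

    static void phy_put_device(void *_dev)
    {
            put_device(_dev);
    }

    /* in probe, immediately after storing the referenced device: */
    phy->control_dev = &control_pdev->dev;
    ret = devm_add_action_or_reset(dev, phy_put_device, phy->control_dev);
    if (ret)
            return ret;   /* on failure the action has already run */
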
+diff --git a/drivers/regulator/sy7636a-regulator.c b/drivers/regulator/sy7636a-regulator.c
+index d1e7ba1fb3e1af..27e3d939b7bb9e 100644
+--- a/drivers/regulator/sy7636a-regulator.c
++++ b/drivers/regulator/sy7636a-regulator.c
+@@ -83,9 +83,11 @@ static int sy7636a_regulator_probe(struct platform_device *pdev)
+ 	if (!regmap)
+ 		return -EPROBE_DEFER;
+ 
+-	gdp = devm_gpiod_get(pdev->dev.parent, "epd-pwr-good", GPIOD_IN);
++	device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent);
++
++	gdp = devm_gpiod_get(&pdev->dev, "epd-pwr-good", GPIOD_IN);
+ 	if (IS_ERR(gdp)) {
+-		dev_err(pdev->dev.parent, "Power good GPIO fault %ld\n", PTR_ERR(gdp));
++		dev_err(&pdev->dev, "Power good GPIO fault %ld\n", PTR_ERR(gdp));
+ 		return PTR_ERR(gdp);
+ 	}
+ 
+@@ -105,7 +107,6 @@ static int sy7636a_regulator_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	config.dev = &pdev->dev;
+-	config.dev->of_node = pdev->dev.parent->of_node;
+ 	config.regmap = regmap;
+ 
+ 	rdev = devm_regulator_register(&pdev->dev, &desc, &config);
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index cd1f657f782df2..13c663a154c4e8 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -543,10 +543,10 @@ static ssize_t hvc_write(struct tty_struct *tty, const u8 *buf, size_t count)
+ 	}
+ 
+ 	/*
+-	 * Racy, but harmless, kick thread if there is still pending data.
++	 * Kick thread to flush if there's still pending data
++	 * or to wake up the write queue.
+ 	 */
+-	if (hp->n_outbuf)
+-		hvc_kick();
++	hvc_kick();
+ 
+ 	return written;
+ }
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 5ea8aadb6e69c1..9056cb82456f14 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1177,17 +1177,6 @@ static int sc16is7xx_startup(struct uart_port *port)
+ 	sc16is7xx_port_write(port, SC16IS7XX_FCR_REG,
+ 			     SC16IS7XX_FCR_FIFO_BIT);
+ 
+-	/* Enable EFR */
+-	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+-			     SC16IS7XX_LCR_CONF_MODE_B);
+-
+-	regcache_cache_bypass(one->regmap, true);
+-
+-	/* Enable write access to enhanced features and internal clock div */
+-	sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,
+-			      SC16IS7XX_EFR_ENABLE_BIT,
+-			      SC16IS7XX_EFR_ENABLE_BIT);
+-
+ 	/* Enable TCR/TLR */
+ 	sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,
+ 			      SC16IS7XX_MCR_TCRTLR_BIT,
+@@ -1199,7 +1188,8 @@ static int sc16is7xx_startup(struct uart_port *port)
+ 			     SC16IS7XX_TCR_RX_RESUME(24) |
+ 			     SC16IS7XX_TCR_RX_HALT(48));
+ 
+-	regcache_cache_bypass(one->regmap, false);
++	/* Disable TCR/TLR access */
++	sc16is7xx_port_update(port, SC16IS7XX_MCR_REG, SC16IS7XX_MCR_TCRTLR_BIT, 0);
+ 
+ 	/* Now, initialize the UART */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, SC16IS7XX_LCR_WORD_LEN_8);
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 0a800ba53816a8..de16b02d857e07 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -1599,6 +1599,7 @@ static int f_midi2_create_card(struct f_midi2 *midi2)
+ 			strscpy(fb->info.name, ump_fb_name(b),
+ 				sizeof(fb->info.name));
+ 		}
++		snd_ump_update_group_attrs(ump);
+ 	}
+ 
+ 	for (i = 0; i < midi2->num_eps; i++) {
+@@ -1736,9 +1737,12 @@ static int f_midi2_create_usb_configs(struct f_midi2 *midi2,
+ 	case USB_SPEED_HIGH:
+ 		midi2_midi1_ep_out_desc.wMaxPacketSize = cpu_to_le16(512);
+ 		midi2_midi1_ep_in_desc.wMaxPacketSize = cpu_to_le16(512);
+-		for (i = 0; i < midi2->num_eps; i++)
++		for (i = 0; i < midi2->num_eps; i++) {
+ 			midi2_midi2_ep_out_desc[i].wMaxPacketSize =
+ 				cpu_to_le16(512);
++			midi2_midi2_ep_in_desc[i].wMaxPacketSize =
++				cpu_to_le16(512);
++		}
+ 		fallthrough;
+ 	case USB_SPEED_FULL:
+ 		midi1_in_eps = midi2_midi1_ep_in_descs;
+@@ -1747,9 +1751,12 @@ static int f_midi2_create_usb_configs(struct f_midi2 *midi2,
+ 	case USB_SPEED_SUPER:
+ 		midi2_midi1_ep_out_desc.wMaxPacketSize = cpu_to_le16(1024);
+ 		midi2_midi1_ep_in_desc.wMaxPacketSize = cpu_to_le16(1024);
+-		for (i = 0; i < midi2->num_eps; i++)
++		for (i = 0; i < midi2->num_eps; i++) {
+ 			midi2_midi2_ep_out_desc[i].wMaxPacketSize =
+ 				cpu_to_le16(1024);
++			midi2_midi2_ep_in_desc[i].wMaxPacketSize =
++				cpu_to_le16(1024);
++		}
+ 		midi1_in_eps = midi2_midi1_ep_in_ss_descs;
+ 		midi1_out_eps = midi2_midi1_ep_out_ss_descs;
+ 		break;
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 27c9699365ab95..18cd4b925e5e63 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -765,8 +765,7 @@ static int dummy_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ 	if (!dum->driver)
+ 		return -ESHUTDOWN;
+ 
+-	local_irq_save(flags);
+-	spin_lock(&dum->lock);
++	spin_lock_irqsave(&dum->lock, flags);
+ 	list_for_each_entry(iter, &ep->queue, queue) {
+ 		if (&iter->req != _req)
+ 			continue;
+@@ -776,15 +775,16 @@ static int dummy_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ 		retval = 0;
+ 		break;
+ 	}
+-	spin_unlock(&dum->lock);
+ 
+ 	if (retval == 0) {
+ 		dev_dbg(udc_dev(dum),
+ 				"dequeued req %p from %s, len %d buf %p\n",
+ 				req, _ep->name, _req->length, _req->buf);
++		spin_unlock(&dum->lock);
+ 		usb_gadget_giveback_request(_ep, _req);
++		spin_lock(&dum->lock);
+ 	}
+-	local_irq_restore(flags);
++	spin_unlock_irqrestore(&dum->lock, flags);
+ 	return retval;
+ }
+ 
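The dummy_hcd hunk is the usual unlock-around-callback dance: usb_gadget_giveback_request() invokes the request's completion handler, which may call back into the UDC and try to take the same lock, so it must run with dum->lock dropped. In outline:

    spin_lock_irqsave(&dum->lock, flags);
    /* find and unlink the request under the lock */
    if (retval == 0) {
            spin_unlock(&dum->lock);     /* completion may re-enter */
            usb_gadget_giveback_request(_ep, _req);
            spin_lock(&dum->lock);
    }
    spin_unlock_irqrestore(&dum->lock, flags);
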
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 06a2edb9e86ef7..63edf2d8f24501 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -101,13 +101,34 @@ static u32 xhci_dbc_populate_strings(struct dbc_str_descs *strings)
+ 	return string_length;
+ }
+ 
++static void xhci_dbc_init_ep_contexts(struct xhci_dbc *dbc)
++{
++	struct xhci_ep_ctx      *ep_ctx;
++	unsigned int		max_burst;
++	dma_addr_t		deq;
++
++	max_burst               = DBC_CTRL_MAXBURST(readl(&dbc->regs->control));
++
++	/* Populate bulk out endpoint context: */
++	ep_ctx                  = dbc_bulkout_ctx(dbc);
++	deq                     = dbc_bulkout_enq(dbc);
++	ep_ctx->ep_info         = 0;
++	ep_ctx->ep_info2        = dbc_epctx_info2(BULK_OUT_EP, 1024, max_burst);
++	ep_ctx->deq             = cpu_to_le64(deq | dbc->ring_out->cycle_state);
++
++	/* Populate bulk in endpoint context: */
++	ep_ctx                  = dbc_bulkin_ctx(dbc);
++	deq                     = dbc_bulkin_enq(dbc);
++	ep_ctx->ep_info         = 0;
++	ep_ctx->ep_info2        = dbc_epctx_info2(BULK_IN_EP, 1024, max_burst);
++	ep_ctx->deq             = cpu_to_le64(deq | dbc->ring_in->cycle_state);
++}
++
+ static void xhci_dbc_init_contexts(struct xhci_dbc *dbc, u32 string_length)
+ {
+ 	struct dbc_info_context	*info;
+-	struct xhci_ep_ctx	*ep_ctx;
+ 	u32			dev_info;
+-	dma_addr_t		deq, dma;
+-	unsigned int		max_burst;
++	dma_addr_t		dma;
+ 
+ 	if (!dbc)
+ 		return;
+@@ -121,20 +142,8 @@ static void xhci_dbc_init_contexts(struct xhci_dbc *dbc, u32 string_length)
+ 	info->serial		= cpu_to_le64(dma + DBC_MAX_STRING_LENGTH * 3);
+ 	info->length		= cpu_to_le32(string_length);
+ 
+-	/* Populate bulk out endpoint context: */
+-	ep_ctx			= dbc_bulkout_ctx(dbc);
+-	max_burst		= DBC_CTRL_MAXBURST(readl(&dbc->regs->control));
+-	deq			= dbc_bulkout_enq(dbc);
+-	ep_ctx->ep_info		= 0;
+-	ep_ctx->ep_info2	= dbc_epctx_info2(BULK_OUT_EP, 1024, max_burst);
+-	ep_ctx->deq		= cpu_to_le64(deq | dbc->ring_out->cycle_state);
+-
+-	/* Populate bulk in endpoint context: */
+-	ep_ctx			= dbc_bulkin_ctx(dbc);
+-	deq			= dbc_bulkin_enq(dbc);
+-	ep_ctx->ep_info		= 0;
+-	ep_ctx->ep_info2	= dbc_epctx_info2(BULK_IN_EP, 1024, max_burst);
+-	ep_ctx->deq		= cpu_to_le64(deq | dbc->ring_in->cycle_state);
++	/* Populate bulk in and out endpoint contexts: */
++	xhci_dbc_init_ep_contexts(dbc);
+ 
+ 	/* Set DbC context and info registers: */
+ 	lo_hi_writeq(dbc->ctx->dma, &dbc->regs->dccp);
+@@ -436,6 +445,42 @@ dbc_alloc_ctx(struct device *dev, gfp_t flags)
+ 	return ctx;
+ }
+ 
++static void xhci_dbc_ring_init(struct xhci_ring *ring)
++{
++	struct xhci_segment *seg = ring->first_seg;
++
++	/* clear all trbs on ring in case of old ring */
++	memset(seg->trbs, 0, TRB_SEGMENT_SIZE);
++
++	/* Only event ring does not use link TRB */
++	if (ring->type != TYPE_EVENT) {
++		union xhci_trb *trb = &seg->trbs[TRBS_PER_SEGMENT - 1];
++
++		trb->link.segment_ptr = cpu_to_le64(ring->first_seg->dma);
++		trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK));
++	}
++	xhci_initialize_ring_info(ring);
++}
++
++static int xhci_dbc_reinit_ep_rings(struct xhci_dbc *dbc)
++{
++	struct xhci_ring *in_ring = dbc->eps[BULK_IN].ring;
++	struct xhci_ring *out_ring = dbc->eps[BULK_OUT].ring;
++
++	if (!in_ring || !out_ring || !dbc->ctx) {
++		dev_warn(dbc->dev, "Can't re-init unallocated endpoints\n");
++		return -ENODEV;
++	}
++
++	xhci_dbc_ring_init(in_ring);
++	xhci_dbc_ring_init(out_ring);
++
++	/* set ep context enqueue, dequeue, and cycle to initial values */
++	xhci_dbc_init_ep_contexts(dbc);
++
++	return 0;
++}
++
+ static struct xhci_ring *
+ xhci_dbc_ring_alloc(struct device *dev, enum xhci_ring_type type, gfp_t flags)
+ {
+@@ -464,15 +509,10 @@ xhci_dbc_ring_alloc(struct device *dev, enum xhci_ring_type type, gfp_t flags)
+ 
+ 	seg->dma = dma;
+ 
+-	/* Only event ring does not use link TRB */
+-	if (type != TYPE_EVENT) {
+-		union xhci_trb *trb = &seg->trbs[TRBS_PER_SEGMENT - 1];
+-
+-		trb->link.segment_ptr = cpu_to_le64(dma);
+-		trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK));
+-	}
+ 	INIT_LIST_HEAD(&ring->td_list);
+-	xhci_initialize_ring_info(ring);
++
++	xhci_dbc_ring_init(ring);
++
+ 	return ring;
+ dma_fail:
+ 	kfree(seg);
+@@ -864,7 +904,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ 			dev_info(dbc->dev, "DbC cable unplugged\n");
+ 			dbc->state = DS_ENABLED;
+ 			xhci_dbc_flush_requests(dbc);
+-
++			xhci_dbc_reinit_ep_rings(dbc);
+ 			return EVT_DISC;
+ 		}
+ 
+@@ -874,7 +914,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ 			writel(portsc, &dbc->regs->portsc);
+ 			dbc->state = DS_ENABLED;
+ 			xhci_dbc_flush_requests(dbc);
+-
++			xhci_dbc_reinit_ep_rings(dbc);
+ 			return EVT_DISC;
+ 		}
+ 
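With the xhci-dbgcap hunks above, any event that drops the DbC state machine back to DS_ENABLED now rebuilds both bulk rings, so a later replug starts from fresh enqueue/dequeue pointers and cycle state instead of stale ones. The disconnect path, in outline (names from the hunks):

    dbc->state = DS_ENABLED;
    xhci_dbc_flush_requests(dbc);    /* complete outstanding requests */
    xhci_dbc_reinit_ep_rings(dbc);   /* zero TRBs, restore the link TRB,
                                        reset cycle, rewrite ep contexts */
    return EVT_DISC;
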
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 81eaad87a3d9d0..c4a6544aa10751 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -962,7 +962,7 @@ static void xhci_free_virt_devices_depth_first(struct xhci_hcd *xhci, int slot_i
+ out:
+ 	/* we are now at a leaf device */
+ 	xhci_debugfs_remove_slot(xhci, slot_id);
+-	xhci_free_virt_device(xhci, vdev, slot_id);
++	xhci_free_virt_device(xhci, xhci->devs[slot_id], slot_id);
+ }
+ 
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index e5cd3309342364..fc869b7f803f04 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1322,7 +1322,18 @@ static const struct usb_device_id option_ids[] = {
+ 	 .driver_info = NCTRL(0) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff),	/* Telit LE910C1-EUX (ECM) */
+ 	 .driver_info = NCTRL(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1034, 0xff),	/* Telit LE910C4-WWX (rmnet) */
++	 .driver_info = RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1035, 0xff) }, /* Telit LE910C4-WWX (ECM) */
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1036, 0xff) },  /* Telit LE910C4-WWX */
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1037, 0xff),	/* Telit LE910C4-WWX (rmnet) */
++	 .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1038, 0xff),	/* Telit LE910C4-WWX (rmnet) */
++	 .driver_info = NCTRL(0) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x103b, 0xff),	/* Telit LE910C4-WWX */
++	 .driver_info = NCTRL(0) | NCTRL(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x103c, 0xff),	/* Telit LE910C4-WWX */
++	 .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ 	  .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+@@ -1369,6 +1380,12 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff),	/* Telit FN990A (PCIe) */
+ 	  .driver_info = RSVD(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1077, 0xff),	/* Telit FN990A (rmnet + audio) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1078, 0xff),	/* Telit FN990A (MBIM + audio) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1079, 0xff),	/* Telit FN990A (RNDIS + audio) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff),	/* Telit FE990A (rmnet) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff),	/* Telit FE990A (MBIM) */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 1f6fdfaa34bf12..b2a568a5bc9b0b 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2426,17 +2426,21 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 		case ADEV_NONE:
+ 			break;
+ 		case ADEV_NOTIFY_USB_AND_QUEUE_VDM:
+-			WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, NULL));
+-			typec_altmode_vdm(adev, p[0], &p[1], cnt);
++			if (rx_sop_type == TCPC_TX_SOP_PRIME) {
++				typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P, p[0], &p[1], cnt);
++			} else {
++				WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, NULL));
++				typec_altmode_vdm(adev, p[0], &p[1], cnt);
++			}
+ 			break;
+ 		case ADEV_QUEUE_VDM:
+-			if (response_tx_sop_type == TCPC_TX_SOP_PRIME)
++			if (rx_sop_type == TCPC_TX_SOP_PRIME)
+ 				typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P, p[0], &p[1], cnt);
+ 			else
+ 				typec_altmode_vdm(adev, p[0], &p[1], cnt);
+ 			break;
+ 		case ADEV_QUEUE_VDM_SEND_EXIT_MODE_ON_FAIL:
+-			if (response_tx_sop_type == TCPC_TX_SOP_PRIME) {
++			if (rx_sop_type == TCPC_TX_SOP_PRIME) {
+ 				if (typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P,
+ 							    p[0], &p[1], cnt)) {
+ 					int svdm_version = typec_get_cable_svdm_version(
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index fac4000a5bcaef..b843db855f402f 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -110,6 +110,25 @@ struct btrfs_bio_ctrl {
+ 	 * This is to avoid touching ranges covered by compression/inline.
+ 	 */
+ 	unsigned long submit_bitmap;
++	struct readahead_control *ractl;
++
++	/*
++	 * The start offset of the last used extent map by a read operation.
++	 *
++	 * This is for proper compressed read merge.
++	 * U64_MAX means we are starting the read and have made no progress yet.
++	 *
++	 * The current btrfs_bio_is_contig() only uses disk_bytenr as
++	 * the condition to check if the read can be merged with previous
++	 * bio, which is not correct. E.g. two file extents pointing to the
++	 * same extent but with different offset.
++	 *
++	 * So here we need to do extra checks to only merge reads that are
++	 * covered by the same extent map.
++	 * Just extent_map::start will be enough, as they are unique
++	 * inside the same inode.
++	 */
++	u64 last_em_start;
+ };
+ 
+ static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
+@@ -882,6 +901,25 @@ static struct extent_map *get_extent_map(struct btrfs_inode *inode,
+ 
+ 	return em;
+ }
++
++static void btrfs_readahead_expand(struct readahead_control *ractl,
++				   const struct extent_map *em)
++{
++	const u64 ra_pos = readahead_pos(ractl);
++	const u64 ra_end = ra_pos + readahead_length(ractl);
++	const u64 em_end = em->start + em->ram_bytes;
++
++	/* No expansion for holes and inline extents. */
++	if (em->disk_bytenr > EXTENT_MAP_LAST_BYTE)
++		return;
++
++	ASSERT(em_end >= ra_pos,
++	       "extent_map %llu %llu ends before current readahead position %llu",
++	       em->start, em->len, ra_pos);
++	if (em_end > ra_end)
++		readahead_expand(ractl, ra_pos, em_end - ra_pos);
++}
++
+ /*
+  * basic readpage implementation.  Locked extent state structs are inserted
+  * into the tree that are removed when the IO is done (by the end_io
+@@ -890,7 +928,7 @@ static struct extent_map *get_extent_map(struct btrfs_inode *inode,
+  * return 0 on success, otherwise return error
+  */
+ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+-		      struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
++			     struct btrfs_bio_ctrl *bio_ctrl)
+ {
+ 	struct inode *inode = folio->mapping->host;
+ 	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+@@ -945,6 +983,16 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ 
+ 		compress_type = btrfs_extent_map_compression(em);
+ 
++		/*
++		 * Only expand readahead for extents which are already creating
++		 * the pages anyway in add_ra_bio_pages, which is compressed
++		 * extents in the non subpage case.
++		 */
++		if (bio_ctrl->ractl &&
++		    !btrfs_is_subpage(fs_info, folio) &&
++		    compress_type != BTRFS_COMPRESS_NONE)
++			btrfs_readahead_expand(bio_ctrl->ractl, em);
++
+ 		if (compress_type != BTRFS_COMPRESS_NONE)
+ 			disk_bytenr = em->disk_bytenr;
+ 		else
+@@ -990,12 +1038,11 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ 		 * non-optimal behavior (submitting 2 bios for the same extent).
+ 		 */
+ 		if (compress_type != BTRFS_COMPRESS_NONE &&
+-		    prev_em_start && *prev_em_start != (u64)-1 &&
+-		    *prev_em_start != em->start)
++		    bio_ctrl->last_em_start != U64_MAX &&
++		    bio_ctrl->last_em_start != em->start)
+ 			force_bio_submit = true;
+ 
+-		if (prev_em_start)
+-			*prev_em_start = em->start;
++		bio_ctrl->last_em_start = em->start;
+ 
+ 		btrfs_free_extent_map(em);
+ 		em = NULL;
+@@ -1209,12 +1256,15 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
+ 	const u64 start = folio_pos(folio);
+ 	const u64 end = start + folio_size(folio) - 1;
+ 	struct extent_state *cached_state = NULL;
+-	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
++	struct btrfs_bio_ctrl bio_ctrl = {
++		.opf = REQ_OP_READ,
++		.last_em_start = U64_MAX,
++	};
+ 	struct extent_map *em_cached = NULL;
+ 	int ret;
+ 
+ 	lock_extents_for_read(inode, start, end, &cached_state);
+-	ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
++	ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
+ 	btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
+ 
+ 	btrfs_free_extent_map(em_cached);
+@@ -2550,19 +2600,22 @@ int btrfs_writepages(struct address_space *mapping, struct writeback_control *wb
+ 
+ void btrfs_readahead(struct readahead_control *rac)
+ {
+-	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
++	struct btrfs_bio_ctrl bio_ctrl = {
++		.opf = REQ_OP_READ | REQ_RAHEAD,
++		.ractl = rac,
++		.last_em_start = U64_MAX,
++	};
+ 	struct folio *folio;
+ 	struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
+ 	const u64 start = readahead_pos(rac);
+ 	const u64 end = start + readahead_length(rac) - 1;
+ 	struct extent_state *cached_state = NULL;
+ 	struct extent_map *em_cached = NULL;
+-	u64 prev_em_start = (u64)-1;
+ 
+ 	lock_extents_for_read(inode, start, end, &cached_state);
+ 
+ 	while ((folio = readahead_folio(rac)) != NULL)
+-		btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
++		btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
+ 
+ 	btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
+ 
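btrfs_readahead_expand() above is a thin wrapper over the generic readahead_expand() helper: it grows the current readahead window (never shrinks it) so that add_ra_bio_pages() finds folios for the whole compressed extent already inserted. The call semantics, assuming em covers [em->start, em->start + em->ram_bytes):

    u64 ra_pos = readahead_pos(ractl);       /* window start */
    u64 ra_end = ra_pos + readahead_length(ractl);
    u64 em_end = em->start + em->ram_bytes;  /* uncompressed end */

    if (em_end > ra_end)
            readahead_expand(ractl, ra_pos, em_end - ra_pos);
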
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ffa5d6c1594050..e266a229484852 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5685,7 +5685,17 @@ static void btrfs_del_inode_from_root(struct btrfs_inode *inode)
+ 	bool empty = false;
+ 
+ 	xa_lock(&root->inodes);
+-	entry = __xa_erase(&root->inodes, btrfs_ino(inode));
++	/*
++	 * This btrfs_inode is being freed and has already been unhashed at this
++	 * point. It's possible that another btrfs_inode has already been
++	 * allocated for the same inode and inserted itself into the root, so
++	 * don't delete it in that case.
++	 *
++	 * Note that this shouldn't need to allocate memory, so the gfp flags
++	 * don't really matter.
++	 */
++	entry = __xa_cmpxchg(&root->inodes, btrfs_ino(inode), inode, NULL,
++			     GFP_ATOMIC);
+ 	if (entry == inode)
+ 		empty = xa_empty(&root->inodes);
+ 	xa_unlock(&root->inodes);
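The inode.c hunk turns an unconditional erase into a compare-and-erase, so a racing re-insertion of a new btrfs_inode under the same ino is left alone. The same idiom in isolation (names illustrative):

    void *prev;

    xa_lock(&xa);
    /* store NULL only if the slot still holds 'old'; a NULL store
     * frees rather than allocates, so the gfp flags are moot */
    prev = __xa_cmpxchg(&xa, index, old, NULL, GFP_ATOMIC);
    xa_unlock(&xa);

    if (prev == old) {
            /* our entry was removed */
    } else {
            /* a different object now owns the slot; left untouched */
    }
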
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 68cbb2b1e3df8e..1fc99ba185164b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1483,6 +1483,7 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 	struct btrfs_qgroup *qgroup;
+ 	LIST_HEAD(qgroup_list);
+ 	u64 num_bytes = src->excl;
++	u64 num_bytes_cmpr = src->excl_cmpr;
+ 	int ret = 0;
+ 
+ 	qgroup = find_qgroup_rb(fs_info, ref_root);
+@@ -1494,11 +1495,12 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 		struct btrfs_qgroup_list *glist;
+ 
+ 		qgroup->rfer += sign * num_bytes;
+-		qgroup->rfer_cmpr += sign * num_bytes;
++		qgroup->rfer_cmpr += sign * num_bytes_cmpr;
+ 
+ 		WARN_ON(sign < 0 && qgroup->excl < num_bytes);
++		WARN_ON(sign < 0 && qgroup->excl_cmpr < num_bytes_cmpr);
+ 		qgroup->excl += sign * num_bytes;
+-		qgroup->excl_cmpr += sign * num_bytes;
++		qgroup->excl_cmpr += sign * num_bytes_cmpr;
+ 
+ 		if (sign > 0)
+ 			qgroup_rsv_add_by_qgroup(fs_info, qgroup, src);
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 60a621b00c656d..1777d707fd7561 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -1264,7 +1264,9 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
+ 								0,
+ 								gfp_flags);
+ 		if (IS_ERR(pages[index])) {
+-			if (PTR_ERR(pages[index]) == -EINVAL) {
++			int err = PTR_ERR(pages[index]);
++
++			if (err == -EINVAL) {
+ 				pr_err_client(cl, "inode->i_blkbits=%hhu\n",
+ 						inode->i_blkbits);
+ 			}
+@@ -1273,7 +1275,7 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
+ 			BUG_ON(ceph_wbc->locked_pages == 0);
+ 
+ 			pages[index] = NULL;
+-			return PTR_ERR(pages[index]);
++			return err;
+ 		}
+ 	} else {
+ 		pages[index] = &folio->page;
+@@ -1687,6 +1689,7 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 
+ process_folio_batch:
+ 		rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
++		ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
+ 		if (rc)
+ 			goto release_folios;
+ 
+@@ -1695,8 +1698,6 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 			goto release_folios;
+ 
+ 		if (ceph_wbc.processed_in_fbatch) {
+-			ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
+-
+ 			if (folio_batch_count(&ceph_wbc.fbatch) == 0 &&
+ 			    ceph_wbc.locked_pages < ceph_wbc.max_pages) {
+ 				doutc(cl, "reached end fbatch, trying for more\n");
+diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
+index fdd404fc81124d..f3fe786b4143d4 100644
+--- a/fs/ceph/debugfs.c
++++ b/fs/ceph/debugfs.c
+@@ -55,8 +55,6 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 	struct ceph_mds_client *mdsc = fsc->mdsc;
+ 	struct ceph_mds_request *req;
+ 	struct rb_node *rp;
+-	int pathlen = 0;
+-	u64 pathbase;
+ 	char *path;
+ 
+ 	mutex_lock(&mdsc->mutex);
+@@ -81,8 +79,8 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 		if (req->r_inode) {
+ 			seq_printf(s, " #%llx", ceph_ino(req->r_inode));
+ 		} else if (req->r_dentry) {
+-			path = ceph_mdsc_build_path(mdsc, req->r_dentry, &pathlen,
+-						    &pathbase, 0);
++			struct ceph_path_info path_info;
++			path = ceph_mdsc_build_path(mdsc, req->r_dentry, &path_info, 0);
+ 			if (IS_ERR(path))
+ 				path = NULL;
+ 			spin_lock(&req->r_dentry->d_lock);
+@@ -91,7 +89,7 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 				   req->r_dentry,
+ 				   path ? path : "");
+ 			spin_unlock(&req->r_dentry->d_lock);
+-			ceph_mdsc_free_path(path, pathlen);
++			ceph_mdsc_free_path_info(&path_info);
+ 		} else if (req->r_path1) {
+ 			seq_printf(s, " #%llx/%s", req->r_ino1.ino,
+ 				   req->r_path1);
+@@ -100,8 +98,8 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 		}
+ 
+ 		if (req->r_old_dentry) {
+-			path = ceph_mdsc_build_path(mdsc, req->r_old_dentry, &pathlen,
+-						    &pathbase, 0);
++			struct ceph_path_info path_info;
++			path = ceph_mdsc_build_path(mdsc, req->r_old_dentry, &path_info, 0);
+ 			if (IS_ERR(path))
+ 				path = NULL;
+ 			spin_lock(&req->r_old_dentry->d_lock);
+@@ -111,7 +109,7 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 				   req->r_old_dentry,
+ 				   path ? path : "");
+ 			spin_unlock(&req->r_old_dentry->d_lock);
+-			ceph_mdsc_free_path(path, pathlen);
++			ceph_mdsc_free_path_info(&path_info);
+ 		} else if (req->r_path2 && req->r_op != CEPH_MDS_OP_SYMLINK) {
+ 			if (req->r_ino2.ino)
+ 				seq_printf(s, " #%llx/%s", req->r_ino2.ino,
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index a321aa6d0ed226..e7af63bf77b469 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1272,10 +1272,8 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
+ 
+ 	/* If op failed, mark everyone involved for errors */
+ 	if (result) {
+-		int pathlen = 0;
+-		u64 base = 0;
+-		char *path = ceph_mdsc_build_path(mdsc, dentry, &pathlen,
+-						  &base, 0);
++		struct ceph_path_info path_info = {0};
++		char *path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ 
+ 		/* mark error on parent + clear complete */
+ 		mapping_set_error(req->r_parent->i_mapping, result);
+@@ -1289,8 +1287,8 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
+ 		mapping_set_error(req->r_old_inode->i_mapping, result);
+ 
+ 		pr_warn_client(cl, "failure path=(%llx)%s result=%d!\n",
+-			       base, IS_ERR(path) ? "<<bad>>" : path, result);
+-		ceph_mdsc_free_path(path, pathlen);
++			       path_info.vino.ino, IS_ERR(path) ? "<<bad>>" : path, result);
++		ceph_mdsc_free_path_info(&path_info);
+ 	}
+ out:
+ 	iput(req->r_old_inode);
+@@ -1348,8 +1346,6 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
+ 	int err = -EROFS;
+ 	int op;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 
+ 	if (ceph_snap(dir) == CEPH_SNAPDIR) {
+ 		/* rmdir .snap/foo is RMSNAP */
+@@ -1368,14 +1364,15 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
+ 	if (!dn) {
+ 		try_async = false;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			try_async = false;
+ 			err = 0;
+ 		} else {
+ 			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dn);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index a7254cab44cc2e..6587c2d5af1e08 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -368,8 +368,6 @@ int ceph_open(struct inode *inode, struct file *file)
+ 	int flags, fmode, wanted;
+ 	struct dentry *dentry;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 	bool do_sync = false;
+ 	int mask = MAY_READ;
+ 
+@@ -399,14 +397,15 @@ int ceph_open(struct inode *inode, struct file *file)
+ 	if (!dentry) {
+ 		do_sync = true;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			do_sync = true;
+ 			err = 0;
+ 		} else {
+ 			err = ceph_mds_check_access(mdsc, path, mask);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dentry);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
+@@ -614,15 +613,13 @@ static void ceph_async_create_cb(struct ceph_mds_client *mdsc,
+ 	mapping_set_error(req->r_parent->i_mapping, result);
+ 
+ 	if (result) {
+-		int pathlen = 0;
+-		u64 base = 0;
+-		char *path = ceph_mdsc_build_path(mdsc, req->r_dentry, &pathlen,
+-						  &base, 0);
++		struct ceph_path_info path_info = {0};
++		char *path = ceph_mdsc_build_path(mdsc, req->r_dentry, &path_info, 0);
+ 
+ 		pr_warn_client(cl,
+ 			"async create failure path=(%llx)%s result=%d!\n",
+-			base, IS_ERR(path) ? "<<bad>>" : path, result);
+-		ceph_mdsc_free_path(path, pathlen);
++			path_info.vino.ino, IS_ERR(path) ? "<<bad>>" : path, result);
++		ceph_mdsc_free_path_info(&path_info);
+ 
+ 		ceph_dir_clear_complete(req->r_parent);
+ 		if (!d_unhashed(dentry))
+@@ -791,8 +788,6 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 	int mask;
+ 	int err;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 
+ 	doutc(cl, "%p %llx.%llx dentry %p '%pd' %s flags %d mode 0%o\n",
+ 	      dir, ceph_vinop(dir), dentry, dentry,
+@@ -814,7 +809,8 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 	if (!dn) {
+ 		try_async = false;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			try_async = false;
+ 			err = 0;
+@@ -826,7 +822,7 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 				mask |= MAY_WRITE;
+ 			err = ceph_mds_check_access(mdsc, path, mask);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dn);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 06cd2963e41ee0..14e59c15dd68dd 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -55,6 +55,52 @@ static int ceph_set_ino_cb(struct inode *inode, void *data)
+ 	return 0;
+ }
+ 
++/*
++ * Check if the parent inode matches the vino from directory reply info
++ */
++static inline bool ceph_vino_matches_parent(struct inode *parent,
++					    struct ceph_vino vino)
++{
++	return ceph_ino(parent) == vino.ino && ceph_snap(parent) == vino.snap;
++}
++
++/*
++ * Validate that the directory inode referenced by @req->r_parent matches the
++ * inode number and snapshot id contained in the reply's directory record.  If
++ * they do not match – which can theoretically happen if the parent dentry was
++ * moved between the time the request was issued and the reply arrived – fall
++ * back to looking up the correct inode in the inode cache.
++ *
++ * A reference is *always* returned.  Callers that receive a different inode
++ * than the original @parent are responsible for dropping the extra reference
++ * once the reply has been processed.
++ */
++static struct inode *ceph_get_reply_dir(struct super_block *sb,
++					struct inode *parent,
++					struct ceph_mds_reply_info_parsed *rinfo)
++{
++	struct ceph_vino vino;
++
++	if (unlikely(!rinfo->diri.in))
++		return parent; /* nothing to compare against */
++
++	/* If we didn't have a cached parent inode to begin with, just bail out. */
++	if (!parent)
++		return NULL;
++
++	vino.ino  = le64_to_cpu(rinfo->diri.in->ino);
++	vino.snap = le64_to_cpu(rinfo->diri.in->snapid);
++
++	if (likely(ceph_vino_matches_parent(parent, vino)))
++		return parent; /* matches – use the original reference */
++
++	/* Mismatch – this should be rare.  Emit a WARN and obtain the correct inode. */
++	WARN_ONCE(1, "ceph: reply dir mismatch (parent valid %llx.%llx reply %llx.%llx)\n",
++		  ceph_ino(parent), ceph_snap(parent), vino.ino, vino.snap);
++
++	return ceph_get_inode(sb, vino, NULL);
++}
++
+ /**
+  * ceph_new_inode - allocate a new inode in advance of an expected create
+  * @dir: parent directory for new inode
+@@ -1523,6 +1569,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 	struct ceph_vino tvino, dvino;
+ 	struct ceph_fs_client *fsc = ceph_sb_to_fs_client(sb);
+ 	struct ceph_client *cl = fsc->client;
++	struct inode *parent_dir = NULL;
+ 	int err = 0;
+ 
+ 	doutc(cl, "%p is_dentry %d is_target %d\n", req,
+@@ -1536,10 +1583,17 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 	}
+ 
+ 	if (rinfo->head->is_dentry) {
+-		struct inode *dir = req->r_parent;
+-
+-		if (dir) {
+-			err = ceph_fill_inode(dir, NULL, &rinfo->diri,
++		/*
++		 * r_parent may be stale, in cases when R_PARENT_LOCKED is not set,
++		 * so we need to get the correct inode
++		 */
++		parent_dir = ceph_get_reply_dir(sb, req->r_parent, rinfo);
++		if (unlikely(IS_ERR(parent_dir))) {
++			err = PTR_ERR(parent_dir);
++			goto done;
++		}
++		if (parent_dir) {
++			err = ceph_fill_inode(parent_dir, NULL, &rinfo->diri,
+ 					      rinfo->dirfrag, session, -1,
+ 					      &req->r_caps_reservation);
+ 			if (err < 0)
+@@ -1548,14 +1602,14 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 			WARN_ON_ONCE(1);
+ 		}
+ 
+-		if (dir && req->r_op == CEPH_MDS_OP_LOOKUPNAME &&
++		if (parent_dir && req->r_op == CEPH_MDS_OP_LOOKUPNAME &&
+ 		    test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags) &&
+ 		    !test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags)) {
+ 			bool is_nokey = false;
+ 			struct qstr dname;
+ 			struct dentry *dn, *parent;
+ 			struct fscrypt_str oname = FSTR_INIT(NULL, 0);
+-			struct ceph_fname fname = { .dir	= dir,
++			struct ceph_fname fname = { .dir	= parent_dir,
+ 						    .name	= rinfo->dname,
+ 						    .ctext	= rinfo->altname,
+ 						    .name_len	= rinfo->dname_len,
+@@ -1564,10 +1618,10 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 			BUG_ON(!rinfo->head->is_target);
+ 			BUG_ON(req->r_dentry);
+ 
+-			parent = d_find_any_alias(dir);
++			parent = d_find_any_alias(parent_dir);
+ 			BUG_ON(!parent);
+ 
+-			err = ceph_fname_alloc_buffer(dir, &oname);
++			err = ceph_fname_alloc_buffer(parent_dir, &oname);
+ 			if (err < 0) {
+ 				dput(parent);
+ 				goto done;
+@@ -1576,7 +1630,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 			err = ceph_fname_to_usr(&fname, NULL, &oname, &is_nokey);
+ 			if (err < 0) {
+ 				dput(parent);
+-				ceph_fname_free_buffer(dir, &oname);
++				ceph_fname_free_buffer(parent_dir, &oname);
+ 				goto done;
+ 			}
+ 			dname.name = oname.name;
+@@ -1595,7 +1649,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 				      dname.len, dname.name, dn);
+ 				if (!dn) {
+ 					dput(parent);
+-					ceph_fname_free_buffer(dir, &oname);
++					ceph_fname_free_buffer(parent_dir, &oname);
+ 					err = -ENOMEM;
+ 					goto done;
+ 				}
+@@ -1610,12 +1664,12 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 				    ceph_snap(d_inode(dn)) != tvino.snap)) {
+ 				doutc(cl, " dn %p points to wrong inode %p\n",
+ 				      dn, d_inode(dn));
+-				ceph_dir_clear_ordered(dir);
++				ceph_dir_clear_ordered(parent_dir);
+ 				d_delete(dn);
+ 				dput(dn);
+ 				goto retry_lookup;
+ 			}
+-			ceph_fname_free_buffer(dir, &oname);
++			ceph_fname_free_buffer(parent_dir, &oname);
+ 
+ 			req->r_dentry = dn;
+ 			dput(parent);
+@@ -1794,6 +1848,9 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 					    &dvino, ptvino);
+ 	}
+ done:
++	/* Drop extra ref from ceph_get_reply_dir() if it returned a new inode */
++	if (unlikely(!IS_ERR_OR_NULL(parent_dir) && parent_dir != req->r_parent))
++		iput(parent_dir);
+ 	doutc(cl, "done err=%d\n", err);
+ 	return err;
+ }
+@@ -2488,22 +2545,21 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
+ 	int truncate_retry = 20; /* The RMW will take around 50ms */
+ 	struct dentry *dentry;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 	bool do_sync = false;
+ 
+ 	dentry = d_find_alias(inode);
+ 	if (!dentry) {
+ 		do_sync = true;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			do_sync = true;
+ 			err = 0;
+ 		} else {
+ 			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dentry);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
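The ceph hunks above and below replace the loose (path, pathlen, base ino) triple with a single struct ceph_path_info. Its definition is not part of this excerpt; judging purely from the call sites, it plausibly looks like the following sketch (field names taken from usage, layout assumed):

    /* Reconstructed from usage; not the authoritative definition. */
    struct ceph_path_info {
            const char *path;      /* built path or dentry name */
            int pathlen;           /* length of path */
            struct ceph_vino vino; /* base ino + snap id */
            bool freepath;         /* path owned by the caller, freed
                                      via ceph_mdsc_free_path_info() */
    };
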
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 230e0c3f341f71..94f109ca853eba 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -2681,8 +2681,7 @@ static u8 *get_fscrypt_altname(const struct ceph_mds_request *req, u32 *plen)
+  * ceph_mdsc_build_path - build a path string to a given dentry
+  * @mdsc: mds client
+  * @dentry: dentry to which path should be built
+- * @plen: returned length of string
+- * @pbase: returned base inode number
++ * @path_info: output path, length, base ino+snap, and freepath ownership flag
+  * @for_wire: is this path going to be sent to the MDS?
+  *
+  * Build a string that represents the path to the dentry. This is mostly called
+@@ -2700,7 +2699,7 @@ static u8 *get_fscrypt_altname(const struct ceph_mds_request *req, u32 *plen)
+  *   foo/.snap/bar -> foo//bar
+  */
+ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+-			   int *plen, u64 *pbase, int for_wire)
++			   struct ceph_path_info *path_info, int for_wire)
+ {
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	struct dentry *cur;
+@@ -2810,16 +2809,28 @@ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+ 		return ERR_PTR(-ENAMETOOLONG);
+ 	}
+ 
+-	*pbase = base;
+-	*plen = PATH_MAX - 1 - pos;
++	/* Initialize the output structure */
++	memset(path_info, 0, sizeof(*path_info));
++
++	path_info->vino.ino = base;
++	path_info->pathlen = PATH_MAX - 1 - pos;
++	path_info->path = path + pos;
++	path_info->freepath = true;
++
++	/* Set snap from dentry if available */
++	if (d_inode(dentry))
++		path_info->vino.snap = ceph_snap(d_inode(dentry));
++	else
++		path_info->vino.snap = CEPH_NOSNAP;
++
+ 	doutc(cl, "on %p %d built %llx '%.*s'\n", dentry, d_count(dentry),
+-	      base, *plen, path + pos);
++	      base, PATH_MAX - 1 - pos, path + pos);
+ 	return path + pos;
+ }
+ 
+ static int build_dentry_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+-			     struct inode *dir, const char **ppath, int *ppathlen,
+-			     u64 *pino, bool *pfreepath, bool parent_locked)
++			     struct inode *dir, struct ceph_path_info *path_info,
++			     bool parent_locked)
+ {
+ 	char *path;
+ 
+@@ -2828,41 +2839,47 @@ static int build_dentry_path(struct ceph_mds_client *mdsc, struct dentry *dentry
+ 		dir = d_inode_rcu(dentry->d_parent);
+ 	if (dir && parent_locked && ceph_snap(dir) == CEPH_NOSNAP &&
+ 	    !IS_ENCRYPTED(dir)) {
+-		*pino = ceph_ino(dir);
++		path_info->vino.ino = ceph_ino(dir);
++		path_info->vino.snap = ceph_snap(dir);
+ 		rcu_read_unlock();
+-		*ppath = dentry->d_name.name;
+-		*ppathlen = dentry->d_name.len;
++		path_info->path = dentry->d_name.name;
++		path_info->pathlen = dentry->d_name.len;
++		path_info->freepath = false;
+ 		return 0;
+ 	}
+ 	rcu_read_unlock();
+-	path = ceph_mdsc_build_path(mdsc, dentry, ppathlen, pino, 1);
++	path = ceph_mdsc_build_path(mdsc, dentry, path_info, 1);
+ 	if (IS_ERR(path))
+ 		return PTR_ERR(path);
+-	*ppath = path;
+-	*pfreepath = true;
++	/*
++	 * ceph_mdsc_build_path already fills path_info, including snap handling.
++	 */
+ 	return 0;
+ }
+ 
+-static int build_inode_path(struct inode *inode,
+-			    const char **ppath, int *ppathlen, u64 *pino,
+-			    bool *pfreepath)
++static int build_inode_path(struct inode *inode, struct ceph_path_info *path_info)
+ {
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
+ 	struct dentry *dentry;
+ 	char *path;
+ 
+ 	if (ceph_snap(inode) == CEPH_NOSNAP) {
+-		*pino = ceph_ino(inode);
+-		*ppathlen = 0;
++		path_info->vino.ino = ceph_ino(inode);
++		path_info->vino.snap = ceph_snap(inode);
++		path_info->pathlen = 0;
++		path_info->freepath = false;
+ 		return 0;
+ 	}
+ 	dentry = d_find_alias(inode);
+-	path = ceph_mdsc_build_path(mdsc, dentry, ppathlen, pino, 1);
++	path = ceph_mdsc_build_path(mdsc, dentry, path_info, 1);
+ 	dput(dentry);
+ 	if (IS_ERR(path))
+ 		return PTR_ERR(path);
+-	*ppath = path;
+-	*pfreepath = true;
++	/*
++	 * ceph_mdsc_build_path already fills path_info, including snap from dentry.
++	 * Override with inode's snap since that's what this function is for.
++	 */
++	path_info->vino.snap = ceph_snap(inode);
+ 	return 0;
+ }
+ 
+@@ -2872,26 +2889,32 @@ static int build_inode_path(struct inode *inode,
+  */
+ static int set_request_path_attr(struct ceph_mds_client *mdsc, struct inode *rinode,
+ 				 struct dentry *rdentry, struct inode *rdiri,
+-				 const char *rpath, u64 rino, const char **ppath,
+-				 int *pathlen, u64 *ino, bool *freepath,
++				 const char *rpath, u64 rino,
++				 struct ceph_path_info *path_info,
+ 				 bool parent_locked)
+ {
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	int r = 0;
+ 
++	/* Initialize the output structure */
++	memset(path_info, 0, sizeof(*path_info));
++
+ 	if (rinode) {
+-		r = build_inode_path(rinode, ppath, pathlen, ino, freepath);
++		r = build_inode_path(rinode, path_info);
+ 		doutc(cl, " inode %p %llx.%llx\n", rinode, ceph_ino(rinode),
+ 		      ceph_snap(rinode));
+ 	} else if (rdentry) {
+-		r = build_dentry_path(mdsc, rdentry, rdiri, ppath, pathlen, ino,
+-					freepath, parent_locked);
+-		doutc(cl, " dentry %p %llx/%.*s\n", rdentry, *ino, *pathlen, *ppath);
++		r = build_dentry_path(mdsc, rdentry, rdiri, path_info, parent_locked);
++		doutc(cl, " dentry %p %llx/%.*s\n", rdentry, path_info->vino.ino,
++		      path_info->pathlen, path_info->path);
+ 	} else if (rpath || rino) {
+-		*ino = rino;
+-		*ppath = rpath;
+-		*pathlen = rpath ? strlen(rpath) : 0;
+-		doutc(cl, " path %.*s\n", *pathlen, rpath);
++		path_info->vino.ino = rino;
++		path_info->vino.snap = CEPH_NOSNAP;
++		path_info->path = rpath;
++		path_info->pathlen = rpath ? strlen(rpath) : 0;
++		path_info->freepath = false;
++
++		doutc(cl, " path %.*s\n", path_info->pathlen, rpath);
+ 	}
+ 
+ 	return r;
+@@ -2968,11 +2991,8 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	struct ceph_msg *msg;
+ 	struct ceph_mds_request_head_legacy *lhead;
+-	const char *path1 = NULL;
+-	const char *path2 = NULL;
+-	u64 ino1 = 0, ino2 = 0;
+-	int pathlen1 = 0, pathlen2 = 0;
+-	bool freepath1 = false, freepath2 = false;
++	struct ceph_path_info path_info1 = {0};
++	struct ceph_path_info path_info2 = {0};
+ 	struct dentry *old_dentry = NULL;
+ 	int len;
+ 	u16 releases;
+@@ -2982,25 +3002,49 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	u16 request_head_version = mds_supported_head_version(session);
+ 	kuid_t caller_fsuid = req->r_cred->fsuid;
+ 	kgid_t caller_fsgid = req->r_cred->fsgid;
++	bool parent_locked = test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags);
+ 
+ 	ret = set_request_path_attr(mdsc, req->r_inode, req->r_dentry,
+-			      req->r_parent, req->r_path1, req->r_ino1.ino,
+-			      &path1, &pathlen1, &ino1, &freepath1,
+-			      test_bit(CEPH_MDS_R_PARENT_LOCKED,
+-					&req->r_req_flags));
++				    req->r_parent, req->r_path1, req->r_ino1.ino,
++				    &path_info1, parent_locked);
+ 	if (ret < 0) {
+ 		msg = ERR_PTR(ret);
+ 		goto out;
+ 	}
+ 
++	/*
++	 * When the parent directory's i_rwsem is *not* locked, req->r_parent may
++	 * have become stale (e.g. after a concurrent rename) between the time the
++	 * dentry was looked up and now.  If we detect that the stored r_parent
++	 * does not match the inode number we just encoded for the request, switch
++	 * to the correct inode so that the MDS receives a valid parent reference.
++	 */
++	if (!parent_locked && req->r_parent && path_info1.vino.ino &&
++	    ceph_ino(req->r_parent) != path_info1.vino.ino) {
++		struct inode *old_parent = req->r_parent;
++		struct inode *correct_dir = ceph_get_inode(mdsc->fsc->sb, path_info1.vino, NULL);
++		if (!IS_ERR(correct_dir)) {
++			WARN_ONCE(1, "ceph: r_parent mismatch (had %llx wanted %llx) - updating\n",
++			          ceph_ino(old_parent), path_info1.vino.ino);
++			/*
++			 * Transfer CEPH_CAP_PIN from the old parent to the new one.
++			 * The pin was taken earlier in ceph_mdsc_submit_request().
++			 */
++			ceph_put_cap_refs(ceph_inode(old_parent), CEPH_CAP_PIN);
++			iput(old_parent);
++			req->r_parent = correct_dir;
++			ceph_get_cap_refs(ceph_inode(req->r_parent), CEPH_CAP_PIN);
++		}
++	}
++
+ 	/* If r_old_dentry is set, then assume that its parent is locked */
+ 	if (req->r_old_dentry &&
+ 	    !(req->r_old_dentry->d_flags & DCACHE_DISCONNECTED))
+ 		old_dentry = req->r_old_dentry;
+ 	ret = set_request_path_attr(mdsc, NULL, old_dentry,
+-			      req->r_old_dentry_dir,
+-			      req->r_path2, req->r_ino2.ino,
+-			      &path2, &pathlen2, &ino2, &freepath2, true);
++				    req->r_old_dentry_dir,
++				    req->r_path2, req->r_ino2.ino,
++				    &path_info2, true);
+ 	if (ret < 0) {
+ 		msg = ERR_PTR(ret);
+ 		goto out_free1;
+@@ -3031,7 +3075,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 
+ 	/* filepaths */
+ 	len += 2 * (1 + sizeof(u32) + sizeof(u64));
+-	len += pathlen1 + pathlen2;
++	len += path_info1.pathlen + path_info2.pathlen;
+ 
+ 	/* cap releases */
+ 	len += sizeof(struct ceph_mds_request_release) *
+@@ -3039,9 +3083,9 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 		 !!req->r_old_inode_drop + !!req->r_old_dentry_drop);
+ 
+ 	if (req->r_dentry_drop)
+-		len += pathlen1;
++		len += path_info1.pathlen;
+ 	if (req->r_old_dentry_drop)
+-		len += pathlen2;
++		len += path_info2.pathlen;
+ 
+ 	/* MClientRequest tail */
+ 
+@@ -3154,8 +3198,8 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	lhead->ino = cpu_to_le64(req->r_deleg_ino);
+ 	lhead->args = req->r_args;
+ 
+-	ceph_encode_filepath(&p, end, ino1, path1);
+-	ceph_encode_filepath(&p, end, ino2, path2);
++	ceph_encode_filepath(&p, end, path_info1.vino.ino, path_info1.path);
++	ceph_encode_filepath(&p, end, path_info2.vino.ino, path_info2.path);
+ 
+ 	/* make note of release offset, in case we need to replay */
+ 	req->r_request_release_offset = p - msg->front.iov_base;
+@@ -3218,11 +3262,9 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	msg->hdr.data_off = cpu_to_le16(0);
+ 
+ out_free2:
+-	if (freepath2)
+-		ceph_mdsc_free_path((char *)path2, pathlen2);
++	ceph_mdsc_free_path_info(&path_info2);
+ out_free1:
+-	if (freepath1)
+-		ceph_mdsc_free_path((char *)path1, pathlen1);
++	ceph_mdsc_free_path_info(&path_info1);
+ out:
+ 	return msg;
+ out_err:
+@@ -4579,24 +4621,20 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 	struct ceph_pagelist *pagelist = recon_state->pagelist;
+ 	struct dentry *dentry;
+ 	struct ceph_cap *cap;
+-	char *path;
+-	int pathlen = 0, err;
+-	u64 pathbase;
++	struct ceph_path_info path_info = {0};
++	int err;
+ 	u64 snap_follows;
+ 
+ 	dentry = d_find_primary(inode);
+ 	if (dentry) {
+ 		/* set pathbase to parent dir when msg_version >= 2 */
+-		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase,
++		char *path = ceph_mdsc_build_path(mdsc, dentry, &path_info,
+ 					    recon_state->msg_version >= 2);
+ 		dput(dentry);
+ 		if (IS_ERR(path)) {
+ 			err = PTR_ERR(path);
+ 			goto out_err;
+ 		}
+-	} else {
+-		path = NULL;
+-		pathbase = 0;
+ 	}
+ 
+ 	spin_lock(&ci->i_ceph_lock);
+@@ -4629,7 +4667,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 		rec.v2.wanted = cpu_to_le32(__ceph_caps_wanted(ci));
+ 		rec.v2.issued = cpu_to_le32(cap->issued);
+ 		rec.v2.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+-		rec.v2.pathbase = cpu_to_le64(pathbase);
++		rec.v2.pathbase = cpu_to_le64(path_info.vino.ino);
+ 		rec.v2.flock_len = (__force __le32)
+ 			((ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) ? 0 : 1);
+ 	} else {
+@@ -4644,7 +4682,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 		ts = inode_get_atime(inode);
+ 		ceph_encode_timespec64(&rec.v1.atime, &ts);
+ 		rec.v1.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+-		rec.v1.pathbase = cpu_to_le64(pathbase);
++		rec.v1.pathbase = cpu_to_le64(path_info.vino.ino);
+ 	}
+ 
+ 	if (list_empty(&ci->i_cap_snaps)) {
+@@ -4706,7 +4744,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 			    sizeof(struct ceph_filelock);
+ 		rec.v2.flock_len = cpu_to_le32(struct_len);
+ 
+-		struct_len += sizeof(u32) + pathlen + sizeof(rec.v2);
++		struct_len += sizeof(u32) + path_info.pathlen + sizeof(rec.v2);
+ 
+ 		if (struct_v >= 2)
+ 			struct_len += sizeof(u64); /* snap_follows */
+@@ -4730,7 +4768,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 			ceph_pagelist_encode_8(pagelist, 1);
+ 			ceph_pagelist_encode_32(pagelist, struct_len);
+ 		}
+-		ceph_pagelist_encode_string(pagelist, path, pathlen);
++		ceph_pagelist_encode_string(pagelist, (char *)path_info.path, path_info.pathlen);
+ 		ceph_pagelist_append(pagelist, &rec, sizeof(rec.v2));
+ 		ceph_locks_to_pagelist(flocks, pagelist,
+ 				       num_fcntl_locks, num_flock_locks);
+@@ -4741,17 +4779,17 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 	} else {
+ 		err = ceph_pagelist_reserve(pagelist,
+ 					    sizeof(u64) + sizeof(u32) +
+-					    pathlen + sizeof(rec.v1));
++					    path_info.pathlen + sizeof(rec.v1));
+ 		if (err)
+ 			goto out_err;
+ 
+ 		ceph_pagelist_encode_64(pagelist, ceph_ino(inode));
+-		ceph_pagelist_encode_string(pagelist, path, pathlen);
++		ceph_pagelist_encode_string(pagelist, (char *)path_info.path, path_info.pathlen);
+ 		ceph_pagelist_append(pagelist, &rec, sizeof(rec.v1));
+ 	}
+ 
+ out_err:
+-	ceph_mdsc_free_path(path, pathlen);
++	ceph_mdsc_free_path_info(&path_info);
+ 	if (!err)
+ 		recon_state->nr_caps++;
+ 	return err;
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 3e2a6fa7c19aab..0428a5eaf28c65 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -617,14 +617,24 @@ extern int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath,
+ 
+ extern void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc);
+ 
+-static inline void ceph_mdsc_free_path(char *path, int len)
++/*
++ * Structure to group path-related output parameters for build_*_path functions
++ */
++struct ceph_path_info {
++	const char *path;
++	int pathlen;
++	struct ceph_vino vino;
++	bool freepath;
++};
++
++static inline void ceph_mdsc_free_path_info(const struct ceph_path_info *path_info)
+ {
+-	if (!IS_ERR_OR_NULL(path))
+-		__putname(path - (PATH_MAX - 1 - len));
++	if (path_info && path_info->freepath && !IS_ERR_OR_NULL(path_info->path))
++		__putname((char *)path_info->path - (PATH_MAX - 1 - path_info->pathlen));
+ }
+ 
+ extern char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc,
+-				  struct dentry *dentry, int *plen, u64 *base,
++				  struct dentry *dentry, struct ceph_path_info *path_info,
+ 				  int for_wire);
+ 
+ extern void __ceph_mdsc_drop_dentry_lease(struct dentry *dentry);
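
The ceph_path_info struct above replaces four scattered out-parameters, and the freepath flag records buffer ownership so ceph_mdsc_free_path_info() is safe to call whether the path was allocated or borrowed from a dentry name. A compilable userspace sketch of the same ownership-flag idea (struct path_info, build_path and free_path_info here are illustrative, not the ceph code):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct path_info {
	const char *path;
	int pathlen;
	bool freepath;		/* true if path was heap-allocated */
};

/* Fill @pi with either an owned copy or a borrowed pointer. */
static int build_path(struct path_info *pi, const char *name, bool copy)
{
	memset(pi, 0, sizeof(*pi));
	if (copy) {
		pi->path = strdup(name);
		if (!pi->path)
			return -1;
		pi->freepath = true;
	} else {
		pi->path = name;
	}
	pi->pathlen = (int)strlen(name);
	return 0;
}

/* Always safe: frees only when the struct says we own the buffer. */
static void free_path_info(struct path_info *pi)
{
	if (pi->freepath)
		free((char *)pi->path);
}

int main(void)
{
	struct path_info pi;

	if (build_path(&pi, "/a/b", true) == 0) {
		printf("%.*s\n", pi.pathlen, pi.path);
		free_path_info(&pi);
	}
	return 0;
}
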
+diff --git a/fs/coredump.c b/fs/coredump.c
+index f217ebf2b3b68f..012915262d11b7 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1263,11 +1263,15 @@ static int proc_dostring_coredump(const struct ctl_table *table, int write,
+ 	ssize_t retval;
+ 	char old_core_pattern[CORENAME_MAX_SIZE];
+ 
++	if (write)
++		return proc_dostring(table, write, buffer, lenp, ppos);
++
+ 	retval = strscpy(old_core_pattern, core_pattern, CORENAME_MAX_SIZE);
+ 
+ 	error = proc_dostring(table, write, buffer, lenp, ppos);
+ 	if (error)
+ 		return error;
++
+ 	if (!check_coredump_socket()) {
+ 		strscpy(core_pattern, old_core_pattern, retval + 1);
+ 		return -EINVAL;
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 16e4a6bd9b9737..dd7d86809c1881 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -65,10 +65,10 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+ }
+ 
+ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
+-			 erofs_off_t offset, bool need_kmap)
++			 erofs_off_t offset)
+ {
+ 	erofs_init_metabuf(buf, sb);
+-	return erofs_bread(buf, offset, need_kmap);
++	return erofs_bread(buf, offset, true);
+ }
+ 
+ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+@@ -118,7 +118,7 @@ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+ 	pos = ALIGN(erofs_iloc(inode) + vi->inode_isize +
+ 		    vi->xattr_isize, unit) + unit * chunknr;
+ 
+-	idx = erofs_read_metabuf(&buf, sb, pos, true);
++	idx = erofs_read_metabuf(&buf, sb, pos);
+ 	if (IS_ERR(idx)) {
+ 		err = PTR_ERR(idx);
+ 		goto out;
+@@ -299,7 +299,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ 		struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+ 
+ 		iomap->type = IOMAP_INLINE;
+-		ptr = erofs_read_metabuf(&buf, sb, mdev.m_pa, true);
++		ptr = erofs_read_metabuf(&buf, sb, mdev.m_pa);
+ 		if (IS_ERR(ptr))
+ 			return PTR_ERR(ptr);
+ 		iomap->inline_data = ptr;
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 91781718199e2a..3ee082476c8c53 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -115,7 +115,7 @@ static int erofs_fileio_scan_folio(struct erofs_fileio *io, struct folio *folio)
+ 			void *src;
+ 
+ 			src = erofs_read_metabuf(&buf, inode->i_sb,
+-						 map->m_pa + ofs, true);
++						 map->m_pa + ofs);
+ 			if (IS_ERR(src)) {
+ 				err = PTR_ERR(src);
+ 				break;
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 34517ca9df9157..9a8ee646e51d9d 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -274,7 +274,7 @@ static int erofs_fscache_data_read_slice(struct erofs_fscache_rq *req)
+ 		size_t size = map.m_llen;
+ 		void *src;
+ 
+-		src = erofs_read_metabuf(&buf, sb, map.m_pa, true);
++		src = erofs_read_metabuf(&buf, sb, map.m_pa);
+ 		if (IS_ERR(src))
+ 			return PTR_ERR(src);
+ 
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index a0ae0b4f7b012a..47215c5e33855b 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -39,10 +39,10 @@ static int erofs_read_inode(struct inode *inode)
+ 	void *ptr;
+ 	int err = 0;
+ 
+-	ptr = erofs_read_metabuf(&buf, sb, erofs_pos(sb, blkaddr), true);
++	ptr = erofs_read_metabuf(&buf, sb, erofs_pos(sb, blkaddr));
+ 	if (IS_ERR(ptr)) {
+ 		err = PTR_ERR(ptr);
+-		erofs_err(sb, "failed to get inode (nid: %llu) page, err %d",
++		erofs_err(sb, "failed to read inode meta block (nid: %llu): %d",
+ 			  vi->nid, err);
+ 		goto err_out;
+ 	}
+@@ -78,10 +78,10 @@ static int erofs_read_inode(struct inode *inode)
+ 
+ 			memcpy(&copied, dic, gotten);
+ 			ptr = erofs_read_metabuf(&buf, sb,
+-					erofs_pos(sb, blkaddr + 1), true);
++					erofs_pos(sb, blkaddr + 1));
+ 			if (IS_ERR(ptr)) {
+ 				err = PTR_ERR(ptr);
+-				erofs_err(sb, "failed to get inode payload block (nid: %llu), err %d",
++				erofs_err(sb, "failed to read inode payload block (nid: %llu): %d",
+ 					  vi->nid, err);
+ 				goto err_out;
+ 			}
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 06b867d2fc3b7c..a7699114f6fe6d 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -385,7 +385,7 @@ void erofs_put_metabuf(struct erofs_buf *buf);
+ void *erofs_bread(struct erofs_buf *buf, erofs_off_t offset, bool need_kmap);
+ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb);
+ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
+-			 erofs_off_t offset, bool need_kmap);
++			 erofs_off_t offset);
+ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *dev);
+ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		 u64 start, u64 len);
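
With the need_kmap parameter gone, erofs_read_metabuf() always maps, and the one call site that wanted an unmapped read (z_erofs_pcluster_begin, later in this patch) composes erofs_init_metabuf() and erofs_bread() directly. A toy illustration of that API-trimming choice, with stand-in functions rather than the real erofs interfaces:

#include <stdbool.h>
#include <stdio.h>

static void init_buf(void) { puts("init"); }

static void *bread(long off, bool kmap)
{
	printf("read @%ld kmap=%d\n", off, kmap);
	return (void *)1;
}

/* Common-case wrapper: always maps. */
static void *read_metabuf(long off)
{
	init_buf();
	return bread(off, true);
}

int main(void)
{
	read_metabuf(0);	/* typical caller */
	init_buf();		/* the one caller that wants kmap=false */
	bread(4096, false);
	return 0;
}
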
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index cad87e4d669432..06c8981eea7f8c 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -141,7 +141,7 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
+ 	struct erofs_deviceslot *dis;
+ 	struct file *file;
+ 
+-	dis = erofs_read_metabuf(buf, sb, *pos, true);
++	dis = erofs_read_metabuf(buf, sb, *pos);
+ 	if (IS_ERR(dis))
+ 		return PTR_ERR(dis);
+ 
+@@ -268,7 +268,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ 	void *data;
+ 	int ret;
+ 
+-	data = erofs_read_metabuf(&buf, sb, 0, true);
++	data = erofs_read_metabuf(&buf, sb, 0);
+ 	if (IS_ERR(data)) {
+ 		erofs_err(sb, "cannot read erofs superblock");
+ 		return PTR_ERR(data);
+@@ -999,10 +999,22 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
+ 	return 0;
+ }
+ 
++static void erofs_evict_inode(struct inode *inode)
++{
++#ifdef CONFIG_FS_DAX
++	if (IS_DAX(inode))
++		dax_break_layout_final(inode);
++#endif
++
++	truncate_inode_pages_final(&inode->i_data);
++	clear_inode(inode);
++}
++
+ const struct super_operations erofs_sops = {
+ 	.put_super = erofs_put_super,
+ 	.alloc_inode = erofs_alloc_inode,
+ 	.free_inode = erofs_free_inode,
++	.evict_inode = erofs_evict_inode,
+ 	.statfs = erofs_statfs,
+ 	.show_options = erofs_show_options,
+ };
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 9bb53f00c2c629..e8f30eee29b441 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -805,6 +805,7 @@ static int z_erofs_pcluster_begin(struct z_erofs_frontend *fe)
+ 	struct erofs_map_blocks *map = &fe->map;
+ 	struct super_block *sb = fe->inode->i_sb;
+ 	struct z_erofs_pcluster *pcl = NULL;
++	void *ptr;
+ 	int ret;
+ 
+ 	DBG_BUGON(fe->pcl);
+@@ -854,15 +855,14 @@ static int z_erofs_pcluster_begin(struct z_erofs_frontend *fe)
+ 		/* bind cache first when cached decompression is preferred */
+ 		z_erofs_bind_cache(fe);
+ 	} else {
+-		void *mptr;
+-
+-		mptr = erofs_read_metabuf(&map->buf, sb, map->m_pa, false);
+-		if (IS_ERR(mptr)) {
+-			ret = PTR_ERR(mptr);
+-			erofs_err(sb, "failed to get inline data %d", ret);
++		erofs_init_metabuf(&map->buf, sb);
++		ptr = erofs_bread(&map->buf, map->m_pa, false);
++		if (IS_ERR(ptr)) {
++			ret = PTR_ERR(ptr);
++			erofs_err(sb, "failed to get inline folio %d", ret);
+ 			return ret;
+ 		}
+-		get_page(map->buf.page);
++		folio_get(page_folio(map->buf.page));
+ 		WRITE_ONCE(fe->pcl->compressed_bvecs[0].page, map->buf.page);
+ 		fe->pcl->pageofs_in = map->m_pa & ~PAGE_MASK;
+ 		fe->mode = Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE;
+@@ -1325,9 +1325,8 @@ static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err)
+ 
+ 	/* must handle all compressed pages before actual file pages */
+ 	if (pcl->from_meta) {
+-		page = pcl->compressed_bvecs[0].page;
++		folio_put(page_folio(pcl->compressed_bvecs[0].page));
+ 		WRITE_ONCE(pcl->compressed_bvecs[0].page, NULL);
+-		put_page(page);
+ 	} else {
+ 		/* managed folios are still left in compressed_bvecs[] */
+ 		for (i = 0; i < pclusterpages; ++i) {
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index f1a15ff22147ba..14d01474ad9dda 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -31,7 +31,7 @@ static int z_erofs_load_full_lcluster(struct z_erofs_maprecorder *m,
+ 	struct z_erofs_lcluster_index *di;
+ 	unsigned int advise;
+ 
+-	di = erofs_read_metabuf(&m->map->buf, inode->i_sb, pos, true);
++	di = erofs_read_metabuf(&m->map->buf, inode->i_sb, pos);
+ 	if (IS_ERR(di))
+ 		return PTR_ERR(di);
+ 	m->lcn = lcn;
+@@ -146,7 +146,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
+ 	else
+ 		return -EOPNOTSUPP;
+ 
+-	in = erofs_read_metabuf(&m->map->buf, m->inode->i_sb, pos, true);
++	in = erofs_read_metabuf(&m->map->buf, m->inode->i_sb, pos);
+ 	if (IS_ERR(in))
+ 		return PTR_ERR(in);
+ 
+@@ -403,10 +403,10 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ 		.inode = inode,
+ 		.map = map,
+ 	};
+-	int err = 0;
+-	unsigned int endoff, afmt;
++	unsigned int endoff;
+ 	unsigned long initial_lcn;
+ 	unsigned long long ofs, end;
++	int err;
+ 
+ 	ofs = flags & EROFS_GET_BLOCKS_FINDTAIL ? inode->i_size - 1 : map->m_la;
+ 	if (fragment && !(flags & EROFS_GET_BLOCKS_FINDTAIL) &&
+@@ -502,20 +502,15 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ 			err = -EFSCORRUPTED;
+ 			goto unmap_out;
+ 		}
+-		afmt = vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER ?
+-			Z_EROFS_COMPRESSION_INTERLACED :
+-			Z_EROFS_COMPRESSION_SHIFTED;
++		if (vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER)
++			map->m_algorithmformat = Z_EROFS_COMPRESSION_INTERLACED;
++		else
++			map->m_algorithmformat = Z_EROFS_COMPRESSION_SHIFTED;
++	} else if (m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
++		map->m_algorithmformat = vi->z_algorithmtype[1];
+ 	} else {
+-		afmt = m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2 ?
+-			vi->z_algorithmtype[1] : vi->z_algorithmtype[0];
+-		if (!(EROFS_I_SB(inode)->available_compr_algs & (1 << afmt))) {
+-			erofs_err(sb, "inconsistent algorithmtype %u for nid %llu",
+-				  afmt, vi->nid);
+-			err = -EFSCORRUPTED;
+-			goto unmap_out;
+-		}
++		map->m_algorithmformat = vi->z_algorithmtype[0];
+ 	}
+-	map->m_algorithmformat = afmt;
+ 
+ 	if ((flags & EROFS_GET_BLOCKS_FIEMAP) ||
+ 	    ((flags & EROFS_GET_BLOCKS_READMORE) &&
+@@ -551,7 +546,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 	map->m_flags = 0;
+ 	if (recsz <= offsetof(struct z_erofs_extent, pstart_hi)) {
+ 		if (recsz <= offsetof(struct z_erofs_extent, pstart_lo)) {
+-			ext = erofs_read_metabuf(&map->buf, sb, pos, true);
++			ext = erofs_read_metabuf(&map->buf, sb, pos);
+ 			if (IS_ERR(ext))
+ 				return PTR_ERR(ext);
+ 			pa = le64_to_cpu(*(__le64 *)ext);
+@@ -564,7 +559,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 		}
+ 
+ 		for (; lstart <= map->m_la; lstart += 1 << vi->z_lclusterbits) {
+-			ext = erofs_read_metabuf(&map->buf, sb, pos, true);
++			ext = erofs_read_metabuf(&map->buf, sb, pos);
+ 			if (IS_ERR(ext))
+ 				return PTR_ERR(ext);
+ 			map->m_plen = le32_to_cpu(ext->plen);
+@@ -584,7 +579,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 		for (l = 0, r = vi->z_extents; l < r; ) {
+ 			mid = l + (r - l) / 2;
+ 			ext = erofs_read_metabuf(&map->buf, sb,
+-						 pos + mid * recsz, true);
++						 pos + mid * recsz);
+ 			if (IS_ERR(ext))
+ 				return PTR_ERR(ext);
+ 
+@@ -641,14 +636,13 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 	return 0;
+ }
+ 
+-static int z_erofs_fill_inode_lazy(struct inode *inode)
++static int z_erofs_fill_inode(struct inode *inode, struct erofs_map_blocks *map)
+ {
+ 	struct erofs_inode *const vi = EROFS_I(inode);
+ 	struct super_block *const sb = inode->i_sb;
+-	int err, headnr;
+-	erofs_off_t pos;
+-	struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+ 	struct z_erofs_map_header *h;
++	erofs_off_t pos;
++	int err = 0;
+ 
+ 	if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags)) {
+ 		/*
+@@ -662,12 +656,11 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 	if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_Z_BIT, TASK_KILLABLE))
+ 		return -ERESTARTSYS;
+ 
+-	err = 0;
+ 	if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags))
+ 		goto out_unlock;
+ 
+ 	pos = ALIGN(erofs_iloc(inode) + vi->inode_isize + vi->xattr_isize, 8);
+-	h = erofs_read_metabuf(&buf, sb, pos, true);
++	h = erofs_read_metabuf(&map->buf, sb, pos);
+ 	if (IS_ERR(h)) {
+ 		err = PTR_ERR(h);
+ 		goto out_unlock;
+@@ -699,22 +692,13 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 	else if (vi->z_advise & Z_EROFS_ADVISE_INLINE_PCLUSTER)
+ 		vi->z_idata_size = le16_to_cpu(h->h_idata_size);
+ 
+-	headnr = 0;
+-	if (vi->z_algorithmtype[0] >= Z_EROFS_COMPRESSION_MAX ||
+-	    vi->z_algorithmtype[++headnr] >= Z_EROFS_COMPRESSION_MAX) {
+-		erofs_err(sb, "unknown HEAD%u format %u for nid %llu, please upgrade kernel",
+-			  headnr + 1, vi->z_algorithmtype[headnr], vi->nid);
+-		err = -EOPNOTSUPP;
+-		goto out_put_metabuf;
+-	}
+-
+ 	if (!erofs_sb_has_big_pcluster(EROFS_SB(sb)) &&
+ 	    vi->z_advise & (Z_EROFS_ADVISE_BIG_PCLUSTER_1 |
+ 			    Z_EROFS_ADVISE_BIG_PCLUSTER_2)) {
+ 		erofs_err(sb, "per-inode big pcluster without sb feature for nid %llu",
+ 			  vi->nid);
+ 		err = -EFSCORRUPTED;
+-		goto out_put_metabuf;
++		goto out_unlock;
+ 	}
+ 	if (vi->datalayout == EROFS_INODE_COMPRESSED_COMPACT &&
+ 	    !(vi->z_advise & Z_EROFS_ADVISE_BIG_PCLUSTER_1) ^
+@@ -722,32 +706,54 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 		erofs_err(sb, "big pcluster head1/2 of compact indexes should be consistent for nid %llu",
+ 			  vi->nid);
+ 		err = -EFSCORRUPTED;
+-		goto out_put_metabuf;
++		goto out_unlock;
+ 	}
+ 
+ 	if (vi->z_idata_size ||
+ 	    (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER)) {
+-		struct erofs_map_blocks map = {
++		struct erofs_map_blocks tm = {
+ 			.buf = __EROFS_BUF_INITIALIZER
+ 		};
+ 
+-		err = z_erofs_map_blocks_fo(inode, &map,
++		err = z_erofs_map_blocks_fo(inode, &tm,
+ 					    EROFS_GET_BLOCKS_FINDTAIL);
+-		erofs_put_metabuf(&map.buf);
++		erofs_put_metabuf(&tm.buf);
+ 		if (err < 0)
+-			goto out_put_metabuf;
++			goto out_unlock;
+ 	}
+ done:
+ 	/* paired with smp_mb() at the beginning of the function */
+ 	smp_mb();
+ 	set_bit(EROFS_I_Z_INITED_BIT, &vi->flags);
+-out_put_metabuf:
+-	erofs_put_metabuf(&buf);
+ out_unlock:
+ 	clear_and_wake_up_bit(EROFS_I_BL_Z_BIT, &vi->flags);
+ 	return err;
+ }
+ 
++static int z_erofs_map_sanity_check(struct inode *inode,
++				    struct erofs_map_blocks *map)
++{
++	struct erofs_sb_info *sbi = EROFS_I_SB(inode);
++
++	if (!(map->m_flags & EROFS_MAP_ENCODED))
++		return 0;
++	if (unlikely(map->m_algorithmformat >= Z_EROFS_COMPRESSION_RUNTIME_MAX)) {
++		erofs_err(inode->i_sb, "unknown algorithm %d @ pos %llu for nid %llu, please upgrade kernel",
++			  map->m_algorithmformat, map->m_la, EROFS_I(inode)->nid);
++		return -EOPNOTSUPP;
++	}
++	if (unlikely(map->m_algorithmformat < Z_EROFS_COMPRESSION_MAX &&
++		     !(sbi->available_compr_algs & (1 << map->m_algorithmformat)))) {
++		erofs_err(inode->i_sb, "inconsistent algorithmtype %u for nid %llu",
++			  map->m_algorithmformat, EROFS_I(inode)->nid);
++		return -EFSCORRUPTED;
++	}
++	if (unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||
++		     map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))
++		return -EOPNOTSUPP;
++	return 0;
++}
++
+ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ 			    int flags)
+ {
+@@ -760,7 +766,7 @@ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ 		map->m_la = inode->i_size;
+ 		map->m_flags = 0;
+ 	} else {
+-		err = z_erofs_fill_inode_lazy(inode);
++		err = z_erofs_fill_inode(inode, map);
+ 		if (!err) {
+ 			if (vi->datalayout == EROFS_INODE_COMPRESSED_FULL &&
+ 			    (vi->z_advise & Z_EROFS_ADVISE_EXTENTS))
+@@ -768,10 +774,8 @@ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ 			else
+ 				err = z_erofs_map_blocks_fo(inode, map, flags);
+ 		}
+-		if (!err && (map->m_flags & EROFS_MAP_ENCODED) &&
+-		    unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||
+-			     map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))
+-			err = -EOPNOTSUPP;
++		if (!err)
++			err = z_erofs_map_sanity_check(inode, map);
+ 		if (err)
+ 			map->m_llen = 0;
+ 	}
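
z_erofs_map_sanity_check() now centralizes validation that was previously split between z_erofs_fill_inode_lazy() (algorithm checks) and z_erofs_map_blocks_iter() (pcluster size limits), and it runs after every successful mapping. The shape of the check, over a plain struct with illustrative constants and errno values:

#include <stdio.h>

#define ALG_MAX		4		/* illustrative on-disk limit */
#define ALG_RUNTIME_MAX	8		/* illustrative runtime limit */
#define PCL_MAX_SIZE	(1024 * 1024)

struct map {
	int encoded;			/* EROFS_MAP_ENCODED analogue */
	unsigned int alg;		/* m_algorithmformat analogue */
	unsigned int avail_algs;	/* sb bitmask of supported algs */
	unsigned long plen;		/* m_plen analogue */
};

static int map_sanity_check(const struct map *m)
{
	if (!m->encoded)
		return 0;
	if (m->alg >= ALG_RUNTIME_MAX)
		return -95;		/* -EOPNOTSUPP */
	if (m->alg < ALG_MAX && !(m->avail_algs & (1u << m->alg)))
		return -117;		/* -EFSCORRUPTED */
	if (m->plen > PCL_MAX_SIZE)
		return -95;
	return 0;
}

int main(void)
{
	struct map m = { .encoded = 1, .alg = 2,
			 .avail_algs = 1u << 2, .plen = 4096 };

	printf("check=%d\n", map_sanity_check(&m));	/* 0 */
	return 0;
}
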
+diff --git a/fs/exec.c b/fs/exec.c
+index ba400aafd64061..551e1cc5bf1e3e 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -2048,7 +2048,7 @@ static int proc_dointvec_minmax_coredump(const struct ctl_table *table, int writ
+ {
+ 	int error = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 
+-	if (!error)
++	if (!error && !write)
+ 		validate_coredump_safety();
+ 	return error;
+ }
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index e21ec857f2abcf..52c72896e1c164 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -202,6 +202,14 @@ static int vfs_dentry_acceptable(void *context, struct dentry *dentry)
+ 	if (!ctx->flags)
+ 		return 1;
+ 
++	/*
++	 * Verify that the decoded dentry itself has a valid id mapping.
++	 * In case the decoded dentry is the mountfd root itself, this
++	 * verifies that the mountfd inode itself has a valid id mapping.
++	 */
++	if (!privileged_wrt_inode_uidgid(user_ns, idmap, d_inode(dentry)))
++		return 0;
++
+ 	/*
+ 	 * It's racy as we're not taking rename_lock but we're able to ignore
+ 	 * permissions and we just need an approximation whether we were able
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index e80cd8f2c049f9..5150aa25e64be9 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1893,7 +1893,7 @@ static int fuse_retrieve(struct fuse_mount *fm, struct inode *inode,
+ 
+ 	index = outarg->offset >> PAGE_SHIFT;
+ 
+-	while (num) {
++	while (num && ap->num_folios < num_pages) {
+ 		struct folio *folio;
+ 		unsigned int folio_offset;
+ 		unsigned int nr_bytes;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 47006d0753f1cd..b8dc8ce3e5564a 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3013,7 +3013,7 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ 		.nodeid_out = ff_out->nodeid,
+ 		.fh_out = ff_out->fh,
+ 		.off_out = pos_out,
+-		.len = len,
++		.len = min_t(size_t, len, UINT_MAX & PAGE_MASK),
+ 		.flags = flags
+ 	};
+ 	struct fuse_write_out outarg;
+@@ -3079,6 +3079,9 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ 		fc->no_copy_file_range = 1;
+ 		err = -EOPNOTSUPP;
+ 	}
++	if (!err && outarg.size > len)
++		err = -EIO;
++
+ 	if (err)
+ 		goto out;
+ 
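
Two related fixes above: the request length is clamped to a page-aligned value that fits in the server reply's size field (presumably because fuse_write_out carries a 32-bit size on the wire), and a reply claiming more bytes copied than requested is now rejected with -EIO. Checking the clamp arithmetic in userspace, on a 64-bit build (SKETCH_PAGE_MASK is a local stand-in assuming 4 KiB pages):

#include <limits.h>
#include <stdio.h>

#define SKETCH_PAGE_MASK (~((size_t)4095))	/* assuming 4 KiB pages */

static size_t clamp_copy_len(size_t len)
{
	size_t max = (size_t)UINT_MAX & SKETCH_PAGE_MASK;

	return len < max ? len : max;
}

int main(void)
{
	printf("%zu\n", clamp_copy_len((size_t)1 << 40));	/* 4294963200 */
	printf("%zu\n", clamp_copy_len(8192));			/* 8192 */
	return 0;
}
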
+diff --git a/fs/fuse/passthrough.c b/fs/fuse/passthrough.c
+index 607ef735ad4ab3..eb97ac009e75d9 100644
+--- a/fs/fuse/passthrough.c
++++ b/fs/fuse/passthrough.c
+@@ -237,6 +237,11 @@ int fuse_backing_open(struct fuse_conn *fc, struct fuse_backing_map *map)
+ 	if (!file)
+ 		goto out;
+ 
++	/* read/write/splice/mmap passthrough only relevant for regular files */
++	res = d_is_dir(file->f_path.dentry) ? -EISDIR : -EINVAL;
++	if (!d_is_reg(file->f_path.dentry))
++		goto out_fput;
++
+ 	backing_sb = file_inode(file)->i_sb;
+ 	res = -ELOOP;
+ 	if (backing_sb->s_stack_depth >= fc->max_stack_depth)
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index a6c692cac61659..9adf36e6364b7d 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -70,6 +70,24 @@ static struct kernfs_open_node *of_on(struct kernfs_open_file *of)
+ 					 !list_empty(&of->list));
+ }
+ 
++/* Get active reference to kernfs node for an open file */
++static struct kernfs_open_file *kernfs_get_active_of(struct kernfs_open_file *of)
++{
++	/* Skip if file was already released */
++	if (unlikely(of->released))
++		return NULL;
++
++	if (!kernfs_get_active(of->kn))
++		return NULL;
++
++	return of;
++}
++
++static void kernfs_put_active_of(struct kernfs_open_file *of)
++{
++	return kernfs_put_active(of->kn);
++}
++
+ /**
+  * kernfs_deref_open_node_locked - Get kernfs_open_node corresponding to @kn
+  *
+@@ -139,7 +157,7 @@ static void kernfs_seq_stop_active(struct seq_file *sf, void *v)
+ 
+ 	if (ops->seq_stop)
+ 		ops->seq_stop(sf, v);
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ }
+ 
+ static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
+@@ -152,7 +170,7 @@ static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	ops = kernfs_ops(of->kn);
+@@ -238,7 +256,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn)) {
++	if (!kernfs_get_active_of(of)) {
+ 		len = -ENODEV;
+ 		mutex_unlock(&of->mutex);
+ 		goto out_free;
+@@ -252,7 +270,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	else
+ 		len = -EINVAL;
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	mutex_unlock(&of->mutex);
+ 
+ 	if (len < 0)
+@@ -323,7 +341,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn)) {
++	if (!kernfs_get_active_of(of)) {
+ 		mutex_unlock(&of->mutex);
+ 		len = -ENODEV;
+ 		goto out_free;
+@@ -335,7 +353,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	else
+ 		len = -EINVAL;
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	mutex_unlock(&of->mutex);
+ 
+ 	if (len > 0)
+@@ -357,13 +375,13 @@ static void kernfs_vma_open(struct vm_area_struct *vma)
+ 	if (!of->vm_ops)
+ 		return;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return;
+ 
+ 	if (of->vm_ops->open)
+ 		of->vm_ops->open(vma);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ }
+ 
+ static vm_fault_t kernfs_vma_fault(struct vm_fault *vmf)
+@@ -375,14 +393,14 @@ static vm_fault_t kernfs_vma_fault(struct vm_fault *vmf)
+ 	if (!of->vm_ops)
+ 		return VM_FAULT_SIGBUS;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return VM_FAULT_SIGBUS;
+ 
+ 	ret = VM_FAULT_SIGBUS;
+ 	if (of->vm_ops->fault)
+ 		ret = of->vm_ops->fault(vmf);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -395,7 +413,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
+ 	if (!of->vm_ops)
+ 		return VM_FAULT_SIGBUS;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return VM_FAULT_SIGBUS;
+ 
+ 	ret = 0;
+@@ -404,7 +422,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
+ 	else
+ 		file_update_time(file);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -418,14 +436,14 @@ static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
+ 	if (!of->vm_ops)
+ 		return -EINVAL;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return -EINVAL;
+ 
+ 	ret = -EINVAL;
+ 	if (of->vm_ops->access)
+ 		ret = of->vm_ops->access(vma, addr, buf, len, write);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -455,7 +473,7 @@ static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+ 	mutex_lock(&of->mutex);
+ 
+ 	rc = -ENODEV;
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		goto out_unlock;
+ 
+ 	ops = kernfs_ops(of->kn);
+@@ -490,7 +508,7 @@ static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+ 	}
+ 	vma->vm_ops = &kernfs_vm_ops;
+ out_put:
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ out_unlock:
+ 	mutex_unlock(&of->mutex);
+ 
+@@ -852,7 +870,7 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait)
+ 	struct kernfs_node *kn = kernfs_dentry_node(filp->f_path.dentry);
+ 	__poll_t ret;
+ 
+-	if (!kernfs_get_active(kn))
++	if (!kernfs_get_active_of(of))
+ 		return DEFAULT_POLLMASK|EPOLLERR|EPOLLPRI;
+ 
+ 	if (kn->attr.ops->poll)
+@@ -860,7 +878,7 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait)
+ 	else
+ 		ret = kernfs_generic_poll(of, wait);
+ 
+-	kernfs_put_active(kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -875,7 +893,7 @@ static loff_t kernfs_fop_llseek(struct file *file, loff_t offset, int whence)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn)) {
++	if (!kernfs_get_active_of(of)) {
+ 		mutex_unlock(&of->mutex);
+ 		return -ENODEV;
+ 	}
+@@ -886,7 +904,7 @@ static loff_t kernfs_fop_llseek(struct file *file, loff_t offset, int whence)
+ 	else
+ 		ret = generic_file_llseek(file, offset, whence);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	mutex_unlock(&of->mutex);
+ 	return ret;
+ }
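
Every handler in kernfs/file.c now funnels through kernfs_get_active_of()/kernfs_put_active_of(), so the of->released check is applied uniformly instead of each call site taking the active reference on of->kn directly, and a file that has already been released can no longer reach the ops. A userspace analogue of that guard-wrapper pattern, with illustrative types:

#include <stdbool.h>
#include <stdio.h>

struct node { int active; };
struct open_file { struct node *kn; bool released; };

static bool node_get_active(struct node *n) { n->active++; return true; }
static void node_put_active(struct node *n) { n->active--; }

/* One guard for every handler: refuse once the file is released. */
static struct open_file *get_active_of(struct open_file *of)
{
	if (of->released)
		return NULL;
	if (!node_get_active(of->kn))
		return NULL;
	return of;
}

static void put_active_of(struct open_file *of)
{
	node_put_active(of->kn);
}

int main(void)
{
	struct node n = { 0 };
	struct open_file of = { .kn = &n, .released = false };

	if (get_active_of(&of)) {
		/* ... call the op ... */
		put_active_of(&of);
	}
	of.released = true;
	printf("after release: %s\n", get_active_of(&of) ? "got" : "refused");
	return 0;
}
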
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 3bcf5c204578c1..97bd9d2a4b0cde 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -890,6 +890,8 @@ static void nfs_server_set_fsinfo(struct nfs_server *server,
+ 
+ 	if (fsinfo->xattr_support)
+ 		server->caps |= NFS_CAP_XATTR;
++	else
++		server->caps &= ~NFS_CAP_XATTR;
+ #endif
+ }
+ 
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 033feeab8c346e..a16a619fb8c33b 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -437,10 +437,11 @@ static void nfs_invalidate_folio(struct folio *folio, size_t offset,
+ 	dfprintk(PAGECACHE, "NFS: invalidate_folio(%lu, %zu, %zu)\n",
+ 		 folio->index, offset, length);
+ 
+-	if (offset != 0 || length < folio_size(folio))
+-		return;
+ 	/* Cancel any unstarted writes on this page */
+-	nfs_wb_folio_cancel(inode, folio);
++	if (offset != 0 || length < folio_size(folio))
++		nfs_wb_folio(inode, folio);
++	else
++		nfs_wb_folio_cancel(inode, folio);
+ 	folio_wait_private_2(folio); /* [DEPRECATED] */
+ 	trace_nfs_invalidate_folio(inode, folio_pos(folio) + offset, length);
+ }
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 8dc921d835388e..9edb5f9b0c4e47 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -293,7 +293,7 @@ ff_lseg_match_mirrors(struct pnfs_layout_segment *l1,
+ 		struct pnfs_layout_segment *l2)
+ {
+ 	const struct nfs4_ff_layout_segment *fl1 = FF_LAYOUT_LSEG(l1);
+-	const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l1);
++	const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l2);
+ 	u32 i;
+ 
+ 	if (fl1->mirror_array_cnt != fl2->mirror_array_cnt)
+@@ -773,8 +773,11 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ 			continue;
+ 
+ 		if (check_device &&
+-		    nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node))
++		    nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node)) {
++			// reinitialize the error state in case this is the last iteration
++			ds = ERR_PTR(-EINVAL);
+ 			continue;
++		}
+ 
+ 		*best_idx = idx;
+ 		break;
+@@ -804,7 +807,7 @@ ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
+ 	struct nfs4_pnfs_ds *ds;
+ 
+ 	ds = ff_layout_choose_valid_ds_for_read(lseg, start_idx, best_idx);
+-	if (ds)
++	if (!IS_ERR(ds))
+ 		return ds;
+ 	return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
+ }
+@@ -818,7 +821,7 @@ ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio,
+ 
+ 	ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
+ 					       best_idx);
+-	if (ds || !pgio->pg_mirror_idx)
++	if (!IS_ERR(ds) || !pgio->pg_mirror_idx)
+ 		return ds;
+ 	return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
+ }
+@@ -868,7 +871,7 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ 	req->wb_nio = 0;
+ 
+ 	ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+-	if (!ds) {
++	if (IS_ERR(ds)) {
+ 		if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ 			goto out_mds;
+ 		pnfs_generic_pg_cleanup(pgio);
+@@ -1072,11 +1075,13 @@ static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
+ {
+ 	u32 idx = hdr->pgio_mirror_idx + 1;
+ 	u32 new_idx = 0;
++	struct nfs4_pnfs_ds *ds;
+ 
+-	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx))
+-		ff_layout_send_layouterror(hdr->lseg);
+-	else
++	ds = ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx);
++	if (IS_ERR(ds))
+ 		pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
++	else
++		ff_layout_send_layouterror(hdr->lseg);
+ 	pnfs_read_resend_pnfs(hdr, new_idx);
+ }
+ 
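The flexfiles changes above move these helpers from NULL-on-failure to the ERR_PTR convention, so every caller must now test IS_ERR() instead of NULL (hence the fixed "if (ds)" and "if (!ds)" tests). A minimal userspace model of the kernel's ERR_PTR/IS_ERR encoding, with the macros re-derived here for illustration:

#include <stdio.h>

#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

static void *choose_ds(int fail)
{
	return fail ? ERR_PTR(-22) /* -EINVAL */ : (void *)0x1000;
}

int main(void)
{
	void *ds = choose_ds(1);

	if (IS_ERR(ds))
		printf("error %ld\n", PTR_ERR(ds));	/* error -22 */
	else
		printf("got %p\n", ds);
	return 0;
}
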
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a2fa6bc4d74e37..a32cc45425e287 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -761,8 +761,10 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	trace_nfs_setattr_enter(inode);
+ 
+ 	/* Write all dirty data */
+-	if (S_ISREG(inode->i_mode))
++	if (S_ISREG(inode->i_mode)) {
++		nfs_file_block_o_direct(NFS_I(inode));
+ 		nfs_sync_inode(inode);
++	}
+ 
+ 	fattr = nfs_alloc_fattr_with_label(NFS_SERVER(inode));
+ 	if (fattr == NULL) {
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 9dcbc339649221..0ef0fc6aba3b3c 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -531,6 +531,16 @@ static inline bool nfs_file_io_is_buffered(struct nfs_inode *nfsi)
+ 	return test_bit(NFS_INO_ODIRECT, &nfsi->flags) == 0;
+ }
+ 
++/* Must be called with exclusively locked inode->i_rwsem */
++static inline void nfs_file_block_o_direct(struct nfs_inode *nfsi)
++{
++	if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
++		clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
++		inode_dio_wait(&nfsi->vfs_inode);
++	}
++}
++
++
+ /* namespace.c */
+ #define NFS_PATH_CANONICAL 1
+ extern char *nfs_path(char **p, struct dentry *dentry,
+diff --git a/fs/nfs/io.c b/fs/nfs/io.c
+index 3388faf2acb9f5..d275b0a250bf3b 100644
+--- a/fs/nfs/io.c
++++ b/fs/nfs/io.c
+@@ -14,15 +14,6 @@
+ 
+ #include "internal.h"
+ 
+-/* Call with exclusively locked inode->i_rwsem */
+-static void nfs_block_o_direct(struct nfs_inode *nfsi, struct inode *inode)
+-{
+-	if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
+-		clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
+-		inode_dio_wait(inode);
+-	}
+-}
+-
+ /**
+  * nfs_start_io_read - declare the file is being used for buffered reads
+  * @inode: file inode
+@@ -57,7 +48,7 @@ nfs_start_io_read(struct inode *inode)
+ 	err = down_write_killable(&inode->i_rwsem);
+ 	if (err)
+ 		return err;
+-	nfs_block_o_direct(nfsi, inode);
++	nfs_file_block_o_direct(nfsi);
+ 	downgrade_write(&inode->i_rwsem);
+ 
+ 	return 0;
+@@ -90,7 +81,7 @@ nfs_start_io_write(struct inode *inode)
+ 
+ 	err = down_write_killable(&inode->i_rwsem);
+ 	if (!err)
+-		nfs_block_o_direct(NFS_I(inode), inode);
++		nfs_file_block_o_direct(NFS_I(inode));
+ 	return err;
+ }
+ 
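nfs_block_o_direct() moves to internal.h as nfs_file_block_o_direct() so the setattr path above and the fallocate, copy and clone paths below can force the file out of O_DIRECT mode before calling nfs_sync_inode(). The required ordering, sketched with stand-in functions:

#include <stdbool.h>
#include <stdio.h>

static bool odirect = true;	/* NFS_INO_ODIRECT analogue */

static void file_block_o_direct(void)
{
	if (odirect) {
		odirect = false;
		puts("draining in-flight direct I/O");
	}
}

static int sync_inode(void)
{
	puts("writing back dirty pages");
	return 0;
}

int main(void)
{
	/* i_rwsem is assumed held exclusively here */
	file_block_o_direct();
	return sync_inode();
}
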
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 510d0a16cfe917..e2213ef18baede 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -453,12 +453,13 @@ static void nfs_local_call_read(struct work_struct *work)
+ 	nfs_local_iter_init(&iter, iocb, READ);
+ 
+ 	status = filp->f_op->read_iter(&iocb->kiocb, &iter);
++
++	revert_creds(save_cred);
++
+ 	if (status != -EIOCBQUEUED) {
+ 		nfs_local_read_done(iocb, status);
+ 		nfs_local_pgio_release(iocb);
+ 	}
+-
+-	revert_creds(save_cred);
+ }
+ 
+ static int
+@@ -649,14 +650,15 @@ static void nfs_local_call_write(struct work_struct *work)
+ 	file_start_write(filp);
+ 	status = filp->f_op->write_iter(&iocb->kiocb, &iter);
+ 	file_end_write(filp);
++
++	revert_creds(save_cred);
++	current->flags = old_flags;
++
+ 	if (status != -EIOCBQUEUED) {
+ 		nfs_local_write_done(iocb, status);
+ 		nfs_local_vfs_getattr(iocb);
+ 		nfs_local_pgio_release(iocb);
+ 	}
+-
+-	revert_creds(save_cred);
+-	current->flags = old_flags;
+ }
+ 
+ static int
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 01c01f45358b7c..48ee3d5d89c4ae 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -114,6 +114,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 	exception.inode = inode;
+ 	exception.state = lock->open_context->state;
+ 
++	nfs_file_block_o_direct(NFS_I(inode));
+ 	err = nfs_sync_inode(inode);
+ 	if (err)
+ 		goto out;
+@@ -430,6 +431,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 		return status;
+ 	}
+ 
++	nfs_file_block_o_direct(NFS_I(dst_inode));
+ 	status = nfs_sync_inode(dst_inode);
+ 	if (status)
+ 		return status;
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 5e9d66f3466c8d..1fa69a0b33ab19 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -291,9 +291,11 @@ static loff_t nfs42_remap_file_range(struct file *src_file, loff_t src_off,
+ 
+ 	/* flush all pending writes on both src and dst so that server
+ 	 * has the latest data */
++	nfs_file_block_o_direct(NFS_I(src_inode));
+ 	ret = nfs_sync_inode(src_inode);
+ 	if (ret)
+ 		goto out_unlock;
++	nfs_file_block_o_direct(NFS_I(dst_inode));
+ 	ret = nfs_sync_inode(dst_inode);
+ 	if (ret)
+ 		goto out_unlock;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 7e203857f46687..8d492e3b216312 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -4007,8 +4007,10 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ 				     res.attr_bitmask[2];
+ 		}
+ 		memcpy(server->attr_bitmask, res.attr_bitmask, sizeof(server->attr_bitmask));
+-		server->caps &= ~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS |
+-				  NFS_CAP_SYMLINKS| NFS_CAP_SECURITY_LABEL);
++		server->caps &=
++			~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS |
++			  NFS_CAP_SECURITY_LABEL | NFS_CAP_FS_LOCATIONS |
++			  NFS_CAP_OPEN_XOR | NFS_CAP_DELEGTIME);
+ 		server->fattr_valid = NFS_ATTR_FATTR_V4;
+ 		if (res.attr_bitmask[0] & FATTR4_WORD0_ACL &&
+ 				res.acl_bitmask & ACL4_SUPPORT_ALLOW_ACL)
+@@ -4082,7 +4084,6 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ 	};
+ 	int err;
+ 
+-	nfs_server_set_init_caps(server);
+ 	do {
+ 		err = nfs4_handle_exception(server,
+ 				_nfs4_server_capabilities(server, fhandle),
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index ff29335ed85999..08fd1c0d45ec27 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -2045,6 +2045,7 @@ int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio)
+ 		 * release it */
+ 		nfs_inode_remove_request(req);
+ 		nfs_unlock_and_release_request(req);
++		folio_cancel_dirty(folio);
+ 	}
+ 
+ 	return ret;
+diff --git a/fs/ocfs2/extent_map.c b/fs/ocfs2/extent_map.c
+index 930150ed5db15f..ef147e8b327126 100644
+--- a/fs/ocfs2/extent_map.c
++++ b/fs/ocfs2/extent_map.c
+@@ -706,6 +706,8 @@ int ocfs2_extent_map_get_blocks(struct inode *inode, u64 v_blkno, u64 *p_blkno,
+  * it not only handles the fiemap for inlined files, but also deals
+  * with the fast symlink, cause they have no difference for extent
+  * mapping per se.
++ *
++ * Must be called with ip_alloc_sem semaphore held.
+  */
+ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ 			       struct fiemap_extent_info *fieinfo,
+@@ -717,6 +719,7 @@ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ 	u64 phys;
+ 	u32 flags = FIEMAP_EXTENT_DATA_INLINE|FIEMAP_EXTENT_LAST;
+ 	struct ocfs2_inode_info *oi = OCFS2_I(inode);
++	lockdep_assert_held_read(&oi->ip_alloc_sem);
+ 
+ 	di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	if (ocfs2_inode_is_fast_symlink(inode))
+@@ -732,8 +735,11 @@ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ 			phys += offsetof(struct ocfs2_dinode,
+ 					 id2.i_data.id_data);
+ 
++		/* Release the ip_alloc_sem to prevent deadlock on page fault */
++		up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		ret = fiemap_fill_next_extent(fieinfo, 0, phys, id_count,
+ 					      flags);
++		down_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -802,9 +808,11 @@ int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		len_bytes = (u64)le16_to_cpu(rec.e_leaf_clusters) << osb->s_clustersize_bits;
+ 		phys_bytes = le64_to_cpu(rec.e_blkno) << osb->sb->s_blocksize_bits;
+ 		virt_bytes = (u64)le32_to_cpu(rec.e_cpos) << osb->s_clustersize_bits;
+-
++		/* Release the ip_alloc_sem to prevent deadlock on page fault */
++		up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		ret = fiemap_fill_next_extent(fieinfo, virt_bytes, phys_bytes,
+ 					      len_bytes, fe_flags);
++		down_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		if (ret)
+ 			break;
+ 
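Both ocfs2 hunks drop ip_alloc_sem around fiemap_fill_next_extent() because the fill helper copies extent data to user memory, and a page fault during that copy can recurse into the filesystem and try to take ip_alloc_sem again. The generic drop-call-retake shape, with placeholder lock ops (the pattern assumes the walk state stays valid across the drop):

#include <stdio.h>

static void lock_read(void)   { puts("down_read"); }
static void unlock_read(void) { puts("up_read"); }

static int fill_extent(void)
{
	puts("copying extent to user memory (may fault)");
	return 0;
}

int main(void)
{
	int ret;

	lock_read();
	/* ... walk extents under the lock ... */
	unlock_read();		/* drop before the copy that can fault */
	ret = fill_extent();
	lock_read();		/* retake and continue the walk */
	unlock_read();
	return ret;
}
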
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 409bc1d11eca39..8e1e48760ffe05 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -390,7 +390,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 	if (proc_alloc_inum(&dp->low_ino))
+ 		goto out_free_entry;
+ 
+-	pde_set_flags(dp);
++	if (!S_ISDIR(dp->mode))
++		pde_set_flags(dp);
+ 
+ 	write_lock(&proc_subdir_lock);
+ 	dp->parent = dir;
+diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
+index d98e0d2de09fd0..3c39cfacb25183 100644
+--- a/fs/resctrl/ctrlmondata.c
++++ b/fs/resctrl/ctrlmondata.c
+@@ -625,11 +625,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
+ 		 */
+ 		list_for_each_entry(d, &r->mon_domains, hdr.list) {
+ 			if (d->ci_id == domid) {
+-				rr.ci_id = d->ci_id;
+ 				cpu = cpumask_any(&d->hdr.cpu_mask);
+ 				ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+ 				if (!ci)
+ 					continue;
++				rr.ci = ci;
+ 				mon_event_read(&rr, r, NULL, rdtgrp,
+ 					       &ci->shared_cpu_map, evtid, false);
+ 				goto checkresult;
+diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
+index 0a1eedba2b03ad..9a8cf6f11151d9 100644
+--- a/fs/resctrl/internal.h
++++ b/fs/resctrl/internal.h
+@@ -98,7 +98,7 @@ struct mon_data {
+  *	   domains in @r sharing L3 @ci.id
+  * @evtid: Which monitor event to read.
+  * @first: Initialize MBM counter when true.
+- * @ci_id: Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains.
++ * @ci:    Cacheinfo for L3. Only set when @d is NULL. Used when summing domains.
+  * @err:   Error encountered when reading counter.
+  * @val:   Returned value of event counter. If @rgrp is a parent resource group,
+  *	   @val includes the sum of event counts from its child resource groups.
+@@ -112,7 +112,7 @@ struct rmid_read {
+ 	struct rdt_mon_domain	*d;
+ 	enum resctrl_event_id	evtid;
+ 	bool			first;
+-	unsigned int		ci_id;
++	struct cacheinfo	*ci;
+ 	int			err;
+ 	u64			val;
+ 	void			*arch_mon_ctx;
+diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
+index f5637855c3acac..7326c28a7908f3 100644
+--- a/fs/resctrl/monitor.c
++++ b/fs/resctrl/monitor.c
+@@ -361,7 +361,6 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ {
+ 	int cpu = smp_processor_id();
+ 	struct rdt_mon_domain *d;
+-	struct cacheinfo *ci;
+ 	struct mbm_state *m;
+ 	int err, ret;
+ 	u64 tval = 0;
+@@ -389,8 +388,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ 	}
+ 
+ 	/* Summing domains that share a cache, must be on a CPU for that cache. */
+-	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+-	if (!ci || ci->id != rr->ci_id)
++	if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map))
+ 		return -EINVAL;
+ 
+ 	/*
+@@ -402,7 +400,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ 	 */
+ 	ret = -EINVAL;
+ 	list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
+-		if (d->ci_id != rr->ci_id)
++		if (d->ci_id != rr->ci->id)
+ 			continue;
+ 		err = resctrl_arch_rmid_read(rr->r, d, closid, rmid,
+ 					     rr->evtid, &tval, rr->arch_mon_ctx);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 89160bc34d3539..4aaa9e8d9cbeff 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -87,7 +87,7 @@
+ #define SMB_INTERFACE_POLL_INTERVAL	600
+ 
+ /* maximum number of PDUs in one compound */
+-#define MAX_COMPOUND 7
++#define MAX_COMPOUND 10
+ 
+ /*
+  * Default number of credits to keep available for SMB3.
+@@ -1877,9 +1877,12 @@ static inline bool is_replayable_error(int error)
+ 
+ 
+ /* cifs_get_writable_file() flags */
+-#define FIND_WR_ANY         0
+-#define FIND_WR_FSUID_ONLY  1
+-#define FIND_WR_WITH_DELETE 2
++enum cifs_writable_file_flags {
++	FIND_WR_ANY			= 0U,
++	FIND_WR_FSUID_ONLY		= (1U << 0),
++	FIND_WR_WITH_DELETE		= (1U << 1),
++	FIND_WR_NO_PENDING_DELETE	= (1U << 2),
++};
+ 
+ #define   MID_FREE 0
+ #define   MID_REQUEST_ALLOCATED 1
+@@ -2339,6 +2342,8 @@ struct smb2_compound_vars {
+ 	struct kvec qi_iov;
+ 	struct kvec io_iov[SMB2_IOCTL_IOV_SIZE];
+ 	struct kvec si_iov[SMB2_SET_INFO_IOV_SIZE];
++	struct kvec unlink_iov[SMB2_SET_INFO_IOV_SIZE];
++	struct kvec rename_iov[SMB2_SET_INFO_IOV_SIZE];
+ 	struct kvec close_iov;
+ 	struct smb2_file_rename_info_hdr rename_info;
+ 	struct smb2_file_link_info_hdr link_info;
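
The writable-file search flags become a proper bit-flag enum with the new FIND_WR_NO_PENDING_DELETE bit, used in file.c below to skip handles whose file has a pending delete. Combining and testing the bits, with the enum values copied from the hunk above:

#include <stdio.h>

enum cifs_writable_file_flags {
	FIND_WR_ANY			= 0U,
	FIND_WR_FSUID_ONLY		= (1U << 0),
	FIND_WR_WITH_DELETE		= (1U << 1),
	FIND_WR_NO_PENDING_DELETE	= (1U << 2),
};

int main(void)
{
	unsigned int flags = FIND_WR_FSUID_ONLY | FIND_WR_NO_PENDING_DELETE;

	printf("fsuid_only=%d skip_deleted=%d\n",
	       !!(flags & FIND_WR_FSUID_ONLY),
	       !!(flags & FIND_WR_NO_PENDING_DELETE));
	return 0;
}
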
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 1421bde045c21d..8b407d2a8516d1 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -998,7 +998,10 @@ int cifs_open(struct inode *inode, struct file *file)
+ 
+ 	/* Get the cached handle as SMB2 close is deferred */
+ 	if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) {
+-		rc = cifs_get_writable_path(tcon, full_path, FIND_WR_FSUID_ONLY, &cfile);
++		rc = cifs_get_writable_path(tcon, full_path,
++					    FIND_WR_FSUID_ONLY |
++					    FIND_WR_NO_PENDING_DELETE,
++					    &cfile);
+ 	} else {
+ 		rc = cifs_get_readable_path(tcon, full_path, &cfile);
+ 	}
+@@ -2530,6 +2533,9 @@ cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags,
+ 			continue;
+ 		if (with_delete && !(open_file->fid.access & DELETE))
+ 			continue;
++		if ((flags & FIND_WR_NO_PENDING_DELETE) &&
++		    open_file->status_file_deleted)
++			continue;
+ 		if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
+ 			if (!open_file->invalidHandle) {
+ 				/* found a good writable file */
+@@ -2647,6 +2653,16 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
+ 		spin_unlock(&tcon->open_file_lock);
+ 		free_dentry_path(page);
+ 		*ret_file = find_readable_file(cinode, 0);
++		if (*ret_file) {
++			spin_lock(&cinode->open_file_lock);
++			if ((*ret_file)->status_file_deleted) {
++				spin_unlock(&cinode->open_file_lock);
++				cifsFileInfo_put(*ret_file);
++				*ret_file = NULL;
++			} else {
++				spin_unlock(&cinode->open_file_lock);
++			}
++		}
+ 		return *ret_file ? 0 : -ENOENT;
+ 	}
+ 
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index fe453a4b3dc831..11d442e8b3d622 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1931,7 +1931,7 @@ cifs_drop_nlink(struct inode *inode)
+  * but will return the EACCES to the caller. Note that the VFS does not call
+  * unlink on negative dentries currently.
+  */
+-int cifs_unlink(struct inode *dir, struct dentry *dentry)
++static int __cifs_unlink(struct inode *dir, struct dentry *dentry, bool sillyrename)
+ {
+ 	int rc = 0;
+ 	unsigned int xid;
+@@ -2003,7 +2003,11 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 		goto psx_del_no_retry;
+ 	}
+ 
+-	rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
++	if (sillyrename || (server->vals->protocol_id > SMB10_PROT_ID &&
++			    d_is_positive(dentry) && d_count(dentry) > 2))
++		rc = -EBUSY;
++	else
++		rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
+ 
+ psx_del_no_retry:
+ 	if (!rc) {
+@@ -2071,6 +2075,11 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 	return rc;
+ }
+ 
++int cifs_unlink(struct inode *dir, struct dentry *dentry)
++{
++	return __cifs_unlink(dir, dentry, false);
++}
++
+ static int
+ cifs_mkdir_qinfo(struct inode *parent, struct dentry *dentry, umode_t mode,
+ 		 const char *full_path, struct cifs_sb_info *cifs_sb,
+@@ -2358,14 +2367,16 @@ int cifs_rmdir(struct inode *inode, struct dentry *direntry)
+ 	rc = server->ops->rmdir(xid, tcon, full_path, cifs_sb);
+ 	cifs_put_tlink(tlink);
+ 
++	cifsInode = CIFS_I(d_inode(direntry));
++
+ 	if (!rc) {
++		set_bit(CIFS_INO_DELETE_PENDING, &cifsInode->flags);
+ 		spin_lock(&d_inode(direntry)->i_lock);
+ 		i_size_write(d_inode(direntry), 0);
+ 		clear_nlink(d_inode(direntry));
+ 		spin_unlock(&d_inode(direntry)->i_lock);
+ 	}
+ 
+-	cifsInode = CIFS_I(d_inode(direntry));
+ 	/* force revalidate to go get info when needed */
+ 	cifsInode->time = 0;
+ 
+@@ -2458,8 +2469,11 @@ cifs_do_rename(const unsigned int xid, struct dentry *from_dentry,
+ 	}
+ #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+ do_rename_exit:
+-	if (rc == 0)
++	if (rc == 0) {
+ 		d_move(from_dentry, to_dentry);
++		/* Force a new lookup */
++		d_drop(from_dentry);
++	}
+ 	cifs_put_tlink(tlink);
+ 	return rc;
+ }
+@@ -2470,6 +2484,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 	     struct dentry *target_dentry, unsigned int flags)
+ {
+ 	const char *from_name, *to_name;
++	struct TCP_Server_Info *server;
+ 	void *page1, *page2;
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct tcon_link *tlink;
+@@ -2505,6 +2520,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 	if (IS_ERR(tlink))
+ 		return PTR_ERR(tlink);
+ 	tcon = tlink_tcon(tlink);
++	server = tcon->ses->server;
+ 
+ 	page1 = alloc_dentry_path();
+ 	page2 = alloc_dentry_path();
+@@ -2591,19 +2607,53 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 
+ unlink_target:
+ #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+-
+-	/* Try unlinking the target dentry if it's not negative */
+-	if (d_really_is_positive(target_dentry) && (rc == -EACCES || rc == -EEXIST)) {
+-		if (d_is_dir(target_dentry))
+-			tmprc = cifs_rmdir(target_dir, target_dentry);
+-		else
+-			tmprc = cifs_unlink(target_dir, target_dentry);
+-		if (tmprc)
+-			goto cifs_rename_exit;
+-		rc = cifs_do_rename(xid, source_dentry, from_name,
+-				    target_dentry, to_name);
+-		if (!rc)
+-			rehash = false;
++	if (d_really_is_positive(target_dentry)) {
++		if (!rc) {
++			struct inode *inode = d_inode(target_dentry);
++			/*
++			 * Samba and ksmbd servers allow renaming a target
++			 * directory that is open, so make sure to update
++			 * ->i_nlink and then mark it as delete pending.
++			 */
++			if (S_ISDIR(inode->i_mode)) {
++				drop_cached_dir_by_name(xid, tcon, to_name, cifs_sb);
++				spin_lock(&inode->i_lock);
++				i_size_write(inode, 0);
++				clear_nlink(inode);
++				spin_unlock(&inode->i_lock);
++				set_bit(CIFS_INO_DELETE_PENDING, &CIFS_I(inode)->flags);
++				CIFS_I(inode)->time = 0; /* force reval */
++				inode_set_ctime_current(inode);
++				inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
++			}
++		} else if (rc == -EACCES || rc == -EEXIST) {
++			/*
++			 * Rename failed, possibly due to a busy target.
++			 * Retry it by unliking the target first.
++			 * Retry it by unlinking the target first.
++			if (d_is_dir(target_dentry)) {
++				tmprc = cifs_rmdir(target_dir, target_dentry);
++			} else {
++				tmprc = __cifs_unlink(target_dir, target_dentry,
++						      server->vals->protocol_id > SMB10_PROT_ID);
++			}
++			if (tmprc) {
++				/*
++				 * Some servers will return STATUS_ACCESS_DENIED
++				 * or STATUS_DIRECTORY_NOT_EMPTY when failing to
++				 * rename a non-empty directory.  Make sure to
++				 * propagate the appropriate error back to
++				 * userspace.
++				 */
++				if (tmprc == -EEXIST || tmprc == -ENOTEMPTY)
++					rc = tmprc;
++				goto cifs_rename_exit;
++			}
++			rc = cifs_do_rename(xid, source_dentry, from_name,
++					    target_dentry, to_name);
++			if (!rc)
++				rehash = false;
++		}
+ 	}
+ 
+ 	/* force revalidate to go get info when needed */
+@@ -2629,6 +2679,8 @@ cifs_dentry_needs_reval(struct dentry *dentry)
+ 	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+ 	struct cached_fid *cfid = NULL;
+ 
++	if (test_bit(CIFS_INO_DELETE_PENDING, &cifs_i->flags))
++		return false;
+ 	if (cifs_i->time == 0)
+ 		return true;
+ 
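The busy-dentry test added to __cifs_unlink() above is worth restating on its own. A minimal sketch of the heuristic, assuming the SMB10_PROT_ID comparison from this hunk (the helper name and the u16 type are illustrative only):

static bool cifs_should_defer_delete(struct dentry *dentry, u16 protocol_id)
{
	/*
	 * d_count() > 2 suggests a holder beyond the unlink caller and
	 * the dcache itself, e.g. an open file; deleting on the server
	 * now would break that user, so -EBUSY is returned and the
	 * caller falls back to a rename-then-delete-pending scheme.
	 */
	return protocol_id > SMB10_PROT_ID &&
	       d_is_positive(dentry) && d_count(dentry) > 2;
}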
+diff --git a/fs/smb/client/smb2glob.h b/fs/smb/client/smb2glob.h
+index 224495322a05da..e56e4d402f1382 100644
+--- a/fs/smb/client/smb2glob.h
++++ b/fs/smb/client/smb2glob.h
+@@ -30,10 +30,9 @@ enum smb2_compound_ops {
+ 	SMB2_OP_QUERY_DIR,
+ 	SMB2_OP_MKDIR,
+ 	SMB2_OP_RENAME,
+-	SMB2_OP_DELETE,
+ 	SMB2_OP_HARDLINK,
+ 	SMB2_OP_SET_EOF,
+-	SMB2_OP_RMDIR,
++	SMB2_OP_UNLINK,
+ 	SMB2_OP_POSIX_QUERY_INFO,
+ 	SMB2_OP_SET_REPARSE,
+ 	SMB2_OP_GET_REPARSE,
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 8b271bbe41c471..86cad8ee8e6f3b 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -346,9 +346,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 			trace_smb3_posix_query_info_compound_enter(xid, tcon->tid,
+ 								   ses->Suid, full_path);
+ 			break;
+-		case SMB2_OP_DELETE:
+-			trace_smb3_delete_enter(xid, tcon->tid, ses->Suid, full_path);
+-			break;
+ 		case SMB2_OP_MKDIR:
+ 			/*
+ 			 * Directories are created through parameters in the
+@@ -356,23 +353,40 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 			 */
+ 			trace_smb3_mkdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ 			break;
+-		case SMB2_OP_RMDIR:
+-			rqst[num_rqst].rq_iov = &vars->si_iov[0];
++		case SMB2_OP_UNLINK:
++			rqst[num_rqst].rq_iov = vars->unlink_iov;
+ 			rqst[num_rqst].rq_nvec = 1;
+ 
+ 			size[0] = 1; /* sizeof __u8 See MS-FSCC section 2.4.11 */
+ 			data[0] = &delete_pending[0];
+ 
+-			rc = SMB2_set_info_init(tcon, server,
+-						&rqst[num_rqst], COMPOUND_FID,
+-						COMPOUND_FID, current->tgid,
+-						FILE_DISPOSITION_INFORMATION,
+-						SMB2_O_INFO_FILE, 0, data, size);
+-			if (rc)
++			if (cfile) {
++				rc = SMB2_set_info_init(tcon, server,
++							&rqst[num_rqst],
++							cfile->fid.persistent_fid,
++							cfile->fid.volatile_fid,
++							current->tgid,
++							FILE_DISPOSITION_INFORMATION,
++							SMB2_O_INFO_FILE, 0,
++							data, size);
++			} else {
++				rc = SMB2_set_info_init(tcon, server,
++							&rqst[num_rqst],
++							COMPOUND_FID,
++							COMPOUND_FID,
++							current->tgid,
++							FILE_DISPOSITION_INFORMATION,
++							SMB2_O_INFO_FILE, 0,
++							data, size);
++			}
++			if (!rc && (!cfile || num_rqst > 1)) {
++				smb2_set_next_command(tcon, &rqst[num_rqst]);
++				smb2_set_related(&rqst[num_rqst]);
++			} else if (rc) {
+ 				goto finished;
+-			smb2_set_next_command(tcon, &rqst[num_rqst]);
+-			smb2_set_related(&rqst[num_rqst++]);
+-			trace_smb3_rmdir_enter(xid, tcon->tid, ses->Suid, full_path);
++			}
++			num_rqst++;
++			trace_smb3_unlink_enter(xid, tcon->tid, ses->Suid, full_path);
+ 			break;
+ 		case SMB2_OP_SET_EOF:
+ 			rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -442,7 +456,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 							   ses->Suid, full_path);
+ 			break;
+ 		case SMB2_OP_RENAME:
+-			rqst[num_rqst].rq_iov = &vars->si_iov[0];
++			rqst[num_rqst].rq_iov = vars->rename_iov;
+ 			rqst[num_rqst].rq_nvec = 2;
+ 
+ 			len = in_iov[i].iov_len;
+@@ -732,19 +746,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 				trace_smb3_posix_query_info_compound_done(xid, tcon->tid,
+ 									  ses->Suid);
+ 			break;
+-		case SMB2_OP_DELETE:
+-			if (rc)
+-				trace_smb3_delete_err(xid, tcon->tid, ses->Suid, rc);
+-			else {
+-				/*
+-				 * If dentry (hence, inode) is NULL, lease break is going to
+-				 * take care of degrading leases on handles for deleted files.
+-				 */
+-				if (inode)
+-					cifs_mark_open_handles_for_deleted_file(inode, full_path);
+-				trace_smb3_delete_done(xid, tcon->tid, ses->Suid);
+-			}
+-			break;
+ 		case SMB2_OP_MKDIR:
+ 			if (rc)
+ 				trace_smb3_mkdir_err(xid, tcon->tid, ses->Suid, rc);
+@@ -765,11 +766,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 				trace_smb3_rename_done(xid, tcon->tid, ses->Suid);
+ 			SMB2_set_info_free(&rqst[num_rqst++]);
+ 			break;
+-		case SMB2_OP_RMDIR:
+-			if (rc)
+-				trace_smb3_rmdir_err(xid, tcon->tid, ses->Suid, rc);
++		case SMB2_OP_UNLINK:
++			if (!rc)
++				trace_smb3_unlink_done(xid, tcon->tid, ses->Suid);
+ 			else
+-				trace_smb3_rmdir_done(xid, tcon->tid, ses->Suid);
++				trace_smb3_unlink_err(xid, tcon->tid, ses->Suid, rc);
+ 			SMB2_set_info_free(&rqst[num_rqst++]);
+ 			break;
+ 		case SMB2_OP_SET_EOF:
+@@ -1165,7 +1166,7 @@ smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ 			     FILE_OPEN, CREATE_NOT_FILE, ACL_NO_MODE);
+ 	return smb2_compound_op(xid, tcon, cifs_sb,
+ 				name, &oparms, NULL,
+-				&(int){SMB2_OP_RMDIR}, 1,
++				&(int){SMB2_OP_UNLINK}, 1,
+ 				NULL, NULL, NULL, NULL);
+ }
+ 
+@@ -1174,20 +1175,29 @@ smb2_unlink(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ 	    struct cifs_sb_info *cifs_sb, struct dentry *dentry)
+ {
+ 	struct cifs_open_parms oparms;
++	struct inode *inode = NULL;
++	int rc;
+ 
+-	oparms = CIFS_OPARMS(cifs_sb, tcon, name,
+-			     DELETE, FILE_OPEN,
+-			     CREATE_DELETE_ON_CLOSE | OPEN_REPARSE_POINT,
+-			     ACL_NO_MODE);
+-	int rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
+-				  NULL, &(int){SMB2_OP_DELETE}, 1,
+-				  NULL, NULL, NULL, dentry);
++	if (dentry)
++		inode = d_inode(dentry);
++
++	oparms = CIFS_OPARMS(cifs_sb, tcon, name, DELETE,
++			     FILE_OPEN, OPEN_REPARSE_POINT, ACL_NO_MODE);
++	rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
++			      NULL, &(int){SMB2_OP_UNLINK},
++			      1, NULL, NULL, NULL, dentry);
+ 	if (rc == -EINVAL) {
+ 		cifs_dbg(FYI, "invalid lease key, resending request without lease");
+ 		rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
+-				      NULL, &(int){SMB2_OP_DELETE}, 1,
+-				      NULL, NULL, NULL, NULL);
++				      NULL, &(int){SMB2_OP_UNLINK},
++				      1, NULL, NULL, NULL, NULL);
+ 	}
++	/*
++	 * If dentry (hence, inode) is NULL, lease break is going to
++	 * take care of degrading leases on handles for deleted files.
++	 */
++	if (!rc && inode)
++		cifs_mark_open_handles_for_deleted_file(inode, name);
+ 	return rc;
+ }
+ 
+@@ -1441,3 +1451,113 @@ int smb2_query_reparse_point(const unsigned int xid,
+ 	cifs_free_open_info(&data);
+ 	return rc;
+ }
++
++static inline __le16 *utf16_smb2_path(struct cifs_sb_info *cifs_sb,
++				      const char *name, size_t namelen)
++{
++	int len;
++
++	if (*name == '\\' ||
++	    (cifs_sb_master_tlink(cifs_sb) &&
++	     cifs_sb_master_tcon(cifs_sb)->posix_extensions && *name == '/'))
++		name++;
++	return cifs_strndup_to_utf16(name, namelen, &len,
++				     cifs_sb->local_nls,
++				     cifs_remap(cifs_sb));
++}
++
++int smb2_rename_pending_delete(const char *full_path,
++			       struct dentry *dentry,
++			       const unsigned int xid)
++{
++	struct cifs_sb_info *cifs_sb = CIFS_SB(d_inode(dentry)->i_sb);
++	struct cifsInodeInfo *cinode = CIFS_I(d_inode(dentry));
++	__le16 *utf16_path __free(kfree) = NULL;
++	__u32 co = file_create_options(dentry);
++	int cmds[] = {
++		SMB2_OP_SET_INFO,
++		SMB2_OP_RENAME,
++		SMB2_OP_UNLINK,
++	};
++	const int num_cmds = ARRAY_SIZE(cmds);
++	char *to_name __free(kfree) = NULL;
++	__u32 attrs = cinode->cifsAttrs;
++	struct cifs_open_parms oparms;
++	static atomic_t sillycounter;
++	struct cifsFileInfo *cfile;
++	struct tcon_link *tlink;
++	struct cifs_tcon *tcon;
++	struct kvec iov[2];
++	const char *ppath;
++	void *page;
++	size_t len;
++	int rc;
++
++	tlink = cifs_sb_tlink(cifs_sb);
++	if (IS_ERR(tlink))
++		return PTR_ERR(tlink);
++	tcon = tlink_tcon(tlink);
++
++	page = alloc_dentry_path();
++
++	ppath = build_path_from_dentry(dentry->d_parent, page);
++	if (IS_ERR(ppath)) {
++		rc = PTR_ERR(ppath);
++		goto out;
++	}
++
++	len = strlen(ppath) + strlen("/.__smb1234") + 1;
++	to_name = kmalloc(len, GFP_KERNEL);
++	if (!to_name) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	scnprintf(to_name, len, "%s%c.__smb%04X", ppath, CIFS_DIR_SEP(cifs_sb),
++		  atomic_inc_return(&sillycounter) & 0xffff);
++
++	utf16_path = utf16_smb2_path(cifs_sb, to_name, len);
++	if (!utf16_path) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	drop_cached_dir_by_name(xid, tcon, full_path, cifs_sb);
++	oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
++			     DELETE | FILE_WRITE_ATTRIBUTES,
++			     FILE_OPEN, co, ACL_NO_MODE);
++
++	attrs &= ~ATTR_READONLY;
++	if (!attrs)
++		attrs = ATTR_NORMAL;
++	if (d_inode(dentry)->i_nlink <= 1)
++		attrs |= ATTR_HIDDEN;
++	iov[0].iov_base = &(FILE_BASIC_INFO) {
++		.Attributes = cpu_to_le32(attrs),
++	};
++	iov[0].iov_len = sizeof(FILE_BASIC_INFO);
++	iov[1].iov_base = utf16_path;
++	iov[1].iov_len = sizeof(*utf16_path) * UniStrlen((wchar_t *)utf16_path);
++
++	cifs_get_writable_path(tcon, full_path, FIND_WR_WITH_DELETE, &cfile);
++	rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, iov,
++			      cmds, num_cmds, cfile, NULL, NULL, dentry);
++	if (rc == -EINVAL) {
++		cifs_dbg(FYI, "invalid lease key, resending request without lease\n");
++		cifs_get_writable_path(tcon, full_path,
++				       FIND_WR_WITH_DELETE, &cfile);
++		rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, iov,
++				      cmds, num_cmds, cfile, NULL, NULL, NULL);
++	}
++	if (!rc) {
++		set_bit(CIFS_INO_DELETE_PENDING, &cinode->flags);
++	} else {
++		cifs_tcon_dbg(FYI, "%s: failed to rename '%s' to '%s': %d\n",
++			      __func__, full_path, to_name, rc);
++		rc = -EIO;
++	}
++out:
++	cifs_put_tlink(tlink);
++	free_dentry_path(page);
++	return rc;
++}
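The silly-rename target built by smb2_rename_pending_delete() follows the usual NFS-style naming. A standalone, runnable illustration of the exact format string used above, with userspace stand-ins for the kernel helpers:

#include <stdio.h>

int main(void)
{
	unsigned int sillycounter = 0x1a2b;	/* stand-in for the atomic counter */
	const char *ppath = "/dir";		/* parent path from the dentry */
	char to_name[64];

	/* Mirrors scnprintf(to_name, len, "%s%c.__smb%04X", ...) above. */
	snprintf(to_name, sizeof(to_name), "%s%c.__smb%04X",
		 ppath, '/', sillycounter & 0xffff);
	puts(to_name);	/* prints "/dir/.__smb1A2B" */
	return 0;
}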
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index d3e09b10dea476..cd051bb1a9d608 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -2640,13 +2640,35 @@ smb2_set_next_command(struct cifs_tcon *tcon, struct smb_rqst *rqst)
+ 	}
+ 
+ 	/* SMB headers in a compound are 8 byte aligned. */
+-	if (!IS_ALIGNED(len, 8)) {
+-		num_padding = 8 - (len & 7);
++	if (IS_ALIGNED(len, 8))
++		goto out;
++
++	num_padding = 8 - (len & 7);
++	if (smb3_encryption_required(tcon)) {
++		int i;
++
++		/*
++		 * Flatten the request into a single buffer with the required
++		 * padding, as the encryption layer can't handle padding iovs.
++		 */
++		for (i = 1; i < rqst->rq_nvec; i++) {
++			memcpy(rqst->rq_iov[0].iov_base +
++			       rqst->rq_iov[0].iov_len,
++			       rqst->rq_iov[i].iov_base,
++			       rqst->rq_iov[i].iov_len);
++			rqst->rq_iov[0].iov_len += rqst->rq_iov[i].iov_len;
++		}
++		memset(rqst->rq_iov[0].iov_base + rqst->rq_iov[0].iov_len,
++		       0, num_padding);
++		rqst->rq_iov[0].iov_len += num_padding;
++		rqst->rq_nvec = 1;
++	} else {
+ 		rqst->rq_iov[rqst->rq_nvec].iov_base = smb2_padding;
+ 		rqst->rq_iov[rqst->rq_nvec].iov_len = num_padding;
+ 		rqst->rq_nvec++;
+-		len += num_padding;
+ 	}
++	len += num_padding;
++out:
+ 	shdr->NextCommand = cpu_to_le32(len);
+ }
+ 
+@@ -5377,6 +5399,7 @@ struct smb_version_operations smb20_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ #endif /* CIFS_ALLOW_INSECURE_LEGACY */
+ 
+@@ -5482,6 +5505,7 @@ struct smb_version_operations smb21_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ 
+ struct smb_version_operations smb30_operations = {
+@@ -5598,6 +5622,7 @@ struct smb_version_operations smb30_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ 
+ struct smb_version_operations smb311_operations = {
+@@ -5714,6 +5739,7 @@ struct smb_version_operations smb311_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ 
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 035aa16240535c..8d6b42ff38fe68 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -320,5 +320,8 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ int smb2_make_nfs_node(unsigned int xid, struct inode *inode,
+ 		       struct dentry *dentry, struct cifs_tcon *tcon,
+ 		       const char *full_path, umode_t mode, dev_t dev);
++int smb2_rename_pending_delete(const char *full_path,
++			       struct dentry *dentry,
++			       const unsigned int xid);
+ 
+ #endif			/* _SMB2PROTO_H */
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 93e5b2bb9f28a2..a8c6f11699a3b6 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -669,13 +669,12 @@ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(query_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(posix_query_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(hardlink_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(rename_enter);
+-DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(rmdir_enter);
++DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(unlink_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_eof_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_reparse_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(get_reparse_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(query_wsl_ea_compound_enter);
+-DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(delete_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mkdir_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(tdis_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mknod_enter);
+@@ -710,13 +709,12 @@ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(query_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(posix_query_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(hardlink_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(rename_done);
+-DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(rmdir_done);
++DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(unlink_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_eof_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_reparse_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(get_reparse_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(query_wsl_ea_compound_done);
+-DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(delete_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mkdir_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(tdis_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mknod_done);
+@@ -756,14 +754,13 @@ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(query_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(posix_query_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(hardlink_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(rename_err);
+-DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(rmdir_err);
++DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(unlink_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_eof_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_reparse_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(get_reparse_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(query_wsl_ea_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mkdir_err);
+-DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(delete_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(tdis_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mknod_err);
+ 
+diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
+index e4ca35143ff965..3e35a68b2b4122 100644
+--- a/include/drm/display/drm_dp_helper.h
++++ b/include/drm/display/drm_dp_helper.h
+@@ -523,10 +523,16 @@ struct drm_dp_aux {
+ 	 * @no_zero_sized: If the hw can't use zero sized transfers (NVIDIA)
+ 	 */
+ 	bool no_zero_sized;
++
++	/**
++	 * @dpcd_probe_disabled: If probing before a DPCD access is disabled.
++	 */
++	bool dpcd_probe_disabled;
+ };
+ 
+ int drm_dp_dpcd_probe(struct drm_dp_aux *aux, unsigned int offset);
+ void drm_dp_dpcd_set_powered(struct drm_dp_aux *aux, bool powered);
++void drm_dp_dpcd_set_probe(struct drm_dp_aux *aux, bool enable);
+ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ 			 void *buffer, size_t size);
+ ssize_t drm_dp_dpcd_write(struct drm_dp_aux *aux, unsigned int offset,
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index f13d597370a30d..da49d520aa3bae 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -843,7 +843,9 @@ struct drm_display_info {
+ 	int vics_len;
+ 
+ 	/**
+-	 * @quirks: EDID based quirks. Internal to EDID parsing.
++	 * @quirks: EDID based quirks. DRM core and drivers can query the
++	 * @drm_edid_quirk quirks using drm_edid_has_quirk(); the rest of
++	 * the quirks, also tracked here, are internal to EDID parsing.
+ 	 */
+ 	u32 quirks;
+ 
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index b38409670868d8..3d1aecfec9b2a4 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -109,6 +109,13 @@ struct detailed_data_string {
+ #define DRM_EDID_CVT_FLAGS_STANDARD_BLANKING (1 << 3)
+ #define DRM_EDID_CVT_FLAGS_REDUCED_BLANKING  (1 << 4)
+ 
++enum drm_edid_quirk {
++	/* Do a dummy read before DPCD accesses, to prevent corruption. */
++	DRM_EDID_QUIRK_DP_DPCD_PROBE,
++
++	DRM_EDID_QUIRK_NUM,
++};
++
+ struct detailed_data_monitor_range {
+ 	u8 min_vfreq;
+ 	u8 max_vfreq;
+@@ -476,5 +483,6 @@ void drm_edid_print_product_id(struct drm_printer *p,
+ u32 drm_edid_get_panel_id(const struct drm_edid *drm_edid);
+ bool drm_edid_match(const struct drm_edid *drm_edid,
+ 		    const struct drm_edid_ident *ident);
++bool drm_edid_has_quirk(struct drm_connector *connector, enum drm_edid_quirk quirk);
+ 
+ #endif /* __DRM_EDID_H__ */
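Paired with the drm_dp_dpcd_set_probe() declaration earlier in this patch, a consumer of the new quirk could look like the sketch below; the function name is hypothetical, the two helper calls are the ones declared in these headers:

static void example_update_dpcd_probe(struct drm_dp_aux *aux,
				      struct drm_connector *connector)
{
	/* Keep the dummy-read probe only for panels whose EDID carries
	 * the quirk; everything else skips the extra AUX transaction. */
	drm_dp_dpcd_set_probe(aux,
			      drm_edid_has_quirk(connector,
						 DRM_EDID_QUIRK_DP_DPCD_PROBE));
}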
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index 4fc8e26914adfd..f9f36d6af9a710 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -18,23 +18,42 @@
+ #define KASAN_ABI_VERSION 5
+ 
+ /*
++ * Clang 22 added preprocessor macros to match GCC, in hopes of eventually
++ * dropping __has_feature support for sanitizers:
++ * https://github.com/llvm/llvm-project/commit/568c23bbd3303518c5056d7f03444dae4fdc8a9c
++ * Create these macros for older versions of clang so that it is easy to clean
++ * up once the minimum supported version of LLVM for building the kernel always
++ * creates these macros.
++ *
+  * Note: Checking __has_feature(*_sanitizer) is only true if the feature is
+  * enabled. Therefore it is not required to additionally check defined(CONFIG_*)
+  * to avoid adding redundant attributes in other configurations.
+  */
++#if __has_feature(address_sanitizer) && !defined(__SANITIZE_ADDRESS__)
++#define __SANITIZE_ADDRESS__
++#endif
++#if __has_feature(hwaddress_sanitizer) && !defined(__SANITIZE_HWADDRESS__)
++#define __SANITIZE_HWADDRESS__
++#endif
++#if __has_feature(thread_sanitizer) && !defined(__SANITIZE_THREAD__)
++#define __SANITIZE_THREAD__
++#endif
+ 
+-#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
+-/* Emulate GCC's __SANITIZE_ADDRESS__ flag */
++/*
++ * Treat __SANITIZE_HWADDRESS__ the same as __SANITIZE_ADDRESS__ in the kernel.
++ */
++#ifdef __SANITIZE_HWADDRESS__
+ #define __SANITIZE_ADDRESS__
++#endif
++
++#ifdef __SANITIZE_ADDRESS__
+ #define __no_sanitize_address \
+ 		__attribute__((no_sanitize("address", "hwaddress")))
+ #else
+ #define __no_sanitize_address
+ #endif
+ 
+-#if __has_feature(thread_sanitizer)
+-/* emulate gcc's __SANITIZE_THREAD__ flag */
+-#define __SANITIZE_THREAD__
++#ifdef __SANITIZE_THREAD__
+ #define __no_sanitize_thread \
+ 		__attribute__((no_sanitize("thread")))
+ #else
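With the macros defined above, sanitizer-conditional code no longer needs compiler-specific __has_feature() probing. A minimal sketch (the variable name is illustrative):

/* The same test now works for GCC and for old or new Clang alike. */
#ifdef __SANITIZE_ADDRESS__
static const bool asan_build = true;
#else
static const bool asan_build = false;
#endif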
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 7fa1eb3cc82399..61d50571ad88ac 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -171,6 +171,9 @@ int em_dev_update_perf_domain(struct device *dev,
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ 				const struct em_data_callback *cb,
+ 				const cpumask_t *cpus, bool microwatts);
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++				 const struct em_data_callback *cb,
++				 const cpumask_t *cpus, bool microwatts);
+ void em_dev_unregister_perf_domain(struct device *dev);
+ struct em_perf_table *em_table_alloc(struct em_perf_domain *pd);
+ void em_table_free(struct em_perf_table *table);
+@@ -350,6 +353,13 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ {
+ 	return -EINVAL;
+ }
++static inline
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++				 const struct em_data_callback *cb,
++				 const cpumask_t *cpus, bool microwatts)
++{
++	return -EINVAL;
++}
+ static inline void em_dev_unregister_perf_domain(struct device *dev)
+ {
+ }
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 040c0036320fdf..d6716ff498a7aa 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -149,7 +149,8 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
+ /* Expect random access pattern */
+ #define FMODE_RANDOM		((__force fmode_t)(1 << 12))
+ 
+-/* FMODE_* bit 13 */
++/* Supports IOCB_HAS_METADATA */
++#define FMODE_HAS_METADATA	((__force fmode_t)(1 << 13))
+ 
+ /* File is opened with O_PATH; almost nothing can be done with it */
+ #define FMODE_PATH		((__force fmode_t)(1 << 14))
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index 890011071f2b14..fe5ce9215821db 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -562,7 +562,7 @@ static inline void kasan_init_hw_tags(void) { }
+ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+ 
+ void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
++int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask);
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ 			   unsigned long free_region_start,
+ 			   unsigned long free_region_end,
+@@ -574,7 +574,7 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
+ 						       unsigned long size)
+ { }
+ static inline int kasan_populate_vmalloc(unsigned long start,
+-					unsigned long size)
++					unsigned long size, gfp_t gfp_mask)
+ {
+ 	return 0;
+ }
+@@ -610,7 +610,7 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
+ static inline void kasan_populate_early_vm_area_shadow(void *start,
+ 						       unsigned long size) { }
+ static inline int kasan_populate_vmalloc(unsigned long start,
+-					unsigned long size)
++					unsigned long size, gfp_t gfp_mask)
+ {
+ 	return 0;
+ }
+diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
+index 15eaa09da998ce..d668b6266c34a2 100644
+--- a/include/linux/mtd/spinand.h
++++ b/include/linux/mtd/spinand.h
+@@ -484,6 +484,7 @@ struct spinand_user_otp {
+  * @op_variants.update_cache: variants of the update-cache operation
+  * @select_target: function used to select a target/die. Required only for
+  *		   multi-die chips
++ * @configure_chip: Align the chip configuration with the core settings
+  * @set_cont_read: enable/disable continuous cached reads
+  * @fact_otp: SPI NAND factory OTP info.
+  * @user_otp: SPI NAND user OTP info.
+@@ -507,6 +508,7 @@ struct spinand_info {
+ 	} op_variants;
+ 	int (*select_target)(struct spinand_device *spinand,
+ 			     unsigned int target);
++	int (*configure_chip)(struct spinand_device *spinand);
+ 	int (*set_cont_read)(struct spinand_device *spinand,
+ 			     bool enable);
+ 	struct spinand_fact_otp fact_otp;
+@@ -539,6 +541,9 @@ struct spinand_info {
+ #define SPINAND_SELECT_TARGET(__func)					\
+ 	.select_target = __func
+ 
++#define SPINAND_CONFIGURE_CHIP(__configure_chip)			\
++	.configure_chip = __configure_chip
++
+ #define SPINAND_CONT_READ(__set_cont_read)				\
+ 	.set_cont_read = __set_cont_read
+ 
+@@ -607,6 +612,7 @@ struct spinand_dirmap {
+  *		passed in spi_mem_op be DMA-able, so we can't based the bufs on
+  *		the stack
+  * @manufacturer: SPI NAND manufacturer information
++ * @configure_chip: Align the chip configuration with the core settings
+  * @cont_read_possible: Field filled by the core once the whole system
+  *		configuration is known to tell whether continuous reads are
+  *		suitable to use or not in general with this chip/configuration.
+@@ -647,6 +653,7 @@ struct spinand_device {
+ 	const struct spinand_manufacturer *manufacturer;
+ 	void *priv;
+ 
++	int (*configure_chip)(struct spinand_device *spinand);
+ 	bool cont_read_possible;
+ 	int (*set_cont_read)(struct spinand_device *spinand,
+ 			     bool enable);
+@@ -723,6 +730,7 @@ int spinand_match_and_init(struct spinand_device *spinand,
+ 			   enum spinand_readid_method rdid_method);
+ 
+ int spinand_upd_cfg(struct spinand_device *spinand, u8 mask, u8 val);
++int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val);
+ int spinand_write_reg_op(struct spinand_device *spinand, u8 reg, u8 val);
+ int spinand_select_target(struct spinand_device *spinand, unsigned int target);
+ 
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 5e49619ae49c69..16daeac2ac555e 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -459,19 +459,17 @@ struct nft_set_ext;
+  *	control plane functions.
+  */
+ struct nft_set_ops {
+-	bool				(*lookup)(const struct net *net,
++	const struct nft_set_ext *	(*lookup)(const struct net *net,
+ 						  const struct nft_set *set,
+-						  const u32 *key,
+-						  const struct nft_set_ext **ext);
+-	bool				(*update)(struct nft_set *set,
++						  const u32 *key);
++	const struct nft_set_ext *	(*update)(struct nft_set *set,
+ 						  const u32 *key,
+ 						  struct nft_elem_priv *
+ 							(*new)(struct nft_set *,
+ 							       const struct nft_expr *,
+ 							       struct nft_regs *),
+ 						  const struct nft_expr *expr,
+-						  struct nft_regs *regs,
+-						  const struct nft_set_ext **ext);
++						  struct nft_regs *regs);
+ 	bool				(*delete)(const struct nft_set *set,
+ 						  const u32 *key);
+ 
+@@ -1918,7 +1916,6 @@ struct nftables_pernet {
+ 	struct mutex		commit_mutex;
+ 	u64			table_handle;
+ 	u64			tstamp;
+-	unsigned int		base_seq;
+ 	unsigned int		gc_seq;
+ 	u8			validate_state;
+ 	struct work_struct	destroy_work;
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index 03b6165756fc5d..04699eac5b5243 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -94,34 +94,35 @@ extern const struct nft_set_type nft_set_pipapo_type;
+ extern const struct nft_set_type nft_set_pipapo_avx2_type;
+ 
+ #ifdef CONFIG_MITIGATION_RETPOLINE
+-bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+-		      const u32 *key, const struct nft_set_ext **ext);
+-bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
+-bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
+-bool nft_hash_lookup_fast(const struct net *net,
+-			  const struct nft_set *set,
+-			  const u32 *key, const struct nft_set_ext **ext);
+-bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+-		     const u32 *key, const struct nft_set_ext **ext);
+-bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
+-#else
+-static inline bool
+-nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+-		  const u32 *key, const struct nft_set_ext **ext)
+-{
+-	return set->ops->lookup(net, set, key, ext);
+-}
++const struct nft_set_ext *
++nft_rhash_lookup(const struct net *net, const struct nft_set *set,
++		 const u32 *key);
++const struct nft_set_ext *
++nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
++const struct nft_set_ext *
++nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
++const struct nft_set_ext *
++nft_hash_lookup_fast(const struct net *net, const struct nft_set *set,
++		     const u32 *key);
++const struct nft_set_ext *
++nft_hash_lookup(const struct net *net, const struct nft_set *set,
++		const u32 *key);
+ #endif
+ 
++const struct nft_set_ext *
++nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
++
+ /* called from nft_pipapo_avx2.c */
+-bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
++const struct nft_set_ext *
++nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
+ /* called from nft_set_pipapo.c */
+-bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+-			    const u32 *key, const struct nft_set_ext **ext);
++const struct nft_set_ext *
++nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
++			const u32 *key);
+ 
+ void nft_counter_init_seqcount(void);
+ 
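The prototype changes above invert the old calling convention: instead of returning a bool and filling a **ext out-parameter, each lookup now returns the extension pointer directly, with NULL meaning no match. A minimal caller sketch (the wrapper name is hypothetical):

static bool example_set_member(const struct net *net,
			       const struct nft_set *set, const u32 *key)
{
	/* Previously: bool hit = set->ops->lookup(net, set, key, &ext); */
	const struct nft_set_ext *ext = nft_set_do_lookup(net, set, key);

	return ext != NULL;
}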
+diff --git a/include/net/netns/nftables.h b/include/net/netns/nftables.h
+index cc8060c017d5fb..99dd166c5d07c3 100644
+--- a/include/net/netns/nftables.h
++++ b/include/net/netns/nftables.h
+@@ -3,6 +3,7 @@
+ #define _NETNS_NFTABLES_H_
+ 
+ struct netns_nftables {
++	unsigned int		base_seq;
+ 	u8			gencursor;
+ };
+ 
+diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h
+index b1394628727758..ac74133a476887 100644
+--- a/include/uapi/linux/raid/md_p.h
++++ b/include/uapi/linux/raid/md_p.h
+@@ -173,7 +173,7 @@ typedef struct mdp_superblock_s {
+ #else
+ #error unspecified endianness
+ #endif
+-	__u32 resync_offset;	/* 11 resync checkpoint sector count	      */
++	__u32 recovery_cp;	/* 11 resync checkpoint sector count	      */
+ 	/* There are only valid for minor_version > 90 */
+ 	__u64 reshape_position;	/* 12,13 next address in array-space for reshape */
+ 	__u32 new_level;	/* 14 new level we are reshaping to	      */
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 52a5b950b2e5e9..af5a54b5db1233 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -886,6 +886,9 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
+ 	if (req->flags & REQ_F_HAS_METADATA) {
+ 		struct io_async_rw *io = req->async_data;
+ 
++		if (!(file->f_mode & FMODE_HAS_METADATA))
++			return -EINVAL;
++
+ 		/*
+ 		 * We have a union of meta fields with wpq used for buffered-io
+ 		 * in io_async_rw, so fail it here.
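The new check fails REQ_F_HAS_METADATA requests with -EINVAL unless the file advertises support through the FMODE_HAS_METADATA bit added to include/linux/fs.h above. A driver opting in would set the bit from its open path; a hypothetical sketch:

static int example_open(struct inode *inode, struct file *file)
{
	/* Announce metadata/integrity support so io_uring requests
	 * carrying metadata are accepted on this file. */
	file->f_mode |= FMODE_HAS_METADATA;
	return 0;
}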
+diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
+index 3a335c50e6e3cb..12ec926ed7114e 100644
+--- a/kernel/bpf/Makefile
++++ b/kernel/bpf/Makefile
+@@ -62,3 +62,4 @@ CFLAGS_REMOVE_bpf_lru_list.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_queue_stack_maps.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_lpm_trie.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_ringbuf.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_rqspinlock.o = $(CC_FLAGS_FTRACE)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index d966e971893ab3..829f0792d8d831 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2354,8 +2354,7 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ 					 const struct bpf_insn *insn)
+ {
+ 	/* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
+-	 * is not working properly, or interpreter is being used when
+-	 * prog->jit_requested is not 0, so warn about it!
++	 * is not working properly, so warn about it!
+ 	 */
+ 	WARN_ON_ONCE(1);
+ 	return 0;
+@@ -2456,8 +2455,9 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ 	return ret;
+ }
+ 
+-static void bpf_prog_select_func(struct bpf_prog *fp)
++static bool bpf_prog_select_interpreter(struct bpf_prog *fp)
+ {
++	bool select_interpreter = false;
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ 	u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
+ 	u32 idx = (round_up(stack_depth, 32) / 32) - 1;
+@@ -2466,15 +2466,16 @@ static void bpf_prog_select_func(struct bpf_prog *fp)
+ 	 * But for non-JITed programs, we don't need bpf_func, so no bounds
+ 	 * check needed.
+ 	 */
+-	if (!fp->jit_requested &&
+-	    !WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters))) {
++	if (idx < ARRAY_SIZE(interpreters)) {
+ 		fp->bpf_func = interpreters[idx];
++		select_interpreter = true;
+ 	} else {
+ 		fp->bpf_func = __bpf_prog_ret0_warn;
+ 	}
+ #else
+ 	fp->bpf_func = __bpf_prog_ret0_warn;
+ #endif
++	return select_interpreter;
+ }
+ 
+ /**
+@@ -2493,7 +2494,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ 	/* In case of BPF to BPF calls, verifier did all the prep
+ 	 * work with regards to JITing, etc.
+ 	 */
+-	bool jit_needed = fp->jit_requested;
++	bool jit_needed = false;
+ 
+ 	if (fp->bpf_func)
+ 		goto finalize;
+@@ -2502,7 +2503,8 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ 	    bpf_prog_has_kfunc_call(fp))
+ 		jit_needed = true;
+ 
+-	bpf_prog_select_func(fp);
++	if (!bpf_prog_select_interpreter(fp))
++		jit_needed = true;
+ 
+ 	/* eBPF JITs can rewrite the program in case constant
+ 	 * blinding is active. However, in case of error during
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 67e8a2fc1a99de..cfcf7ed57ca0d2 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -186,7 +186,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ 	struct xdp_buff xdp;
+ 	int i, nframes = 0;
+ 
+-	xdp_set_return_frame_no_direct();
+ 	xdp.rxq = &rxq;
+ 
+ 	for (i = 0; i < n; i++) {
+@@ -231,7 +230,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ 		}
+ 	}
+ 
+-	xdp_clear_return_frame_no_direct();
+ 	stats->pass += nframes;
+ 
+ 	return nframes;
+@@ -255,6 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ 
+ 	rcu_read_lock();
+ 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
++	xdp_set_return_frame_no_direct();
+ 
+ 	ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
+ 	if (unlikely(ret->skb_n))
+@@ -264,6 +263,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ 	if (stats->redirect)
+ 		xdp_do_flush();
+ 
++	xdp_clear_return_frame_no_direct();
+ 	bpf_net_ctx_clear(bpf_net_ctx);
+ 	rcu_read_unlock();
+ 
+diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
+index 94854cd9c4cc32..83c4d9943084b9 100644
+--- a/kernel/bpf/crypto.c
++++ b/kernel/bpf/crypto.c
+@@ -278,7 +278,7 @@ static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
+ 	siv_len = siv ? __bpf_dynptr_size(siv) : 0;
+ 	src_len = __bpf_dynptr_size(src);
+ 	dst_len = __bpf_dynptr_size(dst);
+-	if (!src_len || !dst_len)
++	if (!src_len || !dst_len || src_len > dst_len)
+ 		return -EINVAL;
+ 
+ 	if (siv_len != ctx->siv_len)
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index fdf8737542ac45..3abbdebb2d9efc 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1277,8 +1277,11 @@ static int __bpf_async_init(struct bpf_async_kern *async, struct bpf_map *map, u
+ 		goto out;
+ 	}
+ 
+-	/* allocate hrtimer via map_kmalloc to use memcg accounting */
+-	cb = bpf_map_kmalloc_node(map, size, GFP_ATOMIC, map->numa_node);
++	/* Allocate via bpf_map_kmalloc_node() for memcg accounting. Until
++	 * kmalloc_nolock() is available, avoid locking issues by using
++	 * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM).
++	 */
++	cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);
+ 	if (!cb) {
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
+index 338305c8852cf6..804e619f1e0066 100644
+--- a/kernel/bpf/rqspinlock.c
++++ b/kernel/bpf/rqspinlock.c
+@@ -471,7 +471,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
+ 	 * any MCS node. This is not the most elegant solution, but is
+ 	 * simple enough.
+ 	 */
+-	if (unlikely(idx >= _Q_MAX_NODES)) {
++	if (unlikely(idx >= _Q_MAX_NODES || in_nmi())) {
+ 		lockevent_inc(lock_no_node);
+ 		RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
+ 		while (!queued_spin_trylock(lock)) {
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index e43c6de2bce4e7..b82399437db031 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -39,6 +39,7 @@ enum {
+ 	dma_debug_sg,
+ 	dma_debug_coherent,
+ 	dma_debug_resource,
++	dma_debug_noncoherent,
+ };
+ 
+ enum map_err_types {
+@@ -141,6 +142,7 @@ static const char *type2name[] = {
+ 	[dma_debug_sg] = "scatter-gather",
+ 	[dma_debug_coherent] = "coherent",
+ 	[dma_debug_resource] = "resource",
++	[dma_debug_noncoherent] = "noncoherent",
+ };
+ 
+ static const char *dir2name[] = {
+@@ -993,7 +995,8 @@ static void check_unmap(struct dma_debug_entry *ref)
+ 			   "[mapped as %s] [unmapped as %s]\n",
+ 			   ref->dev_addr, ref->size,
+ 			   type2name[entry->type], type2name[ref->type]);
+-	} else if (entry->type == dma_debug_coherent &&
++	} else if ((entry->type == dma_debug_coherent ||
++		    entry->type == dma_debug_noncoherent) &&
+ 		   ref->paddr != entry->paddr) {
+ 		err_printk(ref->dev, entry, "device driver frees "
+ 			   "DMA memory with different CPU address "
+@@ -1581,6 +1584,49 @@ void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+ 	}
+ }
+ 
++void debug_dma_alloc_pages(struct device *dev, struct page *page,
++			   size_t size, int direction,
++			   dma_addr_t dma_addr,
++			   unsigned long attrs)
++{
++	struct dma_debug_entry *entry;
++
++	if (unlikely(dma_debug_disabled()))
++		return;
++
++	entry = dma_entry_alloc();
++	if (!entry)
++		return;
++
++	entry->type      = dma_debug_noncoherent;
++	entry->dev       = dev;
++	entry->paddr	 = page_to_phys(page);
++	entry->size      = size;
++	entry->dev_addr  = dma_addr;
++	entry->direction = direction;
++
++	add_dma_entry(entry, attrs);
++}
++
++void debug_dma_free_pages(struct device *dev, struct page *page,
++			  size_t size, int direction,
++			  dma_addr_t dma_addr)
++{
++	struct dma_debug_entry ref = {
++		.type           = dma_debug_noncoherent,
++		.dev            = dev,
++		.paddr		= page_to_phys(page),
++		.dev_addr       = dma_addr,
++		.size           = size,
++		.direction      = direction,
++	};
++
++	if (unlikely(dma_debug_disabled()))
++		return;
++
++	check_unmap(&ref);
++}
++
+ static int __init dma_debug_driver_setup(char *str)
+ {
+ 	int i;
+diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
+index f525197d3cae60..48757ca13f3140 100644
+--- a/kernel/dma/debug.h
++++ b/kernel/dma/debug.h
+@@ -54,6 +54,13 @@ extern void debug_dma_sync_sg_for_cpu(struct device *dev,
+ extern void debug_dma_sync_sg_for_device(struct device *dev,
+ 					 struct scatterlist *sg,
+ 					 int nelems, int direction);
++extern void debug_dma_alloc_pages(struct device *dev, struct page *page,
++				  size_t size, int direction,
++				  dma_addr_t dma_addr,
++				  unsigned long attrs);
++extern void debug_dma_free_pages(struct device *dev, struct page *page,
++				 size_t size, int direction,
++				 dma_addr_t dma_addr);
+ #else /* CONFIG_DMA_API_DEBUG */
+ static inline void debug_dma_map_page(struct device *dev, struct page *page,
+ 				      size_t offset, size_t size,
+@@ -126,5 +133,18 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev,
+ 						int nelems, int direction)
+ {
+ }
++
++static inline void debug_dma_alloc_pages(struct device *dev, struct page *page,
++					 size_t size, int direction,
++					 dma_addr_t dma_addr,
++					 unsigned long attrs)
++{
++}
++
++static inline void debug_dma_free_pages(struct device *dev, struct page *page,
++					size_t size, int direction,
++					dma_addr_t dma_addr)
++{
++}
+ #endif /* CONFIG_DMA_API_DEBUG */
+ #endif /* _KERNEL_DMA_DEBUG_H */
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index 107e4a4d251df6..56de28a3b1799f 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -712,7 +712,7 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
+ 	if (page) {
+ 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
+ 				      size, dir, gfp, 0);
+-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
++		debug_dma_alloc_pages(dev, page, size, dir, *dma_handle, 0);
+ 	} else {
+ 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
+ 	}
+@@ -738,7 +738,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
+ 		dma_addr_t dma_handle, enum dma_data_direction dir)
+ {
+ 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
+-	debug_dma_unmap_page(dev, dma_handle, size, dir);
++	debug_dma_free_pages(dev, page, size, dir, dma_handle);
+ 	__dma_free_pages(dev, size, page, dma_handle, dir);
+ }
+ EXPORT_SYMBOL_GPL(dma_free_pages);
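With allocation and free now tracked as dma_debug_noncoherent entries, CONFIG_DMA_API_DEBUG can flag a dma_alloc_pages() buffer that is released through a different API. A minimal, correctly paired usage sketch:

static int example_dma_buffer(struct device *dev)
{
	struct page *page;
	dma_addr_t dma;

	page = dma_alloc_pages(dev, PAGE_SIZE, &dma, DMA_TO_DEVICE,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	/* ... hand 'dma' to the device and wait for completion ... */
	dma_free_pages(dev, PAGE_SIZE, page, dma, DMA_TO_DEVICE);
	return 0;
}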
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 872122e074e5fe..820127536e62b7 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10330,6 +10330,7 @@ static int __perf_event_overflow(struct perf_event *event,
+ 		ret = 1;
+ 		event->pending_kill = POLL_HUP;
+ 		perf_event_disable_inatomic(event);
++		event->pmu->stop(event, 0);
+ 	}
+ 
+ 	if (event->attr.sigtrap) {
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index ea7995a25780f3..8df55397414a12 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -552,6 +552,30 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ 				const struct em_data_callback *cb,
+ 				const cpumask_t *cpus, bool microwatts)
++{
++	int ret = em_dev_register_pd_no_update(dev, nr_states, cb, cpus, microwatts);
++
++	if (_is_cpu_device(dev))
++		em_check_capacity_update();
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
++
++/**
++ * em_dev_register_pd_no_update() - Register a perf domain for a device
++ * @dev : Device to register the PD for
++ * @nr_states : Number of performance states in the new PD
++ * @cb : Callback functions for populating the energy model
++ * @cpus : CPUs to include in the new PD (mandatory if @dev is a CPU device)
++ * @microwatts : Whether or not the power values in the EM will be in uW
++ *
++ * Like em_dev_register_perf_domain(), but does not trigger a CPU capacity
++ * update after registering the PD, even if @dev is a CPU device.
++ */
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++				 const struct em_data_callback *cb,
++				 const cpumask_t *cpus, bool microwatts)
+ {
+ 	struct em_perf_table *em_table;
+ 	unsigned long cap, prev_cap = 0;
+@@ -636,12 +660,9 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ unlock:
+ 	mutex_unlock(&em_pd_mutex);
+ 
+-	if (_is_cpu_device(dev))
+-		em_check_capacity_update();
+-
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
++EXPORT_SYMBOL_GPL(em_dev_register_pd_no_update);
+ 
+ /**
+  * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
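Callers that need to avoid the implicit capacity update can now use the split registration. A hypothetical cpufreq-driver sketch, where em_cb is assumed to be a populated struct em_data_callback and policy a cpufreq policy:

	/* Register the perf domain but defer the CPU capacity update. */
	ret = em_dev_register_pd_no_update(cpu_dev, nr_states, &em_cb,
					   policy->related_cpus, true);
	if (ret)
		dev_err(cpu_dev, "EM registration failed: %d\n", ret);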
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 9216e3b91d3b3b..c8022a477d3a1c 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -423,6 +423,7 @@ int hibernation_snapshot(int platform_mode)
+ 	}
+ 
+ 	console_suspend_all();
++	pm_restrict_gfp_mask();
+ 
+ 	error = dpm_suspend(PMSG_FREEZE);
+ 
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 30899a8cc52c0a..e8c479329282f9 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -787,10 +787,10 @@ static void retrigger_next_event(void *arg)
+ 	 * of the next expiring timer is enough. The return from the SMP
+ 	 * function call will take care of the reprogramming in case the
+ 	 * CPU was in a NOHZ idle sleep.
++	 *
++	 * In periodic low resolution mode, the next softirq expiration
++	 * must also be updated.
+ 	 */
+-	if (!hrtimer_hres_active(base) && !tick_nohz_active)
+-		return;
+-
+ 	raw_spin_lock(&base->lock);
+ 	hrtimer_update_base(base);
+ 	if (hrtimer_hres_active(base))
+@@ -2295,11 +2295,6 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ 				     &new_base->clock_base[i]);
+ 	}
+ 
+-	/*
+-	 * The migration might have changed the first expiring softirq
+-	 * timer on this CPU. Update it.
+-	 */
+-	__hrtimer_get_next_event(new_base, HRTIMER_ACTIVE_SOFT);
+ 	/* Tell the other CPU to retrigger the next event */
+ 	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
+ 
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index dac2d58f39490b..db40ec5cc9d731 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1393,7 +1393,8 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ 		ftrace_graph_active--;
+ 		gops->saved_func = NULL;
+ 		fgraph_lru_release_index(i);
+-		unregister_pm_notifier(&ftrace_suspend_notifier);
++		if (!ftrace_graph_active)
++			unregister_pm_notifier(&ftrace_suspend_notifier);
+ 	}
+ 	return ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b91fa02cc54a6a..56f6cebdb22998 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -846,7 +846,10 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 		/* copy the current bits to the new max */
+ 		ret = trace_pid_list_first(filtered_pids, &pid);
+ 		while (!ret) {
+-			trace_pid_list_set(pid_list, pid);
++			ret = trace_pid_list_set(pid_list, pid);
++			if (ret < 0)
++				goto out;
++
+ 			ret = trace_pid_list_next(filtered_pids, pid + 1, &pid);
+ 			nr_pids++;
+ 		}
+@@ -883,6 +886,7 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 		trace_parser_clear(&parser);
+ 		ret = 0;
+ 	}
++ out:
+ 	trace_parser_put(&parser);
+ 
+ 	if (ret < 0) {
+@@ -7264,7 +7268,7 @@ static ssize_t write_marker_to_buffer(struct trace_array *tr, const char __user
+ 	entry = ring_buffer_event_data(event);
+ 	entry->ip = ip;
+ 
+-	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
++	len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
+ 	if (len) {
+ 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+ 		cnt = FAULTED_SIZE;
+@@ -7361,7 +7365,7 @@ static ssize_t write_raw_marker_to_buffer(struct trace_array *tr,
+ 
+ 	entry = ring_buffer_event_data(event);
+ 
+-	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
++	len = copy_from_user_nofault(&entry->id, ubuf, cnt);
+ 	if (len) {
+ 		entry->id = -1;
+ 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index fd259da0aa6456..337bc0eb5d71bf 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2322,6 +2322,9 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
+ 	int running, err;
+ 	char *buf __free(kfree) = NULL;
+ 
++	if (count < 1)
++		return 0;
++
+ 	buf = kmalloc(count, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 8ead13792f0495..d87fbb8c418d00 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -2050,6 +2050,10 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
+ 	if (!quota->ms && !quota->sz && list_empty(&quota->goals))
+ 		return;
+ 
++	/* First charge window */
++	if (!quota->total_charged_sz && !quota->charged_from)
++		quota->charged_from = jiffies;
++
+ 	/* New charge window starts */
+ 	if (time_after_eq(jiffies, quota->charged_from +
+ 				msecs_to_jiffies(quota->reset_interval))) {
+diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
+index 4af8fd4a390b66..c2b4f0b0714727 100644
+--- a/mm/damon/lru_sort.c
++++ b/mm/damon/lru_sort.c
+@@ -198,6 +198,11 @@ static int damon_lru_sort_apply_parameters(void)
+ 	if (err)
+ 		return err;
+ 
++	if (!damon_lru_sort_mon_attrs.sample_interval) {
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
+ 	if (err)
+ 		goto out;
+diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
+index a675150965e020..ade3ff724b24cd 100644
+--- a/mm/damon/reclaim.c
++++ b/mm/damon/reclaim.c
+@@ -194,6 +194,11 @@ static int damon_reclaim_apply_parameters(void)
+ 	if (err)
+ 		return err;
+ 
++	if (!damon_reclaim_mon_attrs.aggr_interval) {
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	err = damon_set_attrs(ctx, &damon_reclaim_mon_attrs);
+ 	if (err)
+ 		goto out;
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 1af6aff35d84a0..57d4ec256682ce 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1243,14 +1243,18 @@ static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
+ {
+ 	struct damon_sysfs_kdamond *kdamond = container_of(kobj,
+ 			struct damon_sysfs_kdamond, kobj);
+-	struct damon_ctx *ctx = kdamond->damon_ctx;
+-	bool running;
++	struct damon_ctx *ctx;
++	bool running = false;
+ 
+-	if (!ctx)
+-		running = false;
+-	else
++	if (!mutex_trylock(&damon_sysfs_lock))
++		return -EBUSY;
++
++	ctx = kdamond->damon_ctx;
++	if (ctx)
+ 		running = damon_sysfs_ctx_running(ctx);
+ 
++	mutex_unlock(&damon_sysfs_lock);
++
+ 	return sysfs_emit(buf, "%s\n", running ?
+ 			damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_ON] :
+ 			damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_OFF]);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a0d285d2099252..eee833f7068157 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5855,7 +5855,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 	spinlock_t *ptl;
+ 	struct hstate *h = hstate_vma(vma);
+ 	unsigned long sz = huge_page_size(h);
+-	bool adjust_reservation = false;
++	bool adjust_reservation;
+ 	unsigned long last_addr_mask;
+ 	bool force_flush = false;
+ 
+@@ -5948,6 +5948,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 					sz);
+ 		hugetlb_count_sub(pages_per_huge_page(h), mm);
+ 		hugetlb_remove_rmap(folio);
++		spin_unlock(ptl);
+ 
+ 		/*
+ 		 * Restore the reservation for anonymous page, otherwise the
+@@ -5955,14 +5956,16 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 		 * If there we are freeing a surplus, do not set the restore
+ 		 * reservation bit.
+ 		 */
++		adjust_reservation = false;
++
++		spin_lock_irq(&hugetlb_lock);
+ 		if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
+ 		    folio_test_anon(folio)) {
+ 			folio_set_hugetlb_restore_reserve(folio);
+ 			/* Reservation to be adjusted after the spin lock */
+ 			adjust_reservation = true;
+ 		}
+-
+-		spin_unlock(ptl);
++		spin_unlock_irq(&hugetlb_lock);
+ 
+ 		/*
+ 		 * Adjust the reservation for the region that will have the
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index d2c70cd2afb1de..c7c0be11917370 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -335,13 +335,13 @@ static void ___free_pages_bulk(struct page **pages, int nr_pages)
+ 	}
+ }
+ 
+-static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
++static int ___alloc_pages_bulk(struct page **pages, int nr_pages, gfp_t gfp_mask)
+ {
+ 	unsigned long nr_populated, nr_total = nr_pages;
+ 	struct page **page_array = pages;
+ 
+ 	while (nr_pages) {
+-		nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages);
++		nr_populated = alloc_pages_bulk(gfp_mask, nr_pages, pages);
+ 		if (!nr_populated) {
+ 			___free_pages_bulk(page_array, nr_total - nr_pages);
+ 			return -ENOMEM;
+@@ -353,25 +353,42 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
+ 	return 0;
+ }
+ 
+-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
++static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_t gfp_mask)
+ {
+ 	unsigned long nr_pages, nr_total = PFN_UP(end - start);
+ 	struct vmalloc_populate_data data;
++	unsigned int flags;
+ 	int ret = 0;
+ 
+-	data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
++	data.pages = (struct page **)__get_free_page(gfp_mask | __GFP_ZERO);
+ 	if (!data.pages)
+ 		return -ENOMEM;
+ 
+ 	while (nr_total) {
+ 		nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
+-		ret = ___alloc_pages_bulk(data.pages, nr_pages);
++		ret = ___alloc_pages_bulk(data.pages, nr_pages, gfp_mask);
+ 		if (ret)
+ 			break;
+ 
+ 		data.start = start;
++
++		/*
++		 * page table allocations ignore the external gfp mask;
++		 * enforce it via the scope API
++		 */
++		if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
++			flags = memalloc_nofs_save();
++		else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
++			flags = memalloc_noio_save();
++
+ 		ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
+ 					  kasan_populate_vmalloc_pte, &data);
++
++		if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
++			memalloc_nofs_restore(flags);
++		else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
++			memalloc_noio_restore(flags);
++
+ 		___free_pages_bulk(data.pages, nr_pages);
+ 		if (ret)
+ 			break;
+@@ -385,7 +402,7 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+ 	return ret;
+ }
+ 
+-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
++int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask)
+ {
+ 	unsigned long shadow_start, shadow_end;
+ 	int ret;
+@@ -414,7 +431,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+ 	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
+ 	shadow_end = PAGE_ALIGN(shadow_end);
+ 
+-	ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
++	ret = __kasan_populate_vmalloc(shadow_start, shadow_end, gfp_mask);
+ 	if (ret)
+ 		return ret;
+ 
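
The scope API used above exists because nested allocations, here the
page-table allocations done inside apply_to_page_range(), never see the
caller's gfp mask; memalloc_nofs_save()/memalloc_noio_save() turn the
constraint into per-task state that every allocation in the section inherits.
A sketch of the idiom; do_work() is an illustrative callee:

	#include <linux/sched/mm.h>

	static int populate_under_gfp(gfp_t gfp_mask)
	{
		bool nofs = (gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO;
		bool noio = !(gfp_mask & (__GFP_FS | __GFP_IO));
		unsigned int flags = 0;
		int ret;

		if (nofs)
			flags = memalloc_nofs_save();	/* GFP_NOFS scope */
		else if (noio)
			flags = memalloc_noio_save();	/* GFP_NOIO scope */

		ret = do_work();	/* nested allocations obey the scope */

		if (nofs)
			memalloc_nofs_restore(flags);
		else if (noio)
			memalloc_noio_restore(flags);

		return ret;
	}

Computing the two predicates once also makes it obvious that flags is only
consumed on the paths that set it.
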
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 15203ea7d0073d..a0c040336fc59b 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1400,8 +1400,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+ 		 */
+ 		if (cc->is_khugepaged &&
+ 		    (pte_young(pteval) || folio_test_young(folio) ||
+-		     folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
+-								     address)))
++		     folio_test_referenced(folio) ||
++		     mmu_notifier_test_young(vma->vm_mm, _address)))
+ 			referenced++;
+ 	}
+ 	if (!writable) {
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index dd543dd7755fc0..e626b6c93ffeee 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -950,7 +950,7 @@ static const char * const action_page_types[] = {
+ 	[MF_MSG_BUDDY]			= "free buddy page",
+ 	[MF_MSG_DAX]			= "dax page",
+ 	[MF_MSG_UNSPLIT_THP]		= "unsplit thp",
+-	[MF_MSG_ALREADY_POISONED]	= "already poisoned",
++	[MF_MSG_ALREADY_POISONED]	= "already poisoned page",
+ 	[MF_MSG_UNKNOWN]		= "unknown page",
+ };
+ 
+@@ -1343,9 +1343,10 @@ static int action_result(unsigned long pfn, enum mf_action_page_type type,
+ {
+ 	trace_memory_failure_event(pfn, type, result);
+ 
+-	num_poisoned_pages_inc(pfn);
+-
+-	update_per_node_mf_stats(pfn, result);
++	if (type != MF_MSG_ALREADY_POISONED) {
++		num_poisoned_pages_inc(pfn);
++		update_per_node_mf_stats(pfn, result);
++	}
+ 
+ 	pr_err("%#lx: recovery action for %s: %s\n",
+ 		pfn, action_page_types[type], action_name[result]);
+@@ -2088,12 +2089,11 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
+ 		*hugetlb = 0;
+ 		return 0;
+ 	} else if (res == -EHWPOISON) {
+-		pr_err("%#lx: already hardware poisoned\n", pfn);
+ 		if (flags & MF_ACTION_REQUIRED) {
+ 			folio = page_folio(p);
+ 			res = kill_accessing_process(current, folio_pfn(folio), flags);
+-			action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
+ 		}
++		action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
+ 		return res;
+ 	} else if (res == -EBUSY) {
+ 		if (!(flags & MF_NO_RETRY)) {
+@@ -2279,7 +2279,6 @@ int memory_failure(unsigned long pfn, int flags)
+ 		goto unlock_mutex;
+ 
+ 	if (TestSetPageHWPoison(p)) {
+-		pr_err("%#lx: already hardware poisoned\n", pfn);
+ 		res = -EHWPOISON;
+ 		if (flags & MF_ACTION_REQUIRED)
+ 			res = kill_accessing_process(current, pfn, flags);
+@@ -2576,10 +2575,9 @@ int unpoison_memory(unsigned long pfn)
+ 	static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
+ 					DEFAULT_RATELIMIT_BURST);
+ 
+-	if (!pfn_valid(pfn))
+-		return -ENXIO;
+-
+-	p = pfn_to_page(pfn);
++	p = pfn_to_online_page(pfn);
++	if (!p)
++		return -EIO;
+ 	folio = page_folio(p);
+ 
+ 	mutex_lock(&mf_mutex);
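
unpoison_memory() now uses pfn_to_online_page() instead of pfn_valid() plus
pfn_to_page(): a valid pfn only guarantees that a memmap entry exists, while
offline (or mid-hot-remove) sections have struct pages that must not be
touched.  The distinction in isolation, as a hedged fragment:

	#include <linux/memory_hotplug.h>

	struct page *p = pfn_to_online_page(pfn);
	if (!p)			/* invalid pfn *or* offline section */
		return -EIO;
	/* p now refers to the struct page of an online page */
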
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 6dbcdceecae134..5edd536ba9d2a5 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2026,6 +2026,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ 	if (unlikely(!vmap_initialized))
+ 		return ERR_PTR(-EBUSY);
+ 
++	/* Only reclaim behaviour flags are relevant. */
++	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
+ 	might_sleep();
+ 
+ 	/*
+@@ -2038,8 +2040,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ 	 */
+ 	va = node_alloc(size, align, vstart, vend, &addr, &vn_id);
+ 	if (!va) {
+-		gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
+-
+ 		va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
+ 		if (unlikely(!va))
+ 			return ERR_PTR(-ENOMEM);
+@@ -2089,7 +2089,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ 	BUG_ON(va->va_start < vstart);
+ 	BUG_ON(va->va_end > vend);
+ 
+-	ret = kasan_populate_vmalloc(addr, size);
++	ret = kasan_populate_vmalloc(addr, size, gfp_mask);
+ 	if (ret) {
+ 		free_vmap_area(va);
+ 		return ERR_PTR(ret);
+@@ -4826,7 +4826,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+ 
+ 	/* populate the kasan shadow space */
+ 	for (area = 0; area < nr_vms; area++) {
+-		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
++		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area], GFP_KERNEL))
+ 			goto err_free_shadow;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index ad5574e9a93ee9..ce17e489c67c37 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -829,7 +829,17 @@ static void bis_cleanup(struct hci_conn *conn)
+ 		/* Check if ISO connection is a BIS and terminate advertising
+ 		 * set and BIG if there are no other connections using it.
+ 		 */
+-		bis = hci_conn_hash_lookup_big(hdev, conn->iso_qos.bcast.big);
++		bis = hci_conn_hash_lookup_big_state(hdev,
++						     conn->iso_qos.bcast.big,
++						     BT_CONNECTED,
++						     HCI_ROLE_MASTER);
++		if (bis)
++			return;
++
++		bis = hci_conn_hash_lookup_big_state(hdev,
++						     conn->iso_qos.bcast.big,
++						     BT_CONNECT,
++						     HCI_ROLE_MASTER);
+ 		if (bis)
+ 			return;
+ 
+@@ -2274,7 +2284,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	 * the start periodic advertising and create BIG commands have
+ 	 * been queued
+ 	 */
+-	hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK,
++	hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	/* Queue start periodic advertising and create BIG */
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 0ffdbe249f5d3d..090c7ffa515252 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6973,9 +6973,14 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 				continue;
+ 		}
+ 
+-		if (ev->status != 0x42)
++		if (ev->status != 0x42) {
+ 			/* Mark PA sync as established */
+ 			set_bit(HCI_CONN_PA_SYNC, &bis->flags);
++			/* Reset cleanup callback of PA Sync so it doesn't
++			 * terminate the sync when deleting the connection.
++			 */
++			conn->cleanup = NULL;
++		}
+ 
+ 		bis->sync_handle = conn->sync_handle;
+ 		bis->iso_qos.bcast.big = ev->handle;
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 14a4215352d5f1..c21566e1494a99 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -1347,7 +1347,7 @@ static int iso_sock_getname(struct socket *sock, struct sockaddr *addr,
+ 		bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst);
+ 		sa->iso_bdaddr_type = iso_pi(sk)->dst_type;
+ 
+-		if (hcon && hcon->type == BIS_LINK) {
++		if (hcon && (hcon->type == BIS_LINK || hcon->type == PA_LINK)) {
+ 			sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid;
+ 			sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis;
+ 			memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
+diff --git a/net/bridge/br.c b/net/bridge/br.c
+index 0adeafe11a3651..ad2d8f59fc7bcc 100644
+--- a/net/bridge/br.c
++++ b/net/bridge/br.c
+@@ -324,6 +324,13 @@ int br_boolopt_multi_toggle(struct net_bridge *br,
+ 	int err = 0;
+ 	int opt_id;
+ 
++	opt_id = find_next_bit(&bitmap, BITS_PER_LONG, BR_BOOLOPT_MAX);
++	if (opt_id != BITS_PER_LONG) {
++		NL_SET_ERR_MSG_FMT_MOD(extack, "Unknown boolean option %d",
++				       opt_id);
++		return -EINVAL;
++	}
++
+ 	for_each_set_bit(opt_id, &bitmap, BR_BOOLOPT_MAX) {
+ 		bool on = !!(bm->optval & BIT(opt_id));
+ 
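
br_boolopt_multi_toggle() now rejects requests that set bits it does not know
about instead of silently ignoring them, which matters for an API that
userspace probes for feature support.  The guard in isolation, assuming
bitmap holds the userspace-supplied option mask as in the function above:

	#include <linux/bitops.h>

	/* any set bit at or above BR_BOOLOPT_MAX is an unknown option */
	opt_id = find_next_bit(&bitmap, BITS_PER_LONG, BR_BOOLOPT_MAX);
	if (opt_id != BITS_PER_LONG)
		return -EINVAL;

	for_each_set_bit(opt_id, &bitmap, BR_BOOLOPT_MAX)
		apply_option(opt_id);	/* illustrative per-option work */
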
+diff --git a/net/can/j1939/bus.c b/net/can/j1939/bus.c
+index 39844f14eed862..797719cb227ec5 100644
+--- a/net/can/j1939/bus.c
++++ b/net/can/j1939/bus.c
+@@ -290,8 +290,11 @@ int j1939_local_ecu_get(struct j1939_priv *priv, name_t name, u8 sa)
+ 	if (!ecu)
+ 		ecu = j1939_ecu_create_locked(priv, name);
+ 	err = PTR_ERR_OR_ZERO(ecu);
+-	if (err)
++	if (err) {
++		if (j1939_address_is_unicast(sa))
++			priv->ents[sa].nusers--;
+ 		goto done;
++	}
+ 
+ 	ecu->nusers++;
+ 	/* TODO: do we care if ecu->addr != sa? */
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index 31a93cae5111b5..81f58924b4acd7 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -212,6 +212,7 @@ void j1939_priv_get(struct j1939_priv *priv);
+ 
+ /* notify/alert all j1939 sockets bound to ifindex */
+ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv);
++void j1939_sk_netdev_event_unregister(struct j1939_priv *priv);
+ int j1939_cancel_active_session(struct j1939_priv *priv, struct sock *sk);
+ void j1939_tp_init(struct j1939_priv *priv);
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 7e8a20f2fc42b5..3706a872ecafdb 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -377,6 +377,9 @@ static int j1939_netdev_notify(struct notifier_block *nb,
+ 		j1939_sk_netdev_event_netdown(priv);
+ 		j1939_ecu_unmap_all(priv);
+ 		break;
++	case NETDEV_UNREGISTER:
++		j1939_sk_netdev_event_unregister(priv);
++		break;
+ 	}
+ 
+ 	j1939_priv_put(priv);
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 6fefe7a6876116..785b883a1319d3 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -520,6 +520,9 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	ret = j1939_local_ecu_get(priv, jsk->addr.src_name, jsk->addr.sa);
+ 	if (ret) {
+ 		j1939_netdev_stop(priv);
++		jsk->priv = NULL;
++		synchronize_rcu();
++		j1939_priv_put(priv);
+ 		goto out_release_sock;
+ 	}
+ 
+@@ -1299,6 +1302,55 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
+ 	read_unlock_bh(&priv->j1939_socks_lock);
+ }
+ 
++void j1939_sk_netdev_event_unregister(struct j1939_priv *priv)
++{
++	struct sock *sk;
++	struct j1939_sock *jsk;
++	bool wait_rcu = false;
++
++rescan: /* The caller is holding a ref on this "priv" via j1939_priv_get_by_ndev(). */
++	read_lock_bh(&priv->j1939_socks_lock);
++	list_for_each_entry(jsk, &priv->j1939_socks, list) {
++		/* Skip if j1939_jsk_add() has not been called on this socket. */
++		if (!(jsk->state & J1939_SOCK_BOUND))
++			continue;
++		sk = &jsk->sk;
++		sock_hold(sk);
++		read_unlock_bh(&priv->j1939_socks_lock);
++		/* Check that j1939_jsk_del() has not been called on this socket yet,
++		 * now that we hold the socket's lock: both j1939_sk_bind() and
++		 * j1939_sk_release() call j1939_jsk_del() with the socket's lock held.
++		 */
++		lock_sock(sk);
++		if (jsk->state & J1939_SOCK_BOUND) {
++			/* Neither j1939_sk_bind() nor j1939_sk_release() called j1939_jsk_del().
++			 * Make this socket no longer bound, by pretending as if j1939_sk_bind()
++			 * dropped old references but did not get new references.
++			 */
++			j1939_jsk_del(priv, jsk);
++			j1939_local_ecu_put(priv, jsk->addr.src_name, jsk->addr.sa);
++			j1939_netdev_stop(priv);
++			/* Call j1939_priv_put() now and prevent j1939_sk_sock_destruct() from
++			 * calling the corresponding j1939_priv_put().
++			 *
++			 * j1939_sk_sock_destruct() is supposed to call j1939_priv_put() after
++			 * an RCU grace period. But since the caller is holding a ref on this
++			 * "priv", we can defer synchronize_rcu() until immediately before
++			 * the caller calls j1939_priv_put().
++			 */
++			j1939_priv_put(priv);
++			jsk->priv = NULL;
++			wait_rcu = true;
++		}
++		release_sock(sk);
++		sock_put(sk);
++		goto rescan;
++	}
++	read_unlock_bh(&priv->j1939_socks_lock);
++	if (wait_rcu)
++		synchronize_rcu();
++}
++
+ static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
+ 				unsigned long arg)
+ {
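
j1939_sk_netdev_event_unregister() cannot keep j1939_socks_lock (a BH-safe
rwlock) held across lock_sock(), which may sleep, so it pins one entry, drops
the list lock, rechecks under the socket lock, and restarts the scan because
the list may have changed meanwhile.  The skeleton of that idiom with
illustrative names:

	rescan:
		read_lock_bh(&list_lock);
		list_for_each_entry(e, &head, node) {
			if (!needs_work(e))
				continue;
			hold(e);			/* pin across unlock */
			read_unlock_bh(&list_lock);
			lock_sock(e->sk);		/* may sleep */
			if (needs_work(e))		/* recheck, state may have changed */
				do_work(e);
			release_sock(e->sk);
			put(e);
			goto rescan;			/* list may have mutated */
		}
		read_unlock_bh(&list_lock);
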
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index d1b5705dc0c648..9f6d860411cbd1 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -1524,7 +1524,7 @@ static void con_fault_finish(struct ceph_connection *con)
+ 	 * in case we faulted due to authentication, invalidate our
+ 	 * current tickets so that we can get new ones.
+ 	 */
+-	if (con->v1.auth_retry) {
++	if (!ceph_msgr2(from_msgr(con->msgr)) && con->v1.auth_retry) {
+ 		dout("auth_retry %d, invalidating\n", con->v1.auth_retry);
+ 		if (con->ops->invalidate_authorizer)
+ 			con->ops->invalidate_authorizer(con);
+@@ -1714,9 +1714,10 @@ static void clear_standby(struct ceph_connection *con)
+ {
+ 	/* come back from STANDBY? */
+ 	if (con->state == CEPH_CON_S_STANDBY) {
+-		dout("clear_standby %p and ++connect_seq\n", con);
++		dout("clear_standby %p\n", con);
+ 		con->state = CEPH_CON_S_PREOPEN;
+-		con->v1.connect_seq++;
++		if (!ceph_msgr2(from_msgr(con->msgr)))
++			con->v1.connect_seq++;
+ 		WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_WRITE_PENDING));
+ 		WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_KEEPALIVE_PENDING));
+ 	}
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 616479e7146633..9447065d01afb0 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -464,8 +464,15 @@ int generic_hwtstamp_get_lower(struct net_device *dev,
+ 	if (!netif_device_present(dev))
+ 		return -ENODEV;
+ 
+-	if (ops->ndo_hwtstamp_get)
+-		return dev_get_hwtstamp_phylib(dev, kernel_cfg);
++	if (ops->ndo_hwtstamp_get) {
++		int err;
++
++		netdev_lock_ops(dev);
++		err = dev_get_hwtstamp_phylib(dev, kernel_cfg);
++		netdev_unlock_ops(dev);
++
++		return err;
++	}
+ 
+ 	/* Legacy path: unconverted lower driver */
+ 	return generic_hwtstamp_ioctl_lower(dev, SIOCGHWTSTAMP, kernel_cfg);
+@@ -481,8 +488,15 @@ int generic_hwtstamp_set_lower(struct net_device *dev,
+ 	if (!netif_device_present(dev))
+ 		return -ENODEV;
+ 
+-	if (ops->ndo_hwtstamp_set)
+-		return dev_set_hwtstamp_phylib(dev, kernel_cfg, extack);
++	if (ops->ndo_hwtstamp_set) {
++		int err;
++
++		netdev_lock_ops(dev);
++		err = dev_set_hwtstamp_phylib(dev, kernel_cfg, extack);
++		netdev_unlock_ops(dev);
++
++		return err;
++	}
+ 
+ 	/* Legacy path: unconverted lower driver */
+ 	return generic_hwtstamp_ioctl_lower(dev, SIOCSHWTSTAMP, kernel_cfg);
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 88657255fec12b..fbbc3ccf9df64b 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -49,7 +49,7 @@ static bool hsr_check_carrier(struct hsr_port *master)
+ 
+ 	ASSERT_RTNL();
+ 
+-	hsr_for_each_port(master->hsr, port) {
++	hsr_for_each_port_rtnl(master->hsr, port) {
+ 		if (port->type != HSR_PT_MASTER && is_slave_up(port->dev)) {
+ 			netif_carrier_on(master->dev);
+ 			return true;
+@@ -105,7 +105,7 @@ int hsr_get_max_mtu(struct hsr_priv *hsr)
+ 	struct hsr_port *port;
+ 
+ 	mtu_max = ETH_DATA_LEN;
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		if (port->type != HSR_PT_MASTER)
+ 			mtu_max = min(port->dev->mtu, mtu_max);
+ 
+@@ -139,7 +139,7 @@ static int hsr_dev_open(struct net_device *dev)
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -172,7 +172,7 @@ static int hsr_dev_close(struct net_device *dev)
+ 	struct hsr_priv *hsr;
+ 
+ 	hsr = netdev_priv(dev);
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -205,7 +205,7 @@ static netdev_features_t hsr_features_recompute(struct hsr_priv *hsr,
+ 	 * may become enabled.
+ 	 */
+ 	features &= ~NETIF_F_ONE_FOR_ALL;
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		features = netdev_increment_features(features,
+ 						     port->dev->features,
+ 						     mask);
+@@ -226,6 +226,7 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct hsr_priv *hsr = netdev_priv(dev);
+ 	struct hsr_port *master;
+ 
++	rcu_read_lock();
+ 	master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
+ 	if (master) {
+ 		skb->dev = master->dev;
+@@ -238,6 +239,8 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		dev_core_stats_tx_dropped_inc(dev);
+ 		dev_kfree_skb_any(skb);
+ 	}
++	rcu_read_unlock();
++
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -484,7 +487,7 @@ static void hsr_set_rx_mode(struct net_device *dev)
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -506,7 +509,7 @@ static void hsr_change_rx_flags(struct net_device *dev, int change)
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -534,7 +537,7 @@ static int hsr_ndo_vlan_rx_add_vid(struct net_device *dev,
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER ||
+ 		    port->type == HSR_PT_INTERLINK)
+ 			continue;
+@@ -580,7 +583,7 @@ static int hsr_ndo_vlan_rx_kill_vid(struct net_device *dev,
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		switch (port->type) {
+ 		case HSR_PT_SLAVE_A:
+ 		case HSR_PT_SLAVE_B:
+@@ -672,9 +675,14 @@ struct net_device *hsr_get_port_ndev(struct net_device *ndev,
+ 	struct hsr_priv *hsr = netdev_priv(ndev);
+ 	struct hsr_port *port;
+ 
++	rcu_read_lock();
+ 	hsr_for_each_port(hsr, port)
+-		if (port->type == pt)
++		if (port->type == pt) {
++			dev_hold(port->dev);
++			rcu_read_unlock();
+ 			return port->dev;
++		}
++	rcu_read_unlock();
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(hsr_get_port_ndev);
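
hsr_get_port_ndev() shows the rule for handing an RCU-protected pointer out
to a caller: take a reference while still inside the read-side critical
section, because the object may be freed the moment rcu_read_unlock()
returns.  Condensed shape with illustrative names:

	rcu_read_lock();
	list_for_each_entry_rcu(e, &head, node)
		if (match(e)) {
			get_ref(e->obj);	/* pin before leaving RCU */
			rcu_read_unlock();
			return e->obj;		/* caller must drop the ref */
		}
	rcu_read_unlock();
	return NULL;
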
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index 192893c3f2ec73..bc94b07101d80e 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -22,7 +22,7 @@ static bool hsr_slave_empty(struct hsr_priv *hsr)
+ {
+ 	struct hsr_port *port;
+ 
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		if (port->type != HSR_PT_MASTER)
+ 			return false;
+ 	return true;
+@@ -134,7 +134,7 @@ struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt)
+ {
+ 	struct hsr_port *port;
+ 
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		if (port->type == pt)
+ 			return port;
+ 	return NULL;
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 135ec5fce01967..33b0d2460c9bcd 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -224,6 +224,9 @@ struct hsr_priv {
+ #define hsr_for_each_port(hsr, port) \
+ 	list_for_each_entry_rcu((port), &(hsr)->ports, port_list)
+ 
++#define hsr_for_each_port_rtnl(hsr, port) \
++	list_for_each_entry_rcu((port), &(hsr)->ports, port_list, lockdep_rtnl_is_held())
++
+ struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt);
+ 
+ /* Caller must ensure skb is a valid HSR frame */
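
The new hsr_for_each_port_rtnl() relies on list_for_each_entry_rcu()'s
optional fourth argument: traversal is legal either inside rcu_read_lock() or
while holding the writer-side lock, and the expression tells lockdep which
case applies so it does not warn.  Both call shapes as a sketch; my_list and
my_mutex are illustrative:

	/* reader path */
	rcu_read_lock();
	list_for_each_entry_rcu(e, &my_list, node)
		use(e);
	rcu_read_unlock();

	/* update path already holds my_mutex, no rcu_read_lock() needed */
	list_for_each_entry_rcu(e, &my_list, node,
				lockdep_is_held(&my_mutex))
		use(e);
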
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index f65d2f7273813b..8392d304a72ebe 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -204,6 +204,9 @@ static int iptunnel_pmtud_build_icmp(struct sk_buff *skb, int mtu)
+ 	if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr)))
+ 		return -EINVAL;
+ 
++	if (skb_is_gso(skb))
++		skb_gso_reset(skb);
++
+ 	skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN);
+ 	pskb_pull(skb, ETH_HLEN);
+ 	skb_reset_network_header(skb);
+@@ -298,6 +301,9 @@ static int iptunnel_pmtud_build_icmpv6(struct sk_buff *skb, int mtu)
+ 	if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr)))
+ 		return -EINVAL;
+ 
++	if (skb_is_gso(skb))
++		skb_gso_reset(skb);
++
+ 	skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN);
+ 	pskb_pull(skb, ETH_HLEN);
+ 	skb_reset_network_header(skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index ba581785adb4b3..a268e1595b22aa 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -408,8 +408,11 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ 		if (!psock->cork) {
+ 			psock->cork = kzalloc(sizeof(*psock->cork),
+ 					      GFP_ATOMIC | __GFP_NOWARN);
+-			if (!psock->cork)
++			if (!psock->cork) {
++				sk_msg_free(sk, msg);
++				*copied = 0;
+ 				return -ENOMEM;
++			}
+ 		}
+ 		memcpy(psock->cork, msg, sizeof(*msg));
+ 		return 0;
+diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
+index 3caa0a9d3b3885..25d2b65653cd40 100644
+--- a/net/mptcp/sockopt.c
++++ b/net/mptcp/sockopt.c
+@@ -1508,13 +1508,12 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ 	static const unsigned int tx_rx_locks = SOCK_RCVBUF_LOCK | SOCK_SNDBUF_LOCK;
+ 	struct sock *sk = (struct sock *)msk;
++	bool keep_open;
+ 
+-	if (ssk->sk_prot->keepalive) {
+-		if (sock_flag(sk, SOCK_KEEPOPEN))
+-			ssk->sk_prot->keepalive(ssk, 1);
+-		else
+-			ssk->sk_prot->keepalive(ssk, 0);
+-	}
++	keep_open = sock_flag(sk, SOCK_KEEPOPEN);
++	if (ssk->sk_prot->keepalive)
++		ssk->sk_prot->keepalive(ssk, keep_open);
++	sock_valbool_flag(ssk, SOCK_KEEPOPEN, keep_open);
+ 
+ 	ssk->sk_priority = sk->sk_priority;
+ 	ssk->sk_bound_dev_if = sk->sk_bound_dev_if;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0e86434ca13b00..cde63e5f18d8f9 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1131,11 +1131,14 @@ nf_tables_chain_type_lookup(struct net *net, const struct nlattr *nla,
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-static __be16 nft_base_seq(const struct net *net)
++static unsigned int nft_base_seq(const struct net *net)
+ {
+-	struct nftables_pernet *nft_net = nft_pernet(net);
++	return READ_ONCE(net->nft.base_seq);
++}
+ 
+-	return htons(nft_net->base_seq & 0xffff);
++static __be16 nft_base_seq_be16(const struct net *net)
++{
++	return htons(nft_base_seq(net) & 0xffff);
+ }
+ 
+ static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = {
+@@ -1153,9 +1156,9 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ {
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -1165,6 +1168,12 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_TABLE_PAD))
+ 		goto nla_put_failure;
+ 
++	if (event == NFT_MSG_DELTABLE ||
++	    event == NFT_MSG_DESTROYTABLE) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (nla_put_be32(skb, NFTA_TABLE_FLAGS,
+ 			 htonl(table->flags & NFT_TABLE_F_MASK)))
+ 		goto nla_put_failure;
+@@ -1242,7 +1251,7 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -2022,9 +2031,9 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ {
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -2034,6 +2043,13 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_CHAIN_PAD))
+ 		goto nla_put_failure;
+ 
++	if (!hook_list &&
++	    (event == NFT_MSG_DELCHAIN ||
++	     event == NFT_MSG_DESTROYCHAIN)) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (nft_is_base_chain(chain)) {
+ 		const struct nft_base_chain *basechain = nft_base_chain(chain);
+ 		struct nft_stats __percpu *stats;
+@@ -2120,7 +2136,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3658,7 +3674,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 	u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 
+ 	nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0,
+-			   nft_base_seq(net));
++			   nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -3826,7 +3842,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -4037,7 +4053,7 @@ static int nf_tables_getrule_reset(struct sk_buff *skb,
+ 	buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ 			nla_len(nla[NFTA_RULE_TABLE]),
+ 			(char *)nla_data(nla[NFTA_RULE_TABLE]),
+-			nft_net->base_seq);
++			nft_base_seq(net));
+ 	audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ 			AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC);
+ 	kfree(buf);
+@@ -4871,9 +4887,10 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 	u32 seq = ctx->seq;
+ 	int i;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
+-			   NFNETLINK_V0, nft_base_seq(ctx->net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, ctx->family, NFNETLINK_V0,
++			   nft_base_seq_be16(ctx->net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -4885,6 +4902,12 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 			 NFTA_SET_PAD))
+ 		goto nla_put_failure;
+ 
++	if (event == NFT_MSG_DELSET ||
++	    event == NFT_MSG_DESTROYSET) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (set->flags != 0)
+ 		if (nla_put_be32(skb, NFTA_SET_FLAGS, htonl(set->flags)))
+ 			goto nla_put_failure;
+@@ -5012,7 +5035,7 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (ctx->family != NFPROTO_UNSPEC &&
+@@ -6189,7 +6212,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
+@@ -6218,7 +6241,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 	seq    = cb->nlh->nlmsg_seq;
+ 
+ 	nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI,
+-			   table->family, NFNETLINK_V0, nft_base_seq(net));
++			   table->family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -6311,7 +6334,7 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 	nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
+-			   NFNETLINK_V0, nft_base_seq(ctx->net));
++			   NFNETLINK_V0, nft_base_seq_be16(ctx->net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -6610,7 +6633,7 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
+ 		}
+ 		nelems++;
+ 	}
+-	audit_log_nft_set_reset(dump_ctx.ctx.table, nft_net->base_seq, nelems);
++	audit_log_nft_set_reset(dump_ctx.ctx.table, nft_base_seq(info->net), nelems);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+@@ -8359,20 +8382,26 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
+ {
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_string(skb, NFTA_OBJ_TABLE, table->name) ||
+ 	    nla_put_string(skb, NFTA_OBJ_NAME, obj->key.name) ||
++	    nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+ 	    nla_put_be64(skb, NFTA_OBJ_HANDLE, cpu_to_be64(obj->handle),
+ 			 NFTA_OBJ_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+-	    nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
++	if (event == NFT_MSG_DELOBJ ||
++	    event == NFT_MSG_DESTROYOBJ) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
++	if (nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
+ 	    nft_object_dump(skb, NFTA_OBJ_DATA, obj, reset))
+ 		goto nla_put_failure;
+ 
+@@ -8420,7 +8449,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -8454,7 +8483,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 			idx++;
+ 		}
+ 		if (ctx->reset && entries)
+-			audit_log_obj_reset(table, nft_net->base_seq, entries);
++			audit_log_obj_reset(table, nft_base_seq(net), entries);
+ 		if (rc < 0)
+ 			break;
+ 	}
+@@ -8623,7 +8652,7 @@ static int nf_tables_getobj_reset(struct sk_buff *skb,
+ 	buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ 			nla_len(nla[NFTA_OBJ_TABLE]),
+ 			(char *)nla_data(nla[NFTA_OBJ_TABLE]),
+-			nft_net->base_seq);
++			nft_base_seq(net));
+ 	audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ 			AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC);
+ 	kfree(buf);
+@@ -8728,9 +8757,8 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 		    struct nft_object *obj, u32 portid, u32 seq, int event,
+ 		    u16 flags, int family, int report, gfp_t gfp)
+ {
+-	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	char *buf = kasprintf(gfp, "%s:%u",
+-			      table->name, nft_net->base_seq);
++			      table->name, nft_base_seq(net));
+ 
+ 	audit_log_nfcfg(buf,
+ 			family,
+@@ -9413,9 +9441,9 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 	struct nft_hook *hook;
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -9425,6 +9453,13 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_FLOWTABLE_PAD))
+ 		goto nla_put_failure;
+ 
++	if (!hook_list &&
++	    (event == NFT_MSG_DELFLOWTABLE ||
++	     event == NFT_MSG_DESTROYFLOWTABLE)) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
+ 	    nla_put_be32(skb, NFTA_FLOWTABLE_FLAGS, htonl(flowtable->data.flags)))
+ 		goto nla_put_failure;
+@@ -9477,7 +9512,7 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -9662,17 +9697,16 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
+ 				   u32 portid, u32 seq)
+ {
+-	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	struct nlmsghdr *nlh;
+ 	char buf[TASK_COMM_LEN];
+ 	int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
+ 
+ 	nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC,
+-			   NFNETLINK_V0, nft_base_seq(net));
++			   NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) ||
++	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_base_seq(net))) ||
+ 	    nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) ||
+ 	    nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current)))
+ 		goto nla_put_failure;
+@@ -10933,11 +10967,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	 * Bump generation counter, invalidate any dump in progress.
+ 	 * Cannot fail after this point.
+ 	 */
+-	base_seq = READ_ONCE(nft_net->base_seq);
++	base_seq = nft_base_seq(net);
+ 	while (++base_seq == 0)
+ 		;
+ 
+-	WRITE_ONCE(nft_net->base_seq, base_seq);
++	/* pairs with smp_load_acquire in nft_lookup_eval */
++	smp_store_release(&net->nft.base_seq, base_seq);
+ 
+ 	gc_seq = nft_gc_seq_begin(nft_net);
+ 
+@@ -11146,7 +11181,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 
+ 	nft_commit_notify(net, NETLINK_CB(skb).portid);
+ 	nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
+-	nf_tables_commit_audit_log(&adl, nft_net->base_seq);
++	nf_tables_commit_audit_log(&adl, nft_base_seq(net));
+ 
+ 	nft_gc_seq_end(nft_net, gc_seq);
+ 	nft_net->validate_state = NFT_VALIDATE_SKIP;
+@@ -11471,7 +11506,7 @@ static bool nf_tables_valid_genid(struct net *net, u32 genid)
+ 	mutex_lock(&nft_net->commit_mutex);
+ 	nft_net->tstamp = get_jiffies_64();
+ 
+-	genid_ok = genid == 0 || nft_net->base_seq == genid;
++	genid_ok = genid == 0 || nft_base_seq(net) == genid;
+ 	if (!genid_ok)
+ 		mutex_unlock(&nft_net->commit_mutex);
+ 
+@@ -12108,7 +12143,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+ 	INIT_LIST_HEAD(&nft_net->module_list);
+ 	INIT_LIST_HEAD(&nft_net->notify_list);
+ 	mutex_init(&nft_net->commit_mutex);
+-	nft_net->base_seq = 1;
++	net->nft.base_seq = 1;
+ 	nft_net->gc_seq = 0;
+ 	nft_net->validate_state = NFT_VALIDATE_SKIP;
+ 	INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work);
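
Moving base_seq into struct net and publishing it with smp_store_release()
gives the datapath a lockless generation check: the release in the commit
path pairs with the smp_load_acquire() in the nft_lookup.c hunk further
below.  The pairing, condensed:

	/* writer, commit path, under commit_mutex */
	unsigned int seq = nft_base_seq(net);

	while (++seq == 0)	/* skip 0: userspace passes genid 0 for "any" */
		;
	smp_store_release(&net->nft.base_seq, seq);

	/* reader, datapath */
	unsigned int cur = smp_load_acquire(&net->nft.base_seq);
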
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 88922e0e8e8377..e24493d9e77615 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -91,8 +91,9 @@ void nft_dynset_eval(const struct nft_expr *expr,
+ 		return;
+ 	}
+ 
+-	if (set->ops->update(set, &regs->data[priv->sreg_key], nft_dynset_new,
+-			     expr, regs, &ext)) {
++	ext = set->ops->update(set, &regs->data[priv->sreg_key], nft_dynset_new,
++			     expr, regs);
++	if (ext) {
+ 		if (priv->op == NFT_DYNSET_OP_UPDATE &&
+ 		    nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) &&
+ 		    READ_ONCE(nft_set_ext_timeout(ext)->timeout) != 0) {
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index 63ef832b8aa710..58c5b14889c474 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -24,36 +24,73 @@ struct nft_lookup {
+ 	struct nft_set_binding		binding;
+ };
+ 
+-#ifdef CONFIG_MITIGATION_RETPOLINE
+-bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++static const struct nft_set_ext *
++__nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++		    const u32 *key)
+ {
++#ifdef CONFIG_MITIGATION_RETPOLINE
+ 	if (set->ops == &nft_set_hash_fast_type.ops)
+-		return nft_hash_lookup_fast(net, set, key, ext);
++		return nft_hash_lookup_fast(net, set, key);
+ 	if (set->ops == &nft_set_hash_type.ops)
+-		return nft_hash_lookup(net, set, key, ext);
++		return nft_hash_lookup(net, set, key);
+ 
+ 	if (set->ops == &nft_set_rhash_type.ops)
+-		return nft_rhash_lookup(net, set, key, ext);
++		return nft_rhash_lookup(net, set, key);
+ 
+ 	if (set->ops == &nft_set_bitmap_type.ops)
+-		return nft_bitmap_lookup(net, set, key, ext);
++		return nft_bitmap_lookup(net, set, key);
+ 
+ 	if (set->ops == &nft_set_pipapo_type.ops)
+-		return nft_pipapo_lookup(net, set, key, ext);
++		return nft_pipapo_lookup(net, set, key);
+ #if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
+ 	if (set->ops == &nft_set_pipapo_avx2_type.ops)
+-		return nft_pipapo_avx2_lookup(net, set, key, ext);
++		return nft_pipapo_avx2_lookup(net, set, key);
+ #endif
+ 
+ 	if (set->ops == &nft_set_rbtree_type.ops)
+-		return nft_rbtree_lookup(net, set, key, ext);
++		return nft_rbtree_lookup(net, set, key);
+ 
+ 	WARN_ON_ONCE(1);
+-	return set->ops->lookup(net, set, key, ext);
++#endif
++	return set->ops->lookup(net, set, key);
++}
++
++static unsigned int nft_base_seq(const struct net *net)
++{
++	/* pairs with smp_store_release() in nf_tables_commit() */
++	return smp_load_acquire(&net->nft.base_seq);
++}
++
++static bool nft_lookup_should_retry(const struct net *net, unsigned int seq)
++{
++	return unlikely(seq != nft_base_seq(net));
++}
++
++const struct nft_set_ext *
++nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
++{
++	const struct nft_set_ext *ext;
++	unsigned int base_seq;
++
++	do {
++		base_seq = nft_base_seq(net);
++
++		ext = __nft_set_do_lookup(net, set, key);
++		if (ext)
++			break;
++		/* No match?  There is a small chance that lookup was
++		 * performed in the old generation, but nf_tables_commit()
++		 * already unlinked a (matching) element.
++		 *
++		 * We need to repeat the lookup to make sure that we didn't
++		 * miss a matching element in the new generation.
++		 */
++	} while (nft_lookup_should_retry(net, base_seq));
++
++	return ext;
+ }
+ EXPORT_SYMBOL_GPL(nft_set_do_lookup);
+-#endif
+ 
+ void nft_lookup_eval(const struct nft_expr *expr,
+ 		     struct nft_regs *regs,
+@@ -61,12 +98,12 @@ void nft_lookup_eval(const struct nft_expr *expr,
+ {
+ 	const struct nft_lookup *priv = nft_expr_priv(expr);
+ 	const struct nft_set *set = priv->set;
+-	const struct nft_set_ext *ext = NULL;
+ 	const struct net *net = nft_net(pkt);
++	const struct nft_set_ext *ext;
+ 	bool found;
+ 
+-	found =	nft_set_do_lookup(net, set, &regs->data[priv->sreg], &ext) ^
+-				  priv->invert;
++	ext = nft_set_do_lookup(net, set, &regs->data[priv->sreg]);
++	found = !!ext ^ priv->invert;
+ 	if (!found) {
+ 		ext = nft_set_catchall_lookup(net, set);
+ 		if (!ext) {
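
The conversion that runs through all the set backends below replaces the old
bool-plus-out-parameter lookup contract with a single return value; that is
what lets nft_set_do_lookup() retry a miss and lets callers write
found = !!ext.  In signature form:

	/* before: two outputs to keep in sync */
	bool lookup(const struct net *net, const struct nft_set *set,
		    const u32 *key, const struct nft_set_ext **ext);

	/* after: one output, NULL means "no match" */
	const struct nft_set_ext *lookup(const struct net *net,
					 const struct nft_set *set,
					 const u32 *key);
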
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index 09da7a3f9f9677..8ee66a86c3bc75 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -111,10 +111,9 @@ void nft_objref_map_eval(const struct nft_expr *expr,
+ 	struct net *net = nft_net(pkt);
+ 	const struct nft_set_ext *ext;
+ 	struct nft_object *obj;
+-	bool found;
+ 
+-	found = nft_set_do_lookup(net, set, &regs->data[priv->sreg], &ext);
+-	if (!found) {
++	ext = nft_set_do_lookup(net, set, &regs->data[priv->sreg]);
++	if (!ext) {
+ 		ext = nft_set_catchall_lookup(net, set);
+ 		if (!ext) {
+ 			regs->verdict.code = NFT_BREAK;
+diff --git a/net/netfilter/nft_set_bitmap.c b/net/netfilter/nft_set_bitmap.c
+index 12390d2e994fc6..8d3f040a904a2c 100644
+--- a/net/netfilter/nft_set_bitmap.c
++++ b/net/netfilter/nft_set_bitmap.c
+@@ -75,16 +75,21 @@ nft_bitmap_active(const u8 *bitmap, u32 idx, u32 off, u8 genmask)
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
+ {
+ 	const struct nft_bitmap *priv = nft_set_priv(set);
++	static const struct nft_set_ext found;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	u32 idx, off;
+ 
+ 	nft_bitmap_location(set, key, &idx, &off);
+ 
+-	return nft_bitmap_active(priv->bitmap, idx, off, genmask);
++	if (nft_bitmap_active(priv->bitmap, idx, off, genmask))
++		return &found;
++
++	return NULL;
+ }
+ 
+ static struct nft_bitmap_elem *
+@@ -221,7 +226,8 @@ static void nft_bitmap_walk(const struct nft_ctx *ctx,
+ 	const struct nft_bitmap *priv = nft_set_priv(set);
+ 	struct nft_bitmap_elem *be;
+ 
+-	list_for_each_entry_rcu(be, &priv->list, head) {
++	list_for_each_entry_rcu(be, &priv->list, head,
++				lockdep_is_held(&nft_pernet(ctx->net)->commit_mutex)) {
+ 		if (iter->count < iter->skip)
+ 			goto cont;
+ 
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index abb0c8ec637191..9903c737c9f0ad 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -81,8 +81,9 @@ static const struct rhashtable_params nft_rhash_params = {
+ };
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+-		      const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_rhash_lookup(const struct net *net, const struct nft_set *set,
++		 const u32 *key)
+ {
+ 	struct nft_rhash *priv = nft_set_priv(set);
+ 	const struct nft_rhash_elem *he;
+@@ -95,9 +96,9 @@ bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+ 	if (he != NULL)
+-		*ext = &he->ext;
++		return &he->ext;
+ 
+-	return !!he;
++	return NULL;
+ }
+ 
+ static struct nft_elem_priv *
+@@ -120,14 +121,11 @@ nft_rhash_get(const struct net *net, const struct nft_set *set,
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+-			     struct nft_elem_priv *
+-				   (*new)(struct nft_set *,
+-					  const struct nft_expr *,
+-					  struct nft_regs *regs),
+-			     const struct nft_expr *expr,
+-			     struct nft_regs *regs,
+-			     const struct nft_set_ext **ext)
++static const struct nft_set_ext *
++nft_rhash_update(struct nft_set *set, const u32 *key,
++		 struct nft_elem_priv *(*new)(struct nft_set *, const struct nft_expr *,
++		 struct nft_regs *regs),
++		 const struct nft_expr *expr, struct nft_regs *regs)
+ {
+ 	struct nft_rhash *priv = nft_set_priv(set);
+ 	struct nft_rhash_elem *he, *prev;
+@@ -161,14 +159,13 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+ 	}
+ 
+ out:
+-	*ext = &he->ext;
+-	return true;
++	return &he->ext;
+ 
+ err2:
+ 	nft_set_elem_destroy(set, &he->priv, true);
+ 	atomic_dec(&set->nelems);
+ err1:
+-	return false;
++	return NULL;
+ }
+ 
+ static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
+@@ -507,8 +504,9 @@ struct nft_hash_elem {
+ };
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+-		     const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_hash_lookup(const struct net *net, const struct nft_set *set,
++		const u32 *key)
+ {
+ 	struct nft_hash *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_cur(net);
+@@ -519,12 +517,10 @@ bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+ 	hash = reciprocal_scale(hash, priv->buckets);
+ 	hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
+ 		if (!memcmp(nft_set_ext_key(&he->ext), key, set->klen) &&
+-		    nft_set_elem_active(&he->ext, genmask)) {
+-			*ext = &he->ext;
+-			return true;
+-		}
++		    nft_set_elem_active(&he->ext, genmask))
++			return &he->ext;
+ 	}
+-	return false;
++	return NULL;
+ }
+ 
+ static struct nft_elem_priv *
+@@ -547,9 +543,9 @@ nft_hash_get(const struct net *net, const struct nft_set *set,
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_hash_lookup_fast(const struct net *net,
+-			  const struct nft_set *set,
+-			  const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_hash_lookup_fast(const struct net *net, const struct nft_set *set,
++		     const u32 *key)
+ {
+ 	struct nft_hash *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_cur(net);
+@@ -562,12 +558,10 @@ bool nft_hash_lookup_fast(const struct net *net,
+ 	hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
+ 		k2 = *(u32 *)nft_set_ext_key(&he->ext)->data;
+ 		if (k1 == k2 &&
+-		    nft_set_elem_active(&he->ext, genmask)) {
+-			*ext = &he->ext;
+-			return true;
+-		}
++		    nft_set_elem_active(&he->ext, genmask))
++			return &he->ext;
+ 	}
+-	return false;
++	return NULL;
+ }
+ 
+ static u32 nft_jhash(const struct nft_set *set, const struct nft_hash *priv,
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 9e4e25f2458f99..793790d79d1384 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -397,37 +397,38 @@ int pipapo_refill(unsigned long *map, unsigned int len, unsigned int rules,
+ }
+ 
+ /**
+- * nft_pipapo_lookup() - Lookup function
+- * @net:	Network namespace
+- * @set:	nftables API set representation
+- * @key:	nftables API element representation containing key data
+- * @ext:	nftables API extension pointer, filled with matching reference
++ * pipapo_get() - Get matching element reference given key data
++ * @m:		storage containing the set elements
++ * @data:	Key data to be matched against existing elements
++ * @genmask:	If set, check that element is active in given genmask
++ * @tstamp:	timestamp to check for expired elements
+  *
+  * For more details, see DOC: Theory of Operation.
+  *
+- * Return: true on match, false otherwise.
++ * This is the main lookup function.  It matches key data against either
++ * the working match set or the uncommitted copy, depending on what the
++ * caller passed to us.
++ * nft_pipapo_get (lookup from userspace/control plane) and nft_pipapo_lookup
++ * (datapath lookup) pass the active copy.
++ * The insertion path will pass the uncommitted working copy.
++ *
++ * Return: pointer to &struct nft_pipapo_elem on match, NULL otherwise.
+  */
+-bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
++					  const u8 *data, u8 genmask,
++					  u64 tstamp)
+ {
+-	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_scratch *scratch;
+ 	unsigned long *res_map, *fill_map;
+-	u8 genmask = nft_genmask_cur(net);
+-	const struct nft_pipapo_match *m;
+ 	const struct nft_pipapo_field *f;
+-	const u8 *rp = (const u8 *)key;
+ 	bool map_index;
+ 	int i;
+ 
+ 	local_bh_disable();
+ 
+-	m = rcu_dereference(priv->match);
+-
+-	if (unlikely(!m || !*raw_cpu_ptr(m->scratch)))
+-		goto out;
+-
+ 	scratch = *raw_cpu_ptr(m->scratch);
++	if (unlikely(!scratch))
++		goto out;
+ 
+ 	map_index = scratch->map_index;
+ 
+@@ -444,12 +445,12 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		 * packet bytes value, then AND bucket value
+ 		 */
+ 		if (likely(f->bb == 8))
+-			pipapo_and_field_buckets_8bit(f, res_map, rp);
++			pipapo_and_field_buckets_8bit(f, res_map, data);
+ 		else
+-			pipapo_and_field_buckets_4bit(f, res_map, rp);
++			pipapo_and_field_buckets_4bit(f, res_map, data);
+ 		NFT_PIPAPO_GROUP_BITS_ARE_8_OR_4;
+ 
+-		rp += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
++		data += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
+ 
+ 		/* Now populate the bitmap for the next field, unless this is
+ 		 * the last field, in which case return the matched 'ext'
+@@ -465,13 +466,15 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 			scratch->map_index = map_index;
+ 			local_bh_enable();
+ 
+-			return false;
++			return NULL;
+ 		}
+ 
+ 		if (last) {
+-			*ext = &f->mt[b].e->ext;
+-			if (unlikely(nft_set_elem_expired(*ext) ||
+-				     !nft_set_elem_active(*ext, genmask)))
++			struct nft_pipapo_elem *e;
++
++			e = f->mt[b].e;
++			if (unlikely(__nft_set_elem_expired(&e->ext, tstamp) ||
++				     !nft_set_elem_active(&e->ext, genmask)))
+ 				goto next_match;
+ 
+ 			/* Last field: we're just returning the key without
+@@ -481,8 +484,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 			 */
+ 			scratch->map_index = map_index;
+ 			local_bh_enable();
+-
+-			return true;
++			return e;
+ 		}
+ 
+ 		/* Swap bitmap indices: res_map is the initial bitmap for the
+@@ -492,112 +494,54 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		map_index = !map_index;
+ 		swap(res_map, fill_map);
+ 
+-		rp += NFT_PIPAPO_GROUPS_PADDING(f);
++		data += NFT_PIPAPO_GROUPS_PADDING(f);
+ 	}
+ 
+ out:
+ 	local_bh_enable();
+-	return false;
++	return NULL;
+ }
+ 
+ /**
+- * pipapo_get() - Get matching element reference given key data
++ * nft_pipapo_lookup() - Dataplane frontend for main lookup function
+  * @net:	Network namespace
+  * @set:	nftables API set representation
+- * @m:		storage containing active/existing elements
+- * @data:	Key data to be matched against existing elements
+- * @genmask:	If set, check that element is active in given genmask
+- * @tstamp:	timestamp to check for expired elements
+- * @gfp:	the type of memory to allocate (see kmalloc).
++ * @key:	pointer to nft registers containing key data
++ *
++ * This function is called from the data path.  It will search for
++ * an element matching the given key in the current active copy.
++ * Unlike other set types, this uses NFT_GENMASK_ANY instead of
++ * nft_genmask_cur().
+  *
+- * This is essentially the same as the lookup function, except that it matches
+- * key data against the uncommitted copy and doesn't use preallocated maps for
+- * bitmap results.
++ * This is because new (future) elements are not reachable from
++ * priv->match; they get added to priv->clone instead.
++ * When the commit phase flips the generation bitmask, the
++ * 'now old' entries are skipped but without the 'now current'
++ * elements becoming visible. Using nft_genmask_cur() thus creates
++ * inconsistent state: matching old entries get skipped but the
++ * newly matching entries are unreachable.
+  *
+- * Return: pointer to &struct nft_pipapo_elem on match, error pointer otherwise.
++ * NFT_GENMASK_ANY will still find the 'now old' entries, which ensures
++ * a consistent priv->match view.
++ *
++ * nft_pipapo_commit swaps ->clone and ->match shortly after the
++ * genbit flip.  As ->clone doesn't contain the old entries in the first
++ * place, lookup will only find the now-current ones.
++ *
++ * Return: nftables API extension pointer or NULL if no match.
+  */
+-static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+-					  const struct nft_set *set,
+-					  const struct nft_pipapo_match *m,
+-					  const u8 *data, u8 genmask,
+-					  u64 tstamp, gfp_t gfp)
++const struct nft_set_ext *
++nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
+ {
+-	struct nft_pipapo_elem *ret = ERR_PTR(-ENOENT);
+-	unsigned long *res_map, *fill_map = NULL;
+-	const struct nft_pipapo_field *f;
+-	int i;
+-
+-	if (m->bsize_max == 0)
+-		return ret;
+-
+-	res_map = kmalloc_array(m->bsize_max, sizeof(*res_map), gfp);
+-	if (!res_map) {
+-		ret = ERR_PTR(-ENOMEM);
+-		goto out;
+-	}
+-
+-	fill_map = kcalloc(m->bsize_max, sizeof(*res_map), gfp);
+-	if (!fill_map) {
+-		ret = ERR_PTR(-ENOMEM);
+-		goto out;
+-	}
+-
+-	pipapo_resmap_init(m, res_map);
+-
+-	nft_pipapo_for_each_field(f, i, m) {
+-		bool last = i == m->field_count - 1;
+-		int b;
+-
+-		/* For each bit group: select lookup table bucket depending on
+-		 * packet bytes value, then AND bucket value
+-		 */
+-		if (f->bb == 8)
+-			pipapo_and_field_buckets_8bit(f, res_map, data);
+-		else if (f->bb == 4)
+-			pipapo_and_field_buckets_4bit(f, res_map, data);
+-		else
+-			BUG();
+-
+-		data += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
+-
+-		/* Now populate the bitmap for the next field, unless this is
+-		 * the last field, in which case return the matched 'ext'
+-		 * pointer if any.
+-		 *
+-		 * Now res_map contains the matching bitmap, and fill_map is the
+-		 * bitmap for the next field.
+-		 */
+-next_match:
+-		b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt,
+-				  last);
+-		if (b < 0)
+-			goto out;
+-
+-		if (last) {
+-			if (__nft_set_elem_expired(&f->mt[b].e->ext, tstamp))
+-				goto next_match;
+-			if ((genmask &&
+-			     !nft_set_elem_active(&f->mt[b].e->ext, genmask)))
+-				goto next_match;
+-
+-			ret = f->mt[b].e;
+-			goto out;
+-		}
+-
+-		data += NFT_PIPAPO_GROUPS_PADDING(f);
++	struct nft_pipapo *priv = nft_set_priv(set);
++	const struct nft_pipapo_match *m;
++	const struct nft_pipapo_elem *e;
+ 
+-		/* Swap bitmap indices: fill_map will be the initial bitmap for
+-		 * the next field (i.e. the new res_map), and res_map is
+-		 * guaranteed to be all-zeroes at this point, ready to be filled
+-		 * according to the next mapping table.
+-		 */
+-		swap(res_map, fill_map);
+-	}
++	m = rcu_dereference(priv->match);
++	e = pipapo_get(m, (const u8 *)key, NFT_GENMASK_ANY, get_jiffies_64());
+ 
+-out:
+-	kfree(fill_map);
+-	kfree(res_map);
+-	return ret;
++	return e ? &e->ext : NULL;
+ }
+ 
+ /**
+@@ -606,6 +550,11 @@ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+  * @set:	nftables API set representation
+  * @elem:	nftables API element representation containing key data
+  * @flags:	Unused
++ *
++ * This function is called from the control plane path under
++ * RCU read lock.
++ *
++ * Return: set element private pointer or ERR_PTR(-ENOENT).
+  */
+ static struct nft_elem_priv *
+ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+@@ -615,11 +564,10 @@ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+ 	struct nft_pipapo_match *m = rcu_dereference(priv->match);
+ 	struct nft_pipapo_elem *e;
+ 
+-	e = pipapo_get(net, set, m, (const u8 *)elem->key.val.data,
+-		       nft_genmask_cur(net), get_jiffies_64(),
+-		       GFP_ATOMIC);
+-	if (IS_ERR(e))
+-		return ERR_CAST(e);
++	e = pipapo_get(m, (const u8 *)elem->key.val.data,
++		       nft_genmask_cur(net), get_jiffies_64());
++	if (!e)
++		return ERR_PTR(-ENOENT);
+ 
+ 	return &e->priv;
+ }
+@@ -1344,8 +1292,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	else
+ 		end = start;
+ 
+-	dup = pipapo_get(net, set, m, start, genmask, tstamp, GFP_KERNEL);
+-	if (!IS_ERR(dup)) {
++	dup = pipapo_get(m, start, genmask, tstamp);
++	if (dup) {
+ 		/* Check if we already have the same exact entry */
+ 		const struct nft_data *dup_key, *dup_end;
+ 
+@@ -1364,15 +1312,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 		return -ENOTEMPTY;
+ 	}
+ 
+-	if (PTR_ERR(dup) == -ENOENT) {
+-		/* Look for partially overlapping entries */
+-		dup = pipapo_get(net, set, m, end, nft_genmask_next(net), tstamp,
+-				 GFP_KERNEL);
+-	}
+-
+-	if (PTR_ERR(dup) != -ENOENT) {
+-		if (IS_ERR(dup))
+-			return PTR_ERR(dup);
++	/* Look for partially overlapping entries */
++	dup = pipapo_get(m, end, nft_genmask_next(net), tstamp);
++	if (dup) {
+ 		*elem_priv = &dup->priv;
+ 		return -ENOTEMPTY;
+ 	}
+@@ -1913,9 +1855,9 @@ nft_pipapo_deactivate(const struct net *net, const struct nft_set *set,
+ 	if (!m)
+ 		return NULL;
+ 
+-	e = pipapo_get(net, set, m, (const u8 *)elem->key.val.data,
+-		       nft_genmask_next(net), nft_net_tstamp(net), GFP_KERNEL);
+-	if (IS_ERR(e))
++	e = pipapo_get(m, (const u8 *)elem->key.val.data,
++		       nft_genmask_next(net), nft_net_tstamp(net));
++	if (!e)
+ 		return NULL;
+ 
+ 	nft_set_elem_change_active(net, set, &e->ext);
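+
For reference, a minimal sketch of the lookup contract the hunks above converge on: pipapo_get() now returns a plain element pointer or NULL on a miss, and only the nftables API boundary turns that miss into ERR_PTR(-ENOENT). The wrapper below is hypothetical (it mirrors nft_pipapo_get() as patched), shown only to make the convention explicit:

/* Hedged sketch, not part of the patch: helper names are the ones
 * used in the hunks above, the wrapper itself is hypothetical.
 */
static struct nft_elem_priv *
example_get(const struct net *net, const struct nft_set *set, const u8 *key)
{
	struct nft_pipapo *priv = nft_set_priv(set);
	struct nft_pipapo_match *m = rcu_dereference(priv->match);
	struct nft_pipapo_elem *e;

	e = pipapo_get(m, key, nft_genmask_cur(net), get_jiffies_64());
	if (!e)
		return ERR_PTR(-ENOENT);	/* miss: no IS_ERR()/ERR_CAST() needed */

	return &e->priv;
}

Compared with the removed variant, callers no longer have to separate a -ENOENT miss from a real allocation failure, because the per-lookup res_map/fill_map scratch allocations are gone.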
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index be7c16c79f711e..39e356c9687a98 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1146,26 +1146,27 @@ static inline void pipapo_resmap_init_avx2(const struct nft_pipapo_match *m, uns
+  *
+  * Return: true on match, false otherwise.
+  */
+-bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+-			    const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
++		       const u32 *key)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	const struct nft_set_ext *ext = NULL;
+ 	struct nft_pipapo_scratch *scratch;
+-	u8 genmask = nft_genmask_cur(net);
+ 	const struct nft_pipapo_match *m;
+ 	const struct nft_pipapo_field *f;
+ 	const u8 *rp = (const u8 *)key;
+ 	unsigned long *res, *fill;
+ 	bool map_index;
+-	int i, ret = 0;
++	int i;
+ 
+ 	local_bh_disable();
+ 
+ 	if (unlikely(!irq_fpu_usable())) {
+-		bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
++		ext = nft_pipapo_lookup(net, set, key);
+ 
+ 		local_bh_enable();
+-		return fallback_res;
++		return ext;
+ 	}
+ 
+ 	m = rcu_dereference(priv->match);
+@@ -1182,7 +1183,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	if (unlikely(!scratch)) {
+ 		kernel_fpu_end();
+ 		local_bh_enable();
+-		return false;
++		return NULL;
+ 	}
+ 
+ 	map_index = scratch->map_index;
+@@ -1197,6 +1198,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ next_match:
+ 	nft_pipapo_for_each_field(f, i, m) {
+ 		bool last = i == m->field_count - 1, first = !i;
++		int ret = 0;
+ 
+ #define NFT_SET_PIPAPO_AVX2_LOOKUP(b, n)				\
+ 		(ret = nft_pipapo_avx2_lookup_##b##b_##n(res, fill, f,	\
+@@ -1244,13 +1246,12 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 			goto out;
+ 
+ 		if (last) {
+-			*ext = &f->mt[ret].e->ext;
+-			if (unlikely(nft_set_elem_expired(*ext) ||
+-				     !nft_set_elem_active(*ext, genmask))) {
+-				ret = 0;
++			const struct nft_set_ext *e = &f->mt[ret].e->ext;
++
++			if (unlikely(nft_set_elem_expired(e)))
+ 				goto next_match;
+-			}
+ 
++			ext = e;
+ 			goto out;
+ 		}
+ 
+@@ -1264,5 +1265,5 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	kernel_fpu_end();
+ 	local_bh_enable();
+ 
+-	return ret >= 0;
++	return ext;
+ }
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 2e8ef16ff191d4..b1f04168ec9377 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -52,9 +52,9 @@ static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe)
+ 	return nft_set_elem_expired(&rbe->ext);
+ }
+ 
+-static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+-				const u32 *key, const struct nft_set_ext **ext,
+-				unsigned int seq)
++static const struct nft_set_ext *
++__nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++		    const u32 *key, unsigned int seq)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	const struct nft_rbtree_elem *rbe, *interval = NULL;
+@@ -65,7 +65,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 	parent = rcu_dereference_raw(priv->root.rb_node);
+ 	while (parent != NULL) {
+ 		if (read_seqcount_retry(&priv->count, seq))
+-			return false;
++			return NULL;
+ 
+ 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+ 
+@@ -77,7 +77,9 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 			    nft_rbtree_interval_end(rbe) &&
+ 			    nft_rbtree_interval_start(interval))
+ 				continue;
+-			interval = rbe;
++			if (nft_set_elem_active(&rbe->ext, genmask) &&
++			    !nft_rbtree_elem_expired(rbe))
++				interval = rbe;
+ 		} else if (d > 0)
+ 			parent = rcu_dereference_raw(parent->rb_right);
+ 		else {
+@@ -87,50 +89,46 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 			}
+ 
+ 			if (nft_rbtree_elem_expired(rbe))
+-				return false;
++				return NULL;
+ 
+ 			if (nft_rbtree_interval_end(rbe)) {
+ 				if (nft_set_is_anonymous(set))
+-					return false;
++					return NULL;
+ 				parent = rcu_dereference_raw(parent->rb_left);
+ 				interval = NULL;
+ 				continue;
+ 			}
+ 
+-			*ext = &rbe->ext;
+-			return true;
++			return &rbe->ext;
+ 		}
+ 	}
+ 
+ 	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
+-	    nft_set_elem_active(&interval->ext, genmask) &&
+-	    !nft_rbtree_elem_expired(interval) &&
+-	    nft_rbtree_interval_start(interval)) {
+-		*ext = &interval->ext;
+-		return true;
+-	}
++	    nft_rbtree_interval_start(interval))
++		return &interval->ext;
+ 
+-	return false;
++	return NULL;
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	unsigned int seq = read_seqcount_begin(&priv->count);
+-	bool ret;
++	const struct nft_set_ext *ext;
+ 
+-	ret = __nft_rbtree_lookup(net, set, key, ext, seq);
+-	if (ret || !read_seqcount_retry(&priv->count, seq))
+-		return ret;
++	ext = __nft_rbtree_lookup(net, set, key, seq);
++	if (ext || !read_seqcount_retry(&priv->count, seq))
++		return ext;
+ 
+ 	read_lock_bh(&priv->lock);
+ 	seq = read_seqcount_begin(&priv->count);
+-	ret = __nft_rbtree_lookup(net, set, key, ext, seq);
++	ext = __nft_rbtree_lookup(net, set, key, seq);
+ 	read_unlock_bh(&priv->lock);
+ 
+-	return ret;
++	return ext;
+ }
+ 
+ static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
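+
The rbtree conversion above keeps the usual seqcount read pattern: walk the tree locklessly, trust any hit, and retry under the reader lock only when a miss coincides with a concurrent writer. A condensed sketch of that shape, with hypothetical names standing in for the nft_rbtree types:

#include <linux/seqlock.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct toy_tree {
	seqcount_t count;	/* bumped by writers */
	rwlock_t lock;		/* taken by the slow path */
};

/* stand-in for __nft_rbtree_lookup(); the actual walk is omitted */
static const void *toy_walk(struct toy_tree *t, const u32 *key,
			    unsigned int seq);

static const void *toy_lookup(struct toy_tree *t, const u32 *key)
{
	unsigned int seq = read_seqcount_begin(&t->count);
	const void *hit = toy_walk(t, key, seq);

	/* a hit is valid as-is; a miss counts only if no writer raced */
	if (hit || !read_seqcount_retry(&t->count, seq))
		return hit;

	read_lock_bh(&t->lock);		/* slow path: writers excluded */
	seq = read_seqcount_begin(&t->count);
	hit = toy_walk(t, key, seq);
	read_unlock_bh(&t->lock);
	return hit;
}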
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 104732d3454348..978c129c609501 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1836,6 +1836,9 @@ static int genl_bind(struct net *net, int group)
+ 		    !ns_capable(net->user_ns, CAP_SYS_ADMIN))
+ 			ret = -EPERM;
+ 
++		if (ret)
++			break;
++
+ 		if (family->bind)
+ 			family->bind(i);
+ 
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 73bc39281ef5f5..9b45fbdc90cabe 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -276,8 +276,6 @@ EXPORT_SYMBOL_GPL(rpc_destroy_wait_queue);
+ 
+ static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode)
+ {
+-	if (unlikely(current->flags & PF_EXITING))
+-		return -EINTR;
+ 	schedule();
+ 	if (signal_pending_state(mode, current))
+ 		return -ERESTARTSYS;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index c5f7bbf5775ff8..3aa987e7f0724d 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -407,9 +407,9 @@ xs_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags, int flags)
+ 	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
+ 		      alert_kvec.iov_len);
+ 	ret = sock_recvmsg(sock, &msg, flags);
+-	if (ret > 0 &&
+-	    tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
+-		iov_iter_revert(&msg.msg_iter, ret);
++	if (ret > 0) {
++		if (tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT)
++			iov_iter_revert(&msg.msg_iter, ret);
+ 		ret = xs_sock_process_cmsg(sock, &msg, msg_flags, &u.cmsg,
+ 					   -EAGAIN);
+ 	}
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 72c000c0ae5f57..de331541fdb387 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -36,6 +36,20 @@
+ #define TX_BATCH_SIZE 32
+ #define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE)
+ 
++struct xsk_addr_node {
++	u64 addr;
++	struct list_head addr_node;
++};
++
++struct xsk_addr_head {
++	u32 num_descs;
++	struct list_head addrs_list;
++};
++
++static struct kmem_cache *xsk_tx_generic_cache;
++
++#define XSKCB(skb) ((struct xsk_addr_head *)((skb)->cb))
++
+ void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool)
+ {
+ 	if (pool->cached_need_wakeup & XDP_WAKEUP_RX)
+@@ -528,24 +542,43 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
+ 	return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+ }
+ 
+-static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, u64 addr)
++static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
+ {
+ 	unsigned long flags;
+ 	int ret;
+ 
+ 	spin_lock_irqsave(&pool->cq_lock, flags);
+-	ret = xskq_prod_reserve_addr(pool->cq, addr);
++	ret = xskq_prod_reserve(pool->cq);
+ 	spin_unlock_irqrestore(&pool->cq_lock, flags);
+ 
+ 	return ret;
+ }
+ 
+-static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, u32 n)
++static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
++				      struct sk_buff *skb)
+ {
++	struct xsk_addr_node *pos, *tmp;
++	u32 descs_processed = 0;
+ 	unsigned long flags;
++	u32 idx;
+ 
+ 	spin_lock_irqsave(&pool->cq_lock, flags);
+-	xskq_prod_submit_n(pool->cq, n);
++	idx = xskq_get_prod(pool->cq);
++
++	xskq_prod_write_addr(pool->cq, idx,
++			     (u64)(uintptr_t)skb_shinfo(skb)->destructor_arg);
++	descs_processed++;
++
++	if (unlikely(XSKCB(skb)->num_descs > 1)) {
++		list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
++			xskq_prod_write_addr(pool->cq, idx + descs_processed,
++					     pos->addr);
++			descs_processed++;
++			list_del(&pos->addr_node);
++			kmem_cache_free(xsk_tx_generic_cache, pos);
++		}
++	}
++	xskq_prod_submit_n(pool->cq, descs_processed);
+ 	spin_unlock_irqrestore(&pool->cq_lock, flags);
+ }
+ 
+@@ -558,9 +591,14 @@ static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
+ 	spin_unlock_irqrestore(&pool->cq_lock, flags);
+ }
+ 
++static void xsk_inc_num_desc(struct sk_buff *skb)
++{
++	XSKCB(skb)->num_descs++;
++}
++
+ static u32 xsk_get_num_desc(struct sk_buff *skb)
+ {
+-	return skb ? (long)skb_shinfo(skb)->destructor_arg : 0;
++	return XSKCB(skb)->num_descs;
+ }
+ 
+ static void xsk_destruct_skb(struct sk_buff *skb)
+@@ -572,23 +610,33 @@ static void xsk_destruct_skb(struct sk_buff *skb)
+ 		*compl->tx_timestamp = ktime_get_tai_fast_ns();
+ 	}
+ 
+-	xsk_cq_submit_locked(xdp_sk(skb->sk)->pool, xsk_get_num_desc(skb));
++	xsk_cq_submit_addr_locked(xdp_sk(skb->sk)->pool, skb);
+ 	sock_wfree(skb);
+ }
+ 
+-static void xsk_set_destructor_arg(struct sk_buff *skb)
++static void xsk_set_destructor_arg(struct sk_buff *skb, u64 addr)
+ {
+-	long num = xsk_get_num_desc(xdp_sk(skb->sk)->skb) + 1;
+-
+-	skb_shinfo(skb)->destructor_arg = (void *)num;
++	BUILD_BUG_ON(sizeof(struct xsk_addr_head) > sizeof(skb->cb));
++	INIT_LIST_HEAD(&XSKCB(skb)->addrs_list);
++	XSKCB(skb)->num_descs = 0;
++	skb_shinfo(skb)->destructor_arg = (void *)(uintptr_t)addr;
+ }
+ 
+ static void xsk_consume_skb(struct sk_buff *skb)
+ {
+ 	struct xdp_sock *xs = xdp_sk(skb->sk);
++	u32 num_descs = xsk_get_num_desc(skb);
++	struct xsk_addr_node *pos, *tmp;
++
++	if (unlikely(num_descs > 1)) {
++		list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
++			list_del(&pos->addr_node);
++			kmem_cache_free(xsk_tx_generic_cache, pos);
++		}
++	}
+ 
+ 	skb->destructor = sock_wfree;
+-	xsk_cq_cancel_locked(xs->pool, xsk_get_num_desc(skb));
++	xsk_cq_cancel_locked(xs->pool, num_descs);
+ 	/* Free skb without triggering the perf drop trace */
+ 	consume_skb(skb);
+ 	xs->skb = NULL;
+@@ -605,6 +653,7 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
+ {
+ 	struct xsk_buff_pool *pool = xs->pool;
+ 	u32 hr, len, ts, offset, copy, copied;
++	struct xsk_addr_node *xsk_addr;
+ 	struct sk_buff *skb = xs->skb;
+ 	struct page *page;
+ 	void *buffer;
+@@ -619,6 +668,19 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
+ 			return ERR_PTR(err);
+ 
+ 		skb_reserve(skb, hr);
++
++		xsk_set_destructor_arg(skb, desc->addr);
++	} else {
++		xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL);
++		if (!xsk_addr)
++			return ERR_PTR(-ENOMEM);
++
++		/* in case of -EOVERFLOW that could happen below,
++		 * xsk_consume_skb() will release this node as whole skb
++		 * would be dropped, which implies freeing all list elements
++		 */
++		xsk_addr->addr = desc->addr;
++		list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list);
+ 	}
+ 
+ 	addr = desc->addr;
+@@ -690,8 +752,11 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 			err = skb_store_bits(skb, 0, buffer, len);
+ 			if (unlikely(err))
+ 				goto free_err;
++
++			xsk_set_destructor_arg(skb, desc->addr);
+ 		} else {
+ 			int nr_frags = skb_shinfo(skb)->nr_frags;
++			struct xsk_addr_node *xsk_addr;
+ 			struct page *page;
+ 			u8 *vaddr;
+ 
+@@ -706,12 +771,22 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 				goto free_err;
+ 			}
+ 
++			xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL);
++			if (!xsk_addr) {
++				__free_page(page);
++				err = -ENOMEM;
++				goto free_err;
++			}
++
+ 			vaddr = kmap_local_page(page);
+ 			memcpy(vaddr, buffer, len);
+ 			kunmap_local(vaddr);
+ 
+ 			skb_add_rx_frag(skb, nr_frags, page, 0, len, PAGE_SIZE);
+ 			refcount_add(PAGE_SIZE, &xs->sk.sk_wmem_alloc);
++
++			xsk_addr->addr = desc->addr;
++			list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list);
+ 		}
+ 
+ 		if (first_frag && desc->options & XDP_TX_METADATA) {
+@@ -755,7 +830,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 	skb->mark = READ_ONCE(xs->sk.sk_mark);
+ 	skb->destructor = xsk_destruct_skb;
+ 	xsk_tx_metadata_to_compl(meta, &skb_shinfo(skb)->xsk_meta);
+-	xsk_set_destructor_arg(skb);
++	xsk_inc_num_desc(skb);
+ 
+ 	return skb;
+ 
+@@ -765,7 +840,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 
+ 	if (err == -EOVERFLOW) {
+ 		/* Drop the packet */
+-		xsk_set_destructor_arg(xs->skb);
++		xsk_inc_num_desc(xs->skb);
+ 		xsk_drop_skb(xs->skb);
+ 		xskq_cons_release(xs->tx);
+ 	} else {
+@@ -807,7 +882,7 @@ static int __xsk_generic_xmit(struct sock *sk)
+ 		 * if there is space in it. This avoids having to implement
+ 		 * any buffering in the Tx path.
+ 		 */
+-		err = xsk_cq_reserve_addr_locked(xs->pool, desc.addr);
++		err = xsk_cq_reserve_locked(xs->pool);
+ 		if (err) {
+ 			err = -EAGAIN;
+ 			goto out;
+@@ -1795,8 +1870,18 @@ static int __init xsk_init(void)
+ 	if (err)
+ 		goto out_pernet;
+ 
++	xsk_tx_generic_cache = kmem_cache_create("xsk_generic_xmit_cache",
++						 sizeof(struct xsk_addr_node),
++						 0, SLAB_HWCACHE_ALIGN, NULL);
++	if (!xsk_tx_generic_cache) {
++		err = -ENOMEM;
++		goto out_unreg_notif;
++	}
++
+ 	return 0;
+ 
++out_unreg_notif:
++	unregister_netdevice_notifier(&xsk_netdev_notifier);
+ out_pernet:
+ 	unregister_pernet_subsys(&xsk_net_ops);
+ out_sk:
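+
The xsk rework above leans on the skb control buffer: skb->cb is a fixed 48-byte scratch area owned by whichever layer currently holds the skb, XSKCB() overlays a private header on it, and the BUILD_BUG_ON() in xsk_set_destructor_arg() proves at compile time that the header fits. The same pattern in miniature, with hypothetical names:

#include <linux/build_bug.h>
#include <linux/list.h>
#include <linux/skbuff.h>

struct toy_cb {
	u32 num_descs;
	struct list_head addrs_list;
};

#define TOY_CB(skb) ((struct toy_cb *)((skb)->cb))

static void toy_cb_init(struct sk_buff *skb)
{
	/* refuse to build if the private header outgrows skb->cb */
	BUILD_BUG_ON(sizeof(struct toy_cb) > sizeof(skb->cb));
	TOY_CB(skb)->num_descs = 0;
	INIT_LIST_HEAD(&TOY_CB(skb)->addrs_list);
}

The list head is what lets a multi-buffer skb carry one completion address per descriptor: the first address is parked in destructor_arg and the rest live in xsk_addr_node entries from the new kmem_cache.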
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 46d87e961ad6d3..f16f390370dc43 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -344,6 +344,11 @@ static inline u32 xskq_cons_present_entries(struct xsk_queue *q)
+ 
+ /* Functions for producers */
+ 
++static inline u32 xskq_get_prod(struct xsk_queue *q)
++{
++	return READ_ONCE(q->ring->producer);
++}
++
+ static inline u32 xskq_prod_nb_free(struct xsk_queue *q, u32 max)
+ {
+ 	u32 free_entries = q->nentries - (q->cached_prod - q->cached_cons);
+@@ -390,6 +395,13 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
+ 	return 0;
+ }
+ 
++static inline void xskq_prod_write_addr(struct xsk_queue *q, u32 idx, u64 addr)
++{
++	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
++
++	ring->desc[idx & q->ring_mask] = addr;
++}
++
+ static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
+ 					      u32 nb_entries)
+ {
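+
The xskq_get_prod()/xskq_prod_write_addr() pair above splits reservation from publication: slots are reserved up front with xskq_prod_reserve(), filled at an arbitrary reserved index on completion, and published in one go with xskq_prod_submit_n(). Wrap-around costs only a mask because the ring size is a power of two. A toy version of the write step, with a hypothetical ring type:

#include <linux/types.h>

struct toy_ring {
	u32 prod;	/* free-running producer index */
	u32 mask;	/* nentries - 1, nentries a power of two */
	u64 desc[];	/* descriptor storage */
};

/* write an address into a previously reserved slot */
static inline void toy_ring_write(struct toy_ring *r, u32 idx, u64 addr)
{
	r->desc[idx & r->mask] = addr;	/* wrap-around is just the mask */
}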
+diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c
+index cfea7a38befb05..da3a9f2091f55b 100644
+--- a/samples/ftrace/ftrace-direct-modify.c
++++ b/samples/ftrace/ftrace-direct-modify.c
+@@ -75,8 +75,8 @@ asm (
+ 	CALL_DEPTH_ACCOUNT
+ "	call my_direct_func1\n"
+ "	leave\n"
+-"	.size		my_tramp1, .-my_tramp1\n"
+ 	ASM_RET
++"	.size		my_tramp1, .-my_tramp1\n"
+ 
+ "	.type		my_tramp2, @function\n"
+ "	.globl		my_tramp2\n"
+diff --git a/tools/testing/selftests/net/can/config b/tools/testing/selftests/net/can/config
+new file mode 100644
+index 00000000000000..188f7979667097
+--- /dev/null
++++ b/tools/testing/selftests/net/can/config
+@@ -0,0 +1,3 @@
++CONFIG_CAN=m
++CONFIG_CAN_DEV=m
++CONFIG_CAN_VCAN=m

diff --git a/2991_libbpf_add_WERROR_option.patch b/2991_libbpf_add_WERROR_option.patch
index e8649909..39d485f9 100644
--- a/2991_libbpf_add_WERROR_option.patch
+++ b/2991_libbpf_add_WERROR_option.patch
@@ -1,17 +1,6 @@
 Subject: [PATCH] tools/libbpf: add WERROR option
-Date: Sat,  5 Jul 2025 11:43:12 +0100
-Message-ID: <7e6c41e47c6a8ab73945e6aac319e0dd53337e1b.1751712192.git.sam@gentoo.org>
-X-Mailer: git-send-email 2.50.0
-Precedence: bulk
-X-Mailing-List: bpf@vger.kernel.org
-List-Id: <bpf.vger.kernel.org>
-List-Subscribe: <mailto:bpf+subscribe@vger.kernel.org>
-List-Unsubscribe: <mailto:bpf+unsubscribe@vger.kernel.org>
-MIME-Version: 1.0
-Content-Transfer-Encoding: 8bit
 
 Check the 'WERROR' variable and suppress adding '-Werror' if WERROR=0.
-
 This mirrors what tools/perf and other directories in tools do to handle
 -Werror rather than adding it unconditionally.
 



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-20  5:31 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-20  5:31 UTC (permalink / raw
  To: gentoo-commits

commit:     ed778b98554854cb8623e3d4be53af4346f1139a
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 20 05:30:54 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Sep 20 05:30:54 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ed778b98

Remove 1801_proc_fix_type_confusion_in_pde_set_flags.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 ---
 ..._proc_fix_type_confusion_in_pde_set_flags.patch | 40 ----------------------
 2 files changed, 44 deletions(-)

diff --git a/0000_README b/0000_README
index fb6f961a..b72d6630 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1801_proc_fix_type_confusion_in_pde_set_flags.patch
-From:   https://lore.kernel.org/linux-fsdevel/20250904135715.3972782-1-wangzijie1@honor.com/
-Desc:   proc: fix type confusion in pde_set_flags()
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1801_proc_fix_type_confusion_in_pde_set_flags.patch b/1801_proc_fix_type_confusion_in_pde_set_flags.patch
deleted file mode 100644
index 4777dbdc..00000000
--- a/1801_proc_fix_type_confusion_in_pde_set_flags.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-Subject: [PATCH] proc: fix type confusion in pde_set_flags()
-
-Commit 2ce3d282bd50 ("proc: fix missing pde_set_flags() for net proc files")
-missed a key part in the definition of proc_dir_entry:
-
-union {
-	const struct proc_ops *proc_ops;
-	const struct file_operations *proc_dir_ops;
-};
-
-So dereferencing ->proc_ops as if it were always a proc_ops structure results
-in type confusion and makes the NULL check for 'proc_ops' not work for proc dirs.
-
-Add !S_ISDIR(dp->mode) test before calling pde_set_flags() to fix it.
-
-Fixes: 2ce3d282bd50 ("proc: fix missing pde_set_flags() for net proc files")
-Reported-by: Brad Spengler <spender@grsecurity.net>
-Signed-off-by: wangzijie <wangzijie1@honor.com>
----
- fs/proc/generic.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/fs/proc/generic.c b/fs/proc/generic.c
-index bd0c099cf..176281112 100644
---- a/fs/proc/generic.c
-+++ b/fs/proc/generic.c
-@@ -393,7 +393,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
- 	if (proc_alloc_inum(&dp->low_ino))
- 		goto out_free_entry;
- 
--	pde_set_flags(dp);
-+	if (!S_ISDIR(dp->mode))
-+		pde_set_flags(dp);
- 
- 	write_lock(&proc_subdir_lock);
- 	dp->parent = dir;
--- 
-2.25.1
-
-



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-20  6:29 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-20  6:29 UTC (permalink / raw
  To: gentoo-commits

commit:     0b22324043ed27b93c07c828d4d00963d46f73b0
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 20 05:30:54 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Sep 20 06:28:04 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0b223240

Remove 1801_proc_fix_type_confusion_in_pde_set_flags.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 ---
 ..._proc_fix_type_confusion_in_pde_set_flags.patch | 40 ----------------------
 2 files changed, 44 deletions(-)

diff --git a/0000_README b/0000_README
index fb6f961a..b72d6630 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch:  1730_parisc-Disable-prctl.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
 Desc:   prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
 
-Patch:  1801_proc_fix_type_confusion_in_pde_set_flags.patch
-From:   https://lore.kernel.org/linux-fsdevel/20250904135715.3972782-1-wangzijie1@honor.com/
-Desc:   proc: fix type confusion in pde_set_flags()
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1801_proc_fix_type_confusion_in_pde_set_flags.patch b/1801_proc_fix_type_confusion_in_pde_set_flags.patch
deleted file mode 100644
index 4777dbdc..00000000
--- a/1801_proc_fix_type_confusion_in_pde_set_flags.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-Subject: [PATCH] proc: fix type confusion in pde_set_flags()
-
-Commit 2ce3d282bd50 ("proc: fix missing pde_set_flags() for net proc files")
-missed a key part in the definition of proc_dir_entry:
-
-union {
-	const struct proc_ops *proc_ops;
-	const struct file_operations *proc_dir_ops;
-};
-
-So dereferencing ->proc_ops as if it were always a proc_ops structure results
-in type confusion and makes the NULL check for 'proc_ops' not work for proc dirs.
-
-Add !S_ISDIR(dp->mode) test before calling pde_set_flags() to fix it.
-
-Fixes: 2ce3d282bd50 ("proc: fix missing pde_set_flags() for net proc files")
-Reported-by: Brad Spengler <spender@grsecurity.net>
-Signed-off-by: wangzijie <wangzijie1@honor.com>
----
- fs/proc/generic.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/fs/proc/generic.c b/fs/proc/generic.c
-index bd0c099cf..176281112 100644
---- a/fs/proc/generic.c
-+++ b/fs/proc/generic.c
-@@ -393,7 +393,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
- 	if (proc_alloc_inum(&dp->low_ino))
- 		goto out_free_entry;
- 
--	pde_set_flags(dp);
-+	if (!S_ISDIR(dp->mode))
-+		pde_set_flags(dp);
- 
- 	write_lock(&proc_subdir_lock);
- 	dp->parent = dir;
--- 
-2.25.1
-
-



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-20  6:29 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-20  6:29 UTC (permalink / raw
  To: gentoo-commits

commit:     ae358267386f5d34de7745cb3b7087e42d2342cf
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 20 05:25:20 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Sep 20 06:27:46 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ae358267

Linux patch 6.16.8

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |     4 +
 1007_linux-6.16.8.patch | 11381 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11385 insertions(+)

diff --git a/0000_README b/0000_README
index 33049ae5..fb6f961a 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.16.7.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.7
 
+Patch:  1007_linux-6.16.8.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.8
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1007_linux-6.16.8.patch b/1007_linux-6.16.8.patch
new file mode 100644
index 00000000..81ef3369
--- /dev/null
+++ b/1007_linux-6.16.8.patch
@@ -0,0 +1,11381 @@
+diff --git a/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml b/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
+index 89c462653e2d33..8cc848ae11cb73 100644
+--- a/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
++++ b/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
+@@ -41,7 +41,7 @@ properties:
+           - const: dma_intr2
+ 
+   clocks:
+-    minItems: 1
++    maxItems: 1
+ 
+   clock-names:
+     const: sw_baud
+diff --git a/Documentation/netlink/specs/mptcp_pm.yaml b/Documentation/netlink/specs/mptcp_pm.yaml
+index fb57860fe778c6..ecfe5ee33de2d8 100644
+--- a/Documentation/netlink/specs/mptcp_pm.yaml
++++ b/Documentation/netlink/specs/mptcp_pm.yaml
+@@ -256,7 +256,7 @@ attribute-sets:
+         type: u32
+       -
+         name: if-idx
+-        type: u32
++        type: s32
+       -
+         name: reset-reason
+         type: u32
+diff --git a/Documentation/networking/can.rst b/Documentation/networking/can.rst
+index b018ce34639265..515a3876f58cfd 100644
+--- a/Documentation/networking/can.rst
++++ b/Documentation/networking/can.rst
+@@ -742,7 +742,7 @@ The broadcast manager sends responses to user space in the same form:
+             struct timeval ival1, ival2;    /* count and subsequent interval */
+             canid_t can_id;                 /* unique can_id for task */
+             __u32 nframes;                  /* number of can_frames following */
+-            struct can_frame frames[0];
++            struct can_frame frames[];
+     };
+ 
+ The aligned payload 'frames' uses the same basic CAN frame structure defined
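+
The frames[0] to frames[] change in the quoted can.rst hunk swaps a GNU zero-length array for a C99 flexible array member: the struct's size excludes the trailing array, and the caller allocates the header and the payload in one block. A hedged userspace-style sketch (names hypothetical; struct can_frame comes from linux/can.h):

#include <stdlib.h>
#include <linux/can.h>

struct toy_msg_head {
	unsigned int nframes;
	struct can_frame frames[];	/* flexible array member */
};

/* one allocation covering the head plus n trailing frames */
static struct toy_msg_head *toy_msg_alloc(unsigned int n)
{
	return malloc(sizeof(struct toy_msg_head) +
		      n * sizeof(struct can_frame));
}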
+diff --git a/Documentation/networking/mptcp.rst b/Documentation/networking/mptcp.rst
+index 17f2bab6116447..2e31038d646205 100644
+--- a/Documentation/networking/mptcp.rst
++++ b/Documentation/networking/mptcp.rst
+@@ -60,10 +60,10 @@ address announcements. Typically, it is the client side that initiates subflows,
+ and the server side that announces additional addresses via the ``ADD_ADDR`` and
+ ``REMOVE_ADDR`` options.
+ 
+-Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see
+-mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the
+-same rules are applied for all the connections (see: ``ip mptcp``) ; and the
+-userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd
++Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob --
++see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where
++the same rules are applied for all the connections (see: ``ip mptcp``) ; and the
++userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd
+ <https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each
+ connection. The path managers can be controlled via a Netlink API; see
+ netlink_spec/mptcp_pm.rst.
+diff --git a/Makefile b/Makefile
+index 86359283ccc9a9..7594f35cbc2a5a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
+index af1ca875c52ce2..410060ebd86dfd 100644
+--- a/arch/arm64/kernel/machine_kexec_file.c
++++ b/arch/arm64/kernel/machine_kexec_file.c
+@@ -94,7 +94,7 @@ int load_other_segments(struct kimage *image,
+ 			char *initrd, unsigned long initrd_len,
+ 			char *cmdline)
+ {
+-	struct kexec_buf kbuf;
++	struct kexec_buf kbuf = {};
+ 	void *dtb = NULL;
+ 	unsigned long initrd_load_addr = 0, dtb_len,
+ 		      orig_segments = image->nr_segments;
+diff --git a/arch/s390/kernel/kexec_elf.c b/arch/s390/kernel/kexec_elf.c
+index 4d364de4379921..143e34a4eca57c 100644
+--- a/arch/s390/kernel/kexec_elf.c
++++ b/arch/s390/kernel/kexec_elf.c
+@@ -16,7 +16,7 @@
+ static int kexec_file_add_kernel_elf(struct kimage *image,
+ 				     struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	const Elf_Ehdr *ehdr;
+ 	const Elf_Phdr *phdr;
+ 	Elf_Addr entry;
+diff --git a/arch/s390/kernel/kexec_image.c b/arch/s390/kernel/kexec_image.c
+index a32ce8bea745cf..9a439175723cad 100644
+--- a/arch/s390/kernel/kexec_image.c
++++ b/arch/s390/kernel/kexec_image.c
+@@ -16,7 +16,7 @@
+ static int kexec_file_add_kernel_image(struct kimage *image,
+ 				       struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 
+ 	buf.image = image;
+ 
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index c2bac14dd668ae..a36d7311c6683b 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -129,7 +129,7 @@ static int kexec_file_update_purgatory(struct kimage *image,
+ static int kexec_file_add_purgatory(struct kimage *image,
+ 				    struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	int ret;
+ 
+ 	buf.image = image;
+@@ -152,7 +152,7 @@ static int kexec_file_add_purgatory(struct kimage *image,
+ static int kexec_file_add_initrd(struct kimage *image,
+ 				 struct s390_load_data *data)
+ {
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	int ret;
+ 
+ 	buf.image = image;
+@@ -184,7 +184,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ {
+ 	__u32 *lc_ipl_parmblock_ptr;
+ 	unsigned int len, ncerts;
+-	struct kexec_buf buf;
++	struct kexec_buf buf = {};
+ 	unsigned long addr;
+ 	void *ptr, *end;
+ 	int ret;
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index 6a262e198e35ec..952cc8d103693f 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -761,8 +761,6 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
+ 		break;
+ 
+ 	case PERF_TYPE_HARDWARE:
+-		if (is_sampling_event(event))	/* No sampling support */
+-			return -ENOENT;
+ 		ev = attr->config;
+ 		if (!attr->exclude_user && attr->exclude_kernel) {
+ 			/*
+@@ -860,6 +858,8 @@ static int cpumf_pmu_event_init(struct perf_event *event)
+ 	unsigned int type = event->attr.type;
+ 	int err = -ENOENT;
+ 
++	if (is_sampling_event(event))	/* No sampling support */
++		return err;
+ 	if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
+ 		err = __hw_perf_event_init(event, type);
+ 	else if (event->pmu->type == type)
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index 63875270941bc4..01cc6493367a46 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -286,10 +286,10 @@ static int paicrypt_event_init(struct perf_event *event)
+ 	/* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
+ 	if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
+ 		return -ENOENT;
+-	/* PAI crypto event must be in valid range */
++	/* PAI crypto event must be in valid range, try others if not */
+ 	if (a->config < PAI_CRYPTO_BASE ||
+ 	    a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
+-		return -EINVAL;
++		return -ENOENT;
+ 	/* Allow only CRYPTO_ALL for sampling */
+ 	if (a->sample_period && a->config != PAI_CRYPTO_BASE)
+ 		return -EINVAL;
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index fd14d5ebccbca0..d65a9730753c55 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -266,7 +266,7 @@ static int paiext_event_valid(struct perf_event *event)
+ 		event->hw.config_base = offsetof(struct paiext_cb, acc);
+ 		return 0;
+ 	}
+-	return -EINVAL;
++	return -ENOENT;
+ }
+ 
+ /* Might be called on different CPU than the one the event is intended for. */
+diff --git a/arch/x86/kernel/cpu/topology_amd.c b/arch/x86/kernel/cpu/topology_amd.c
+index 827dd0dbb6e9d2..c79ebbb639cbff 100644
+--- a/arch/x86/kernel/cpu/topology_amd.c
++++ b/arch/x86/kernel/cpu/topology_amd.c
+@@ -175,27 +175,30 @@ static void topoext_fixup(struct topo_scan *tscan)
+ 
+ static void parse_topology_amd(struct topo_scan *tscan)
+ {
+-	bool has_topoext = false;
+-
+ 	/*
+-	 * If the extended topology leaf 0x8000_001e is available
+-	 * try to get SMT, CORE, TILE, and DIE shifts from extended
++	 * Try to get SMT, CORE, TILE, and DIE shifts from extended
+ 	 * CPUID leaf 0x8000_0026 on supported processors first. If
+ 	 * extended CPUID leaf 0x8000_0026 is not supported, try to
+-	 * get SMT and CORE shift from leaf 0xb first, then try to
+-	 * get the CORE shift from leaf 0x8000_0008.
++	 * get SMT and CORE shift from leaf 0xb. If either leaf is
++	 * available, cpu_parse_topology_ext() will return true.
+ 	 */
+-	if (cpu_feature_enabled(X86_FEATURE_TOPOEXT))
+-		has_topoext = cpu_parse_topology_ext(tscan);
++	bool has_xtopology = cpu_parse_topology_ext(tscan);
+ 
+ 	if (cpu_feature_enabled(X86_FEATURE_AMD_HTR_CORES))
+ 		tscan->c->topo.cpu_type = cpuid_ebx(0x80000026);
+ 
+-	if (!has_topoext && !parse_8000_0008(tscan))
++	/*
++	 * If XTOPOLOGY leaves (0x26/0xb) are not available, try to
++	 * get the CORE shift from leaf 0x8000_0008 first.
++	 */
++	if (!has_xtopology && !parse_8000_0008(tscan))
+ 		return;
+ 
+-	/* Prefer leaf 0x8000001e if available */
+-	if (parse_8000_001e(tscan, has_topoext))
++	/*
++	 * Prefer leaf 0x8000001e if available to get the SMT shift and
++	 * the initial APIC ID if XTOPOLOGY leaves are not available.
++	 */
++	if (parse_8000_001e(tscan, has_xtopology))
+ 		return;
+ 
+ 	/* Try the NODEID MSR */
+diff --git a/block/fops.c b/block/fops.c
+index 1309861d4c2c4b..d62fbefb2e6712 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -7,6 +7,7 @@
+ #include <linux/init.h>
+ #include <linux/mm.h>
+ #include <linux/blkdev.h>
++#include <linux/blk-integrity.h>
+ #include <linux/buffer_head.h>
+ #include <linux/mpage.h>
+ #include <linux/uio.h>
+@@ -54,7 +55,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
+ 	struct bio bio;
+ 	ssize_t ret;
+ 
+-	WARN_ON_ONCE(iocb->ki_flags & IOCB_HAS_METADATA);
+ 	if (nr_pages <= DIO_INLINE_BIO_VECS)
+ 		vecs = inline_vecs;
+ 	else {
+@@ -131,7 +131,7 @@ static void blkdev_bio_end_io(struct bio *bio)
+ 	if (bio->bi_status && !dio->bio.bi_status)
+ 		dio->bio.bi_status = bio->bi_status;
+ 
+-	if (!is_sync && (dio->iocb->ki_flags & IOCB_HAS_METADATA))
++	if (bio_integrity(bio))
+ 		bio_integrity_unmap_user(bio);
+ 
+ 	if (atomic_dec_and_test(&dio->ref)) {
+@@ -233,7 +233,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
+ 			}
+ 			bio->bi_opf |= REQ_NOWAIT;
+ 		}
+-		if (!is_sync && (iocb->ki_flags & IOCB_HAS_METADATA)) {
++		if (iocb->ki_flags & IOCB_HAS_METADATA) {
+ 			ret = bio_integrity_map_iter(bio, iocb->private);
+ 			if (unlikely(ret))
+ 				goto fail;
+@@ -301,7 +301,7 @@ static void blkdev_bio_end_io_async(struct bio *bio)
+ 		ret = blk_status_to_errno(bio->bi_status);
+ 	}
+ 
+-	if (iocb->ki_flags & IOCB_HAS_METADATA)
++	if (bio_integrity(bio))
+ 		bio_integrity_unmap_user(bio);
+ 
+ 	iocb->ki_complete(iocb, ret);
+@@ -422,7 +422,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	}
+ 
+ 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
+-	if (likely(nr_pages <= BIO_MAX_VECS)) {
++	if (likely(nr_pages <= BIO_MAX_VECS &&
++		   !(iocb->ki_flags & IOCB_HAS_METADATA))) {
+ 		if (is_sync_kiocb(iocb))
+ 			return __blkdev_direct_IO_simple(iocb, iter, bdev,
+ 							nr_pages);
+@@ -672,6 +673,8 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+ 
+ 	if (bdev_can_atomic_write(bdev))
+ 		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
++	if (blk_get_integrity(bdev->bd_disk))
++		filp->f_mode |= FMODE_HAS_METADATA;
+ 
+ 	ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
+ 	if (ret)
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index f3477ab377425f..e9aaf72502e51a 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1547,13 +1547,15 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+ 	pr_debug("CPU %d exiting\n", policy->cpu);
+ }
+ 
+-static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
++static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy, bool policy_change)
+ {
+ 	struct amd_cpudata *cpudata = policy->driver_data;
+ 	union perf_cached perf;
+ 	u8 epp;
+ 
+-	if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
++	if (policy_change ||
++	    policy->min != cpudata->min_limit_freq ||
++	    policy->max != cpudata->max_limit_freq)
+ 		amd_pstate_update_min_max_limit(policy);
+ 
+ 	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+@@ -1577,7 +1579,7 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+ 
+ 	cpudata->policy = policy->policy;
+ 
+-	ret = amd_pstate_epp_update_limit(policy);
++	ret = amd_pstate_epp_update_limit(policy, true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1619,13 +1621,14 @@ static int amd_pstate_suspend(struct cpufreq_policy *policy)
+ 	 * min_perf value across kexec reboots. If this CPU is just resumed back without kexec,
+ 	 * the limits, epp and desired perf will get reset to the cached values in cpudata struct
+ 	 */
+-	ret = amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);
++	ret = amd_pstate_update_perf(policy, perf.bios_min_perf,
++				     FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),
++				     FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
++				     FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),
++				     false);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* invalidate to ensure it's rewritten during resume */
+-	cpudata->cppc_req_cached = 0;
+-
+ 	/* set this flag to avoid setting core offline*/
+ 	cpudata->suspended = true;
+ 
+@@ -1651,7 +1654,7 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
+ 		int ret;
+ 
+ 		/* enable amd pstate from suspend state*/
+-		ret = amd_pstate_epp_update_limit(policy);
++		ret = amd_pstate_epp_update_limit(policy, false);
+ 		if (ret)
+ 			return ret;
+ 
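+
The amd-pstate suspend hunk above stops invalidating cppc_req_cached and instead re-programs every field it extracts from the cached request word. FIELD_GET() is the generic helper for that: given a constant mask, it shifts and masks a field out of a register value. A small sketch with hypothetical masks:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* hypothetical layout of a cached 64-bit request register */
#define TOY_MIN_PERF_MASK	GENMASK_ULL(7, 0)
#define TOY_MAX_PERF_MASK	GENMASK_ULL(15, 8)
#define TOY_EPP_MASK		GENMASK_ULL(31, 24)

static void toy_unpack(u64 cached, u8 *min, u8 *max, u8 *epp)
{
	*min = FIELD_GET(TOY_MIN_PERF_MASK, cached);	/* bits 7:0 */
	*max = FIELD_GET(TOY_MAX_PERF_MASK, cached);	/* bits 15:8 */
	*epp = FIELD_GET(TOY_EPP_MASK, cached);		/* bits 31:24 */
}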
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 06a1c7dd081ffb..9a85c58922a0c8 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1034,8 +1034,8 @@ static bool hybrid_register_perf_domain(unsigned int cpu)
+ 	if (!cpu_dev)
+ 		return false;
+ 
+-	if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
+-					cpumask_of(cpu), false))
++	if (em_dev_register_pd_no_update(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
++					 cpumask_of(cpu), false))
+ 		return false;
+ 
+ 	cpudata->pd_registered = true;
+diff --git a/drivers/dma/dw/rzn1-dmamux.c b/drivers/dma/dw/rzn1-dmamux.c
+index 4fb8508419dbd8..deadf135681b67 100644
+--- a/drivers/dma/dw/rzn1-dmamux.c
++++ b/drivers/dma/dw/rzn1-dmamux.c
+@@ -48,12 +48,16 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ 	u32 mask;
+ 	int ret;
+ 
+-	if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS)
+-		return ERR_PTR(-EINVAL);
++	if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS) {
++		ret = -EINVAL;
++		goto put_device;
++	}
+ 
+ 	map = kzalloc(sizeof(*map), GFP_KERNEL);
+-	if (!map)
+-		return ERR_PTR(-ENOMEM);
++	if (!map) {
++		ret = -ENOMEM;
++		goto put_device;
++	}
+ 
+ 	chan = dma_spec->args[0];
+ 	map->req_idx = dma_spec->args[4];
+@@ -94,12 +98,15 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ 	if (ret)
+ 		goto clear_bitmap;
+ 
++	put_device(&pdev->dev);
+ 	return map;
+ 
+ clear_bitmap:
+ 	clear_bit(map->req_idx, dmamux->used_chans);
+ free_map:
+ 	kfree(map);
++put_device:
++	put_device(&pdev->dev);
+ 
+ 	return ERR_PTR(ret);
+ }
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 80355d03004dbd..b559b0e18809e4 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -189,27 +189,30 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 	idxd->wq_enable_map = bitmap_zalloc_node(idxd->max_wqs, GFP_KERNEL, dev_to_node(dev));
+ 	if (!idxd->wq_enable_map) {
+ 		rc = -ENOMEM;
+-		goto err_bitmap;
++		goto err_free_wqs;
+ 	}
+ 
+ 	for (i = 0; i < idxd->max_wqs; i++) {
+ 		wq = kzalloc_node(sizeof(*wq), GFP_KERNEL, dev_to_node(dev));
+ 		if (!wq) {
+ 			rc = -ENOMEM;
+-			goto err;
++			goto err_unwind;
+ 		}
+ 
+ 		idxd_dev_set_type(&wq->idxd_dev, IDXD_DEV_WQ);
+ 		conf_dev = wq_confdev(wq);
+ 		wq->id = i;
+ 		wq->idxd = idxd;
+-		device_initialize(wq_confdev(wq));
++		device_initialize(conf_dev);
+ 		conf_dev->parent = idxd_confdev(idxd);
+ 		conf_dev->bus = &dsa_bus_type;
+ 		conf_dev->type = &idxd_wq_device_type;
+ 		rc = dev_set_name(conf_dev, "wq%d.%d", idxd->id, wq->id);
+-		if (rc < 0)
+-			goto err;
++		if (rc < 0) {
++			put_device(conf_dev);
++			kfree(wq);
++			goto err_unwind;
++		}
+ 
+ 		mutex_init(&wq->wq_lock);
+ 		init_waitqueue_head(&wq->err_queue);
+@@ -220,15 +223,20 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 		wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;
+ 		wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
+ 		if (!wq->wqcfg) {
++			put_device(conf_dev);
++			kfree(wq);
+ 			rc = -ENOMEM;
+-			goto err;
++			goto err_unwind;
+ 		}
+ 
+ 		if (idxd->hw.wq_cap.op_config) {
+ 			wq->opcap_bmap = bitmap_zalloc(IDXD_MAX_OPCAP_BITS, GFP_KERNEL);
+ 			if (!wq->opcap_bmap) {
++				kfree(wq->wqcfg);
++				put_device(conf_dev);
++				kfree(wq);
+ 				rc = -ENOMEM;
+-				goto err_opcap_bmap;
++				goto err_unwind;
+ 			}
+ 			bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS);
+ 		}
+@@ -239,13 +247,7 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 
+ 	return 0;
+ 
+-err_opcap_bmap:
+-	kfree(wq->wqcfg);
+-
+-err:
+-	put_device(conf_dev);
+-	kfree(wq);
+-
++err_unwind:
+ 	while (--i >= 0) {
+ 		wq = idxd->wqs[i];
+ 		if (idxd->hw.wq_cap.op_config)
+@@ -254,11 +256,10 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ 		conf_dev = wq_confdev(wq);
+ 		put_device(conf_dev);
+ 		kfree(wq);
+-
+ 	}
+ 	bitmap_free(idxd->wq_enable_map);
+ 
+-err_bitmap:
++err_free_wqs:
+ 	kfree(idxd->wqs);
+ 
+ 	return rc;
+@@ -1292,10 +1293,12 @@ static void idxd_remove(struct pci_dev *pdev)
+ 	device_unregister(idxd_confdev(idxd));
+ 	idxd_shutdown(pdev);
+ 	idxd_device_remove_debugfs(idxd);
+-	idxd_cleanup(idxd);
++	perfmon_pmu_remove(idxd);
++	idxd_cleanup_interrupts(idxd);
++	if (device_pasid_enabled(idxd))
++		idxd_disable_system_pasid(idxd);
+ 	pci_iounmap(pdev, idxd->reg_base);
+ 	put_device(idxd_confdev(idxd));
+-	idxd_free(idxd);
+ 	pci_disable_device(pdev);
+ }
+ 
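+
The idxd_setup_wqs() rework above collapses several one-off error labels into a single unwind loop: each loop iteration frees its own partial allocations before jumping, so the label only has to roll back entries that were fully constructed. The general shape, sketched with hypothetical types:

#include <linux/errno.h>
#include <linux/slab.h>

struct toy_item {
	int id;
};

/* allocate n items; on failure, free only what was completed */
static int toy_setup(struct toy_item **items, int n)
{
	int i, rc;

	for (i = 0; i < n; i++) {
		items[i] = kzalloc(sizeof(**items), GFP_KERNEL);
		if (!items[i]) {
			rc = -ENOMEM;
			goto err_unwind;
		}
		items[i]->id = i;
	}
	return 0;

err_unwind:
	while (--i >= 0)	/* roll back fully constructed entries */
		kfree(items[i]);
	return rc;
}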
+diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
+index bbc3276992bb01..2cf060174795fe 100644
+--- a/drivers/dma/qcom/bam_dma.c
++++ b/drivers/dma/qcom/bam_dma.c
+@@ -1283,13 +1283,17 @@ static int bam_dma_probe(struct platform_device *pdev)
+ 	if (!bdev->bamclk) {
+ 		ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
+ 					   &bdev->num_channels);
+-		if (ret)
++		if (ret) {
+ 			dev_err(bdev->dev, "num-channels unspecified in dt\n");
++			return ret;
++		}
+ 
+ 		ret = of_property_read_u32(pdev->dev.of_node, "qcom,num-ees",
+ 					   &bdev->num_ees);
+-		if (ret)
++		if (ret) {
+ 			dev_err(bdev->dev, "num-ees unspecified in dt\n");
++			return ret;
++		}
+ 	}
+ 
+ 	ret = clk_prepare_enable(bdev->bamclk);
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index 3ed406f08c442e..552be71db6c47b 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2064,8 +2064,8 @@ static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
+ 	 * priority. So Q0 is the highest priority queue and the last queue has
+ 	 * the lowest priority.
+ 	 */
+-	queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1, sizeof(s8),
+-					  GFP_KERNEL);
++	queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1,
++					  sizeof(*queue_priority_map), GFP_KERNEL);
+ 	if (!queue_priority_map)
+ 		return -ENOMEM;
+ 
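+
The edma hunk above applies the sizeof(*ptr) idiom: size the allocation from the pointee rather than a hard-coded type, so the element size stays correct if the array's type ever changes. In miniature (hypothetical wrapper):

#include <linux/device.h>
#include <linux/slab.h>

static s8 *toy_alloc_map(struct device *dev, int n)
{
	s8 *map = devm_kcalloc(dev, n + 1, sizeof(*map), GFP_KERNEL);

	return map;	/* NULL on allocation failure */
}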
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index cae52c654a15c6..7685a8550d4b1f 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -128,7 +128,6 @@ static ssize_t altr_sdr_mc_err_inject_write(struct file *file,
+ 
+ 	ptemp = dma_alloc_coherent(mci->pdev, 16, &dma_handle, GFP_KERNEL);
+ 	if (!ptemp) {
+-		dma_free_coherent(mci->pdev, 16, ptemp, dma_handle);
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "Inject: Buffer Allocation error\n");
+ 		return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index f9ceda7861f1b1..cdafce9781ed32 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -596,10 +596,6 @@ int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
+ 		udelay(1);
+ 	}
+ 
+-	dev_err(adev->dev,
+-		"psp reg (0x%x) wait timed out, mask: %x, read: %x exp: %x",
+-		reg_index, mask, val, reg_val);
+-
+ 	return -ETIME;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index a4a00855d0b238..428adc7f741de3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -51,17 +51,6 @@
+ #define C2PMSG_CMD_SPI_GET_ROM_IMAGE_ADDR_HI 0x10
+ #define C2PMSG_CMD_SPI_GET_FLASH_IMAGE 0x11
+ 
+-/* Command register bit 31 set to indicate readiness */
+-#define MBOX_TOS_READY_FLAG (GFX_FLAG_RESPONSE)
+-#define MBOX_TOS_READY_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
+-
+-/* Values to check for a successful GFX_CMD response wait. Check against
+- * both status bits and response state - helps to detect a command failure
+- * or other unexpected cases like a device drop reading all 0xFFs
+- */
+-#define MBOX_TOS_RESP_FLAG (GFX_FLAG_RESPONSE)
+-#define MBOX_TOS_RESP_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
+-
+ extern const struct attribute_group amdgpu_flash_attr_group;
+ 
+ enum psp_shared_mem_size {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 7c5584742471e9..a0b7ac7486dc55 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -389,8 +389,6 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
+ 	dma_fence_put(ring->vmid_wait);
+ 	ring->vmid_wait = NULL;
+ 	ring->me = 0;
+-
+-	ring->adev->rings[ring->idx] = NULL;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c b/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
+index 574880d6700995..2ab6fa4fcf20b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
+@@ -29,6 +29,8 @@
+ #include "amdgpu.h"
+ #include "isp_v4_1_1.h"
+ 
++MODULE_FIRMWARE("amdgpu/isp_4_1_1.bin");
++
+ static const unsigned int isp_4_1_1_int_srcid[MAX_ISP411_INT_SRC] = {
+ 	ISP_4_1__SRCID__ISP_RINGBUFFER_WPT9,
+ 	ISP_4_1__SRCID__ISP_RINGBUFFER_WPT10,
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 2c4ebd98927ff3..145186a1e48f6b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -94,7 +94,7 @@ static int psp_v10_0_ring_create(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++			   0x80000000, 0x8000FFFF, false);
+ 
+ 	return ret;
+ }
+@@ -115,7 +115,7 @@ static int psp_v10_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++			   0x80000000, 0x80000000, false);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+index 1a4a26e6ffd24c..215543575f477c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+@@ -277,13 +277,11 @@ static int psp_v11_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) */
+ 	if (amdgpu_sriov_vf(adev))
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	else
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 
+ 	return ret;
+ }
+@@ -319,15 +317,13 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for sOS ready for ring creation\n");
+ 			return ret;
+@@ -351,9 +347,8 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+@@ -386,8 +381,7 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
+-			   MBOX_TOS_READY_MASK, false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -401,8 +395,7 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
+-			   false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+index 338d015c0f2ee2..5697760a819bc7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+@@ -41,9 +41,8 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command*/
+ 		WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_64,
+@@ -51,9 +50,8 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -89,15 +87,13 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -121,9 +117,8 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index d54b3e0fabaf40..80153f8374704a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -163,7 +163,7 @@ static int psp_v12_0_ring_create(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) in C2PMSG_64 */
+ 	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			   MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++			   0x80000000, 0x8000FFFF, false);
+ 
+ 	return ret;
+ }
+@@ -184,13 +184,11 @@ static int psp_v12_0_ring_stop(struct psp_context *psp,
+ 
+ 	/* Wait for response flag (bit 31) */
+ 	if (amdgpu_sriov_vf(adev))
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	else
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 
+ 	return ret;
+ }
+@@ -221,8 +219,7 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
+-			   MBOX_TOS_READY_MASK, false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -236,8 +233,7 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+ 
+ 	offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+ 
+-	ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
+-			   false);
++	ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
+ 
+ 	if (ret) {
+ 		DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index 58b6b64dcd683b..ead616c117057f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -384,9 +384,8 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command */
+ 		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -394,9 +393,8 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -432,15 +430,13 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -464,9 +460,8 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+index f65af52c1c1939..eaa5512a21dacd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+@@ -204,9 +204,8 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command */
+ 		WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -214,9 +213,8 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -252,15 +250,13 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -284,9 +280,8 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+index b029f301aaccaf..30d8eecc567481 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+@@ -250,9 +250,8 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++				   0x80000000, 0x80000000, false);
+ 	} else {
+ 		/* Write the ring destroy command */
+ 		WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_64,
+@@ -260,9 +259,8 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ 		/* there might be handshake issue with hardware which needs delay */
+ 		mdelay(20);
+ 		/* Wait for response flag (bit 31) */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 	}
+ 
+ 	return ret;
+@@ -298,15 +296,13 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_101 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++				   0x80000000, 0x8000FFFF, false);
+ 
+ 	} else {
+ 		/* Wait for sOS ready for ring creation */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-			MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++				   0x80000000, 0x80000000, false);
+ 		if (ret) {
+ 			DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ 			return ret;
+@@ -330,9 +326,8 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ 		mdelay(20);
+ 
+ 		/* Wait for response flag (bit 31) in C2PMSG_64 */
+-		ret = psp_wait_for(
+-			psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+-			MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++		ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++				   0x80000000, 0x8000FFFF, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 9fb0d53805892d..614e0886556271 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -1875,15 +1875,19 @@ static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
+ 				struct amdgpu_job *job)
+ {
+ 	struct drm_gpu_scheduler **scheds;
+-
+-	/* The create msg must be in the first IB submitted */
+-	if (atomic_read(&job->base.entity->fence_seq))
+-		return -EINVAL;
++	struct dma_fence *fence;
+ 
+ 	/* if VCN0 is harvested, we can't support AV1 */
+ 	if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+ 		return -EINVAL;
+ 
++	/* wait for all jobs to finish before switching to instance 0 */
++	fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
++	if (fence) {
++		dma_fence_wait(fence, false);
++		dma_fence_put(fence);
++	}
++
+ 	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
+ 		[AMDGPU_RING_PRIO_DEFAULT].sched;
+ 	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
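
[Note: the vcn_v3_0 hunk above, and the matching vcn_v4_0 hunk below, replace
the "create msg must be in the first IB" check with a drain: fetch the fence
of the most recently scheduled job on the entity (the ~0ull sequence number is
taken here to mean "latest") and block on it before
drm_sched_entity_modify_sched() retargets the entity. A reduced model of that
drain-then-switch ordering, with hypothetical types:]

	#include <stdbool.h>
	#include <stddef.h>

	struct fence { bool signaled; };

	/* Block until the fence signals; stands in for dma_fence_wait(). */
	static void fence_wait(struct fence *f)
	{
		while (!f->signaled)
			;	/* a real wait sleeps; this model spins */
	}

	/* Wait out the entity's last queued job, then retarget it. */
	static void drain_then_switch(struct fence *last_fence,
				      void (*modify_sched)(void))
	{
		if (last_fence)
			fence_wait(last_fence);	/* nothing left in flight */
		modify_sched();		/* no job can straddle schedulers */
	}

	static void noop(void) {}

	int main(void)
	{
		struct fence done = { .signaled = true };
		drain_then_switch(&done, noop);
		return 0;
	}
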
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index 46c329a1b2f5f0..e77f2df1beb773 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -1807,15 +1807,19 @@ static int vcn_v4_0_limit_sched(struct amdgpu_cs_parser *p,
+ 				struct amdgpu_job *job)
+ {
+ 	struct drm_gpu_scheduler **scheds;
+-
+-	/* The create msg must be in the first IB submitted */
+-	if (atomic_read(&job->base.entity->fence_seq))
+-		return -EINVAL;
++	struct dma_fence *fence;
+ 
+ 	/* if VCN0 is harvested, we can't support AV1 */
+ 	if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+ 		return -EINVAL;
+ 
++	/* wait for all jobs to finish before switching to instance 0 */
++	fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
++	if (fence) {
++		dma_fence_wait(fence, false);
++		dma_fence_put(fence);
++	}
++
+ 	scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
+ 		[AMDGPU_RING_PRIO_0].sched;
+ 	drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+@@ -1906,22 +1910,16 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+ 
+ #define RADEON_VCN_ENGINE_TYPE_ENCODE			(0x00000002)
+ #define RADEON_VCN_ENGINE_TYPE_DECODE			(0x00000003)
+-
+ #define RADEON_VCN_ENGINE_INFO				(0x30000001)
+-#define RADEON_VCN_ENGINE_INFO_MAX_OFFSET		16
+-
+ #define RENCODE_ENCODE_STANDARD_AV1			2
+ #define RENCODE_IB_PARAM_SESSION_INIT			0x00000003
+-#define RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET	64
+ 
+-/* return the offset in ib if id is found, -1 otherwise
+- * to speed up the searching we only search upto max_offset
+- */
+-static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int max_offset)
++/* return the offset in ib if id is found, -1 otherwise */
++static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int start)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < ib->length_dw && i < max_offset && ib->ptr[i] >= 8; i += ib->ptr[i]/4) {
++	for (i = start; i < ib->length_dw && ib->ptr[i] >= 8; i += ib->ptr[i] / 4) {
+ 		if (ib->ptr[i + 1] == id)
+ 			return i;
+ 	}
+@@ -1936,33 +1934,29 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+ 	struct amdgpu_vcn_decode_buffer *decode_buffer;
+ 	uint64_t addr;
+ 	uint32_t val;
+-	int idx;
++	int idx = 0, sidx;
+ 
+ 	/* The first instance can decode anything */
+ 	if (!ring->me)
+ 		return 0;
+ 
+-	/* RADEON_VCN_ENGINE_INFO is at the top of ib block */
+-	idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO,
+-			RADEON_VCN_ENGINE_INFO_MAX_OFFSET);
+-	if (idx < 0) /* engine info is missing */
+-		return 0;
+-
+-	val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
+-	if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
+-		decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
+-
+-		if (!(decode_buffer->valid_buf_flag  & 0x1))
+-			return 0;
+-
+-		addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
+-			decode_buffer->msg_buffer_address_lo;
+-		return vcn_v4_0_dec_msg(p, job, addr);
+-	} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
+-		idx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT,
+-			RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET);
+-		if (idx >= 0 && ib->ptr[idx + 2] == RENCODE_ENCODE_STANDARD_AV1)
+-			return vcn_v4_0_limit_sched(p, job);
++	while ((idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO, idx)) >= 0) {
++		val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
++		if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
++			decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
++
++			if (!(decode_buffer->valid_buf_flag & 0x1))
++				return 0;
++
++			addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
++				decode_buffer->msg_buffer_address_lo;
++			return vcn_v4_0_dec_msg(p, job, addr);
++		} else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
++			sidx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT, idx);
++			if (sidx >= 0 && ib->ptr[sidx + 2] == RENCODE_ENCODE_STANDARD_AV1)
++				return vcn_v4_0_limit_sched(p, job);
++		}
++		idx += ib->ptr[idx] / 4;
+ 	}
+ 	return 0;
+ }
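
[Note: the rewritten scan above drops the MAX_OFFSET bounds and walks every
parameter block in the IB. The layout assumed from the diff: each block begins
with its own size in bytes (at least 8), with the id in the next dword, so the
cursor advances by size/4 dwords. A userspace model of the walk:]

	#include <stdint.h>
	#include <stdio.h>

	/* Return the dword offset of the block whose id matches, or -1. */
	static int find_ib_param(const uint32_t *ib, int len_dw,
				 uint32_t id, int start)
	{
		for (int i = start; i < len_dw && ib[i] >= 8; i += ib[i] / 4)
			if (ib[i + 1] == id)
				return i;
		return -1;
	}

	int main(void)
	{
		/* two blocks: 12 bytes (3 dwords) id 1, then 8 bytes id 2 */
		uint32_t ib[] = { 12, 1, 0, 8, 2 };
		printf("%d\n", find_ib_param(ib, 5, 2, 0));	/* prints 3 */
		return 0;
	}
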
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2d94fec5b545d7..312f6075e39d11 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2910,6 +2910,17 @@ static int dm_oem_i2c_hw_init(struct amdgpu_device *adev)
+ 	return 0;
+ }
+ 
++static void dm_oem_i2c_hw_fini(struct amdgpu_device *adev)
++{
++	struct amdgpu_display_manager *dm = &adev->dm;
++
++	if (dm->oem_i2c) {
++		i2c_del_adapter(&dm->oem_i2c->base);
++		kfree(dm->oem_i2c);
++		dm->oem_i2c = NULL;
++	}
++}
++
+ /**
+  * dm_hw_init() - Initialize DC device
+  * @ip_block: Pointer to the amdgpu_ip_block for this hw instance.
+@@ -2960,7 +2971,7 @@ static int dm_hw_fini(struct amdgpu_ip_block *ip_block)
+ {
+ 	struct amdgpu_device *adev = ip_block->adev;
+ 
+-	kfree(adev->dm.oem_i2c);
++	dm_oem_i2c_hw_fini(adev);
+ 
+ 	amdgpu_dm_hpd_fini(adev);
+ 
+@@ -3073,16 +3084,55 @@ static int dm_cache_state(struct amdgpu_device *adev)
+ 	return adev->dm.cached_state ? 0 : r;
+ }
+ 
+-static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
++static void dm_destroy_cached_state(struct amdgpu_device *adev)
+ {
+-	struct amdgpu_device *adev = ip_block->adev;
++	struct amdgpu_display_manager *dm = &adev->dm;
++	struct drm_device *ddev = adev_to_drm(adev);
++	struct dm_plane_state *dm_new_plane_state;
++	struct drm_plane_state *new_plane_state;
++	struct dm_crtc_state *dm_new_crtc_state;
++	struct drm_crtc_state *new_crtc_state;
++	struct drm_plane *plane;
++	struct drm_crtc *crtc;
++	int i;
+ 
+-	if (amdgpu_in_reset(adev))
+-		return 0;
++	if (!dm->cached_state)
++		return;
++
++	/* Force mode set in atomic commit */
++	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
++		new_crtc_state->active_changed = true;
++		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++		reset_freesync_config_for_crtc(dm_new_crtc_state);
++	}
++
++	/*
++	 * atomic_check is expected to create the dc states. We need to release
++	 * them here, since they were duplicated as part of the suspend
++	 * procedure.
++	 */
++	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
++		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++		if (dm_new_crtc_state->stream) {
++			WARN_ON(kref_read(&dm_new_crtc_state->stream->refcount) > 1);
++			dc_stream_release(dm_new_crtc_state->stream);
++			dm_new_crtc_state->stream = NULL;
++		}
++		dm_new_crtc_state->base.color_mgmt_changed = true;
++	}
++
++	for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
++		dm_new_plane_state = to_dm_plane_state(new_plane_state);
++		if (dm_new_plane_state->dc_state) {
++			WARN_ON(kref_read(&dm_new_plane_state->dc_state->refcount) > 1);
++			dc_plane_state_release(dm_new_plane_state->dc_state);
++			dm_new_plane_state->dc_state = NULL;
++		}
++	}
+ 
+-	WARN_ON(adev->dm.cached_state);
++	drm_atomic_helper_resume(ddev, dm->cached_state);
+ 
+-	return dm_cache_state(adev);
++	dm->cached_state = NULL;
+ }
+ 
+ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+@@ -3306,12 +3356,6 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 	struct amdgpu_dm_connector *aconnector;
+ 	struct drm_connector *connector;
+ 	struct drm_connector_list_iter iter;
+-	struct drm_crtc *crtc;
+-	struct drm_crtc_state *new_crtc_state;
+-	struct dm_crtc_state *dm_new_crtc_state;
+-	struct drm_plane *plane;
+-	struct drm_plane_state *new_plane_state;
+-	struct dm_plane_state *dm_new_plane_state;
+ 	struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state);
+ 	enum dc_connection_type new_connection_type = dc_connection_none;
+ 	struct dc_state *dc_state;
+@@ -3470,40 +3514,7 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 	}
+ 	drm_connector_list_iter_end(&iter);
+ 
+-	/* Force mode set in atomic commit */
+-	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+-		new_crtc_state->active_changed = true;
+-		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+-		reset_freesync_config_for_crtc(dm_new_crtc_state);
+-	}
+-
+-	/*
+-	 * atomic_check is expected to create the dc states. We need to release
+-	 * them here, since they were duplicated as part of the suspend
+-	 * procedure.
+-	 */
+-	for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+-		dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+-		if (dm_new_crtc_state->stream) {
+-			WARN_ON(kref_read(&dm_new_crtc_state->stream->refcount) > 1);
+-			dc_stream_release(dm_new_crtc_state->stream);
+-			dm_new_crtc_state->stream = NULL;
+-		}
+-		dm_new_crtc_state->base.color_mgmt_changed = true;
+-	}
+-
+-	for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
+-		dm_new_plane_state = to_dm_plane_state(new_plane_state);
+-		if (dm_new_plane_state->dc_state) {
+-			WARN_ON(kref_read(&dm_new_plane_state->dc_state->refcount) > 1);
+-			dc_plane_state_release(dm_new_plane_state->dc_state);
+-			dm_new_plane_state->dc_state = NULL;
+-		}
+-	}
+-
+-	drm_atomic_helper_resume(ddev, dm->cached_state);
+-
+-	dm->cached_state = NULL;
++	dm_destroy_cached_state(adev);
+ 
+ 	/* Do mst topology probing after resuming cached state*/
+ 	drm_connector_list_iter_begin(ddev, &iter);
+@@ -3549,7 +3560,6 @@ static const struct amd_ip_funcs amdgpu_dm_funcs = {
+ 	.early_fini = amdgpu_dm_early_fini,
+ 	.hw_init = dm_hw_init,
+ 	.hw_fini = dm_hw_fini,
+-	.prepare_suspend = dm_prepare_suspend,
+ 	.suspend = dm_suspend,
+ 	.resume = dm_resume,
+ 	.is_idle = dm_is_idle,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 25e8befbcc479a..99fd064324baa6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -809,6 +809,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
+ 	drm_dp_aux_init(&aconnector->dm_dp_aux.aux);
+ 	drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
+ 				      &aconnector->base);
++	drm_dp_dpcd_set_probe(&aconnector->dm_dp_aux.aux, false);
+ 
+ 	if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index f41073c0147e23..7dfbfb18593c12 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1095,6 +1095,7 @@ struct dc_debug_options {
+ 	bool enable_hblank_borrow;
+ 	bool force_subvp_df_throttle;
+ 	uint32_t acpi_transition_bitmasks[MAX_PIPES];
++	bool enable_pg_cntl_debug_logs;
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index 58c84f555c0fb8..0ce9489ac6b728 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -133,30 +133,34 @@ enum dsc_clk_source {
+ };
+ 
+ 
+-static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool enable)
++static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
+ {
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
+-	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && enable)
++	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && allow_rcg)
+ 		return;
+ 
+ 	switch (inst) {
+ 	case 0:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 1:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 2:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 3:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+ 		return;
+ 	}
++
++	/* Wait for clock to ramp */
++	if (!allow_rcg)
++		udelay(10);
+ }
+ 
+ static void dccg35_set_symclk32_se_rcg(
+@@ -385,35 +389,34 @@ static void dccg35_set_dtbclk_p_rcg(struct dccg *dccg, int inst, bool enable)
+ 	}
+ }
+ 
+-static void dccg35_set_dppclk_rcg(struct dccg *dccg,
+-												int inst, bool enable)
++static void dccg35_set_dppclk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
+ {
+-
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
+-
+-	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && enable)
++	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && allow_rcg)
+ 		return;
+ 
+ 	switch (inst) {
+ 	case 0:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 1:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 2:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	case 3:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ 		break;
+ 	default:
+ 	BREAK_TO_DEBUGGER();
+ 		break;
+ 	}
+-	//DC_LOG_DEBUG("%s: inst(%d) DPPCLK rcg_disable: %d\n", __func__, inst, enable ? 0 : 1);
+ 
++	/* Wait for clock to ramp */
++	if (!allow_rcg)
++		udelay(10);
+ }
+ 
+ static void dccg35_set_dpstreamclk_rcg(
+@@ -1177,32 +1180,34 @@ static void dccg35_update_dpp_dto(struct dccg *dccg, int dpp_inst,
+ }
+ 
+ static void dccg35_set_dppclk_root_clock_gating(struct dccg *dccg,
+-		 uint32_t dpp_inst, uint32_t enable)
++		 uint32_t dpp_inst, uint32_t disallow_rcg)
+ {
+ 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+ 
+-	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
++	if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && !disallow_rcg)
+ 		return;
+ 
+ 
+ 	switch (dpp_inst) {
+ 	case 0:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	case 1:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	case 2:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	case 3:
+-		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, disallow_rcg);
+ 		break;
+ 	default:
+ 		break;
+ 	}
+-	//DC_LOG_DEBUG("%s: dpp_inst(%d) rcg: %d\n", __func__, dpp_inst, enable);
+ 
++	/* Wait for clock to ramp */
++	if (disallow_rcg)
++		udelay(10);
+ }
+ 
+ static void dccg35_get_pixel_rate_div(
+@@ -1782,8 +1787,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 	//Disable DTO
+ 	switch (inst) {
+ 	case 0:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK0_DTO_PARAM,
+ 				DSCCLK0_DTO_PHASE, 0,
+@@ -1791,8 +1795,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		REG_UPDATE(DSCCLK_DTO_CTRL,	DSCCLK0_EN, 1);
+ 		break;
+ 	case 1:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK1_DTO_PARAM,
+ 				DSCCLK1_DTO_PHASE, 0,
+@@ -1800,8 +1803,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK1_EN, 1);
+ 		break;
+ 	case 2:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK2_DTO_PARAM,
+ 				DSCCLK2_DTO_PHASE, 0,
+@@ -1809,8 +1811,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK2_EN, 1);
+ 		break;
+ 	case 3:
+-		if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+-			REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
++		REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
+ 
+ 		REG_UPDATE_2(DSCCLK3_DTO_PARAM,
+ 				DSCCLK3_DTO_PHASE, 0,
+@@ -1821,6 +1822,9 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ 		BREAK_TO_DEBUGGER();
+ 		return;
+ 	}
++
++	/* Wait for clock to ramp */
++	udelay(10);
+ }
+ 
+ static void dccg35_disable_dscclk(struct dccg *dccg,
+@@ -1864,6 +1868,9 @@ static void dccg35_disable_dscclk(struct dccg *dccg,
+ 	default:
+ 		return;
+ 	}
++
++	/* Wait for clock ramp */
++	udelay(10);
+ }
+ 
+ static void dccg35_enable_symclk_se(struct dccg *dccg, uint32_t stream_enc_inst, uint32_t link_enc_inst)
+@@ -2349,10 +2356,7 @@ static void dccg35_disable_symclk_se_cb(
+ 
+ void dccg35_root_gate_disable_control(struct dccg *dccg, uint32_t pipe_idx, uint32_t disable_clock_gating)
+ {
+-
+-	if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) {
+-		dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
+-	}
++	dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
+ }
+ 
+ static const struct dccg_funcs dccg35_funcs_new = {
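
[Note on polarity in the dccg35 hunks above: the hardware field is a root-gate
*disable*, so allow_rcg writes 0 (gating permitted) and !allow_rcg writes 1
(clock forced on), now followed by a 10 us ramp wait whenever the clock is
being ungated. A compact model of that convention, with hypothetical names:]

	#include <stdbool.h>

	static unsigned int root_gate_disable[4];	/* models DPPCLKn_ROOT_GATE_DISABLE */

	static void delay_us(unsigned int us) { (void)us; /* udelay() stand-in */ }

	static void set_clk_rcg(int inst, bool allow_rcg)
	{
		/* field is a disable: 0 permits gating, 1 forces the clock on */
		root_gate_disable[inst] = allow_rcg ? 0 : 1;

		if (!allow_rcg)
			delay_us(10);	/* give the ungated clock time to ramp */
	}
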
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index cdb8685ae7d719..454e362ff096aa 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -955,7 +955,7 @@ enum dc_status dcn20_enable_stream_timing(
+ 		return DC_ERROR_UNEXPECTED;
+ 	}
+ 
+-	fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
++	udelay(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+ 
+ 	params.vertical_total_min = stream->adjust.v_total_min;
+ 	params.vertical_total_max = stream->adjust.v_total_max;
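
[For scale, the dcn20 delay above computes roughly one frame time in integer
microseconds: v_total * (h_total * 10000 / pix_clk_100hz). With nominal
1080p60 CEA timing, assumed here for illustration (h_total 2200, v_total 1125,
148.5 MHz pixel clock, i.e. 1485000 in 100 Hz units), that evaluates to
1125 * 14 = 15750 us, close to the 16.7 ms frame period. The fsleep -> udelay
swap therefore busy-waits for about a frame, presumably because this path must
not sleep. The arithmetic, checked in plain C:]

	#include <stdio.h>

	int main(void)
	{
		unsigned int v_total = 1125, h_total = 2200;
		unsigned int pix_clk_100hz = 1485000;	/* 148.5 MHz / 100 Hz */

		/* per-line time in us (truncated), times lines per frame */
		unsigned int us = v_total * (h_total * 10000u / pix_clk_100hz);
		printf("%u us\n", us);	/* prints 15750, ~one 16.7 ms frame */
		return 0;
	}
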
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index a267f574b61937..764eff6a4ec6b7 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -113,6 +113,14 @@ static void enable_memory_low_power(struct dc *dc)
+ }
+ #endif
+ 
++static void print_pg_status(struct dc *dc, const char *debug_func, const char *debug_log)
++{
++	if (dc->debug.enable_pg_cntl_debug_logs && dc->res_pool->pg_cntl) {
++		if (dc->res_pool->pg_cntl->funcs->print_pg_status)
++			dc->res_pool->pg_cntl->funcs->print_pg_status(dc->res_pool->pg_cntl, debug_func, debug_log);
++	}
++}
++
+ void dcn35_set_dmu_fgcg(struct dce_hwseq *hws, bool enable)
+ {
+ 	REG_UPDATE_3(DMU_CLK_CNTL,
+@@ -137,6 +145,8 @@ void dcn35_init_hw(struct dc *dc)
+ 	uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+ 	int i;
+ 
++	print_pg_status(dc, __func__, ": start");
++
+ 	if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+ 		dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+ 
+@@ -200,10 +210,7 @@ void dcn35_init_hw(struct dc *dc)
+ 
+ 	/* we want to turn off all dp displays before doing detection */
+ 	dc->link_srv->blank_all_dp_displays(dc);
+-/*
+-	if (hws->funcs.enable_power_gating_plane)
+-		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+-*/
++
+ 	if (res_pool->hubbub && res_pool->hubbub->funcs->dchubbub_init)
+ 		res_pool->hubbub->funcs->dchubbub_init(dc->res_pool->hubbub);
+ 	/* If taking control over from VBIOS, we may want to optimize our first
+@@ -236,6 +243,8 @@ void dcn35_init_hw(struct dc *dc)
+ 		}
+ 
+ 		hws->funcs.init_pipes(dc, dc->current_state);
++		print_pg_status(dc, __func__, ": after init_pipes");
++
+ 		if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
+ 			!dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
+ 			dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+@@ -312,6 +321,7 @@ void dcn35_init_hw(struct dc *dc)
+ 		if (dc->res_pool->pg_cntl->funcs->init_pg_status)
+ 			dc->res_pool->pg_cntl->funcs->init_pg_status(dc->res_pool->pg_cntl);
+ 	}
++	print_pg_status(dc, __func__, ": after init_pg_status");
+ }
+ 
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+@@ -500,97 +510,6 @@ void dcn35_physymclk_root_clock_control(struct dce_hwseq *hws, unsigned int phy_
+ 	}
+ }
+ 
+-void dcn35_dsc_pg_control(
+-		struct dce_hwseq *hws,
+-		unsigned int dsc_inst,
+-		bool power_on)
+-{
+-	uint32_t power_gate = power_on ? 0 : 1;
+-	uint32_t pwr_status = power_on ? 0 : 2;
+-	uint32_t org_ip_request_cntl = 0;
+-
+-	if (hws->ctx->dc->debug.disable_dsc_power_gate)
+-		return;
+-	if (hws->ctx->dc->debug.ignore_pg)
+-		return;
+-	REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+-	if (org_ip_request_cntl == 0)
+-		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+-
+-	switch (dsc_inst) {
+-	case 0: /* DSC0 */
+-		REG_UPDATE(DOMAIN16_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN16_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	case 1: /* DSC1 */
+-		REG_UPDATE(DOMAIN17_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN17_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	case 2: /* DSC2 */
+-		REG_UPDATE(DOMAIN18_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN18_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	case 3: /* DSC3 */
+-		REG_UPDATE(DOMAIN19_PG_CONFIG,
+-				DOMAIN_POWER_GATE, power_gate);
+-
+-		REG_WAIT(DOMAIN19_PG_STATUS,
+-				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
+-		break;
+-	default:
+-		BREAK_TO_DEBUGGER();
+-		break;
+-	}
+-
+-	if (org_ip_request_cntl == 0)
+-		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0);
+-}
+-
+-void dcn35_enable_power_gating_plane(struct dce_hwseq *hws, bool enable)
+-{
+-	bool force_on = true; /* disable power gating */
+-	uint32_t org_ip_request_cntl = 0;
+-
+-	if (hws->ctx->dc->debug.disable_hubp_power_gate)
+-		return;
+-	if (hws->ctx->dc->debug.ignore_pg)
+-		return;
+-	REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+-	if (org_ip_request_cntl == 0)
+-		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+-	/* DCHUBP0/1/2/3/4/5 */
+-	REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	/* DPP0/1/2/3/4/5 */
+-	REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-
+-	force_on = true; /* disable power gating */
+-	if (enable && !hws->ctx->dc->debug.disable_dsc_power_gate)
+-		force_on = false;
+-
+-	/* DCS0/1/2/3/4 */
+-	REG_UPDATE(DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-	REG_UPDATE(DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-
+-
+-}
+-
+ /* In headless boot cases, DIG may be turned
+  * on which causes HW/SW discrepancies.
+  * To avoid this, power down hardware on boot
+@@ -1453,6 +1372,8 @@ void dcn35_prepare_bandwidth(
+ 	}
+ 
+ 	dcn20_prepare_bandwidth(dc, context);
++
++	print_pg_status(dc, __func__, ": after rcg and power up");
+ }
+ 
+ void dcn35_optimize_bandwidth(
+@@ -1461,6 +1382,8 @@ void dcn35_optimize_bandwidth(
+ {
+ 	struct pg_block_update pg_update_state;
+ 
++	print_pg_status(dc, __func__, ": before rcg and power up");
++
+ 	dcn20_optimize_bandwidth(dc, context);
+ 
+ 	if (dc->hwss.calc_blocks_to_gate) {
+@@ -1472,6 +1395,8 @@ void dcn35_optimize_bandwidth(
+ 		if (dc->hwss.root_clock_control)
+ 			dc->hwss.root_clock_control(dc, &pg_update_state, false);
+ 	}
++
++	print_pg_status(dc, __func__, ": after rcg and power up");
+ }
+ 
+ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+index a3ccf805bd16ae..aefb7c47374158 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+@@ -115,7 +115,6 @@ static const struct hw_sequencer_funcs dcn35_funcs = {
+ 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
+ 	.apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
+-	.update_dsc_pg = dcn32_update_dsc_pg,
+ 	.calc_blocks_to_gate = dcn35_calc_blocks_to_gate,
+ 	.calc_blocks_to_ungate = dcn35_calc_blocks_to_ungate,
+ 	.hw_block_power_up = dcn35_hw_block_power_up,
+@@ -150,7 +149,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
+ 	.plane_atomic_disable = dcn35_plane_atomic_disable,
+ 	//.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
+ 	//.hubp_pg_control = dcn35_hubp_pg_control,
+-	.enable_power_gating_plane = dcn35_enable_power_gating_plane,
+ 	.dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ 	.dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
+ 	.physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
+@@ -165,7 +163,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
+ 	.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
+ 	.resync_fifo_dccg_dio = dcn314_resync_fifo_dccg_dio,
+ 	.is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
+-	.dsc_pg_control = dcn35_dsc_pg_control,
+ 	.dsc_pg_status = dcn32_dsc_pg_status,
+ 	.enable_plane = dcn35_enable_plane,
+ 	.wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+index 58f2be2a326b89..a580a55695c3b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+@@ -114,7 +114,6 @@ static const struct hw_sequencer_funcs dcn351_funcs = {
+ 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
+ 	.apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
+-	.update_dsc_pg = dcn32_update_dsc_pg,
+ 	.calc_blocks_to_gate = dcn351_calc_blocks_to_gate,
+ 	.calc_blocks_to_ungate = dcn351_calc_blocks_to_ungate,
+ 	.hw_block_power_up = dcn351_hw_block_power_up,
+@@ -145,7 +144,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
+ 	.plane_atomic_disable = dcn35_plane_atomic_disable,
+ 	//.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
+ 	//.hubp_pg_control = dcn35_hubp_pg_control,
+-	.enable_power_gating_plane = dcn35_enable_power_gating_plane,
+ 	.dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ 	.dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
+ 	.physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
+@@ -159,7 +157,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
+ 	.setup_hpo_hw_control = dcn35_setup_hpo_hw_control,
+ 	.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
+ 	.is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
+-	.dsc_pg_control = dcn35_dsc_pg_control,
+ 	.dsc_pg_status = dcn32_dsc_pg_status,
+ 	.enable_plane = dcn35_enable_plane,
+ 	.wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h b/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
+index 00ea3864dd4df4..bcd0b0dd9c429a 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
+@@ -47,6 +47,7 @@ struct pg_cntl_funcs {
+ 	void (*optc_pg_control)(struct pg_cntl *pg_cntl, unsigned int optc_inst, bool power_on);
+ 	void (*dwb_pg_control)(struct pg_cntl *pg_cntl, bool power_on);
+ 	void (*init_pg_status)(struct pg_cntl *pg_cntl);
++	void (*print_pg_status)(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log);
+ };
+ 
+ #endif //__DC_PG_CNTL_H__
+diff --git a/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c b/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
+index af21c0a27f8657..72bd43f9bbe288 100644
+--- a/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
+@@ -79,16 +79,12 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 	uint32_t power_gate = power_on ? 0 : 1;
+ 	uint32_t pwr_status = power_on ? 0 : 2;
+ 	uint32_t org_ip_request_cntl = 0;
+-	bool block_enabled;
+-
+-	/*need to enable dscclk regardless DSC_PG*/
+-	if (pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc && power_on)
+-		pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc(
+-				pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
++	bool block_enabled = false;
++	bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
++		       pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
++		       pg_cntl->ctx->dc->idle_optimizations_allowed;
+ 
+-	if (pg_cntl->ctx->dc->debug.ignore_pg ||
+-		pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
+-		pg_cntl->ctx->dc->idle_optimizations_allowed)
++	if (skip_pg && !power_on)
+ 		return;
+ 
+ 	block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, dsc_inst);
+@@ -111,7 +107,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN16_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	case 1: /* DSC1 */
+ 		REG_UPDATE(DOMAIN17_PG_CONFIG,
+@@ -119,7 +115,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN17_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	case 2: /* DSC2 */
+ 		REG_UPDATE(DOMAIN18_PG_CONFIG,
+@@ -127,7 +123,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN18_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	case 3: /* DSC3 */
+ 		REG_UPDATE(DOMAIN19_PG_CONFIG,
+@@ -135,7 +131,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 		REG_WAIT(DOMAIN19_PG_STATUS,
+ 				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+-				1, 1000);
++				1, 10000);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+@@ -144,12 +140,6 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ 
+ 	if (dsc_inst < MAX_PIPES)
+ 		pg_cntl->pg_pipe_res_enable[PG_DSC][dsc_inst] = power_on;
+-
+-	if (pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc && !power_on) {
+-		/*this is to disable dscclk*/
+-		pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc(
+-			pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
+-	}
+ }
+ 
+ static bool pg_cntl35_hubp_dpp_pg_status(struct pg_cntl *pg_cntl, unsigned int hubp_dpp_inst)
+@@ -189,11 +179,12 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
+ 	uint32_t pwr_status = power_on ? 0 : 2;
+ 	uint32_t org_ip_request_cntl;
+ 	bool block_enabled;
++	bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
++		       pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
++		       pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
++		       pg_cntl->ctx->dc->idle_optimizations_allowed;
+ 
+-	if (pg_cntl->ctx->dc->debug.ignore_pg ||
+-		pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
+-		pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
+-		pg_cntl->ctx->dc->idle_optimizations_allowed)
++	if (skip_pg && !power_on)
+ 		return;
+ 
+ 	block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, hubp_dpp_inst);
+@@ -213,22 +204,22 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
+ 	case 0:
+ 		/* DPP0 & HUBP0 */
+ 		REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	case 1:
+ 		/* DPP1 & HUBP1 */
+ 		REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	case 2:
+ 		/* DPP2 & HUBP2 */
+ 		REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	case 3:
+ 		/* DPP3 & HUBP3 */
+ 		REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+-		REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++		REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ 		break;
+ 	default:
+ 		BREAK_TO_DEBUGGER();
+@@ -501,6 +492,36 @@ void pg_cntl35_init_pg_status(struct pg_cntl *pg_cntl)
+ 	pg_cntl->pg_res_enable[PG_DWB] = block_enabled;
+ }
+ 
++static void pg_cntl35_print_pg_status(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log)
++{
++	int i = 0;
++	bool block_enabled = false;
++
++	DC_LOG_DEBUG("%s: %s", debug_func, debug_log);
++
++	DC_LOG_DEBUG("PG_CNTL status:\n");
++
++	block_enabled = pg_cntl35_io_clk_status(pg_cntl);
++	DC_LOG_DEBUG("ONO0=%d (DCCG, DIO, DCIO)\n", block_enabled ? 1 : 0);
++
++	block_enabled = pg_cntl35_mem_status(pg_cntl);
++	DC_LOG_DEBUG("ONO1=%d (DCHUBBUB, DCHVM, DCHUBBUBMEM)\n", block_enabled ? 1 : 0);
++
++	block_enabled = pg_cntl35_plane_otg_status(pg_cntl);
++	DC_LOG_DEBUG("ONO2=%d (MPC, OPP, OPTC, DWB)\n", block_enabled ? 1 : 0);
++
++	block_enabled = pg_cntl35_hpo_pg_status(pg_cntl);
++	DC_LOG_DEBUG("ONO3=%d (HPO)\n", block_enabled ? 1 : 0);
++
++	for (i = 0; i < pg_cntl->ctx->dc->res_pool->pipe_count; i++) {
++		block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, i);
++		DC_LOG_DEBUG("ONO%d=%d (DCHUBP%d, DPP%d)\n", 4 + i * 2, block_enabled ? 1 : 0, i, i);
++
++		block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, i);
++		DC_LOG_DEBUG("ONO%d=%d (DSC%d)\n", 5 + i * 2, block_enabled ? 1 : 0, i);
++	}
++}
++
+ static const struct pg_cntl_funcs pg_cntl35_funcs = {
+ 	.init_pg_status = pg_cntl35_init_pg_status,
+ 	.dsc_pg_control = pg_cntl35_dsc_pg_control,
+@@ -511,7 +532,8 @@ static const struct pg_cntl_funcs pg_cntl35_funcs = {
+ 	.mpcc_pg_control = pg_cntl35_mpcc_pg_control,
+ 	.opp_pg_control = pg_cntl35_opp_pg_control,
+ 	.optc_pg_control = pg_cntl35_optc_pg_control,
+-	.dwb_pg_control = pg_cntl35_dwb_pg_control
++	.dwb_pg_control = pg_cntl35_dwb_pg_control,
++	.print_pg_status = pg_cntl35_print_pg_status
+ };
+ 
+ struct pg_cntl *pg_cntl35_create(
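
[Note: the print_pg_status additions above follow the driver's
optional-callback convention: the hook is a new member of the funcs table,
NULL-checked at every call site, and gated behind the new
enable_pg_cntl_debug_logs debug bit, so it costs nothing unless explicitly
enabled. A minimal model of that guard, names hypothetical:]

	#include <stdio.h>

	struct pg_funcs {
		void (*print_status)(const char *func, const char *log);
	};

	struct pg_ctl {
		int debug_logs;		/* models dc->debug.enable_pg_cntl_debug_logs */
		const struct pg_funcs *funcs;
	};

	/* Only call the hook when the debug bit is set and the hook exists. */
	static void maybe_print_status(const struct pg_ctl *pg,
				       const char *func, const char *log)
	{
		if (pg->debug_logs && pg->funcs && pg->funcs->print_status)
			pg->funcs->print_status(func, log);
	}
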
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index ea78c6c8ca7a63..2d4e9368394641 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -691,6 +691,34 @@ void drm_dp_dpcd_set_powered(struct drm_dp_aux *aux, bool powered)
+ }
+ EXPORT_SYMBOL(drm_dp_dpcd_set_powered);
+ 
++/**
++ * drm_dp_dpcd_set_probe() - Set whether a probe before DPCD access is done
++ * @aux: DisplayPort AUX channel
++ * @enable: Enable probing if required
++ */
++void drm_dp_dpcd_set_probe(struct drm_dp_aux *aux, bool enable)
++{
++	WRITE_ONCE(aux->dpcd_probe_disabled, !enable);
++}
++EXPORT_SYMBOL(drm_dp_dpcd_set_probe);
++
++static bool dpcd_access_needs_probe(struct drm_dp_aux *aux)
++{
++	/*
++	 * HP ZR24w corrupts the first DPCD access after entering power save
++	 * mode. Eg. on a read, the entire buffer will be filled with the same
++	 * byte. Do a throw away read to avoid corrupting anything we care
++	 * about. Afterwards things will work correctly until the monitor
++	 * gets woken up and subsequently re-enters power save mode.
++	 *
++	 * The user pressing any button on the monitor is enough to wake it
++	 * up, so there is no particularly good place to do the workaround.
++	 * We just have to do it before any DPCD access and hope that the
++	 * monitor doesn't power down exactly after the throw away read.
++	 */
++	return !aux->is_remote && !READ_ONCE(aux->dpcd_probe_disabled);
++}
++
+ /**
+  * drm_dp_dpcd_read() - read a series of bytes from the DPCD
+  * @aux: DisplayPort AUX channel (SST or MST)
+@@ -712,19 +740,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ {
+ 	int ret;
+ 
+-	/*
+-	 * HP ZR24w corrupts the first DPCD access after entering power save
+-	 * mode. Eg. on a read, the entire buffer will be filled with the same
+-	 * byte. Do a throw away read to avoid corrupting anything we care
+-	 * about. Afterwards things will work correctly until the monitor
+-	 * gets woken up and subsequently re-enters power save mode.
+-	 *
+-	 * The user pressing any button on the monitor is enough to wake it
+-	 * up, so there is no particularly good place to do the workaround.
+-	 * We just have to do it before any DPCD access and hope that the
+-	 * monitor doesn't power down exactly after the throw away read.
+-	 */
+-	if (!aux->is_remote) {
++	if (dpcd_access_needs_probe(aux)) {
+ 		ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET);
+ 		if (ret < 0)
+ 			return ret;
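
[Note: drm_dp_dpcd_set_probe() above stores the opt-out with WRITE_ONCE() and
the read path checks it with READ_ONCE(), a lockless single-word flag rather
than a locked field; the amdgpu_dm MST hunk earlier and the new HWP 0x2869
EDID quirk below are the intended setters. A C11-atomics model of the same
shape (the kernel ONCE accessors are not C11 atomics, but the relaxed
load/store pattern matches):]

	#include <stdatomic.h>
	#include <stdbool.h>

	struct aux_ch {
		atomic_bool probe_disabled;	/* models aux->dpcd_probe_disabled */
		bool is_remote;
	};

	static void aux_set_probe(struct aux_ch *aux, bool enable)
	{
		atomic_store_explicit(&aux->probe_disabled, !enable,
				      memory_order_relaxed);
	}

	static bool aux_needs_probe(struct aux_ch *aux)
	{
		/* remote (MST) AUX never probes; otherwise honour the opt-out */
		return !aux->is_remote &&
		       !atomic_load_explicit(&aux->probe_disabled,
					     memory_order_relaxed);
	}
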
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 74e77742b2bd4f..9c8822b337e2e4 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -66,34 +66,36 @@ static int oui(u8 first, u8 second, u8 third)
+  * on as many displays as possible).
+  */
+ 
+-/* First detailed mode wrong, use largest 60Hz mode */
+-#define EDID_QUIRK_PREFER_LARGE_60		(1 << 0)
+-/* Reported 135MHz pixel clock is too high, needs adjustment */
+-#define EDID_QUIRK_135_CLOCK_TOO_HIGH		(1 << 1)
+-/* Prefer the largest mode at 75 Hz */
+-#define EDID_QUIRK_PREFER_LARGE_75		(1 << 2)
+-/* Detail timing is in cm not mm */
+-#define EDID_QUIRK_DETAILED_IN_CM		(1 << 3)
+-/* Detailed timing descriptors have bogus size values, so just take the
+- * maximum size and use that.
+- */
+-#define EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE	(1 << 4)
+-/* use +hsync +vsync for detailed mode */
+-#define EDID_QUIRK_DETAILED_SYNC_PP		(1 << 6)
+-/* Force reduced-blanking timings for detailed modes */
+-#define EDID_QUIRK_FORCE_REDUCED_BLANKING	(1 << 7)
+-/* Force 8bpc */
+-#define EDID_QUIRK_FORCE_8BPC			(1 << 8)
+-/* Force 12bpc */
+-#define EDID_QUIRK_FORCE_12BPC			(1 << 9)
+-/* Force 6bpc */
+-#define EDID_QUIRK_FORCE_6BPC			(1 << 10)
+-/* Force 10bpc */
+-#define EDID_QUIRK_FORCE_10BPC			(1 << 11)
+-/* Non desktop display (i.e. HMD) */
+-#define EDID_QUIRK_NON_DESKTOP			(1 << 12)
+-/* Cap the DSC target bitrate to 15bpp */
+-#define EDID_QUIRK_CAP_DSC_15BPP		(1 << 13)
++enum drm_edid_internal_quirk {
++	/* First detailed mode wrong, use largest 60Hz mode */
++	EDID_QUIRK_PREFER_LARGE_60 = DRM_EDID_QUIRK_NUM,
++	/* Reported 135MHz pixel clock is too high, needs adjustment */
++	EDID_QUIRK_135_CLOCK_TOO_HIGH,
++	/* Prefer the largest mode at 75 Hz */
++	EDID_QUIRK_PREFER_LARGE_75,
++	/* Detail timing is in cm not mm */
++	EDID_QUIRK_DETAILED_IN_CM,
++	/* Detailed timing descriptors have bogus size values, so just take the
++	 * maximum size and use that.
++	 */
++	EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE,
++	/* use +hsync +vsync for detailed mode */
++	EDID_QUIRK_DETAILED_SYNC_PP,
++	/* Force reduced-blanking timings for detailed modes */
++	EDID_QUIRK_FORCE_REDUCED_BLANKING,
++	/* Force 8bpc */
++	EDID_QUIRK_FORCE_8BPC,
++	/* Force 12bpc */
++	EDID_QUIRK_FORCE_12BPC,
++	/* Force 6bpc */
++	EDID_QUIRK_FORCE_6BPC,
++	/* Force 10bpc */
++	EDID_QUIRK_FORCE_10BPC,
++	/* Non desktop display (i.e. HMD) */
++	EDID_QUIRK_NON_DESKTOP,
++	/* Cap the DSC target bitrate to 15bpp */
++	EDID_QUIRK_CAP_DSC_15BPP,
++};
+ 
+ #define MICROSOFT_IEEE_OUI	0xca125c
+ 
+@@ -128,124 +130,132 @@ static const struct edid_quirk {
+ 	u32 quirks;
+ } edid_quirk_list[] = {
+ 	/* Acer AL1706 */
+-	EDID_QUIRK('A', 'C', 'R', 44358, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('A', 'C', 'R', 44358, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 	/* Acer F51 */
+-	EDID_QUIRK('A', 'P', 'I', 0x7602, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('A', 'P', 'I', 0x7602, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('A', 'E', 'O', 0, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('A', 'E', 'O', 0, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* BenQ GW2765 */
+-	EDID_QUIRK('B', 'N', 'Q', 0x78d6, EDID_QUIRK_FORCE_8BPC),
++	EDID_QUIRK('B', 'N', 'Q', 0x78d6, BIT(EDID_QUIRK_FORCE_8BPC)),
+ 
+ 	/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('B', 'O', 'E', 0x78b, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('B', 'O', 'E', 0x78b, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('C', 'P', 'T', 0x17df, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('C', 'P', 'T', 0x17df, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* SDC panel of Lenovo B50-80 reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('S', 'D', 'C', 0x3652, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('S', 'D', 'C', 0x3652, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* BOE model 0x0771 reports 8 bpc, but is a 6 bpc panel */
+-	EDID_QUIRK('B', 'O', 'E', 0x0771, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('B', 'O', 'E', 0x0771, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* Belinea 10 15 55 */
+-	EDID_QUIRK('M', 'A', 'X', 1516, EDID_QUIRK_PREFER_LARGE_60),
+-	EDID_QUIRK('M', 'A', 'X', 0x77e, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('M', 'A', 'X', 1516, BIT(EDID_QUIRK_PREFER_LARGE_60)),
++	EDID_QUIRK('M', 'A', 'X', 0x77e, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* Envision Peripherals, Inc. EN-7100e */
+-	EDID_QUIRK('E', 'P', 'I', 59264, EDID_QUIRK_135_CLOCK_TOO_HIGH),
++	EDID_QUIRK('E', 'P', 'I', 59264, BIT(EDID_QUIRK_135_CLOCK_TOO_HIGH)),
+ 	/* Envision EN2028 */
+-	EDID_QUIRK('E', 'P', 'I', 8232, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('E', 'P', 'I', 8232, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* Funai Electronics PM36B */
+-	EDID_QUIRK('F', 'C', 'M', 13600, EDID_QUIRK_PREFER_LARGE_75 |
+-				       EDID_QUIRK_DETAILED_IN_CM),
++	EDID_QUIRK('F', 'C', 'M', 13600, BIT(EDID_QUIRK_PREFER_LARGE_75) |
++					 BIT(EDID_QUIRK_DETAILED_IN_CM)),
+ 
+ 	/* LG 27GP950 */
+-	EDID_QUIRK('G', 'S', 'M', 0x5bbf, EDID_QUIRK_CAP_DSC_15BPP),
++	EDID_QUIRK('G', 'S', 'M', 0x5bbf, BIT(EDID_QUIRK_CAP_DSC_15BPP)),
+ 
+ 	/* LG 27GN950 */
+-	EDID_QUIRK('G', 'S', 'M', 0x5b9a, EDID_QUIRK_CAP_DSC_15BPP),
++	EDID_QUIRK('G', 'S', 'M', 0x5b9a, BIT(EDID_QUIRK_CAP_DSC_15BPP)),
+ 
+ 	/* LGD panel of HP zBook 17 G2, eDP 10 bpc, but reports unknown bpc */
+-	EDID_QUIRK('L', 'G', 'D', 764, EDID_QUIRK_FORCE_10BPC),
++	EDID_QUIRK('L', 'G', 'D', 764, BIT(EDID_QUIRK_FORCE_10BPC)),
+ 
+ 	/* LG Philips LCD LP154W01-A5 */
+-	EDID_QUIRK('L', 'P', 'L', 0, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE),
+-	EDID_QUIRK('L', 'P', 'L', 0x2a00, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE),
++	EDID_QUIRK('L', 'P', 'L', 0, BIT(EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)),
++	EDID_QUIRK('L', 'P', 'L', 0x2a00, BIT(EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)),
+ 
+ 	/* Samsung SyncMaster 205BW.  Note: irony */
+-	EDID_QUIRK('S', 'A', 'M', 541, EDID_QUIRK_DETAILED_SYNC_PP),
++	EDID_QUIRK('S', 'A', 'M', 541, BIT(EDID_QUIRK_DETAILED_SYNC_PP)),
+ 	/* Samsung SyncMaster 22[5-6]BW */
+-	EDID_QUIRK('S', 'A', 'M', 596, EDID_QUIRK_PREFER_LARGE_60),
+-	EDID_QUIRK('S', 'A', 'M', 638, EDID_QUIRK_PREFER_LARGE_60),
++	EDID_QUIRK('S', 'A', 'M', 596, BIT(EDID_QUIRK_PREFER_LARGE_60)),
++	EDID_QUIRK('S', 'A', 'M', 638, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ 
+ 	/* Sony PVM-2541A does up to 12 bpc, but only reports max 8 bpc */
+-	EDID_QUIRK('S', 'N', 'Y', 0x2541, EDID_QUIRK_FORCE_12BPC),
++	EDID_QUIRK('S', 'N', 'Y', 0x2541, BIT(EDID_QUIRK_FORCE_12BPC)),
+ 
+ 	/* ViewSonic VA2026w */
+-	EDID_QUIRK('V', 'S', 'C', 5020, EDID_QUIRK_FORCE_REDUCED_BLANKING),
++	EDID_QUIRK('V', 'S', 'C', 5020, BIT(EDID_QUIRK_FORCE_REDUCED_BLANKING)),
+ 
+ 	/* Medion MD 30217 PG */
+-	EDID_QUIRK('M', 'E', 'D', 0x7b8, EDID_QUIRK_PREFER_LARGE_75),
++	EDID_QUIRK('M', 'E', 'D', 0x7b8, BIT(EDID_QUIRK_PREFER_LARGE_75)),
+ 
+ 	/* Lenovo G50 */
+-	EDID_QUIRK('S', 'D', 'C', 18514, EDID_QUIRK_FORCE_6BPC),
++	EDID_QUIRK('S', 'D', 'C', 18514, BIT(EDID_QUIRK_FORCE_6BPC)),
+ 
+ 	/* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+-	EDID_QUIRK('S', 'E', 'C', 0xd033, EDID_QUIRK_FORCE_8BPC),
++	EDID_QUIRK('S', 'E', 'C', 0xd033, BIT(EDID_QUIRK_FORCE_8BPC)),
+ 
+ 	/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+-	EDID_QUIRK('E', 'T', 'R', 13896, EDID_QUIRK_FORCE_8BPC),
++	EDID_QUIRK('E', 'T', 'R', 13896, BIT(EDID_QUIRK_FORCE_8BPC)),
+ 
+ 	/* Valve Index Headset */
+-	EDID_QUIRK('V', 'L', 'V', 0x91a8, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b0, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b1, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b2, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b3, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b4, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b5, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b6, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b7, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b8, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91b9, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91ba, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bb, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bc, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bd, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91be, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('V', 'L', 'V', 0x91bf, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('V', 'L', 'V', 0x91a8, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b0, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b1, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b2, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b3, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b4, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b5, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b6, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b7, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b8, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91b9, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91ba, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bb, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bc, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bd, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91be, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('V', 'L', 'V', 0x91bf, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* HTC Vive and Vive Pro VR Headsets */
+-	EDID_QUIRK('H', 'V', 'R', 0xaa01, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('H', 'V', 'R', 0xaa02, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('H', 'V', 'R', 0xaa01, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('H', 'V', 'R', 0xaa02, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Oculus Rift DK1, DK2, CV1 and Rift S VR Headsets */
+-	EDID_QUIRK('O', 'V', 'R', 0x0001, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('O', 'V', 'R', 0x0003, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('O', 'V', 'R', 0x0004, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('O', 'V', 'R', 0x0012, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('O', 'V', 'R', 0x0001, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('O', 'V', 'R', 0x0003, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('O', 'V', 'R', 0x0004, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('O', 'V', 'R', 0x0012, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Windows Mixed Reality Headsets */
+-	EDID_QUIRK('A', 'C', 'R', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('L', 'E', 'N', 0x0408, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('F', 'U', 'J', 0x1970, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('D', 'E', 'L', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('S', 'E', 'C', 0x144a, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('A', 'U', 'S', 0xc102, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('A', 'C', 'R', 0x7fce, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('L', 'E', 'N', 0x0408, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('F', 'U', 'J', 0x1970, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('D', 'E', 'L', 0x7fce, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('S', 'E', 'C', 0x144a, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('A', 'U', 'S', 0xc102, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Sony PlayStation VR Headset */
+-	EDID_QUIRK('S', 'N', 'Y', 0x0704, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('S', 'N', 'Y', 0x0704, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* Sensics VR Headsets */
+-	EDID_QUIRK('S', 'E', 'N', 0x1019, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('S', 'E', 'N', 0x1019, BIT(EDID_QUIRK_NON_DESKTOP)),
+ 
+ 	/* OSVR HDK and HDK2 VR Headsets */
+-	EDID_QUIRK('S', 'V', 'R', 0x1019, EDID_QUIRK_NON_DESKTOP),
+-	EDID_QUIRK('A', 'U', 'O', 0x1111, EDID_QUIRK_NON_DESKTOP),
++	EDID_QUIRK('S', 'V', 'R', 0x1019, BIT(EDID_QUIRK_NON_DESKTOP)),
++	EDID_QUIRK('A', 'U', 'O', 0x1111, BIT(EDID_QUIRK_NON_DESKTOP)),
++
++	/*
++	 * @drm_edid_internal_quirk entries end here, followed by the
++	 * @drm_edid_quirk entries.
++	 */
++
++	/* HP ZR24w DP AUX DPCD access requires probing to prevent corruption. */
++	EDID_QUIRK('H', 'W', 'P', 0x2869, BIT(DRM_EDID_QUIRK_DP_DPCD_PROBE)),
+ };
+ 
+ /*
+@@ -2951,6 +2961,18 @@ static u32 edid_get_quirks(const struct drm_edid *drm_edid)
+ 	return 0;
+ }
+ 
++static bool drm_edid_has_internal_quirk(struct drm_connector *connector,
++					enum drm_edid_internal_quirk quirk)
++{
++	return connector->display_info.quirks & BIT(quirk);
++}
++
++bool drm_edid_has_quirk(struct drm_connector *connector, enum drm_edid_quirk quirk)
++{
++	return connector->display_info.quirks & BIT(quirk);
++}
++EXPORT_SYMBOL(drm_edid_has_quirk);
++
+ #define MODE_SIZE(m) ((m)->hdisplay * (m)->vdisplay)
+ #define MODE_REFRESH_DIFF(c,t) (abs((c) - (t)))
+ 
+@@ -2960,7 +2982,6 @@ static u32 edid_get_quirks(const struct drm_edid *drm_edid)
+  */
+ static void edid_fixup_preferred(struct drm_connector *connector)
+ {
+-	const struct drm_display_info *info = &connector->display_info;
+ 	struct drm_display_mode *t, *cur_mode, *preferred_mode;
+ 	int target_refresh = 0;
+ 	int cur_vrefresh, preferred_vrefresh;
+@@ -2968,9 +2989,9 @@ static void edid_fixup_preferred(struct drm_connector *connector)
+ 	if (list_empty(&connector->probed_modes))
+ 		return;
+ 
+-	if (info->quirks & EDID_QUIRK_PREFER_LARGE_60)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_60))
+ 		target_refresh = 60;
+-	if (info->quirks & EDID_QUIRK_PREFER_LARGE_75)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_75))
+ 		target_refresh = 75;
+ 
+ 	preferred_mode = list_first_entry(&connector->probed_modes,
+@@ -3474,7 +3495,6 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 						  const struct drm_edid *drm_edid,
+ 						  const struct detailed_timing *timing)
+ {
+-	const struct drm_display_info *info = &connector->display_info;
+ 	struct drm_device *dev = connector->dev;
+ 	struct drm_display_mode *mode;
+ 	const struct detailed_pixel_timing *pt = &timing->data.pixel_data;
+@@ -3508,7 +3528,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 		return NULL;
+ 	}
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_REDUCED_BLANKING) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_REDUCED_BLANKING)) {
+ 		mode = drm_cvt_mode(dev, hactive, vactive, 60, true, false, false);
+ 		if (!mode)
+ 			return NULL;
+@@ -3520,7 +3540,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 	if (!mode)
+ 		return NULL;
+ 
+-	if (info->quirks & EDID_QUIRK_135_CLOCK_TOO_HIGH)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_135_CLOCK_TOO_HIGH))
+ 		mode->clock = 1088 * 10;
+ 	else
+ 		mode->clock = le16_to_cpu(timing->pixel_clock) * 10;
+@@ -3551,7 +3571,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 
+ 	drm_mode_do_interlace_quirk(mode, pt);
+ 
+-	if (info->quirks & EDID_QUIRK_DETAILED_SYNC_PP) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_SYNC_PP)) {
+ 		mode->flags |= DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC;
+ 	} else {
+ 		mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ?
+@@ -3564,12 +3584,12 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ 	mode->width_mm = pt->width_mm_lo | (pt->width_height_mm_hi & 0xf0) << 4;
+ 	mode->height_mm = pt->height_mm_lo | (pt->width_height_mm_hi & 0xf) << 8;
+ 
+-	if (info->quirks & EDID_QUIRK_DETAILED_IN_CM) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_IN_CM)) {
+ 		mode->width_mm *= 10;
+ 		mode->height_mm *= 10;
+ 	}
+ 
+-	if (info->quirks & EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)) {
+ 		mode->width_mm = drm_edid->edid->width_cm * 10;
+ 		mode->height_mm = drm_edid->edid->height_cm * 10;
+ 	}
+@@ -6734,26 +6754,26 @@ static void update_display_info(struct drm_connector *connector,
+ 	drm_update_mso(connector, drm_edid);
+ 
+ out:
+-	if (info->quirks & EDID_QUIRK_NON_DESKTOP) {
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_NON_DESKTOP)) {
+ 		drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] Non-desktop display%s\n",
+ 			    connector->base.id, connector->name,
+ 			    info->non_desktop ? " (redundant quirk)" : "");
+ 		info->non_desktop = true;
+ 	}
+ 
+-	if (info->quirks & EDID_QUIRK_CAP_DSC_15BPP)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_CAP_DSC_15BPP))
+ 		info->max_dsc_bpp = 15;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_6BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_6BPC))
+ 		info->bpc = 6;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_8BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_8BPC))
+ 		info->bpc = 8;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_10BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_10BPC))
+ 		info->bpc = 10;
+ 
+-	if (info->quirks & EDID_QUIRK_FORCE_12BPC)
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_12BPC))
+ 		info->bpc = 12;
+ 
+ 	/* Depends on info->cea_rev set by drm_parse_cea_ext() above */
+@@ -6918,7 +6938,6 @@ static int add_displayid_detailed_modes(struct drm_connector *connector,
+ static int _drm_edid_connector_add_modes(struct drm_connector *connector,
+ 					 const struct drm_edid *drm_edid)
+ {
+-	const struct drm_display_info *info = &connector->display_info;
+ 	int num_modes = 0;
+ 
+ 	if (!drm_edid)
+@@ -6948,7 +6967,8 @@ static int _drm_edid_connector_add_modes(struct drm_connector *connector,
+ 	if (drm_edid->edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ)
+ 		num_modes += add_inferred_modes(connector, drm_edid);
+ 
+-	if (info->quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75))
++	if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_60) ||
++	    drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_75))
+ 		edid_fixup_preferred(connector);
+ 
+ 	return num_modes;
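
[Editorial sketch — not part of the patch.] The drm_edid.c hunks above switch
the quirk table from pre-shifted flag values to enum bit positions packed with
BIT(), with lookups reduced to testing the saved mask in display_info.quirks.
A minimal standalone model of that pattern (all identifiers below are
illustrative stand-ins, not the kernel's):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1u << (n))

enum quirk { QUIRK_NON_DESKTOP, QUIRK_FORCE_8BPC };

struct quirk_entry {
	const char *vendor;	/* stand-in for the 3-letter EDID vendor ID */
	uint16_t product;
	uint32_t quirks;	/* OR of BIT(enum quirk) values */
};

static const struct quirk_entry table[] = {
	{ "VLV", 0x91a8, BIT(QUIRK_NON_DESKTOP) },
	{ "ETR", 13896,  BIT(QUIRK_FORCE_8BPC) },
};

static bool has_quirk(uint32_t quirks, enum quirk q)
{
	return quirks & BIT(q);	/* mirrors drm_edid_has_quirk()'s test */
}

int main(void)
{
	printf("non-desktop: %d\n", has_quirk(table[0].quirks, QUIRK_NON_DESKTOP));
	return 0;
}

Keeping the enum values as bit positions lets the internal and the newly
exported query helpers share one mask word.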
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 16356523816fb8..068ed911e12458 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -1169,7 +1169,7 @@ static void icl_mbus_init(struct intel_display *display)
+ 	if (DISPLAY_VER(display) == 12)
+ 		abox_regs |= BIT(0);
+ 
+-	for_each_set_bit(i, &abox_regs, sizeof(abox_regs))
++	for_each_set_bit(i, &abox_regs, BITS_PER_TYPE(abox_regs))
+ 		intel_de_rmw(display, MBUS_ABOX_CTL(i), mask, val);
+ }
+ 
+@@ -1630,11 +1630,11 @@ static void tgl_bw_buddy_init(struct intel_display *display)
+ 	if (table[config].page_mask == 0) {
+ 		drm_dbg_kms(display->drm,
+ 			    "Unknown memory configuration; disabling address buddy logic.\n");
+-		for_each_set_bit(i, &abox_mask, sizeof(abox_mask))
++		for_each_set_bit(i, &abox_mask, BITS_PER_TYPE(abox_mask))
+ 			intel_de_write(display, BW_BUDDY_CTL(i),
+ 				       BW_BUDDY_DISABLE);
+ 	} else {
+-		for_each_set_bit(i, &abox_mask, sizeof(abox_mask)) {
++		for_each_set_bit(i, &abox_mask, BITS_PER_TYPE(abox_mask)) {
+ 			intel_de_write(display, BW_BUDDY_PAGE_MASK(i),
+ 				       table[config].page_mask);
+ 
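
[Editorial sketch — not part of the patch.] The two i915 hunks above fix a
bit/byte confusion: for_each_set_bit() takes its bound in bits, but sizeof()
yields bytes, so a 64-bit mask was only ever walked up to bit 7. A
self-contained demonstration of the difference (BITS_PER_TYPE re-derived
locally just for illustration):

#include <stdio.h>

#define BITS_PER_TYPE(t) (sizeof(t) * 8)

int main(void)
{
	unsigned long mask = 1ul << 20;	/* only bit 20 set */
	unsigned int i;

	for (i = 0; i < sizeof(mask); i++)	/* buggy bound: bytes */
		if (mask & (1ul << i))
			printf("buggy loop saw bit %u\n", i);	/* never prints */

	for (i = 0; i < BITS_PER_TYPE(mask); i++)	/* correct bound: bits */
		if (mask & (1ul << i))
			printf("fixed loop saw bit %u\n", i);
	return 0;
}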
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 34131ae2c207df..3b02ed0a16dab1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -388,11 +388,11 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ 
+ 		of_id = of_match_node(mtk_drm_of_ids, node);
+ 		if (!of_id)
+-			goto next_put_node;
++			continue;
+ 
+ 		pdev = of_find_device_by_node(node);
+ 		if (!pdev)
+-			goto next_put_node;
++			continue;
+ 
+ 		drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match);
+ 		if (!drm_dev)
+@@ -418,11 +418,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ next_put_device_pdev_dev:
+ 		put_device(&pdev->dev);
+ 
+-next_put_node:
+-		of_node_put(node);
+-
+-		if (cnt == MAX_CRTC)
++		if (cnt == MAX_CRTC) {
++			of_node_put(node);
+ 			break;
++		}
+ 	}
+ 
+ 	if (drm_priv->data->mmsys_dev_num == cnt) {
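
[Editorial sketch — not part of the patch.] The mediatek change above is a
reference-counting fix: a for_each_child_of_node()-style iterator drops the
previous child's reference itself when it advances, so "of_node_put() then
continue" double-puts, while an early break leaves one reference the caller
must drop explicitly. A toy model of that ownership rule (simulated refcounts,
illustrative names):

#include <stdio.h>

struct node { int refs; };

static struct node pool[3];

static struct node *get_next(struct node *prev)
{
	size_t idx = prev ? (size_t)(prev - pool) + 1 : 0;

	if (prev)
		prev->refs--;	/* the iterator puts the previous node */
	if (idx >= 3)
		return NULL;
	pool[idx].refs++;	/* and takes a reference on the next one */
	return &pool[idx];
}

int main(void)
{
	struct node *n = NULL;

	while ((n = get_next(n))) {
		if (n == &pool[0])
			continue;	/* no explicit put: the next get_next() drops it */
		if (n == &pool[1]) {
			n->refs--;	/* early break: the caller must put */
			break;
		}
	}
	for (int i = 0; i < 3; i++)
		printf("node %d refs: %d\n", i, pool[i].refs);	/* all 0 */
	return 0;
}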
+diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
+index 6200cad22563a3..0f4ab9e5ef95cb 100644
+--- a/drivers/gpu/drm/panthor/panthor_drv.c
++++ b/drivers/gpu/drm/panthor/panthor_drv.c
+@@ -1093,7 +1093,7 @@ static int panthor_ioctl_group_create(struct drm_device *ddev, void *data,
+ 	struct drm_panthor_queue_create *queue_args;
+ 	int ret;
+ 
+-	if (!args->queues.count)
++	if (!args->queues.count || args->queues.count > MAX_CS_PER_CSG)
+ 		return -EINVAL;
+ 
+ 	ret = PANTHOR_UOBJ_GET_ARRAY(queue_args, &args->queues);
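
[Editorial sketch — not part of the patch.] The panthor hunk above adds the
usual ioctl hygiene: clamp a user-supplied element count to a hard upper bound
before it sizes any allocation or loop. A minimal model of the check, with
MAX_QUEUES standing in for the driver's MAX_CS_PER_CSG:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_QUEUES 32	/* illustrative bound */

static int create_group(uint32_t count)
{
	void *args;

	if (!count || count > MAX_QUEUES)
		return -EINVAL;
	args = calloc(count, 64);	/* bounded, so not user-inflatable */
	if (!args)
		return -ENOMEM;
	free(args);
	return 0;
}

int main(void)
{
	printf("%d %d %d\n", create_group(0), create_group(4),
	       create_group(1u << 20));	/* -22, 0, -22 */
	return 0;
}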
+diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
+index 378dcd0fb41493..a34d1e2597b79f 100644
+--- a/drivers/gpu/drm/xe/tests/xe_bo.c
++++ b/drivers/gpu/drm/xe/tests/xe_bo.c
+@@ -236,7 +236,7 @@ static int evict_test_run_tile(struct xe_device *xe, struct xe_tile *tile, struc
+ 		}
+ 
+ 		xe_bo_lock(external, false);
+-		err = xe_bo_pin_external(external);
++		err = xe_bo_pin_external(external, false);
+ 		xe_bo_unlock(external);
+ 		if (err) {
+ 			KUNIT_FAIL(test, "external bo pin err=%pe\n",
+diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+index c53f67ce4b0aa2..121f17c112ec6a 100644
+--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+@@ -89,15 +89,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported,
+ 		return;
+ 	}
+ 
+-	/*
+-	 * If on different devices, the exporter is kept in system  if
+-	 * possible, saving a migration step as the transfer is just
+-	 * likely as fast from system memory.
+-	 */
+-	if (params->mem_mask & XE_BO_FLAG_SYSTEM)
+-		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, XE_PL_TT));
+-	else
+-		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
++	KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
+ 
+ 	if (params->force_different_devices)
+ 		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(imported, XE_PL_TT));
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 50326e756f8975..5390f535394695 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -184,6 +184,8 @@ static void try_add_system(struct xe_device *xe, struct xe_bo *bo,
+ 
+ 		bo->placements[*c] = (struct ttm_place) {
+ 			.mem_type = XE_PL_TT,
++			.flags = (bo_flags & XE_BO_FLAG_VRAM_MASK) ?
++			TTM_PL_FLAG_FALLBACK : 0,
+ 		};
+ 		*c += 1;
+ 	}
+@@ -2266,6 +2268,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
+ /**
+  * xe_bo_pin_external - pin an external BO
+  * @bo: buffer object to be pinned
++ * @in_place: Pin in current placement, don't attempt to migrate.
+  *
+  * Pin an external (not tied to a VM, can be exported via dma-buf / prime FD)
+  * BO. Unique call compared to xe_bo_pin as this function has its own set of
+@@ -2273,7 +2276,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
+  *
+  * Returns 0 for success, negative error code otherwise.
+  */
+-int xe_bo_pin_external(struct xe_bo *bo)
++int xe_bo_pin_external(struct xe_bo *bo, bool in_place)
+ {
+ 	struct xe_device *xe = xe_bo_device(bo);
+ 	int err;
+@@ -2282,9 +2285,11 @@ int xe_bo_pin_external(struct xe_bo *bo)
+ 	xe_assert(xe, xe_bo_is_user(bo));
+ 
+ 	if (!xe_bo_is_pinned(bo)) {
+-		err = xe_bo_validate(bo, NULL, false);
+-		if (err)
+-			return err;
++		if (!in_place) {
++			err = xe_bo_validate(bo, NULL, false);
++			if (err)
++				return err;
++		}
+ 
+ 		spin_lock(&xe->pinned.lock);
+ 		list_add_tail(&bo->pinned_link, &xe->pinned.late.external);
+@@ -2437,6 +2442,9 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 	};
+ 	int ret;
+ 
++	if (xe_bo_is_pinned(bo))
++		return 0;
++
+ 	if (vm) {
+ 		lockdep_assert_held(&vm->lock);
+ 		xe_vm_assert_held(vm);
+diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
+index 02ada1fb8a2359..bf0432c360bbba 100644
+--- a/drivers/gpu/drm/xe/xe_bo.h
++++ b/drivers/gpu/drm/xe/xe_bo.h
+@@ -201,7 +201,7 @@ static inline void xe_bo_unlock_vm_held(struct xe_bo *bo)
+ 	}
+ }
+ 
+-int xe_bo_pin_external(struct xe_bo *bo);
++int xe_bo_pin_external(struct xe_bo *bo, bool in_place);
+ int xe_bo_pin(struct xe_bo *bo);
+ void xe_bo_unpin_external(struct xe_bo *bo);
+ void xe_bo_unpin(struct xe_bo *bo);
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index 6383a1c0d47847..1db2aba4738c23 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -529,6 +529,12 @@ struct xe_device {
+ 
+ 	/** @pm_notifier: Our PM notifier to perform actions in response to various PM events. */
+ 	struct notifier_block pm_notifier;
++	/** @pm_block: Completion to block validating tasks on suspend / hibernate prepare */
++	struct completion pm_block;
++	/** @rebind_resume_list: List of wq items to kick on resume. */
++	struct list_head rebind_resume_list;
++	/** @rebind_resume_lock: Lock to protect the rebind_resume_list */
++	struct mutex rebind_resume_lock;
+ 
+ 	/** @pmt: Support the PMT driver callback interface */
+ 	struct {
+diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
+index 346f857f38374f..af64baf872ef7b 100644
+--- a/drivers/gpu/drm/xe/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/xe_dma_buf.c
+@@ -72,7 +72,7 @@ static int xe_dma_buf_pin(struct dma_buf_attachment *attach)
+ 		return ret;
+ 	}
+ 
+-	ret = xe_bo_pin_external(bo);
++	ret = xe_bo_pin_external(bo, true);
+ 	xe_assert(xe, !ret);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index 44364c042ad72d..374c831e691b2b 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -237,6 +237,15 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ 		goto err_unlock_list;
+ 	}
+ 
++	/*
++	 * It's OK to block interruptible here with the vm lock held, since
++	 * on task freezing during suspend / hibernate, the call will
++	 * return -ERESTARTSYS and the IOCTL will be rerun.
++	 */
++	err = wait_for_completion_interruptible(&xe->pm_block);
++	if (err)
++		goto err_unlock_list;
++
+ 	vm_exec.vm = &vm->gpuvm;
+ 	vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
+ 	if (xe_vm_in_lr_mode(vm)) {
+diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
+index ad263de44111d4..375a197a86089b 100644
+--- a/drivers/gpu/drm/xe/xe_pm.c
++++ b/drivers/gpu/drm/xe/xe_pm.c
+@@ -23,6 +23,7 @@
+ #include "xe_pcode.h"
+ #include "xe_pxp.h"
+ #include "xe_trace.h"
++#include "xe_vm.h"
+ #include "xe_wa.h"
+ 
+ /**
+@@ -285,6 +286,19 @@ static u32 vram_threshold_value(struct xe_device *xe)
+ 	return DEFAULT_VRAM_THRESHOLD;
+ }
+ 
++static void xe_pm_wake_rebind_workers(struct xe_device *xe)
++{
++	struct xe_vm *vm, *next;
++
++	mutex_lock(&xe->rebind_resume_lock);
++	list_for_each_entry_safe(vm, next, &xe->rebind_resume_list,
++				 preempt.pm_activate_link) {
++		list_del_init(&vm->preempt.pm_activate_link);
++		xe_vm_resume_rebind_worker(vm);
++	}
++	mutex_unlock(&xe->rebind_resume_lock);
++}
++
+ static int xe_pm_notifier_callback(struct notifier_block *nb,
+ 				   unsigned long action, void *data)
+ {
+@@ -294,30 +308,30 @@ static int xe_pm_notifier_callback(struct notifier_block *nb,
+ 	switch (action) {
+ 	case PM_HIBERNATION_PREPARE:
+ 	case PM_SUSPEND_PREPARE:
++		reinit_completion(&xe->pm_block);
+ 		xe_pm_runtime_get(xe);
+ 		err = xe_bo_evict_all_user(xe);
+-		if (err) {
++		if (err)
+ 			drm_dbg(&xe->drm, "Notifier evict user failed (%d)\n", err);
+-			xe_pm_runtime_put(xe);
+-			break;
+-		}
+ 
+ 		err = xe_bo_notifier_prepare_all_pinned(xe);
+-		if (err) {
++		if (err)
+ 			drm_dbg(&xe->drm, "Notifier prepare pin failed (%d)\n", err);
+-			xe_pm_runtime_put(xe);
+-		}
++		/*
++		 * Keep the runtime pm reference until post hibernation / post suspend to
++		 * avoid a runtime suspend interfering with evicted objects or backup
++		 * allocations.
++		 */
+ 		break;
+ 	case PM_POST_HIBERNATION:
+ 	case PM_POST_SUSPEND:
++		complete_all(&xe->pm_block);
++		xe_pm_wake_rebind_workers(xe);
+ 		xe_bo_notifier_unprepare_all_pinned(xe);
+ 		xe_pm_runtime_put(xe);
+ 		break;
+ 	}
+ 
+-	if (err)
+-		return NOTIFY_BAD;
+-
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -339,6 +353,14 @@ int xe_pm_init(struct xe_device *xe)
+ 	if (err)
+ 		return err;
+ 
++	err = drmm_mutex_init(&xe->drm, &xe->rebind_resume_lock);
++	if (err)
++		goto err_unregister;
++
++	init_completion(&xe->pm_block);
++	complete_all(&xe->pm_block);
++	INIT_LIST_HEAD(&xe->rebind_resume_list);
++
+ 	/* For now suspend/resume is only allowed with GuC */
+ 	if (!xe_device_uc_enabled(xe))
+ 		return 0;
+diff --git a/drivers/gpu/drm/xe/xe_survivability_mode.c b/drivers/gpu/drm/xe/xe_survivability_mode.c
+index 1f710b3fc599b5..5ae3d70e45167f 100644
+--- a/drivers/gpu/drm/xe/xe_survivability_mode.c
++++ b/drivers/gpu/drm/xe/xe_survivability_mode.c
+@@ -40,6 +40,8 @@
+  *
+  *	# echo 1 > /sys/kernel/config/xe/0000:03:00.0/survivability_mode
+  *
++ * It is the responsibility of the user to clear the mode once firmware flash is complete.
++ *
+  * Refer :ref:`xe_configfs` for more details on how to use configfs
+  *
+  * Survivability mode is indicated by the below admin-only readable sysfs which provides additional
+@@ -146,7 +148,6 @@ static void xe_survivability_mode_fini(void *arg)
+ 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
+ 	struct device *dev = &pdev->dev;
+ 
+-	xe_configfs_clear_survivability_mode(pdev);
+ 	sysfs_remove_file(&dev->kobj, &dev_attr_survivability_mode.attr);
+ }
+ 
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index e278aad1a6eb29..84052b98002d14 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -393,6 +393,9 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+ 		list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
+ 			       &vm->rebind_list);
+ 
++	if (!try_wait_for_completion(&vm->xe->pm_block))
++		return -EAGAIN;
++
+ 	ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false);
+ 	if (ret)
+ 		return ret;
+@@ -479,6 +482,33 @@ static int xe_preempt_work_begin(struct drm_exec *exec, struct xe_vm *vm,
+ 	return xe_vm_validate_rebind(vm, exec, vm->preempt.num_exec_queues);
+ }
+ 
++static bool vm_suspend_rebind_worker(struct xe_vm *vm)
++{
++	struct xe_device *xe = vm->xe;
++	bool ret = false;
++
++	mutex_lock(&xe->rebind_resume_lock);
++	if (!try_wait_for_completion(&vm->xe->pm_block)) {
++		ret = true;
++		list_move_tail(&vm->preempt.pm_activate_link, &xe->rebind_resume_list);
++	}
++	mutex_unlock(&xe->rebind_resume_lock);
++
++	return ret;
++}
++
++/**
++ * xe_vm_resume_rebind_worker() - Resume the rebind worker.
++ * @vm: The vm whose preempt worker to resume.
++ *
++ * Resume a preempt worker that was previously suspended by
++ * vm_suspend_rebind_worker().
++ */
++void xe_vm_resume_rebind_worker(struct xe_vm *vm)
++{
++	queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
++}
++
+ static void preempt_rebind_work_func(struct work_struct *w)
+ {
+ 	struct xe_vm *vm = container_of(w, struct xe_vm, preempt.rebind_work);
+@@ -502,6 +532,11 @@ static void preempt_rebind_work_func(struct work_struct *w)
+ 	}
+ 
+ retry:
++	if (!try_wait_for_completion(&vm->xe->pm_block) && vm_suspend_rebind_worker(vm)) {
++		up_write(&vm->lock);
++		return;
++	}
++
+ 	if (xe_vm_userptr_check_repin(vm)) {
+ 		err = xe_vm_userptr_pin(vm);
+ 		if (err)
+@@ -1686,6 +1721,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
+ 	if (flags & XE_VM_FLAG_LR_MODE) {
+ 		INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
+ 		xe_pm_runtime_get_noresume(xe);
++		INIT_LIST_HEAD(&vm->preempt.pm_activate_link);
+ 	}
+ 
+ 	if (flags & XE_VM_FLAG_FAULT_MODE) {
+@@ -1867,8 +1903,12 @@ void xe_vm_close_and_put(struct xe_vm *vm)
+ 	xe_assert(xe, !vm->preempt.num_exec_queues);
+ 
+ 	xe_vm_close(vm);
+-	if (xe_vm_in_preempt_fence_mode(vm))
++	if (xe_vm_in_preempt_fence_mode(vm)) {
++		mutex_lock(&xe->rebind_resume_lock);
++		list_del_init(&vm->preempt.pm_activate_link);
++		mutex_unlock(&xe->rebind_resume_lock);
+ 		flush_work(&vm->preempt.rebind_work);
++	}
+ 	if (xe_vm_in_fault_mode(vm))
+ 		xe_svm_close(vm);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index e54ca835b58282..e493a17e0f19d9 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -268,6 +268,8 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
+ 				       struct xe_exec_queue *q, u64 addr,
+ 				       enum xe_cache_level cache_lvl);
+ 
++void xe_vm_resume_rebind_worker(struct xe_vm *vm);
++
+ /**
+  * xe_vm_resv() - Return's the vm's reservation object
+  * @vm: The vm
+diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
+index 1979e9bdbdf36b..4ebd3dc53f3c1a 100644
+--- a/drivers/gpu/drm/xe/xe_vm_types.h
++++ b/drivers/gpu/drm/xe/xe_vm_types.h
+@@ -286,6 +286,11 @@ struct xe_vm {
+ 		 * BOs
+ 		 */
+ 		struct work_struct rebind_work;
++		/**
++		 * @preempt.pm_activate_link: Link to list of rebind workers to be
++		 * kicked on resume.
++		 */
++		struct list_head pm_activate_link;
+ 	} preempt;
+ 
+ 	/** @um: unified memory state */
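
[Editorial sketch — not part of the patch.] Taken together, the xe hunks above
build a suspend gate out of a struct completion: probe leaves it completed
("open"), PM_SUSPEND_PREPARE reinit_completion()s it shut,
try_wait_for_completion() gives workers a non-blocking peek, and rebind
workers that find it shut park themselves on rebind_resume_list until resume
requeues them. A sequential toy model of those states (no real concurrency;
all names are stand-ins):

#include <stdbool.h>
#include <stdio.h>

static bool pm_gate_open;	/* models the xe->pm_block completion */
static int parked[8], nparked;

static void rebind_worker(int id)
{
	if (!pm_gate_open) {		/* try_wait_for_completion() failed */
		parked[nparked++] = id;	/* vm_suspend_rebind_worker() */
		return;
	}
	printf("worker %d rebinding\n", id);
}

static void suspend_prepare(void)	/* reinit_completion() */
{
	pm_gate_open = false;
}

static void post_resume(void)		/* complete_all() + wake the parked list */
{
	pm_gate_open = true;
	for (int i = 0; i < nparked; i++)
		rebind_worker(parked[i]);
	nparked = 0;
}

int main(void)
{
	pm_gate_open = true;	/* init: complete_all() right after init_completion() */
	rebind_worker(1);	/* runs */
	suspend_prepare();
	rebind_worker(2);	/* parks */
	post_resume();		/* worker 2 runs now */
	return 0;
}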
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index a7f89946dad418..e94ac746a741af 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1052,7 +1052,7 @@ static const struct pci_device_id i801_ids[] = {
+ 	{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_P_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_SOC_S_SMBUS,	FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS,	FEATURES_ICH5 | FEATURE_TCO_CNL) },
+-	{ PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
++	{ PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS,		FEATURES_ICH5)			 },
+ 	{ PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ 	{ PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS,		FEATURES_ICH5 | FEATURE_TCO_CNL) },
+diff --git a/drivers/i2c/busses/i2c-rtl9300.c b/drivers/i2c/busses/i2c-rtl9300.c
+index cfafe089102aa2..9e1f71fed0feac 100644
+--- a/drivers/i2c/busses/i2c-rtl9300.c
++++ b/drivers/i2c/busses/i2c-rtl9300.c
+@@ -99,6 +99,9 @@ static int rtl9300_i2c_config_xfer(struct rtl9300_i2c *i2c, struct rtl9300_i2c_c
+ {
+ 	u32 val, mask;
+ 
++	if (len < 1 || len > 16)
++		return -EINVAL;
++
+ 	val = chan->bus_freq << RTL9300_I2C_MST_CTRL2_SCL_FREQ_OFS;
+ 	mask = RTL9300_I2C_MST_CTRL2_SCL_FREQ_MASK;
+ 
+@@ -222,15 +225,6 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+ 	}
+ 
+ 	switch (size) {
+-	case I2C_SMBUS_QUICK:
+-		ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
+-		if (ret)
+-			goto out_unlock;
+-		ret = rtl9300_i2c_reg_addr_set(i2c, 0, 0);
+-		if (ret)
+-			goto out_unlock;
+-		break;
+-
+ 	case I2C_SMBUS_BYTE:
+ 		if (read_write == I2C_SMBUS_WRITE) {
+ 			ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
+@@ -312,9 +306,9 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+ 
+ static u32 rtl9300_i2c_func(struct i2c_adapter *a)
+ {
+-	return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+-	       I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
+-	       I2C_FUNC_SMBUS_BLOCK_DATA;
++	return I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA |
++	       I2C_FUNC_SMBUS_WORD_DATA | I2C_FUNC_SMBUS_BLOCK_DATA |
++	       I2C_FUNC_SMBUS_I2C_BLOCK;
+ }
+ 
+ static const struct i2c_algorithm rtl9300_i2c_algo = {
+@@ -323,7 +317,7 @@ static const struct i2c_algorithm rtl9300_i2c_algo = {
+ };
+ 
+ static struct i2c_adapter_quirks rtl9300_i2c_quirks = {
+-	.flags		= I2C_AQ_NO_CLK_STRETCH,
++	.flags		= I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN,
+ 	.max_read_len	= 16,
+ 	.max_write_len	= 16,
+ };
+@@ -353,7 +347,7 @@ static int rtl9300_i2c_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, i2c);
+ 
+-	if (device_get_child_node_count(dev) >= RTL9300_I2C_MUX_NCHAN)
++	if (device_get_child_node_count(dev) > RTL9300_I2C_MUX_NCHAN)
+ 		return dev_err_probe(dev, -EINVAL, "Too many channels\n");
+ 
+ 	device_for_each_child_node(dev, child) {
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 1d8c579b54331e..4ee9e403d38501 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -422,6 +422,7 @@ static const struct xpad_device {
+ 	{ 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE },
+ 	{ 0x366c, 0x0005, "ByoWave Proteus Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE, FLAG_DELAY_INIT },
+ 	{ 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
++	{ 0x37d7, 0x2501, "Flydigi Apex 5", 0, XTYPE_XBOX360 },
+ 	{ 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 },
+ 	{ 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+ 	{ 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
+@@ -578,6 +579,7 @@ static const struct usb_device_id xpad_table[] = {
+ 	XPAD_XBOX360_VENDOR(0x3537),		/* GameSir Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x3537),		/* GameSir Controllers */
+ 	XPAD_XBOXONE_VENDOR(0x366c),		/* ByoWave controllers */
++	XPAD_XBOX360_VENDOR(0x37d7),		/* Flydigi Controllers */
+ 	XPAD_XBOX360_VENDOR(0x413d),		/* Black Shark Green Ghost Controller */
+ 	{ }
+ };
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 6fac31c0d99f2b..ff23219a582ab8 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -2427,6 +2427,9 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222,
+ 		if (error)
+ 			return error;
+ 
++		if (!iqs7222->kp_type[chan_index][i])
++			continue;
++
+ 		if (!dev_desc->event_offset)
+ 			continue;
+ 
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 6ed9fc34948cbe..1caa6c4ca435c7 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1155,6 +1155,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+ 					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++	},
+ 	/*
+ 	 * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+ 	 * after suspend fixable with the forcenorestore quirk.
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index c8b79de84d3fb9..071f78e67fcba0 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -370,7 +370,7 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
+ 	struct intel_iommu *iommu = tag->iommu;
+ 	u64 type = DMA_TLB_PSI_FLUSH;
+ 
+-	if (domain->use_first_level) {
++	if (intel_domain_is_fs_paging(domain)) {
+ 		qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid, addr,
+ 				    pages, ih, domain->qi_batch);
+ 		return;
+@@ -529,7 +529,8 @@ void cache_tag_flush_range_np(struct dmar_domain *domain, unsigned long start,
+ 			qi_batch_flush_descs(iommu, domain->qi_batch);
+ 		iommu = tag->iommu;
+ 
+-		if (!cap_caching_mode(iommu->cap) || domain->use_first_level) {
++		if (!cap_caching_mode(iommu->cap) ||
++		    intel_domain_is_fs_paging(domain)) {
+ 			iommu_flush_write_buffer(iommu);
+ 			continue;
+ 		}
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c239e280e43d91..34dd175a331dc7 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -57,6 +57,8 @@
+ static void __init check_tylersburg_isoch(void);
+ static int rwbf_quirk;
+ 
++#define rwbf_required(iommu)	(rwbf_quirk || cap_rwbf((iommu)->cap))
++
+ /*
+  * set to 1 to panic kernel if can't successfully enable VT-d
+  * (used when kernel is launched w/ TXT)
+@@ -1479,6 +1481,9 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+ 	struct context_entry *context;
+ 	int ret;
+ 
++	if (WARN_ON(!intel_domain_is_ss_paging(domain)))
++		return -EINVAL;
++
+ 	pr_debug("Set context mapping for %02x:%02x.%d\n",
+ 		bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+ 
+@@ -1795,18 +1800,6 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 					  (pgd_t *)pgd, flags, old);
+ }
+ 
+-static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
+-				       struct intel_iommu *iommu)
+-{
+-	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
+-		return true;
+-
+-	if (rwbf_quirk || cap_rwbf(iommu->cap))
+-		return true;
+-
+-	return false;
+-}
+-
+ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 				     struct device *dev)
+ {
+@@ -1830,12 +1823,14 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 
+ 	if (!sm_supported(iommu))
+ 		ret = domain_context_mapping(domain, dev);
+-	else if (domain->use_first_level)
++	else if (intel_domain_is_fs_paging(domain))
+ 		ret = domain_setup_first_level(iommu, domain, dev,
+ 					       IOMMU_NO_PASID, NULL);
+-	else
++	else if (intel_domain_is_ss_paging(domain))
+ 		ret = domain_setup_second_level(iommu, domain, dev,
+ 						IOMMU_NO_PASID, NULL);
++	else if (WARN_ON(true))
++		ret = -EINVAL;
+ 
+ 	if (ret)
+ 		goto out_block_translation;
+@@ -1844,8 +1839,6 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ 	if (ret)
+ 		goto out_block_translation;
+ 
+-	domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain, iommu);
+-
+ 	return 0;
+ 
+ out_block_translation:
+@@ -3299,10 +3292,14 @@ static struct dmar_domain *paging_domain_alloc(struct device *dev, bool first_st
+ 	spin_lock_init(&domain->lock);
+ 	spin_lock_init(&domain->cache_lock);
+ 	xa_init(&domain->iommu_array);
++	INIT_LIST_HEAD(&domain->s1_domains);
++	spin_lock_init(&domain->s1_lock);
+ 
+ 	domain->nid = dev_to_node(dev);
+ 	domain->use_first_level = first_stage;
+ 
++	domain->domain.type = IOMMU_DOMAIN_UNMANAGED;
++
+ 	/* calculate the address width */
+ 	addr_width = agaw_to_width(iommu->agaw);
+ 	if (addr_width > cap_mgaw(iommu->cap))
+@@ -3344,62 +3341,92 @@ static struct dmar_domain *paging_domain_alloc(struct device *dev, bool first_st
+ }
+ 
+ static struct iommu_domain *
+-intel_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
+-				      const struct iommu_user_data *user_data)
++intel_iommu_domain_alloc_first_stage(struct device *dev,
++				     struct intel_iommu *iommu, u32 flags)
++{
++	struct dmar_domain *dmar_domain;
++
++	if (flags & ~IOMMU_HWPT_ALLOC_PASID)
++		return ERR_PTR(-EOPNOTSUPP);
++
++	/* Only SL is available in legacy mode */
++	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++		return ERR_PTR(-EOPNOTSUPP);
++
++	dmar_domain = paging_domain_alloc(dev, true);
++	if (IS_ERR(dmar_domain))
++		return ERR_CAST(dmar_domain);
++
++	dmar_domain->domain.ops = &intel_fs_paging_domain_ops;
++	/*
++	 * iotlb sync for map is only needed for legacy implementations that
++	 * explicitly require flushing internal write buffers to ensure memory
++	 * coherence.
++	 */
++	if (rwbf_required(iommu))
++		dmar_domain->iotlb_sync_map = true;
++
++	return &dmar_domain->domain;
++}
++
++static struct iommu_domain *
++intel_iommu_domain_alloc_second_stage(struct device *dev,
++				      struct intel_iommu *iommu, u32 flags)
+ {
+-	struct device_domain_info *info = dev_iommu_priv_get(dev);
+-	bool dirty_tracking = flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING;
+-	bool nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
+-	struct intel_iommu *iommu = info->iommu;
+ 	struct dmar_domain *dmar_domain;
+-	struct iommu_domain *domain;
+-	bool first_stage;
+ 
+ 	if (flags &
+ 	    (~(IOMMU_HWPT_ALLOC_NEST_PARENT | IOMMU_HWPT_ALLOC_DIRTY_TRACKING |
+ 	       IOMMU_HWPT_ALLOC_PASID)))
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (nested_parent && !nested_supported(iommu))
++
++	if (((flags & IOMMU_HWPT_ALLOC_NEST_PARENT) &&
++	     !nested_supported(iommu)) ||
++	    ((flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING) &&
++	     !ssads_supported(iommu)))
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (user_data || (dirty_tracking && !ssads_supported(iommu)))
++
++	/* Legacy mode always supports second stage */
++	if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
+ 		return ERR_PTR(-EOPNOTSUPP);
+ 
++	dmar_domain = paging_domain_alloc(dev, false);
++	if (IS_ERR(dmar_domain))
++		return ERR_CAST(dmar_domain);
++
++	dmar_domain->domain.ops = &intel_ss_paging_domain_ops;
++	dmar_domain->nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
++
++	if (flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING)
++		dmar_domain->domain.dirty_ops = &intel_dirty_ops;
++
+ 	/*
+-	 * Always allocate the guest compatible page table unless
+-	 * IOMMU_HWPT_ALLOC_NEST_PARENT or IOMMU_HWPT_ALLOC_DIRTY_TRACKING
+-	 * is specified.
++	 * Besides the internal write buffer flush, the caching mode used for
++	 * legacy nested translation (which utilizes shadowing page tables)
++	 * also requires iotlb sync on map.
+ 	 */
+-	if (nested_parent || dirty_tracking) {
+-		if (!sm_supported(iommu) || !ecap_slts(iommu->ecap))
+-			return ERR_PTR(-EOPNOTSUPP);
+-		first_stage = false;
+-	} else {
+-		first_stage = first_level_by_default(iommu);
+-	}
++	if (rwbf_required(iommu) || cap_caching_mode(iommu->cap))
++		dmar_domain->iotlb_sync_map = true;
+ 
+-	dmar_domain = paging_domain_alloc(dev, first_stage);
+-	if (IS_ERR(dmar_domain))
+-		return ERR_CAST(dmar_domain);
+-	domain = &dmar_domain->domain;
+-	domain->type = IOMMU_DOMAIN_UNMANAGED;
+-	domain->owner = &intel_iommu_ops;
+-	domain->ops = intel_iommu_ops.default_domain_ops;
+-
+-	if (nested_parent) {
+-		dmar_domain->nested_parent = true;
+-		INIT_LIST_HEAD(&dmar_domain->s1_domains);
+-		spin_lock_init(&dmar_domain->s1_lock);
+-	}
++	return &dmar_domain->domain;
++}
+ 
+-	if (dirty_tracking) {
+-		if (dmar_domain->use_first_level) {
+-			iommu_domain_free(domain);
+-			return ERR_PTR(-EOPNOTSUPP);
+-		}
+-		domain->dirty_ops = &intel_dirty_ops;
+-	}
++static struct iommu_domain *
++intel_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
++				      const struct iommu_user_data *user_data)
++{
++	struct device_domain_info *info = dev_iommu_priv_get(dev);
++	struct intel_iommu *iommu = info->iommu;
++	struct iommu_domain *domain;
+ 
+-	return domain;
++	if (user_data)
++		return ERR_PTR(-EOPNOTSUPP);
++
++	/* Prefer first stage if possible by default. */
++	domain = intel_iommu_domain_alloc_first_stage(dev, iommu, flags);
++	if (domain != ERR_PTR(-EOPNOTSUPP))
++		return domain;
++	return intel_iommu_domain_alloc_second_stage(dev, iommu, flags);
+ }
+ 
+ static void intel_iommu_domain_free(struct iommu_domain *domain)
+@@ -3411,33 +3438,86 @@ static void intel_iommu_domain_free(struct iommu_domain *domain)
+ 	domain_exit(dmar_domain);
+ }
+ 
++static int paging_domain_compatible_first_stage(struct dmar_domain *dmar_domain,
++						struct intel_iommu *iommu)
++{
++	if (WARN_ON(dmar_domain->domain.dirty_ops ||
++		    dmar_domain->nested_parent))
++		return -EINVAL;
++
++	/* Only SL is available in legacy mode */
++	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++		return -EINVAL;
++
++	/* Same page size support */
++	if (!cap_fl1gp_support(iommu->cap) &&
++	    (dmar_domain->domain.pgsize_bitmap & SZ_1G))
++		return -EINVAL;
++
++	/* iotlb sync on map requirement */
++	if ((rwbf_required(iommu)) && !dmar_domain->iotlb_sync_map)
++		return -EINVAL;
++
++	return 0;
++}
++
++static int
++paging_domain_compatible_second_stage(struct dmar_domain *dmar_domain,
++				      struct intel_iommu *iommu)
++{
++	unsigned int sslps = cap_super_page_val(iommu->cap);
++
++	if (dmar_domain->domain.dirty_ops && !ssads_supported(iommu))
++		return -EINVAL;
++	if (dmar_domain->nested_parent && !nested_supported(iommu))
++		return -EINVAL;
++
++	/* Legacy mode always supports second stage */
++	if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
++		return -EINVAL;
++
++	/* Same page size support */
++	if (!(sslps & BIT(0)) && (dmar_domain->domain.pgsize_bitmap & SZ_2M))
++		return -EINVAL;
++	if (!(sslps & BIT(1)) && (dmar_domain->domain.pgsize_bitmap & SZ_1G))
++		return -EINVAL;
++
++	/* iotlb sync on map requirement */
++	if ((rwbf_required(iommu) || cap_caching_mode(iommu->cap)) &&
++	    !dmar_domain->iotlb_sync_map)
++		return -EINVAL;
++
++	return 0;
++}
++
+ int paging_domain_compatible(struct iommu_domain *domain, struct device *dev)
+ {
+ 	struct device_domain_info *info = dev_iommu_priv_get(dev);
+ 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+ 	struct intel_iommu *iommu = info->iommu;
++	int ret = -EINVAL;
+ 	int addr_width;
+ 
+-	if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING)))
+-		return -EPERM;
++	if (intel_domain_is_fs_paging(dmar_domain))
++		ret = paging_domain_compatible_first_stage(dmar_domain, iommu);
++	else if (intel_domain_is_ss_paging(dmar_domain))
++		ret = paging_domain_compatible_second_stage(dmar_domain, iommu);
++	else if (WARN_ON(true))
++		ret = -EINVAL;
++	if (ret)
++		return ret;
+ 
++	/*
++	 * FIXME this is locked wrong, it needs to be under the
++	 * dmar_domain->lock
++	 */
+ 	if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
+ 		return -EINVAL;
+ 
+-	if (domain->dirty_ops && !ssads_supported(iommu))
+-		return -EINVAL;
+-
+ 	if (dmar_domain->iommu_coherency !=
+ 			iommu_paging_structure_coherency(iommu))
+ 		return -EINVAL;
+ 
+-	if (dmar_domain->iommu_superpage !=
+-			iommu_superpage_capability(iommu, dmar_domain->use_first_level))
+-		return -EINVAL;
+-
+-	if (dmar_domain->use_first_level &&
+-	    (!sm_supported(iommu) || !ecap_flts(iommu->ecap)))
+-		return -EINVAL;
+ 
+ 	/* check if this iommu agaw is sufficient for max mapped address */
+ 	addr_width = agaw_to_width(iommu->agaw);
+@@ -4094,12 +4174,15 @@ static int intel_iommu_set_dev_pasid(struct iommu_domain *domain,
+ 	if (ret)
+ 		goto out_remove_dev_pasid;
+ 
+-	if (dmar_domain->use_first_level)
++	if (intel_domain_is_fs_paging(dmar_domain))
+ 		ret = domain_setup_first_level(iommu, dmar_domain,
+ 					       dev, pasid, old);
+-	else
++	else if (intel_domain_is_ss_paging(dmar_domain))
+ 		ret = domain_setup_second_level(iommu, dmar_domain,
+ 						dev, pasid, old);
++	else if (WARN_ON(true))
++		ret = -EINVAL;
++
+ 	if (ret)
+ 		goto out_unwind_iopf;
+ 
+@@ -4374,6 +4457,32 @@ static struct iommu_domain identity_domain = {
+ 	},
+ };
+ 
++const struct iommu_domain_ops intel_fs_paging_domain_ops = {
++	.attach_dev = intel_iommu_attach_device,
++	.set_dev_pasid = intel_iommu_set_dev_pasid,
++	.map_pages = intel_iommu_map_pages,
++	.unmap_pages = intel_iommu_unmap_pages,
++	.iotlb_sync_map = intel_iommu_iotlb_sync_map,
++	.flush_iotlb_all = intel_flush_iotlb_all,
++	.iotlb_sync = intel_iommu_tlb_sync,
++	.iova_to_phys = intel_iommu_iova_to_phys,
++	.free = intel_iommu_domain_free,
++	.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
++};
++
++const struct iommu_domain_ops intel_ss_paging_domain_ops = {
++	.attach_dev = intel_iommu_attach_device,
++	.set_dev_pasid = intel_iommu_set_dev_pasid,
++	.map_pages = intel_iommu_map_pages,
++	.unmap_pages = intel_iommu_unmap_pages,
++	.iotlb_sync_map = intel_iommu_iotlb_sync_map,
++	.flush_iotlb_all = intel_flush_iotlb_all,
++	.iotlb_sync = intel_iommu_tlb_sync,
++	.iova_to_phys = intel_iommu_iova_to_phys,
++	.free = intel_iommu_domain_free,
++	.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
++};
++
+ const struct iommu_ops intel_iommu_ops = {
+ 	.blocked_domain		= &blocking_domain,
+ 	.release_domain		= &blocking_domain,
+@@ -4391,18 +4500,6 @@ const struct iommu_ops intel_iommu_ops = {
+ 	.is_attach_deferred	= intel_iommu_is_attach_deferred,
+ 	.def_domain_type	= device_def_domain_type,
+ 	.page_response		= intel_iommu_page_response,
+-	.default_domain_ops = &(const struct iommu_domain_ops) {
+-		.attach_dev		= intel_iommu_attach_device,
+-		.set_dev_pasid		= intel_iommu_set_dev_pasid,
+-		.map_pages		= intel_iommu_map_pages,
+-		.unmap_pages		= intel_iommu_unmap_pages,
+-		.iotlb_sync_map		= intel_iommu_iotlb_sync_map,
+-		.flush_iotlb_all        = intel_flush_iotlb_all,
+-		.iotlb_sync		= intel_iommu_tlb_sync,
+-		.iova_to_phys		= intel_iommu_iova_to_phys,
+-		.free			= intel_iommu_domain_free,
+-		.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
+-	}
+ };
+ 
+ static void quirk_iommu_igfx(struct pci_dev *dev)
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index 61f42802fe9e95..c699ed8810f23c 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -1381,6 +1381,18 @@ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
+ 					 u8 devfn, int alloc);
+ 
+ extern const struct iommu_ops intel_iommu_ops;
++extern const struct iommu_domain_ops intel_fs_paging_domain_ops;
++extern const struct iommu_domain_ops intel_ss_paging_domain_ops;
++
++static inline bool intel_domain_is_fs_paging(struct dmar_domain *domain)
++{
++	return domain->domain.ops == &intel_fs_paging_domain_ops;
++}
++
++static inline bool intel_domain_is_ss_paging(struct dmar_domain *domain)
++{
++	return domain->domain.ops == &intel_ss_paging_domain_ops;
++}
+ 
+ #ifdef CONFIG_INTEL_IOMMU
+ extern int intel_iommu_sm;
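
[Editorial sketch — not part of the patch.] A side note on the shape of the
intel-iommu refactor above: instead of one default_domain_ops plus a
use_first_level flag, each translation stage now gets its own ops table, and
the stage is recovered by comparing the domain's ops pointer. A compact
standalone model of that discrimination trick (userspace stand-ins only):

#include <stdbool.h>
#include <stdio.h>

struct domain_ops { const char *name; };

static const struct domain_ops fs_ops = { "first-stage" };
static const struct domain_ops ss_ops = { "second-stage" };

struct domain { const struct domain_ops *ops; };

static bool is_fs(const struct domain *d) { return d->ops == &fs_ops; }
static bool is_ss(const struct domain *d) { return d->ops == &ss_ops; }

int main(void)
{
	struct domain d = { .ops = &ss_ops };

	printf("fs=%d ss=%d (%s)\n", is_fs(&d), is_ss(&d), d.ops->name);
	return 0;
}

The pointer comparison costs nothing at runtime and removes a flag that could
drift out of sync with the ops actually installed.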
+diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
+index fc312f649f9ef0..1b6ad9c900a5ad 100644
+--- a/drivers/iommu/intel/nested.c
++++ b/drivers/iommu/intel/nested.c
+@@ -216,8 +216,7 @@ intel_iommu_domain_alloc_nested(struct device *dev, struct iommu_domain *parent,
+ 	/* Must be nested domain */
+ 	if (user_data->type != IOMMU_HWPT_DATA_VTD_S1)
+ 		return ERR_PTR(-EOPNOTSUPP);
+-	if (parent->ops != intel_iommu_ops.default_domain_ops ||
+-	    !s2_domain->nested_parent)
++	if (!intel_domain_is_ss_paging(s2_domain) || !s2_domain->nested_parent)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	ret = iommu_copy_struct_from_user(&vtd, user_data,
+@@ -229,7 +228,6 @@ intel_iommu_domain_alloc_nested(struct device *dev, struct iommu_domain *parent,
+ 	if (!domain)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	domain->use_first_level = true;
+ 	domain->s2_domain = s2_domain;
+ 	domain->s1_cfg = vtd;
+ 	domain->domain.ops = &intel_nested_domain_ops;
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index f3da596410b5e5..3994521f6ea488 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -214,7 +214,6 @@ struct iommu_domain *intel_svm_domain_alloc(struct device *dev,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	domain->domain.ops = &intel_svm_domain_ops;
+-	domain->use_first_level = true;
+ 	INIT_LIST_HEAD(&domain->dev_pasids);
+ 	INIT_LIST_HEAD(&domain->cache_tags);
+ 	spin_lock_init(&domain->cache_lock);
+diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
+index 54833717f8a70f..667bde3c651ff2 100644
+--- a/drivers/irqchip/irq-mvebu-gicp.c
++++ b/drivers/irqchip/irq-mvebu-gicp.c
+@@ -238,7 +238,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	base = ioremap(gicp->res->start, resource_size(gicp->res));
+-	if (IS_ERR(base)) {
++	if (!base) {
+ 		dev_err(&pdev->dev, "ioremap() failed. Unable to clear pending interrupts.\n");
+ 	} else {
+ 		for (i = 0; i < 64; i++)
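
[Editorial sketch — not part of the patch.] The one-line gicp fix above
matters because ioremap() reports failure with a NULL pointer, not an
ERR_PTR-encoded errno, so IS_ERR() on its result is always false and the old
error branch was dead code. A standalone model of the two conventions (IS_ERR
re-derived here purely for illustration):

#include <stdio.h>

#define MAX_ERRNO 4095
#define IS_ERR(p) ((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)

int main(void)
{
	void *null_style = NULL;		/* ioremap-style failure */
	void *err_style = (void *)-12L;		/* ERR_PTR(-ENOMEM)-style failure */

	printf("IS_ERR(NULL) = %d\n", IS_ERR(null_style));	/* 0: the bug */
	printf("IS_ERR(-12)  = %d\n", IS_ERR(err_style));	/* 1 */
	printf("!NULL check  = %d\n", !null_style);		/* 1: the fix */
	return 0;
}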
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3f355bb85797f8..0f41573fa9f5ec 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1406,7 +1406,7 @@ static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, stru
+ 		else {
+ 			if (sb->events_hi == sb->cp_events_hi &&
+ 				sb->events_lo == sb->cp_events_lo) {
+-				mddev->resync_offset = sb->resync_offset;
++				mddev->resync_offset = sb->recovery_cp;
+ 			} else
+ 				mddev->resync_offset = 0;
+ 		}
+@@ -1534,13 +1534,13 @@ static void super_90_sync(struct mddev *mddev, struct md_rdev *rdev)
+ 	mddev->minor_version = sb->minor_version;
+ 	if (mddev->in_sync)
+ 	{
+-		sb->resync_offset = mddev->resync_offset;
++		sb->recovery_cp = mddev->resync_offset;
+ 		sb->cp_events_hi = (mddev->events>>32);
+ 		sb->cp_events_lo = (u32)mddev->events;
+ 		if (mddev->resync_offset == MaxSector)
+ 			sb->state = (1<< MD_SB_CLEAN);
+ 	} else
+-		sb->resync_offset = 0;
++		sb->recovery_cp = 0;
+ 
+ 	sb->layout = mddev->layout;
+ 	sb->chunk_size = mddev->chunk_sectors << 9;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 84ab4a83cbd686..db94d14a3807f5 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -1377,14 +1377,24 @@ static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Read setup timing depends on the operation done on the NAND:
++	 *
++	 * NRD_SETUP = max(tAR, tCLR)
++	 */
++	timeps = max(conf->timings.sdr.tAR_min, conf->timings.sdr.tCLR_min);
++	ncycles = DIV_ROUND_UP(timeps, mckperiodps);
++	totalcycles += ncycles;
++	ret = atmel_smc_cs_conf_set_setup(smcconf, ATMEL_SMC_NRD_SHIFT, ncycles);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * The read cycle timing is directly matching tRC, but is also
+ 	 * dependent on the setup and hold timings we calculated earlier,
+ 	 * which gives:
+ 	 *
+-	 * NRD_CYCLE = max(tRC, NRD_PULSE + NRD_HOLD)
+-	 *
+-	 * NRD_SETUP is always 0.
++	 * NRD_CYCLE = max(tRC, NRD_SETUP + NRD_PULSE + NRD_HOLD)
+ 	 */
+ 	ncycles = DIV_ROUND_UP(conf->timings.sdr.tRC_min, mckperiodps);
+ 	ncycles = max(totalcycles, ncycles);
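
[Editorial sketch — not part of the patch.] A worked instance of the timing
math introduced above: NRD_SETUP is the larger of tAR and tCLR, converted to
controller clock cycles with a round-up division so the constraint can never
be undershot. The 132 MHz master clock below is an assumed figure for
illustration:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	unsigned long tAR_min = 10000, tCLR_min = 10000;	/* ps */
	unsigned long mckperiodps = 7576;	/* ~132 MHz MCK, assumed */
	unsigned long timeps = MAX(tAR_min, tCLR_min);
	unsigned long ncycles = DIV_ROUND_UP(timeps, mckperiodps);

	printf("NRD_SETUP = %lu cycle(s) for %lu ps\n", ncycles, timeps);
	return 0;
}

With these numbers the setup phase costs 2 cycles, which then feeds into
NRD_CYCLE = max(tRC, NRD_SETUP + NRD_PULSE + NRD_HOLD) as the hunk's updated
comment states.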
+diff --git a/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c b/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
+index c23b537948d5e6..1a285cd8fad62a 100644
+--- a/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
++++ b/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
+@@ -935,10 +935,10 @@ static void ma35_chips_cleanup(struct ma35_nand_info *nand)
+ 
+ static int ma35_nand_chips_init(struct device *dev, struct ma35_nand_info *nand)
+ {
+-	struct device_node *np = dev->of_node, *nand_np;
++	struct device_node *np = dev->of_node;
+ 	int ret;
+ 
+-	for_each_child_of_node(np, nand_np) {
++	for_each_child_of_node_scoped(np, nand_np) {
+ 		ret = ma35_nand_chip_init(dev, nand, nand_np);
+ 		if (ret) {
+ 			ma35_chips_cleanup(nand);
+diff --git a/drivers/mtd/nand/raw/stm32_fmc2_nand.c b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+index a960403081f110..d957327fb4fa04 100644
+--- a/drivers/mtd/nand/raw/stm32_fmc2_nand.c
++++ b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+@@ -272,6 +272,7 @@ struct stm32_fmc2_nfc {
+ 	struct sg_table dma_data_sg;
+ 	struct sg_table dma_ecc_sg;
+ 	u8 *ecc_buf;
++	dma_addr_t dma_ecc_addr;
+ 	int dma_ecc_len;
+ 	u32 tx_dma_max_burst;
+ 	u32 rx_dma_max_burst;
+@@ -902,17 +903,10 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 
+ 	if (!write_data && !raw) {
+ 		/* Configure DMA ECC status */
+-		p = nfc->ecc_buf;
+ 		for_each_sg(nfc->dma_ecc_sg.sgl, sg, eccsteps, s) {
+-			sg_set_buf(sg, p, nfc->dma_ecc_len);
+-			p += nfc->dma_ecc_len;
+-		}
+-
+-		ret = dma_map_sg(nfc->dev, nfc->dma_ecc_sg.sgl,
+-				 eccsteps, dma_data_dir);
+-		if (!ret) {
+-			ret = -EIO;
+-			goto err_unmap_data;
++			sg_dma_address(sg) = nfc->dma_ecc_addr +
++					     s * nfc->dma_ecc_len;
++			sg_dma_len(sg) = nfc->dma_ecc_len;
+ 		}
+ 
+ 		desc_ecc = dmaengine_prep_slave_sg(nfc->dma_ecc_ch,
+@@ -921,7 +915,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 						   DMA_PREP_INTERRUPT);
+ 		if (!desc_ecc) {
+ 			ret = -ENOMEM;
+-			goto err_unmap_ecc;
++			goto err_unmap_data;
+ 		}
+ 
+ 		reinit_completion(&nfc->dma_ecc_complete);
+@@ -929,7 +923,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 		desc_ecc->callback_param = &nfc->dma_ecc_complete;
+ 		ret = dma_submit_error(dmaengine_submit(desc_ecc));
+ 		if (ret)
+-			goto err_unmap_ecc;
++			goto err_unmap_data;
+ 
+ 		dma_async_issue_pending(nfc->dma_ecc_ch);
+ 	}
+@@ -949,7 +943,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 		if (!write_data && !raw)
+ 			dmaengine_terminate_all(nfc->dma_ecc_ch);
+ 		ret = -ETIMEDOUT;
+-		goto err_unmap_ecc;
++		goto err_unmap_data;
+ 	}
+ 
+ 	/* Wait DMA data transfer completion */
+@@ -969,11 +963,6 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ 		}
+ 	}
+ 
+-err_unmap_ecc:
+-	if (!write_data && !raw)
+-		dma_unmap_sg(nfc->dev, nfc->dma_ecc_sg.sgl,
+-			     eccsteps, dma_data_dir);
+-
+ err_unmap_data:
+ 	dma_unmap_sg(nfc->dev, nfc->dma_data_sg.sgl, eccsteps, dma_data_dir);
+ 
+@@ -996,9 +985,21 @@ static int stm32_fmc2_nfc_seq_write(struct nand_chip *chip, const u8 *buf,
+ 
+ 	/* Write oob */
+ 	if (oob_required) {
+-		ret = nand_change_write_column_op(chip, mtd->writesize,
+-						  chip->oob_poi, mtd->oobsize,
+-						  false);
++		unsigned int offset_in_page = mtd->writesize;
++		const void *buf = chip->oob_poi;
++		unsigned int len = mtd->oobsize;
++
++		if (!raw) {
++			struct mtd_oob_region oob_free;
++
++			mtd_ooblayout_free(mtd, 0, &oob_free);
++			offset_in_page += oob_free.offset;
++			buf += oob_free.offset;
++			len = oob_free.length;
++		}
++
++		ret = nand_change_write_column_op(chip, offset_in_page,
++						  buf, len, false);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -1610,7 +1611,8 @@ static int stm32_fmc2_nfc_dma_setup(struct stm32_fmc2_nfc *nfc)
+ 		return ret;
+ 
+ 	/* Allocate a buffer to store ECC status registers */
+-	nfc->ecc_buf = devm_kzalloc(nfc->dev, FMC2_MAX_ECC_BUF_LEN, GFP_KERNEL);
++	nfc->ecc_buf = dmam_alloc_coherent(nfc->dev, FMC2_MAX_ECC_BUF_LEN,
++					   &nfc->dma_ecc_addr, GFP_KERNEL);
+ 	if (!nfc->ecc_buf)
+ 		return -ENOMEM;
+ 
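
[Editorial sketch — not part of the patch.] The stm32 hunks above swap a
kzalloc'd ECC status buffer (mapped and unmapped around every transfer) for
one dmam_alloc_coherent() region whose bus address is simply sliced per ECC
step, which is why the err_unmap_ecc unwind path disappears. A model of that
slicing (plain structs stand in for scatterlist entries):

#include <stdint.h>
#include <stdio.h>

struct sg { uint64_t dma_address; unsigned int dma_len; };

int main(void)
{
	uint64_t dma_ecc_addr = 0x80000000ull;	/* illustrative bus address */
	unsigned int dma_ecc_len = 8, eccsteps = 4;
	struct sg sgl[4];

	for (unsigned int s = 0; s < eccsteps; s++) {
		sgl[s].dma_address = dma_ecc_addr + (uint64_t)s * dma_ecc_len;
		sgl[s].dma_len = dma_ecc_len;
		printf("step %u -> 0x%llx len %u\n", s,
		       (unsigned long long)sgl[s].dma_address, sgl[s].dma_len);
	}
	return 0;
}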
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index b90f15c986a317..aa6fb862451aa4 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -20,7 +20,7 @@
+ #include <linux/spi/spi.h>
+ #include <linux/spi/spi-mem.h>
+ 
+-static int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
++int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
+ {
+ 	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(reg,
+ 						      spinand->scratchbuf);
+@@ -1253,8 +1253,19 @@ static int spinand_id_detect(struct spinand_device *spinand)
+ 
+ static int spinand_manufacturer_init(struct spinand_device *spinand)
+ {
+-	if (spinand->manufacturer->ops->init)
+-		return spinand->manufacturer->ops->init(spinand);
++	int ret;
++
++	if (spinand->manufacturer->ops->init) {
++		ret = spinand->manufacturer->ops->init(spinand);
++		if (ret)
++			return ret;
++	}
++
++	if (spinand->configure_chip) {
++		ret = spinand->configure_chip(spinand);
++		if (ret)
++			return ret;
++	}
+ 
+ 	return 0;
+ }
+@@ -1349,6 +1360,7 @@ int spinand_match_and_init(struct spinand_device *spinand,
+ 		spinand->flags = table[i].flags;
+ 		spinand->id.len = 1 + table[i].devid.len;
+ 		spinand->select_target = table[i].select_target;
++		spinand->configure_chip = table[i].configure_chip;
+ 		spinand->set_cont_read = table[i].set_cont_read;
+ 		spinand->fact_otp = &table[i].fact_otp;
+ 		spinand->user_otp = &table[i].user_otp;
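
[Editorial sketch — not part of the patch.] The spinand core change above
threads an optional per-chip hook, configure_chip(), from the matched
flash-table entry into manufacturer init; the Winbond high-speed setup further
down is its first user. A minimal model of the callback plumbing
(illustrative types and names):

#include <stdio.h>

struct chip;
typedef int (*cfg_fn)(struct chip *);

struct chip { cfg_fn configure_chip; };

static int hs_cfg(struct chip *c)
{
	(void)c;
	puts("per-chip tweak applied");
	return 0;
}

static int manufacturer_init(struct chip *c)
{
	/* ... manufacturer ops->init() would run first ... */
	if (c->configure_chip)
		return c->configure_chip(c);
	return 0;
}

int main(void)
{
	struct chip c = { .configure_chip = hs_cfg };

	return manufacturer_init(&c);
}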
+diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
+index b7a28f001a387b..116ac17591a86b 100644
+--- a/drivers/mtd/nand/spi/winbond.c
++++ b/drivers/mtd/nand/spi/winbond.c
+@@ -18,6 +18,9 @@
+ 
+ #define W25N04KV_STATUS_ECC_5_8_BITFLIPS	(3 << 4)
+ 
++#define W25N0XJW_SR4			0xD0
++#define W25N0XJW_SR4_HS			BIT(2)
++
+ /*
+  * "X2" in the core is equivalent to "dual output" in the datasheets,
+  * "X4" in the core is equivalent to "quad output" in the datasheets.
+@@ -42,10 +45,12 @@ static SPINAND_OP_VARIANTS(update_cache_octal_variants,
+ static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants,
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+@@ -157,6 +162,36 @@ static const struct mtd_ooblayout_ops w25n02kv_ooblayout = {
+ 	.free = w25n02kv_ooblayout_free,
+ };
+ 
++static int w25n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
++				  struct mtd_oob_region *region)
++{
++	if (section > 3)
++		return -ERANGE;
++
++	region->offset = (16 * section) + 12;
++	region->length = 4;
++
++	return 0;
++}
++
++static int w25n01jw_ooblayout_free(struct mtd_info *mtd, int section,
++				   struct mtd_oob_region *region)
++{
++	if (section > 3)
++		return -ERANGE;
++
++	region->offset = (16 * section);
++	region->length = 12;
++
++	/* Extract BBM */
++	if (!section) {
++		region->offset += 2;
++		region->length -= 2;
++	}
++
++	return 0;
++}
++
+ static int w35n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
+ 				  struct mtd_oob_region *region)
+ {
+@@ -187,6 +222,11 @@ static int w35n01jw_ooblayout_free(struct mtd_info *mtd, int section,
+ 	return 0;
+ }
+ 
++static const struct mtd_ooblayout_ops w25n01jw_ooblayout = {
++	.ecc = w25n01jw_ooblayout_ecc,
++	.free = w25n01jw_ooblayout_free,
++};
++
+ static const struct mtd_ooblayout_ops w35n01jw_ooblayout = {
+ 	.ecc = w35n01jw_ooblayout_ecc,
+ 	.free = w35n01jw_ooblayout_free,
+@@ -230,6 +270,40 @@ static int w25n02kv_ecc_get_status(struct spinand_device *spinand,
+ 	return -EINVAL;
+ }
+ 
++static int w25n0xjw_hs_cfg(struct spinand_device *spinand)
++{
++	const struct spi_mem_op *op;
++	bool hs;
++	u8 sr4;
++	int ret;
++
++	op = spinand->op_templates.read_cache;
++	if (op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr)
++		hs = false;
++	else if (op->cmd.buswidth == 1 && op->addr.buswidth == 1 &&
++		 op->dummy.buswidth == 1 && op->data.buswidth == 1)
++		hs = false;
++	else if (!op->max_freq)
++		hs = true;
++	else
++		hs = false;
++
++	ret = spinand_read_reg_op(spinand, W25N0XJW_SR4, &sr4);
++	if (ret)
++		return ret;
++
++	if (hs)
++		sr4 |= W25N0XJW_SR4_HS;
++	else
++		sr4 &= ~W25N0XJW_SR4_HS;
++
++	ret = spinand_write_reg_op(spinand, W25N0XJW_SR4, sr4);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
+ static const struct spinand_info winbond_spinand_table[] = {
+ 	/* 512M-bit densities */
+ 	SPINAND_INFO("W25N512GW", /* 1.8V */
+@@ -268,7 +342,8 @@ static const struct spinand_info winbond_spinand_table[] = {
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     0,
+-		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
++		     SPINAND_ECCINFO(&w25n01jw_ooblayout, NULL),
++		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
+ 	SPINAND_INFO("W25N01KV", /* 3.3V */
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xae, 0x21),
+ 		     NAND_MEMORG(1, 2048, 96, 64, 1024, 20, 1, 1, 1),
+@@ -324,7 +399,8 @@ static const struct spinand_info winbond_spinand_table[] = {
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     0,
+-		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
++		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL),
++		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
+ 	SPINAND_INFO("W25N02KV", /* 3.3V */
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xaa, 0x22),
+ 		     NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 3f2e378199abba..5abe4af61655cd 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -690,14 +690,6 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
+ 		dlc |= XCAN_DLCR_EDL_MASK;
+ 	}
+ 
+-	if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
+-	    (priv->devtype.flags & XCAN_FLAG_TXFEMP))
+-		can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
+-	else
+-		can_put_echo_skb(skb, ndev, 0, 0);
+-
+-	priv->tx_head++;
+-
+ 	priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
+ 	/* If the CAN frame is RTR frame this write triggers transmission
+ 	 * (not on CAN FD)
+@@ -730,6 +722,14 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
+ 					data[1]);
+ 		}
+ 	}
++
++	if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
++	    (priv->devtype.flags & XCAN_FLAG_TXFEMP))
++		can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
++	else
++		can_put_echo_skb(skb, ndev, 0, 0);
++
++	priv->tx_head++;
+ }
+ 
+ /**
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d15d912690c40e..073d20241a4c9c 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1229,9 +1229,15 @@ static int b53_setup(struct dsa_switch *ds)
+ 	 */
+ 	ds->untag_vlan_aware_bridge_pvid = true;
+ 
+-	/* Ageing time is set in seconds */
+-	ds->ageing_time_min = 1 * 1000;
+-	ds->ageing_time_max = AGE_TIME_MAX * 1000;
++	if (dev->chip_id == BCM53101_DEVICE_ID) {
++		/* BCM53101 uses 0.5 second increments */
++		ds->ageing_time_min = 1 * 500;
++		ds->ageing_time_max = AGE_TIME_MAX * 500;
++	} else {
++		/* Everything else uses 1 second increments */
++		ds->ageing_time_min = 1 * 1000;
++		ds->ageing_time_max = AGE_TIME_MAX * 1000;
++	}
+ 
+ 	ret = b53_reset_switch(dev);
+ 	if (ret) {
+@@ -2448,7 +2454,10 @@ int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
+ 	else
+ 		reg = B53_AGING_TIME_CONTROL;
+ 
+-	atc = DIV_ROUND_CLOSEST(msecs, 1000);
++	if (dev->chip_id == BCM53101_DEVICE_ID)
++		atc = DIV_ROUND_CLOSEST(msecs, 500);
++	else
++		atc = DIV_ROUND_CLOSEST(msecs, 1000);
+ 
+ 	if (!is5325(dev) && !is5365(dev))
+ 		atc |= AGE_CHANGE;
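Worked example of the chip-dependent conversion above (illustrative arithmetic only): a requested ageing time of 300000 ms maps to different register units depending on the tick size, matching the halved min/max bounds set in b53_setup():

	unsigned int msecs = 300000;				/* 5 minutes */
	u32 atc_bcm53101 = DIV_ROUND_CLOSEST(msecs, 500);	/* 600, 0.5 s ticks */
	u32 atc_default = DIV_ROUND_CLOSEST(msecs, 1000);	/* 300, 1 s ticks */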
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 651b73163b6ee9..5f15f42070c539 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2358,7 +2358,8 @@ static void fec_enet_phy_reset_after_clk_enable(struct net_device *ndev)
+ 		 */
+ 		phy_dev = of_phy_find_device(fep->phy_node);
+ 		phy_reset_after_clk_enable(phy_dev);
+-		put_device(&phy_dev->mdio.dev);
++		if (phy_dev)
++			put_device(&phy_dev->mdio.dev);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f1c9e575703eaa..26dcdceae741e4 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4182,7 +4182,7 @@ static int i40e_vsi_request_irq_msix(struct i40e_vsi *vsi, char *basename)
+ 		irq_num = pf->msix_entries[base + vector].vector;
+ 		irq_set_affinity_notifier(irq_num, NULL);
+ 		irq_update_affinity_hint(irq_num, NULL);
+-		free_irq(irq_num, &vsi->q_vectors[vector]);
++		free_irq(irq_num, vsi->q_vectors[vector]);
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index ca6ccbc139548b..6412c84e2d17db 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -2081,11 +2081,8 @@ static void igb_diag_test(struct net_device *netdev,
+ 	} else {
+ 		dev_info(&adapter->pdev->dev, "online testing starting\n");
+ 
+-		/* PHY is powered down when interface is down */
+-		if (if_running && igb_link_test(adapter, &data[TEST_LINK]))
++		if (igb_link_test(adapter, &data[TEST_LINK]))
+ 			eth_test->flags |= ETH_TEST_FL_FAILED;
+-		else
+-			data[TEST_LINK] = 0;
+ 
+ 		/* Online tests aren't run; pass by default */
+ 		data[TEST_REG] = 0;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b76a154e635e00..d87438bef6fba5 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4451,8 +4451,7 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+ 	if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
+ 		xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ 	res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+-			       rx_ring->queue_index,
+-			       rx_ring->q_vector->napi.napi_id);
++			       rx_ring->queue_index, 0);
+ 	if (res < 0) {
+ 		dev_err(dev, "Failed to register xdp_rxq index %u\n",
+ 			rx_ring->queue_index);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index f436d7cf565a14..1a9cc8206430b2 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -691,7 +691,7 @@ static void icssg_prueth_hsr_fdb_add_del(struct prueth_emac *emac,
+ 
+ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ {
+-	struct net_device *real_dev;
++	struct net_device *real_dev, *port_dev;
+ 	struct prueth_emac *emac;
+ 	u8 vlan_id, i;
+ 
+@@ -700,11 +700,15 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ 
+ 	if (is_hsr_master(real_dev)) {
+ 		for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
+-			emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
+-			if (!emac)
++			port_dev = hsr_get_port_ndev(real_dev, i);
++			emac = netdev_priv(port_dev);
++			if (!emac) {
++				dev_put(port_dev);
+ 				return -EINVAL;
++			}
+ 			icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
+ 						     true);
++			dev_put(port_dev);
+ 		}
+ 	} else {
+ 		emac = netdev_priv(real_dev);
+@@ -716,7 +720,7 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ 
+ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
+ {
+-	struct net_device *real_dev;
++	struct net_device *real_dev, *port_dev;
+ 	struct prueth_emac *emac;
+ 	u8 vlan_id, i;
+ 
+@@ -725,11 +729,15 @@ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
+ 
+ 	if (is_hsr_master(real_dev)) {
+ 		for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
+-			emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
+-			if (!emac)
++			port_dev = hsr_get_port_ndev(real_dev, i);
++			emac = netdev_priv(port_dev);
++			if (!emac) {
++				dev_put(port_dev);
+ 				return -EINVAL;
++			}
+ 			icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
+ 						     false);
++			dev_put(port_dev);
+ 		}
+ 	} else {
+ 		emac = netdev_priv(real_dev);
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+index f0823aa1ede607..bb1dcdf5fd0d58 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+@@ -2071,10 +2071,6 @@ static void wx_setup_mrqc(struct wx *wx)
+ {
+ 	u32 rss_field = 0;
+ 
+-	/* VT, and RSS do not coexist at the same time */
+-	if (test_bit(WX_FLAG_VMDQ_ENABLED, wx->flags))
+-		return;
+-
+ 	/* Disable indicating checksum in descriptor, enables RSS hash */
+ 	wr32m(wx, WX_PSR_CTL, WX_PSR_CTL_PCSD, WX_PSR_CTL_PCSD);
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 01329fe7451a12..0eca96eeed58ab 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -4286,6 +4286,7 @@ static int macsec_newlink(struct net_device *dev,
+ 	if (err < 0)
+ 		goto del_dev;
+ 
++	netdev_update_features(dev);
+ 	netif_stacked_transfer_operstate(real_dev, dev);
+ 	linkwatch_fire_event(dev);
+ 
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 13df28445f0201..c02da57a4da5e3 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -1065,23 +1065,19 @@ EXPORT_SYMBOL_GPL(phy_inband_caps);
+  */
+ int phy_config_inband(struct phy_device *phydev, unsigned int modes)
+ {
+-	int err;
++	lockdep_assert_held(&phydev->lock);
+ 
+ 	if (!!(modes & LINK_INBAND_DISABLE) +
+ 	    !!(modes & LINK_INBAND_ENABLE) +
+ 	    !!(modes & LINK_INBAND_BYPASS) != 1)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&phydev->lock);
+ 	if (!phydev->drv)
+-		err = -EIO;
++		return -EIO;
+ 	else if (!phydev->drv->config_inband)
+-		err = -EOPNOTSUPP;
+-	else
+-		err = phydev->drv->config_inband(phydev, modes);
+-	mutex_unlock(&phydev->lock);
++		return -EOPNOTSUPP;
+ 
+-	return err;
++	return phydev->drv->config_inband(phydev, modes);
+ }
+ EXPORT_SYMBOL(phy_config_inband);
+ 
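With the mutex moved out of phy_config_inband(), taking phydev->lock is now the caller's responsibility, and lockdep_assert_held() enforces it. A hedged sketch of the new calling convention (not part of the patch):

	mutex_lock(&phydev->lock);
	err = phy_config_inband(phydev, LINK_INBAND_ENABLE);
	mutex_unlock(&phydev->lock);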
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 0faa3d97e06b94..229a503d601eed 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -67,6 +67,8 @@ struct phylink {
+ 	struct timer_list link_poll;
+ 
+ 	struct mutex state_mutex;
++	/* Serialize updates to pl->phydev with phylink_resolve() */
++	struct mutex phydev_mutex;
+ 	struct phylink_link_state phy_state;
+ 	unsigned int phy_ib_mode;
+ 	struct work_struct resolve;
+@@ -1409,6 +1411,7 @@ static void phylink_get_fixed_state(struct phylink *pl,
+ static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
+ {
+ 	struct phylink_link_state link_state;
++	struct phy_device *phy = pl->phydev;
+ 
+ 	switch (pl->req_link_an_mode) {
+ 	case MLO_AN_PHY:
+@@ -1432,7 +1435,11 @@ static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
+ 	link_state.link = false;
+ 
+ 	phylink_apply_manual_flow(pl, &link_state);
++	if (phy)
++		mutex_lock(&phy->lock);
+ 	phylink_major_config(pl, force_restart, &link_state);
++	if (phy)
++		mutex_unlock(&phy->lock);
+ }
+ 
+ static const char *phylink_pause_to_str(int pause)
+@@ -1568,8 +1575,13 @@ static void phylink_resolve(struct work_struct *w)
+ 	struct phylink_link_state link_state;
+ 	bool mac_config = false;
+ 	bool retrigger = false;
++	struct phy_device *phy;
+ 	bool cur_link_state;
+ 
++	mutex_lock(&pl->phydev_mutex);
++	phy = pl->phydev;
++	if (phy)
++		mutex_lock(&phy->lock);
+ 	mutex_lock(&pl->state_mutex);
+ 	cur_link_state = phylink_link_is_up(pl);
+ 
+@@ -1603,11 +1615,11 @@ static void phylink_resolve(struct work_struct *w)
+ 		/* If we have a phy, the "up" state is the union of both the
+ 		 * PHY and the MAC
+ 		 */
+-		if (pl->phydev)
++		if (phy)
+ 			link_state.link &= pl->phy_state.link;
+ 
+ 		/* Only update if the PHY link is up */
+-		if (pl->phydev && pl->phy_state.link) {
++		if (phy && pl->phy_state.link) {
+ 			/* If the interface has changed, force a link down
+ 			 * event if the link isn't already down, and re-resolve.
+ 			 */
+@@ -1671,6 +1683,9 @@ static void phylink_resolve(struct work_struct *w)
+ 		queue_work(system_power_efficient_wq, &pl->resolve);
+ 	}
+ 	mutex_unlock(&pl->state_mutex);
++	if (phy)
++		mutex_unlock(&phy->lock);
++	mutex_unlock(&pl->phydev_mutex);
+ }
+ 
+ static void phylink_run_resolve(struct phylink *pl)
+@@ -1806,6 +1821,7 @@ struct phylink *phylink_create(struct phylink_config *config,
+ 	if (!pl)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	mutex_init(&pl->phydev_mutex);
+ 	mutex_init(&pl->state_mutex);
+ 	INIT_WORK(&pl->resolve, phylink_resolve);
+ 
+@@ -2066,6 +2082,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
+ 		     dev_name(&phy->mdio.dev), phy->drv->name, irq_str);
+ 	kfree(irq_str);
+ 
++	mutex_lock(&pl->phydev_mutex);
+ 	mutex_lock(&phy->lock);
+ 	mutex_lock(&pl->state_mutex);
+ 	pl->phydev = phy;
+@@ -2111,6 +2128,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
+ 
+ 	mutex_unlock(&pl->state_mutex);
+ 	mutex_unlock(&phy->lock);
++	mutex_unlock(&pl->phydev_mutex);
+ 
+ 	phylink_dbg(pl,
+ 		    "phy: %s setting supported %*pb advertising %*pb\n",
+@@ -2289,6 +2307,7 @@ void phylink_disconnect_phy(struct phylink *pl)
+ 
+ 	ASSERT_RTNL();
+ 
++	mutex_lock(&pl->phydev_mutex);
+ 	phy = pl->phydev;
+ 	if (phy) {
+ 		mutex_lock(&phy->lock);
+@@ -2298,8 +2317,11 @@ void phylink_disconnect_phy(struct phylink *pl)
+ 		pl->mac_tx_clk_stop = false;
+ 		mutex_unlock(&pl->state_mutex);
+ 		mutex_unlock(&phy->lock);
+-		flush_work(&pl->resolve);
++	}
++	mutex_unlock(&pl->phydev_mutex);
+ 
++	if (phy) {
++		flush_work(&pl->resolve);
+ 		phy_disconnect(phy);
+ 	}
+ }
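Taken together, the phylink hunks above establish a fixed lock order. A sketch of the nesting as phylink_resolve() now takes it (this mirrors the patch; it is not an independent implementation):

	mutex_lock(&pl->phydev_mutex);		/* stabilizes pl->phydev */
	phy = pl->phydev;
	if (phy)
		mutex_lock(&phy->lock);		/* PHY state, if attached */
	mutex_lock(&pl->state_mutex);		/* phylink link state */
	/* ... resolve link state ... */
	mutex_unlock(&pl->state_mutex);
	if (phy)
		mutex_unlock(&phy->lock);
	mutex_unlock(&pl->phydev_mutex);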
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 4bd286296da794..cebdf62ce3db9b 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -116,6 +116,7 @@ static inline u64 ath12k_le32hilo_to_u64(__le32 hi, __le32 lo)
+ enum ath12k_skb_flags {
+ 	ATH12K_SKB_HW_80211_ENCAP = BIT(0),
+ 	ATH12K_SKB_CIPHER_SET = BIT(1),
++	ATH12K_SKB_MLO_STA = BIT(2),
+ };
+ 
+ struct ath12k_skb_cb {
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 91f4e3aff74c38..6a0915a0c7aae3 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -3610,7 +3610,6 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ 				   struct hal_rx_mon_ppdu_info *ppdu_info,
+ 				   u32 uid)
+ {
+-	struct ath12k_sta *ahsta;
+ 	struct ath12k_link_sta *arsta;
+ 	struct ath12k_rx_peer_stats *rx_stats = NULL;
+ 	struct hal_rx_user_status *user_stats = &ppdu_info->userstats[uid];
+@@ -3628,8 +3627,13 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ 		return;
+ 	}
+ 
+-	ahsta = ath12k_sta_to_ahsta(peer->sta);
+-	arsta = &ahsta->deflink;
++	arsta = ath12k_peer_get_link_sta(ar->ab, peer);
++	if (!arsta) {
++		ath12k_warn(ar->ab, "link sta not found on peer %pM id %d\n",
++			    peer->addr, peer->peer_id);
++		return;
++	}
++
+ 	arsta->rssi_comb = ppdu_info->rssi_comb;
+ 	ewma_avg_rssi_add(&arsta->avg_rssi, ppdu_info->rssi_comb);
+ 	rx_stats = arsta->rx_stats;
+@@ -3742,7 +3746,6 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 	struct dp_srng *mon_dst_ring;
+ 	struct hal_srng *srng;
+ 	struct dp_rxdma_mon_ring *buf_ring;
+-	struct ath12k_sta *ahsta = NULL;
+ 	struct ath12k_link_sta *arsta;
+ 	struct ath12k_peer *peer;
+ 	struct sk_buff_head skb_list;
+@@ -3868,8 +3871,15 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 		}
+ 
+ 		if (ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_SU) {
+-			ahsta = ath12k_sta_to_ahsta(peer->sta);
+-			arsta = &ahsta->deflink;
++			arsta = ath12k_peer_get_link_sta(ar->ab, peer);
++			if (!arsta) {
++				ath12k_warn(ar->ab, "link sta not found on peer %pM id %d\n",
++					    peer->addr, peer->peer_id);
++				spin_unlock_bh(&ab->base_lock);
++				rcu_read_unlock();
++				dev_kfree_skb_any(skb);
++				continue;
++			}
+ 			ath12k_dp_mon_rx_update_peer_su_stats(ar, arsta,
+ 							      ppdu_info);
+ 		} else if ((ppdu_info->fc_valid) &&
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index bd95dc88f9b21f..e9137ffeb5ab48 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -1418,8 +1418,6 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ath12k_peer *peer;
+-	struct ieee80211_sta *sta;
+-	struct ath12k_sta *ahsta;
+ 	struct ath12k_link_sta *arsta;
+ 	struct htt_ppdu_stats_user_rate *user_rate;
+ 	struct ath12k_per_peer_tx_stats *peer_stats = &ar->peer_tx_stats;
+@@ -1500,9 +1498,12 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ 		return;
+ 	}
+ 
+-	sta = peer->sta;
+-	ahsta = ath12k_sta_to_ahsta(sta);
+-	arsta = &ahsta->deflink;
++	arsta = ath12k_peer_get_link_sta(ab, peer);
++	if (!arsta) {
++		spin_unlock_bh(&ab->base_lock);
++		rcu_read_unlock();
++		return;
++	}
+ 
+ 	memset(&arsta->txrate, 0, sizeof(arsta->txrate));
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index ec77ad498b33a2..6791ae1d64e50f 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -14,6 +14,7 @@
+ #include "hw.h"
+ #include "mhi.h"
+ #include "dp_rx.h"
++#include "peer.h"
+ 
+ static const guid_t wcn7850_uuid = GUID_INIT(0xf634f534, 0x6147, 0x11ec,
+ 					     0x90, 0xd6, 0x02, 0x42,
+@@ -49,6 +50,12 @@ static bool ath12k_dp_srng_is_comp_ring_qcn9274(int ring_num)
+ 	return false;
+ }
+ 
++static bool ath12k_is_frame_link_agnostic_qcn9274(struct ath12k_link_vif *arvif,
++						  struct ieee80211_mgmt *mgmt)
++{
++	return ieee80211_is_action(mgmt->frame_control);
++}
++
+ static int ath12k_hw_mac_id_to_pdev_id_wcn7850(const struct ath12k_hw_params *hw,
+ 					       int mac_id)
+ {
+@@ -74,6 +81,52 @@ static bool ath12k_dp_srng_is_comp_ring_wcn7850(int ring_num)
+ 	return false;
+ }
+ 
++static bool ath12k_is_addba_resp_action_code(struct ieee80211_mgmt *mgmt)
++{
++	if (!ieee80211_is_action(mgmt->frame_control))
++		return false;
++
++	if (mgmt->u.action.category != WLAN_CATEGORY_BACK)
++		return false;
++
++	if (mgmt->u.action.u.addba_resp.action_code != WLAN_ACTION_ADDBA_RESP)
++		return false;
++
++	return true;
++}
++
++static bool ath12k_is_frame_link_agnostic_wcn7850(struct ath12k_link_vif *arvif,
++						  struct ieee80211_mgmt *mgmt)
++{
++	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(arvif->ahvif);
++	struct ath12k_hw *ah = ath12k_ar_to_ah(arvif->ar);
++	struct ath12k_base *ab = arvif->ar->ab;
++	__le16 fc = mgmt->frame_control;
++
++	spin_lock_bh(&ab->base_lock);
++	if (!ath12k_peer_find_by_addr(ab, mgmt->da) &&
++	    !ath12k_peer_ml_find(ah, mgmt->da)) {
++		spin_unlock_bh(&ab->base_lock);
++		return false;
++	}
++	spin_unlock_bh(&ab->base_lock);
++
++	if (vif->type == NL80211_IFTYPE_STATION)
++		return arvif->is_up &&
++		       (vif->valid_links == vif->active_links) &&
++		       !ieee80211_is_probe_req(fc) &&
++		       !ieee80211_is_auth(fc) &&
++		       !ieee80211_is_deauth(fc) &&
++		       !ath12k_is_addba_resp_action_code(mgmt);
++
++	if (vif->type == NL80211_IFTYPE_AP)
++		return !(ieee80211_is_probe_resp(fc) || ieee80211_is_auth(fc) ||
++			 ieee80211_is_assoc_resp(fc) || ieee80211_is_reassoc_resp(fc) ||
++			 ath12k_is_addba_resp_action_code(mgmt));
++
++	return false;
++}
++
+ static const struct ath12k_hw_ops qcn9274_ops = {
+ 	.get_hw_mac_from_pdev_id = ath12k_hw_qcn9274_mac_from_pdev_id,
+ 	.mac_id_to_pdev_id = ath12k_hw_mac_id_to_pdev_id_qcn9274,
+@@ -81,6 +134,7 @@ static const struct ath12k_hw_ops qcn9274_ops = {
+ 	.rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_qcn9274,
+ 	.get_ring_selector = ath12k_hw_get_ring_selector_qcn9274,
+ 	.dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_qcn9274,
++	.is_frame_link_agnostic = ath12k_is_frame_link_agnostic_qcn9274,
+ };
+ 
+ static const struct ath12k_hw_ops wcn7850_ops = {
+@@ -90,6 +144,7 @@ static const struct ath12k_hw_ops wcn7850_ops = {
+ 	.rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_wcn7850,
+ 	.get_ring_selector = ath12k_hw_get_ring_selector_wcn7850,
+ 	.dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_wcn7850,
++	.is_frame_link_agnostic = ath12k_is_frame_link_agnostic_wcn7850,
+ };
+ 
+ #define ATH12K_TX_RING_MASK_0 0x1
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 0a75bc5abfa241..9c69dd5a22afa4 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -246,6 +246,8 @@ struct ath12k_hw_ops {
+ 	int (*rxdma_ring_sel_config)(struct ath12k_base *ab);
+ 	u8 (*get_ring_selector)(struct sk_buff *skb);
+ 	bool (*dp_srng_is_tx_comp_ring)(int ring_num);
++	bool (*is_frame_link_agnostic)(struct ath12k_link_vif *arvif,
++				       struct ieee80211_mgmt *mgmt);
+ };
+ 
+ static inline
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index a885dd168a372a..708dc3dd4347ad 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -3650,12 +3650,68 @@ static int ath12k_mac_fils_discovery(struct ath12k_link_vif *arvif,
+ 	return ret;
+ }
+ 
++static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
++{
++	struct ath12k *ar = arvif->ar;
++	struct ieee80211_vif *vif = arvif->ahvif->vif;
++	struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
++	enum wmi_sta_powersave_param param;
++	struct ieee80211_bss_conf *info;
++	enum wmi_sta_ps_mode psmode;
++	int ret;
++	int timeout;
++	bool enable_ps;
++
++	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
++
++	if (vif->type != NL80211_IFTYPE_STATION)
++		return;
++
++	enable_ps = arvif->ahvif->ps;
++	if (enable_ps) {
++		psmode = WMI_STA_PS_MODE_ENABLED;
++		param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
++
++		timeout = conf->dynamic_ps_timeout;
++		if (timeout == 0) {
++			info = ath12k_mac_get_link_bss_conf(arvif);
++			if (!info) {
++				ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
++					    vif->addr, arvif->link_id);
++				return;
++			}
++
++			/* firmware doesn't like 0 */
++			timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
++		}
++
++		ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
++						  timeout);
++		if (ret) {
++			ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
++				    arvif->vdev_id, ret);
++			return;
++		}
++	} else {
++		psmode = WMI_STA_PS_MODE_DISABLED;
++	}
++
++	ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
++		   arvif->vdev_id, psmode ? "enable" : "disable");
++
++	ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
++	if (ret)
++		ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
++			    psmode, arvif->vdev_id, ret);
++}
++
+ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ 					  struct ieee80211_vif *vif,
+ 					  u64 changed)
+ {
+ 	struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ 	unsigned long links = ahvif->links_map;
++	struct ieee80211_vif_cfg *vif_cfg;
+ 	struct ieee80211_bss_conf *info;
+ 	struct ath12k_link_vif *arvif;
+ 	struct ieee80211_sta *sta;
+@@ -3719,61 +3775,24 @@ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ 			}
+ 		}
+ 	}
+-}
+ 
+-static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
+-{
+-	struct ath12k *ar = arvif->ar;
+-	struct ieee80211_vif *vif = arvif->ahvif->vif;
+-	struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
+-	enum wmi_sta_powersave_param param;
+-	struct ieee80211_bss_conf *info;
+-	enum wmi_sta_ps_mode psmode;
+-	int ret;
+-	int timeout;
+-	bool enable_ps;
++	if (changed & BSS_CHANGED_PS) {
++		links = ahvif->links_map;
++		vif_cfg = &vif->cfg;
+ 
+-	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+-
+-	if (vif->type != NL80211_IFTYPE_STATION)
+-		return;
++		for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
++			arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
++			if (!arvif || !arvif->ar)
++				continue;
+ 
+-	enable_ps = arvif->ahvif->ps;
+-	if (enable_ps) {
+-		psmode = WMI_STA_PS_MODE_ENABLED;
+-		param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
++			ar = arvif->ar;
+ 
+-		timeout = conf->dynamic_ps_timeout;
+-		if (timeout == 0) {
+-			info = ath12k_mac_get_link_bss_conf(arvif);
+-			if (!info) {
+-				ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
+-					    vif->addr, arvif->link_id);
+-				return;
++			if (ar->ab->hw_params->supports_sta_ps) {
++				ahvif->ps = vif_cfg->ps;
++				ath12k_mac_vif_setup_ps(arvif);
+ 			}
+-
+-			/* firmware doesn't like 0 */
+-			timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
+-		}
+-
+-		ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
+-						  timeout);
+-		if (ret) {
+-			ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
+-				    arvif->vdev_id, ret);
+-			return;
+ 		}
+-	} else {
+-		psmode = WMI_STA_PS_MODE_DISABLED;
+ 	}
+-
+-	ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
+-		   arvif->vdev_id, psmode ? "enable" : "disable");
+-
+-	ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
+-	if (ret)
+-		ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
+-			    psmode, arvif->vdev_id, ret);
+ }
+ 
+ static bool ath12k_mac_supports_station_tpc(struct ath12k *ar,
+@@ -3795,7 +3814,6 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ {
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+ 	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+-	struct ieee80211_vif_cfg *vif_cfg = &vif->cfg;
+ 	struct cfg80211_chan_def def;
+ 	u32 param_id, param_value;
+ 	enum nl80211_band band;
+@@ -4069,12 +4087,6 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ 	}
+ 
+ 	ath12k_mac_fils_discovery(arvif, info);
+-
+-	if (changed & BSS_CHANGED_PS &&
+-	    ar->ab->hw_params->supports_sta_ps) {
+-		ahvif->ps = vif_cfg->ps;
+-		ath12k_mac_vif_setup_ps(arvif);
+-	}
+ }
+ 
+ static struct ath12k_vif_cache *ath12k_ahvif_get_link_cache(struct ath12k_vif *ahvif,
+@@ -7673,7 +7685,7 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+ 
+ 	skb_cb->paddr = paddr;
+ 
+-	ret = ath12k_wmi_mgmt_send(ar, arvif->vdev_id, buf_id, skb);
++	ret = ath12k_wmi_mgmt_send(arvif, buf_id, skb);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab, "failed to send mgmt frame: %d\n", ret);
+ 		goto err_unmap_buf;
+@@ -7985,6 +7997,9 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ 
+ 		skb_cb->flags |= ATH12K_SKB_HW_80211_ENCAP;
+ 	} else if (ieee80211_is_mgmt(hdr->frame_control)) {
++		if (sta && sta->mlo)
++			skb_cb->flags |= ATH12K_SKB_MLO_STA;
++
+ 		ret = ath12k_mac_mgmt_tx(ar, skb, is_prb_rsp);
+ 		if (ret) {
+ 			ath12k_warn(ar->ab, "failed to queue management frame %d\n",
+diff --git a/drivers/net/wireless/ath/ath12k/peer.c b/drivers/net/wireless/ath/ath12k/peer.c
+index ec7236bbccc0fe..eb7aeff0149038 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.c
++++ b/drivers/net/wireless/ath/ath12k/peer.c
+@@ -8,7 +8,7 @@
+ #include "peer.h"
+ #include "debug.h"
+ 
+-static struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah, const u8 *addr)
++struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah, const u8 *addr)
+ {
+ 	struct ath12k_ml_peer *ml_peer;
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
+index f3a5e054d2b556..44afc0b7dd53ea 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.h
++++ b/drivers/net/wireless/ath/ath12k/peer.h
+@@ -91,5 +91,33 @@ struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab, int ast_hash
+ int ath12k_peer_ml_create(struct ath12k_hw *ah, struct ieee80211_sta *sta);
+ int ath12k_peer_ml_delete(struct ath12k_hw *ah, struct ieee80211_sta *sta);
+ int ath12k_peer_mlo_link_peers_delete(struct ath12k_vif *ahvif, struct ath12k_sta *ahsta);
++struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah,
++					   const u8 *addr);
++static inline
++struct ath12k_link_sta *ath12k_peer_get_link_sta(struct ath12k_base *ab,
++						 struct ath12k_peer *peer)
++{
++	struct ath12k_sta *ahsta;
++	struct ath12k_link_sta *arsta;
++
++	if (!peer->sta)
++		return NULL;
++
++	ahsta = ath12k_sta_to_ahsta(peer->sta);
++	if (peer->ml_id & ATH12K_PEER_ML_ID_VALID) {
++		if (!(ahsta->links_map & BIT(peer->link_id))) {
++			ath12k_warn(ab, "peer %pM id %d link_id %d can't found in STA link_map 0x%x\n",
++				    peer->addr, peer->peer_id, peer->link_id,
++				    ahsta->links_map);
++			return NULL;
++		}
++		arsta = rcu_dereference(ahsta->link[peer->link_id]);
++		if (!arsta)
++			return NULL;
++	} else {
++		arsta = &ahsta->deflink;
++	}
++	return arsta;
++}
+ 
+ #endif /* _PEER_H_ */
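Because ath12k_peer_get_link_sta() reaches ahsta->link[] via rcu_dereference(), callers are expected to be in an RCU read-side critical section and to hold base_lock, as the data-path callers above do. A hedged sketch of a call site (illustrative only):

	rcu_read_lock();
	spin_lock_bh(&ab->base_lock);
	peer = ath12k_peer_find_by_id(ab, peer_id);
	arsta = peer ? ath12k_peer_get_link_sta(ab, peer) : NULL;
	spin_unlock_bh(&ab->base_lock);
	rcu_read_unlock();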
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index eac5d48cade663..d740326079e1d7 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -782,20 +782,46 @@ struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_ab, u32 len)
+ 	return skb;
+ }
+ 
+-int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
++int ath12k_wmi_mgmt_send(struct ath12k_link_vif *arvif, u32 buf_id,
+ 			 struct sk_buff *frame)
+ {
++	struct ath12k *ar = arvif->ar;
+ 	struct ath12k_wmi_pdev *wmi = ar->wmi;
+ 	struct wmi_mgmt_send_cmd *cmd;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(frame);
+-	struct wmi_tlv *frame_tlv;
++	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)frame->data;
++	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(arvif->ahvif);
++	int cmd_len = sizeof(struct ath12k_wmi_mgmt_send_tx_params);
++	struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)hdr;
++	struct ath12k_wmi_mlo_mgmt_send_params *ml_params;
++	struct ath12k_base *ab = ar->ab;
++	struct wmi_tlv *frame_tlv, *tlv;
++	struct ath12k_skb_cb *skb_cb;
++	u32 buf_len, buf_len_aligned;
++	u32 vdev_id = arvif->vdev_id;
++	bool link_agnostic = false;
+ 	struct sk_buff *skb;
+-	u32 buf_len;
+ 	int ret, len;
++	void *ptr;
+ 
+ 	buf_len = min_t(int, frame->len, WMI_MGMT_SEND_DOWNLD_LEN);
+ 
+-	len = sizeof(*cmd) + sizeof(*frame_tlv) + roundup(buf_len, 4);
++	buf_len_aligned = roundup(buf_len, sizeof(u32));
++
++	len = sizeof(*cmd) + sizeof(*frame_tlv) + buf_len_aligned;
++
++	if (ieee80211_vif_is_mld(vif)) {
++		skb_cb = ATH12K_SKB_CB(frame);
++		if ((skb_cb->flags & ATH12K_SKB_MLO_STA) &&
++		    ab->hw_params->hw_ops->is_frame_link_agnostic &&
++		    ab->hw_params->hw_ops->is_frame_link_agnostic(arvif, mgmt)) {
++			len += cmd_len + TLV_HDR_SIZE + sizeof(*ml_params);
++			ath12k_generic_dbg(ATH12K_DBG_MGMT,
++					   "Sending Mgmt Frame fc 0x%0x as link agnostic",
++					   mgmt->frame_control);
++			link_agnostic = true;
++		}
++	}
+ 
+ 	skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ 	if (!skb)
+@@ -814,10 +840,32 @@ int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
+ 	cmd->tx_params_valid = 0;
+ 
+ 	frame_tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+-	frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len);
++	frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len_aligned);
+ 
+ 	memcpy(frame_tlv->value, frame->data, buf_len);
+ 
++	if (!link_agnostic)
++		goto send;
++
++	ptr = skb->data + sizeof(*cmd) + sizeof(*frame_tlv) + buf_len_aligned;
++
++	tlv = ptr;
++
++	/* Tx params not used currently */
++	tlv->header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_TX_SEND_PARAMS, cmd_len);
++	ptr += cmd_len;
++
++	tlv = ptr;
++	tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, sizeof(*ml_params));
++	ptr += TLV_HDR_SIZE;
++
++	ml_params = ptr;
++	ml_params->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MLO_TX_SEND_PARAMS,
++						       sizeof(*ml_params));
++
++	ml_params->hw_link_id = cpu_to_le32(WMI_MGMT_LINK_AGNOSTIC_ID);
++
++send:
+ 	ret = ath12k_wmi_cmd_send(wmi, skb, WMI_MGMT_TX_SEND_CMDID);
+ 	if (ret) {
+ 		ath12k_warn(ar->ab,
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 8627154f1680fa..6dbcedcf081759 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3963,6 +3963,7 @@ struct wmi_scan_chan_list_cmd {
+ } __packed;
+ 
+ #define WMI_MGMT_SEND_DOWNLD_LEN	64
++#define WMI_MGMT_LINK_AGNOSTIC_ID	0xFFFFFFFF
+ 
+ #define WMI_TX_PARAMS_DWORD0_POWER		GENMASK(7, 0)
+ #define WMI_TX_PARAMS_DWORD0_MCS_MASK		GENMASK(19, 8)
+@@ -3988,7 +3989,18 @@ struct wmi_mgmt_send_cmd {
+ 
+ 	/* This TLV is followed by struct wmi_mgmt_frame */
+ 
+-	/* Followed by struct wmi_mgmt_send_params */
++	/* Followed by struct ath12k_wmi_mlo_mgmt_send_params */
++} __packed;
++
++struct ath12k_wmi_mlo_mgmt_send_params {
++	__le32 tlv_header;
++	__le32 hw_link_id;
++} __packed;
++
++struct ath12k_wmi_mgmt_send_tx_params {
++	__le32 tlv_header;
++	__le32 tx_param_dword0;
++	__le32 tx_param_dword1;
+ } __packed;
+ 
+ struct wmi_sta_powersave_mode_cmd {
+@@ -6183,7 +6195,7 @@ void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
+ int ath12k_wmi_cmd_send(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
+ 			u32 cmd_id);
+ struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_sc, u32 len);
+-int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
++int ath12k_wmi_mgmt_send(struct ath12k_link_vif *arvif, u32 buf_id,
+ 			 struct sk_buff *frame);
+ int ath12k_wmi_p2p_go_bcn_ie(struct ath12k *ar, u32 vdev_id,
+ 			     const u8 *p2p_ie);
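For the link-agnostic case, the command buffer built by ath12k_wmi_mgmt_send() now carries two extra TLVs after the frame payload. A sketch of the resulting layout, derived from the hunk above rather than from firmware documentation:

	/*
	 * struct wmi_mgmt_send_cmd
	 * WMI_TAG_ARRAY_BYTE TLV + frame bytes (padded to 4 bytes)
	 * WMI_TAG_TX_SEND_PARAMS TLV (tx params present but unused)
	 * WMI_TAG_ARRAY_STRUCT TLV header
	 * ath12k_wmi_mlo_mgmt_send_params { hw_link_id = WMI_MGMT_LINK_AGNOSTIC_ID }
	 */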
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 4e47ccb43bd86c..edd99d71016cb1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -124,13 +124,13 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_mac_cfg)},/* low 5GHz active */
+ 	{IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_mac_cfg)},/* high 5GHz active */
+ 
+-/* 6x30 Series */
+-	{IWL_PCI_DEVICE(0x008A, 0x5305, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008A, 0x5307, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008A, 0x5325, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008A, 0x5327, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008B, 0x5315, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x008B, 0x5317, iwl1000_mac_cfg)},
++/* 1030/6x30 Series */
++	{IWL_PCI_DEVICE(0x008A, 0x5305, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008A, 0x5307, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008A, 0x5325, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008A, 0x5327, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008B, 0x5315, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x008B, 0x5317, iwl6030_mac_cfg)},
+ 	{IWL_PCI_DEVICE(0x0090, 0x5211, iwl6030_mac_cfg)},
+ 	{IWL_PCI_DEVICE(0x0090, 0x5215, iwl6030_mac_cfg)},
+ 	{IWL_PCI_DEVICE(0x0090, 0x5216, iwl6030_mac_cfg)},
+@@ -181,12 +181,12 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ 	{IWL_PCI_DEVICE(0x08AE, 0x1027, iwl1000_mac_cfg)},
+ 
+ /* 130 Series WiFi */
+-	{IWL_PCI_DEVICE(0x0896, 0x5005, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0896, 0x5007, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0897, 0x5015, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0897, 0x5017, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0896, 0x5025, iwl1000_mac_cfg)},
+-	{IWL_PCI_DEVICE(0x0896, 0x5027, iwl1000_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5005, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5007, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0897, 0x5015, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0897, 0x5017, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5025, iwl6030_mac_cfg)},
++	{IWL_PCI_DEVICE(0x0896, 0x5027, iwl6030_mac_cfg)},
+ 
+ /* 2x00 Series */
+ 	{IWL_PCI_DEVICE(0x0890, 0x4022, iwl2000_mac_cfg)},
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index a4a2bac4f4b279..2f8d0223c1a6de 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1168,12 +1168,6 @@ static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev,
+ 	return devm_ioremap_resource(&pdev->dev, &port->regs);
+ }
+ 
+-#define DT_FLAGS_TO_TYPE(flags)       (((flags) >> 24) & 0x03)
+-#define    DT_TYPE_IO                 0x1
+-#define    DT_TYPE_MEM32              0x2
+-#define DT_CPUADDR_TO_TARGET(cpuaddr) (((cpuaddr) >> 56) & 0xFF)
+-#define DT_CPUADDR_TO_ATTR(cpuaddr)   (((cpuaddr) >> 48) & 0xFF)
+-
+ static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
+ 			      unsigned long type,
+ 			      unsigned int *tgt,
+@@ -1189,19 +1183,12 @@ static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
+ 		return -EINVAL;
+ 
+ 	for_each_of_range(&parser, &range) {
+-		unsigned long rtype;
+ 		u32 slot = upper_32_bits(range.bus_addr);
+ 
+-		if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_IO)
+-			rtype = IORESOURCE_IO;
+-		else if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_MEM32)
+-			rtype = IORESOURCE_MEM;
+-		else
+-			continue;
+-
+-		if (slot == PCI_SLOT(devfn) && type == rtype) {
+-			*tgt = DT_CPUADDR_TO_TARGET(range.cpu_addr);
+-			*attr = DT_CPUADDR_TO_ATTR(range.cpu_addr);
++		if (slot == PCI_SLOT(devfn) &&
++		    type == (range.flags & IORESOURCE_TYPE_BITS)) {
++			*tgt = (range.parent_bus_addr >> 56) & 0xFF;
++			*attr = (range.parent_bus_addr >> 48) & 0xFF;
+ 			return 0;
+ 		}
+ 	}
+diff --git a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+index d7493c2294ef23..3709fba42ebd85 100644
+--- a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
++++ b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+@@ -127,13 +127,13 @@ static int eusb2_repeater_init(struct phy *phy)
+ 			     rptr->cfg->init_tbl[i].value);
+ 
+ 	/* Override registers from devicetree values */
+-	if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
++	if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
+ 		regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, val);
+ 
+ 	if (!of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &val))
+ 		regmap_write(regmap, base + EUSB2_TUNE_HSDISC, val);
+ 
+-	if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
++	if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
+ 		regmap_write(regmap, base + EUSB2_TUNE_IUSB2, val);
+ 
+ 	/* Wait for status OK */
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+index 461b9e0af610a1..498f23c43aa139 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+@@ -3064,6 +3064,14 @@ struct qmp_pcie {
+ 	struct clk_fixed_rate aux_clk_fixed;
+ };
+ 
++static bool qphy_checkbits(const void __iomem *base, u32 offset, u32 val)
++{
++	u32 reg;
++
++	reg = readl(base + offset);
++	return (reg & val) == val;
++}
++
+ static inline void qphy_setbits(void __iomem *base, u32 offset, u32 val)
+ {
+ 	u32 reg;
+@@ -4332,16 +4340,21 @@ static int qmp_pcie_init(struct phy *phy)
+ 	struct qmp_pcie *qmp = phy_get_drvdata(phy);
+ 	const struct qmp_phy_cfg *cfg = qmp->cfg;
+ 	void __iomem *pcs = qmp->pcs;
+-	bool phy_initialized = !!(readl(pcs + cfg->regs[QPHY_START_CTRL]));
+ 	int ret;
+ 
+-	qmp->skip_init = qmp->nocsr_reset && phy_initialized;
+ 	/*
+-	 * We need to check the existence of init sequences in two cases:
+-	 * 1. The PHY doesn't support no_csr reset.
+-	 * 2. The PHY supports no_csr reset but isn't initialized by bootloader.
+-	 * As we can't skip init in these two cases.
++	 * We can skip PHY initialization if all of the following conditions
++	 * are met:
++	 *  1. The PHY supports the nocsr_reset that preserves the PHY config.
++	 *  2. The PHY was started (and not powered down again) by the
++	 *     bootloader, with all of the expected bits set correctly.
++	 * In this case, we can continue without having the init sequence
++	 * defined in the driver.
+ 	 */
++	qmp->skip_init = qmp->nocsr_reset &&
++		qphy_checkbits(pcs, cfg->regs[QPHY_START_CTRL], SERDES_START | PCS_START) &&
++		qphy_checkbits(pcs, cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL], cfg->pwrdn_ctrl);
++
+ 	if (!qmp->skip_init && !cfg->tbls.serdes_num) {
+ 		dev_err(qmp->dev, "Init sequence not available\n");
+ 		return -ENODATA;
+diff --git a/drivers/phy/tegra/xusb-tegra210.c b/drivers/phy/tegra/xusb-tegra210.c
+index ebc8a7e21a3181..3409924498e9cf 100644
+--- a/drivers/phy/tegra/xusb-tegra210.c
++++ b/drivers/phy/tegra/xusb-tegra210.c
+@@ -3164,18 +3164,22 @@ tegra210_xusb_padctl_probe(struct device *dev,
+ 	}
+ 
+ 	pdev = of_find_device_by_node(np);
++	of_node_put(np);
+ 	if (!pdev) {
+ 		dev_warn(dev, "PMC device is not available\n");
+ 		goto out;
+ 	}
+ 
+-	if (!platform_get_drvdata(pdev))
++	if (!platform_get_drvdata(pdev)) {
++		put_device(&pdev->dev);
+ 		return ERR_PTR(-EPROBE_DEFER);
++	}
+ 
+ 	padctl->regmap = dev_get_regmap(&pdev->dev, "usb_sleepwalk");
+ 	if (!padctl->regmap)
+ 		dev_info(dev, "failed to find PMC regmap\n");
+ 
++	put_device(&pdev->dev);
+ out:
+ 	return &padctl->base;
+ }
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index c1a0ef979142ce..c444bb2530ca29 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -363,6 +363,13 @@ static void omap_usb2_init_errata(struct omap_usb *phy)
+ 		phy->flags |= OMAP_USB2_DISABLE_CHRG_DET;
+ }
+ 
++static void omap_usb2_put_device(void *_dev)
++{
++	struct device *dev = _dev;
++
++	put_device(dev);
++}
++
+ static int omap_usb2_probe(struct platform_device *pdev)
+ {
+ 	struct omap_usb	*phy;
+@@ -373,6 +380,7 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ 	struct device_node *control_node;
+ 	struct platform_device *control_pdev;
+ 	const struct usb_phy_data *phy_data;
++	int ret;
+ 
+ 	phy_data = device_get_match_data(&pdev->dev);
+ 	if (!phy_data)
+@@ -423,6 +431,11 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ 			return -EINVAL;
+ 		}
+ 		phy->control_dev = &control_pdev->dev;
++
++		ret = devm_add_action_or_reset(&pdev->dev, omap_usb2_put_device,
++					       phy->control_dev);
++		if (ret)
++			return ret;
+ 	} else {
+ 		if (of_property_read_u32_index(node,
+ 					       "syscon-phy-power", 1,
+diff --git a/drivers/phy/ti/phy-ti-pipe3.c b/drivers/phy/ti/phy-ti-pipe3.c
+index da2cbacb982c6b..ae764d6524c99a 100644
+--- a/drivers/phy/ti/phy-ti-pipe3.c
++++ b/drivers/phy/ti/phy-ti-pipe3.c
+@@ -667,12 +667,20 @@ static int ti_pipe3_get_clk(struct ti_pipe3 *phy)
+ 	return 0;
+ }
+ 
++static void ti_pipe3_put_device(void *_dev)
++{
++	struct device *dev = _dev;
++
++	put_device(dev);
++}
++
+ static int ti_pipe3_get_sysctrl(struct ti_pipe3 *phy)
+ {
+ 	struct device *dev = phy->dev;
+ 	struct device_node *node = dev->of_node;
+ 	struct device_node *control_node;
+ 	struct platform_device *control_pdev;
++	int ret;
+ 
+ 	phy->phy_power_syscon = syscon_regmap_lookup_by_phandle(node,
+ 							"syscon-phy-power");
+@@ -704,6 +712,11 @@ static int ti_pipe3_get_sysctrl(struct ti_pipe3 *phy)
+ 		}
+ 
+ 		phy->control_dev = &control_pdev->dev;
++
++		ret = devm_add_action_or_reset(dev, ti_pipe3_put_device,
++					       phy->control_dev);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	if (phy->mode == PIPE3_MODE_PCIE) {
+diff --git a/drivers/regulator/sy7636a-regulator.c b/drivers/regulator/sy7636a-regulator.c
+index d1e7ba1fb3e1af..27e3d939b7bb9e 100644
+--- a/drivers/regulator/sy7636a-regulator.c
++++ b/drivers/regulator/sy7636a-regulator.c
+@@ -83,9 +83,11 @@ static int sy7636a_regulator_probe(struct platform_device *pdev)
+ 	if (!regmap)
+ 		return -EPROBE_DEFER;
+ 
+-	gdp = devm_gpiod_get(pdev->dev.parent, "epd-pwr-good", GPIOD_IN);
++	device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent);
++
++	gdp = devm_gpiod_get(&pdev->dev, "epd-pwr-good", GPIOD_IN);
+ 	if (IS_ERR(gdp)) {
+-		dev_err(pdev->dev.parent, "Power good GPIO fault %ld\n", PTR_ERR(gdp));
++		dev_err(&pdev->dev, "Power good GPIO fault %ld\n", PTR_ERR(gdp));
+ 		return PTR_ERR(gdp);
+ 	}
+ 
+@@ -105,7 +107,6 @@ static int sy7636a_regulator_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	config.dev = &pdev->dev;
+-	config.dev->of_node = pdev->dev.parent->of_node;
+ 	config.regmap = regmap;
+ 
+ 	rdev = devm_regulator_register(&pdev->dev, &desc, &config);
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index cd1f657f782df2..13c663a154c4e8 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -543,10 +543,10 @@ static ssize_t hvc_write(struct tty_struct *tty, const u8 *buf, size_t count)
+ 	}
+ 
+ 	/*
+-	 * Racy, but harmless, kick thread if there is still pending data.
++	 * Kick thread to flush if there's still pending data
++	 * or to wake up the write queue.
+ 	 */
+-	if (hp->n_outbuf)
+-		hvc_kick();
++	hvc_kick();
+ 
+ 	return written;
+ }
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 5ea8aadb6e69c1..9056cb82456f14 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1177,17 +1177,6 @@ static int sc16is7xx_startup(struct uart_port *port)
+ 	sc16is7xx_port_write(port, SC16IS7XX_FCR_REG,
+ 			     SC16IS7XX_FCR_FIFO_BIT);
+ 
+-	/* Enable EFR */
+-	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+-			     SC16IS7XX_LCR_CONF_MODE_B);
+-
+-	regcache_cache_bypass(one->regmap, true);
+-
+-	/* Enable write access to enhanced features and internal clock div */
+-	sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,
+-			      SC16IS7XX_EFR_ENABLE_BIT,
+-			      SC16IS7XX_EFR_ENABLE_BIT);
+-
+ 	/* Enable TCR/TLR */
+ 	sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,
+ 			      SC16IS7XX_MCR_TCRTLR_BIT,
+@@ -1199,7 +1188,8 @@ static int sc16is7xx_startup(struct uart_port *port)
+ 			     SC16IS7XX_TCR_RX_RESUME(24) |
+ 			     SC16IS7XX_TCR_RX_HALT(48));
+ 
+-	regcache_cache_bypass(one->regmap, false);
++	/* Disable TCR/TLR access */
++	sc16is7xx_port_update(port, SC16IS7XX_MCR_REG, SC16IS7XX_MCR_TCRTLR_BIT, 0);
+ 
+ 	/* Now, initialize the UART */
+ 	sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, SC16IS7XX_LCR_WORD_LEN_8);
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 0a800ba53816a8..de16b02d857e07 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -1599,6 +1599,7 @@ static int f_midi2_create_card(struct f_midi2 *midi2)
+ 			strscpy(fb->info.name, ump_fb_name(b),
+ 				sizeof(fb->info.name));
+ 		}
++		snd_ump_update_group_attrs(ump);
+ 	}
+ 
+ 	for (i = 0; i < midi2->num_eps; i++) {
+@@ -1736,9 +1737,12 @@ static int f_midi2_create_usb_configs(struct f_midi2 *midi2,
+ 	case USB_SPEED_HIGH:
+ 		midi2_midi1_ep_out_desc.wMaxPacketSize = cpu_to_le16(512);
+ 		midi2_midi1_ep_in_desc.wMaxPacketSize = cpu_to_le16(512);
+-		for (i = 0; i < midi2->num_eps; i++)
++		for (i = 0; i < midi2->num_eps; i++) {
+ 			midi2_midi2_ep_out_desc[i].wMaxPacketSize =
+ 				cpu_to_le16(512);
++			midi2_midi2_ep_in_desc[i].wMaxPacketSize =
++				cpu_to_le16(512);
++		}
+ 		fallthrough;
+ 	case USB_SPEED_FULL:
+ 		midi1_in_eps = midi2_midi1_ep_in_descs;
+@@ -1747,9 +1751,12 @@ static int f_midi2_create_usb_configs(struct f_midi2 *midi2,
+ 	case USB_SPEED_SUPER:
+ 		midi2_midi1_ep_out_desc.wMaxPacketSize = cpu_to_le16(1024);
+ 		midi2_midi1_ep_in_desc.wMaxPacketSize = cpu_to_le16(1024);
+-		for (i = 0; i < midi2->num_eps; i++)
++		for (i = 0; i < midi2->num_eps; i++) {
+ 			midi2_midi2_ep_out_desc[i].wMaxPacketSize =
+ 				cpu_to_le16(1024);
++			midi2_midi2_ep_in_desc[i].wMaxPacketSize =
++				cpu_to_le16(1024);
++		}
+ 		midi1_in_eps = midi2_midi1_ep_in_ss_descs;
+ 		midi1_out_eps = midi2_midi1_ep_out_ss_descs;
+ 		break;
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 27c9699365ab95..18cd4b925e5e63 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -765,8 +765,7 @@ static int dummy_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ 	if (!dum->driver)
+ 		return -ESHUTDOWN;
+ 
+-	local_irq_save(flags);
+-	spin_lock(&dum->lock);
++	spin_lock_irqsave(&dum->lock, flags);
+ 	list_for_each_entry(iter, &ep->queue, queue) {
+ 		if (&iter->req != _req)
+ 			continue;
+@@ -776,15 +775,16 @@ static int dummy_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ 		retval = 0;
+ 		break;
+ 	}
+-	spin_unlock(&dum->lock);
+ 
+ 	if (retval == 0) {
+ 		dev_dbg(udc_dev(dum),
+ 				"dequeued req %p from %s, len %d buf %p\n",
+ 				req, _ep->name, _req->length, _req->buf);
++		spin_unlock(&dum->lock);
+ 		usb_gadget_giveback_request(_ep, _req);
++		spin_lock(&dum->lock);
+ 	}
+-	local_irq_restore(flags);
++	spin_unlock_irqrestore(&dum->lock, flags);
+ 	return retval;
+ }
+ 
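The dequeue path above converts to spin_lock_irqsave() and drops the lock only around the giveback callback, which may re-enter the driver. The pattern, sketched (mirrors the hunk):

	spin_lock_irqsave(&dum->lock, flags);
	/* ... locate and unlink the request ... */
	spin_unlock(&dum->lock);
	usb_gadget_giveback_request(_ep, _req);	/* may call back into the UDC */
	spin_lock(&dum->lock);
	spin_unlock_irqrestore(&dum->lock, flags);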
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 06a2edb9e86ef7..63edf2d8f24501 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -101,13 +101,34 @@ static u32 xhci_dbc_populate_strings(struct dbc_str_descs *strings)
+ 	return string_length;
+ }
+ 
++static void xhci_dbc_init_ep_contexts(struct xhci_dbc *dbc)
++{
++	struct xhci_ep_ctx      *ep_ctx;
++	unsigned int		max_burst;
++	dma_addr_t		deq;
++
++	max_burst               = DBC_CTRL_MAXBURST(readl(&dbc->regs->control));
++
++	/* Populate bulk out endpoint context: */
++	ep_ctx                  = dbc_bulkout_ctx(dbc);
++	deq                     = dbc_bulkout_enq(dbc);
++	ep_ctx->ep_info         = 0;
++	ep_ctx->ep_info2        = dbc_epctx_info2(BULK_OUT_EP, 1024, max_burst);
++	ep_ctx->deq             = cpu_to_le64(deq | dbc->ring_out->cycle_state);
++
++	/* Populate bulk in endpoint context: */
++	ep_ctx                  = dbc_bulkin_ctx(dbc);
++	deq                     = dbc_bulkin_enq(dbc);
++	ep_ctx->ep_info         = 0;
++	ep_ctx->ep_info2        = dbc_epctx_info2(BULK_IN_EP, 1024, max_burst);
++	ep_ctx->deq             = cpu_to_le64(deq | dbc->ring_in->cycle_state);
++}
++
+ static void xhci_dbc_init_contexts(struct xhci_dbc *dbc, u32 string_length)
+ {
+ 	struct dbc_info_context	*info;
+-	struct xhci_ep_ctx	*ep_ctx;
+ 	u32			dev_info;
+-	dma_addr_t		deq, dma;
+-	unsigned int		max_burst;
++	dma_addr_t		dma;
+ 
+ 	if (!dbc)
+ 		return;
+@@ -121,20 +142,8 @@ static void xhci_dbc_init_contexts(struct xhci_dbc *dbc, u32 string_length)
+ 	info->serial		= cpu_to_le64(dma + DBC_MAX_STRING_LENGTH * 3);
+ 	info->length		= cpu_to_le32(string_length);
+ 
+-	/* Populate bulk out endpoint context: */
+-	ep_ctx			= dbc_bulkout_ctx(dbc);
+-	max_burst		= DBC_CTRL_MAXBURST(readl(&dbc->regs->control));
+-	deq			= dbc_bulkout_enq(dbc);
+-	ep_ctx->ep_info		= 0;
+-	ep_ctx->ep_info2	= dbc_epctx_info2(BULK_OUT_EP, 1024, max_burst);
+-	ep_ctx->deq		= cpu_to_le64(deq | dbc->ring_out->cycle_state);
+-
+-	/* Populate bulk in endpoint context: */
+-	ep_ctx			= dbc_bulkin_ctx(dbc);
+-	deq			= dbc_bulkin_enq(dbc);
+-	ep_ctx->ep_info		= 0;
+-	ep_ctx->ep_info2	= dbc_epctx_info2(BULK_IN_EP, 1024, max_burst);
+-	ep_ctx->deq		= cpu_to_le64(deq | dbc->ring_in->cycle_state);
++	/* Populate bulk in and out endpoint contexts: */
++	xhci_dbc_init_ep_contexts(dbc);
+ 
+ 	/* Set DbC context and info registers: */
+ 	lo_hi_writeq(dbc->ctx->dma, &dbc->regs->dccp);
+@@ -436,6 +445,42 @@ dbc_alloc_ctx(struct device *dev, gfp_t flags)
+ 	return ctx;
+ }
+ 
++static void xhci_dbc_ring_init(struct xhci_ring *ring)
++{
++	struct xhci_segment *seg = ring->first_seg;
++
++	/* clear all trbs on ring in case of old ring */
++	memset(seg->trbs, 0, TRB_SEGMENT_SIZE);
++
++	/* Only event ring does not use link TRB */
++	if (ring->type != TYPE_EVENT) {
++		union xhci_trb *trb = &seg->trbs[TRBS_PER_SEGMENT - 1];
++
++		trb->link.segment_ptr = cpu_to_le64(ring->first_seg->dma);
++		trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK));
++	}
++	xhci_initialize_ring_info(ring);
++}
++
++static int xhci_dbc_reinit_ep_rings(struct xhci_dbc *dbc)
++{
++	struct xhci_ring *in_ring = dbc->eps[BULK_IN].ring;
++	struct xhci_ring *out_ring = dbc->eps[BULK_OUT].ring;
++
++	if (!in_ring || !out_ring || !dbc->ctx) {
++		dev_warn(dbc->dev, "Can't re-init unallocated endpoints\n");
++		return -ENODEV;
++	}
++
++	xhci_dbc_ring_init(in_ring);
++	xhci_dbc_ring_init(out_ring);
++
++	/* set ep context enqueue, dequeue, and cycle to initial values */
++	xhci_dbc_init_ep_contexts(dbc);
++
++	return 0;
++}
++
+ static struct xhci_ring *
+ xhci_dbc_ring_alloc(struct device *dev, enum xhci_ring_type type, gfp_t flags)
+ {
+@@ -464,15 +509,10 @@ xhci_dbc_ring_alloc(struct device *dev, enum xhci_ring_type type, gfp_t flags)
+ 
+ 	seg->dma = dma;
+ 
+-	/* Only event ring does not use link TRB */
+-	if (type != TYPE_EVENT) {
+-		union xhci_trb *trb = &seg->trbs[TRBS_PER_SEGMENT - 1];
+-
+-		trb->link.segment_ptr = cpu_to_le64(dma);
+-		trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK));
+-	}
+ 	INIT_LIST_HEAD(&ring->td_list);
+-	xhci_initialize_ring_info(ring);
++
++	xhci_dbc_ring_init(ring);
++
+ 	return ring;
+ dma_fail:
+ 	kfree(seg);
+@@ -864,7 +904,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ 			dev_info(dbc->dev, "DbC cable unplugged\n");
+ 			dbc->state = DS_ENABLED;
+ 			xhci_dbc_flush_requests(dbc);
+-
++			xhci_dbc_reinit_ep_rings(dbc);
+ 			return EVT_DISC;
+ 		}
+ 
+@@ -874,7 +914,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ 			writel(portsc, &dbc->regs->portsc);
+ 			dbc->state = DS_ENABLED;
+ 			xhci_dbc_flush_requests(dbc);
+-
++			xhci_dbc_reinit_ep_rings(dbc);
+ 			return EVT_DISC;
+ 		}
+ 
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 81eaad87a3d9d0..c4a6544aa10751 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -962,7 +962,7 @@ static void xhci_free_virt_devices_depth_first(struct xhci_hcd *xhci, int slot_i
+ out:
+ 	/* we are now at a leaf device */
+ 	xhci_debugfs_remove_slot(xhci, slot_id);
+-	xhci_free_virt_device(xhci, vdev, slot_id);
++	xhci_free_virt_device(xhci, xhci->devs[slot_id], slot_id);
+ }
+ 
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index e5cd3309342364..fc869b7f803f04 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1322,7 +1322,18 @@ static const struct usb_device_id option_ids[] = {
+ 	 .driver_info = NCTRL(0) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff),	/* Telit LE910C1-EUX (ECM) */
+ 	 .driver_info = NCTRL(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1034, 0xff),	/* Telit LE910C4-WWX (rmnet) */
++	 .driver_info = RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1035, 0xff) }, /* Telit LE910C4-WWX (ECM) */
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1036, 0xff) },  /* Telit LE910C4-WWX */
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1037, 0xff),	/* Telit LE910C4-WWX (rmnet) */
++	 .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1038, 0xff),	/* Telit LE910C4-WWX (rmnet) */
++	 .driver_info = NCTRL(0) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x103b, 0xff),	/* Telit LE910C4-WWX */
++	 .driver_info = NCTRL(0) | NCTRL(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x103c, 0xff),	/* Telit LE910C4-WWX */
++	 .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ 	  .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+@@ -1369,6 +1380,12 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff),	/* Telit FN990A (PCIe) */
+ 	  .driver_info = RSVD(0) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1077, 0xff),	/* Telit FN990A (rmnet + audio) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1078, 0xff),	/* Telit FN990A (MBIM + audio) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1079, 0xff),	/* Telit FN990A (RNDIS + audio) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff),	/* Telit FE990A (rmnet) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff),	/* Telit FE990A (MBIM) */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 1f6fdfaa34bf12..b2a568a5bc9b0b 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2426,17 +2426,21 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 		case ADEV_NONE:
+ 			break;
+ 		case ADEV_NOTIFY_USB_AND_QUEUE_VDM:
+-			WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, NULL));
+-			typec_altmode_vdm(adev, p[0], &p[1], cnt);
++			if (rx_sop_type == TCPC_TX_SOP_PRIME) {
++				typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P, p[0], &p[1], cnt);
++			} else {
++				WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, NULL));
++				typec_altmode_vdm(adev, p[0], &p[1], cnt);
++			}
+ 			break;
+ 		case ADEV_QUEUE_VDM:
+-			if (response_tx_sop_type == TCPC_TX_SOP_PRIME)
++			if (rx_sop_type == TCPC_TX_SOP_PRIME)
+ 				typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P, p[0], &p[1], cnt);
+ 			else
+ 				typec_altmode_vdm(adev, p[0], &p[1], cnt);
+ 			break;
+ 		case ADEV_QUEUE_VDM_SEND_EXIT_MODE_ON_FAIL:
+-			if (response_tx_sop_type == TCPC_TX_SOP_PRIME) {
++			if (rx_sop_type == TCPC_TX_SOP_PRIME) {
+ 				if (typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P,
+ 							    p[0], &p[1], cnt)) {
+ 					int svdm_version = typec_get_cable_svdm_version(
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index fac4000a5bcaef..b843db855f402f 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -110,6 +110,25 @@ struct btrfs_bio_ctrl {
+ 	 * This is to avoid touching ranges covered by compression/inline.
+ 	 */
+ 	unsigned long submit_bitmap;
++	struct readahead_control *ractl;
++
++	/*
++	 * The start offset of the last used extent map by a read operation.
++	 *
++	 * This is for proper compressed read merge.
++	 * U64_MAX means we are starting the read and have made no progress yet.
++	 *
++	 * The current btrfs_bio_is_contig() only uses disk_bytenr as
++	 * the condition to check if the read can be merged with previous
++	 * bio, which is not correct. E.g. two file extents pointing to the
++	 * same extent but with different offset.
++	 *
++	 * So here we need to do extra checks to only merge reads that are
++	 * covered by the same extent map.
++	 * Just extent_map::start will be enough, as they are unique
++	 * inside the same inode.
++	 */
++	u64 last_em_start;
+ };
+ 
+ static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
+@@ -882,6 +901,25 @@ static struct extent_map *get_extent_map(struct btrfs_inode *inode,
+ 
+ 	return em;
+ }
++
++static void btrfs_readahead_expand(struct readahead_control *ractl,
++				   const struct extent_map *em)
++{
++	const u64 ra_pos = readahead_pos(ractl);
++	const u64 ra_end = ra_pos + readahead_length(ractl);
++	const u64 em_end = em->start + em->ram_bytes;
++
++	/* No expansion for holes and inline extents. */
++	if (em->disk_bytenr > EXTENT_MAP_LAST_BYTE)
++		return;
++
++	ASSERT(em_end >= ra_pos,
++	       "extent_map %llu %llu ends before current readahead position %llu",
++	       em->start, em->len, ra_pos);
++	if (em_end > ra_end)
++		readahead_expand(ractl, ra_pos, em_end - ra_pos);
++}
++
+ /*
+  * basic readpage implementation.  Locked extent state structs are inserted
+  * into the tree that are removed when the IO is done (by the end_io
+@@ -890,7 +928,7 @@ static struct extent_map *get_extent_map(struct btrfs_inode *inode,
+  * return 0 on success, otherwise return error
+  */
+ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+-		      struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
++			     struct btrfs_bio_ctrl *bio_ctrl)
+ {
+ 	struct inode *inode = folio->mapping->host;
+ 	struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+@@ -945,6 +983,16 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ 
+ 		compress_type = btrfs_extent_map_compression(em);
+ 
++		/*
++		 * Only expand readahead for extents which are already creating
++		 * the pages anyway in add_ra_bio_pages, which is compressed
++		 * extents in the non subpage case.
++		 */
++		if (bio_ctrl->ractl &&
++		    !btrfs_is_subpage(fs_info, folio) &&
++		    compress_type != BTRFS_COMPRESS_NONE)
++			btrfs_readahead_expand(bio_ctrl->ractl, em);
++
+ 		if (compress_type != BTRFS_COMPRESS_NONE)
+ 			disk_bytenr = em->disk_bytenr;
+ 		else
+@@ -990,12 +1038,11 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ 		 * non-optimal behavior (submitting 2 bios for the same extent).
+ 		 */
+ 		if (compress_type != BTRFS_COMPRESS_NONE &&
+-		    prev_em_start && *prev_em_start != (u64)-1 &&
+-		    *prev_em_start != em->start)
++		    bio_ctrl->last_em_start != U64_MAX &&
++		    bio_ctrl->last_em_start != em->start)
+ 			force_bio_submit = true;
+ 
+-		if (prev_em_start)
+-			*prev_em_start = em->start;
++		bio_ctrl->last_em_start = em->start;
+ 
+ 		btrfs_free_extent_map(em);
+ 		em = NULL;
+@@ -1209,12 +1256,15 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
+ 	const u64 start = folio_pos(folio);
+ 	const u64 end = start + folio_size(folio) - 1;
+ 	struct extent_state *cached_state = NULL;
+-	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
++	struct btrfs_bio_ctrl bio_ctrl = {
++		.opf = REQ_OP_READ,
++		.last_em_start = U64_MAX,
++	};
+ 	struct extent_map *em_cached = NULL;
+ 	int ret;
+ 
+ 	lock_extents_for_read(inode, start, end, &cached_state);
+-	ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
++	ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
+ 	btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
+ 
+ 	btrfs_free_extent_map(em_cached);
+@@ -2550,19 +2600,22 @@ int btrfs_writepages(struct address_space *mapping, struct writeback_control *wb
+ 
+ void btrfs_readahead(struct readahead_control *rac)
+ {
+-	struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
++	struct btrfs_bio_ctrl bio_ctrl = {
++		.opf = REQ_OP_READ | REQ_RAHEAD,
++		.ractl = rac,
++		.last_em_start = U64_MAX,
++	};
+ 	struct folio *folio;
+ 	struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
+ 	const u64 start = readahead_pos(rac);
+ 	const u64 end = start + readahead_length(rac) - 1;
+ 	struct extent_state *cached_state = NULL;
+ 	struct extent_map *em_cached = NULL;
+-	u64 prev_em_start = (u64)-1;
+ 
+ 	lock_extents_for_read(inode, start, end, &cached_state);
+ 
+ 	while ((folio = readahead_folio(rac)) != NULL)
+-		btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
++		btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
+ 
+ 	btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
+ 
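
The extent_io.c hunks fold the old nullable prev_em_start out-parameter into btrfs_bio_ctrl as last_em_start, with U64_MAX meaning "no extent map used yet", so btrfs_read_folio() and btrfs_readahead() now share one merge rule. A minimal sketch of that decision, assuming hypothetical names (struct read_ctrl, must_submit):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical read controller; UINT64_MAX means "no extent used yet",
 * matching the last_em_start convention introduced above. */
struct read_ctrl {
	uint64_t last_em_start;
};

/* Force a bio submission before switching to a different compressed
 * extent, so reads from different extent maps are never merged. */
static bool must_submit(struct read_ctrl *ctrl, uint64_t em_start,
			bool compressed)
{
	bool force = compressed &&
		     ctrl->last_em_start != UINT64_MAX &&
		     ctrl->last_em_start != em_start;

	ctrl->last_em_start = em_start;
	return force;
}

int main(void)
{
	struct read_ctrl ctrl = { .last_em_start = UINT64_MAX };

	printf("%d\n", must_submit(&ctrl, 4096, true));	/* 0: first extent */
	printf("%d\n", must_submit(&ctrl, 4096, true));	/* 0: same extent */
	printf("%d\n", must_submit(&ctrl, 8192, true));	/* 1: extent changed */
	return 0;
}
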
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ffa5d6c1594050..e266a229484852 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5685,7 +5685,17 @@ static void btrfs_del_inode_from_root(struct btrfs_inode *inode)
+ 	bool empty = false;
+ 
+ 	xa_lock(&root->inodes);
+-	entry = __xa_erase(&root->inodes, btrfs_ino(inode));
++	/*
++	 * This btrfs_inode is being freed and has already been unhashed at this
++	 * point. It's possible that another btrfs_inode has already been
++	 * allocated for the same inode and inserted itself into the root, so
++	 * don't delete it in that case.
++	 *
++	 * Note that this shouldn't need to allocate memory, so the gfp flags
++	 * don't really matter.
++	 */
++	entry = __xa_cmpxchg(&root->inodes, btrfs_ino(inode), inode, NULL,
++			     GFP_ATOMIC);
+ 	if (entry == inode)
+ 		empty = xa_empty(&root->inodes);
+ 	xa_unlock(&root->inodes);
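
The inode.c hunk replaces an unconditional __xa_erase() with a compare-exchange, so a dying inode removes the xarray entry only if the entry still points at itself; if a newer inode for the same ino already took the slot, it is left alone. A minimal userspace sketch of the same "erase only if still mine" rule using C11 atomics (a toy slot table, not the xarray API):

#include <stdatomic.h>
#include <stdio.h>

#define NSLOTS 8

/* One atomic pointer slot per inode number, standing in for the
 * root->inodes xarray in the hunk above. */
static _Atomic(void *) slots[NSLOTS];

/* Remove @me from @ino's slot only if the slot still holds @me,
 * the way __xa_cmpxchg(..., inode, NULL, ...) is used above. */
static int remove_if_mine(unsigned int ino, void *me)
{
	void *expected = me;

	return atomic_compare_exchange_strong(&slots[ino], &expected, NULL);
}

int main(void)
{
	int a = 1, b = 2;

	atomic_store(&slots[3], &a);
	atomic_store(&slots[3], &b);	/* a newer object took the slot */

	/* The old object must not evict its successor: */
	printf("old removal: %d\n", remove_if_mine(3, &a));	/* 0 */
	printf("new removal: %d\n", remove_if_mine(3, &b));	/* 1 */
	return 0;
}
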
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 68cbb2b1e3df8e..1fc99ba185164b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1483,6 +1483,7 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 	struct btrfs_qgroup *qgroup;
+ 	LIST_HEAD(qgroup_list);
+ 	u64 num_bytes = src->excl;
++	u64 num_bytes_cmpr = src->excl_cmpr;
+ 	int ret = 0;
+ 
+ 	qgroup = find_qgroup_rb(fs_info, ref_root);
+@@ -1494,11 +1495,12 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ 		struct btrfs_qgroup_list *glist;
+ 
+ 		qgroup->rfer += sign * num_bytes;
+-		qgroup->rfer_cmpr += sign * num_bytes;
++		qgroup->rfer_cmpr += sign * num_bytes_cmpr;
+ 
+ 		WARN_ON(sign < 0 && qgroup->excl < num_bytes);
++		WARN_ON(sign < 0 && qgroup->excl_cmpr < num_bytes_cmpr);
+ 		qgroup->excl += sign * num_bytes;
+-		qgroup->excl_cmpr += sign * num_bytes;
++		qgroup->excl_cmpr += sign * num_bytes_cmpr;
+ 
+ 		if (sign > 0)
+ 			qgroup_rsv_add_by_qgroup(fs_info, qgroup, src);
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 60a621b00c656d..1777d707fd7561 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -1264,7 +1264,9 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
+ 								0,
+ 								gfp_flags);
+ 		if (IS_ERR(pages[index])) {
+-			if (PTR_ERR(pages[index]) == -EINVAL) {
++			int err = PTR_ERR(pages[index]);
++
++			if (err == -EINVAL) {
+ 				pr_err_client(cl, "inode->i_blkbits=%hhu\n",
+ 						inode->i_blkbits);
+ 			}
+@@ -1273,7 +1275,7 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
+ 			BUG_ON(ceph_wbc->locked_pages == 0);
+ 
+ 			pages[index] = NULL;
+-			return PTR_ERR(pages[index]);
++			return err;
+ 		}
+ 	} else {
+ 		pages[index] = &folio->page;
+@@ -1687,6 +1689,7 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 
+ process_folio_batch:
+ 		rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
++		ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
+ 		if (rc)
+ 			goto release_folios;
+ 
+@@ -1695,8 +1698,6 @@ static int ceph_writepages_start(struct address_space *mapping,
+ 			goto release_folios;
+ 
+ 		if (ceph_wbc.processed_in_fbatch) {
+-			ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
+-
+ 			if (folio_batch_count(&ceph_wbc.fbatch) == 0 &&
+ 			    ceph_wbc.locked_pages < ceph_wbc.max_pages) {
+ 				doutc(cl, "reached end fbatch, trying for more\n");
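
The move_dirty_folio_in_page_array() hunk fixes an error-path slip: the old code NULLed pages[index] and then returned PTR_ERR(pages[index]), which is PTR_ERR(NULL), i.e. 0, silently reporting success. Saving the error before the slot is cleared is the whole fix. A tiny standalone model of the pattern (toy ERR_PTR encoding, not the kernel macros):

#include <stdio.h>

/* Toy ERR_PTR convention: small negative values encoded in a pointer. */
#define ERR_PTR(e)	((void *)(long)(e))
#define PTR_ERR(p)	((long)(p))
#define IS_ERR(p)	((unsigned long)(p) >= (unsigned long)-4095)

static void *slots[4];

static long claim_slot(int i, void *p)
{
	slots[i] = p;
	if (IS_ERR(slots[i])) {
		long err = PTR_ERR(slots[i]);	/* save before clearing */

		slots[i] = NULL;
		return err;	/* returning PTR_ERR(slots[i]) here would be 0 */
	}
	return 0;
}

int main(void)
{
	printf("%ld\n", claim_slot(0, ERR_PTR(-22)));	/* -22, not 0 */
	return 0;
}
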
+diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
+index fdd404fc81124d..f3fe786b4143d4 100644
+--- a/fs/ceph/debugfs.c
++++ b/fs/ceph/debugfs.c
+@@ -55,8 +55,6 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 	struct ceph_mds_client *mdsc = fsc->mdsc;
+ 	struct ceph_mds_request *req;
+ 	struct rb_node *rp;
+-	int pathlen = 0;
+-	u64 pathbase;
+ 	char *path;
+ 
+ 	mutex_lock(&mdsc->mutex);
+@@ -81,8 +79,8 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 		if (req->r_inode) {
+ 			seq_printf(s, " #%llx", ceph_ino(req->r_inode));
+ 		} else if (req->r_dentry) {
+-			path = ceph_mdsc_build_path(mdsc, req->r_dentry, &pathlen,
+-						    &pathbase, 0);
++			struct ceph_path_info path_info;
++			path = ceph_mdsc_build_path(mdsc, req->r_dentry, &path_info, 0);
+ 			if (IS_ERR(path))
+ 				path = NULL;
+ 			spin_lock(&req->r_dentry->d_lock);
+@@ -91,7 +89,7 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 				   req->r_dentry,
+ 				   path ? path : "");
+ 			spin_unlock(&req->r_dentry->d_lock);
+-			ceph_mdsc_free_path(path, pathlen);
++			ceph_mdsc_free_path_info(&path_info);
+ 		} else if (req->r_path1) {
+ 			seq_printf(s, " #%llx/%s", req->r_ino1.ino,
+ 				   req->r_path1);
+@@ -100,8 +98,8 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 		}
+ 
+ 		if (req->r_old_dentry) {
+-			path = ceph_mdsc_build_path(mdsc, req->r_old_dentry, &pathlen,
+-						    &pathbase, 0);
++			struct ceph_path_info path_info;
++			path = ceph_mdsc_build_path(mdsc, req->r_old_dentry, &path_info, 0);
+ 			if (IS_ERR(path))
+ 				path = NULL;
+ 			spin_lock(&req->r_old_dentry->d_lock);
+@@ -111,7 +109,7 @@ static int mdsc_show(struct seq_file *s, void *p)
+ 				   req->r_old_dentry,
+ 				   path ? path : "");
+ 			spin_unlock(&req->r_old_dentry->d_lock);
+-			ceph_mdsc_free_path(path, pathlen);
++			ceph_mdsc_free_path_info(&path_info);
+ 		} else if (req->r_path2 && req->r_op != CEPH_MDS_OP_SYMLINK) {
+ 			if (req->r_ino2.ino)
+ 				seq_printf(s, " #%llx/%s", req->r_ino2.ino,
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index a321aa6d0ed226..e7af63bf77b469 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1272,10 +1272,8 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
+ 
+ 	/* If op failed, mark everyone involved for errors */
+ 	if (result) {
+-		int pathlen = 0;
+-		u64 base = 0;
+-		char *path = ceph_mdsc_build_path(mdsc, dentry, &pathlen,
+-						  &base, 0);
++		struct ceph_path_info path_info = {0};
++		char *path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ 
+ 		/* mark error on parent + clear complete */
+ 		mapping_set_error(req->r_parent->i_mapping, result);
+@@ -1289,8 +1287,8 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
+ 		mapping_set_error(req->r_old_inode->i_mapping, result);
+ 
+ 		pr_warn_client(cl, "failure path=(%llx)%s result=%d!\n",
+-			       base, IS_ERR(path) ? "<<bad>>" : path, result);
+-		ceph_mdsc_free_path(path, pathlen);
++			       path_info.vino.ino, IS_ERR(path) ? "<<bad>>" : path, result);
++		ceph_mdsc_free_path_info(&path_info);
+ 	}
+ out:
+ 	iput(req->r_old_inode);
+@@ -1348,8 +1346,6 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
+ 	int err = -EROFS;
+ 	int op;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 
+ 	if (ceph_snap(dir) == CEPH_SNAPDIR) {
+ 		/* rmdir .snap/foo is RMSNAP */
+@@ -1368,14 +1364,15 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
+ 	if (!dn) {
+ 		try_async = false;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			try_async = false;
+ 			err = 0;
+ 		} else {
+ 			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dn);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index a7254cab44cc2e..6587c2d5af1e08 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -368,8 +368,6 @@ int ceph_open(struct inode *inode, struct file *file)
+ 	int flags, fmode, wanted;
+ 	struct dentry *dentry;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 	bool do_sync = false;
+ 	int mask = MAY_READ;
+ 
+@@ -399,14 +397,15 @@ int ceph_open(struct inode *inode, struct file *file)
+ 	if (!dentry) {
+ 		do_sync = true;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			do_sync = true;
+ 			err = 0;
+ 		} else {
+ 			err = ceph_mds_check_access(mdsc, path, mask);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dentry);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
+@@ -614,15 +613,13 @@ static void ceph_async_create_cb(struct ceph_mds_client *mdsc,
+ 	mapping_set_error(req->r_parent->i_mapping, result);
+ 
+ 	if (result) {
+-		int pathlen = 0;
+-		u64 base = 0;
+-		char *path = ceph_mdsc_build_path(mdsc, req->r_dentry, &pathlen,
+-						  &base, 0);
++		struct ceph_path_info path_info = {0};
++		char *path = ceph_mdsc_build_path(mdsc, req->r_dentry, &path_info, 0);
+ 
+ 		pr_warn_client(cl,
+ 			"async create failure path=(%llx)%s result=%d!\n",
+-			base, IS_ERR(path) ? "<<bad>>" : path, result);
+-		ceph_mdsc_free_path(path, pathlen);
++			path_info.vino.ino, IS_ERR(path) ? "<<bad>>" : path, result);
++		ceph_mdsc_free_path_info(&path_info);
+ 
+ 		ceph_dir_clear_complete(req->r_parent);
+ 		if (!d_unhashed(dentry))
+@@ -791,8 +788,6 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 	int mask;
+ 	int err;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 
+ 	doutc(cl, "%p %llx.%llx dentry %p '%pd' %s flags %d mode 0%o\n",
+ 	      dir, ceph_vinop(dir), dentry, dentry,
+@@ -814,7 +809,8 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 	if (!dn) {
+ 		try_async = false;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			try_async = false;
+ 			err = 0;
+@@ -826,7 +822,7 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ 				mask |= MAY_WRITE;
+ 			err = ceph_mds_check_access(mdsc, path, mask);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dn);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 06cd2963e41ee0..14e59c15dd68dd 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -55,6 +55,52 @@ static int ceph_set_ino_cb(struct inode *inode, void *data)
+ 	return 0;
+ }
+ 
++/*
++ * Check if the parent inode matches the vino from directory reply info
++ */
++static inline bool ceph_vino_matches_parent(struct inode *parent,
++					    struct ceph_vino vino)
++{
++	return ceph_ino(parent) == vino.ino && ceph_snap(parent) == vino.snap;
++}
++
++/*
++ * Validate that the directory inode referenced by @req->r_parent matches the
++ * inode number and snapshot id contained in the reply's directory record.  If
++ * they do not match – which can theoretically happen if the parent dentry was
++ * moved between the time the request was issued and the reply arrived – fall
++ * back to looking up the correct inode in the inode cache.
++ *
++ * A reference is *always* returned.  Callers that receive a different inode
++ * than the original @parent are responsible for dropping the extra reference
++ * once the reply has been processed.
++ */
++static struct inode *ceph_get_reply_dir(struct super_block *sb,
++					struct inode *parent,
++					struct ceph_mds_reply_info_parsed *rinfo)
++{
++	struct ceph_vino vino;
++
++	if (unlikely(!rinfo->diri.in))
++		return parent; /* nothing to compare against */
++
++	/* If we didn't have a cached parent inode to begin with, just bail out. */
++	if (!parent)
++		return NULL;
++
++	vino.ino  = le64_to_cpu(rinfo->diri.in->ino);
++	vino.snap = le64_to_cpu(rinfo->diri.in->snapid);
++
++	if (likely(ceph_vino_matches_parent(parent, vino)))
++		return parent; /* matches – use the original reference */
++
++	/* Mismatch – this should be rare.  Emit a WARN and obtain the correct inode. */
++	WARN_ONCE(1, "ceph: reply dir mismatch (parent valid %llx.%llx reply %llx.%llx)\n",
++		  ceph_ino(parent), ceph_snap(parent), vino.ino, vino.snap);
++
++	return ceph_get_inode(sb, vino, NULL);
++}
++
+ /**
+  * ceph_new_inode - allocate a new inode in advance of an expected create
+  * @dir: parent directory for new inode
+@@ -1523,6 +1569,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 	struct ceph_vino tvino, dvino;
+ 	struct ceph_fs_client *fsc = ceph_sb_to_fs_client(sb);
+ 	struct ceph_client *cl = fsc->client;
++	struct inode *parent_dir = NULL;
+ 	int err = 0;
+ 
+ 	doutc(cl, "%p is_dentry %d is_target %d\n", req,
+@@ -1536,10 +1583,17 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 	}
+ 
+ 	if (rinfo->head->is_dentry) {
+-		struct inode *dir = req->r_parent;
+-
+-		if (dir) {
+-			err = ceph_fill_inode(dir, NULL, &rinfo->diri,
++		/*
++		 * r_parent may be stale, in cases when R_PARENT_LOCKED is not set,
++		 * so we need to get the correct inode
++		 */
++		parent_dir = ceph_get_reply_dir(sb, req->r_parent, rinfo);
++		if (unlikely(IS_ERR(parent_dir))) {
++			err = PTR_ERR(parent_dir);
++			goto done;
++		}
++		if (parent_dir) {
++			err = ceph_fill_inode(parent_dir, NULL, &rinfo->diri,
+ 					      rinfo->dirfrag, session, -1,
+ 					      &req->r_caps_reservation);
+ 			if (err < 0)
+@@ -1548,14 +1602,14 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 			WARN_ON_ONCE(1);
+ 		}
+ 
+-		if (dir && req->r_op == CEPH_MDS_OP_LOOKUPNAME &&
++		if (parent_dir && req->r_op == CEPH_MDS_OP_LOOKUPNAME &&
+ 		    test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags) &&
+ 		    !test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags)) {
+ 			bool is_nokey = false;
+ 			struct qstr dname;
+ 			struct dentry *dn, *parent;
+ 			struct fscrypt_str oname = FSTR_INIT(NULL, 0);
+-			struct ceph_fname fname = { .dir	= dir,
++			struct ceph_fname fname = { .dir	= parent_dir,
+ 						    .name	= rinfo->dname,
+ 						    .ctext	= rinfo->altname,
+ 						    .name_len	= rinfo->dname_len,
+@@ -1564,10 +1618,10 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 			BUG_ON(!rinfo->head->is_target);
+ 			BUG_ON(req->r_dentry);
+ 
+-			parent = d_find_any_alias(dir);
++			parent = d_find_any_alias(parent_dir);
+ 			BUG_ON(!parent);
+ 
+-			err = ceph_fname_alloc_buffer(dir, &oname);
++			err = ceph_fname_alloc_buffer(parent_dir, &oname);
+ 			if (err < 0) {
+ 				dput(parent);
+ 				goto done;
+@@ -1576,7 +1630,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 			err = ceph_fname_to_usr(&fname, NULL, &oname, &is_nokey);
+ 			if (err < 0) {
+ 				dput(parent);
+-				ceph_fname_free_buffer(dir, &oname);
++				ceph_fname_free_buffer(parent_dir, &oname);
+ 				goto done;
+ 			}
+ 			dname.name = oname.name;
+@@ -1595,7 +1649,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 				      dname.len, dname.name, dn);
+ 				if (!dn) {
+ 					dput(parent);
+-					ceph_fname_free_buffer(dir, &oname);
++					ceph_fname_free_buffer(parent_dir, &oname);
+ 					err = -ENOMEM;
+ 					goto done;
+ 				}
+@@ -1610,12 +1664,12 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 				    ceph_snap(d_inode(dn)) != tvino.snap)) {
+ 				doutc(cl, " dn %p points to wrong inode %p\n",
+ 				      dn, d_inode(dn));
+-				ceph_dir_clear_ordered(dir);
++				ceph_dir_clear_ordered(parent_dir);
+ 				d_delete(dn);
+ 				dput(dn);
+ 				goto retry_lookup;
+ 			}
+-			ceph_fname_free_buffer(dir, &oname);
++			ceph_fname_free_buffer(parent_dir, &oname);
+ 
+ 			req->r_dentry = dn;
+ 			dput(parent);
+@@ -1794,6 +1848,9 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ 					    &dvino, ptvino);
+ 	}
+ done:
++	/* Drop extra ref from ceph_get_reply_dir() if it returned a new inode */
++	if (unlikely(!IS_ERR_OR_NULL(parent_dir) && parent_dir != req->r_parent))
++		iput(parent_dir);
+ 	doutc(cl, "done err=%d\n", err);
+ 	return err;
+ }
+@@ -2488,22 +2545,21 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
+ 	int truncate_retry = 20; /* The RMW will take around 50ms */
+ 	struct dentry *dentry;
+ 	char *path;
+-	int pathlen;
+-	u64 pathbase;
+ 	bool do_sync = false;
+ 
+ 	dentry = d_find_alias(inode);
+ 	if (!dentry) {
+ 		do_sync = true;
+ 	} else {
+-		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
++		struct ceph_path_info path_info;
++		path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ 		if (IS_ERR(path)) {
+ 			do_sync = true;
+ 			err = 0;
+ 		} else {
+ 			err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+ 		}
+-		ceph_mdsc_free_path(path, pathlen);
++		ceph_mdsc_free_path_info(&path_info);
+ 		dput(dentry);
+ 
+ 		/* For none EACCES cases will let the MDS do the mds auth check */
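
ceph_get_reply_dir() above returns the cached parent when the reply's vino matches, and otherwise looks up and references the correct inode; the done: label then drops the extra reference only when a different inode came back. A toy refcount sketch of that contract, with hypothetical names (struct obj, reply_dir) standing in for inodes:

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refs;
	int id;
};

static struct obj *obj_get(struct obj *o) { o->refs++; return o; }
static void obj_put(struct obj *o) { if (--o->refs == 0) free(o); }

/* Return the cached parent if it still matches the reply, otherwise a
 * freshly referenced replacement (mirrors ceph_get_reply_dir above). */
static struct obj *reply_dir(struct obj *cached, int reply_id,
			     struct obj *lookup)
{
	if (cached && cached->id == reply_id)
		return cached;		/* original, no extra reference */
	return obj_get(lookup);		/* extra reference, caller drops */
}

int main(void)
{
	struct obj parent = { .refs = 1, .id = 7 };
	struct obj *other = malloc(sizeof(*other));
	struct obj *dir;

	*other = (struct obj){ .refs = 1, .id = 9 };

	dir = reply_dir(&parent, 9, other);	/* mismatch: new reference */
	if (dir != &parent)
		obj_put(dir);			/* the done: path in the hunk */
	printf("other refs=%d\n", other->refs);
	obj_put(other);
	return 0;
}
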
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 230e0c3f341f71..94f109ca853eba 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -2681,8 +2681,7 @@ static u8 *get_fscrypt_altname(const struct ceph_mds_request *req, u32 *plen)
+  * ceph_mdsc_build_path - build a path string to a given dentry
+  * @mdsc: mds client
+  * @dentry: dentry to which path should be built
+- * @plen: returned length of string
+- * @pbase: returned base inode number
++ * @path_info: output path, length, base ino+snap, and freepath ownership flag
+  * @for_wire: is this path going to be sent to the MDS?
+  *
+  * Build a string that represents the path to the dentry. This is mostly called
+@@ -2700,7 +2699,7 @@ static u8 *get_fscrypt_altname(const struct ceph_mds_request *req, u32 *plen)
+  *   foo/.snap/bar -> foo//bar
+  */
+ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+-			   int *plen, u64 *pbase, int for_wire)
++			   struct ceph_path_info *path_info, int for_wire)
+ {
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	struct dentry *cur;
+@@ -2810,16 +2809,28 @@ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+ 		return ERR_PTR(-ENAMETOOLONG);
+ 	}
+ 
+-	*pbase = base;
+-	*plen = PATH_MAX - 1 - pos;
++	/* Initialize the output structure */
++	memset(path_info, 0, sizeof(*path_info));
++
++	path_info->vino.ino = base;
++	path_info->pathlen = PATH_MAX - 1 - pos;
++	path_info->path = path + pos;
++	path_info->freepath = true;
++
++	/* Set snap from dentry if available */
++	if (d_inode(dentry))
++		path_info->vino.snap = ceph_snap(d_inode(dentry));
++	else
++		path_info->vino.snap = CEPH_NOSNAP;
++
+ 	doutc(cl, "on %p %d built %llx '%.*s'\n", dentry, d_count(dentry),
+-	      base, *plen, path + pos);
++	      base, PATH_MAX - 1 - pos, path + pos);
+ 	return path + pos;
+ }
+ 
+ static int build_dentry_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+-			     struct inode *dir, const char **ppath, int *ppathlen,
+-			     u64 *pino, bool *pfreepath, bool parent_locked)
++			     struct inode *dir, struct ceph_path_info *path_info,
++			     bool parent_locked)
+ {
+ 	char *path;
+ 
+@@ -2828,41 +2839,47 @@ static int build_dentry_path(struct ceph_mds_client *mdsc, struct dentry *dentry
+ 		dir = d_inode_rcu(dentry->d_parent);
+ 	if (dir && parent_locked && ceph_snap(dir) == CEPH_NOSNAP &&
+ 	    !IS_ENCRYPTED(dir)) {
+-		*pino = ceph_ino(dir);
++		path_info->vino.ino = ceph_ino(dir);
++		path_info->vino.snap = ceph_snap(dir);
+ 		rcu_read_unlock();
+-		*ppath = dentry->d_name.name;
+-		*ppathlen = dentry->d_name.len;
++		path_info->path = dentry->d_name.name;
++		path_info->pathlen = dentry->d_name.len;
++		path_info->freepath = false;
+ 		return 0;
+ 	}
+ 	rcu_read_unlock();
+-	path = ceph_mdsc_build_path(mdsc, dentry, ppathlen, pino, 1);
++	path = ceph_mdsc_build_path(mdsc, dentry, path_info, 1);
+ 	if (IS_ERR(path))
+ 		return PTR_ERR(path);
+-	*ppath = path;
+-	*pfreepath = true;
++	/*
++	 * ceph_mdsc_build_path already fills path_info, including snap handling.
++	 */
+ 	return 0;
+ }
+ 
+-static int build_inode_path(struct inode *inode,
+-			    const char **ppath, int *ppathlen, u64 *pino,
+-			    bool *pfreepath)
++static int build_inode_path(struct inode *inode, struct ceph_path_info *path_info)
+ {
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
+ 	struct dentry *dentry;
+ 	char *path;
+ 
+ 	if (ceph_snap(inode) == CEPH_NOSNAP) {
+-		*pino = ceph_ino(inode);
+-		*ppathlen = 0;
++		path_info->vino.ino = ceph_ino(inode);
++		path_info->vino.snap = ceph_snap(inode);
++		path_info->pathlen = 0;
++		path_info->freepath = false;
+ 		return 0;
+ 	}
+ 	dentry = d_find_alias(inode);
+-	path = ceph_mdsc_build_path(mdsc, dentry, ppathlen, pino, 1);
++	path = ceph_mdsc_build_path(mdsc, dentry, path_info, 1);
+ 	dput(dentry);
+ 	if (IS_ERR(path))
+ 		return PTR_ERR(path);
+-	*ppath = path;
+-	*pfreepath = true;
++	/*
++	 * ceph_mdsc_build_path already fills path_info, including snap from dentry.
++	 * Override with inode's snap since that's what this function is for.
++	 */
++	path_info->vino.snap = ceph_snap(inode);
+ 	return 0;
+ }
+ 
+@@ -2872,26 +2889,32 @@ static int build_inode_path(struct inode *inode,
+  */
+ static int set_request_path_attr(struct ceph_mds_client *mdsc, struct inode *rinode,
+ 				 struct dentry *rdentry, struct inode *rdiri,
+-				 const char *rpath, u64 rino, const char **ppath,
+-				 int *pathlen, u64 *ino, bool *freepath,
++				 const char *rpath, u64 rino,
++				 struct ceph_path_info *path_info,
+ 				 bool parent_locked)
+ {
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	int r = 0;
+ 
++	/* Initialize the output structure */
++	memset(path_info, 0, sizeof(*path_info));
++
+ 	if (rinode) {
+-		r = build_inode_path(rinode, ppath, pathlen, ino, freepath);
++		r = build_inode_path(rinode, path_info);
+ 		doutc(cl, " inode %p %llx.%llx\n", rinode, ceph_ino(rinode),
+ 		      ceph_snap(rinode));
+ 	} else if (rdentry) {
+-		r = build_dentry_path(mdsc, rdentry, rdiri, ppath, pathlen, ino,
+-					freepath, parent_locked);
+-		doutc(cl, " dentry %p %llx/%.*s\n", rdentry, *ino, *pathlen, *ppath);
++		r = build_dentry_path(mdsc, rdentry, rdiri, path_info, parent_locked);
++		doutc(cl, " dentry %p %llx/%.*s\n", rdentry, path_info->vino.ino,
++		      path_info->pathlen, path_info->path);
+ 	} else if (rpath || rino) {
+-		*ino = rino;
+-		*ppath = rpath;
+-		*pathlen = rpath ? strlen(rpath) : 0;
+-		doutc(cl, " path %.*s\n", *pathlen, rpath);
++		path_info->vino.ino = rino;
++		path_info->vino.snap = CEPH_NOSNAP;
++		path_info->path = rpath;
++		path_info->pathlen = rpath ? strlen(rpath) : 0;
++		path_info->freepath = false;
++
++		doutc(cl, " path %.*s\n", path_info->pathlen, rpath);
+ 	}
+ 
+ 	return r;
+@@ -2968,11 +2991,8 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	struct ceph_client *cl = mdsc->fsc->client;
+ 	struct ceph_msg *msg;
+ 	struct ceph_mds_request_head_legacy *lhead;
+-	const char *path1 = NULL;
+-	const char *path2 = NULL;
+-	u64 ino1 = 0, ino2 = 0;
+-	int pathlen1 = 0, pathlen2 = 0;
+-	bool freepath1 = false, freepath2 = false;
++	struct ceph_path_info path_info1 = {0};
++	struct ceph_path_info path_info2 = {0};
+ 	struct dentry *old_dentry = NULL;
+ 	int len;
+ 	u16 releases;
+@@ -2982,25 +3002,49 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	u16 request_head_version = mds_supported_head_version(session);
+ 	kuid_t caller_fsuid = req->r_cred->fsuid;
+ 	kgid_t caller_fsgid = req->r_cred->fsgid;
++	bool parent_locked = test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags);
+ 
+ 	ret = set_request_path_attr(mdsc, req->r_inode, req->r_dentry,
+-			      req->r_parent, req->r_path1, req->r_ino1.ino,
+-			      &path1, &pathlen1, &ino1, &freepath1,
+-			      test_bit(CEPH_MDS_R_PARENT_LOCKED,
+-					&req->r_req_flags));
++				    req->r_parent, req->r_path1, req->r_ino1.ino,
++				    &path_info1, parent_locked);
+ 	if (ret < 0) {
+ 		msg = ERR_PTR(ret);
+ 		goto out;
+ 	}
+ 
++	/*
++	 * When the parent directory's i_rwsem is *not* locked, req->r_parent may
++	 * have become stale (e.g. after a concurrent rename) between the time the
++	 * dentry was looked up and now.  If we detect that the stored r_parent
++	 * does not match the inode number we just encoded for the request, switch
++	 * to the correct inode so that the MDS receives a valid parent reference.
++	 */
++	if (!parent_locked && req->r_parent && path_info1.vino.ino &&
++	    ceph_ino(req->r_parent) != path_info1.vino.ino) {
++		struct inode *old_parent = req->r_parent;
++		struct inode *correct_dir = ceph_get_inode(mdsc->fsc->sb, path_info1.vino, NULL);
++		if (!IS_ERR(correct_dir)) {
++			WARN_ONCE(1, "ceph: r_parent mismatch (had %llx wanted %llx) - updating\n",
++			          ceph_ino(old_parent), path_info1.vino.ino);
++			/*
++			 * Transfer CEPH_CAP_PIN from the old parent to the new one.
++			 * The pin was taken earlier in ceph_mdsc_submit_request().
++			 */
++			ceph_put_cap_refs(ceph_inode(old_parent), CEPH_CAP_PIN);
++			iput(old_parent);
++			req->r_parent = correct_dir;
++			ceph_get_cap_refs(ceph_inode(req->r_parent), CEPH_CAP_PIN);
++		}
++	}
++
+ 	/* If r_old_dentry is set, then assume that its parent is locked */
+ 	if (req->r_old_dentry &&
+ 	    !(req->r_old_dentry->d_flags & DCACHE_DISCONNECTED))
+ 		old_dentry = req->r_old_dentry;
+ 	ret = set_request_path_attr(mdsc, NULL, old_dentry,
+-			      req->r_old_dentry_dir,
+-			      req->r_path2, req->r_ino2.ino,
+-			      &path2, &pathlen2, &ino2, &freepath2, true);
++				    req->r_old_dentry_dir,
++				    req->r_path2, req->r_ino2.ino,
++				    &path_info2, true);
+ 	if (ret < 0) {
+ 		msg = ERR_PTR(ret);
+ 		goto out_free1;
+@@ -3031,7 +3075,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 
+ 	/* filepaths */
+ 	len += 2 * (1 + sizeof(u32) + sizeof(u64));
+-	len += pathlen1 + pathlen2;
++	len += path_info1.pathlen + path_info2.pathlen;
+ 
+ 	/* cap releases */
+ 	len += sizeof(struct ceph_mds_request_release) *
+@@ -3039,9 +3083,9 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 		 !!req->r_old_inode_drop + !!req->r_old_dentry_drop);
+ 
+ 	if (req->r_dentry_drop)
+-		len += pathlen1;
++		len += path_info1.pathlen;
+ 	if (req->r_old_dentry_drop)
+-		len += pathlen2;
++		len += path_info2.pathlen;
+ 
+ 	/* MClientRequest tail */
+ 
+@@ -3154,8 +3198,8 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	lhead->ino = cpu_to_le64(req->r_deleg_ino);
+ 	lhead->args = req->r_args;
+ 
+-	ceph_encode_filepath(&p, end, ino1, path1);
+-	ceph_encode_filepath(&p, end, ino2, path2);
++	ceph_encode_filepath(&p, end, path_info1.vino.ino, path_info1.path);
++	ceph_encode_filepath(&p, end, path_info2.vino.ino, path_info2.path);
+ 
+ 	/* make note of release offset, in case we need to replay */
+ 	req->r_request_release_offset = p - msg->front.iov_base;
+@@ -3218,11 +3262,9 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ 	msg->hdr.data_off = cpu_to_le16(0);
+ 
+ out_free2:
+-	if (freepath2)
+-		ceph_mdsc_free_path((char *)path2, pathlen2);
++	ceph_mdsc_free_path_info(&path_info2);
+ out_free1:
+-	if (freepath1)
+-		ceph_mdsc_free_path((char *)path1, pathlen1);
++	ceph_mdsc_free_path_info(&path_info1);
+ out:
+ 	return msg;
+ out_err:
+@@ -4579,24 +4621,20 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 	struct ceph_pagelist *pagelist = recon_state->pagelist;
+ 	struct dentry *dentry;
+ 	struct ceph_cap *cap;
+-	char *path;
+-	int pathlen = 0, err;
+-	u64 pathbase;
++	struct ceph_path_info path_info = {0};
++	int err;
+ 	u64 snap_follows;
+ 
+ 	dentry = d_find_primary(inode);
+ 	if (dentry) {
+ 		/* set pathbase to parent dir when msg_version >= 2 */
+-		path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase,
++		char *path = ceph_mdsc_build_path(mdsc, dentry, &path_info,
+ 					    recon_state->msg_version >= 2);
+ 		dput(dentry);
+ 		if (IS_ERR(path)) {
+ 			err = PTR_ERR(path);
+ 			goto out_err;
+ 		}
+-	} else {
+-		path = NULL;
+-		pathbase = 0;
+ 	}
+ 
+ 	spin_lock(&ci->i_ceph_lock);
+@@ -4629,7 +4667,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 		rec.v2.wanted = cpu_to_le32(__ceph_caps_wanted(ci));
+ 		rec.v2.issued = cpu_to_le32(cap->issued);
+ 		rec.v2.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+-		rec.v2.pathbase = cpu_to_le64(pathbase);
++		rec.v2.pathbase = cpu_to_le64(path_info.vino.ino);
+ 		rec.v2.flock_len = (__force __le32)
+ 			((ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) ? 0 : 1);
+ 	} else {
+@@ -4644,7 +4682,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 		ts = inode_get_atime(inode);
+ 		ceph_encode_timespec64(&rec.v1.atime, &ts);
+ 		rec.v1.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+-		rec.v1.pathbase = cpu_to_le64(pathbase);
++		rec.v1.pathbase = cpu_to_le64(path_info.vino.ino);
+ 	}
+ 
+ 	if (list_empty(&ci->i_cap_snaps)) {
+@@ -4706,7 +4744,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 			    sizeof(struct ceph_filelock);
+ 		rec.v2.flock_len = cpu_to_le32(struct_len);
+ 
+-		struct_len += sizeof(u32) + pathlen + sizeof(rec.v2);
++		struct_len += sizeof(u32) + path_info.pathlen + sizeof(rec.v2);
+ 
+ 		if (struct_v >= 2)
+ 			struct_len += sizeof(u64); /* snap_follows */
+@@ -4730,7 +4768,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 			ceph_pagelist_encode_8(pagelist, 1);
+ 			ceph_pagelist_encode_32(pagelist, struct_len);
+ 		}
+-		ceph_pagelist_encode_string(pagelist, path, pathlen);
++		ceph_pagelist_encode_string(pagelist, (char *)path_info.path, path_info.pathlen);
+ 		ceph_pagelist_append(pagelist, &rec, sizeof(rec.v2));
+ 		ceph_locks_to_pagelist(flocks, pagelist,
+ 				       num_fcntl_locks, num_flock_locks);
+@@ -4741,17 +4779,17 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ 	} else {
+ 		err = ceph_pagelist_reserve(pagelist,
+ 					    sizeof(u64) + sizeof(u32) +
+-					    pathlen + sizeof(rec.v1));
++					    path_info.pathlen + sizeof(rec.v1));
+ 		if (err)
+ 			goto out_err;
+ 
+ 		ceph_pagelist_encode_64(pagelist, ceph_ino(inode));
+-		ceph_pagelist_encode_string(pagelist, path, pathlen);
++		ceph_pagelist_encode_string(pagelist, (char *)path_info.path, path_info.pathlen);
+ 		ceph_pagelist_append(pagelist, &rec, sizeof(rec.v1));
+ 	}
+ 
+ out_err:
+-	ceph_mdsc_free_path(path, pathlen);
++	ceph_mdsc_free_path_info(&path_info);
+ 	if (!err)
+ 		recon_state->nr_caps++;
+ 	return err;
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 3e2a6fa7c19aab..0428a5eaf28c65 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -617,14 +617,24 @@ extern int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath,
+ 
+ extern void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc);
+ 
+-static inline void ceph_mdsc_free_path(char *path, int len)
++/*
++ * Structure to group path-related output parameters for build_*_path functions
++ */
++struct ceph_path_info {
++	const char *path;
++	int pathlen;
++	struct ceph_vino vino;
++	bool freepath;
++};
++
++static inline void ceph_mdsc_free_path_info(const struct ceph_path_info *path_info)
+ {
+-	if (!IS_ERR_OR_NULL(path))
+-		__putname(path - (PATH_MAX - 1 - len));
++	if (path_info && path_info->freepath && !IS_ERR_OR_NULL(path_info->path))
++		__putname((char *)path_info->path - (PATH_MAX - 1 - path_info->pathlen));
+ }
+ 
+ extern char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc,
+-				  struct dentry *dentry, int *plen, u64 *base,
++				  struct dentry *dentry, struct ceph_path_info *path_info,
+ 				  int for_wire);
+ 
+ extern void __ceph_mdsc_drop_dentry_lease(struct dentry *dentry);
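
The ceph refactor above threads a single struct ceph_path_info through the path builders instead of four loose out-parameters, with a freepath flag recording whether the buffer is owned or borrowed, so every caller can use one cleanup helper. A minimal sketch of that ownership-flag idea in plain C (hypothetical names, not the ceph API):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Grouped path result with an ownership flag, like ceph_path_info. */
struct path_info {
	const char *path;
	int pathlen;
	int freepath;	/* did we allocate path, or borrow it? */
};

/* Either borrow a dentry-style name or build (allocate) a full path. */
static int build_path(struct path_info *pi, const char *name, int full)
{
	memset(pi, 0, sizeof(*pi));
	if (!full) {
		pi->path = name;		/* borrowed */
		pi->pathlen = (int)strlen(name);
		return 0;
	}
	char *copy = malloc(strlen(name) + 1);
	if (!copy)
		return -1;
	strcpy(copy, name);
	pi->path = copy;			/* owned */
	pi->pathlen = (int)strlen(copy);
	pi->freepath = 1;
	return 0;
}

/* One free helper shared by every caller, like ceph_mdsc_free_path_info(). */
static void free_path_info(struct path_info *pi)
{
	if (pi->freepath)
		free((char *)pi->path);
}

int main(void)
{
	struct path_info pi;

	if (build_path(&pi, "/a/b/c", 1) == 0) {
		printf("%.*s (%d)\n", pi.pathlen, pi.path, pi.pathlen);
		free_path_info(&pi);
	}
	return 0;
}

The freepath flag is what lets the short dentry-name case in build_dentry_path() skip allocation entirely while still sharing one cleanup path with the allocating cases.
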
+diff --git a/fs/coredump.c b/fs/coredump.c
+index f217ebf2b3b68f..012915262d11b7 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1263,11 +1263,15 @@ static int proc_dostring_coredump(const struct ctl_table *table, int write,
+ 	ssize_t retval;
+ 	char old_core_pattern[CORENAME_MAX_SIZE];
+ 
++	if (write)
++		return proc_dostring(table, write, buffer, lenp, ppos);
++
+ 	retval = strscpy(old_core_pattern, core_pattern, CORENAME_MAX_SIZE);
+ 
+ 	error = proc_dostring(table, write, buffer, lenp, ppos);
+ 	if (error)
+ 		return error;
++
+ 	if (!check_coredump_socket()) {
+ 		strscpy(core_pattern, old_core_pattern, retval + 1);
+ 		return -EINVAL;
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 16e4a6bd9b9737..dd7d86809c1881 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -65,10 +65,10 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+ }
+ 
+ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
+-			 erofs_off_t offset, bool need_kmap)
++			 erofs_off_t offset)
+ {
+ 	erofs_init_metabuf(buf, sb);
+-	return erofs_bread(buf, offset, need_kmap);
++	return erofs_bread(buf, offset, true);
+ }
+ 
+ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+@@ -118,7 +118,7 @@ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+ 	pos = ALIGN(erofs_iloc(inode) + vi->inode_isize +
+ 		    vi->xattr_isize, unit) + unit * chunknr;
+ 
+-	idx = erofs_read_metabuf(&buf, sb, pos, true);
++	idx = erofs_read_metabuf(&buf, sb, pos);
+ 	if (IS_ERR(idx)) {
+ 		err = PTR_ERR(idx);
+ 		goto out;
+@@ -299,7 +299,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ 		struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+ 
+ 		iomap->type = IOMAP_INLINE;
+-		ptr = erofs_read_metabuf(&buf, sb, mdev.m_pa, true);
++		ptr = erofs_read_metabuf(&buf, sb, mdev.m_pa);
+ 		if (IS_ERR(ptr))
+ 			return PTR_ERR(ptr);
+ 		iomap->inline_data = ptr;
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 91781718199e2a..3ee082476c8c53 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -115,7 +115,7 @@ static int erofs_fileio_scan_folio(struct erofs_fileio *io, struct folio *folio)
+ 			void *src;
+ 
+ 			src = erofs_read_metabuf(&buf, inode->i_sb,
+-						 map->m_pa + ofs, true);
++						 map->m_pa + ofs);
+ 			if (IS_ERR(src)) {
+ 				err = PTR_ERR(src);
+ 				break;
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 34517ca9df9157..9a8ee646e51d9d 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -274,7 +274,7 @@ static int erofs_fscache_data_read_slice(struct erofs_fscache_rq *req)
+ 		size_t size = map.m_llen;
+ 		void *src;
+ 
+-		src = erofs_read_metabuf(&buf, sb, map.m_pa, true);
++		src = erofs_read_metabuf(&buf, sb, map.m_pa);
+ 		if (IS_ERR(src))
+ 			return PTR_ERR(src);
+ 
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index a0ae0b4f7b012a..47215c5e33855b 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -39,10 +39,10 @@ static int erofs_read_inode(struct inode *inode)
+ 	void *ptr;
+ 	int err = 0;
+ 
+-	ptr = erofs_read_metabuf(&buf, sb, erofs_pos(sb, blkaddr), true);
++	ptr = erofs_read_metabuf(&buf, sb, erofs_pos(sb, blkaddr));
+ 	if (IS_ERR(ptr)) {
+ 		err = PTR_ERR(ptr);
+-		erofs_err(sb, "failed to get inode (nid: %llu) page, err %d",
++		erofs_err(sb, "failed to read inode meta block (nid: %llu): %d",
+ 			  vi->nid, err);
+ 		goto err_out;
+ 	}
+@@ -78,10 +78,10 @@ static int erofs_read_inode(struct inode *inode)
+ 
+ 			memcpy(&copied, dic, gotten);
+ 			ptr = erofs_read_metabuf(&buf, sb,
+-					erofs_pos(sb, blkaddr + 1), true);
++					erofs_pos(sb, blkaddr + 1));
+ 			if (IS_ERR(ptr)) {
+ 				err = PTR_ERR(ptr);
+-				erofs_err(sb, "failed to get inode payload block (nid: %llu), err %d",
++				erofs_err(sb, "failed to read inode payload block (nid: %llu): %d",
+ 					  vi->nid, err);
+ 				goto err_out;
+ 			}
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 06b867d2fc3b7c..a7699114f6fe6d 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -385,7 +385,7 @@ void erofs_put_metabuf(struct erofs_buf *buf);
+ void *erofs_bread(struct erofs_buf *buf, erofs_off_t offset, bool need_kmap);
+ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb);
+ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
+-			 erofs_off_t offset, bool need_kmap);
++			 erofs_off_t offset);
+ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *dev);
+ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		 u64 start, u64 len);
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index cad87e4d669432..06c8981eea7f8c 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -141,7 +141,7 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
+ 	struct erofs_deviceslot *dis;
+ 	struct file *file;
+ 
+-	dis = erofs_read_metabuf(buf, sb, *pos, true);
++	dis = erofs_read_metabuf(buf, sb, *pos);
+ 	if (IS_ERR(dis))
+ 		return PTR_ERR(dis);
+ 
+@@ -268,7 +268,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ 	void *data;
+ 	int ret;
+ 
+-	data = erofs_read_metabuf(&buf, sb, 0, true);
++	data = erofs_read_metabuf(&buf, sb, 0);
+ 	if (IS_ERR(data)) {
+ 		erofs_err(sb, "cannot read erofs superblock");
+ 		return PTR_ERR(data);
+@@ -999,10 +999,22 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
+ 	return 0;
+ }
+ 
++static void erofs_evict_inode(struct inode *inode)
++{
++#ifdef CONFIG_FS_DAX
++	if (IS_DAX(inode))
++		dax_break_layout_final(inode);
++#endif
++
++	truncate_inode_pages_final(&inode->i_data);
++	clear_inode(inode);
++}
++
+ const struct super_operations erofs_sops = {
+ 	.put_super = erofs_put_super,
+ 	.alloc_inode = erofs_alloc_inode,
+ 	.free_inode = erofs_free_inode,
++	.evict_inode = erofs_evict_inode,
+ 	.statfs = erofs_statfs,
+ 	.show_options = erofs_show_options,
+ };
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 9bb53f00c2c629..e8f30eee29b441 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -805,6 +805,7 @@ static int z_erofs_pcluster_begin(struct z_erofs_frontend *fe)
+ 	struct erofs_map_blocks *map = &fe->map;
+ 	struct super_block *sb = fe->inode->i_sb;
+ 	struct z_erofs_pcluster *pcl = NULL;
++	void *ptr;
+ 	int ret;
+ 
+ 	DBG_BUGON(fe->pcl);
+@@ -854,15 +855,14 @@ static int z_erofs_pcluster_begin(struct z_erofs_frontend *fe)
+ 		/* bind cache first when cached decompression is preferred */
+ 		z_erofs_bind_cache(fe);
+ 	} else {
+-		void *mptr;
+-
+-		mptr = erofs_read_metabuf(&map->buf, sb, map->m_pa, false);
+-		if (IS_ERR(mptr)) {
+-			ret = PTR_ERR(mptr);
+-			erofs_err(sb, "failed to get inline data %d", ret);
++		erofs_init_metabuf(&map->buf, sb);
++		ptr = erofs_bread(&map->buf, map->m_pa, false);
++		if (IS_ERR(ptr)) {
++			ret = PTR_ERR(ptr);
++			erofs_err(sb, "failed to get inline folio %d", ret);
+ 			return ret;
+ 		}
+-		get_page(map->buf.page);
++		folio_get(page_folio(map->buf.page));
+ 		WRITE_ONCE(fe->pcl->compressed_bvecs[0].page, map->buf.page);
+ 		fe->pcl->pageofs_in = map->m_pa & ~PAGE_MASK;
+ 		fe->mode = Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE;
+@@ -1325,9 +1325,8 @@ static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err)
+ 
+ 	/* must handle all compressed pages before actual file pages */
+ 	if (pcl->from_meta) {
+-		page = pcl->compressed_bvecs[0].page;
++		folio_put(page_folio(pcl->compressed_bvecs[0].page));
+ 		WRITE_ONCE(pcl->compressed_bvecs[0].page, NULL);
+-		put_page(page);
+ 	} else {
+ 		/* managed folios are still left in compressed_bvecs[] */
+ 		for (i = 0; i < pclusterpages; ++i) {
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index f1a15ff22147ba..14d01474ad9dda 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -31,7 +31,7 @@ static int z_erofs_load_full_lcluster(struct z_erofs_maprecorder *m,
+ 	struct z_erofs_lcluster_index *di;
+ 	unsigned int advise;
+ 
+-	di = erofs_read_metabuf(&m->map->buf, inode->i_sb, pos, true);
++	di = erofs_read_metabuf(&m->map->buf, inode->i_sb, pos);
+ 	if (IS_ERR(di))
+ 		return PTR_ERR(di);
+ 	m->lcn = lcn;
+@@ -146,7 +146,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
+ 	else
+ 		return -EOPNOTSUPP;
+ 
+-	in = erofs_read_metabuf(&m->map->buf, m->inode->i_sb, pos, true);
++	in = erofs_read_metabuf(&m->map->buf, m->inode->i_sb, pos);
+ 	if (IS_ERR(in))
+ 		return PTR_ERR(in);
+ 
+@@ -403,10 +403,10 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ 		.inode = inode,
+ 		.map = map,
+ 	};
+-	int err = 0;
+-	unsigned int endoff, afmt;
++	unsigned int endoff;
+ 	unsigned long initial_lcn;
+ 	unsigned long long ofs, end;
++	int err;
+ 
+ 	ofs = flags & EROFS_GET_BLOCKS_FINDTAIL ? inode->i_size - 1 : map->m_la;
+ 	if (fragment && !(flags & EROFS_GET_BLOCKS_FINDTAIL) &&
+@@ -502,20 +502,15 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ 			err = -EFSCORRUPTED;
+ 			goto unmap_out;
+ 		}
+-		afmt = vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER ?
+-			Z_EROFS_COMPRESSION_INTERLACED :
+-			Z_EROFS_COMPRESSION_SHIFTED;
++		if (vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER)
++			map->m_algorithmformat = Z_EROFS_COMPRESSION_INTERLACED;
++		else
++			map->m_algorithmformat = Z_EROFS_COMPRESSION_SHIFTED;
++	} else if (m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
++		map->m_algorithmformat = vi->z_algorithmtype[1];
+ 	} else {
+-		afmt = m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2 ?
+-			vi->z_algorithmtype[1] : vi->z_algorithmtype[0];
+-		if (!(EROFS_I_SB(inode)->available_compr_algs & (1 << afmt))) {
+-			erofs_err(sb, "inconsistent algorithmtype %u for nid %llu",
+-				  afmt, vi->nid);
+-			err = -EFSCORRUPTED;
+-			goto unmap_out;
+-		}
++		map->m_algorithmformat = vi->z_algorithmtype[0];
+ 	}
+-	map->m_algorithmformat = afmt;
+ 
+ 	if ((flags & EROFS_GET_BLOCKS_FIEMAP) ||
+ 	    ((flags & EROFS_GET_BLOCKS_READMORE) &&
+@@ -551,7 +546,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 	map->m_flags = 0;
+ 	if (recsz <= offsetof(struct z_erofs_extent, pstart_hi)) {
+ 		if (recsz <= offsetof(struct z_erofs_extent, pstart_lo)) {
+-			ext = erofs_read_metabuf(&map->buf, sb, pos, true);
++			ext = erofs_read_metabuf(&map->buf, sb, pos);
+ 			if (IS_ERR(ext))
+ 				return PTR_ERR(ext);
+ 			pa = le64_to_cpu(*(__le64 *)ext);
+@@ -564,7 +559,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 		}
+ 
+ 		for (; lstart <= map->m_la; lstart += 1 << vi->z_lclusterbits) {
+-			ext = erofs_read_metabuf(&map->buf, sb, pos, true);
++			ext = erofs_read_metabuf(&map->buf, sb, pos);
+ 			if (IS_ERR(ext))
+ 				return PTR_ERR(ext);
+ 			map->m_plen = le32_to_cpu(ext->plen);
+@@ -584,7 +579,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 		for (l = 0, r = vi->z_extents; l < r; ) {
+ 			mid = l + (r - l) / 2;
+ 			ext = erofs_read_metabuf(&map->buf, sb,
+-						 pos + mid * recsz, true);
++						 pos + mid * recsz);
+ 			if (IS_ERR(ext))
+ 				return PTR_ERR(ext);
+ 
+@@ -641,14 +636,13 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ 	return 0;
+ }
+ 
+-static int z_erofs_fill_inode_lazy(struct inode *inode)
++static int z_erofs_fill_inode(struct inode *inode, struct erofs_map_blocks *map)
+ {
+ 	struct erofs_inode *const vi = EROFS_I(inode);
+ 	struct super_block *const sb = inode->i_sb;
+-	int err, headnr;
+-	erofs_off_t pos;
+-	struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+ 	struct z_erofs_map_header *h;
++	erofs_off_t pos;
++	int err = 0;
+ 
+ 	if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags)) {
+ 		/*
+@@ -662,12 +656,11 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 	if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_Z_BIT, TASK_KILLABLE))
+ 		return -ERESTARTSYS;
+ 
+-	err = 0;
+ 	if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags))
+ 		goto out_unlock;
+ 
+ 	pos = ALIGN(erofs_iloc(inode) + vi->inode_isize + vi->xattr_isize, 8);
+-	h = erofs_read_metabuf(&buf, sb, pos, true);
++	h = erofs_read_metabuf(&map->buf, sb, pos);
+ 	if (IS_ERR(h)) {
+ 		err = PTR_ERR(h);
+ 		goto out_unlock;
+@@ -699,22 +692,13 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 	else if (vi->z_advise & Z_EROFS_ADVISE_INLINE_PCLUSTER)
+ 		vi->z_idata_size = le16_to_cpu(h->h_idata_size);
+ 
+-	headnr = 0;
+-	if (vi->z_algorithmtype[0] >= Z_EROFS_COMPRESSION_MAX ||
+-	    vi->z_algorithmtype[++headnr] >= Z_EROFS_COMPRESSION_MAX) {
+-		erofs_err(sb, "unknown HEAD%u format %u for nid %llu, please upgrade kernel",
+-			  headnr + 1, vi->z_algorithmtype[headnr], vi->nid);
+-		err = -EOPNOTSUPP;
+-		goto out_put_metabuf;
+-	}
+-
+ 	if (!erofs_sb_has_big_pcluster(EROFS_SB(sb)) &&
+ 	    vi->z_advise & (Z_EROFS_ADVISE_BIG_PCLUSTER_1 |
+ 			    Z_EROFS_ADVISE_BIG_PCLUSTER_2)) {
+ 		erofs_err(sb, "per-inode big pcluster without sb feature for nid %llu",
+ 			  vi->nid);
+ 		err = -EFSCORRUPTED;
+-		goto out_put_metabuf;
++		goto out_unlock;
+ 	}
+ 	if (vi->datalayout == EROFS_INODE_COMPRESSED_COMPACT &&
+ 	    !(vi->z_advise & Z_EROFS_ADVISE_BIG_PCLUSTER_1) ^
+@@ -722,32 +706,54 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ 		erofs_err(sb, "big pcluster head1/2 of compact indexes should be consistent for nid %llu",
+ 			  vi->nid);
+ 		err = -EFSCORRUPTED;
+-		goto out_put_metabuf;
++		goto out_unlock;
+ 	}
+ 
+ 	if (vi->z_idata_size ||
+ 	    (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER)) {
+-		struct erofs_map_blocks map = {
++		struct erofs_map_blocks tm = {
+ 			.buf = __EROFS_BUF_INITIALIZER
+ 		};
+ 
+-		err = z_erofs_map_blocks_fo(inode, &map,
++		err = z_erofs_map_blocks_fo(inode, &tm,
+ 					    EROFS_GET_BLOCKS_FINDTAIL);
+-		erofs_put_metabuf(&map.buf);
++		erofs_put_metabuf(&tm.buf);
+ 		if (err < 0)
+-			goto out_put_metabuf;
++			goto out_unlock;
+ 	}
+ done:
+ 	/* paired with smp_mb() at the beginning of the function */
+ 	smp_mb();
+ 	set_bit(EROFS_I_Z_INITED_BIT, &vi->flags);
+-out_put_metabuf:
+-	erofs_put_metabuf(&buf);
+ out_unlock:
+ 	clear_and_wake_up_bit(EROFS_I_BL_Z_BIT, &vi->flags);
+ 	return err;
+ }
+ 
++static int z_erofs_map_sanity_check(struct inode *inode,
++				    struct erofs_map_blocks *map)
++{
++	struct erofs_sb_info *sbi = EROFS_I_SB(inode);
++
++	if (!(map->m_flags & EROFS_MAP_ENCODED))
++		return 0;
++	if (unlikely(map->m_algorithmformat >= Z_EROFS_COMPRESSION_RUNTIME_MAX)) {
++		erofs_err(inode->i_sb, "unknown algorithm %d @ pos %llu for nid %llu, please upgrade kernel",
++			  map->m_algorithmformat, map->m_la, EROFS_I(inode)->nid);
++		return -EOPNOTSUPP;
++	}
++	if (unlikely(map->m_algorithmformat < Z_EROFS_COMPRESSION_MAX &&
++		     !(sbi->available_compr_algs & (1 << map->m_algorithmformat)))) {
++		erofs_err(inode->i_sb, "inconsistent algorithmtype %u for nid %llu",
++			  map->m_algorithmformat, EROFS_I(inode)->nid);
++		return -EFSCORRUPTED;
++	}
++	if (unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||
++		     map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))
++		return -EOPNOTSUPP;
++	return 0;
++}
++
+ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ 			    int flags)
+ {
+@@ -760,7 +766,7 @@ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ 		map->m_la = inode->i_size;
+ 		map->m_flags = 0;
+ 	} else {
+-		err = z_erofs_fill_inode_lazy(inode);
++		err = z_erofs_fill_inode(inode, map);
+ 		if (!err) {
+ 			if (vi->datalayout == EROFS_INODE_COMPRESSED_FULL &&
+ 			    (vi->z_advise & Z_EROFS_ADVISE_EXTENTS))
+@@ -768,10 +774,8 @@ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ 			else
+ 				err = z_erofs_map_blocks_fo(inode, map, flags);
+ 		}
+-		if (!err && (map->m_flags & EROFS_MAP_ENCODED) &&
+-		    unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||
+-			     map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))
+-			err = -EOPNOTSUPP;
++		if (!err)
++			err = z_erofs_map_sanity_check(inode, map);
+ 		if (err)
+ 			map->m_llen = 0;
+ 	}
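
The zmap.c hunks pull the scattered checks (the per-head algorithm check removed from the fill-inode path, the available_compr_algs check removed from z_erofs_map_blocks_fo(), and the plen/llen clamp moved out of the iterator) into one z_erofs_map_sanity_check() run after every successful mapping, so both the full and extent-based layouts get identical validation. A minimal sketch of funnelling every mapping through one validator, with hypothetical limits and field names:

#include <stdint.h>
#include <stdio.h>

#define ALG_MAX		4		/* hypothetical format bound */
#define PCLUSTER_MAX	(1024 * 1024)	/* hypothetical size bound */

struct map {
	unsigned int alg;
	uint64_t plen;
	int encoded;
};

/* One validation point for every mapping result, as the hunk adds. */
static int map_sanity_check(const struct map *m, unsigned int avail_algs)
{
	if (!m->encoded)
		return 0;
	if (m->alg >= ALG_MAX)
		return -95;			/* stands in for -EOPNOTSUPP */
	if (!(avail_algs & (1u << m->alg)))
		return -117;			/* stands in for -EFSCORRUPTED */
	if (m->plen > PCLUSTER_MAX)
		return -95;
	return 0;
}

int main(void)
{
	struct map m = { .alg = 2, .plen = 4096, .encoded = 1 };

	printf("%d\n", map_sanity_check(&m, 1u << 2));	/* 0: supported */
	printf("%d\n", map_sanity_check(&m, 1u << 1));	/* -117: not on-disk */
	return 0;
}
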
+diff --git a/fs/exec.c b/fs/exec.c
+index ba400aafd64061..551e1cc5bf1e3e 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -2048,7 +2048,7 @@ static int proc_dointvec_minmax_coredump(const struct ctl_table *table, int writ
+ {
+ 	int error = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ 
+-	if (!error)
++	if (!error && !write)
+ 		validate_coredump_safety();
+ 	return error;
+ }
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index e21ec857f2abcf..52c72896e1c164 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -202,6 +202,14 @@ static int vfs_dentry_acceptable(void *context, struct dentry *dentry)
+ 	if (!ctx->flags)
+ 		return 1;
+ 
++	/*
++	 * Verify that the decoded dentry itself has a valid id mapping.
++	 * In case the decoded dentry is the mountfd root itself, this
++	 * verifies that the mountfd inode itself has a valid id mapping.
++	 */
++	if (!privileged_wrt_inode_uidgid(user_ns, idmap, d_inode(dentry)))
++		return 0;
++
+ 	/*
+ 	 * It's racy as we're not taking rename_lock but we're able to ignore
+ 	 * permissions and we just need an approximation whether we were able
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index e80cd8f2c049f9..5150aa25e64be9 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1893,7 +1893,7 @@ static int fuse_retrieve(struct fuse_mount *fm, struct inode *inode,
+ 
+ 	index = outarg->offset >> PAGE_SHIFT;
+ 
+-	while (num) {
++	while (num && ap->num_folios < num_pages) {
+ 		struct folio *folio;
+ 		unsigned int folio_offset;
+ 		unsigned int nr_bytes;
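
The fuse_retrieve() hunk bounds the folio-fill loop by the array capacity (ap->num_folios < num_pages) in addition to the remaining byte count, so a misaligned offset can no longer walk past the allocated folio array. A tiny sketch of bounding a fill loop by both remaining work and capacity:

#include <stdio.h>

#define CAP 4

/* Fill at most CAP slots even if more bytes remain, mirroring the
 * "while (num && ap->num_folios < num_pages)" bound in the hunk. */
static unsigned int fill(unsigned int bytes, unsigned int chunk)
{
	unsigned int used = 0;

	while (bytes && used < CAP) {
		unsigned int n = bytes < chunk ? bytes : chunk;

		bytes -= n;
		used++;
	}
	return used;	/* never exceeds CAP */
}

int main(void)
{
	printf("%u\n", fill(100, 10));	/* 4: capped, not 10 */
	return 0;
}
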
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 47006d0753f1cd..b8dc8ce3e5564a 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3013,7 +3013,7 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ 		.nodeid_out = ff_out->nodeid,
+ 		.fh_out = ff_out->fh,
+ 		.off_out = pos_out,
+-		.len = len,
++		.len = min_t(size_t, len, UINT_MAX & PAGE_MASK),
+ 		.flags = flags
+ 	};
+ 	struct fuse_write_out outarg;
+@@ -3079,6 +3079,9 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ 		fc->no_copy_file_range = 1;
+ 		err = -EOPNOTSUPP;
+ 	}
++	if (!err && outarg.size > len)
++		err = -EIO;
++
+ 	if (err)
+ 		goto out;
+ 
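
[The two fuse/file.c hunks above cap a copy_file_range request at the
largest page-aligned length that fits in the reply's 32-bit size field,
and reject a reply that claims to have copied more than was requested.
A minimal userspace sketch of the clamp, assuming 4 KiB pages; the
names here are illustrative, not FUSE API:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Largest page-aligned value representable in a 32-bit reply field. */
static uint64_t clamp_copy_len(uint64_t len)
{
	uint64_t cap = UINT32_MAX & PAGE_MASK;	/* 0xfffff000 with 4K pages */

	return len < cap ? len : cap;
}

int main(void)
{
	printf("clamped=%#llx\n",
	       (unsigned long long)clamp_copy_len(1ULL << 40));
	return 0;
}

Any reply whose outarg.size exceeds the clamped length is now treated
as corruption (-EIO) rather than trusted.]
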
+diff --git a/fs/fuse/passthrough.c b/fs/fuse/passthrough.c
+index 607ef735ad4ab3..eb97ac009e75d9 100644
+--- a/fs/fuse/passthrough.c
++++ b/fs/fuse/passthrough.c
+@@ -237,6 +237,11 @@ int fuse_backing_open(struct fuse_conn *fc, struct fuse_backing_map *map)
+ 	if (!file)
+ 		goto out;
+ 
++	/* read/write/splice/mmap passthrough is only relevant for regular files */
++	res = d_is_dir(file->f_path.dentry) ? -EISDIR : -EINVAL;
++	if (!d_is_reg(file->f_path.dentry))
++		goto out_fput;
++
+ 	backing_sb = file_inode(file)->i_sb;
+ 	res = -ELOOP;
+ 	if (backing_sb->s_stack_depth >= fc->max_stack_depth)
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index a6c692cac61659..9adf36e6364b7d 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -70,6 +70,24 @@ static struct kernfs_open_node *of_on(struct kernfs_open_file *of)
+ 					 !list_empty(&of->list));
+ }
+ 
++/* Get active reference to kernfs node for an open file */
++static struct kernfs_open_file *kernfs_get_active_of(struct kernfs_open_file *of)
++{
++	/* Skip if file was already released */
++	if (unlikely(of->released))
++		return NULL;
++
++	if (!kernfs_get_active(of->kn))
++		return NULL;
++
++	return of;
++}
++
++static void kernfs_put_active_of(struct kernfs_open_file *of)
++{
++	kernfs_put_active(of->kn);
++}
++
+ /**
+  * kernfs_deref_open_node_locked - Get kernfs_open_node corresponding to @kn
+  *
+@@ -139,7 +157,7 @@ static void kernfs_seq_stop_active(struct seq_file *sf, void *v)
+ 
+ 	if (ops->seq_stop)
+ 		ops->seq_stop(sf, v);
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ }
+ 
+ static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
+@@ -152,7 +170,7 @@ static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	ops = kernfs_ops(of->kn);
+@@ -238,7 +256,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn)) {
++	if (!kernfs_get_active_of(of)) {
+ 		len = -ENODEV;
+ 		mutex_unlock(&of->mutex);
+ 		goto out_free;
+@@ -252,7 +270,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	else
+ 		len = -EINVAL;
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	mutex_unlock(&of->mutex);
+ 
+ 	if (len < 0)
+@@ -323,7 +341,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn)) {
++	if (!kernfs_get_active_of(of)) {
+ 		mutex_unlock(&of->mutex);
+ 		len = -ENODEV;
+ 		goto out_free;
+@@ -335,7 +353,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ 	else
+ 		len = -EINVAL;
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	mutex_unlock(&of->mutex);
+ 
+ 	if (len > 0)
+@@ -357,13 +375,13 @@ static void kernfs_vma_open(struct vm_area_struct *vma)
+ 	if (!of->vm_ops)
+ 		return;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return;
+ 
+ 	if (of->vm_ops->open)
+ 		of->vm_ops->open(vma);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ }
+ 
+ static vm_fault_t kernfs_vma_fault(struct vm_fault *vmf)
+@@ -375,14 +393,14 @@ static vm_fault_t kernfs_vma_fault(struct vm_fault *vmf)
+ 	if (!of->vm_ops)
+ 		return VM_FAULT_SIGBUS;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return VM_FAULT_SIGBUS;
+ 
+ 	ret = VM_FAULT_SIGBUS;
+ 	if (of->vm_ops->fault)
+ 		ret = of->vm_ops->fault(vmf);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -395,7 +413,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
+ 	if (!of->vm_ops)
+ 		return VM_FAULT_SIGBUS;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return VM_FAULT_SIGBUS;
+ 
+ 	ret = 0;
+@@ -404,7 +422,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
+ 	else
+ 		file_update_time(file);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -418,14 +436,14 @@ static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
+ 	if (!of->vm_ops)
+ 		return -EINVAL;
+ 
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		return -EINVAL;
+ 
+ 	ret = -EINVAL;
+ 	if (of->vm_ops->access)
+ 		ret = of->vm_ops->access(vma, addr, buf, len, write);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -455,7 +473,7 @@ static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+ 	mutex_lock(&of->mutex);
+ 
+ 	rc = -ENODEV;
+-	if (!kernfs_get_active(of->kn))
++	if (!kernfs_get_active_of(of))
+ 		goto out_unlock;
+ 
+ 	ops = kernfs_ops(of->kn);
+@@ -490,7 +508,7 @@ static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+ 	}
+ 	vma->vm_ops = &kernfs_vm_ops;
+ out_put:
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ out_unlock:
+ 	mutex_unlock(&of->mutex);
+ 
+@@ -852,7 +870,7 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait)
+ 	struct kernfs_node *kn = kernfs_dentry_node(filp->f_path.dentry);
+ 	__poll_t ret;
+ 
+-	if (!kernfs_get_active(kn))
++	if (!kernfs_get_active_of(of))
+ 		return DEFAULT_POLLMASK|EPOLLERR|EPOLLPRI;
+ 
+ 	if (kn->attr.ops->poll)
+@@ -860,7 +878,7 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait)
+ 	else
+ 		ret = kernfs_generic_poll(of, wait);
+ 
+-	kernfs_put_active(kn);
++	kernfs_put_active_of(of);
+ 	return ret;
+ }
+ 
+@@ -875,7 +893,7 @@ static loff_t kernfs_fop_llseek(struct file *file, loff_t offset, int whence)
+ 	 * the ops aren't called concurrently for the same open file.
+ 	 */
+ 	mutex_lock(&of->mutex);
+-	if (!kernfs_get_active(of->kn)) {
++	if (!kernfs_get_active_of(of)) {
+ 		mutex_unlock(&of->mutex);
+ 		return -ENODEV;
+ 	}
+@@ -886,7 +904,7 @@ static loff_t kernfs_fop_llseek(struct file *file, loff_t offset, int whence)
+ 	else
+ 		ret = generic_file_llseek(file, offset, whence);
+ 
+-	kernfs_put_active(of->kn);
++	kernfs_put_active_of(of);
+ 	mutex_unlock(&of->mutex);
+ 	return ret;
+ }
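
[All of the kernfs/file.c conversions above funnel active-reference
acquisition through kernfs_get_active_of(), which additionally refuses
files whose ->released flag is already set, closing a window where an
op could run against a released open file. A sketch of the pattern
every op now follows (field and helper names as in the patch; the op
body is elided):

static ssize_t example_op(struct kernfs_open_file *of)
{
	ssize_t ret;

	mutex_lock(&of->mutex);
	if (!kernfs_get_active_of(of)) {
		/* of->released was set, or the node is going away */
		mutex_unlock(&of->mutex);
		return -ENODEV;
	}

	ret = 0;	/* ... invoke the kernfs_ops callback here ... */

	kernfs_put_active_of(of);	/* pairs with the get above */
	mutex_unlock(&of->mutex);
	return ret;
}

Note the poll path previously took the reference on the bare
kernfs_node; routing it through the open file is what makes the
->released check effective there.]
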
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 3bcf5c204578c1..97bd9d2a4b0cde 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -890,6 +890,8 @@ static void nfs_server_set_fsinfo(struct nfs_server *server,
+ 
+ 	if (fsinfo->xattr_support)
+ 		server->caps |= NFS_CAP_XATTR;
++	else
++		server->caps &= ~NFS_CAP_XATTR;
+ #endif
+ }
+ 
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 033feeab8c346e..a16a619fb8c33b 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -437,10 +437,11 @@ static void nfs_invalidate_folio(struct folio *folio, size_t offset,
+ 	dfprintk(PAGECACHE, "NFS: invalidate_folio(%lu, %zu, %zu)\n",
+ 		 folio->index, offset, length);
+ 
+-	if (offset != 0 || length < folio_size(folio))
+-		return;
+ 	/* Cancel any unstarted writes on this page */
+-	nfs_wb_folio_cancel(inode, folio);
++	if (offset != 0 || length < folio_size(folio))
++		nfs_wb_folio(inode, folio);
++	else
++		nfs_wb_folio_cancel(inode, folio);
+ 	folio_wait_private_2(folio); /* [DEPRECATED] */
+ 	trace_nfs_invalidate_folio(inode, folio_pos(folio) + offset, length);
+ }
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 8dc921d835388e..9edb5f9b0c4e47 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -293,7 +293,7 @@ ff_lseg_match_mirrors(struct pnfs_layout_segment *l1,
+ 		struct pnfs_layout_segment *l2)
+ {
+ 	const struct nfs4_ff_layout_segment *fl1 = FF_LAYOUT_LSEG(l1);
+-	const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l1);
++	const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l2);
+ 	u32 i;
+ 
+ 	if (fl1->mirror_array_cnt != fl2->mirror_array_cnt)
+@@ -773,8 +773,11 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ 			continue;
+ 
+ 		if (check_device &&
+-		    nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node))
++		    nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node)) {
++			/* reinitialize the error state in case this is the last iteration */
++			ds = ERR_PTR(-EINVAL);
+ 			continue;
++		}
+ 
+ 		*best_idx = idx;
+ 		break;
+@@ -804,7 +807,7 @@ ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
+ 	struct nfs4_pnfs_ds *ds;
+ 
+ 	ds = ff_layout_choose_valid_ds_for_read(lseg, start_idx, best_idx);
+-	if (ds)
++	if (!IS_ERR(ds))
+ 		return ds;
+ 	return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
+ }
+@@ -818,7 +821,7 @@ ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio,
+ 
+ 	ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
+ 					       best_idx);
+-	if (ds || !pgio->pg_mirror_idx)
++	if (!IS_ERR(ds) || !pgio->pg_mirror_idx)
+ 		return ds;
+ 	return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
+ }
+@@ -868,7 +871,7 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ 	req->wb_nio = 0;
+ 
+ 	ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+-	if (!ds) {
++	if (IS_ERR(ds)) {
+ 		if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ 			goto out_mds;
+ 		pnfs_generic_pg_cleanup(pgio);
+@@ -1072,11 +1075,13 @@ static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
+ {
+ 	u32 idx = hdr->pgio_mirror_idx + 1;
+ 	u32 new_idx = 0;
++	struct nfs4_pnfs_ds *ds;
+ 
+-	if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx))
+-		ff_layout_send_layouterror(hdr->lseg);
+-	else
++	ds = ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx);
++	if (IS_ERR(ds))
+ 		pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
++	else
++		ff_layout_send_layouterror(hdr->lseg);
+ 	pnfs_read_resend_pnfs(hdr, new_idx);
+ }
+ 
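
[The flexfiles conversions above are needed because
ff_layout_choose_*_ds_for_read() reports failure with an error pointer,
not NULL, so the old `if (ds)` / `if (!ds)` tests accepted
ERR_PTR(-EINVAL) as a usable data server. A tiny userspace model of the
kernel's error-pointer convention (simplified from
include/linux/err.h):

#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *ds = ERR_PTR(-22);	/* -EINVAL */

	/* A plain NULL check passes on an error pointer -- the bug. */
	printf("non-NULL: %d, IS_ERR: %d\n", ds != NULL, IS_ERR(ds));
	return 0;
}]
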
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a2fa6bc4d74e37..a32cc45425e287 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -761,8 +761,10 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	trace_nfs_setattr_enter(inode);
+ 
+ 	/* Write all dirty data */
+-	if (S_ISREG(inode->i_mode))
++	if (S_ISREG(inode->i_mode)) {
++		nfs_file_block_o_direct(NFS_I(inode));
+ 		nfs_sync_inode(inode);
++	}
+ 
+ 	fattr = nfs_alloc_fattr_with_label(NFS_SERVER(inode));
+ 	if (fattr == NULL) {
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 9dcbc339649221..0ef0fc6aba3b3c 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -531,6 +531,16 @@ static inline bool nfs_file_io_is_buffered(struct nfs_inode *nfsi)
+ 	return test_bit(NFS_INO_ODIRECT, &nfsi->flags) == 0;
+ }
+ 
++/* Must be called with inode->i_rwsem held exclusively */
++static inline void nfs_file_block_o_direct(struct nfs_inode *nfsi)
++{
++	if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
++		clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
++		inode_dio_wait(&nfsi->vfs_inode);
++	}
++}
++
++
+ /* namespace.c */
+ #define NFS_PATH_CANONICAL 1
+ extern char *nfs_path(char **p, struct dentry *dentry,
+diff --git a/fs/nfs/io.c b/fs/nfs/io.c
+index 3388faf2acb9f5..d275b0a250bf3b 100644
+--- a/fs/nfs/io.c
++++ b/fs/nfs/io.c
+@@ -14,15 +14,6 @@
+ 
+ #include "internal.h"
+ 
+-/* Call with exclusively locked inode->i_rwsem */
+-static void nfs_block_o_direct(struct nfs_inode *nfsi, struct inode *inode)
+-{
+-	if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
+-		clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
+-		inode_dio_wait(inode);
+-	}
+-}
+-
+ /**
+  * nfs_start_io_read - declare the file is being used for buffered reads
+  * @inode: file inode
+@@ -57,7 +48,7 @@ nfs_start_io_read(struct inode *inode)
+ 	err = down_write_killable(&inode->i_rwsem);
+ 	if (err)
+ 		return err;
+-	nfs_block_o_direct(nfsi, inode);
++	nfs_file_block_o_direct(nfsi);
+ 	downgrade_write(&inode->i_rwsem);
+ 
+ 	return 0;
+@@ -90,7 +81,7 @@ nfs_start_io_write(struct inode *inode)
+ 
+ 	err = down_write_killable(&inode->i_rwsem);
+ 	if (!err)
+-		nfs_block_o_direct(NFS_I(inode), inode);
++		nfs_file_block_o_direct(NFS_I(inode));
+ 	return err;
+ }
+ 
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 510d0a16cfe917..e2213ef18baede 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -453,12 +453,13 @@ static void nfs_local_call_read(struct work_struct *work)
+ 	nfs_local_iter_init(&iter, iocb, READ);
+ 
+ 	status = filp->f_op->read_iter(&iocb->kiocb, &iter);
++
++	revert_creds(save_cred);
++
+ 	if (status != -EIOCBQUEUED) {
+ 		nfs_local_read_done(iocb, status);
+ 		nfs_local_pgio_release(iocb);
+ 	}
+-
+-	revert_creds(save_cred);
+ }
+ 
+ static int
+@@ -649,14 +650,15 @@ static void nfs_local_call_write(struct work_struct *work)
+ 	file_start_write(filp);
+ 	status = filp->f_op->write_iter(&iocb->kiocb, &iter);
+ 	file_end_write(filp);
++
++	revert_creds(save_cred);
++	current->flags = old_flags;
++
+ 	if (status != -EIOCBQUEUED) {
+ 		nfs_local_write_done(iocb, status);
+ 		nfs_local_vfs_getattr(iocb);
+ 		nfs_local_pgio_release(iocb);
+ 	}
+-
+-	revert_creds(save_cred);
+-	current->flags = old_flags;
+ }
+ 
+ static int
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 01c01f45358b7c..48ee3d5d89c4ae 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -114,6 +114,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ 	exception.inode = inode;
+ 	exception.state = lock->open_context->state;
+ 
++	nfs_file_block_o_direct(NFS_I(inode));
+ 	err = nfs_sync_inode(inode);
+ 	if (err)
+ 		goto out;
+@@ -430,6 +431,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 		return status;
+ 	}
+ 
++	nfs_file_block_o_direct(NFS_I(dst_inode));
+ 	status = nfs_sync_inode(dst_inode);
+ 	if (status)
+ 		return status;
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 5e9d66f3466c8d..1fa69a0b33ab19 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -291,9 +291,11 @@ static loff_t nfs42_remap_file_range(struct file *src_file, loff_t src_off,
+ 
+ 	/* flush all pending writes on both src and dst so that server
+ 	 * has the latest data */
++	nfs_file_block_o_direct(NFS_I(src_inode));
+ 	ret = nfs_sync_inode(src_inode);
+ 	if (ret)
+ 		goto out_unlock;
++	nfs_file_block_o_direct(NFS_I(dst_inode));
+ 	ret = nfs_sync_inode(dst_inode);
+ 	if (ret)
+ 		goto out_unlock;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 7e203857f46687..8d492e3b216312 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -4007,8 +4007,10 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ 				     res.attr_bitmask[2];
+ 		}
+ 		memcpy(server->attr_bitmask, res.attr_bitmask, sizeof(server->attr_bitmask));
+-		server->caps &= ~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS |
+-				  NFS_CAP_SYMLINKS| NFS_CAP_SECURITY_LABEL);
++		server->caps &=
++			~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS |
++			  NFS_CAP_SECURITY_LABEL | NFS_CAP_FS_LOCATIONS |
++			  NFS_CAP_OPEN_XOR | NFS_CAP_DELEGTIME);
+ 		server->fattr_valid = NFS_ATTR_FATTR_V4;
+ 		if (res.attr_bitmask[0] & FATTR4_WORD0_ACL &&
+ 				res.acl_bitmask & ACL4_SUPPORT_ALLOW_ACL)
+@@ -4082,7 +4084,6 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ 	};
+ 	int err;
+ 
+-	nfs_server_set_init_caps(server);
+ 	do {
+ 		err = nfs4_handle_exception(server,
+ 				_nfs4_server_capabilities(server, fhandle),
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index ff29335ed85999..08fd1c0d45ec27 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -2045,6 +2045,7 @@ int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio)
+ 		 * release it */
+ 		nfs_inode_remove_request(req);
+ 		nfs_unlock_and_release_request(req);
++		folio_cancel_dirty(folio);
+ 	}
+ 
+ 	return ret;
+diff --git a/fs/ocfs2/extent_map.c b/fs/ocfs2/extent_map.c
+index 930150ed5db15f..ef147e8b327126 100644
+--- a/fs/ocfs2/extent_map.c
++++ b/fs/ocfs2/extent_map.c
+@@ -706,6 +706,8 @@ int ocfs2_extent_map_get_blocks(struct inode *inode, u64 v_blkno, u64 *p_blkno,
+  * it not only handles the fiemap for inlined files, but also deals
+  * with the fast symlink, cause they have no difference for extent
+  * mapping per se.
++ *
++ * Must be called with the ip_alloc_sem held for reading.
+  */
+ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ 			       struct fiemap_extent_info *fieinfo,
+@@ -717,6 +719,7 @@ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ 	u64 phys;
+ 	u32 flags = FIEMAP_EXTENT_DATA_INLINE|FIEMAP_EXTENT_LAST;
+ 	struct ocfs2_inode_info *oi = OCFS2_I(inode);
++	lockdep_assert_held_read(&oi->ip_alloc_sem);
+ 
+ 	di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	if (ocfs2_inode_is_fast_symlink(inode))
+@@ -732,8 +735,11 @@ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ 			phys += offsetof(struct ocfs2_dinode,
+ 					 id2.i_data.id_data);
+ 
++		/* Release the ip_alloc_sem to prevent deadlock on page fault */
++		up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		ret = fiemap_fill_next_extent(fieinfo, 0, phys, id_count,
+ 					      flags);
++		down_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -802,9 +808,11 @@ int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 		len_bytes = (u64)le16_to_cpu(rec.e_leaf_clusters) << osb->s_clustersize_bits;
+ 		phys_bytes = le64_to_cpu(rec.e_blkno) << osb->sb->s_blocksize_bits;
+ 		virt_bytes = (u64)le32_to_cpu(rec.e_cpos) << osb->s_clustersize_bits;
+-
++		/* Release the ip_alloc_sem to prevent deadlock on page fault */
++		up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		ret = fiemap_fill_next_extent(fieinfo, virt_bytes, phys_bytes,
+ 					      len_bytes, fe_flags);
++		down_read(&OCFS2_I(inode)->ip_alloc_sem);
+ 		if (ret)
+ 			break;
+ 
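
[Both ocfs2 hunks drop ip_alloc_sem around fiemap_fill_next_extent()
because that helper can write to a user buffer and fault, and the fault
path acquires ip_alloc_sem again. The general shape of the fix,
sketched (the loop condition and variables are placeholders, not the
literal ocfs2 code):

down_read(&oi->ip_alloc_sem);
while (have_more_extents()) {			/* placeholder condition */
	up_read(&oi->ip_alloc_sem);		/* callback may fault */
	ret = fiemap_fill_next_extent(fieinfo, virt, phys, len, flags);
	down_read(&oi->ip_alloc_sem);		/* retake before next extent */
	if (ret)
		break;
}
up_read(&oi->ip_alloc_sem);]
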
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 409bc1d11eca39..8e1e48760ffe05 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -390,7 +390,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ 	if (proc_alloc_inum(&dp->low_ino))
+ 		goto out_free_entry;
+ 
+-	pde_set_flags(dp);
++	if (!S_ISDIR(dp->mode))
++		pde_set_flags(dp);
+ 
+ 	write_lock(&proc_subdir_lock);
+ 	dp->parent = dir;
+diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
+index d98e0d2de09fd0..3c39cfacb25183 100644
+--- a/fs/resctrl/ctrlmondata.c
++++ b/fs/resctrl/ctrlmondata.c
+@@ -625,11 +625,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
+ 		 */
+ 		list_for_each_entry(d, &r->mon_domains, hdr.list) {
+ 			if (d->ci_id == domid) {
+-				rr.ci_id = d->ci_id;
+ 				cpu = cpumask_any(&d->hdr.cpu_mask);
+ 				ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+ 				if (!ci)
+ 					continue;
++				rr.ci = ci;
+ 				mon_event_read(&rr, r, NULL, rdtgrp,
+ 					       &ci->shared_cpu_map, evtid, false);
+ 				goto checkresult;
+diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
+index 0a1eedba2b03ad..9a8cf6f11151d9 100644
+--- a/fs/resctrl/internal.h
++++ b/fs/resctrl/internal.h
+@@ -98,7 +98,7 @@ struct mon_data {
+  *	   domains in @r sharing L3 @ci.id
+  * @evtid: Which monitor event to read.
+  * @first: Initialize MBM counter when true.
+- * @ci_id: Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains.
++ * @ci:    Cacheinfo for L3. Only set when @d is NULL. Used when summing domains.
+  * @err:   Error encountered when reading counter.
+  * @val:   Returned value of event counter. If @rgrp is a parent resource group,
+  *	   @val includes the sum of event counts from its child resource groups.
+@@ -112,7 +112,7 @@ struct rmid_read {
+ 	struct rdt_mon_domain	*d;
+ 	enum resctrl_event_id	evtid;
+ 	bool			first;
+-	unsigned int		ci_id;
++	struct cacheinfo	*ci;
+ 	int			err;
+ 	u64			val;
+ 	void			*arch_mon_ctx;
+diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
+index f5637855c3acac..7326c28a7908f3 100644
+--- a/fs/resctrl/monitor.c
++++ b/fs/resctrl/monitor.c
+@@ -361,7 +361,6 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ {
+ 	int cpu = smp_processor_id();
+ 	struct rdt_mon_domain *d;
+-	struct cacheinfo *ci;
+ 	struct mbm_state *m;
+ 	int err, ret;
+ 	u64 tval = 0;
+@@ -389,8 +388,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ 	}
+ 
+ 	/* Summing domains that share a cache, must be on a CPU for that cache. */
+-	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+-	if (!ci || ci->id != rr->ci_id)
++	if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map))
+ 		return -EINVAL;
+ 
+ 	/*
+@@ -402,7 +400,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ 	 */
+ 	ret = -EINVAL;
+ 	list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
+-		if (d->ci_id != rr->ci_id)
++		if (d->ci_id != rr->ci->id)
+ 			continue;
+ 		err = resctrl_arch_rmid_read(rr->r, d, closid, rmid,
+ 					     rr->evtid, &tval, rr->arch_mon_ctx);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 89160bc34d3539..4aaa9e8d9cbeff 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -87,7 +87,7 @@
+ #define SMB_INTERFACE_POLL_INTERVAL	600
+ 
+ /* maximum number of PDUs in one compound */
+-#define MAX_COMPOUND 7
++#define MAX_COMPOUND 10
+ 
+ /*
+  * Default number of credits to keep available for SMB3.
+@@ -1877,9 +1877,12 @@ static inline bool is_replayable_error(int error)
+ 
+ 
+ /* cifs_get_writable_file() flags */
+-#define FIND_WR_ANY         0
+-#define FIND_WR_FSUID_ONLY  1
+-#define FIND_WR_WITH_DELETE 2
++enum cifs_writable_file_flags {
++	FIND_WR_ANY			= 0U,
++	FIND_WR_FSUID_ONLY		= (1U << 0),
++	FIND_WR_WITH_DELETE		= (1U << 1),
++	FIND_WR_NO_PENDING_DELETE	= (1U << 2),
++};
+ 
+ #define   MID_FREE 0
+ #define   MID_REQUEST_ALLOCATED 1
+@@ -2339,6 +2342,8 @@ struct smb2_compound_vars {
+ 	struct kvec qi_iov;
+ 	struct kvec io_iov[SMB2_IOCTL_IOV_SIZE];
+ 	struct kvec si_iov[SMB2_SET_INFO_IOV_SIZE];
++	struct kvec unlink_iov[SMB2_SET_INFO_IOV_SIZE];
++	struct kvec rename_iov[SMB2_SET_INFO_IOV_SIZE];
+ 	struct kvec close_iov;
+ 	struct smb2_file_rename_info_hdr rename_info;
+ 	struct smb2_file_link_info_hdr link_info;
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 1421bde045c21d..8b407d2a8516d1 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -998,7 +998,10 @@ int cifs_open(struct inode *inode, struct file *file)
+ 
+ 	/* Get the cached handle as SMB2 close is deferred */
+ 	if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) {
+-		rc = cifs_get_writable_path(tcon, full_path, FIND_WR_FSUID_ONLY, &cfile);
++		rc = cifs_get_writable_path(tcon, full_path,
++					    FIND_WR_FSUID_ONLY |
++					    FIND_WR_NO_PENDING_DELETE,
++					    &cfile);
+ 	} else {
+ 		rc = cifs_get_readable_path(tcon, full_path, &cfile);
+ 	}
+@@ -2530,6 +2533,9 @@ cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags,
+ 			continue;
+ 		if (with_delete && !(open_file->fid.access & DELETE))
+ 			continue;
++		if ((flags & FIND_WR_NO_PENDING_DELETE) &&
++		    open_file->status_file_deleted)
++			continue;
+ 		if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
+ 			if (!open_file->invalidHandle) {
+ 				/* found a good writable file */
+@@ -2647,6 +2653,16 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
+ 		spin_unlock(&tcon->open_file_lock);
+ 		free_dentry_path(page);
+ 		*ret_file = find_readable_file(cinode, 0);
++		if (*ret_file) {
++			spin_lock(&cinode->open_file_lock);
++			if ((*ret_file)->status_file_deleted) {
++				spin_unlock(&cinode->open_file_lock);
++				cifsFileInfo_put(*ret_file);
++				*ret_file = NULL;
++			} else {
++				spin_unlock(&cinode->open_file_lock);
++			}
++		}
+ 		return *ret_file ? 0 : -ENOENT;
+ 	}
+ 
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index fe453a4b3dc831..11d442e8b3d622 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1931,7 +1931,7 @@ cifs_drop_nlink(struct inode *inode)
+  * but will return the EACCES to the caller. Note that the VFS does not call
+  * unlink on negative dentries currently.
+  */
+-int cifs_unlink(struct inode *dir, struct dentry *dentry)
++static int __cifs_unlink(struct inode *dir, struct dentry *dentry, bool sillyrename)
+ {
+ 	int rc = 0;
+ 	unsigned int xid;
+@@ -2003,7 +2003,11 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 		goto psx_del_no_retry;
+ 	}
+ 
+-	rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
++	if (sillyrename || (server->vals->protocol_id > SMB10_PROT_ID &&
++			    d_is_positive(dentry) && d_count(dentry) > 2))
++		rc = -EBUSY;
++	else
++		rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
+ 
+ psx_del_no_retry:
+ 	if (!rc) {
+@@ -2071,6 +2075,11 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 	return rc;
+ }
+ 
++int cifs_unlink(struct inode *dir, struct dentry *dentry)
++{
++	return __cifs_unlink(dir, dentry, false);
++}
++
+ static int
+ cifs_mkdir_qinfo(struct inode *parent, struct dentry *dentry, umode_t mode,
+ 		 const char *full_path, struct cifs_sb_info *cifs_sb,
+@@ -2358,14 +2367,16 @@ int cifs_rmdir(struct inode *inode, struct dentry *direntry)
+ 	rc = server->ops->rmdir(xid, tcon, full_path, cifs_sb);
+ 	cifs_put_tlink(tlink);
+ 
++	cifsInode = CIFS_I(d_inode(direntry));
++
+ 	if (!rc) {
++		set_bit(CIFS_INO_DELETE_PENDING, &cifsInode->flags);
+ 		spin_lock(&d_inode(direntry)->i_lock);
+ 		i_size_write(d_inode(direntry), 0);
+ 		clear_nlink(d_inode(direntry));
+ 		spin_unlock(&d_inode(direntry)->i_lock);
+ 	}
+ 
+-	cifsInode = CIFS_I(d_inode(direntry));
+ 	/* force revalidate to go get info when needed */
+ 	cifsInode->time = 0;
+ 
+@@ -2458,8 +2469,11 @@ cifs_do_rename(const unsigned int xid, struct dentry *from_dentry,
+ 	}
+ #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+ do_rename_exit:
+-	if (rc == 0)
++	if (rc == 0) {
+ 		d_move(from_dentry, to_dentry);
++		/* Force a new lookup */
++		d_drop(from_dentry);
++	}
+ 	cifs_put_tlink(tlink);
+ 	return rc;
+ }
+@@ -2470,6 +2484,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 	     struct dentry *target_dentry, unsigned int flags)
+ {
+ 	const char *from_name, *to_name;
++	struct TCP_Server_Info *server;
+ 	void *page1, *page2;
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct tcon_link *tlink;
+@@ -2505,6 +2520,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 	if (IS_ERR(tlink))
+ 		return PTR_ERR(tlink);
+ 	tcon = tlink_tcon(tlink);
++	server = tcon->ses->server;
+ 
+ 	page1 = alloc_dentry_path();
+ 	page2 = alloc_dentry_path();
+@@ -2591,19 +2607,53 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 
+ unlink_target:
+ #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+-
+-	/* Try unlinking the target dentry if it's not negative */
+-	if (d_really_is_positive(target_dentry) && (rc == -EACCES || rc == -EEXIST)) {
+-		if (d_is_dir(target_dentry))
+-			tmprc = cifs_rmdir(target_dir, target_dentry);
+-		else
+-			tmprc = cifs_unlink(target_dir, target_dentry);
+-		if (tmprc)
+-			goto cifs_rename_exit;
+-		rc = cifs_do_rename(xid, source_dentry, from_name,
+-				    target_dentry, to_name);
+-		if (!rc)
+-			rehash = false;
++	if (d_really_is_positive(target_dentry)) {
++		if (!rc) {
++			struct inode *inode = d_inode(target_dentry);
++			/*
++			 * Samba and ksmbd servers allow renaming a target
++			 * directory that is open, so make sure to update
++			 * ->i_nlink and then mark it as delete pending.
++			 */
++			if (S_ISDIR(inode->i_mode)) {
++				drop_cached_dir_by_name(xid, tcon, to_name, cifs_sb);
++				spin_lock(&inode->i_lock);
++				i_size_write(inode, 0);
++				clear_nlink(inode);
++				spin_unlock(&inode->i_lock);
++				set_bit(CIFS_INO_DELETE_PENDING, &CIFS_I(inode)->flags);
++				CIFS_I(inode)->time = 0; /* force reval */
++				inode_set_ctime_current(inode);
++				inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
++			}
++		} else if (rc == -EACCES || rc == -EEXIST) {
++			/*
++			 * Rename failed, possibly due to a busy target.
++			 * Retry it by unlinking the target first.
++			 */
++			if (d_is_dir(target_dentry)) {
++				tmprc = cifs_rmdir(target_dir, target_dentry);
++			} else {
++				tmprc = __cifs_unlink(target_dir, target_dentry,
++						      server->vals->protocol_id > SMB10_PROT_ID);
++			}
++			if (tmprc) {
++				/*
++				 * Some servers will return STATUS_ACCESS_DENIED
++				 * or STATUS_DIRECTORY_NOT_EMPTY when failing to
++				 * rename a non-empty directory.  Make sure to
++				 * propagate the appropriate error back to
++				 * userspace.
++				 */
++				if (tmprc == -EEXIST || tmprc == -ENOTEMPTY)
++					rc = tmprc;
++				goto cifs_rename_exit;
++			}
++			rc = cifs_do_rename(xid, source_dentry, from_name,
++					    target_dentry, to_name);
++			if (!rc)
++				rehash = false;
++		}
+ 	}
+ 
+ 	/* force revalidate to go get info when needed */
+@@ -2629,6 +2679,8 @@ cifs_dentry_needs_reval(struct dentry *dentry)
+ 	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+ 	struct cached_fid *cfid = NULL;
+ 
++	if (test_bit(CIFS_INO_DELETE_PENDING, &cifs_i->flags))
++		return false;
+ 	if (cifs_i->time == 0)
+ 		return true;
+ 
+diff --git a/fs/smb/client/smb2glob.h b/fs/smb/client/smb2glob.h
+index 224495322a05da..e56e4d402f1382 100644
+--- a/fs/smb/client/smb2glob.h
++++ b/fs/smb/client/smb2glob.h
+@@ -30,10 +30,9 @@ enum smb2_compound_ops {
+ 	SMB2_OP_QUERY_DIR,
+ 	SMB2_OP_MKDIR,
+ 	SMB2_OP_RENAME,
+-	SMB2_OP_DELETE,
+ 	SMB2_OP_HARDLINK,
+ 	SMB2_OP_SET_EOF,
+-	SMB2_OP_RMDIR,
++	SMB2_OP_UNLINK,
+ 	SMB2_OP_POSIX_QUERY_INFO,
+ 	SMB2_OP_SET_REPARSE,
+ 	SMB2_OP_GET_REPARSE,
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 8b271bbe41c471..86cad8ee8e6f3b 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -346,9 +346,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 			trace_smb3_posix_query_info_compound_enter(xid, tcon->tid,
+ 								   ses->Suid, full_path);
+ 			break;
+-		case SMB2_OP_DELETE:
+-			trace_smb3_delete_enter(xid, tcon->tid, ses->Suid, full_path);
+-			break;
+ 		case SMB2_OP_MKDIR:
+ 			/*
+ 			 * Directories are created through parameters in the
+@@ -356,23 +353,40 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 			 */
+ 			trace_smb3_mkdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ 			break;
+-		case SMB2_OP_RMDIR:
+-			rqst[num_rqst].rq_iov = &vars->si_iov[0];
++		case SMB2_OP_UNLINK:
++			rqst[num_rqst].rq_iov = vars->unlink_iov;
+ 			rqst[num_rqst].rq_nvec = 1;
+ 
+ 			size[0] = 1; /* sizeof __u8 See MS-FSCC section 2.4.11 */
+ 			data[0] = &delete_pending[0];
+ 
+-			rc = SMB2_set_info_init(tcon, server,
+-						&rqst[num_rqst], COMPOUND_FID,
+-						COMPOUND_FID, current->tgid,
+-						FILE_DISPOSITION_INFORMATION,
+-						SMB2_O_INFO_FILE, 0, data, size);
+-			if (rc)
++			if (cfile) {
++				rc = SMB2_set_info_init(tcon, server,
++							&rqst[num_rqst],
++							cfile->fid.persistent_fid,
++							cfile->fid.volatile_fid,
++							current->tgid,
++							FILE_DISPOSITION_INFORMATION,
++							SMB2_O_INFO_FILE, 0,
++							data, size);
++			} else {
++				rc = SMB2_set_info_init(tcon, server,
++							&rqst[num_rqst],
++							COMPOUND_FID,
++							COMPOUND_FID,
++							current->tgid,
++							FILE_DISPOSITION_INFORMATION,
++							SMB2_O_INFO_FILE, 0,
++							data, size);
++			}
++			if (!rc && (!cfile || num_rqst > 1)) {
++				smb2_set_next_command(tcon, &rqst[num_rqst]);
++				smb2_set_related(&rqst[num_rqst]);
++			} else if (rc) {
+ 				goto finished;
+-			smb2_set_next_command(tcon, &rqst[num_rqst]);
+-			smb2_set_related(&rqst[num_rqst++]);
+-			trace_smb3_rmdir_enter(xid, tcon->tid, ses->Suid, full_path);
++			}
++			num_rqst++;
++			trace_smb3_unlink_enter(xid, tcon->tid, ses->Suid, full_path);
+ 			break;
+ 		case SMB2_OP_SET_EOF:
+ 			rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -442,7 +456,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 							   ses->Suid, full_path);
+ 			break;
+ 		case SMB2_OP_RENAME:
+-			rqst[num_rqst].rq_iov = &vars->si_iov[0];
++			rqst[num_rqst].rq_iov = vars->rename_iov;
+ 			rqst[num_rqst].rq_nvec = 2;
+ 
+ 			len = in_iov[i].iov_len;
+@@ -732,19 +746,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 				trace_smb3_posix_query_info_compound_done(xid, tcon->tid,
+ 									  ses->Suid);
+ 			break;
+-		case SMB2_OP_DELETE:
+-			if (rc)
+-				trace_smb3_delete_err(xid, tcon->tid, ses->Suid, rc);
+-			else {
+-				/*
+-				 * If dentry (hence, inode) is NULL, lease break is going to
+-				 * take care of degrading leases on handles for deleted files.
+-				 */
+-				if (inode)
+-					cifs_mark_open_handles_for_deleted_file(inode, full_path);
+-				trace_smb3_delete_done(xid, tcon->tid, ses->Suid);
+-			}
+-			break;
+ 		case SMB2_OP_MKDIR:
+ 			if (rc)
+ 				trace_smb3_mkdir_err(xid, tcon->tid, ses->Suid, rc);
+@@ -765,11 +766,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 				trace_smb3_rename_done(xid, tcon->tid, ses->Suid);
+ 			SMB2_set_info_free(&rqst[num_rqst++]);
+ 			break;
+-		case SMB2_OP_RMDIR:
+-			if (rc)
+-				trace_smb3_rmdir_err(xid, tcon->tid, ses->Suid, rc);
++		case SMB2_OP_UNLINK:
++			if (!rc)
++				trace_smb3_unlink_done(xid, tcon->tid, ses->Suid);
+ 			else
+-				trace_smb3_rmdir_done(xid, tcon->tid, ses->Suid);
++				trace_smb3_unlink_err(xid, tcon->tid, ses->Suid, rc);
+ 			SMB2_set_info_free(&rqst[num_rqst++]);
+ 			break;
+ 		case SMB2_OP_SET_EOF:
+@@ -1165,7 +1166,7 @@ smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ 			     FILE_OPEN, CREATE_NOT_FILE, ACL_NO_MODE);
+ 	return smb2_compound_op(xid, tcon, cifs_sb,
+ 				name, &oparms, NULL,
+-				&(int){SMB2_OP_RMDIR}, 1,
++				&(int){SMB2_OP_UNLINK}, 1,
+ 				NULL, NULL, NULL, NULL);
+ }
+ 
+@@ -1174,20 +1175,29 @@ smb2_unlink(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ 	    struct cifs_sb_info *cifs_sb, struct dentry *dentry)
+ {
+ 	struct cifs_open_parms oparms;
++	struct inode *inode = NULL;
++	int rc;
+ 
+-	oparms = CIFS_OPARMS(cifs_sb, tcon, name,
+-			     DELETE, FILE_OPEN,
+-			     CREATE_DELETE_ON_CLOSE | OPEN_REPARSE_POINT,
+-			     ACL_NO_MODE);
+-	int rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
+-				  NULL, &(int){SMB2_OP_DELETE}, 1,
+-				  NULL, NULL, NULL, dentry);
++	if (dentry)
++		inode = d_inode(dentry);
++
++	oparms = CIFS_OPARMS(cifs_sb, tcon, name, DELETE,
++			     FILE_OPEN, OPEN_REPARSE_POINT, ACL_NO_MODE);
++	rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
++			      NULL, &(int){SMB2_OP_UNLINK},
++			      1, NULL, NULL, NULL, dentry);
+ 	if (rc == -EINVAL) {
+ 		cifs_dbg(FYI, "invalid lease key, resending request without lease");
+ 		rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
+-				      NULL, &(int){SMB2_OP_DELETE}, 1,
+-				      NULL, NULL, NULL, NULL);
++				      NULL, &(int){SMB2_OP_UNLINK},
++				      1, NULL, NULL, NULL, NULL);
+ 	}
++	/*
++	 * If dentry (hence, inode) is NULL, lease break is going to
++	 * take care of degrading leases on handles for deleted files.
++	 */
++	if (!rc && inode)
++		cifs_mark_open_handles_for_deleted_file(inode, name);
+ 	return rc;
+ }
+ 
+@@ -1441,3 +1451,113 @@ int smb2_query_reparse_point(const unsigned int xid,
+ 	cifs_free_open_info(&data);
+ 	return rc;
+ }
++
++static inline __le16 *utf16_smb2_path(struct cifs_sb_info *cifs_sb,
++				      const char *name, size_t namelen)
++{
++	int len;
++
++	if (*name == '\\' ||
++	    (cifs_sb_master_tlink(cifs_sb) &&
++	     cifs_sb_master_tcon(cifs_sb)->posix_extensions && *name == '/'))
++		name++;
++	return cifs_strndup_to_utf16(name, namelen, &len,
++				     cifs_sb->local_nls,
++				     cifs_remap(cifs_sb));
++}
++
++int smb2_rename_pending_delete(const char *full_path,
++			       struct dentry *dentry,
++			       const unsigned int xid)
++{
++	struct cifs_sb_info *cifs_sb = CIFS_SB(d_inode(dentry)->i_sb);
++	struct cifsInodeInfo *cinode = CIFS_I(d_inode(dentry));
++	__le16 *utf16_path __free(kfree) = NULL;
++	__u32 co = file_create_options(dentry);
++	int cmds[] = {
++		SMB2_OP_SET_INFO,
++		SMB2_OP_RENAME,
++		SMB2_OP_UNLINK,
++	};
++	const int num_cmds = ARRAY_SIZE(cmds);
++	char *to_name __free(kfree) = NULL;
++	__u32 attrs = cinode->cifsAttrs;
++	struct cifs_open_parms oparms;
++	static atomic_t sillycounter;
++	struct cifsFileInfo *cfile;
++	struct tcon_link *tlink;
++	struct cifs_tcon *tcon;
++	struct kvec iov[2];
++	const char *ppath;
++	void *page;
++	size_t len;
++	int rc;
++
++	tlink = cifs_sb_tlink(cifs_sb);
++	if (IS_ERR(tlink))
++		return PTR_ERR(tlink);
++	tcon = tlink_tcon(tlink);
++
++	page = alloc_dentry_path();
++
++	ppath = build_path_from_dentry(dentry->d_parent, page);
++	if (IS_ERR(ppath)) {
++		rc = PTR_ERR(ppath);
++		goto out;
++	}
++
++	len = strlen(ppath) + strlen("/.__smb1234") + 1;
++	to_name = kmalloc(len, GFP_KERNEL);
++	if (!to_name) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	scnprintf(to_name, len, "%s%c.__smb%04X", ppath, CIFS_DIR_SEP(cifs_sb),
++		  atomic_inc_return(&sillycounter) & 0xffff);
++
++	utf16_path = utf16_smb2_path(cifs_sb, to_name, len);
++	if (!utf16_path) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	drop_cached_dir_by_name(xid, tcon, full_path, cifs_sb);
++	oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
++			     DELETE | FILE_WRITE_ATTRIBUTES,
++			     FILE_OPEN, co, ACL_NO_MODE);
++
++	attrs &= ~ATTR_READONLY;
++	if (!attrs)
++		attrs = ATTR_NORMAL;
++	if (d_inode(dentry)->i_nlink <= 1)
++		attrs |= ATTR_HIDDEN;
++	iov[0].iov_base = &(FILE_BASIC_INFO) {
++		.Attributes = cpu_to_le32(attrs),
++	};
++	iov[0].iov_len = sizeof(FILE_BASIC_INFO);
++	iov[1].iov_base = utf16_path;
++	iov[1].iov_len = sizeof(*utf16_path) * UniStrlen((wchar_t *)utf16_path);
++
++	cifs_get_writable_path(tcon, full_path, FIND_WR_WITH_DELETE, &cfile);
++	rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, iov,
++			      cmds, num_cmds, cfile, NULL, NULL, dentry);
++	if (rc == -EINVAL) {
++		cifs_dbg(FYI, "invalid lease key, resending request without lease\n");
++		cifs_get_writable_path(tcon, full_path,
++				       FIND_WR_WITH_DELETE, &cfile);
++		rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, iov,
++				      cmds, num_cmds, cfile, NULL, NULL, NULL);
++	}
++	if (!rc) {
++		set_bit(CIFS_INO_DELETE_PENDING, &cinode->flags);
++	} else {
++		cifs_tcon_dbg(FYI, "%s: failed to rename '%s' to '%s': %d\n",
++			      __func__, full_path, to_name, rc);
++		rc = -EIO;
++	}
++out:
++	cifs_put_tlink(tlink);
++	free_dentry_path(page);
++	return rc;
++}
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index d3e09b10dea476..cd051bb1a9d608 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -2640,13 +2640,35 @@ smb2_set_next_command(struct cifs_tcon *tcon, struct smb_rqst *rqst)
+ 	}
+ 
+ 	/* SMB headers in a compound are 8 byte aligned. */
+-	if (!IS_ALIGNED(len, 8)) {
+-		num_padding = 8 - (len & 7);
++	if (IS_ALIGNED(len, 8))
++		goto out;
++
++	num_padding = 8 - (len & 7);
++	if (smb3_encryption_required(tcon)) {
++		int i;
++
++		/*
++		 * Flatten request into a single buffer with required padding as
++		 * the encryption layer can't handle the padding iovs.
++		 */
++		for (i = 1; i < rqst->rq_nvec; i++) {
++			memcpy(rqst->rq_iov[0].iov_base +
++			       rqst->rq_iov[0].iov_len,
++			       rqst->rq_iov[i].iov_base,
++			       rqst->rq_iov[i].iov_len);
++			rqst->rq_iov[0].iov_len += rqst->rq_iov[i].iov_len;
++		}
++		memset(rqst->rq_iov[0].iov_base + rqst->rq_iov[0].iov_len,
++		       0, num_padding);
++		rqst->rq_iov[0].iov_len += num_padding;
++		rqst->rq_nvec = 1;
++	} else {
+ 		rqst->rq_iov[rqst->rq_nvec].iov_base = smb2_padding;
+ 		rqst->rq_iov[rqst->rq_nvec].iov_len = num_padding;
+ 		rqst->rq_nvec++;
+-		len += num_padding;
+ 	}
++	len += num_padding;
++out:
+ 	shdr->NextCommand = cpu_to_le32(len);
+ }
+ 
+@@ -5377,6 +5399,7 @@ struct smb_version_operations smb20_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ #endif /* CIFS_ALLOW_INSECURE_LEGACY */
+ 
+@@ -5482,6 +5505,7 @@ struct smb_version_operations smb21_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ 
+ struct smb_version_operations smb30_operations = {
+@@ -5598,6 +5622,7 @@ struct smb_version_operations smb30_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ 
+ struct smb_version_operations smb311_operations = {
+@@ -5714,6 +5739,7 @@ struct smb_version_operations smb311_operations = {
+ 	.llseek = smb3_llseek,
+ 	.is_status_io_timeout = smb2_is_status_io_timeout,
+ 	.is_network_name_deleted = smb2_is_network_name_deleted,
++	.rename_pending_delete = smb2_rename_pending_delete,
+ };
+ 
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
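
[The smb2_set_next_command() rework keeps the SMB2 rule that compounded
headers start on an 8-byte boundary, but for encrypted compounds it
flattens the request into a single buffer because the encryption layer
cannot carry the extra padding iov. The alignment arithmetic on its
own, as a quick standalone check:

#include <stdio.h>

int main(void)
{
	/* pad each length up to the next multiple of 8 */
	for (unsigned int len = 60; len <= 68; len++) {
		unsigned int pad = (len & 7) ? 8 - (len & 7) : 0;

		printf("len=%2u pad=%u next_command=%2u\n",
		       len, pad, len + pad);
	}
	return 0;
}]
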
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 035aa16240535c..8d6b42ff38fe68 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -320,5 +320,8 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ int smb2_make_nfs_node(unsigned int xid, struct inode *inode,
+ 		       struct dentry *dentry, struct cifs_tcon *tcon,
+ 		       const char *full_path, umode_t mode, dev_t dev);
++int smb2_rename_pending_delete(const char *full_path,
++			       struct dentry *dentry,
++			       const unsigned int xid);
+ 
+ #endif			/* _SMB2PROTO_H */
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 93e5b2bb9f28a2..a8c6f11699a3b6 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -669,13 +669,12 @@ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(query_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(posix_query_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(hardlink_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(rename_enter);
+-DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(rmdir_enter);
++DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(unlink_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_eof_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_reparse_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(get_reparse_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(query_wsl_ea_compound_enter);
+-DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(delete_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mkdir_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(tdis_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mknod_enter);
+@@ -710,13 +709,12 @@ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(query_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(posix_query_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(hardlink_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(rename_done);
+-DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(rmdir_done);
++DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(unlink_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_eof_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_reparse_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(get_reparse_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(query_wsl_ea_compound_done);
+-DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(delete_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mkdir_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(tdis_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mknod_done);
+@@ -756,14 +754,13 @@ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(query_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(posix_query_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(hardlink_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(rename_err);
+-DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(rmdir_err);
++DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(unlink_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_eof_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_reparse_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(get_reparse_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(query_wsl_ea_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mkdir_err);
+-DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(delete_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(tdis_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mknod_err);
+ 
+diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
+index e4ca35143ff965..3e35a68b2b4122 100644
+--- a/include/drm/display/drm_dp_helper.h
++++ b/include/drm/display/drm_dp_helper.h
+@@ -523,10 +523,16 @@ struct drm_dp_aux {
+ 	 * @no_zero_sized: If the hw can't use zero sized transfers (NVIDIA)
+ 	 */
+ 	bool no_zero_sized;
++
++	/**
++	 * @dpcd_probe_disabled: If probing before a DPCD access is disabled.
++	 */
++	bool dpcd_probe_disabled;
+ };
+ 
+ int drm_dp_dpcd_probe(struct drm_dp_aux *aux, unsigned int offset);
+ void drm_dp_dpcd_set_powered(struct drm_dp_aux *aux, bool powered);
++void drm_dp_dpcd_set_probe(struct drm_dp_aux *aux, bool enable);
+ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ 			 void *buffer, size_t size);
+ ssize_t drm_dp_dpcd_write(struct drm_dp_aux *aux, unsigned int offset,
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index f13d597370a30d..da49d520aa3bae 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -843,7 +843,9 @@ struct drm_display_info {
+ 	int vics_len;
+ 
+ 	/**
+-	 * @quirks: EDID based quirks. Internal to EDID parsing.
++	 * @quirks: EDID based quirks. DRM core and drivers can query the
++	 * @drm_edid_quirk quirks using drm_edid_has_quirk(); the rest of
++	 * the quirks also tracked here are internal to EDID parsing.
+ 	 */
+ 	u32 quirks;
+ 
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index b38409670868d8..3d1aecfec9b2a4 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -109,6 +109,13 @@ struct detailed_data_string {
+ #define DRM_EDID_CVT_FLAGS_STANDARD_BLANKING (1 << 3)
+ #define DRM_EDID_CVT_FLAGS_REDUCED_BLANKING  (1 << 4)
+ 
++enum drm_edid_quirk {
++	/* Do a dummy read before DPCD accesses, to prevent corruption. */
++	DRM_EDID_QUIRK_DP_DPCD_PROBE,
++
++	DRM_EDID_QUIRK_NUM,
++};
++
+ struct detailed_data_monitor_range {
+ 	u8 min_vfreq;
+ 	u8 max_vfreq;
+@@ -476,5 +483,6 @@ void drm_edid_print_product_id(struct drm_printer *p,
+ u32 drm_edid_get_panel_id(const struct drm_edid *drm_edid);
+ bool drm_edid_match(const struct drm_edid *drm_edid,
+ 		    const struct drm_edid_ident *ident);
++bool drm_edid_has_quirk(struct drm_connector *connector, enum drm_edid_quirk quirk);
+ 
+ #endif /* __DRM_EDID_H__ */
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index 4fc8e26914adfd..f9f36d6af9a710 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -18,23 +18,42 @@
+ #define KASAN_ABI_VERSION 5
+ 
+ /*
++ * Clang 22 added preprocessor macros to match GCC, in hopes of eventually
++ * dropping __has_feature support for sanitizers:
++ * https://github.com/llvm/llvm-project/commit/568c23bbd3303518c5056d7f03444dae4fdc8a9c
++ * Define these macros for older versions of clang so that the shim is easy to
++ * remove once the minimum LLVM version supported for building the kernel
++ * always provides them.
++ *
+  * Note: Checking __has_feature(*_sanitizer) is only true if the feature is
+  * enabled. Therefore it is not required to additionally check defined(CONFIG_*)
+  * to avoid adding redundant attributes in other configurations.
+  */
++#if __has_feature(address_sanitizer) && !defined(__SANITIZE_ADDRESS__)
++#define __SANITIZE_ADDRESS__
++#endif
++#if __has_feature(hwaddress_sanitizer) && !defined(__SANITIZE_HWADDRESS__)
++#define __SANITIZE_HWADDRESS__
++#endif
++#if __has_feature(thread_sanitizer) && !defined(__SANITIZE_THREAD__)
++#define __SANITIZE_THREAD__
++#endif
+ 
+-#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
+-/* Emulate GCC's __SANITIZE_ADDRESS__ flag */
++/*
++ * Treat __SANITIZE_HWADDRESS__ the same as __SANITIZE_ADDRESS__ in the kernel.
++ */
++#ifdef __SANITIZE_HWADDRESS__
+ #define __SANITIZE_ADDRESS__
++#endif
++
++#ifdef __SANITIZE_ADDRESS__
+ #define __no_sanitize_address \
+ 		__attribute__((no_sanitize("address", "hwaddress")))
+ #else
+ #define __no_sanitize_address
+ #endif
+ 
+-#if __has_feature(thread_sanitizer)
+-/* emulate gcc's __SANITIZE_THREAD__ flag */
+-#define __SANITIZE_THREAD__
++#ifdef __SANITIZE_THREAD__
+ #define __no_sanitize_thread \
+ 		__attribute__((no_sanitize("thread")))
+ #else
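
[With the shim above, code elsewhere in the tree can test a single
__SANITIZE_* macro and get the same answer on GCC, on clang 22+ (which
defines the macros itself), and on older clang (where __has_feature()
fills them in). A sketch of a consumer, with an illustrative macro
name:

/* One #ifdef now covers every supported compiler. */
#ifdef __SANITIZE_ADDRESS__
#define EXAMPLE_NO_ASAN __attribute__((no_sanitize("address", "hwaddress")))
#else
#define EXAMPLE_NO_ASAN
#endif

EXAMPLE_NO_ASAN static void poke_redzone(void)
{
	/* ... code that must not be instrumented ... */
}]
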
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 7fa1eb3cc82399..61d50571ad88ac 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -171,6 +171,9 @@ int em_dev_update_perf_domain(struct device *dev,
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ 				const struct em_data_callback *cb,
+ 				const cpumask_t *cpus, bool microwatts);
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++				 const struct em_data_callback *cb,
++				 const cpumask_t *cpus, bool microwatts);
+ void em_dev_unregister_perf_domain(struct device *dev);
+ struct em_perf_table *em_table_alloc(struct em_perf_domain *pd);
+ void em_table_free(struct em_perf_table *table);
+@@ -350,6 +353,13 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ {
+ 	return -EINVAL;
+ }
++static inline
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++				 const struct em_data_callback *cb,
++				 const cpumask_t *cpus, bool microwatts)
++{
++	return -EINVAL;
++}
+ static inline void em_dev_unregister_perf_domain(struct device *dev)
+ {
+ }
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 040c0036320fdf..d6716ff498a7aa 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -149,7 +149,8 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
+ /* Expect random access pattern */
+ #define FMODE_RANDOM		((__force fmode_t)(1 << 12))
+ 
+-/* FMODE_* bit 13 */
++/* Supports IOCB_HAS_METADATA */
++#define FMODE_HAS_METADATA	((__force fmode_t)(1 << 13))
+ 
+ /* File is opened with O_PATH; almost nothing can be done with it */
+ #define FMODE_PATH		((__force fmode_t)(1 << 14))
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index 890011071f2b14..fe5ce9215821db 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -562,7 +562,7 @@ static inline void kasan_init_hw_tags(void) { }
+ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+ 
+ void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
++int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask);
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ 			   unsigned long free_region_start,
+ 			   unsigned long free_region_end,
+@@ -574,7 +574,7 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
+ 						       unsigned long size)
+ { }
+ static inline int kasan_populate_vmalloc(unsigned long start,
+-					unsigned long size)
++					unsigned long size, gfp_t gfp_mask)
+ {
+ 	return 0;
+ }
+@@ -610,7 +610,7 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
+ static inline void kasan_populate_early_vm_area_shadow(void *start,
+ 						       unsigned long size) { }
+ static inline int kasan_populate_vmalloc(unsigned long start,
+-					unsigned long size)
++					unsigned long size, gfp_t gfp_mask)
+ {
+ 	return 0;
+ }
+diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
+index 15eaa09da998ce..d668b6266c34a2 100644
+--- a/include/linux/mtd/spinand.h
++++ b/include/linux/mtd/spinand.h
+@@ -484,6 +484,7 @@ struct spinand_user_otp {
+  * @op_variants.update_cache: variants of the update-cache operation
+  * @select_target: function used to select a target/die. Required only for
+  *		   multi-die chips
++ * @configure_chip: Align the chip configuration with the core settings
+  * @set_cont_read: enable/disable continuous cached reads
+  * @fact_otp: SPI NAND factory OTP info.
+  * @user_otp: SPI NAND user OTP info.
+@@ -507,6 +508,7 @@ struct spinand_info {
+ 	} op_variants;
+ 	int (*select_target)(struct spinand_device *spinand,
+ 			     unsigned int target);
++	int (*configure_chip)(struct spinand_device *spinand);
+ 	int (*set_cont_read)(struct spinand_device *spinand,
+ 			     bool enable);
+ 	struct spinand_fact_otp fact_otp;
+@@ -539,6 +541,9 @@ struct spinand_info {
+ #define SPINAND_SELECT_TARGET(__func)					\
+ 	.select_target = __func
+ 
++#define SPINAND_CONFIGURE_CHIP(__configure_chip)			\
++	.configure_chip = __configure_chip
++
+ #define SPINAND_CONT_READ(__set_cont_read)				\
+ 	.set_cont_read = __set_cont_read
+ 
+@@ -607,6 +612,7 @@ struct spinand_dirmap {
+  *		passed in spi_mem_op be DMA-able, so we can't based the bufs on
+  *		the stack
+  * @manufacturer: SPI NAND manufacturer information
++ * @configure_chip: Align the chip configuration with the core settings
+  * @cont_read_possible: Field filled by the core once the whole system
+  *		configuration is known to tell whether continuous reads are
+  *		suitable to use or not in general with this chip/configuration.
+@@ -647,6 +653,7 @@ struct spinand_device {
+ 	const struct spinand_manufacturer *manufacturer;
+ 	void *priv;
+ 
++	int (*configure_chip)(struct spinand_device *spinand);
+ 	bool cont_read_possible;
+ 	int (*set_cont_read)(struct spinand_device *spinand,
+ 			     bool enable);
+@@ -723,6 +730,7 @@ int spinand_match_and_init(struct spinand_device *spinand,
+ 			   enum spinand_readid_method rdid_method);
+ 
+ int spinand_upd_cfg(struct spinand_device *spinand, u8 mask, u8 val);
++int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val);
+ int spinand_write_reg_op(struct spinand_device *spinand, u8 reg, u8 val);
+ int spinand_select_target(struct spinand_device *spinand, unsigned int target);
+ 
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 5e49619ae49c69..16daeac2ac555e 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -459,19 +459,17 @@ struct nft_set_ext;
+  *	control plane functions.
+  */
+ struct nft_set_ops {
+-	bool				(*lookup)(const struct net *net,
++	const struct nft_set_ext *	(*lookup)(const struct net *net,
+ 						  const struct nft_set *set,
+-						  const u32 *key,
+-						  const struct nft_set_ext **ext);
+-	bool				(*update)(struct nft_set *set,
++						  const u32 *key);
++	const struct nft_set_ext *	(*update)(struct nft_set *set,
+ 						  const u32 *key,
+ 						  struct nft_elem_priv *
+ 							(*new)(struct nft_set *,
+ 							       const struct nft_expr *,
+ 							       struct nft_regs *),
+ 						  const struct nft_expr *expr,
+-						  struct nft_regs *regs,
+-						  const struct nft_set_ext **ext);
++						  struct nft_regs *regs);
+ 	bool				(*delete)(const struct nft_set *set,
+ 						  const u32 *key);
+ 
+@@ -1918,7 +1916,6 @@ struct nftables_pernet {
+ 	struct mutex		commit_mutex;
+ 	u64			table_handle;
+ 	u64			tstamp;
+-	unsigned int		base_seq;
+ 	unsigned int		gc_seq;
+ 	u8			validate_state;
+ 	struct work_struct	destroy_work;
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index 03b6165756fc5d..04699eac5b5243 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -94,34 +94,35 @@ extern const struct nft_set_type nft_set_pipapo_type;
+ extern const struct nft_set_type nft_set_pipapo_avx2_type;
+ 
+ #ifdef CONFIG_MITIGATION_RETPOLINE
+-bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+-		      const u32 *key, const struct nft_set_ext **ext);
+-bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
+-bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
+-bool nft_hash_lookup_fast(const struct net *net,
+-			  const struct nft_set *set,
+-			  const u32 *key, const struct nft_set_ext **ext);
+-bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+-		     const u32 *key, const struct nft_set_ext **ext);
+-bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
+-#else
+-static inline bool
+-nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+-		  const u32 *key, const struct nft_set_ext **ext)
+-{
+-	return set->ops->lookup(net, set, key, ext);
+-}
++const struct nft_set_ext *
++nft_rhash_lookup(const struct net *net, const struct nft_set *set,
++		 const u32 *key);
++const struct nft_set_ext *
++nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
++const struct nft_set_ext *
++nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
++const struct nft_set_ext *
++nft_hash_lookup_fast(const struct net *net, const struct nft_set *set,
++		     const u32 *key);
++const struct nft_set_ext *
++nft_hash_lookup(const struct net *net, const struct nft_set *set,
++		const u32 *key);
+ #endif
+ 
++const struct nft_set_ext *
++nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
++
+ /* called from nft_pipapo_avx2.c */
+-bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext);
++const struct nft_set_ext *
++nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key);
+ /* called from nft_set_pipapo.c */
+-bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+-			    const u32 *key, const struct nft_set_ext **ext);
++const struct nft_set_ext *
++nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
++			const u32 *key);
+ 
+ void nft_counter_init_seqcount(void);
+ 
+diff --git a/include/net/netns/nftables.h b/include/net/netns/nftables.h
+index cc8060c017d5fb..99dd166c5d07c3 100644
+--- a/include/net/netns/nftables.h
++++ b/include/net/netns/nftables.h
+@@ -3,6 +3,7 @@
+ #define _NETNS_NFTABLES_H_
+ 
+ struct netns_nftables {
++	unsigned int		base_seq;
+ 	u8			gencursor;
+ };
+ 
+diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h
+index b1394628727758..ac74133a476887 100644
+--- a/include/uapi/linux/raid/md_p.h
++++ b/include/uapi/linux/raid/md_p.h
+@@ -173,7 +173,7 @@ typedef struct mdp_superblock_s {
+ #else
+ #error unspecified endianness
+ #endif
+-	__u32 resync_offset;	/* 11 resync checkpoint sector count	      */
++	__u32 recovery_cp;	/* 11 resync checkpoint sector count	      */
+ 	/* There are only valid for minor_version > 90 */
+ 	__u64 reshape_position;	/* 12,13 next address in array-space for reshape */
+ 	__u32 new_level;	/* 14 new level we are reshaping to	      */
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 52a5b950b2e5e9..af5a54b5db1233 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -886,6 +886,9 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
+ 	if (req->flags & REQ_F_HAS_METADATA) {
+ 		struct io_async_rw *io = req->async_data;
+ 
++		if (!(file->f_mode & FMODE_HAS_METADATA))
++			return -EINVAL;
++
+ 		/*
+ 		 * We have a union of meta fields with wpq used for buffered-io
+ 		 * in io_async_rw, so fail it here.
+diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
+index 3a335c50e6e3cb..12ec926ed7114e 100644
+--- a/kernel/bpf/Makefile
++++ b/kernel/bpf/Makefile
+@@ -62,3 +62,4 @@ CFLAGS_REMOVE_bpf_lru_list.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_queue_stack_maps.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_lpm_trie.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_ringbuf.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_rqspinlock.o = $(CC_FLAGS_FTRACE)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index d966e971893ab3..829f0792d8d831 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2354,8 +2354,7 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ 					 const struct bpf_insn *insn)
+ {
+ 	/* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
+-	 * is not working properly, or interpreter is being used when
+-	 * prog->jit_requested is not 0, so warn about it!
++	 * is not working properly, so warn about it!
+ 	 */
+ 	WARN_ON_ONCE(1);
+ 	return 0;
+@@ -2456,8 +2455,9 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ 	return ret;
+ }
+ 
+-static void bpf_prog_select_func(struct bpf_prog *fp)
++static bool bpf_prog_select_interpreter(struct bpf_prog *fp)
+ {
++	bool select_interpreter = false;
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ 	u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
+ 	u32 idx = (round_up(stack_depth, 32) / 32) - 1;
+@@ -2466,15 +2466,16 @@ static void bpf_prog_select_func(struct bpf_prog *fp)
+ 	 * But for non-JITed programs, we don't need bpf_func, so no bounds
+ 	 * check needed.
+ 	 */
+-	if (!fp->jit_requested &&
+-	    !WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters))) {
++	if (idx < ARRAY_SIZE(interpreters)) {
+ 		fp->bpf_func = interpreters[idx];
++		select_interpreter = true;
+ 	} else {
+ 		fp->bpf_func = __bpf_prog_ret0_warn;
+ 	}
+ #else
+ 	fp->bpf_func = __bpf_prog_ret0_warn;
+ #endif
++	return select_interpreter;
+ }
+ 
+ /**
+@@ -2493,7 +2494,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ 	/* In case of BPF to BPF calls, verifier did all the prep
+ 	 * work with regards to JITing, etc.
+ 	 */
+-	bool jit_needed = fp->jit_requested;
++	bool jit_needed = false;
+ 
+ 	if (fp->bpf_func)
+ 		goto finalize;
+@@ -2502,7 +2503,8 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ 	    bpf_prog_has_kfunc_call(fp))
+ 		jit_needed = true;
+ 
+-	bpf_prog_select_func(fp);
++	if (!bpf_prog_select_interpreter(fp))
++		jit_needed = true;
+ 
+ 	/* eBPF JITs can rewrite the program in case constant
+ 	 * blinding is active. However, in case of error during
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 67e8a2fc1a99de..cfcf7ed57ca0d2 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -186,7 +186,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ 	struct xdp_buff xdp;
+ 	int i, nframes = 0;
+ 
+-	xdp_set_return_frame_no_direct();
+ 	xdp.rxq = &rxq;
+ 
+ 	for (i = 0; i < n; i++) {
+@@ -231,7 +230,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ 		}
+ 	}
+ 
+-	xdp_clear_return_frame_no_direct();
+ 	stats->pass += nframes;
+ 
+ 	return nframes;
+@@ -255,6 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ 
+ 	rcu_read_lock();
+ 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
++	xdp_set_return_frame_no_direct();
+ 
+ 	ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
+ 	if (unlikely(ret->skb_n))
+@@ -264,6 +263,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ 	if (stats->redirect)
+ 		xdp_do_flush();
+ 
++	xdp_clear_return_frame_no_direct();
+ 	bpf_net_ctx_clear(bpf_net_ctx);
+ 	rcu_read_unlock();
+ 
+diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
+index 94854cd9c4cc32..83c4d9943084b9 100644
+--- a/kernel/bpf/crypto.c
++++ b/kernel/bpf/crypto.c
+@@ -278,7 +278,7 @@ static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
+ 	siv_len = siv ? __bpf_dynptr_size(siv) : 0;
+ 	src_len = __bpf_dynptr_size(src);
+ 	dst_len = __bpf_dynptr_size(dst);
+-	if (!src_len || !dst_len)
++	if (!src_len || !dst_len || src_len > dst_len)
+ 		return -EINVAL;
+ 
+ 	if (siv_len != ctx->siv_len)
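The one-line fix above rejects a destination dynptr smaller than the source before any transform runs, so the operation can never silently truncate. The shape of that validate-before-copy check as a standalone helper, hypothetical names:

#include <errno.h>
#include <stdio.h>
#include <string.h>

static int crypt_into(void *dst, size_t dst_len,
		      const void *src, size_t src_len)
{
	/* Empty buffers and a source larger than dst fail up front. */
	if (!src_len || !dst_len || src_len > dst_len)
		return -EINVAL;
	memcpy(dst, src, src_len);
	return 0;
}

int main(void)
{
	char out[4];

	printf("%d\n", crypt_into(out, sizeof(out), "abc", 3));	/* 0 */
	printf("%d\n", crypt_into(out, sizeof(out), "abcdef", 6));	/* -EINVAL */
	return 0;
}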
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index fdf8737542ac45..3abbdebb2d9efc 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1277,8 +1277,11 @@ static int __bpf_async_init(struct bpf_async_kern *async, struct bpf_map *map, u
+ 		goto out;
+ 	}
+ 
+-	/* allocate hrtimer via map_kmalloc to use memcg accounting */
+-	cb = bpf_map_kmalloc_node(map, size, GFP_ATOMIC, map->numa_node);
++	/* Allocate via bpf_map_kmalloc_node() for memcg accounting. Until
++	 * kmalloc_nolock() is available, avoid locking issues by using
++	 * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM).
++	 */
++	cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);
+ 	if (!cb) {
+ 		ret = -ENOMEM;
+ 		goto out;
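The replacement comment above derives the new flag choice: __GFP_HIGH is exactly GFP_ATOMIC with the reclaim bits cleared. That identity can be checked with toy constants (the values here are illustrative, not the kernel's):

#include <assert.h>

#define __GFP_HIGH		0x01u
#define __GFP_KSWAPD_RECLAIM	0x02u
#define __GFP_DIRECT_RECLAIM	0x04u
#define __GFP_RECLAIM		(__GFP_KSWAPD_RECLAIM | __GFP_DIRECT_RECLAIM)
#define GFP_ATOMIC		(__GFP_HIGH | __GFP_KSWAPD_RECLAIM)

int main(void)
{
	/* Masking every reclaim bit out of GFP_ATOMIC leaves __GFP_HIGH,
	 * which is why the allocation above can pass __GFP_HIGH alone. */
	assert((GFP_ATOMIC & ~__GFP_RECLAIM) == __GFP_HIGH);
	return 0;
}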
+diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
+index 338305c8852cf6..804e619f1e0066 100644
+--- a/kernel/bpf/rqspinlock.c
++++ b/kernel/bpf/rqspinlock.c
+@@ -471,7 +471,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
+ 	 * any MCS node. This is not the most elegant solution, but is
+ 	 * simple enough.
+ 	 */
+-	if (unlikely(idx >= _Q_MAX_NODES)) {
++	if (unlikely(idx >= _Q_MAX_NODES || in_nmi())) {
+ 		lockevent_inc(lock_no_node);
+ 		RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
+ 		while (!queued_spin_trylock(lock)) {
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index e43c6de2bce4e7..b82399437db031 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -39,6 +39,7 @@ enum {
+ 	dma_debug_sg,
+ 	dma_debug_coherent,
+ 	dma_debug_resource,
++	dma_debug_noncoherent,
+ };
+ 
+ enum map_err_types {
+@@ -141,6 +142,7 @@ static const char *type2name[] = {
+ 	[dma_debug_sg] = "scatter-gather",
+ 	[dma_debug_coherent] = "coherent",
+ 	[dma_debug_resource] = "resource",
++	[dma_debug_noncoherent] = "noncoherent",
+ };
+ 
+ static const char *dir2name[] = {
+@@ -993,7 +995,8 @@ static void check_unmap(struct dma_debug_entry *ref)
+ 			   "[mapped as %s] [unmapped as %s]\n",
+ 			   ref->dev_addr, ref->size,
+ 			   type2name[entry->type], type2name[ref->type]);
+-	} else if (entry->type == dma_debug_coherent &&
++	} else if ((entry->type == dma_debug_coherent ||
++		    entry->type == dma_debug_noncoherent) &&
+ 		   ref->paddr != entry->paddr) {
+ 		err_printk(ref->dev, entry, "device driver frees "
+ 			   "DMA memory with different CPU address "
+@@ -1581,6 +1584,49 @@ void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+ 	}
+ }
+ 
++void debug_dma_alloc_pages(struct device *dev, struct page *page,
++			   size_t size, int direction,
++			   dma_addr_t dma_addr,
++			   unsigned long attrs)
++{
++	struct dma_debug_entry *entry;
++
++	if (unlikely(dma_debug_disabled()))
++		return;
++
++	entry = dma_entry_alloc();
++	if (!entry)
++		return;
++
++	entry->type      = dma_debug_noncoherent;
++	entry->dev       = dev;
++	entry->paddr	 = page_to_phys(page);
++	entry->size      = size;
++	entry->dev_addr  = dma_addr;
++	entry->direction = direction;
++
++	add_dma_entry(entry, attrs);
++}
++
++void debug_dma_free_pages(struct device *dev, struct page *page,
++			  size_t size, int direction,
++			  dma_addr_t dma_addr)
++{
++	struct dma_debug_entry ref = {
++		.type           = dma_debug_noncoherent,
++		.dev            = dev,
++		.paddr		= page_to_phys(page),
++		.dev_addr       = dma_addr,
++		.size           = size,
++		.direction      = direction,
++	};
++
++	if (unlikely(dma_debug_disabled()))
++		return;
++
++	check_unmap(&ref);
++}
++
+ static int __init dma_debug_driver_setup(char *str)
+ {
+ 	int i;
+diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
+index f525197d3cae60..48757ca13f3140 100644
+--- a/kernel/dma/debug.h
++++ b/kernel/dma/debug.h
+@@ -54,6 +54,13 @@ extern void debug_dma_sync_sg_for_cpu(struct device *dev,
+ extern void debug_dma_sync_sg_for_device(struct device *dev,
+ 					 struct scatterlist *sg,
+ 					 int nelems, int direction);
++extern void debug_dma_alloc_pages(struct device *dev, struct page *page,
++				  size_t size, int direction,
++				  dma_addr_t dma_addr,
++				  unsigned long attrs);
++extern void debug_dma_free_pages(struct device *dev, struct page *page,
++				 size_t size, int direction,
++				 dma_addr_t dma_addr);
+ #else /* CONFIG_DMA_API_DEBUG */
+ static inline void debug_dma_map_page(struct device *dev, struct page *page,
+ 				      size_t offset, size_t size,
+@@ -126,5 +133,18 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev,
+ 						int nelems, int direction)
+ {
+ }
++
++static inline void debug_dma_alloc_pages(struct device *dev, struct page *page,
++					 size_t size, int direction,
++					 dma_addr_t dma_addr,
++					 unsigned long attrs)
++{
++}
++
++static inline void debug_dma_free_pages(struct device *dev, struct page *page,
++					size_t size, int direction,
++					dma_addr_t dma_addr)
++{
++}
+ #endif /* CONFIG_DMA_API_DEBUG */
+ #endif /* _KERNEL_DMA_DEBUG_H */
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index 107e4a4d251df6..56de28a3b1799f 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -712,7 +712,7 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
+ 	if (page) {
+ 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
+ 				      size, dir, gfp, 0);
+-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
++		debug_dma_alloc_pages(dev, page, size, dir, *dma_handle, 0);
+ 	} else {
+ 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
+ 	}
+@@ -738,7 +738,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
+ 		dma_addr_t dma_handle, enum dma_data_direction dir)
+ {
+ 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
+-	debug_dma_unmap_page(dev, dma_handle, size, dir);
++	debug_dma_free_pages(dev, page, size, dir, dma_handle);
+ 	__dma_free_pages(dev, size, page, dma_handle, dir);
+ }
+ EXPORT_SYMBOL_GPL(dma_free_pages);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 872122e074e5fe..820127536e62b7 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10330,6 +10330,7 @@ static int __perf_event_overflow(struct perf_event *event,
+ 		ret = 1;
+ 		event->pending_kill = POLL_HUP;
+ 		perf_event_disable_inatomic(event);
++		event->pmu->stop(event, 0);
+ 	}
+ 
+ 	if (event->attr.sigtrap) {
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index ea7995a25780f3..8df55397414a12 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -552,6 +552,30 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ 				const struct em_data_callback *cb,
+ 				const cpumask_t *cpus, bool microwatts)
++{
++	int ret = em_dev_register_pd_no_update(dev, nr_states, cb, cpus, microwatts);
++
++	if (_is_cpu_device(dev))
++		em_check_capacity_update();
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
++
++/**
++ * em_dev_register_pd_no_update() - Register a perf domain for a device
++ * @dev : Device to register the PD for
++ * @nr_states : Number of performance states in the new PD
++ * @cb : Callback functions for populating the energy model
++ * @cpus : CPUs to include in the new PD (mandatory if @dev is a CPU device)
++ * @microwatts : Whether or not the power values in the EM will be in uW
++ *
++ * Like em_dev_register_perf_domain(), but does not trigger a CPU capacity
++ * update after registering the PD, even if @dev is a CPU device.
++ */
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++				 const struct em_data_callback *cb,
++				 const cpumask_t *cpus, bool microwatts)
+ {
+ 	struct em_perf_table *em_table;
+ 	unsigned long cap, prev_cap = 0;
+@@ -636,12 +660,9 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ unlock:
+ 	mutex_unlock(&em_pd_mutex);
+ 
+-	if (_is_cpu_device(dev))
+-		em_check_capacity_update();
+-
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
++EXPORT_SYMBOL_GPL(em_dev_register_pd_no_update);
+ 
+ /**
+  * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
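This hunk splits registration in two: em_dev_register_perf_domain() keeps its old behaviour by delegating to the new em_dev_register_pd_no_update() and then running the capacity check itself. A compact sketch of that "wrapper plus opt-out variant" refactor, hypothetical names:

#include <stdbool.h>
#include <stdio.h>

static int register_pd_no_update(const char *dev)
{
	printf("registered %s\n", dev);
	return 0;
}

static void check_capacity_update(void)
{
	puts("capacity update triggered");
}

/* The original entry point becomes a thin wrapper, so existing callers
 * see no behaviour change while new callers can opt out. */
static int register_pd(const char *dev, bool is_cpu)
{
	int ret = register_pd_no_update(dev);

	if (is_cpu)
		check_capacity_update();
	return ret;
}

int main(void)
{
	register_pd("cpu0", true);		/* old behaviour */
	register_pd_no_update("dsp0");		/* skips the update */
	return 0;
}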
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 9216e3b91d3b3b..c8022a477d3a1c 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -423,6 +423,7 @@ int hibernation_snapshot(int platform_mode)
+ 	}
+ 
+ 	console_suspend_all();
++	pm_restrict_gfp_mask();
+ 
+ 	error = dpm_suspend(PMSG_FREEZE);
+ 
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 30899a8cc52c0a..e8c479329282f9 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -787,10 +787,10 @@ static void retrigger_next_event(void *arg)
+ 	 * of the next expiring timer is enough. The return from the SMP
+ 	 * function call will take care of the reprogramming in case the
+ 	 * CPU was in a NOHZ idle sleep.
++	 *
++	 * In periodic low resolution mode, the next softirq expiration
++	 * must also be updated.
+ 	 */
+-	if (!hrtimer_hres_active(base) && !tick_nohz_active)
+-		return;
+-
+ 	raw_spin_lock(&base->lock);
+ 	hrtimer_update_base(base);
+ 	if (hrtimer_hres_active(base))
+@@ -2295,11 +2295,6 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ 				     &new_base->clock_base[i]);
+ 	}
+ 
+-	/*
+-	 * The migration might have changed the first expiring softirq
+-	 * timer on this CPU. Update it.
+-	 */
+-	__hrtimer_get_next_event(new_base, HRTIMER_ACTIVE_SOFT);
+ 	/* Tell the other CPU to retrigger the next event */
+ 	smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
+ 
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index dac2d58f39490b..db40ec5cc9d731 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1393,7 +1393,8 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ 		ftrace_graph_active--;
+ 		gops->saved_func = NULL;
+ 		fgraph_lru_release_index(i);
+-		unregister_pm_notifier(&ftrace_suspend_notifier);
++		if (!ftrace_graph_active)
++			unregister_pm_notifier(&ftrace_suspend_notifier);
+ 	}
+ 	return ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b91fa02cc54a6a..56f6cebdb22998 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -846,7 +846,10 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 		/* copy the current bits to the new max */
+ 		ret = trace_pid_list_first(filtered_pids, &pid);
+ 		while (!ret) {
+-			trace_pid_list_set(pid_list, pid);
++			ret = trace_pid_list_set(pid_list, pid);
++			if (ret < 0)
++				goto out;
++
+ 			ret = trace_pid_list_next(filtered_pids, pid + 1, &pid);
+ 			nr_pids++;
+ 		}
+@@ -883,6 +886,7 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 		trace_parser_clear(&parser);
+ 		ret = 0;
+ 	}
++ out:
+ 	trace_parser_put(&parser);
+ 
+ 	if (ret < 0) {
+@@ -7264,7 +7268,7 @@ static ssize_t write_marker_to_buffer(struct trace_array *tr, const char __user
+ 	entry = ring_buffer_event_data(event);
+ 	entry->ip = ip;
+ 
+-	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
++	len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
+ 	if (len) {
+ 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+ 		cnt = FAULTED_SIZE;
+@@ -7361,7 +7365,7 @@ static ssize_t write_raw_marker_to_buffer(struct trace_array *tr,
+ 
+ 	entry = ring_buffer_event_data(event);
+ 
+-	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
++	len = copy_from_user_nofault(&entry->id, ubuf, cnt);
+ 	if (len) {
+ 		entry->id = -1;
+ 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index fd259da0aa6456..337bc0eb5d71bf 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2322,6 +2322,9 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
+ 	int running, err;
+ 	char *buf __free(kfree) = NULL;
+ 
++	if (count < 1)
++		return 0;
++
+ 	buf = kmalloc(count, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 8ead13792f0495..d87fbb8c418d00 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -2050,6 +2050,10 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
+ 	if (!quota->ms && !quota->sz && list_empty(&quota->goals))
+ 		return;
+ 
++	/* First charge window */
++	if (!quota->total_charged_sz && !quota->charged_from)
++		quota->charged_from = jiffies;
++
+ 	/* New charge window starts */
+ 	if (time_after_eq(jiffies, quota->charged_from +
+ 				msecs_to_jiffies(quota->reset_interval))) {
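The two added lines seed charged_from on the very first charge window, so the reset test that follows measures a full reset_interval instead of comparing against an epoch of zero. A userspace sketch with a fake jiffies clock:

#include <stdbool.h>
#include <stdio.h>

static unsigned long jiffies;

static bool time_after_eq(unsigned long a, unsigned long b)
{
	return (long)(a - b) >= 0;
}

struct quota {
	unsigned long total_charged_sz;
	unsigned long charged_from;
	unsigned long reset_interval;
};

static void adjust_quota(struct quota *q)
{
	/* The fix: anchor the very first window at "now" instead of 0. */
	if (!q->total_charged_sz && !q->charged_from)
		q->charged_from = jiffies;

	if (time_after_eq(jiffies, q->charged_from + q->reset_interval)) {
		puts("charge window reset");
		q->charged_from = jiffies;
		q->total_charged_sz = 0;
	}
}

int main(void)
{
	struct quota q = { .reset_interval = 100 };

	jiffies = 5000;		/* without the seed, the first call would
				 * compare against charged_from == 0 and
				 * reset immediately */
	adjust_quota(&q);
	jiffies += 100;
	adjust_quota(&q);	/* resets after one full interval */
	return 0;
}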
+diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
+index 4af8fd4a390b66..c2b4f0b0714727 100644
+--- a/mm/damon/lru_sort.c
++++ b/mm/damon/lru_sort.c
+@@ -198,6 +198,11 @@ static int damon_lru_sort_apply_parameters(void)
+ 	if (err)
+ 		return err;
+ 
++	if (!damon_lru_sort_mon_attrs.sample_interval) {
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
+ 	if (err)
+ 		goto out;
+diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
+index a675150965e020..ade3ff724b24cd 100644
+--- a/mm/damon/reclaim.c
++++ b/mm/damon/reclaim.c
+@@ -194,6 +194,11 @@ static int damon_reclaim_apply_parameters(void)
+ 	if (err)
+ 		return err;
+ 
++	if (!damon_reclaim_mon_attrs.aggr_interval) {
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	err = damon_set_attrs(ctx, &damon_reclaim_mon_attrs);
+ 	if (err)
+ 		goto out;
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 1af6aff35d84a0..57d4ec256682ce 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1243,14 +1243,18 @@ static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
+ {
+ 	struct damon_sysfs_kdamond *kdamond = container_of(kobj,
+ 			struct damon_sysfs_kdamond, kobj);
+-	struct damon_ctx *ctx = kdamond->damon_ctx;
+-	bool running;
++	struct damon_ctx *ctx;
++	bool running = false;
+ 
+-	if (!ctx)
+-		running = false;
+-	else
++	if (!mutex_trylock(&damon_sysfs_lock))
++		return -EBUSY;
++
++	ctx = kdamond->damon_ctx;
++	if (ctx)
+ 		running = damon_sysfs_ctx_running(ctx);
+ 
++	mutex_unlock(&damon_sysfs_lock);
++
+ 	return sysfs_emit(buf, "%s\n", running ?
+ 			damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_ON] :
+ 			damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_OFF]);
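The state_show() rework takes damon_sysfs_lock with a trylock and reports -EBUSY instead of sleeping, so a sysfs reader cannot stall behind a long-running command. A pthread analogue of that non-blocking read path:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static int running;

static int state_show(char *buf, size_t len)
{
	/* Do not block: a contended lock becomes -EBUSY for the reader. */
	if (pthread_mutex_trylock(&state_lock))
		return -EBUSY;

	int n = snprintf(buf, len, "%s\n", running ? "on" : "off");

	pthread_mutex_unlock(&state_lock);
	return n;
}

int main(void)
{
	char buf[8];
	int n = state_show(buf, sizeof(buf));

	if (n > 0)
		fputs(buf, stdout);
	return 0;
}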
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a0d285d2099252..eee833f7068157 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5855,7 +5855,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 	spinlock_t *ptl;
+ 	struct hstate *h = hstate_vma(vma);
+ 	unsigned long sz = huge_page_size(h);
+-	bool adjust_reservation = false;
++	bool adjust_reservation;
+ 	unsigned long last_addr_mask;
+ 	bool force_flush = false;
+ 
+@@ -5948,6 +5948,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 					sz);
+ 		hugetlb_count_sub(pages_per_huge_page(h), mm);
+ 		hugetlb_remove_rmap(folio);
++		spin_unlock(ptl);
+ 
+ 		/*
+ 		 * Restore the reservation for anonymous page, otherwise the
+@@ -5955,14 +5956,16 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 		 * If there we are freeing a surplus, do not set the restore
+ 		 * reservation bit.
+ 		 */
++		adjust_reservation = false;
++
++		spin_lock_irq(&hugetlb_lock);
+ 		if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
+ 		    folio_test_anon(folio)) {
+ 			folio_set_hugetlb_restore_reserve(folio);
+ 			/* Reservation to be adjusted after the spin lock */
+ 			adjust_reservation = true;
+ 		}
+-
+-		spin_unlock(ptl);
++		spin_unlock_irq(&hugetlb_lock);
+ 
+ 		/*
+ 		 * Adjust the reservation for the region that will have the
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index d2c70cd2afb1de..c7c0be11917370 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -335,13 +335,13 @@ static void ___free_pages_bulk(struct page **pages, int nr_pages)
+ 	}
+ }
+ 
+-static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
++static int ___alloc_pages_bulk(struct page **pages, int nr_pages, gfp_t gfp_mask)
+ {
+ 	unsigned long nr_populated, nr_total = nr_pages;
+ 	struct page **page_array = pages;
+ 
+ 	while (nr_pages) {
+-		nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages);
++		nr_populated = alloc_pages_bulk(gfp_mask, nr_pages, pages);
+ 		if (!nr_populated) {
+ 			___free_pages_bulk(page_array, nr_total - nr_pages);
+ 			return -ENOMEM;
+@@ -353,25 +353,42 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
+ 	return 0;
+ }
+ 
+-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
++static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_t gfp_mask)
+ {
+ 	unsigned long nr_pages, nr_total = PFN_UP(end - start);
+ 	struct vmalloc_populate_data data;
++	unsigned int flags;
+ 	int ret = 0;
+ 
+-	data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
++	data.pages = (struct page **)__get_free_page(gfp_mask | __GFP_ZERO);
+ 	if (!data.pages)
+ 		return -ENOMEM;
+ 
+ 	while (nr_total) {
+ 		nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
+-		ret = ___alloc_pages_bulk(data.pages, nr_pages);
++		ret = ___alloc_pages_bulk(data.pages, nr_pages, gfp_mask);
+ 		if (ret)
+ 			break;
+ 
+ 		data.start = start;
++
++		/*
++		 * page table allocations ignore the external gfp mask, so
++		 * enforce it via the scope API
++		 */
++		if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
++			flags = memalloc_nofs_save();
++		else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
++			flags = memalloc_noio_save();
++
+ 		ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
+ 					  kasan_populate_vmalloc_pte, &data);
++
++		if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
++			memalloc_nofs_restore(flags);
++		else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
++			memalloc_noio_restore(flags);
++
+ 		___free_pages_bulk(data.pages, nr_pages);
+ 		if (ret)
+ 			break;
+@@ -385,7 +402,7 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+ 	return ret;
+ }
+ 
+-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
++int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask)
+ {
+ 	unsigned long shadow_start, shadow_end;
+ 	int ret;
+@@ -414,7 +431,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+ 	shadow_start = PAGE_ALIGN_DOWN(shadow_start);
+ 	shadow_end = PAGE_ALIGN(shadow_end);
+ 
+-	ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
++	ret = __kasan_populate_vmalloc(shadow_start, shadow_end, gfp_mask);
+ 	if (ret)
+ 		return ret;
+ 
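apply_to_page_range() allocates page tables with its own GFP flags, so the hunk above brackets it with the kernel's memalloc scope API to enforce the caller's NOFS/NOIO constraints. A thread-local userspace analogue of that save/restore scoping; the function names mirror the kernel's, but the implementation is illustrative only:

#include <stdio.h>

#define SCOPE_NOFS	0x1u

static _Thread_local unsigned int alloc_scope;

static unsigned int memalloc_nofs_save(void)
{
	unsigned int old = alloc_scope;

	alloc_scope |= SCOPE_NOFS;
	return old;
}

static void memalloc_nofs_restore(unsigned int old)
{
	alloc_scope = old;
}

static void deep_allocation(void)
{
	/* Callees consult the ambient scope instead of a mask that would
	 * have to be threaded through every function signature. */
	printf("allocation %s recurse into the FS\n",
	       (alloc_scope & SCOPE_NOFS) ? "must not" : "may");
}

int main(void)
{
	unsigned int flags = memalloc_nofs_save();

	deep_allocation();		/* restricted */
	memalloc_nofs_restore(flags);
	deep_allocation();		/* unrestricted again */
	return 0;
}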
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 15203ea7d0073d..a0c040336fc59b 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1400,8 +1400,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+ 		 */
+ 		if (cc->is_khugepaged &&
+ 		    (pte_young(pteval) || folio_test_young(folio) ||
+-		     folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
+-								     address)))
++		     folio_test_referenced(folio) ||
++		     mmu_notifier_test_young(vma->vm_mm, _address)))
+ 			referenced++;
+ 	}
+ 	if (!writable) {
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index dd543dd7755fc0..e626b6c93ffeee 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -950,7 +950,7 @@ static const char * const action_page_types[] = {
+ 	[MF_MSG_BUDDY]			= "free buddy page",
+ 	[MF_MSG_DAX]			= "dax page",
+ 	[MF_MSG_UNSPLIT_THP]		= "unsplit thp",
+-	[MF_MSG_ALREADY_POISONED]	= "already poisoned",
++	[MF_MSG_ALREADY_POISONED]	= "already poisoned page",
+ 	[MF_MSG_UNKNOWN]		= "unknown page",
+ };
+ 
+@@ -1343,9 +1343,10 @@ static int action_result(unsigned long pfn, enum mf_action_page_type type,
+ {
+ 	trace_memory_failure_event(pfn, type, result);
+ 
+-	num_poisoned_pages_inc(pfn);
+-
+-	update_per_node_mf_stats(pfn, result);
++	if (type != MF_MSG_ALREADY_POISONED) {
++		num_poisoned_pages_inc(pfn);
++		update_per_node_mf_stats(pfn, result);
++	}
+ 
+ 	pr_err("%#lx: recovery action for %s: %s\n",
+ 		pfn, action_page_types[type], action_name[result]);
+@@ -2088,12 +2089,11 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
+ 		*hugetlb = 0;
+ 		return 0;
+ 	} else if (res == -EHWPOISON) {
+-		pr_err("%#lx: already hardware poisoned\n", pfn);
+ 		if (flags & MF_ACTION_REQUIRED) {
+ 			folio = page_folio(p);
+ 			res = kill_accessing_process(current, folio_pfn(folio), flags);
+-			action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
+ 		}
++		action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
+ 		return res;
+ 	} else if (res == -EBUSY) {
+ 		if (!(flags & MF_NO_RETRY)) {
+@@ -2279,7 +2279,6 @@ int memory_failure(unsigned long pfn, int flags)
+ 		goto unlock_mutex;
+ 
+ 	if (TestSetPageHWPoison(p)) {
+-		pr_err("%#lx: already hardware poisoned\n", pfn);
+ 		res = -EHWPOISON;
+ 		if (flags & MF_ACTION_REQUIRED)
+ 			res = kill_accessing_process(current, pfn, flags);
+@@ -2576,10 +2575,9 @@ int unpoison_memory(unsigned long pfn)
+ 	static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
+ 					DEFAULT_RATELIMIT_BURST);
+ 
+-	if (!pfn_valid(pfn))
+-		return -ENXIO;
+-
+-	p = pfn_to_page(pfn);
++	p = pfn_to_online_page(pfn);
++	if (!p)
++		return -EIO;
+ 	folio = page_folio(p);
+ 
+ 	mutex_lock(&mf_mutex);
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 6dbcdceecae134..5edd536ba9d2a5 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2026,6 +2026,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ 	if (unlikely(!vmap_initialized))
+ 		return ERR_PTR(-EBUSY);
+ 
++	/* Only reclaim behaviour flags are relevant. */
++	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
+ 	might_sleep();
+ 
+ 	/*
+@@ -2038,8 +2040,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ 	 */
+ 	va = node_alloc(size, align, vstart, vend, &addr, &vn_id);
+ 	if (!va) {
+-		gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
+-
+ 		va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
+ 		if (unlikely(!va))
+ 			return ERR_PTR(-ENOMEM);
+@@ -2089,7 +2089,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ 	BUG_ON(va->va_start < vstart);
+ 	BUG_ON(va->va_end > vend);
+ 
+-	ret = kasan_populate_vmalloc(addr, size);
++	ret = kasan_populate_vmalloc(addr, size, gfp_mask);
+ 	if (ret) {
+ 		free_vmap_area(va);
+ 		return ERR_PTR(ret);
+@@ -4826,7 +4826,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+ 
+ 	/* populate the kasan shadow space */
+ 	for (area = 0; area < nr_vms; area++) {
+-		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
++		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area], GFP_KERNEL))
+ 			goto err_free_shadow;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index ad5574e9a93ee9..ce17e489c67c37 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -829,7 +829,17 @@ static void bis_cleanup(struct hci_conn *conn)
+ 		/* Check if ISO connection is a BIS and terminate advertising
+ 		 * set and BIG if there are no other connections using it.
+ 		 */
+-		bis = hci_conn_hash_lookup_big(hdev, conn->iso_qos.bcast.big);
++		bis = hci_conn_hash_lookup_big_state(hdev,
++						     conn->iso_qos.bcast.big,
++						     BT_CONNECTED,
++						     HCI_ROLE_MASTER);
++		if (bis)
++			return;
++
++		bis = hci_conn_hash_lookup_big_state(hdev,
++						     conn->iso_qos.bcast.big,
++						     BT_CONNECT,
++						     HCI_ROLE_MASTER);
+ 		if (bis)
+ 			return;
+ 
+@@ -2274,7 +2284,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	 * the start periodic advertising and create BIG commands have
+ 	 * been queued
+ 	 */
+-	hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK,
++	hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	/* Queue start periodic advertising and create BIG */
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 0ffdbe249f5d3d..090c7ffa515252 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6973,9 +6973,14 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 				continue;
+ 		}
+ 
+-		if (ev->status != 0x42)
++		if (ev->status != 0x42) {
+ 			/* Mark PA sync as established */
+ 			set_bit(HCI_CONN_PA_SYNC, &bis->flags);
++			/* Reset cleanup callback of PA Sync so it doesn't
++			 * terminate the sync when deleting the connection.
++			 */
++			conn->cleanup = NULL;
++		}
+ 
+ 		bis->sync_handle = conn->sync_handle;
+ 		bis->iso_qos.bcast.big = ev->handle;
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 14a4215352d5f1..c21566e1494a99 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -1347,7 +1347,7 @@ static int iso_sock_getname(struct socket *sock, struct sockaddr *addr,
+ 		bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst);
+ 		sa->iso_bdaddr_type = iso_pi(sk)->dst_type;
+ 
+-		if (hcon && hcon->type == BIS_LINK) {
++		if (hcon && (hcon->type == BIS_LINK || hcon->type == PA_LINK)) {
+ 			sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid;
+ 			sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis;
+ 			memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
+diff --git a/net/bridge/br.c b/net/bridge/br.c
+index 0adeafe11a3651..ad2d8f59fc7bcc 100644
+--- a/net/bridge/br.c
++++ b/net/bridge/br.c
+@@ -324,6 +324,13 @@ int br_boolopt_multi_toggle(struct net_bridge *br,
+ 	int err = 0;
+ 	int opt_id;
+ 
++	opt_id = find_next_bit(&bitmap, BITS_PER_LONG, BR_BOOLOPT_MAX);
++	if (opt_id != BITS_PER_LONG) {
++		NL_SET_ERR_MSG_FMT_MOD(extack, "Unknown boolean option %d",
++				       opt_id);
++		return -EINVAL;
++	}
++
+ 	for_each_set_bit(opt_id, &bitmap, BR_BOOLOPT_MAX) {
+ 		bool on = !!(bm->optval & BIT(opt_id));
+ 
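The added check scans for any bit at or above BR_BOOLOPT_MAX and rejects the whole request before toggling anything, so unknown options fail loudly instead of being silently skipped. A portable sketch of that validate-then-apply ordering (OPT_MAX is an illustrative constant):

#include <errno.h>
#include <stdio.h>

#define OPT_MAX	3	/* bits 0..2 are known options */

static int apply_bool_opts(unsigned long bitmap, unsigned long values)
{
	/* Reject unknown bits first; nothing is applied on error. */
	if (bitmap >> OPT_MAX) {
		fprintf(stderr, "unknown boolean option\n");
		return -EINVAL;
	}

	for (int opt = 0; opt < OPT_MAX; opt++)
		if (bitmap & (1UL << opt))
			printf("option %d -> %lu\n", opt,
			       (values >> opt) & 1);
	return 0;
}

int main(void)
{
	apply_bool_opts(0x5, 0x1);	/* ok: bits 0 and 2 */
	apply_bool_opts(0x9, 0x0);	/* -EINVAL: bit 3 is unknown */
	return 0;
}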
+diff --git a/net/can/j1939/bus.c b/net/can/j1939/bus.c
+index 39844f14eed862..797719cb227ec5 100644
+--- a/net/can/j1939/bus.c
++++ b/net/can/j1939/bus.c
+@@ -290,8 +290,11 @@ int j1939_local_ecu_get(struct j1939_priv *priv, name_t name, u8 sa)
+ 	if (!ecu)
+ 		ecu = j1939_ecu_create_locked(priv, name);
+ 	err = PTR_ERR_OR_ZERO(ecu);
+-	if (err)
++	if (err) {
++		if (j1939_address_is_unicast(sa))
++			priv->ents[sa].nusers--;
+ 		goto done;
++	}
+ 
+ 	ecu->nusers++;
+ 	/* TODO: do we care if ecu->addr != sa? */
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index 31a93cae5111b5..81f58924b4acd7 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -212,6 +212,7 @@ void j1939_priv_get(struct j1939_priv *priv);
+ 
+ /* notify/alert all j1939 sockets bound to ifindex */
+ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv);
++void j1939_sk_netdev_event_unregister(struct j1939_priv *priv);
+ int j1939_cancel_active_session(struct j1939_priv *priv, struct sock *sk);
+ void j1939_tp_init(struct j1939_priv *priv);
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 7e8a20f2fc42b5..3706a872ecafdb 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -377,6 +377,9 @@ static int j1939_netdev_notify(struct notifier_block *nb,
+ 		j1939_sk_netdev_event_netdown(priv);
+ 		j1939_ecu_unmap_all(priv);
+ 		break;
++	case NETDEV_UNREGISTER:
++		j1939_sk_netdev_event_unregister(priv);
++		break;
+ 	}
+ 
+ 	j1939_priv_put(priv);
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 6fefe7a6876116..785b883a1319d3 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -520,6 +520,9 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ 	ret = j1939_local_ecu_get(priv, jsk->addr.src_name, jsk->addr.sa);
+ 	if (ret) {
+ 		j1939_netdev_stop(priv);
++		jsk->priv = NULL;
++		synchronize_rcu();
++		j1939_priv_put(priv);
+ 		goto out_release_sock;
+ 	}
+ 
+@@ -1299,6 +1302,55 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
+ 	read_unlock_bh(&priv->j1939_socks_lock);
+ }
+ 
++void j1939_sk_netdev_event_unregister(struct j1939_priv *priv)
++{
++	struct sock *sk;
++	struct j1939_sock *jsk;
++	bool wait_rcu = false;
++
++rescan: /* The caller is holding a ref on this "priv" via j1939_priv_get_by_ndev(). */
++	read_lock_bh(&priv->j1939_socks_lock);
++	list_for_each_entry(jsk, &priv->j1939_socks, list) {
++		/* Skip if j1939_jsk_add() is not called on this socket. */
++		if (!(jsk->state & J1939_SOCK_BOUND))
++			continue;
++		sk = &jsk->sk;
++		sock_hold(sk);
++		read_unlock_bh(&priv->j1939_socks_lock);
++		/* Check that j1939_jsk_del() has not yet been called on this socket
++		 * after taking the socket lock, since both j1939_sk_bind() and
++		 * j1939_sk_release() call j1939_jsk_del() with the socket lock held.
++		 */
++		lock_sock(sk);
++		if (jsk->state & J1939_SOCK_BOUND) {
++			/* Neither j1939_sk_bind() nor j1939_sk_release() called j1939_jsk_del().
++			 * Make this socket no longer bound, by pretending as if j1939_sk_bind()
++			 * dropped old references but did not get new references.
++			 */
++			j1939_jsk_del(priv, jsk);
++			j1939_local_ecu_put(priv, jsk->addr.src_name, jsk->addr.sa);
++			j1939_netdev_stop(priv);
++			/* Call j1939_priv_put() now and prevent j1939_sk_sock_destruct() from
++			 * calling the corresponding j1939_priv_put().
++			 *
++			 * j1939_sk_sock_destruct() is supposed to call j1939_priv_put() after
++			 * an RCU grace period. But since the caller is holding a ref on this
++			 * "priv", we can defer synchronize_rcu() until immediately before
++			 * the caller calls j1939_priv_put().
++			 */
++			j1939_priv_put(priv);
++			jsk->priv = NULL;
++			wait_rcu = true;
++		}
++		release_sock(sk);
++		sock_put(sk);
++		goto rescan;
++	}
++	read_unlock_bh(&priv->j1939_socks_lock);
++	if (wait_rcu)
++		synchronize_rcu();
++}
++
+ static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
+ 				unsigned long arg)
+ {
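j1939_sk_netdev_event_unregister() above shows a standard teardown shape: the per-socket lock sleeps, so it cannot nest inside the socks list lock; each pass pins one entry, drops the list lock, locks the entry, rechecks its state, and then restarts the scan. A compact pthread rendering of that pin/drop/recheck/rescan loop:

#include <pthread.h>
#include <stdio.h>

struct entry {
	struct entry *next;
	pthread_mutex_t lock;		/* may sleep, like lock_sock() */
	int bound;
	int refs;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void unbind_all(struct entry *head)
{
rescan:
	pthread_mutex_lock(&list_lock);
	for (struct entry *e = head; e; e = e->next) {
		if (!e->bound)
			continue;
		e->refs++;			/* pin before unlocking */
		pthread_mutex_unlock(&list_lock);

		pthread_mutex_lock(&e->lock);	/* list lock is dropped */
		if (e->bound) {			/* recheck under entry lock */
			e->bound = 0;
			puts("unbound one entry");
		}
		pthread_mutex_unlock(&e->lock);

		pthread_mutex_lock(&list_lock);
		e->refs--;
		pthread_mutex_unlock(&list_lock);
		goto rescan;			/* list may have changed */
	}
	pthread_mutex_unlock(&list_lock);
}

int main(void)
{
	struct entry b = { NULL, PTHREAD_MUTEX_INITIALIZER, 1, 0 };
	struct entry a = { &b, PTHREAD_MUTEX_INITIALIZER, 1, 0 };

	unbind_all(&a);
	return 0;
}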
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index d1b5705dc0c648..9f6d860411cbd1 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -1524,7 +1524,7 @@ static void con_fault_finish(struct ceph_connection *con)
+ 	 * in case we faulted due to authentication, invalidate our
+ 	 * current tickets so that we can get new ones.
+ 	 */
+-	if (con->v1.auth_retry) {
++	if (!ceph_msgr2(from_msgr(con->msgr)) && con->v1.auth_retry) {
+ 		dout("auth_retry %d, invalidating\n", con->v1.auth_retry);
+ 		if (con->ops->invalidate_authorizer)
+ 			con->ops->invalidate_authorizer(con);
+@@ -1714,9 +1714,10 @@ static void clear_standby(struct ceph_connection *con)
+ {
+ 	/* come back from STANDBY? */
+ 	if (con->state == CEPH_CON_S_STANDBY) {
+-		dout("clear_standby %p and ++connect_seq\n", con);
++		dout("clear_standby %p\n", con);
+ 		con->state = CEPH_CON_S_PREOPEN;
+-		con->v1.connect_seq++;
++		if (!ceph_msgr2(from_msgr(con->msgr)))
++			con->v1.connect_seq++;
+ 		WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_WRITE_PENDING));
+ 		WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_KEEPALIVE_PENDING));
+ 	}
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 616479e7146633..9447065d01afb0 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -464,8 +464,15 @@ int generic_hwtstamp_get_lower(struct net_device *dev,
+ 	if (!netif_device_present(dev))
+ 		return -ENODEV;
+ 
+-	if (ops->ndo_hwtstamp_get)
+-		return dev_get_hwtstamp_phylib(dev, kernel_cfg);
++	if (ops->ndo_hwtstamp_get) {
++		int err;
++
++		netdev_lock_ops(dev);
++		err = dev_get_hwtstamp_phylib(dev, kernel_cfg);
++		netdev_unlock_ops(dev);
++
++		return err;
++	}
+ 
+ 	/* Legacy path: unconverted lower driver */
+ 	return generic_hwtstamp_ioctl_lower(dev, SIOCGHWTSTAMP, kernel_cfg);
+@@ -481,8 +488,15 @@ int generic_hwtstamp_set_lower(struct net_device *dev,
+ 	if (!netif_device_present(dev))
+ 		return -ENODEV;
+ 
+-	if (ops->ndo_hwtstamp_set)
+-		return dev_set_hwtstamp_phylib(dev, kernel_cfg, extack);
++	if (ops->ndo_hwtstamp_set) {
++		int err;
++
++		netdev_lock_ops(dev);
++		err = dev_set_hwtstamp_phylib(dev, kernel_cfg, extack);
++		netdev_unlock_ops(dev);
++
++		return err;
++	}
+ 
+ 	/* Legacy path: unconverted lower driver */
+ 	return generic_hwtstamp_ioctl_lower(dev, SIOCSHWTSTAMP, kernel_cfg);
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 88657255fec12b..fbbc3ccf9df64b 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -49,7 +49,7 @@ static bool hsr_check_carrier(struct hsr_port *master)
+ 
+ 	ASSERT_RTNL();
+ 
+-	hsr_for_each_port(master->hsr, port) {
++	hsr_for_each_port_rtnl(master->hsr, port) {
+ 		if (port->type != HSR_PT_MASTER && is_slave_up(port->dev)) {
+ 			netif_carrier_on(master->dev);
+ 			return true;
+@@ -105,7 +105,7 @@ int hsr_get_max_mtu(struct hsr_priv *hsr)
+ 	struct hsr_port *port;
+ 
+ 	mtu_max = ETH_DATA_LEN;
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		if (port->type != HSR_PT_MASTER)
+ 			mtu_max = min(port->dev->mtu, mtu_max);
+ 
+@@ -139,7 +139,7 @@ static int hsr_dev_open(struct net_device *dev)
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -172,7 +172,7 @@ static int hsr_dev_close(struct net_device *dev)
+ 	struct hsr_priv *hsr;
+ 
+ 	hsr = netdev_priv(dev);
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -205,7 +205,7 @@ static netdev_features_t hsr_features_recompute(struct hsr_priv *hsr,
+ 	 * may become enabled.
+ 	 */
+ 	features &= ~NETIF_F_ONE_FOR_ALL;
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		features = netdev_increment_features(features,
+ 						     port->dev->features,
+ 						     mask);
+@@ -226,6 +226,7 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	struct hsr_priv *hsr = netdev_priv(dev);
+ 	struct hsr_port *master;
+ 
++	rcu_read_lock();
+ 	master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
+ 	if (master) {
+ 		skb->dev = master->dev;
+@@ -238,6 +239,8 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		dev_core_stats_tx_dropped_inc(dev);
+ 		dev_kfree_skb_any(skb);
+ 	}
++	rcu_read_unlock();
++
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -484,7 +487,7 @@ static void hsr_set_rx_mode(struct net_device *dev)
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -506,7 +509,7 @@ static void hsr_change_rx_flags(struct net_device *dev, int change)
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER)
+ 			continue;
+ 		switch (port->type) {
+@@ -534,7 +537,7 @@ static int hsr_ndo_vlan_rx_add_vid(struct net_device *dev,
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		if (port->type == HSR_PT_MASTER ||
+ 		    port->type == HSR_PT_INTERLINK)
+ 			continue;
+@@ -580,7 +583,7 @@ static int hsr_ndo_vlan_rx_kill_vid(struct net_device *dev,
+ 
+ 	hsr = netdev_priv(dev);
+ 
+-	hsr_for_each_port(hsr, port) {
++	hsr_for_each_port_rtnl(hsr, port) {
+ 		switch (port->type) {
+ 		case HSR_PT_SLAVE_A:
+ 		case HSR_PT_SLAVE_B:
+@@ -672,9 +675,14 @@ struct net_device *hsr_get_port_ndev(struct net_device *ndev,
+ 	struct hsr_priv *hsr = netdev_priv(ndev);
+ 	struct hsr_port *port;
+ 
++	rcu_read_lock();
+ 	hsr_for_each_port(hsr, port)
+-		if (port->type == pt)
++		if (port->type == pt) {
++			dev_hold(port->dev);
++			rcu_read_unlock();
+ 			return port->dev;
++		}
++	rcu_read_unlock();
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(hsr_get_port_ndev);
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index 192893c3f2ec73..bc94b07101d80e 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -22,7 +22,7 @@ static bool hsr_slave_empty(struct hsr_priv *hsr)
+ {
+ 	struct hsr_port *port;
+ 
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		if (port->type != HSR_PT_MASTER)
+ 			return false;
+ 	return true;
+@@ -134,7 +134,7 @@ struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt)
+ {
+ 	struct hsr_port *port;
+ 
+-	hsr_for_each_port(hsr, port)
++	hsr_for_each_port_rtnl(hsr, port)
+ 		if (port->type == pt)
+ 			return port;
+ 	return NULL;
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 135ec5fce01967..33b0d2460c9bcd 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -224,6 +224,9 @@ struct hsr_priv {
+ #define hsr_for_each_port(hsr, port) \
+ 	list_for_each_entry_rcu((port), &(hsr)->ports, port_list)
+ 
++#define hsr_for_each_port_rtnl(hsr, port) \
++	list_for_each_entry_rcu((port), &(hsr)->ports, port_list, lockdep_rtnl_is_held())
++
+ struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt);
+ 
+ /* Caller must ensure skb is a valid HSR frame */
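hsr_for_each_port_rtnl() is the same RCU list walk with an extra lockdep expression asserting that the RTNL is held, so the rtnl-protected callers converted above no longer trip "suspicious RCU usage" splats. A toy version of an iteration macro that carries its own locking assertion, with assert() standing in for lockdep:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct port {
	struct port *next;
	int type;
};

static bool rtnl_held;	/* stand-in for lockdep_rtnl_is_held() */

/* Walk the list and check/document the protecting lock in one place. */
#define for_each_port_rtnl(head, p)					\
	for (assert(rtnl_held), (p) = (head); (p); (p) = (p)->next)

int main(void)
{
	struct port b = { NULL, 2 };
	struct port a = { &b, 1 };
	struct port *p;

	rtnl_held = true;
	for_each_port_rtnl(&a, p)
		printf("port type %d\n", p->type);
	return 0;
}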
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index f65d2f7273813b..8392d304a72ebe 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -204,6 +204,9 @@ static int iptunnel_pmtud_build_icmp(struct sk_buff *skb, int mtu)
+ 	if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr)))
+ 		return -EINVAL;
+ 
++	if (skb_is_gso(skb))
++		skb_gso_reset(skb);
++
+ 	skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN);
+ 	pskb_pull(skb, ETH_HLEN);
+ 	skb_reset_network_header(skb);
+@@ -298,6 +301,9 @@ static int iptunnel_pmtud_build_icmpv6(struct sk_buff *skb, int mtu)
+ 	if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr)))
+ 		return -EINVAL;
+ 
++	if (skb_is_gso(skb))
++		skb_gso_reset(skb);
++
+ 	skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN);
+ 	pskb_pull(skb, ETH_HLEN);
+ 	skb_reset_network_header(skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index ba581785adb4b3..a268e1595b22aa 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -408,8 +408,11 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ 		if (!psock->cork) {
+ 			psock->cork = kzalloc(sizeof(*psock->cork),
+ 					      GFP_ATOMIC | __GFP_NOWARN);
+-			if (!psock->cork)
++			if (!psock->cork) {
++				sk_msg_free(sk, msg);
++				*copied = 0;
+ 				return -ENOMEM;
++			}
+ 		}
+ 		memcpy(psock->cork, msg, sizeof(*msg));
+ 		return 0;
+diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
+index 3caa0a9d3b3885..25d2b65653cd40 100644
+--- a/net/mptcp/sockopt.c
++++ b/net/mptcp/sockopt.c
+@@ -1508,13 +1508,12 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ 	static const unsigned int tx_rx_locks = SOCK_RCVBUF_LOCK | SOCK_SNDBUF_LOCK;
+ 	struct sock *sk = (struct sock *)msk;
++	bool keep_open;
+ 
+-	if (ssk->sk_prot->keepalive) {
+-		if (sock_flag(sk, SOCK_KEEPOPEN))
+-			ssk->sk_prot->keepalive(ssk, 1);
+-		else
+-			ssk->sk_prot->keepalive(ssk, 0);
+-	}
++	keep_open = sock_flag(sk, SOCK_KEEPOPEN);
++	if (ssk->sk_prot->keepalive)
++		ssk->sk_prot->keepalive(ssk, keep_open);
++	sock_valbool_flag(ssk, SOCK_KEEPOPEN, keep_open);
+ 
+ 	ssk->sk_priority = sk->sk_priority;
+ 	ssk->sk_bound_dev_if = sk->sk_bound_dev_if;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0e86434ca13b00..cde63e5f18d8f9 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1131,11 +1131,14 @@ nf_tables_chain_type_lookup(struct net *net, const struct nlattr *nla,
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-static __be16 nft_base_seq(const struct net *net)
++static unsigned int nft_base_seq(const struct net *net)
+ {
+-	struct nftables_pernet *nft_net = nft_pernet(net);
++	return READ_ONCE(net->nft.base_seq);
++}
+ 
+-	return htons(nft_net->base_seq & 0xffff);
++static __be16 nft_base_seq_be16(const struct net *net)
++{
++	return htons(nft_base_seq(net) & 0xffff);
+ }
+ 
+ static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = {
+@@ -1153,9 +1156,9 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ {
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -1165,6 +1168,12 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_TABLE_PAD))
+ 		goto nla_put_failure;
+ 
++	if (event == NFT_MSG_DELTABLE ||
++	    event == NFT_MSG_DESTROYTABLE) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (nla_put_be32(skb, NFTA_TABLE_FLAGS,
+ 			 htonl(table->flags & NFT_TABLE_F_MASK)))
+ 		goto nla_put_failure;
+@@ -1242,7 +1251,7 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -2022,9 +2031,9 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ {
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -2034,6 +2043,13 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_CHAIN_PAD))
+ 		goto nla_put_failure;
+ 
++	if (!hook_list &&
++	    (event == NFT_MSG_DELCHAIN ||
++	     event == NFT_MSG_DESTROYCHAIN)) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (nft_is_base_chain(chain)) {
+ 		const struct nft_base_chain *basechain = nft_base_chain(chain);
+ 		struct nft_stats __percpu *stats;
+@@ -2120,7 +2136,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3658,7 +3674,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ 	u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 
+ 	nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0,
+-			   nft_base_seq(net));
++			   nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -3826,7 +3842,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -4037,7 +4053,7 @@ static int nf_tables_getrule_reset(struct sk_buff *skb,
+ 	buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ 			nla_len(nla[NFTA_RULE_TABLE]),
+ 			(char *)nla_data(nla[NFTA_RULE_TABLE]),
+-			nft_net->base_seq);
++			nft_base_seq(net));
+ 	audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ 			AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC);
+ 	kfree(buf);
+@@ -4871,9 +4887,10 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 	u32 seq = ctx->seq;
+ 	int i;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
+-			   NFNETLINK_V0, nft_base_seq(ctx->net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, ctx->family, NFNETLINK_V0,
++			   nft_base_seq_be16(ctx->net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -4885,6 +4902,12 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ 			 NFTA_SET_PAD))
+ 		goto nla_put_failure;
+ 
++	if (event == NFT_MSG_DELSET ||
++	    event == NFT_MSG_DESTROYSET) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (set->flags != 0)
+ 		if (nla_put_be32(skb, NFTA_SET_FLAGS, htonl(set->flags)))
+ 			goto nla_put_failure;
+@@ -5012,7 +5035,7 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (ctx->family != NFPROTO_UNSPEC &&
+@@ -6189,7 +6212,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
+@@ -6218,7 +6241,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ 	seq    = cb->nlh->nlmsg_seq;
+ 
+ 	nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI,
+-			   table->family, NFNETLINK_V0, nft_base_seq(net));
++			   table->family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -6311,7 +6334,7 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
+ 
+ 	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ 	nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
+-			   NFNETLINK_V0, nft_base_seq(ctx->net));
++			   NFNETLINK_V0, nft_base_seq_be16(ctx->net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -6610,7 +6633,7 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
+ 		}
+ 		nelems++;
+ 	}
+-	audit_log_nft_set_reset(dump_ctx.ctx.table, nft_net->base_seq, nelems);
++	audit_log_nft_set_reset(dump_ctx.ctx.table, nft_base_seq(info->net), nelems);
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+@@ -8359,20 +8382,26 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
+ {
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+ 	if (nla_put_string(skb, NFTA_OBJ_TABLE, table->name) ||
+ 	    nla_put_string(skb, NFTA_OBJ_NAME, obj->key.name) ||
++	    nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+ 	    nla_put_be64(skb, NFTA_OBJ_HANDLE, cpu_to_be64(obj->handle),
+ 			 NFTA_OBJ_PAD))
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+-	    nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
++	if (event == NFT_MSG_DELOBJ ||
++	    event == NFT_MSG_DESTROYOBJ) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
++	if (nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
+ 	    nft_object_dump(skb, NFTA_OBJ_DATA, obj, reset))
+ 		goto nla_put_failure;
+ 
+@@ -8420,7 +8449,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -8454,7 +8483,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ 			idx++;
+ 		}
+ 		if (ctx->reset && entries)
+-			audit_log_obj_reset(table, nft_net->base_seq, entries);
++			audit_log_obj_reset(table, nft_base_seq(net), entries);
+ 		if (rc < 0)
+ 			break;
+ 	}
+@@ -8623,7 +8652,7 @@ static int nf_tables_getobj_reset(struct sk_buff *skb,
+ 	buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ 			nla_len(nla[NFTA_OBJ_TABLE]),
+ 			(char *)nla_data(nla[NFTA_OBJ_TABLE]),
+-			nft_net->base_seq);
++			nft_base_seq(net));
+ 	audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ 			AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC);
+ 	kfree(buf);
+@@ -8728,9 +8757,8 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ 		    struct nft_object *obj, u32 portid, u32 seq, int event,
+ 		    u16 flags, int family, int report, gfp_t gfp)
+ {
+-	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	char *buf = kasprintf(gfp, "%s:%u",
+-			      table->name, nft_net->base_seq);
++			      table->name, nft_base_seq(net));
+ 
+ 	audit_log_nfcfg(buf,
+ 			family,
+@@ -9413,9 +9441,9 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 	struct nft_hook *hook;
+ 	struct nlmsghdr *nlh;
+ 
+-	event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+-	nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+-			   NFNETLINK_V0, nft_base_seq(net));
++	nlh = nfnl_msg_put(skb, portid, seq,
++			   nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++			   flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+@@ -9425,6 +9453,13 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ 			 NFTA_FLOWTABLE_PAD))
+ 		goto nla_put_failure;
+ 
++	if (!hook_list &&
++	    (event == NFT_MSG_DELFLOWTABLE ||
++	     event == NFT_MSG_DESTROYFLOWTABLE)) {
++		nlmsg_end(skb, nlh);
++		return 0;
++	}
++
+ 	if (nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
+ 	    nla_put_be32(skb, NFTA_FLOWTABLE_FLAGS, htonl(flowtable->data.flags)))
+ 		goto nla_put_failure;
+@@ -9477,7 +9512,7 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
+ 
+ 	rcu_read_lock();
+ 	nft_net = nft_pernet(net);
+-	cb->seq = READ_ONCE(nft_net->base_seq);
++	cb->seq = nft_base_seq(net);
+ 
+ 	list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ 		if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -9662,17 +9697,16 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
+ 				   u32 portid, u32 seq)
+ {
+-	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	struct nlmsghdr *nlh;
+ 	char buf[TASK_COMM_LEN];
+ 	int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
+ 
+ 	nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC,
+-			   NFNETLINK_V0, nft_base_seq(net));
++			   NFNETLINK_V0, nft_base_seq_be16(net));
+ 	if (!nlh)
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) ||
++	if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_base_seq(net))) ||
+ 	    nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) ||
+ 	    nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current)))
+ 		goto nla_put_failure;
+@@ -10933,11 +10967,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 	 * Bump generation counter, invalidate any dump in progress.
+ 	 * Cannot fail after this point.
+ 	 */
+-	base_seq = READ_ONCE(nft_net->base_seq);
++	base_seq = nft_base_seq(net);
+ 	while (++base_seq == 0)
+ 		;
+ 
+-	WRITE_ONCE(nft_net->base_seq, base_seq);
++	/* pairs with smp_load_acquire in nft_lookup_eval */
++	smp_store_release(&net->nft.base_seq, base_seq);
+ 
+ 	gc_seq = nft_gc_seq_begin(nft_net);
+ 
+@@ -11146,7 +11181,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 
+ 	nft_commit_notify(net, NETLINK_CB(skb).portid);
+ 	nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
+-	nf_tables_commit_audit_log(&adl, nft_net->base_seq);
++	nf_tables_commit_audit_log(&adl, nft_base_seq(net));
+ 
+ 	nft_gc_seq_end(nft_net, gc_seq);
+ 	nft_net->validate_state = NFT_VALIDATE_SKIP;
+@@ -11471,7 +11506,7 @@ static bool nf_tables_valid_genid(struct net *net, u32 genid)
+ 	mutex_lock(&nft_net->commit_mutex);
+ 	nft_net->tstamp = get_jiffies_64();
+ 
+-	genid_ok = genid == 0 || nft_net->base_seq == genid;
++	genid_ok = genid == 0 || nft_base_seq(net) == genid;
+ 	if (!genid_ok)
+ 		mutex_unlock(&nft_net->commit_mutex);
+ 
+@@ -12108,7 +12143,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+ 	INIT_LIST_HEAD(&nft_net->module_list);
+ 	INIT_LIST_HEAD(&nft_net->notify_list);
+ 	mutex_init(&nft_net->commit_mutex);
+-	nft_net->base_seq = 1;
++	net->nft.base_seq = 1;
+ 	nft_net->gc_seq = 0;
+ 	nft_net->validate_state = NFT_VALIDATE_SKIP;
+ 	INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work);
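
The nf_tables hunks above move the generation counter to net->nft.base_seq and publish every bump with smp_store_release(), so lockless readers can pair it with an acquire load. A minimal userspace sketch of the publish side, assuming C11 atomics in place of the kernel primitives (all names here are hypothetical):

#include <stdatomic.h>

static _Atomic unsigned int base_seq = 1;

static void commit_bump_generation(void)
{
	unsigned int seq = atomic_load_explicit(&base_seq,
						memory_order_relaxed);

	/* 0 is reserved as a "not initialized" sentinel, skip it */
	while (++seq == 0)
		;

	/* pairs with the acquire load on the lookup side */
	atomic_store_explicit(&base_seq, seq, memory_order_release);
}
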
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 88922e0e8e8377..e24493d9e77615 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -91,8 +91,9 @@ void nft_dynset_eval(const struct nft_expr *expr,
+ 		return;
+ 	}
+ 
+-	if (set->ops->update(set, &regs->data[priv->sreg_key], nft_dynset_new,
+-			     expr, regs, &ext)) {
++	ext = set->ops->update(set, &regs->data[priv->sreg_key], nft_dynset_new,
++			     expr, regs);
++	if (ext) {
+ 		if (priv->op == NFT_DYNSET_OP_UPDATE &&
+ 		    nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) &&
+ 		    READ_ONCE(nft_set_ext_timeout(ext)->timeout) != 0) {
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index 63ef832b8aa710..58c5b14889c474 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -24,36 +24,73 @@ struct nft_lookup {
+ 	struct nft_set_binding		binding;
+ };
+ 
+-#ifdef CONFIG_MITIGATION_RETPOLINE
+-bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++static const struct nft_set_ext *
++__nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++		    const u32 *key)
+ {
++#ifdef CONFIG_MITIGATION_RETPOLINE
+ 	if (set->ops == &nft_set_hash_fast_type.ops)
+-		return nft_hash_lookup_fast(net, set, key, ext);
++		return nft_hash_lookup_fast(net, set, key);
+ 	if (set->ops == &nft_set_hash_type.ops)
+-		return nft_hash_lookup(net, set, key, ext);
++		return nft_hash_lookup(net, set, key);
+ 
+ 	if (set->ops == &nft_set_rhash_type.ops)
+-		return nft_rhash_lookup(net, set, key, ext);
++		return nft_rhash_lookup(net, set, key);
+ 
+ 	if (set->ops == &nft_set_bitmap_type.ops)
+-		return nft_bitmap_lookup(net, set, key, ext);
++		return nft_bitmap_lookup(net, set, key);
+ 
+ 	if (set->ops == &nft_set_pipapo_type.ops)
+-		return nft_pipapo_lookup(net, set, key, ext);
++		return nft_pipapo_lookup(net, set, key);
+ #if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
+ 	if (set->ops == &nft_set_pipapo_avx2_type.ops)
+-		return nft_pipapo_avx2_lookup(net, set, key, ext);
++		return nft_pipapo_avx2_lookup(net, set, key);
+ #endif
+ 
+ 	if (set->ops == &nft_set_rbtree_type.ops)
+-		return nft_rbtree_lookup(net, set, key, ext);
++		return nft_rbtree_lookup(net, set, key);
+ 
+ 	WARN_ON_ONCE(1);
+-	return set->ops->lookup(net, set, key, ext);
++#endif
++	return set->ops->lookup(net, set, key);
++}
++
++static unsigned int nft_base_seq(const struct net *net)
++{
++	/* pairs with smp_store_release() in nf_tables_commit() */
++	return smp_load_acquire(&net->nft.base_seq);
++}
++
++static bool nft_lookup_should_retry(const struct net *net, unsigned int seq)
++{
++	return unlikely(seq != nft_base_seq(net));
++}
++
++const struct nft_set_ext *
++nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
++{
++	const struct nft_set_ext *ext;
++	unsigned int base_seq;
++
++	do {
++		base_seq = nft_base_seq(net);
++
++		ext = __nft_set_do_lookup(net, set, key);
++		if (ext)
++			break;
++		/* No match?  There is a small chance that lookup was
++		 * performed in the old generation, but nf_tables_commit()
++		 * already unlinked a (matching) element.
++		 *
++		 * We need to repeat the lookup to make sure that we didn't
++		 * miss a matching element in the new generation.
++		 */
++	} while (nft_lookup_should_retry(net, base_seq));
++
++	return ext;
+ }
+ EXPORT_SYMBOL_GPL(nft_set_do_lookup);
+-#endif
+ 
+ void nft_lookup_eval(const struct nft_expr *expr,
+ 		     struct nft_regs *regs,
+@@ -61,12 +98,12 @@ void nft_lookup_eval(const struct nft_expr *expr,
+ {
+ 	const struct nft_lookup *priv = nft_expr_priv(expr);
+ 	const struct nft_set *set = priv->set;
+-	const struct nft_set_ext *ext = NULL;
+ 	const struct net *net = nft_net(pkt);
++	const struct nft_set_ext *ext;
+ 	bool found;
+ 
+-	found =	nft_set_do_lookup(net, set, &regs->data[priv->sreg], &ext) ^
+-				  priv->invert;
++	ext = nft_set_do_lookup(net, set, &regs->data[priv->sreg]);
++	found = !!ext ^ priv->invert;
+ 	if (!found) {
+ 		ext = nft_set_catchall_lookup(net, set);
+ 		if (!ext) {
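
nft_set_do_lookup() above only trusts a miss if the generation counter did not move while the backend lookup ran; otherwise it retries. The same shape in portable C, again assuming C11 atomics and hypothetical names, pairing with the publish sketch earlier:

#include <stdatomic.h>

extern _Atomic unsigned int base_seq;

static unsigned int read_generation(void)
{
	/* pairs with the release store in the commit path */
	return atomic_load_explicit(&base_seq, memory_order_acquire);
}

static const void *lookup_retry(const void *key,
				const void *(*do_lookup)(const void *))
{
	const void *hit;
	unsigned int seq;

	do {
		seq = read_generation();
		hit = do_lookup(key);
		if (hit)
			break;
		/* a miss may have raced with a commit that already
		 * unlinked the old element; retry until the generation
		 * is stable across the lookup
		 */
	} while (seq != read_generation());

	return hit;
}
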
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index 09da7a3f9f9677..8ee66a86c3bc75 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -111,10 +111,9 @@ void nft_objref_map_eval(const struct nft_expr *expr,
+ 	struct net *net = nft_net(pkt);
+ 	const struct nft_set_ext *ext;
+ 	struct nft_object *obj;
+-	bool found;
+ 
+-	found = nft_set_do_lookup(net, set, &regs->data[priv->sreg], &ext);
+-	if (!found) {
++	ext = nft_set_do_lookup(net, set, &regs->data[priv->sreg]);
++	if (!ext) {
+ 		ext = nft_set_catchall_lookup(net, set);
+ 		if (!ext) {
+ 			regs->verdict.code = NFT_BREAK;
+diff --git a/net/netfilter/nft_set_bitmap.c b/net/netfilter/nft_set_bitmap.c
+index 12390d2e994fc6..8d3f040a904a2c 100644
+--- a/net/netfilter/nft_set_bitmap.c
++++ b/net/netfilter/nft_set_bitmap.c
+@@ -75,16 +75,21 @@ nft_bitmap_active(const u8 *bitmap, u32 idx, u32 off, u8 genmask)
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
+ {
+ 	const struct nft_bitmap *priv = nft_set_priv(set);
++	static const struct nft_set_ext found;
+ 	u8 genmask = nft_genmask_cur(net);
+ 	u32 idx, off;
+ 
+ 	nft_bitmap_location(set, key, &idx, &off);
+ 
+-	return nft_bitmap_active(priv->bitmap, idx, off, genmask);
++	if (nft_bitmap_active(priv->bitmap, idx, off, genmask))
++		return &found;
++
++	return NULL;
+ }
+ 
+ static struct nft_bitmap_elem *
+@@ -221,7 +226,8 @@ static void nft_bitmap_walk(const struct nft_ctx *ctx,
+ 	const struct nft_bitmap *priv = nft_set_priv(set);
+ 	struct nft_bitmap_elem *be;
+ 
+-	list_for_each_entry_rcu(be, &priv->list, head) {
++	list_for_each_entry_rcu(be, &priv->list, head,
++				lockdep_is_held(&nft_pernet(ctx->net)->commit_mutex)) {
+ 		if (iter->count < iter->skip)
+ 			goto cont;
+ 
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index abb0c8ec637191..9903c737c9f0ad 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -81,8 +81,9 @@ static const struct rhashtable_params nft_rhash_params = {
+ };
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+-		      const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_rhash_lookup(const struct net *net, const struct nft_set *set,
++		 const u32 *key)
+ {
+ 	struct nft_rhash *priv = nft_set_priv(set);
+ 	const struct nft_rhash_elem *he;
+@@ -95,9 +96,9 @@ bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+ 
+ 	he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+ 	if (he != NULL)
+-		*ext = &he->ext;
++		return &he->ext;
+ 
+-	return !!he;
++	return NULL;
+ }
+ 
+ static struct nft_elem_priv *
+@@ -120,14 +121,11 @@ nft_rhash_get(const struct net *net, const struct nft_set *set,
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+-			     struct nft_elem_priv *
+-				   (*new)(struct nft_set *,
+-					  const struct nft_expr *,
+-					  struct nft_regs *regs),
+-			     const struct nft_expr *expr,
+-			     struct nft_regs *regs,
+-			     const struct nft_set_ext **ext)
++static const struct nft_set_ext *
++nft_rhash_update(struct nft_set *set, const u32 *key,
++		 struct nft_elem_priv *(*new)(struct nft_set *, const struct nft_expr *,
++		 struct nft_regs *regs),
++		 const struct nft_expr *expr, struct nft_regs *regs)
+ {
+ 	struct nft_rhash *priv = nft_set_priv(set);
+ 	struct nft_rhash_elem *he, *prev;
+@@ -161,14 +159,13 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+ 	}
+ 
+ out:
+-	*ext = &he->ext;
+-	return true;
++	return &he->ext;
+ 
+ err2:
+ 	nft_set_elem_destroy(set, &he->priv, true);
+ 	atomic_dec(&set->nelems);
+ err1:
+-	return false;
++	return NULL;
+ }
+ 
+ static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
+@@ -507,8 +504,9 @@ struct nft_hash_elem {
+ };
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+-		     const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_hash_lookup(const struct net *net, const struct nft_set *set,
++		const u32 *key)
+ {
+ 	struct nft_hash *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_cur(net);
+@@ -519,12 +517,10 @@ bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+ 	hash = reciprocal_scale(hash, priv->buckets);
+ 	hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
+ 		if (!memcmp(nft_set_ext_key(&he->ext), key, set->klen) &&
+-		    nft_set_elem_active(&he->ext, genmask)) {
+-			*ext = &he->ext;
+-			return true;
+-		}
++		    nft_set_elem_active(&he->ext, genmask))
++			return &he->ext;
+ 	}
+-	return false;
++	return NULL;
+ }
+ 
+ static struct nft_elem_priv *
+@@ -547,9 +543,9 @@ nft_hash_get(const struct net *net, const struct nft_set *set,
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_hash_lookup_fast(const struct net *net,
+-			  const struct nft_set *set,
+-			  const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_hash_lookup_fast(const struct net *net, const struct nft_set *set,
++		     const u32 *key)
+ {
+ 	struct nft_hash *priv = nft_set_priv(set);
+ 	u8 genmask = nft_genmask_cur(net);
+@@ -562,12 +558,10 @@ bool nft_hash_lookup_fast(const struct net *net,
+ 	hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
+ 		k2 = *(u32 *)nft_set_ext_key(&he->ext)->data;
+ 		if (k1 == k2 &&
+-		    nft_set_elem_active(&he->ext, genmask)) {
+-			*ext = &he->ext;
+-			return true;
+-		}
++		    nft_set_elem_active(&he->ext, genmask))
++			return &he->ext;
+ 	}
+-	return false;
++	return NULL;
+ }
+ 
+ static u32 nft_jhash(const struct nft_set *set, const struct nft_hash *priv,
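
The backend conversion in nft_set_bitmap.c and nft_set_hash.c above (continued below for pipapo and rbtree) drops the bool-plus-out-parameter lookup signature in favor of returning the extension pointer directly, with NULL meaning no match. A compilable before/after sketch of the calling convention; the types and names are illustrative, not the kernel's:

#include <stdbool.h>
#include <stddef.h>

struct set_ext { int id; };

static struct set_ext table[2] = { { 1 }, { 2 } };

/* old shape: result and payload travel separately */
static bool lookup_old(int key, const struct set_ext **ext)
{
	if (key < 0 || key > 1)
		return false;
	*ext = &table[key];
	return true;
}

/* new shape: the pointer itself encodes hit vs. miss */
static const struct set_ext *lookup_new(int key)
{
	if (key < 0 || key > 1)
		return NULL;
	return &table[key];
}

int main(void)
{
	const struct set_ext *ext = NULL;
	bool found = lookup_old(1, &ext);	/* two outputs */

	ext = lookup_new(1);			/* one output */
	found = (ext != NULL);
	return found ? 0 : 1;
}
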
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 9e4e25f2458f99..793790d79d1384 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -397,37 +397,38 @@ int pipapo_refill(unsigned long *map, unsigned int len, unsigned int rules,
+ }
+ 
+ /**
+- * nft_pipapo_lookup() - Lookup function
+- * @net:	Network namespace
+- * @set:	nftables API set representation
+- * @key:	nftables API element representation containing key data
+- * @ext:	nftables API extension pointer, filled with matching reference
++ * pipapo_get() - Get matching element reference given key data
++ * @m:		storage containing the set elements
++ * @data:	Key data to be matched against existing elements
++ * @genmask:	If set, check that element is active in given genmask
++ * @tstamp:	timestamp to check for expired elements
+  *
+  * For more details, see DOC: Theory of Operation.
+  *
+- * Return: true on match, false otherwise.
++ * This is the main lookup function.  It matches key data against either
++ * the working match set or the uncommitted copy, depending on what the
++ * caller passed to us.
++ * nft_pipapo_get (lookup from userspace/control plane) and nft_pipapo_lookup
++ * (datapath lookup) pass the active copy.
++ * The insertion path will pass the uncommitted working copy.
++ *
++ * Return: pointer to &struct nft_pipapo_elem on match, NULL otherwise.
+  */
+-bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
++					  const u8 *data, u8 genmask,
++					  u64 tstamp)
+ {
+-	struct nft_pipapo *priv = nft_set_priv(set);
+ 	struct nft_pipapo_scratch *scratch;
+ 	unsigned long *res_map, *fill_map;
+-	u8 genmask = nft_genmask_cur(net);
+-	const struct nft_pipapo_match *m;
+ 	const struct nft_pipapo_field *f;
+-	const u8 *rp = (const u8 *)key;
+ 	bool map_index;
+ 	int i;
+ 
+ 	local_bh_disable();
+ 
+-	m = rcu_dereference(priv->match);
+-
+-	if (unlikely(!m || !*raw_cpu_ptr(m->scratch)))
+-		goto out;
+-
+ 	scratch = *raw_cpu_ptr(m->scratch);
++	if (unlikely(!scratch))
++		goto out;
+ 
+ 	map_index = scratch->map_index;
+ 
+@@ -444,12 +445,12 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		 * packet bytes value, then AND bucket value
+ 		 */
+ 		if (likely(f->bb == 8))
+-			pipapo_and_field_buckets_8bit(f, res_map, rp);
++			pipapo_and_field_buckets_8bit(f, res_map, data);
+ 		else
+-			pipapo_and_field_buckets_4bit(f, res_map, rp);
++			pipapo_and_field_buckets_4bit(f, res_map, data);
+ 		NFT_PIPAPO_GROUP_BITS_ARE_8_OR_4;
+ 
+-		rp += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
++		data += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
+ 
+ 		/* Now populate the bitmap for the next field, unless this is
+ 		 * the last field, in which case return the matched 'ext'
+@@ -465,13 +466,15 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 			scratch->map_index = map_index;
+ 			local_bh_enable();
+ 
+-			return false;
++			return NULL;
+ 		}
+ 
+ 		if (last) {
+-			*ext = &f->mt[b].e->ext;
+-			if (unlikely(nft_set_elem_expired(*ext) ||
+-				     !nft_set_elem_active(*ext, genmask)))
++			struct nft_pipapo_elem *e;
++
++			e = f->mt[b].e;
++			if (unlikely(__nft_set_elem_expired(&e->ext, tstamp) ||
++				     !nft_set_elem_active(&e->ext, genmask)))
+ 				goto next_match;
+ 
+ 			/* Last field: we're just returning the key without
+@@ -481,8 +484,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 			 */
+ 			scratch->map_index = map_index;
+ 			local_bh_enable();
+-
+-			return true;
++			return e;
+ 		}
+ 
+ 		/* Swap bitmap indices: res_map is the initial bitmap for the
+@@ -492,112 +494,54 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ 		map_index = !map_index;
+ 		swap(res_map, fill_map);
+ 
+-		rp += NFT_PIPAPO_GROUPS_PADDING(f);
++		data += NFT_PIPAPO_GROUPS_PADDING(f);
+ 	}
+ 
+ out:
+ 	local_bh_enable();
+-	return false;
++	return NULL;
+ }
+ 
+ /**
+- * pipapo_get() - Get matching element reference given key data
++ * nft_pipapo_lookup() - Dataplane frontend for the main lookup function
+  * @net:	Network namespace
+  * @set:	nftables API set representation
+- * @m:		storage containing active/existing elements
+- * @data:	Key data to be matched against existing elements
+- * @genmask:	If set, check that element is active in given genmask
+- * @tstamp:	timestamp to check for expired elements
+- * @gfp:	the type of memory to allocate (see kmalloc).
++ * @key:	pointer to nft registers containing key data
++ *
++ * This function is called from the data path.  It will search for
++ * an element matching the given key in the current active copy.
++ * Unlike other set types, this uses NFT_GENMASK_ANY instead of
++ * nft_genmask_cur().
+  *
+- * This is essentially the same as the lookup function, except that it matches
+- * key data against the uncommitted copy and doesn't use preallocated maps for
+- * bitmap results.
++ * This is because new (future) elements are not reachable from
++ * priv->match, they get added to priv->clone instead.
++ * When the commit phase flips the generation bitmask, the
++ * 'now old' entries are skipped before the 'now current'
++ * elements become visible. Using nft_genmask_cur() thus creates
++ * an inconsistent state: matching old entries get skipped but the
++ * newly matching entries are unreachable.
+  *
+- * Return: pointer to &struct nft_pipapo_elem on match, error pointer otherwise.
++ * NFT_GENMASK_ANY will still find the 'now old' entries, which ensures a
++ * consistent priv->match view.
++ *
++ * nft_pipapo_commit swaps ->clone and ->match shortly after the
++ * genbit flip.  As ->clone doesn't contain the old entries in the first
++ * place, lookup will only find the now-current ones.
++ *
++ * Return: nftables API extension pointer or NULL if no match.
+  */
+-static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+-					  const struct nft_set *set,
+-					  const struct nft_pipapo_match *m,
+-					  const u8 *data, u8 genmask,
+-					  u64 tstamp, gfp_t gfp)
++const struct nft_set_ext *
++nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
+ {
+-	struct nft_pipapo_elem *ret = ERR_PTR(-ENOENT);
+-	unsigned long *res_map, *fill_map = NULL;
+-	const struct nft_pipapo_field *f;
+-	int i;
+-
+-	if (m->bsize_max == 0)
+-		return ret;
+-
+-	res_map = kmalloc_array(m->bsize_max, sizeof(*res_map), gfp);
+-	if (!res_map) {
+-		ret = ERR_PTR(-ENOMEM);
+-		goto out;
+-	}
+-
+-	fill_map = kcalloc(m->bsize_max, sizeof(*res_map), gfp);
+-	if (!fill_map) {
+-		ret = ERR_PTR(-ENOMEM);
+-		goto out;
+-	}
+-
+-	pipapo_resmap_init(m, res_map);
+-
+-	nft_pipapo_for_each_field(f, i, m) {
+-		bool last = i == m->field_count - 1;
+-		int b;
+-
+-		/* For each bit group: select lookup table bucket depending on
+-		 * packet bytes value, then AND bucket value
+-		 */
+-		if (f->bb == 8)
+-			pipapo_and_field_buckets_8bit(f, res_map, data);
+-		else if (f->bb == 4)
+-			pipapo_and_field_buckets_4bit(f, res_map, data);
+-		else
+-			BUG();
+-
+-		data += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
+-
+-		/* Now populate the bitmap for the next field, unless this is
+-		 * the last field, in which case return the matched 'ext'
+-		 * pointer if any.
+-		 *
+-		 * Now res_map contains the matching bitmap, and fill_map is the
+-		 * bitmap for the next field.
+-		 */
+-next_match:
+-		b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt,
+-				  last);
+-		if (b < 0)
+-			goto out;
+-
+-		if (last) {
+-			if (__nft_set_elem_expired(&f->mt[b].e->ext, tstamp))
+-				goto next_match;
+-			if ((genmask &&
+-			     !nft_set_elem_active(&f->mt[b].e->ext, genmask)))
+-				goto next_match;
+-
+-			ret = f->mt[b].e;
+-			goto out;
+-		}
+-
+-		data += NFT_PIPAPO_GROUPS_PADDING(f);
++	struct nft_pipapo *priv = nft_set_priv(set);
++	const struct nft_pipapo_match *m;
++	const struct nft_pipapo_elem *e;
+ 
+-		/* Swap bitmap indices: fill_map will be the initial bitmap for
+-		 * the next field (i.e. the new res_map), and res_map is
+-		 * guaranteed to be all-zeroes at this point, ready to be filled
+-		 * according to the next mapping table.
+-		 */
+-		swap(res_map, fill_map);
+-	}
++	m = rcu_dereference(priv->match);
++	e = pipapo_get(m, (const u8 *)key, NFT_GENMASK_ANY, get_jiffies_64());
+ 
+-out:
+-	kfree(fill_map);
+-	kfree(res_map);
+-	return ret;
++	return e ? &e->ext : NULL;
+ }
+ 
+ /**
+@@ -606,6 +550,11 @@ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+  * @set:	nftables API set representation
+  * @elem:	nftables API element representation containing key data
+  * @flags:	Unused
++ *
++ * This function is called from the control plane path under
++ * RCU read lock.
++ *
++ * Return: set element private pointer or ERR_PTR(-ENOENT).
+  */
+ static struct nft_elem_priv *
+ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+@@ -615,11 +564,10 @@ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+ 	struct nft_pipapo_match *m = rcu_dereference(priv->match);
+ 	struct nft_pipapo_elem *e;
+ 
+-	e = pipapo_get(net, set, m, (const u8 *)elem->key.val.data,
+-		       nft_genmask_cur(net), get_jiffies_64(),
+-		       GFP_ATOMIC);
+-	if (IS_ERR(e))
+-		return ERR_CAST(e);
++	e = pipapo_get(m, (const u8 *)elem->key.val.data,
++		       nft_genmask_cur(net), get_jiffies_64());
++	if (!e)
++		return ERR_PTR(-ENOENT);
+ 
+ 	return &e->priv;
+ }
+@@ -1344,8 +1292,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 	else
+ 		end = start;
+ 
+-	dup = pipapo_get(net, set, m, start, genmask, tstamp, GFP_KERNEL);
+-	if (!IS_ERR(dup)) {
++	dup = pipapo_get(m, start, genmask, tstamp);
++	if (dup) {
+ 		/* Check if we already have the same exact entry */
+ 		const struct nft_data *dup_key, *dup_end;
+ 
+@@ -1364,15 +1312,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ 		return -ENOTEMPTY;
+ 	}
+ 
+-	if (PTR_ERR(dup) == -ENOENT) {
+-		/* Look for partially overlapping entries */
+-		dup = pipapo_get(net, set, m, end, nft_genmask_next(net), tstamp,
+-				 GFP_KERNEL);
+-	}
+-
+-	if (PTR_ERR(dup) != -ENOENT) {
+-		if (IS_ERR(dup))
+-			return PTR_ERR(dup);
++	/* Look for partially overlapping entries */
++	dup = pipapo_get(m, end, nft_genmask_next(net), tstamp);
++	if (dup) {
+ 		*elem_priv = &dup->priv;
+ 		return -ENOTEMPTY;
+ 	}
+@@ -1913,9 +1855,9 @@ nft_pipapo_deactivate(const struct net *net, const struct nft_set *set,
+ 	if (!m)
+ 		return NULL;
+ 
+-	e = pipapo_get(net, set, m, (const u8 *)elem->key.val.data,
+-		       nft_genmask_next(net), nft_net_tstamp(net), GFP_KERNEL);
+-	if (IS_ERR(e))
++	e = pipapo_get(m, (const u8 *)elem->key.val.data,
++		       nft_genmask_next(net), nft_net_tstamp(net));
++	if (!e)
+ 		return NULL;
+ 
+ 	nft_set_elem_change_active(net, set, &e->ext);
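
The rewritten kernel-doc above spells out why the pipapo datapath matches with NFT_GENMASK_ANY instead of nft_genmask_cur(): elements active only in the old generation must stay matchable until nft_pipapo_commit() swaps in the clone. A toy model of the two-bit generation logic; the flag values are assumptions of this sketch, not the kernel's encoding:

#include <stdio.h>

#define GEN_CUR		0x1	/* active in current generation */
#define GEN_NEXT	0x2	/* active in next generation */
#define GENMASK_ANY	(GEN_CUR | GEN_NEXT)

static int elem_active(unsigned int elem, unsigned int genmask)
{
	return (elem & genmask) != 0;
}

int main(void)
{
	unsigned int dying = GEN_CUR;	/* active only in the old generation */
	unsigned int fresh = GEN_NEXT;	/* active only in the new generation */

	/* a lookup using the "current" mask misses fresh before the
	 * genbit flip and misses dying after it, so a walk spanning
	 * the flip can miss both
	 */
	printf("before: dying=%d fresh=%d\n",
	       elem_active(dying, GEN_CUR),
	       elem_active(fresh, GEN_CUR));
	printf("after:  dying=%d fresh=%d\n",
	       elem_active(dying, GEN_NEXT),
	       elem_active(fresh, GEN_NEXT));

	/* matching either bit keeps the dying element visible until
	 * the clone swap actually removes it
	 */
	printf("any:    dying=%d fresh=%d\n",
	       elem_active(dying, GENMASK_ANY),
	       elem_active(fresh, GENMASK_ANY));
	return 0;
}
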
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index be7c16c79f711e..39e356c9687a98 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1146,26 +1146,27 @@ static inline void pipapo_resmap_init_avx2(const struct nft_pipapo_match *m, uns
+  *
+  * Return: true on match, false otherwise.
+  */
+-bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+-			    const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
++		       const u32 *key)
+ {
+ 	struct nft_pipapo *priv = nft_set_priv(set);
++	const struct nft_set_ext *ext = NULL;
+ 	struct nft_pipapo_scratch *scratch;
+-	u8 genmask = nft_genmask_cur(net);
+ 	const struct nft_pipapo_match *m;
+ 	const struct nft_pipapo_field *f;
+ 	const u8 *rp = (const u8 *)key;
+ 	unsigned long *res, *fill;
+ 	bool map_index;
+-	int i, ret = 0;
++	int i;
+ 
+ 	local_bh_disable();
+ 
+ 	if (unlikely(!irq_fpu_usable())) {
+-		bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
++		ext = nft_pipapo_lookup(net, set, key);
+ 
+ 		local_bh_enable();
+-		return fallback_res;
++		return ext;
+ 	}
+ 
+ 	m = rcu_dereference(priv->match);
+@@ -1182,7 +1183,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	if (unlikely(!scratch)) {
+ 		kernel_fpu_end();
+ 		local_bh_enable();
+-		return false;
++		return NULL;
+ 	}
+ 
+ 	map_index = scratch->map_index;
+@@ -1197,6 +1198,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ next_match:
+ 	nft_pipapo_for_each_field(f, i, m) {
+ 		bool last = i == m->field_count - 1, first = !i;
++		int ret = 0;
+ 
+ #define NFT_SET_PIPAPO_AVX2_LOOKUP(b, n)				\
+ 		(ret = nft_pipapo_avx2_lookup_##b##b_##n(res, fill, f,	\
+@@ -1244,13 +1246,12 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 			goto out;
+ 
+ 		if (last) {
+-			*ext = &f->mt[ret].e->ext;
+-			if (unlikely(nft_set_elem_expired(*ext) ||
+-				     !nft_set_elem_active(*ext, genmask))) {
+-				ret = 0;
++			const struct nft_set_ext *e = &f->mt[ret].e->ext;
++
++			if (unlikely(nft_set_elem_expired(e)))
+ 				goto next_match;
+-			}
+ 
++			ext = e;
+ 			goto out;
+ 		}
+ 
+@@ -1264,5 +1265,5 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	kernel_fpu_end();
+ 	local_bh_enable();
+ 
+-	return ret >= 0;
++	return ext;
+ }
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 2e8ef16ff191d4..b1f04168ec9377 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -52,9 +52,9 @@ static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe)
+ 	return nft_set_elem_expired(&rbe->ext);
+ }
+ 
+-static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+-				const u32 *key, const struct nft_set_ext **ext,
+-				unsigned int seq)
++static const struct nft_set_ext *
++__nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++		    const u32 *key, unsigned int seq)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	const struct nft_rbtree_elem *rbe, *interval = NULL;
+@@ -65,7 +65,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 	parent = rcu_dereference_raw(priv->root.rb_node);
+ 	while (parent != NULL) {
+ 		if (read_seqcount_retry(&priv->count, seq))
+-			return false;
++			return NULL;
+ 
+ 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+ 
+@@ -77,7 +77,9 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 			    nft_rbtree_interval_end(rbe) &&
+ 			    nft_rbtree_interval_start(interval))
+ 				continue;
+-			interval = rbe;
++			if (nft_set_elem_active(&rbe->ext, genmask) &&
++			    !nft_rbtree_elem_expired(rbe))
++				interval = rbe;
+ 		} else if (d > 0)
+ 			parent = rcu_dereference_raw(parent->rb_right);
+ 		else {
+@@ -87,50 +89,46 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 			}
+ 
+ 			if (nft_rbtree_elem_expired(rbe))
+-				return false;
++				return NULL;
+ 
+ 			if (nft_rbtree_interval_end(rbe)) {
+ 				if (nft_set_is_anonymous(set))
+-					return false;
++					return NULL;
+ 				parent = rcu_dereference_raw(parent->rb_left);
+ 				interval = NULL;
+ 				continue;
+ 			}
+ 
+-			*ext = &rbe->ext;
+-			return true;
++			return &rbe->ext;
+ 		}
+ 	}
+ 
+ 	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
+-	    nft_set_elem_active(&interval->ext, genmask) &&
+-	    !nft_rbtree_elem_expired(interval) &&
+-	    nft_rbtree_interval_start(interval)) {
+-		*ext = &interval->ext;
+-		return true;
+-	}
++	    nft_rbtree_interval_start(interval))
++		return &interval->ext;
+ 
+-	return false;
++	return NULL;
+ }
+ 
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+-		       const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++		  const u32 *key)
+ {
+ 	struct nft_rbtree *priv = nft_set_priv(set);
+ 	unsigned int seq = read_seqcount_begin(&priv->count);
+-	bool ret;
++	const struct nft_set_ext *ext;
+ 
+-	ret = __nft_rbtree_lookup(net, set, key, ext, seq);
+-	if (ret || !read_seqcount_retry(&priv->count, seq))
+-		return ret;
++	ext = __nft_rbtree_lookup(net, set, key, seq);
++	if (ext || !read_seqcount_retry(&priv->count, seq))
++		return ext;
+ 
+ 	read_lock_bh(&priv->lock);
+ 	seq = read_seqcount_begin(&priv->count);
+-	ret = __nft_rbtree_lookup(net, set, key, ext, seq);
++	ext = __nft_rbtree_lookup(net, set, key, seq);
+ 	read_unlock_bh(&priv->lock);
+ 
+-	return ret;
++	return ext;
+ }
+ 
+ static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
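
nft_rbtree_lookup() keeps its two-stage shape: a lockless walk guarded by a seqcount, falling back to the same walk under the read lock if the tree changed mid-walk. A userspace sketch of that pattern with C11 atomics and a pthread lock (the helpers are hypothetical, not the kernel API; the kernel re-reads the seqcount under the lock, whereas here the rwlock alone excludes writers):

#include <stdatomic.h>
#include <pthread.h>

static _Atomic unsigned int seqcount;
static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;

static unsigned int seq_begin(void)
{
	unsigned int s;

	/* an odd value means a writer is mid-update: wait it out */
	while ((s = atomic_load_explicit(&seqcount,
					 memory_order_acquire)) & 1)
		;
	return s;
}

static int seq_retry(unsigned int s)
{
	return atomic_load_explicit(&seqcount,
				    memory_order_acquire) != s;
}

static const void *lookup(const void *key,
			  const void *(*walk)(const void *))
{
	unsigned int s = seq_begin();
	const void *e = walk(key);

	if (e || !seq_retry(s))
		return e;	/* hit, or the tree stayed stable */

	/* slow path: redo the walk with writers excluded */
	pthread_rwlock_rdlock(&tree_lock);
	e = walk(key);
	pthread_rwlock_unlock(&tree_lock);
	return e;
}
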
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 104732d3454348..978c129c609501 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1836,6 +1836,9 @@ static int genl_bind(struct net *net, int group)
+ 		    !ns_capable(net->user_ns, CAP_SYS_ADMIN))
+ 			ret = -EPERM;
+ 
++		if (ret)
++			break;
++
+ 		if (family->bind)
+ 			family->bind(i);
+ 
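
The genl_bind() hunk is a small control-flow fix: the preceding capability checks set ret, but the function previously fell through and invoked the per-family bind() callback even when a check had failed. A simplified shape of the fix, with stand-in callbacks rather than the genetlink API:

int bind_group(int group, int (*check_perm)(int),
	       void (*bind_cb)(int))
{
	int ret = check_perm(group);

	if (ret)
		return ret;	/* previously bind_cb() still ran here */

	bind_cb(group);
	return 0;
}
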
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 73bc39281ef5f5..9b45fbdc90cabe 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -276,8 +276,6 @@ EXPORT_SYMBOL_GPL(rpc_destroy_wait_queue);
+ 
+ static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode)
+ {
+-	if (unlikely(current->flags & PF_EXITING))
+-		return -EINTR;
+ 	schedule();
+ 	if (signal_pending_state(mode, current))
+ 		return -ERESTARTSYS;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index c5f7bbf5775ff8..3aa987e7f0724d 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -407,9 +407,9 @@ xs_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags, int flags)
+ 	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
+ 		      alert_kvec.iov_len);
+ 	ret = sock_recvmsg(sock, &msg, flags);
+-	if (ret > 0 &&
+-	    tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
+-		iov_iter_revert(&msg.msg_iter, ret);
++	if (ret > 0) {
++		if (tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT)
++			iov_iter_revert(&msg.msg_iter, ret);
+ 		ret = xs_sock_process_cmsg(sock, &msg, msg_flags, &u.cmsg,
+ 					   -EAGAIN);
+ 	}
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 72c000c0ae5f57..de331541fdb387 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -36,6 +36,20 @@
+ #define TX_BATCH_SIZE 32
+ #define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE)
+ 
++struct xsk_addr_node {
++	u64 addr;
++	struct list_head addr_node;
++};
++
++struct xsk_addr_head {
++	u32 num_descs;
++	struct list_head addrs_list;
++};
++
++static struct kmem_cache *xsk_tx_generic_cache;
++
++#define XSKCB(skb) ((struct xsk_addr_head *)((skb)->cb))
++
+ void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool)
+ {
+ 	if (pool->cached_need_wakeup & XDP_WAKEUP_RX)
+@@ -528,24 +542,43 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
+ 	return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+ }
+ 
+-static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, u64 addr)
++static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
+ {
+ 	unsigned long flags;
+ 	int ret;
+ 
+ 	spin_lock_irqsave(&pool->cq_lock, flags);
+-	ret = xskq_prod_reserve_addr(pool->cq, addr);
++	ret = xskq_prod_reserve(pool->cq);
+ 	spin_unlock_irqrestore(&pool->cq_lock, flags);
+ 
+ 	return ret;
+ }
+ 
+-static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, u32 n)
++static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
++				      struct sk_buff *skb)
+ {
++	struct xsk_addr_node *pos, *tmp;
++	u32 descs_processed = 0;
+ 	unsigned long flags;
++	u32 idx;
+ 
+ 	spin_lock_irqsave(&pool->cq_lock, flags);
+-	xskq_prod_submit_n(pool->cq, n);
++	idx = xskq_get_prod(pool->cq);
++
++	xskq_prod_write_addr(pool->cq, idx,
++			     (u64)(uintptr_t)skb_shinfo(skb)->destructor_arg);
++	descs_processed++;
++
++	if (unlikely(XSKCB(skb)->num_descs > 1)) {
++		list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
++			xskq_prod_write_addr(pool->cq, idx + descs_processed,
++					     pos->addr);
++			descs_processed++;
++			list_del(&pos->addr_node);
++			kmem_cache_free(xsk_tx_generic_cache, pos);
++		}
++	}
++	xskq_prod_submit_n(pool->cq, descs_processed);
+ 	spin_unlock_irqrestore(&pool->cq_lock, flags);
+ }
+ 
+@@ -558,9 +591,14 @@ static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
+ 	spin_unlock_irqrestore(&pool->cq_lock, flags);
+ }
+ 
++static void xsk_inc_num_desc(struct sk_buff *skb)
++{
++	XSKCB(skb)->num_descs++;
++}
++
+ static u32 xsk_get_num_desc(struct sk_buff *skb)
+ {
+-	return skb ? (long)skb_shinfo(skb)->destructor_arg : 0;
++	return XSKCB(skb)->num_descs;
+ }
+ 
+ static void xsk_destruct_skb(struct sk_buff *skb)
+@@ -572,23 +610,33 @@ static void xsk_destruct_skb(struct sk_buff *skb)
+ 		*compl->tx_timestamp = ktime_get_tai_fast_ns();
+ 	}
+ 
+-	xsk_cq_submit_locked(xdp_sk(skb->sk)->pool, xsk_get_num_desc(skb));
++	xsk_cq_submit_addr_locked(xdp_sk(skb->sk)->pool, skb);
+ 	sock_wfree(skb);
+ }
+ 
+-static void xsk_set_destructor_arg(struct sk_buff *skb)
++static void xsk_set_destructor_arg(struct sk_buff *skb, u64 addr)
+ {
+-	long num = xsk_get_num_desc(xdp_sk(skb->sk)->skb) + 1;
+-
+-	skb_shinfo(skb)->destructor_arg = (void *)num;
++	BUILD_BUG_ON(sizeof(struct xsk_addr_head) > sizeof(skb->cb));
++	INIT_LIST_HEAD(&XSKCB(skb)->addrs_list);
++	XSKCB(skb)->num_descs = 0;
++	skb_shinfo(skb)->destructor_arg = (void *)(uintptr_t)addr;
+ }
+ 
+ static void xsk_consume_skb(struct sk_buff *skb)
+ {
+ 	struct xdp_sock *xs = xdp_sk(skb->sk);
++	u32 num_descs = xsk_get_num_desc(skb);
++	struct xsk_addr_node *pos, *tmp;
++
++	if (unlikely(num_descs > 1)) {
++		list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
++			list_del(&pos->addr_node);
++			kmem_cache_free(xsk_tx_generic_cache, pos);
++		}
++	}
+ 
+ 	skb->destructor = sock_wfree;
+-	xsk_cq_cancel_locked(xs->pool, xsk_get_num_desc(skb));
++	xsk_cq_cancel_locked(xs->pool, num_descs);
+ 	/* Free skb without triggering the perf drop trace */
+ 	consume_skb(skb);
+ 	xs->skb = NULL;
+@@ -605,6 +653,7 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
+ {
+ 	struct xsk_buff_pool *pool = xs->pool;
+ 	u32 hr, len, ts, offset, copy, copied;
++	struct xsk_addr_node *xsk_addr;
+ 	struct sk_buff *skb = xs->skb;
+ 	struct page *page;
+ 	void *buffer;
+@@ -619,6 +668,19 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
+ 			return ERR_PTR(err);
+ 
+ 		skb_reserve(skb, hr);
++
++		xsk_set_destructor_arg(skb, desc->addr);
++	} else {
++		xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL);
++		if (!xsk_addr)
++			return ERR_PTR(-ENOMEM);
++
++		/* in case of -EOVERFLOW that could happen below,
++		 * xsk_consume_skb() will release this node as the whole skb
++		 * would be dropped, which implies freeing all list elements
++		 */
++		xsk_addr->addr = desc->addr;
++		list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list);
+ 	}
+ 
+ 	addr = desc->addr;
+@@ -690,8 +752,11 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 			err = skb_store_bits(skb, 0, buffer, len);
+ 			if (unlikely(err))
+ 				goto free_err;
++
++			xsk_set_destructor_arg(skb, desc->addr);
+ 		} else {
+ 			int nr_frags = skb_shinfo(skb)->nr_frags;
++			struct xsk_addr_node *xsk_addr;
+ 			struct page *page;
+ 			u8 *vaddr;
+ 
+@@ -706,12 +771,22 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 				goto free_err;
+ 			}
+ 
++			xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL);
++			if (!xsk_addr) {
++				__free_page(page);
++				err = -ENOMEM;
++				goto free_err;
++			}
++
+ 			vaddr = kmap_local_page(page);
+ 			memcpy(vaddr, buffer, len);
+ 			kunmap_local(vaddr);
+ 
+ 			skb_add_rx_frag(skb, nr_frags, page, 0, len, PAGE_SIZE);
+ 			refcount_add(PAGE_SIZE, &xs->sk.sk_wmem_alloc);
++
++			xsk_addr->addr = desc->addr;
++			list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list);
+ 		}
+ 
+ 		if (first_frag && desc->options & XDP_TX_METADATA) {
+@@ -755,7 +830,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 	skb->mark = READ_ONCE(xs->sk.sk_mark);
+ 	skb->destructor = xsk_destruct_skb;
+ 	xsk_tx_metadata_to_compl(meta, &skb_shinfo(skb)->xsk_meta);
+-	xsk_set_destructor_arg(skb);
++	xsk_inc_num_desc(skb);
+ 
+ 	return skb;
+ 
+@@ -765,7 +840,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ 
+ 	if (err == -EOVERFLOW) {
+ 		/* Drop the packet */
+-		xsk_set_destructor_arg(xs->skb);
++		xsk_inc_num_desc(xs->skb);
+ 		xsk_drop_skb(xs->skb);
+ 		xskq_cons_release(xs->tx);
+ 	} else {
+@@ -807,7 +882,7 @@ static int __xsk_generic_xmit(struct sock *sk)
+ 		 * if there is space in it. This avoids having to implement
+ 		 * any buffering in the Tx path.
+ 		 */
+-		err = xsk_cq_reserve_addr_locked(xs->pool, desc.addr);
++		err = xsk_cq_reserve_locked(xs->pool);
+ 		if (err) {
+ 			err = -EAGAIN;
+ 			goto out;
+@@ -1795,8 +1870,18 @@ static int __init xsk_init(void)
+ 	if (err)
+ 		goto out_pernet;
+ 
++	xsk_tx_generic_cache = kmem_cache_create("xsk_generic_xmit_cache",
++						 sizeof(struct xsk_addr_node),
++						 0, SLAB_HWCACHE_ALIGN, NULL);
++	if (!xsk_tx_generic_cache) {
++		err = -ENOMEM;
++		goto out_unreg_notif;
++	}
++
+ 	return 0;
+ 
++out_unreg_notif:
++	unregister_netdevice_notifier(&xsk_netdev_notifier);
+ out_pernet:
+ 	unregister_pernet_subsys(&xsk_net_ops);
+ out_sk:
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 46d87e961ad6d3..f16f390370dc43 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -344,6 +344,11 @@ static inline u32 xskq_cons_present_entries(struct xsk_queue *q)
+ 
+ /* Functions for producers */
+ 
++static inline u32 xskq_get_prod(struct xsk_queue *q)
++{
++	return READ_ONCE(q->ring->producer);
++}
++
+ static inline u32 xskq_prod_nb_free(struct xsk_queue *q, u32 max)
+ {
+ 	u32 free_entries = q->nentries - (q->cached_prod - q->cached_cons);
+@@ -390,6 +395,13 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
+ 	return 0;
+ }
+ 
++static inline void xskq_prod_write_addr(struct xsk_queue *q, u32 idx, u64 addr)
++{
++	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
++
++	ring->desc[idx & q->ring_mask] = addr;
++}
++
+ static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
+ 					      u32 nb_entries)
+ {
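
The xsk changes above replace the old trick of stuffing a descriptor count into destructor_arg with real address bookkeeping: the first descriptor address rides in destructor_arg, each further fragment address goes on a list in skb->cb, and completion writes all of them to the ring in order before returning the nodes to xsk_tx_generic_cache. A self-contained userspace model of that bookkeeping (malloc stands in for the kmem_cache, and all names are illustrative):

#include <stdint.h>
#include <stdlib.h>

struct addr_node {
	uint64_t addr;
	struct addr_node *next;
};

struct fake_skb {
	uint64_t first_addr;		/* plays destructor_arg */
	unsigned int num_descs;		/* plays XSKCB()->num_descs */
	struct addr_node *head, *tail;	/* plays addrs_list (FIFO) */
};

static int skb_add_desc(struct fake_skb *skb, uint64_t addr)
{
	if (skb->num_descs == 0) {
		skb->first_addr = addr;	/* first frag needs no node */
	} else {
		struct addr_node *n = calloc(1, sizeof(*n));

		if (!n)
			return -1;	/* caller drops the whole skb */
		n->addr = addr;
		if (skb->tail)
			skb->tail->next = n;
		else
			skb->head = n;
		skb->tail = n;
	}
	skb->num_descs++;
	return 0;
}

static void skb_complete(struct fake_skb *skb,
			 void (*submit)(uint64_t))
{
	submit(skb->first_addr);
	while (skb->head) {
		struct addr_node *n = skb->head;

		skb->head = n->next;
		submit(n->addr);	/* ring sees addresses in order */
		free(n);
	}
	skb->tail = NULL;
	skb->num_descs = 0;
}
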
+diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c
+index cfea7a38befb05..da3a9f2091f55b 100644
+--- a/samples/ftrace/ftrace-direct-modify.c
++++ b/samples/ftrace/ftrace-direct-modify.c
+@@ -75,8 +75,8 @@ asm (
+ 	CALL_DEPTH_ACCOUNT
+ "	call my_direct_func1\n"
+ "	leave\n"
+-"	.size		my_tramp1, .-my_tramp1\n"
+ 	ASM_RET
++"	.size		my_tramp1, .-my_tramp1\n"
+ 
+ "	.type		my_tramp2, @function\n"
+ "	.globl		my_tramp2\n"
+diff --git a/tools/testing/selftests/net/can/config b/tools/testing/selftests/net/can/config
+new file mode 100644
+index 00000000000000..188f7979667097
+--- /dev/null
++++ b/tools/testing/selftests/net/can/config
+@@ -0,0 +1,3 @@
++CONFIG_CAN=m
++CONFIG_CAN_DEV=m
++CONFIG_CAN_VCAN=m


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-09-25 12:02 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-09-25 12:02 UTC (permalink / raw
  To: gentoo-commits

commit:     fb4386cb8d3a0e3944fcbd135b1a7c53cd28d3ad
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 25 12:02:04 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Sep 25 12:02:04 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fb4386cb

Linux patch 6.16.9

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-6.16.9.patch | 6546 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6550 insertions(+)

diff --git a/0000_README b/0000_README
index b72d6630..48b50ad0 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-6.16.8.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.8
 
+Patch:  1008_linux-6.16.9.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.9
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1008_linux-6.16.9.patch b/1008_linux-6.16.9.patch
new file mode 100644
index 00000000..3a65fc5a
--- /dev/null
+++ b/1008_linux-6.16.9.patch
@@ -0,0 +1,6546 @@
+diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml
+index c6bc27709bf726..f59c0b37e8ebb2 100644
+--- a/Documentation/devicetree/bindings/serial/8250.yaml
++++ b/Documentation/devicetree/bindings/serial/8250.yaml
+@@ -48,6 +48,45 @@ allOf:
+       oneOf:
+         - required: [ clock-frequency ]
+         - required: [ clocks ]
++  - if:
++      properties:
++        compatible:
++          contains:
++            const: nxp,lpc1850-uart
++    then:
++      properties:
++        clock-names:
++          items:
++            - const: uartclk
++            - const: reg
++    else:
++      properties:
++        clock-names:
++          items:
++            - const: core
++            - const: bus
++  - if:
++      properties:
++        compatible:
++          contains:
++            enum:
++              - spacemit,k1-uart
++              - nxp,lpc1850-uart
++    then:
++      required:
++        - clocks
++        - clock-names
++      properties:
++        clocks:
++          minItems: 2
++        clock-names:
++          minItems: 2
++    else:
++      properties:
++        clocks:
++          maxItems: 1
++        clock-names:
++          maxItems: 1
+ 
+ properties:
+   compatible:
+@@ -142,9 +181,22 @@ properties:
+ 
+   clock-names:
+     minItems: 1
+-    items:
+-      - const: core
+-      - const: bus
++    maxItems: 2
++    oneOf:
++      - items:
++          - const: core
++          - const: bus
++      - items:
++          - const: uartclk
++          - const: reg
++
++  dmas:
++    minItems: 1
++    maxItems: 4
++
++  dma-names:
++    minItems: 1
++    maxItems: 4
+ 
+   resets:
+     maxItems: 1
+@@ -233,25 +285,6 @@ required:
+   - reg
+   - interrupts
+ 
+-if:
+-  properties:
+-    compatible:
+-      contains:
+-        const: spacemit,k1-uart
+-then:
+-  required: [clock-names]
+-  properties:
+-    clocks:
+-      minItems: 2
+-    clock-names:
+-      minItems: 2
+-else:
+-  properties:
+-    clocks:
+-      maxItems: 1
+-    clock-names:
+-      maxItems: 1
+-
+ unevaluatedProperties: false
+ 
+ examples:
+diff --git a/Documentation/netlink/specs/conntrack.yaml b/Documentation/netlink/specs/conntrack.yaml
+index 840dc4504216bc..1865ddf01fb0f6 100644
+--- a/Documentation/netlink/specs/conntrack.yaml
++++ b/Documentation/netlink/specs/conntrack.yaml
+@@ -575,8 +575,8 @@ operations:
+             - nat-dst
+             - timeout
+             - mark
+-            - counter-orig
+-            - counter-reply
++            - counters-orig
++            - counters-reply
+             - use
+             - id
+             - nat-dst
+@@ -591,7 +591,6 @@ operations:
+         request:
+           value: 0x101
+           attributes:
+-            - nfgen-family
+             - mark
+             - filter
+             - status
+@@ -608,8 +607,8 @@ operations:
+             - nat-dst
+             - timeout
+             - mark
+-            - counter-orig
+-            - counter-reply
++            - counters-orig
++            - counters-reply
+             - use
+             - id
+             - nat-dst
+diff --git a/Documentation/netlink/specs/mptcp_pm.yaml b/Documentation/netlink/specs/mptcp_pm.yaml
+index ecfe5ee33de2d8..c77f32cfcae973 100644
+--- a/Documentation/netlink/specs/mptcp_pm.yaml
++++ b/Documentation/netlink/specs/mptcp_pm.yaml
+@@ -28,13 +28,13 @@ definitions:
+         traffic-patterns it can take a long time until the
+         MPTCP_EVENT_ESTABLISHED is sent.
+         Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
+-        dport, server-side.
++        dport, server-side, [flags].
+      -
+       name: established
+       doc: >-
+         A MPTCP connection is established (can start new subflows).
+         Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
+-        dport, server-side.
++        dport, server-side, [flags].
+      -
+       name: closed
+       doc: >-
+diff --git a/Makefile b/Makefile
+index 7594f35cbc2a5a..aef2cb6ea99d8b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index 4b19f93379a153..eb13fcf095a1d4 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -301,6 +301,10 @@ config AS_HAS_LVZ_EXTENSION
+ config CC_HAS_ANNOTATE_TABLEJUMP
+ 	def_bool $(cc-option,-mannotate-tablejump)
+ 
++config RUSTC_HAS_ANNOTATE_TABLEJUMP
++	depends on RUST
++	def_bool $(rustc-option,-Cllvm-args=--loongarch-annotate-tablejump)
++
+ menu "Kernel type and options"
+ 
+ source "kernel/Kconfig.hz"
+@@ -566,10 +570,14 @@ config ARCH_STRICT_ALIGN
+ 	  -mstrict-align build parameter to prevent unaligned accesses.
+ 
+ 	  CPUs with h/w unaligned access support:
+-	  Loongson-2K2000/2K3000/3A5000/3C5000/3D5000.
++	  Loongson-2K2000/2K3000 and all of Loongson-3 series processors
++	  based on LoongArch.
+ 
+ 	  CPUs without h/w unaligned access support:
+-	  Loongson-2K500/2K1000.
++	  Loongson-2K0300/2K0500/2K1000.
++
++	  If you want to check whether your hardware supports unaligned memory
++	  access, please read bit 20 (UAL) of the CPUCFG1 register.
+ 
+ 	  This option is enabled by default to make the kernel be able to run
+ 	  on all LoongArch systems. But you can disable it manually if you want
+diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
+index a3a9759414f40f..ae419e32f22e2f 100644
+--- a/arch/loongarch/Makefile
++++ b/arch/loongarch/Makefile
+@@ -102,16 +102,21 @@ KBUILD_CFLAGS			+= $(call cc-option,-mthin-add-sub) $(call cc-option,-Wa$(comma)
+ 
+ ifdef CONFIG_OBJTOOL
+ ifdef CONFIG_CC_HAS_ANNOTATE_TABLEJUMP
++KBUILD_CFLAGS			+= -mannotate-tablejump
++else
++KBUILD_CFLAGS			+= -fno-jump-tables # keep compatibility with older compilers
++endif
++ifdef CONFIG_RUSTC_HAS_ANNOTATE_TABLEJUMP
++KBUILD_RUSTFLAGS		+= -Cllvm-args=--loongarch-annotate-tablejump
++else
++KBUILD_RUSTFLAGS		+= -Zno-jump-tables # keep compatibility with older compilers
++endif
++ifdef CONFIG_LTO_CLANG
+ # The annotate-tablejump option can not be passed to LLVM backend when LTO is enabled.
+ # Ensure it is aware of linker with LTO, '--loongarch-annotate-tablejump' also needs to
+ # be passed via '-mllvm' to ld.lld.
+-KBUILD_CFLAGS			+= -mannotate-tablejump
+-ifdef CONFIG_LTO_CLANG
+ KBUILD_LDFLAGS			+= -mllvm --loongarch-annotate-tablejump
+ endif
+-else
+-KBUILD_CFLAGS			+= -fno-jump-tables # keep compatibility with older compilers
+-endif
+ endif
+ 
+ KBUILD_RUSTFLAGS		+= --target=loongarch64-unknown-none-softfloat -Ccode-model=small
+diff --git a/arch/loongarch/include/asm/acenv.h b/arch/loongarch/include/asm/acenv.h
+index 52f298f7293bab..483c955f2ae50d 100644
+--- a/arch/loongarch/include/asm/acenv.h
++++ b/arch/loongarch/include/asm/acenv.h
+@@ -10,9 +10,8 @@
+ #ifndef _ASM_LOONGARCH_ACENV_H
+ #define _ASM_LOONGARCH_ACENV_H
+ 
+-/*
+- * This header is required by ACPI core, but we have nothing to fill in
+- * right now. Will be updated later when needed.
+- */
++#ifdef CONFIG_ARCH_STRICT_ALIGN
++#define ACPI_MISALIGNMENT_NOT_SUPPORTED
++#endif /* CONFIG_ARCH_STRICT_ALIGN */
+ 
+ #endif /* _ASM_LOONGARCH_ACENV_H */
+diff --git a/arch/loongarch/include/asm/kvm_mmu.h b/arch/loongarch/include/asm/kvm_mmu.h
+index 099bafc6f797c9..e36cc7e8ed200a 100644
+--- a/arch/loongarch/include/asm/kvm_mmu.h
++++ b/arch/loongarch/include/asm/kvm_mmu.h
+@@ -16,6 +16,13 @@
+  */
+ #define KVM_MMU_CACHE_MIN_PAGES	(CONFIG_PGTABLE_LEVELS - 1)
+ 
++/*
++ * _PAGE_MODIFIED is a SW pte bit. On the host kernel it records that the
++ * page has ever been written; on the secondary MMU it records the page's
++ * writeable attribute, for fast-path handling.
++ */
++#define KVM_PAGE_WRITEABLE	_PAGE_MODIFIED
++
+ #define _KVM_FLUSH_PGTABLE	0x1
+ #define _KVM_HAS_PGMASK		0x2
+ #define kvm_pfn_pte(pfn, prot)	(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+@@ -52,10 +59,10 @@ static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
+ 	WRITE_ONCE(*ptep, val);
+ }
+ 
+-static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
+-static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
+ static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
+ static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
++static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & __WRITEABLE; }
++static inline int kvm_pte_writeable(kvm_pte_t pte) { return pte & KVM_PAGE_WRITEABLE; }
+ 
+ static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
+ {
+@@ -69,12 +76,12 @@ static inline kvm_pte_t kvm_pte_mkold(kvm_pte_t pte)
+ 
+ static inline kvm_pte_t kvm_pte_mkdirty(kvm_pte_t pte)
+ {
+-	return pte | _PAGE_DIRTY;
++	return pte | __WRITEABLE;
+ }
+ 
+ static inline kvm_pte_t kvm_pte_mkclean(kvm_pte_t pte)
+ {
+-	return pte & ~_PAGE_DIRTY;
++	return pte & ~__WRITEABLE;
+ }
+ 
+ static inline kvm_pte_t kvm_pte_mkhuge(kvm_pte_t pte)
+@@ -87,6 +94,11 @@ static inline kvm_pte_t kvm_pte_mksmall(kvm_pte_t pte)
+ 	return pte & ~_PAGE_HUGE;
+ }
+ 
++static inline kvm_pte_t kvm_pte_mkwriteable(kvm_pte_t pte)
++{
++	return pte | KVM_PAGE_WRITEABLE;
++}
++
+ static inline int kvm_need_flush(kvm_ptw_ctx *ctx)
+ {
+ 	return ctx->flag & _KVM_FLUSH_PGTABLE;
+diff --git a/arch/loongarch/kernel/env.c b/arch/loongarch/kernel/env.c
+index c0a5dc9aeae287..be309a71f20491 100644
+--- a/arch/loongarch/kernel/env.c
++++ b/arch/loongarch/kernel/env.c
+@@ -109,6 +109,8 @@ static int __init boardinfo_init(void)
+ 	struct kobject *loongson_kobj;
+ 
+ 	loongson_kobj = kobject_create_and_add("loongson", firmware_kobj);
++	if (!loongson_kobj)
++		return -ENOMEM;
+ 
+ 	return sysfs_create_file(loongson_kobj, &boardinfo_attr.attr);
+ }
+diff --git a/arch/loongarch/kernel/stacktrace.c b/arch/loongarch/kernel/stacktrace.c
+index 9a038d1070d73b..387dc4d3c4868f 100644
+--- a/arch/loongarch/kernel/stacktrace.c
++++ b/arch/loongarch/kernel/stacktrace.c
+@@ -51,12 +51,13 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+ 	if (task == current) {
+ 		regs->regs[3] = (unsigned long)__builtin_frame_address(0);
+ 		regs->csr_era = (unsigned long)__builtin_return_address(0);
++		regs->regs[22] = 0;
+ 	} else {
+ 		regs->regs[3] = thread_saved_fp(task);
+ 		regs->csr_era = thread_saved_ra(task);
++		regs->regs[22] = task->thread.reg22;
+ 	}
+ 	regs->regs[1] = 0;
+-	regs->regs[22] = 0;
+ 
+ 	for (unwind_start(&state, task, regs);
+ 	     !unwind_done(&state) && !unwind_error(&state); unwind_next_frame(&state)) {
+diff --git a/arch/loongarch/kernel/vdso.c b/arch/loongarch/kernel/vdso.c
+index 7b888d9085a014..dee1a15d7f4c77 100644
+--- a/arch/loongarch/kernel/vdso.c
++++ b/arch/loongarch/kernel/vdso.c
+@@ -54,6 +54,9 @@ static int __init init_vdso(void)
+ 	vdso_info.code_mapping.pages =
+ 		kcalloc(vdso_info.size / PAGE_SIZE, sizeof(struct page *), GFP_KERNEL);
+ 
++	if (!vdso_info.code_mapping.pages)
++		return -ENOMEM;
++
+ 	pfn = __phys_to_pfn(__pa_symbol(vdso_info.vdso));
+ 	for (i = 0; i < vdso_info.size / PAGE_SIZE; i++)
+ 		vdso_info.code_mapping.pages[i] = pfn_to_page(pfn + i);
+diff --git a/arch/loongarch/kvm/intc/eiointc.c b/arch/loongarch/kvm/intc/eiointc.c
+index 0207cfe1dbd6c7..8e10393d276dc4 100644
+--- a/arch/loongarch/kvm/intc/eiointc.c
++++ b/arch/loongarch/kvm/intc/eiointc.c
+@@ -810,21 +810,26 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
+ 	struct loongarch_eiointc *s = dev->kvm->arch.eiointc;
+ 
+ 	data = (void __user *)attr->addr;
+-	spin_lock_irqsave(&s->lock, flags);
+ 	switch (type) {
+ 	case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU:
++	case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE:
+ 		if (copy_from_user(&val, data, 4))
+-			ret = -EFAULT;
+-		else {
+-			if (val >= EIOINTC_ROUTE_MAX_VCPUS)
+-				ret = -EINVAL;
+-			else
+-				s->num_cpu = val;
+-		}
++			return -EFAULT;
++		break;
++	default:
++		break;
++	}
++
++	spin_lock_irqsave(&s->lock, flags);
++	switch (type) {
++	case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU:
++		if (val >= EIOINTC_ROUTE_MAX_VCPUS)
++			ret = -EINVAL;
++		else
++			s->num_cpu = val;
+ 		break;
+ 	case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE:
+-		if (copy_from_user(&s->features, data, 4))
+-			ret = -EFAULT;
++		s->features = val;
+ 		if (!(s->features & BIT(EIOINTC_HAS_VIRT_EXTENSION)))
+ 			s->status |= BIT(EIOINTC_ENABLE);
+ 		break;
+@@ -846,19 +851,17 @@ static int kvm_eiointc_ctrl_access(struct kvm_device *dev,
+ 
+ static int kvm_eiointc_regs_access(struct kvm_device *dev,
+ 					struct kvm_device_attr *attr,
+-					bool is_write)
++					bool is_write, int *data)
+ {
+ 	int addr, cpu, offset, ret = 0;
+ 	unsigned long flags;
+ 	void *p = NULL;
+-	void __user *data;
+ 	struct loongarch_eiointc *s;
+ 
+ 	s = dev->kvm->arch.eiointc;
+ 	addr = attr->attr;
+ 	cpu = addr >> 16;
+ 	addr &= 0xffff;
+-	data = (void __user *)attr->addr;
+ 	switch (addr) {
+ 	case EIOINTC_NODETYPE_START ... EIOINTC_NODETYPE_END:
+ 		offset = (addr - EIOINTC_NODETYPE_START) / 4;
+@@ -897,13 +900,10 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
+ 	}
+ 
+ 	spin_lock_irqsave(&s->lock, flags);
+-	if (is_write) {
+-		if (copy_from_user(p, data, 4))
+-			ret = -EFAULT;
+-	} else {
+-		if (copy_to_user(data, p, 4))
+-			ret = -EFAULT;
+-	}
++	if (is_write)
++		memcpy(p, data, 4);
++	else
++		memcpy(data, p, 4);
+ 	spin_unlock_irqrestore(&s->lock, flags);
+ 
+ 	return ret;
+@@ -911,19 +911,17 @@ static int kvm_eiointc_regs_access(struct kvm_device *dev,
+ 
+ static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
+ 					struct kvm_device_attr *attr,
+-					bool is_write)
++					bool is_write, int *data)
+ {
+ 	int addr, ret = 0;
+ 	unsigned long flags;
+ 	void *p = NULL;
+-	void __user *data;
+ 	struct loongarch_eiointc *s;
+ 
+ 	s = dev->kvm->arch.eiointc;
+ 	addr = attr->attr;
+ 	addr &= 0xffff;
+ 
+-	data = (void __user *)attr->addr;
+ 	switch (addr) {
+ 	case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_NUM_CPU:
+ 		if (is_write)
+@@ -945,13 +943,10 @@ static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
+ 		return -EINVAL;
+ 	}
+ 	spin_lock_irqsave(&s->lock, flags);
+-	if (is_write) {
+-		if (copy_from_user(p, data, 4))
+-			ret = -EFAULT;
+-	} else {
+-		if (copy_to_user(data, p, 4))
+-			ret = -EFAULT;
+-	}
++	if (is_write)
++		memcpy(p, data, 4);
++	else
++		memcpy(data, p, 4);
+ 	spin_unlock_irqrestore(&s->lock, flags);
+ 
+ 	return ret;
+@@ -960,11 +955,27 @@ static int kvm_eiointc_sw_status_access(struct kvm_device *dev,
+ static int kvm_eiointc_get_attr(struct kvm_device *dev,
+ 				struct kvm_device_attr *attr)
+ {
++	int ret, data;
++
+ 	switch (attr->group) {
+ 	case KVM_DEV_LOONGARCH_EXTIOI_GRP_REGS:
+-		return kvm_eiointc_regs_access(dev, attr, false);
++		ret = kvm_eiointc_regs_access(dev, attr, false, &data);
++		if (ret)
++			return ret;
++
++		if (copy_to_user((void __user *)attr->addr, &data, 4))
++			ret = -EFAULT;
++
++		return ret;
+ 	case KVM_DEV_LOONGARCH_EXTIOI_GRP_SW_STATUS:
+-		return kvm_eiointc_sw_status_access(dev, attr, false);
++		ret = kvm_eiointc_sw_status_access(dev, attr, false, &data);
++		if (ret)
++			return ret;
++
++		if (copy_to_user((void __user *)attr->addr, &data, 4))
++			ret = -EFAULT;
++
++		return ret;
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -973,13 +984,21 @@ static int kvm_eiointc_get_attr(struct kvm_device *dev,
+ static int kvm_eiointc_set_attr(struct kvm_device *dev,
+ 				struct kvm_device_attr *attr)
+ {
++	int data;
++
+ 	switch (attr->group) {
+ 	case KVM_DEV_LOONGARCH_EXTIOI_GRP_CTRL:
+ 		return kvm_eiointc_ctrl_access(dev, attr);
+ 	case KVM_DEV_LOONGARCH_EXTIOI_GRP_REGS:
+-		return kvm_eiointc_regs_access(dev, attr, true);
++		if (copy_from_user(&data, (void __user *)attr->addr, 4))
++			return -EFAULT;
++
++		return kvm_eiointc_regs_access(dev, attr, true, &data);
+ 	case KVM_DEV_LOONGARCH_EXTIOI_GRP_SW_STATUS:
+-		return kvm_eiointc_sw_status_access(dev, attr, true);
++		if (copy_from_user(&data, (void __user *)attr->addr, 4))
++			return -EFAULT;
++
++		return kvm_eiointc_sw_status_access(dev, attr, true, &data);
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/arch/loongarch/kvm/intc/pch_pic.c b/arch/loongarch/kvm/intc/pch_pic.c
+index ef5044796b7a6e..52f19dcf6b8814 100644
+--- a/arch/loongarch/kvm/intc/pch_pic.c
++++ b/arch/loongarch/kvm/intc/pch_pic.c
+@@ -348,6 +348,7 @@ static int kvm_pch_pic_regs_access(struct kvm_device *dev,
+ 				struct kvm_device_attr *attr,
+ 				bool is_write)
+ {
++	char buf[8];
+ 	int addr, offset, len = 8, ret = 0;
+ 	void __user *data;
+ 	void *p = NULL;
+@@ -397,17 +398,23 @@ static int kvm_pch_pic_regs_access(struct kvm_device *dev,
+ 		return -EINVAL;
+ 	}
+ 
+-	spin_lock(&s->lock);
+-	/* write or read value according to is_write */
+ 	if (is_write) {
+-		if (copy_from_user(p, data, len))
+-			ret = -EFAULT;
+-	} else {
+-		if (copy_to_user(data, p, len))
+-			ret = -EFAULT;
++		if (copy_from_user(buf, data, len))
++			return -EFAULT;
+ 	}
++
++	spin_lock(&s->lock);
++	if (is_write)
++		memcpy(p, buf, len);
++	else
++		memcpy(buf, p, len);
+ 	spin_unlock(&s->lock);
+ 
++	if (!is_write) {
++		if (copy_to_user(data, buf, len))
++			return -EFAULT;
++	}
++
+ 	return ret;
+ }
+ 
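Both the eiointc hunks above and this pch_pic hunk apply the same fix: copy_from_user()/copy_to_user() can fault and sleep, so they may not run inside a spinlock critical section (in the eiointc case one taken with IRQs disabled). The data is staged in a local buffer first, and only plain memory copies happen under the lock. A minimal sketch of the pattern, with a hypothetical device struct standing in for the interrupt-controller state:

  #include <linux/spinlock.h>
  #include <linux/uaccess.h>

  struct dev_state {		/* hypothetical controller state */
  	spinlock_t lock;
  	u32 reg;
  };

  /*
   * copy_from_user() may fault and sleep, so it runs before the spinlock
   * is taken; only a plain assignment happens under the lock. This is
   * the shape both fixes above converge on.
   */
  static int dev_reg_write(struct dev_state *s, const void __user *uptr)
  {
  	u32 val;

  	if (copy_from_user(&val, uptr, sizeof(val)))
  		return -EFAULT;

  	spin_lock(&s->lock);
  	s->reg = val;
  	spin_unlock(&s->lock);

  	return 0;
  }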
+diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
+index ed956c5cf2cc04..7c8143e79c1279 100644
+--- a/arch/loongarch/kvm/mmu.c
++++ b/arch/loongarch/kvm/mmu.c
+@@ -569,7 +569,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
+ 	/* Track access to pages marked old */
+ 	new = kvm_pte_mkyoung(*ptep);
+ 	if (write && !kvm_pte_dirty(new)) {
+-		if (!kvm_pte_write(new)) {
++		if (!kvm_pte_writeable(new)) {
+ 			ret = -EFAULT;
+ 			goto out;
+ 		}
+@@ -856,9 +856,9 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
+ 		prot_bits |= _CACHE_SUC;
+ 
+ 	if (writeable) {
+-		prot_bits |= _PAGE_WRITE;
++		prot_bits = kvm_pte_mkwriteable(prot_bits);
+ 		if (write)
+-			prot_bits |= __WRITEABLE;
++			prot_bits = kvm_pte_mkdirty(prot_bits);
+ 	}
+ 
+ 	/* Disable dirty logging on HugePages */
+@@ -904,7 +904,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
+ 	kvm_release_faultin_page(kvm, page, false, writeable);
+ 	spin_unlock(&kvm->mmu_lock);
+ 
+-	if (prot_bits & _PAGE_DIRTY)
++	if (kvm_pte_dirty(prot_bits))
+ 		mark_page_dirty_in_slot(kvm, memslot, gfn);
+ 
+ out:
+diff --git a/arch/s390/include/asm/pci_insn.h b/arch/s390/include/asm/pci_insn.h
+index e5f57cfe1d4582..025c6dcbf89331 100644
+--- a/arch/s390/include/asm/pci_insn.h
++++ b/arch/s390/include/asm/pci_insn.h
+@@ -16,11 +16,11 @@
+ #define ZPCI_PCI_ST_FUNC_NOT_AVAIL		40
+ #define ZPCI_PCI_ST_ALREADY_IN_RQ_STATE		44
+ 
+-/* Load/Store return codes */
+-#define ZPCI_PCI_LS_OK				0
+-#define ZPCI_PCI_LS_ERR				1
+-#define ZPCI_PCI_LS_BUSY			2
+-#define ZPCI_PCI_LS_INVAL_HANDLE		3
++/* PCI instruction condition codes */
++#define ZPCI_CC_OK				0
++#define ZPCI_CC_ERR				1
++#define ZPCI_CC_BUSY				2
++#define ZPCI_CC_INVAL_HANDLE			3
+ 
+ /* Load/Store address space identifiers */
+ #define ZPCI_PCIAS_MEMIO_0			0
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index ad8d78fb1d9aaf..de7867ae220d0c 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1250,10 +1250,12 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ 	device_set_wakeup_capable(&vu_dev->vdev.dev, true);
+ 
+ 	rc = register_virtio_device(&vu_dev->vdev);
+-	if (rc)
++	if (rc) {
+ 		put_device(&vu_dev->vdev.dev);
++		return rc;
++	}
+ 	vu_dev->registered = 1;
+-	return rc;
++	return 0;
+ 
+ error_init:
+ 	os_close_file(vu_dev->sock);
+diff --git a/arch/um/os-Linux/file.c b/arch/um/os-Linux/file.c
+index 617886d1fb1e91..21f0e50fb1df95 100644
+--- a/arch/um/os-Linux/file.c
++++ b/arch/um/os-Linux/file.c
+@@ -535,7 +535,7 @@ ssize_t os_rcv_fd_msg(int fd, int *fds, unsigned int n_fds,
+ 	    cmsg->cmsg_type != SCM_RIGHTS)
+ 		return n;
+ 
+-	memcpy(fds, CMSG_DATA(cmsg), cmsg->cmsg_len);
++	memcpy(fds, CMSG_DATA(cmsg), cmsg->cmsg_len - CMSG_LEN(0));
+ 	return n;
+ }
+ 
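The os_rcv_fd_msg() fix is about cmsg accounting: cmsg->cmsg_len includes the control-message header, and CMSG_LEN(0) is exactly that header's size, so the SCM_RIGHTS payload is cmsg_len - CMSG_LEN(0) bytes; copying cmsg_len bytes overruns the destination fd array. A small userspace sketch of the arithmetic (standard <sys/socket.h> macros; the helper name is hypothetical):

  #include <string.h>
  #include <sys/socket.h>

  /*
   * cmsg_len counts header + payload; CMSG_LEN(0) is the header alone,
   * so the difference is the size of the fd array carried by SCM_RIGHTS.
   */
  static size_t scm_rights_nfds(struct cmsghdr *cmsg, int *fds, size_t max_bytes)
  {
  	size_t payload = cmsg->cmsg_len - CMSG_LEN(0);

  	if (payload > max_bytes)
  		payload = max_bytes;
  	memcpy(fds, CMSG_DATA(cmsg), payload);
  	return payload / sizeof(int);	/* number of descriptors received */
  }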
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index 14d7e0719dd5ee..8d0dfd20ac2823 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -564,6 +564,24 @@ enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+ 
+ extern struct ghcb *boot_ghcb;
+ 
++static inline void sev_evict_cache(void *va, int npages)
++{
++	volatile u8 val __always_unused;
++	u8 *bytes = va;
++	int page_idx;
++
++	/*
++	 * For SEV guests, a read from the first/last cache-lines of a 4K page
++	 * using the guest key is sufficient to cause a flush of all cache-lines
++	 * associated with that 4K page without incurring all the overhead of a
++	 * full CLFLUSH sequence.
++	 */
++	for (page_idx = 0; page_idx < npages; page_idx++) {
++		val = bytes[page_idx * PAGE_SIZE];
++		val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
++	}
++}
++
+ #else	/* !CONFIG_AMD_MEM_ENCRYPT */
+ 
+ #define snp_vmpl 0
+@@ -607,6 +625,7 @@ static inline int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_
+ static inline int snp_svsm_vtpm_send_command(u8 *buffer) { return -ENODEV; }
+ static inline void __init snp_secure_tsc_prepare(void) { }
+ static inline void __init snp_secure_tsc_init(void) { }
++static inline void sev_evict_cache(void *va, int npages) {}
+ 
+ #endif	/* CONFIG_AMD_MEM_ENCRYPT */
+ 
+@@ -621,24 +640,6 @@ int rmp_make_shared(u64 pfn, enum pg_level level);
+ void snp_leak_pages(u64 pfn, unsigned int npages);
+ void kdump_sev_callback(void);
+ void snp_fixup_e820_tables(void);
+-
+-static inline void sev_evict_cache(void *va, int npages)
+-{
+-	volatile u8 val __always_unused;
+-	u8 *bytes = va;
+-	int page_idx;
+-
+-	/*
+-	 * For SEV guests, a read from the first/last cache-lines of a 4K page
+-	 * using the guest key is sufficient to cause a flush of all cache-lines
+-	 * associated with that 4K page without incurring all the overhead of a
+-	 * full CLFLUSH sequence.
+-	 */
+-	for (page_idx = 0; page_idx < npages; page_idx++) {
+-		val = bytes[page_idx * PAGE_SIZE];
+-		val = bytes[page_idx * PAGE_SIZE + PAGE_SIZE - 1];
+-	}
+-}
+ #else
+ static inline bool snp_probe_rmptable_info(void) { return false; }
+ static inline int snp_rmptable_init(void) { return -ENOSYS; }
+@@ -654,7 +655,6 @@ static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENODEV
+ static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
+ static inline void kdump_sev_callback(void) { }
+ static inline void snp_fixup_e820_tables(void) {}
+-static inline void sev_evict_cache(void *va, int npages) {}
+ #endif
+ 
+ #endif
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index be8c43049f4d39..268d62fa28b824 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4204,8 +4204,7 @@ static inline void sync_lapic_to_cr8(struct kvm_vcpu *vcpu)
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 	u64 cr8;
+ 
+-	if (nested_svm_virtualize_tpr(vcpu) ||
+-	    kvm_vcpu_apicv_active(vcpu))
++	if (nested_svm_virtualize_tpr(vcpu))
+ 		return;
+ 
+ 	cr8 = kvm_get_cr8(vcpu);
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 0da7c1ac778a0e..ca6fdcc6c54aca 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -970,6 +970,12 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 	}
+ 
+ 	lock_sock(sk);
++	if (ctx->write) {
++		release_sock(sk);
++		return -EBUSY;
++	}
++	ctx->write = true;
++
+ 	if (ctx->init && !ctx->more) {
+ 		if (ctx->used) {
+ 			err = -EINVAL;
+@@ -1019,6 +1025,8 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 			continue;
+ 		}
+ 
++		ctx->merge = 0;
++
+ 		if (!af_alg_writable(sk)) {
+ 			err = af_alg_wait_for_wmem(sk, msg->msg_flags);
+ 			if (err)
+@@ -1058,7 +1066,6 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 			ctx->used += plen;
+ 			copied += plen;
+ 			size -= plen;
+-			ctx->merge = 0;
+ 		} else {
+ 			do {
+ 				struct page *pg;
+@@ -1104,6 +1111,7 @@ int af_alg_sendmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ 
+ unlock:
+ 	af_alg_data_wakeup(sk);
++	ctx->write = false;
+ 	release_sock(sk);
+ 
+ 	return copied ?: err;
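The af_alg change serializes concurrent sendmsg() callers with a plain busy flag: the flag is only read and written while the socket lock is held, so no atomics are needed, and a second sender fails fast with -EBUSY instead of interleaving its scatterlist updates with the first. A reduced sketch of that discipline (my_ctx and do_send() are hypothetical stand-ins):

  #include <net/sock.h>

  struct my_ctx {
  	bool write;	/* true while a sender owns the context */
  };

  static int do_send(struct sock *sk, struct my_ctx *ctx);

  static int ctx_send(struct sock *sk, struct my_ctx *ctx)
  {
  	int err;

  	lock_sock(sk);
  	if (ctx->write) {		/* another sender is in flight */
  		release_sock(sk);
  		return -EBUSY;
  	}
  	ctx->write = true;

  	err = do_send(sk, ctx);		/* may sleep */

  	ctx->write = false;		/* cleared before release, as above */
  	release_sock(sk);
  	return err;
  }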
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 54c57103715f9b..0f7bcc86b6f724 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1794,6 +1794,7 @@ static int write_same_filled_page(struct zram *zram, unsigned long fill,
+ 				  u32 index)
+ {
+ 	zram_slot_lock(zram, index);
++	zram_free_page(zram, index);
+ 	zram_set_flag(zram, index, ZRAM_SAME);
+ 	zram_set_handle(zram, index, fill);
+ 	zram_slot_unlock(zram, index);
+@@ -1831,6 +1832,7 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
+ 	kunmap_local(src);
+ 
+ 	zram_slot_lock(zram, index);
++	zram_free_page(zram, index);
+ 	zram_set_flag(zram, index, ZRAM_HUGE);
+ 	zram_set_handle(zram, index, handle);
+ 	zram_set_obj_size(zram, index, PAGE_SIZE);
+@@ -1854,11 +1856,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
+ 	unsigned long element;
+ 	bool same_filled;
+ 
+-	/* First, free memory allocated to this slot (if any) */
+-	zram_slot_lock(zram, index);
+-	zram_free_page(zram, index);
+-	zram_slot_unlock(zram, index);
+-
+ 	mem = kmap_local_page(page);
+ 	same_filled = page_same_filled(mem, &element);
+ 	kunmap_local(mem);
+@@ -1900,6 +1897,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
+ 	zcomp_stream_put(zstrm);
+ 
+ 	zram_slot_lock(zram, index);
++	zram_free_page(zram, index);
+ 	zram_set_handle(zram, index, handle);
+ 	zram_set_obj_size(zram, index, comp_len);
+ 	zram_slot_unlock(zram, index);
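The three zram hunks move zram_free_page() out of its own lock/unlock window in zram_write_page() and into each writer's slot-locked section, so dropping the old slot contents and installing the new handle become one atomic step under the slot lock; no reader can observe a freed-but-not-yet-rewritten slot. The shape of the resulting critical section, reduced to its essentials (the helper name is hypothetical, the zram_* calls are the ones quoted above):

  /* Free the old contents and install the new handle under one lock. */
  static void slot_replace(struct zram *zram, u32 index, unsigned long handle,
  			 unsigned int size)
  {
  	zram_slot_lock(zram, index);
  	zram_free_page(zram, index);		/* old handle/flags dropped ... */
  	zram_set_handle(zram, index, handle);	/* ... new state installed     */
  	zram_set_obj_size(zram, index, size);	/* before the lock is released */
  	zram_slot_unlock(zram, index);
  }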
+diff --git a/drivers/clk/sunxi-ng/ccu_mp.c b/drivers/clk/sunxi-ng/ccu_mp.c
+index 354c981943b6f8..4221b1888b38da 100644
+--- a/drivers/clk/sunxi-ng/ccu_mp.c
++++ b/drivers/clk/sunxi-ng/ccu_mp.c
+@@ -185,7 +185,7 @@ static unsigned long ccu_mp_recalc_rate(struct clk_hw *hw,
+ 	p &= (1 << cmp->p.width) - 1;
+ 
+ 	if (cmp->common.features & CCU_FEATURE_DUAL_DIV)
+-		rate = (parent_rate / p) / m;
++		rate = (parent_rate / (p + cmp->p.offset)) / m;
+ 	else
+ 		rate = (parent_rate >> p) / m;
+ 
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index e058ba02779296..9f5ccc1720cbc1 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -2430,7 +2430,7 @@ static void __sev_firmware_shutdown(struct sev_device *sev, bool panic)
+ {
+ 	int error;
+ 
+-	__sev_platform_shutdown_locked(NULL);
++	__sev_platform_shutdown_locked(&error);
+ 
+ 	if (sev_es_tmr) {
+ 		/*
+diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
+index c130f87147fa3f..815de752bcd384 100644
+--- a/drivers/dpll/dpll_netlink.c
++++ b/drivers/dpll/dpll_netlink.c
+@@ -173,8 +173,8 @@ static int
+ dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
+ 				 struct netlink_ext_ack *extack)
+ {
++	DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1) = { 0 };
+ 	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
+-	DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX) = { 0 };
+ 	enum dpll_clock_quality_level ql;
+ 	int ret;
+ 
+@@ -183,7 +183,7 @@ dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
+ 	ret = ops->clock_quality_level_get(dpll, dpll_priv(dpll), qls, extack);
+ 	if (ret)
+ 		return ret;
+-	for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX)
++	for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1)
+ 		if (nla_put_u32(msg, DPLL_A_CLOCK_QUALITY_LEVEL, ql))
+ 			return -EMSGSIZE;
+ 
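The dpll fix is an off-by-one in bitmap sizing: DECLARE_BITMAP() and for_each_set_bit() both take a bit count, and an enum whose largest valid value is MAX needs MAX + 1 bits to represent 0..MAX. A minimal sketch with a hypothetical enum:

  #include <linux/bitmap.h>
  #include <linux/bitops.h>

  enum quality { Q_A, Q_B, Q_C, Q_MAX = Q_C };	/* hypothetical */

  static void report(void (*emit)(int))
  {
  	DECLARE_BITMAP(qls, Q_MAX + 1) = { 0 };	/* bits 0 .. Q_MAX */
  	int q;

  	set_bit(Q_MAX, qls);	/* in range only because the map has Q_MAX + 1 bits */
  	for_each_set_bit(q, qls, Q_MAX + 1)	/* bit count, not last index */
  		emit(q);
  }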
+diff --git a/drivers/gpio/gpiolib-acpi-core.c b/drivers/gpio/gpiolib-acpi-core.c
+index 12b24a717e43f1..d11bcaf1ae8842 100644
+--- a/drivers/gpio/gpiolib-acpi-core.c
++++ b/drivers/gpio/gpiolib-acpi-core.c
+@@ -942,7 +942,7 @@ struct gpio_desc *acpi_find_gpio(struct fwnode_handle *fwnode,
+ {
+ 	struct acpi_device *adev = to_acpi_device_node(fwnode);
+ 	bool can_fallback = acpi_can_fallback_to_crs(adev, con_id);
+-	struct acpi_gpio_info info;
++	struct acpi_gpio_info info = {};
+ 	struct gpio_desc *desc;
+ 
+ 	desc = __acpi_find_gpio(fwnode, con_id, idx, can_fallback, &info);
+@@ -992,7 +992,7 @@ int acpi_dev_gpio_irq_wake_get_by(struct acpi_device *adev, const char *con_id,
+ 	int ret;
+ 
+ 	for (i = 0, idx = 0; idx <= index; i++) {
+-		struct acpi_gpio_info info;
++		struct acpi_gpio_info info = {};
+ 		struct gpio_desc *desc;
+ 
+ 		/* Ignore -EPROBE_DEFER, it only matters if idx matches */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index fe282b85573414..31010040a12f04 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -250,16 +250,24 @@ void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
+ 
+ void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool suspend_proc)
+ {
+-	if (adev->kfd.dev)
+-		kgd2kfd_suspend(adev->kfd.dev, suspend_proc);
++	if (adev->kfd.dev) {
++		if (adev->in_s0ix)
++			kgd2kfd_stop_sched_all_nodes(adev->kfd.dev);
++		else
++			kgd2kfd_suspend(adev->kfd.dev, suspend_proc);
++	}
+ }
+ 
+ int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool resume_proc)
+ {
+ 	int r = 0;
+ 
+-	if (adev->kfd.dev)
+-		r = kgd2kfd_resume(adev->kfd.dev, resume_proc);
++	if (adev->kfd.dev) {
++		if (adev->in_s0ix)
++			r = kgd2kfd_start_sched_all_nodes(adev->kfd.dev);
++		else
++			r = kgd2kfd_resume(adev->kfd.dev, resume_proc);
++	}
+ 
+ 	return r;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index b7c3ec48340721..861697490ac2e3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -426,7 +426,9 @@ void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint64_t throttle_bitmask);
+ int kgd2kfd_check_and_lock_kfd(void);
+ void kgd2kfd_unlock_kfd(void);
+ int kgd2kfd_start_sched(struct kfd_dev *kfd, uint32_t node_id);
++int kgd2kfd_start_sched_all_nodes(struct kfd_dev *kfd);
+ int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id);
++int kgd2kfd_stop_sched_all_nodes(struct kfd_dev *kfd);
+ bool kgd2kfd_compute_active(struct kfd_dev *kfd, uint32_t node_id);
+ bool kgd2kfd_vmfault_fast_path(struct amdgpu_device *adev, struct amdgpu_iv_entry *entry,
+ 			       bool retry_fault);
+@@ -516,11 +518,21 @@ static inline int kgd2kfd_start_sched(struct kfd_dev *kfd, uint32_t node_id)
+ 	return 0;
+ }
+ 
++static inline int kgd2kfd_start_sched_all_nodes(struct kfd_dev *kfd)
++{
++	return 0;
++}
++
+ static inline int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id)
+ {
+ 	return 0;
+ }
+ 
++static inline int kgd2kfd_stop_sched_all_nodes(struct kfd_dev *kfd)
++{
++	return 0;
++}
++
+ static inline bool kgd2kfd_compute_active(struct kfd_dev *kfd, uint32_t node_id)
+ {
+ 	return false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index a57e8c5474bb00..a65591f70b15d8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5055,7 +5055,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
+ 	adev->in_suspend = true;
+ 
+ 	if (amdgpu_sriov_vf(adev)) {
+-		if (!adev->in_s0ix && !adev->in_runpm)
++		if (!adev->in_runpm)
+ 			amdgpu_amdkfd_suspend_process(adev);
+ 		amdgpu_virt_fini_data_exchange(adev);
+ 		r = amdgpu_virt_request_full_gpu(adev, false);
+@@ -5075,10 +5075,8 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
+ 
+ 	amdgpu_device_ip_suspend_phase1(adev);
+ 
+-	if (!adev->in_s0ix) {
+-		amdgpu_amdkfd_suspend(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
+-		amdgpu_userq_suspend(adev);
+-	}
++	amdgpu_amdkfd_suspend(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
++	amdgpu_userq_suspend(adev);
+ 
+ 	r = amdgpu_device_evict_resources(adev);
+ 	if (r)
+@@ -5141,15 +5139,13 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
+ 		goto exit;
+ 	}
+ 
+-	if (!adev->in_s0ix) {
+-		r = amdgpu_amdkfd_resume(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
+-		if (r)
+-			goto exit;
++	r = amdgpu_amdkfd_resume(adev, !amdgpu_sriov_vf(adev) && !adev->in_runpm);
++	if (r)
++		goto exit;
+ 
+-		r = amdgpu_userq_resume(adev);
+-		if (r)
+-			goto exit;
+-	}
++	r = amdgpu_userq_resume(adev);
++	if (r)
++		goto exit;
+ 
+ 	r = amdgpu_device_ip_late_init(adev);
+ 	if (r)
+@@ -5162,7 +5158,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
+ 		amdgpu_virt_init_data_exchange(adev);
+ 		amdgpu_virt_release_full_gpu(adev, true);
+ 
+-		if (!adev->in_s0ix && !r && !adev->in_runpm)
++		if (!r && !adev->in_runpm)
+ 			r = amdgpu_amdkfd_resume_process(adev);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 097bf675378273..f512879cb71c65 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -1501,6 +1501,25 @@ int kgd2kfd_start_sched(struct kfd_dev *kfd, uint32_t node_id)
+ 	return ret;
+ }
+ 
++int kgd2kfd_start_sched_all_nodes(struct kfd_dev *kfd)
++{
++	struct kfd_node *node;
++	int i, r;
++
++	if (!kfd->init_complete)
++		return 0;
++
++	for (i = 0; i < kfd->num_nodes; i++) {
++		node = kfd->nodes[i];
++		r = node->dqm->ops.unhalt(node->dqm);
++		if (r) {
++			dev_err(kfd_device, "Error in starting scheduler\n");
++			return r;
++		}
++	}
++	return 0;
++}
++
+ int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id)
+ {
+ 	struct kfd_node *node;
+@@ -1518,6 +1537,23 @@ int kgd2kfd_stop_sched(struct kfd_dev *kfd, uint32_t node_id)
+ 	return node->dqm->ops.halt(node->dqm);
+ }
+ 
++int kgd2kfd_stop_sched_all_nodes(struct kfd_dev *kfd)
++{
++	struct kfd_node *node;
++	int i, r;
++
++	if (!kfd->init_complete)
++		return 0;
++
++	for (i = 0; i < kfd->num_nodes; i++) {
++		node = kfd->nodes[i];
++		r = node->dqm->ops.halt(node->dqm);
++		if (r)
++			return r;
++	}
++	return 0;
++}
++
+ bool kgd2kfd_compute_active(struct kfd_dev *kfd, uint32_t node_id)
+ {
+ 	struct kfd_node *node;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 312f6075e39d11..58ea351dd48b5d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8689,7 +8689,16 @@ static int amdgpu_dm_encoder_init(struct drm_device *dev,
+ static void manage_dm_interrupts(struct amdgpu_device *adev,
+ 				 struct amdgpu_crtc *acrtc,
+ 				 struct dm_crtc_state *acrtc_state)
+-{
++{	/*
++	 * We cannot be sure that the frontend index maps to the same
++	 * backend index - some even map to more than one.
++	 * So we have to go through the CRTC to find the right IRQ.
++	 */
++	int irq_type = amdgpu_display_crtc_idx_to_irq_type(
++			adev,
++			acrtc->crtc_id);
++	struct drm_device *dev = adev_to_drm(adev);
++
+ 	struct drm_vblank_crtc_config config = {0};
+ 	struct dc_crtc_timing *timing;
+ 	int offdelay;
+@@ -8742,7 +8751,35 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+ 
+ 		drm_crtc_vblank_on_config(&acrtc->base,
+ 					  &config);
++		/* Allow RX6xxx, RX7700, RX7800 GPUs to call amdgpu_irq_get.*/
++		switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
++		case IP_VERSION(3, 0, 0):
++		case IP_VERSION(3, 0, 2):
++		case IP_VERSION(3, 0, 3):
++		case IP_VERSION(3, 2, 0):
++			if (amdgpu_irq_get(adev, &adev->pageflip_irq, irq_type))
++				drm_err(dev, "DM_IRQ: Cannot get pageflip irq!\n");
++#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
++			if (amdgpu_irq_get(adev, &adev->vline0_irq, irq_type))
++				drm_err(dev, "DM_IRQ: Cannot get vline0 irq!\n");
++#endif
++		}
++
+ 	} else {
++		/* Allow RX6xxx, RX7700, RX7800 GPUs to call amdgpu_irq_put.*/
++		switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
++		case IP_VERSION(3, 0, 0):
++		case IP_VERSION(3, 0, 2):
++		case IP_VERSION(3, 0, 3):
++		case IP_VERSION(3, 2, 0):
++#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
++			if (amdgpu_irq_put(adev, &adev->vline0_irq, irq_type))
++				drm_err(dev, "DM_IRQ: Cannot put vline0 irq!\n");
++#endif
++			if (amdgpu_irq_put(adev, &adev->pageflip_irq, irq_type))
++				drm_err(dev, "DM_IRQ: Cannot put pageflip irq!\n");
++		}
++
+ 		drm_crtc_vblank_off(&acrtc->base);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 26b8e232f85825..2278a123db23f9 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2185,7 +2185,7 @@ static int smu_resume(struct amdgpu_ip_block *ip_block)
+ 			return ret;
+ 	}
+ 
+-	if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) {
++	if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL && smu->od_enabled) {
+ 		ret = smu_od_edit_dpm_table(smu, PP_OD_COMMIT_DPM_TABLE, NULL, 0);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 8a9079c2ed5c22..8257132a8ee9d2 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -2678,7 +2678,7 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 		ret = devm_request_threaded_irq(dev, platform->pdata.intp_irq,
+ 						NULL, anx7625_intr_hpd_isr,
+ 						IRQF_TRIGGER_FALLING |
+-						IRQF_ONESHOT,
++						IRQF_ONESHOT | IRQF_NO_AUTOEN,
+ 						"anx7625-intp", platform);
+ 		if (ret) {
+ 			DRM_DEV_ERROR(dev, "fail to request irq\n");
+@@ -2747,8 +2747,10 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	}
+ 
+ 	/* Add work function */
+-	if (platform->pdata.intp_irq)
++	if (platform->pdata.intp_irq) {
++		enable_irq(platform->pdata.intp_irq);
+ 		queue_work(platform->workqueue, &platform->work);
++	}
+ 
+ 	if (platform->pdata.audio_en)
+ 		anx7625_register_audio(dev, platform);
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+index b431e7efd1f0d7..dbef0ca1a22a38 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+@@ -1984,8 +1984,10 @@ static void cdns_mhdp_atomic_enable(struct drm_bridge *bridge,
+ 	mhdp_state = to_cdns_mhdp_bridge_state(new_state);
+ 
+ 	mhdp_state->current_mode = drm_mode_duplicate(bridge->dev, mode);
+-	if (!mhdp_state->current_mode)
+-		return;
++	if (!mhdp_state->current_mode) {
++		ret = -EINVAL;
++		goto out;
++	}
+ 
+ 	drm_mode_set_name(mhdp_state->current_mode);
+ 
+diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+index 448afb86e05c7d..4d9896e14649c0 100644
+--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+@@ -117,6 +117,7 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_ENTER_S_STATE = 0x501,
+ 	XE_GUC_ACTION_EXIT_S_STATE = 0x502,
+ 	XE_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
++	XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV = 0x509,
+ 	XE_GUC_ACTION_SCHED_CONTEXT = 0x1000,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
+@@ -142,6 +143,7 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
+ 	XE_GUC_ACTION_SET_DEVICE_ENGINE_ACTIVITY_BUFFER = 0x550C,
+ 	XE_GUC_ACTION_SET_FUNCTION_ENGINE_ACTIVITY_BUFFER = 0x550D,
++	XE_GUC_ACTION_OPT_IN_FEATURE_KLV = 0x550E,
+ 	XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR = 0x6000,
+ 	XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC = 0x6002,
+ 	XE_GUC_ACTION_PAGE_FAULT_RES_DESC = 0x6003,
+@@ -240,4 +242,7 @@ enum xe_guc_g2g_type {
+ #define XE_G2G_DEREGISTER_TILE	REG_GENMASK(15, 12)
+ #define XE_G2G_DEREGISTER_TYPE	REG_GENMASK(11, 8)
+ 
++/* invalid type for XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR */
++#define XE_GUC_CAT_ERR_TYPE_INVALID 0xdeadbeef
++
+ #endif
+diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+index 7de8f827281fcd..89034bc97ec5a4 100644
+--- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+@@ -16,6 +16,8 @@
+  *  +===+=======+==============================================================+
+  *  | 0 | 31:16 | **KEY** - KLV key identifier                                 |
+  *  |   |       |   - `GuC Self Config KLVs`_                                  |
++ *  |   |       |   - `GuC Opt In Feature KLVs`_                               |
++ *  |   |       |   - `GuC Scheduling Policies KLVs`_                          |
+  *  |   |       |   - `GuC VGT Policy KLVs`_                                   |
+  *  |   |       |   - `GuC VF Configuration KLVs`_                             |
+  *  |   |       |                                                              |
+@@ -124,6 +126,44 @@ enum  {
+ 	GUC_CONTEXT_POLICIES_KLV_NUM_IDS = 5,
+ };
+ 
++/**
++ * DOC: GuC Opt In Feature KLVs
++ *
++ * `GuC KLV`_ keys available for use with OPT_IN_FEATURE_KLV
++ *
++ *  _`GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE` : 0x4001
++ *      Adds an extra dword to the XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR G2H
++ *      containing the type of the CAT error. On HW that does not support
++ *      reporting the CAT error type, the extra dword is set to 0xdeadbeef.
++ */
++
++#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_KEY 0x4001
++#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_LEN 0u
++
++/**
++ * DOC: GuC Scheduling Policies KLVs
++ *
++ * `GuC KLV`_ keys available for use with UPDATE_SCHEDULING_POLICIES_KLV.
++ *
++ * _`GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD` : 0x1001
++ *      Some platforms do not allow concurrent execution of RCS and CCS
++ *      workloads from different address spaces. By default, the GuC prioritizes
++ *      RCS submissions over CCS ones, which can lead to CCS workloads being
++ *      significantly (or completely) starved of execution time. This KLV allows
++ *      the driver to specify a quantum (in ms) and a ratio (percentage value
++ *      between 0 and 100), and the GuC will prioritize the CCS for that
++ *      percentage of each quantum. For example, specifying 100ms and 30% will
++ *      make the GuC prioritize the CCS for 30ms of every 100ms.
++ *      Note that this does not necessarily mean that RCS and CCS engines will
++ *      only be active for their percentage of the quantum, as the restriction
++ *      only kicks in if both classes are fully busy with non-compatible address
++ *      spaces; i.e., if one engine is idle or running the same address space,
++ *      a pending job on the other engine will still be submitted to the HW no
++ *      matter what the ratio is.
++ */
++#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_KEY	0x1001
++#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_LEN	2u
++
+ /**
+  * DOC: GuC VGT Policy KLVs
+  *
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index fee22358cc09be..e5116975961461 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -151,6 +151,16 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
+ 	return err;
+ }
+ 
++static void __xe_exec_queue_fini(struct xe_exec_queue *q)
++{
++	int i;
++
++	q->ops->fini(q);
++
++	for (i = 0; i < q->width; ++i)
++		xe_lrc_put(q->lrc[i]);
++}
++
+ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *vm,
+ 					   u32 logical_mask, u16 width,
+ 					   struct xe_hw_engine *hwe, u32 flags,
+@@ -181,11 +191,13 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
+ 	if (xe_exec_queue_uses_pxp(q)) {
+ 		err = xe_pxp_exec_queue_add(xe->pxp, q);
+ 		if (err)
+-			goto err_post_alloc;
++			goto err_post_init;
+ 	}
+ 
+ 	return q;
+ 
++err_post_init:
++	__xe_exec_queue_fini(q);
+ err_post_alloc:
+ 	__xe_exec_queue_free(q);
+ 	return ERR_PTR(err);
+@@ -283,13 +295,11 @@ void xe_exec_queue_destroy(struct kref *ref)
+ 			xe_exec_queue_put(eq);
+ 	}
+ 
+-	q->ops->fini(q);
++	q->ops->destroy(q);
+ }
+ 
+ void xe_exec_queue_fini(struct xe_exec_queue *q)
+ {
+-	int i;
+-
+ 	/*
+ 	 * Before releasing our ref to lrc and xef, accumulate our run ticks
+ 	 * and wakeup any waiters.
+@@ -298,9 +308,7 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
+ 	if (q->xef && atomic_dec_and_test(&q->xef->exec_queue.pending_removal))
+ 		wake_up_var(&q->xef->exec_queue.pending_removal);
+ 
+-	for (i = 0; i < q->width; ++i)
+-		xe_lrc_put(q->lrc[i]);
+-
++	__xe_exec_queue_fini(q);
+ 	__xe_exec_queue_free(q);
+ }
+ 
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+index cc1cffb5c87f1d..1c9d03f2a3e5da 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
+@@ -166,8 +166,14 @@ struct xe_exec_queue_ops {
+ 	int (*init)(struct xe_exec_queue *q);
+ 	/** @kill: Kill inflight submissions for backend */
+ 	void (*kill)(struct xe_exec_queue *q);
+-	/** @fini: Fini exec queue for submission backend */
++	/** @fini: Undoes the init() for submission backend */
+ 	void (*fini)(struct xe_exec_queue *q);
++	/**
++	 * @destroy: Destroy exec queue for submission backend. The backend
++	 * function must call xe_exec_queue_fini() (which will in turn call the
++	 * fini() backend function) to ensure the queue is properly cleaned up.
++	 */
++	void (*destroy)(struct xe_exec_queue *q);
+ 	/** @set_priority: Set priority for exec queue */
+ 	int (*set_priority)(struct xe_exec_queue *q,
+ 			    enum xe_exec_queue_priority priority);
+diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
+index 788f56b066b6ad..f83d421ac9d3d2 100644
+--- a/drivers/gpu/drm/xe/xe_execlist.c
++++ b/drivers/gpu/drm/xe/xe_execlist.c
+@@ -385,10 +385,20 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
+ 	return err;
+ }
+ 
+-static void execlist_exec_queue_fini_async(struct work_struct *w)
++static void execlist_exec_queue_fini(struct xe_exec_queue *q)
++{
++	struct xe_execlist_exec_queue *exl = q->execlist;
++
++	drm_sched_entity_fini(&exl->entity);
++	drm_sched_fini(&exl->sched);
++
++	kfree(exl);
++}
++
++static void execlist_exec_queue_destroy_async(struct work_struct *w)
+ {
+ 	struct xe_execlist_exec_queue *ee =
+-		container_of(w, struct xe_execlist_exec_queue, fini_async);
++		container_of(w, struct xe_execlist_exec_queue, destroy_async);
+ 	struct xe_exec_queue *q = ee->q;
+ 	struct xe_execlist_exec_queue *exl = q->execlist;
+ 	struct xe_device *xe = gt_to_xe(q->gt);
+@@ -401,10 +411,6 @@ static void execlist_exec_queue_fini_async(struct work_struct *w)
+ 		list_del(&exl->active_link);
+ 	spin_unlock_irqrestore(&exl->port->lock, flags);
+ 
+-	drm_sched_entity_fini(&exl->entity);
+-	drm_sched_fini(&exl->sched);
+-	kfree(exl);
+-
+ 	xe_exec_queue_fini(q);
+ }
+ 
+@@ -413,10 +419,10 @@ static void execlist_exec_queue_kill(struct xe_exec_queue *q)
+ 	/* NIY */
+ }
+ 
+-static void execlist_exec_queue_fini(struct xe_exec_queue *q)
++static void execlist_exec_queue_destroy(struct xe_exec_queue *q)
+ {
+-	INIT_WORK(&q->execlist->fini_async, execlist_exec_queue_fini_async);
+-	queue_work(system_unbound_wq, &q->execlist->fini_async);
++	INIT_WORK(&q->execlist->destroy_async, execlist_exec_queue_destroy_async);
++	queue_work(system_unbound_wq, &q->execlist->destroy_async);
+ }
+ 
+ static int execlist_exec_queue_set_priority(struct xe_exec_queue *q,
+@@ -467,6 +473,7 @@ static const struct xe_exec_queue_ops execlist_exec_queue_ops = {
+ 	.init = execlist_exec_queue_init,
+ 	.kill = execlist_exec_queue_kill,
+ 	.fini = execlist_exec_queue_fini,
++	.destroy = execlist_exec_queue_destroy,
+ 	.set_priority = execlist_exec_queue_set_priority,
+ 	.set_timeslice = execlist_exec_queue_set_timeslice,
+ 	.set_preempt_timeout = execlist_exec_queue_set_preempt_timeout,
+diff --git a/drivers/gpu/drm/xe/xe_execlist_types.h b/drivers/gpu/drm/xe/xe_execlist_types.h
+index 415140936f11da..92c4ba52db0cb1 100644
+--- a/drivers/gpu/drm/xe/xe_execlist_types.h
++++ b/drivers/gpu/drm/xe/xe_execlist_types.h
+@@ -42,7 +42,7 @@ struct xe_execlist_exec_queue {
+ 
+ 	bool has_run;
+ 
+-	struct work_struct fini_async;
++	struct work_struct destroy_async;
+ 
+ 	enum xe_exec_queue_priority active_priority;
+ 	struct list_head active_link;
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index e3517ce2e18c14..eaf7569a7c1d1e 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -41,6 +41,7 @@
+ #include "xe_gt_topology.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_pc.h"
++#include "xe_guc_submit.h"
+ #include "xe_hw_fence.h"
+ #include "xe_hw_engine_class_sysfs.h"
+ #include "xe_irq.h"
+@@ -97,7 +98,7 @@ void xe_gt_sanitize(struct xe_gt *gt)
+ 	 * FIXME: if xe_uc_sanitize is called here, on TGL driver will not
+ 	 * reload
+ 	 */
+-	gt->uc.guc.submission_state.enabled = false;
++	xe_guc_submit_disable(&gt->uc.guc);
+ }
+ 
+ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index 53a44702c04afd..c15dc600dcae7a 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -1600,7 +1600,6 @@ static u64 pf_estimate_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
+ 	u64 fair;
+ 
+ 	fair = div_u64(available, num_vfs);
+-	fair = rounddown_pow_of_two(fair);	/* XXX: ttm_vram_mgr & drm_buddy limitation */
+ 	fair = ALIGN_DOWN(fair, alignment);
+ #ifdef MAX_FAIR_LMEM
+ 	fair = min_t(u64, MAX_FAIR_LMEM, fair);
+diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
+index bac5471a1a7806..b9d21fdaad48ba 100644
+--- a/drivers/gpu/drm/xe/xe_guc.c
++++ b/drivers/gpu/drm/xe/xe_guc.c
+@@ -29,6 +29,7 @@
+ #include "xe_guc_db_mgr.h"
+ #include "xe_guc_engine_activity.h"
+ #include "xe_guc_hwconfig.h"
++#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_log.h"
+ #include "xe_guc_pc.h"
+ #include "xe_guc_relay.h"
+@@ -570,6 +571,57 @@ static int guc_g2g_start(struct xe_guc *guc)
+ 	return err;
+ }
+ 
++static int __guc_opt_in_features_enable(struct xe_guc *guc, u64 addr, u32 num_dwords)
++{
++	u32 action[] = {
++		XE_GUC_ACTION_OPT_IN_FEATURE_KLV,
++		lower_32_bits(addr),
++		upper_32_bits(addr),
++		num_dwords
++	};
++
++	return xe_guc_ct_send_block(&guc->ct, action, ARRAY_SIZE(action));
++}
++
++#define OPT_IN_MAX_DWORDS 16
++int xe_guc_opt_in_features_enable(struct xe_guc *guc)
++{
++	struct xe_device *xe = guc_to_xe(guc);
++	CLASS(xe_guc_buf, buf)(&guc->buf, OPT_IN_MAX_DWORDS);
++	u32 count = 0;
++	u32 *klvs;
++	int ret;
++
++	if (!xe_guc_buf_is_valid(buf))
++		return -ENOBUFS;
++
++	klvs = xe_guc_buf_cpu_ptr(buf);
++
++	/*
++	 * The extra CAT error type opt-in was added in GuC v70.17.0, which maps
++	 * to compatibility version v1.7.0.
++	 * Note that the GuC allows enabling this KLV even on platforms that do
++	 * not support the extra type; in such case the returned type variable
++	 * will be set to a known invalid value which we can check against.
++	 */
++	if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 7, 0))
++		klvs[count++] = PREP_GUC_KLV_TAG(OPT_IN_FEATURE_EXT_CAT_ERR_TYPE);
++
++	if (count) {
++		xe_assert(xe, count <= OPT_IN_MAX_DWORDS);
++
++		ret = __guc_opt_in_features_enable(guc, xe_guc_buf_flush(buf), count);
++		if (ret < 0) {
++			xe_gt_err(guc_to_gt(guc),
++				  "failed to enable GuC opt-in features: %pe\n",
++				  ERR_PTR(ret));
++			return ret;
++		}
++	}
++
++	return 0;
++}
++
+ static void guc_fini_hw(void *arg)
+ {
+ 	struct xe_guc *guc = arg;
+@@ -763,15 +815,17 @@ int xe_guc_post_load_init(struct xe_guc *guc)
+ 
+ 	xe_guc_ads_populate_post_load(&guc->ads);
+ 
++	ret = xe_guc_opt_in_features_enable(guc);
++	if (ret)
++		return ret;
++
+ 	if (xe_guc_g2g_wanted(guc_to_xe(guc))) {
+ 		ret = guc_g2g_start(guc);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	guc->submission_state.enabled = true;
+-
+-	return 0;
++	return xe_guc_submit_enable(guc);
+ }
+ 
+ int xe_guc_reset(struct xe_guc *guc)
+@@ -1465,7 +1519,7 @@ void xe_guc_sanitize(struct xe_guc *guc)
+ {
+ 	xe_uc_fw_sanitize(&guc->fw);
+ 	xe_guc_ct_disable(&guc->ct);
+-	guc->submission_state.enabled = false;
++	xe_guc_submit_disable(guc);
+ }
+ 
+ int xe_guc_reset_prepare(struct xe_guc *guc)
+diff --git a/drivers/gpu/drm/xe/xe_guc.h b/drivers/gpu/drm/xe/xe_guc.h
+index 58338be4455856..4a66575f017d2d 100644
+--- a/drivers/gpu/drm/xe/xe_guc.h
++++ b/drivers/gpu/drm/xe/xe_guc.h
+@@ -33,6 +33,7 @@ int xe_guc_reset(struct xe_guc *guc);
+ int xe_guc_upload(struct xe_guc *guc);
+ int xe_guc_min_load_for_hwconfig(struct xe_guc *guc);
+ int xe_guc_enable_communication(struct xe_guc *guc);
++int xe_guc_opt_in_features_enable(struct xe_guc *guc);
+ int xe_guc_suspend(struct xe_guc *guc);
+ void xe_guc_notify(struct xe_guc *guc);
+ int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr);
+diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+index a3f421e2adc03b..c30c0e3ccbbb93 100644
+--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
++++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+@@ -35,8 +35,8 @@ struct xe_guc_exec_queue {
+ 	struct xe_sched_msg static_msgs[MAX_STATIC_MSG_TYPE];
+ 	/** @lr_tdr: long running TDR worker */
+ 	struct work_struct lr_tdr;
+-	/** @fini_async: do final fini async from this worker */
+-	struct work_struct fini_async;
++	/** @destroy_async: do final destroy async from this worker */
++	struct work_struct destroy_async;
+ 	/** @resume_time: time of last resume */
+ 	u64 resume_time;
+ 	/** @state: GuC specific state for this xe_exec_queue */
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index ef3e9e1588f7c6..18ddbb7b98a15b 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -32,6 +32,7 @@
+ #include "xe_guc_ct.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_id_mgr.h"
++#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_submit_types.h"
+ #include "xe_hw_engine.h"
+ #include "xe_hw_fence.h"
+@@ -316,6 +317,71 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
+ 	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+ }
+ 
++/*
++ * Given that we want to guarantee enough RCS throughput to avoid missing
++ * frames, we set the yield policy to 20% of each 80ms interval.
++ */
++#define RC_YIELD_DURATION	80	/* in ms */
++#define RC_YIELD_RATIO		20	/* in percent */
++static u32 *emit_render_compute_yield_klv(u32 *emit)
++{
++	*emit++ = PREP_GUC_KLV_TAG(SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD);
++	*emit++ = RC_YIELD_DURATION;
++	*emit++ = RC_YIELD_RATIO;
++
++	return emit;
++}
++
++#define SCHEDULING_POLICY_MAX_DWORDS 16
++static int guc_init_global_schedule_policy(struct xe_guc *guc)
++{
++	u32 data[SCHEDULING_POLICY_MAX_DWORDS];
++	u32 *emit = data;
++	u32 count = 0;
++	int ret;
++
++	if (GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 1, 0))
++		return 0;
++
++	*emit++ = XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV;
++
++	if (CCS_MASK(guc_to_gt(guc)))
++		emit = emit_render_compute_yield_klv(emit);
++
++	count = emit - data;
++	if (count > 1) {
++		xe_assert(guc_to_xe(guc), count <= SCHEDULING_POLICY_MAX_DWORDS);
++
++		ret = xe_guc_ct_send_block(&guc->ct, data, count);
++		if (ret < 0) {
++			xe_gt_err(guc_to_gt(guc),
++				  "failed to enable GuC scheduling policies: %pe\n",
++				  ERR_PTR(ret));
++			return ret;
++		}
++	}
++
++	return 0;
++}
++
++int xe_guc_submit_enable(struct xe_guc *guc)
++{
++	int ret;
++
++	ret = guc_init_global_schedule_policy(guc);
++	if (ret)
++		return ret;
++
++	guc->submission_state.enabled = true;
++
++	return 0;
++}
++
++void xe_guc_submit_disable(struct xe_guc *guc)
++{
++	guc->submission_state.enabled = false;
++}
++
+ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa_count)
+ {
+ 	int i;
+@@ -1269,48 +1335,57 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
+ 	return DRM_GPU_SCHED_STAT_NOMINAL;
+ }
+ 
+-static void __guc_exec_queue_fini_async(struct work_struct *w)
++static void guc_exec_queue_fini(struct xe_exec_queue *q)
++{
++	struct xe_guc_exec_queue *ge = q->guc;
++	struct xe_guc *guc = exec_queue_to_guc(q);
++
++	release_guc_id(guc, q);
++	xe_sched_entity_fini(&ge->entity);
++	xe_sched_fini(&ge->sched);
++
++	/*
++	 * RCU free due to sched being exported via DRM scheduler fences
++	 * (timeline name).
++	 */
++	kfree_rcu(ge, rcu);
++}
++
++static void __guc_exec_queue_destroy_async(struct work_struct *w)
+ {
+ 	struct xe_guc_exec_queue *ge =
+-		container_of(w, struct xe_guc_exec_queue, fini_async);
++		container_of(w, struct xe_guc_exec_queue, destroy_async);
+ 	struct xe_exec_queue *q = ge->q;
+ 	struct xe_guc *guc = exec_queue_to_guc(q);
+ 
+ 	xe_pm_runtime_get(guc_to_xe(guc));
+ 	trace_xe_exec_queue_destroy(q);
+ 
+-	release_guc_id(guc, q);
+ 	if (xe_exec_queue_is_lr(q))
+ 		cancel_work_sync(&ge->lr_tdr);
+ 	/* Confirm no work left behind accessing device structures */
+ 	cancel_delayed_work_sync(&ge->sched.base.work_tdr);
+-	xe_sched_entity_fini(&ge->entity);
+-	xe_sched_fini(&ge->sched);
+ 
+-	/*
+-	 * RCU free due sched being exported via DRM scheduler fences
+-	 * (timeline name).
+-	 */
+-	kfree_rcu(ge, rcu);
+ 	xe_exec_queue_fini(q);
++
+ 	xe_pm_runtime_put(guc_to_xe(guc));
+ }
+ 
+-static void guc_exec_queue_fini_async(struct xe_exec_queue *q)
++static void guc_exec_queue_destroy_async(struct xe_exec_queue *q)
+ {
+ 	struct xe_guc *guc = exec_queue_to_guc(q);
+ 	struct xe_device *xe = guc_to_xe(guc);
+ 
+-	INIT_WORK(&q->guc->fini_async, __guc_exec_queue_fini_async);
++	INIT_WORK(&q->guc->destroy_async, __guc_exec_queue_destroy_async);
+ 
+ 	/* We must block on kernel engines so slabs are empty on driver unload */
+ 	if (q->flags & EXEC_QUEUE_FLAG_PERMANENT || exec_queue_wedged(q))
+-		__guc_exec_queue_fini_async(&q->guc->fini_async);
++		__guc_exec_queue_destroy_async(&q->guc->destroy_async);
+ 	else
+-		queue_work(xe->destroy_wq, &q->guc->fini_async);
++		queue_work(xe->destroy_wq, &q->guc->destroy_async);
+ }
+ 
+-static void __guc_exec_queue_fini(struct xe_guc *guc, struct xe_exec_queue *q)
++static void __guc_exec_queue_destroy(struct xe_guc *guc, struct xe_exec_queue *q)
+ {
+ 	/*
+ 	 * Might be done from within the GPU scheduler, need to do async as we
+@@ -1319,7 +1394,7 @@ static void __guc_exec_queue_fini(struct xe_guc *guc, struct xe_exec_queue *q)
+ 	 * this we and don't really care when everything is fini'd, just that it
+ 	 * is.
+ 	 */
+-	guc_exec_queue_fini_async(q);
++	guc_exec_queue_destroy_async(q);
+ }
+ 
+ static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
+@@ -1333,7 +1408,7 @@ static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
+ 	if (exec_queue_registered(q))
+ 		disable_scheduling_deregister(guc, q);
+ 	else
+-		__guc_exec_queue_fini(guc, q);
++		__guc_exec_queue_destroy(guc, q);
+ }
+ 
+ static bool guc_exec_queue_allowed_to_change_state(struct xe_exec_queue *q)
+@@ -1566,14 +1641,14 @@ static bool guc_exec_queue_try_add_msg(struct xe_exec_queue *q,
+ #define STATIC_MSG_CLEANUP	0
+ #define STATIC_MSG_SUSPEND	1
+ #define STATIC_MSG_RESUME	2
+-static void guc_exec_queue_fini(struct xe_exec_queue *q)
++static void guc_exec_queue_destroy(struct xe_exec_queue *q)
+ {
+ 	struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_CLEANUP;
+ 
+ 	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && !exec_queue_wedged(q))
+ 		guc_exec_queue_add_msg(q, msg, CLEANUP);
+ 	else
+-		__guc_exec_queue_fini(exec_queue_to_guc(q), q);
++		__guc_exec_queue_destroy(exec_queue_to_guc(q), q);
+ }
+ 
+ static int guc_exec_queue_set_priority(struct xe_exec_queue *q,
+@@ -1703,6 +1778,7 @@ static const struct xe_exec_queue_ops guc_exec_queue_ops = {
+ 	.init = guc_exec_queue_init,
+ 	.kill = guc_exec_queue_kill,
+ 	.fini = guc_exec_queue_fini,
++	.destroy = guc_exec_queue_destroy,
+ 	.set_priority = guc_exec_queue_set_priority,
+ 	.set_timeslice = guc_exec_queue_set_timeslice,
+ 	.set_preempt_timeout = guc_exec_queue_set_preempt_timeout,
+@@ -1724,7 +1800,7 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
+ 		if (exec_queue_extra_ref(q) || xe_exec_queue_is_lr(q))
+ 			xe_exec_queue_put(q);
+ 		else if (exec_queue_destroyed(q))
+-			__guc_exec_queue_fini(guc, q);
++			__guc_exec_queue_destroy(guc, q);
+ 	}
+ 	if (q->guc->suspend_pending) {
+ 		set_exec_queue_suspended(q);
+@@ -1981,7 +2057,7 @@ static void handle_deregister_done(struct xe_guc *guc, struct xe_exec_queue *q)
+ 	if (exec_queue_extra_ref(q) || xe_exec_queue_is_lr(q))
+ 		xe_exec_queue_put(q);
+ 	else
+-		__guc_exec_queue_fini(guc, q);
++		__guc_exec_queue_destroy(guc, q);
+ }
+ 
+ int xe_guc_deregister_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
+@@ -2078,12 +2154,16 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	struct xe_gt *gt = guc_to_gt(guc);
+ 	struct xe_exec_queue *q;
+ 	u32 guc_id;
++	u32 type = XE_GUC_CAT_ERR_TYPE_INVALID;
+ 
+-	if (unlikely(len < 1))
++	if (unlikely(!len || len > 2))
+ 		return -EPROTO;
+ 
+ 	guc_id = msg[0];
+ 
++	if (len == 2)
++		type = msg[1];
++
+ 	if (guc_id == GUC_ID_UNKNOWN) {
+ 		/*
+ 		 * GuC uses GUC_ID_UNKNOWN if it can not map the CAT fault to any PF/VF
+@@ -2097,8 +2177,19 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	if (unlikely(!q))
+ 		return -EPROTO;
+ 
+-	xe_gt_dbg(gt, "Engine memory cat error: engine_class=%s, logical_mask: 0x%x, guc_id=%d",
+-		  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
++	/*
++	 * The type is HW-defined and changes based on platform, so we don't
++	 * decode it in the kernel and only check if it is valid.
++	 * See bspec 54047 and 72187 for details.
++	 */
++	if (type != XE_GUC_CAT_ERR_TYPE_INVALID)
++		xe_gt_dbg(gt,
++			  "Engine memory CAT error [%u]: class=%s, logical_mask: 0x%x, guc_id=%d",
++			  type, xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
++	else
++		xe_gt_dbg(gt,
++			  "Engine memory CAT error: class=%s, logical_mask: 0x%x, guc_id=%d",
++			  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+ 
+ 	trace_xe_exec_queue_memory_cat_error(q);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
+index 9b71a986c6ca69..0d126b807c1041 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.h
++++ b/drivers/gpu/drm/xe/xe_guc_submit.h
+@@ -13,6 +13,8 @@ struct xe_exec_queue;
+ struct xe_guc;
+ 
+ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids);
++int xe_guc_submit_enable(struct xe_guc *guc);
++void xe_guc_submit_disable(struct xe_guc *guc);
+ 
+ int xe_guc_submit_reset_prepare(struct xe_guc *guc);
+ void xe_guc_submit_reset_wait(struct xe_guc *guc);
+diff --git a/drivers/gpu/drm/xe/xe_tile_sysfs.c b/drivers/gpu/drm/xe/xe_tile_sysfs.c
+index b804234a655160..9e1236a9ec6734 100644
+--- a/drivers/gpu/drm/xe/xe_tile_sysfs.c
++++ b/drivers/gpu/drm/xe/xe_tile_sysfs.c
+@@ -44,16 +44,18 @@ int xe_tile_sysfs_init(struct xe_tile *tile)
+ 	kt->tile = tile;
+ 
+ 	err = kobject_add(&kt->base, &dev->kobj, "tile%d", tile->id);
+-	if (err) {
+-		kobject_put(&kt->base);
+-		return err;
+-	}
++	if (err)
++		goto err_object;
+ 
+ 	tile->sysfs = &kt->base;
+ 
+ 	err = xe_vram_freq_sysfs_init(tile);
+ 	if (err)
+-		return err;
++		goto err_object;
+ 
+ 	return devm_add_action_or_reset(xe->drm.dev, tile_sysfs_fini, tile);
++
++err_object:
++	kobject_put(&kt->base);
++	return err;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_uc.c b/drivers/gpu/drm/xe/xe_uc.c
+index 3a8751a8b92dde..5c45b0f072a4c2 100644
+--- a/drivers/gpu/drm/xe/xe_uc.c
++++ b/drivers/gpu/drm/xe/xe_uc.c
+@@ -165,6 +165,10 @@ static int vf_uc_init_hw(struct xe_uc *uc)
+ 
+ 	uc->guc.submission_state.enabled = true;
+ 
++	err = xe_guc_opt_in_features_enable(&uc->guc);
++	if (err)
++		return err;
++
+ 	err = xe_gt_record_default_lrcs(uc_to_gt(uc));
+ 	if (err)
+ 		return err;
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 84052b98002d14..92ce7374a79ce9 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -240,8 +240,8 @@ int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
+ 
+ 	pfence = xe_preempt_fence_create(q, q->lr.context,
+ 					 ++q->lr.seqno);
+-	if (!pfence) {
+-		err = -ENOMEM;
++	if (IS_ERR(pfence)) {
++		err = PTR_ERR(pfence);
+ 		goto out_fini;
+ 	}
+ 
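The xe_vm fix corrects the error check for an ERR_PTR()-returning allocator: failures are encoded in the pointer value itself, so a NULL test never fires and the errno must be recovered with PTR_ERR() after IS_ERR(). The convention in isolation (make_fence() is a hypothetical stand-in):

  #include <linux/err.h>

  struct fence;
  struct fence *make_fence(void);	/* returns ERR_PTR() on failure */

  static int use_fence(void)
  {
  	struct fence *f = make_fence();

  	if (IS_ERR(f))			/* a NULL check would miss ERR_PTR(-ENOMEM) */
  		return PTR_ERR(f);	/* decode the encoded errno */

  	/* ... use f ... */
  	return 0;
  }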
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index ccbab3a4811adf..a1086b6d1f11f8 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -551,6 +551,7 @@ struct gcr3_tbl_info {
+ };
+ 
+ struct amd_io_pgtable {
++	seqcount_t		seqcount;	/* Protects root/mode update */
+ 	struct io_pgtable	pgtbl;
+ 	int			mode;
+ 	u64			*root;
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 7add9bcf45dc8b..eef55aa4143c1f 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1450,12 +1450,12 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
+ 				    PCI_FUNC(e->devid));
+ 
+ 			devid = e->devid;
+-			for (dev_i = devid_start; dev_i <= devid; ++dev_i) {
+-				if (alias)
++			if (alias) {
++				for (dev_i = devid_start; dev_i <= devid; ++dev_i)
+ 					pci_seg->alias_table[dev_i] = devid_to;
++				set_dev_entry_from_acpi(iommu, devid_to, flags, ext_flags);
+ 			}
+ 			set_dev_entry_from_acpi_range(iommu, devid_start, devid, flags, ext_flags);
+-			set_dev_entry_from_acpi(iommu, devid_to, flags, ext_flags);
+ 			break;
+ 		case IVHD_DEV_SPECIAL: {
+ 			u8 handle, type;
+@@ -3048,7 +3048,8 @@ static int __init early_amd_iommu_init(void)
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_CX16)) {
+ 		pr_err("Failed to initialize. The CMPXCHG16B feature is required.\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 
+ 	/*
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 4d308c07113495..5c6fdf38636a0c 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -17,6 +17,7 @@
+ #include <linux/slab.h>
+ #include <linux/types.h>
+ #include <linux/dma-mapping.h>
++#include <linux/seqlock.h>
+ 
+ #include <asm/barrier.h>
+ 
+@@ -130,8 +131,11 @@ static bool increase_address_space(struct amd_io_pgtable *pgtable,
+ 
+ 	*pte = PM_LEVEL_PDE(pgtable->mode, iommu_virt_to_phys(pgtable->root));
+ 
++	write_seqcount_begin(&pgtable->seqcount);
+ 	pgtable->root  = pte;
+ 	pgtable->mode += 1;
++	write_seqcount_end(&pgtable->seqcount);
++
+ 	amd_iommu_update_and_flush_device_table(domain);
+ 
+ 	pte = NULL;
+@@ -153,6 +157,7 @@ static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
+ {
+ 	unsigned long last_addr = address + (page_size - 1);
+ 	struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
++	unsigned int seqcount;
+ 	int level, end_lvl;
+ 	u64 *pte, *page;
+ 
+@@ -170,8 +175,14 @@ static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
+ 	}
+ 
+ 
+-	level   = pgtable->mode - 1;
+-	pte     = &pgtable->root[PM_LEVEL_INDEX(level, address)];
++	do {
++		seqcount = read_seqcount_begin(&pgtable->seqcount);
++
++		level   = pgtable->mode - 1;
++		pte     = &pgtable->root[PM_LEVEL_INDEX(level, address)];
++	} while (read_seqcount_retry(&pgtable->seqcount, seqcount));
++
++
+ 	address = PAGE_SIZE_ALIGN(address, page_size);
+ 	end_lvl = PAGE_SIZE_LEVEL(page_size);
+ 
+@@ -249,6 +260,7 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
+ 		      unsigned long *page_size)
+ {
+ 	int level;
++	unsigned int seqcount;
+ 	u64 *pte;
+ 
+ 	*page_size = 0;
+@@ -256,8 +268,12 @@ static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
+ 	if (address > PM_LEVEL_SIZE(pgtable->mode))
+ 		return NULL;
+ 
+-	level	   =  pgtable->mode - 1;
+-	pte	   = &pgtable->root[PM_LEVEL_INDEX(level, address)];
++	do {
++		seqcount = read_seqcount_begin(&pgtable->seqcount);
++		level	   =  pgtable->mode - 1;
++		pte	   = &pgtable->root[PM_LEVEL_INDEX(level, address)];
++	} while (read_seqcount_retry(&pgtable->seqcount, seqcount));
++
+ 	*page_size =  PTE_LEVEL_PAGE_SIZE(level);
+ 
+ 	while (level > 0) {
+@@ -541,6 +557,7 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
+ 	if (!pgtable->root)
+ 		return NULL;
+ 	pgtable->mode = PAGE_MODE_3_LEVEL;
++	seqcount_init(&pgtable->seqcount);
+ 
+ 	cfg->pgsize_bitmap  = amd_iommu_pgsize_bitmap;
+ 	cfg->ias            = IOMMU_IN_ADDR_BIT_SIZE;
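The AMD IOMMU hunks wrap every {root, mode} update in write_seqcount_begin()/write_seqcount_end() and make the lockless page-table walkers retry via read_seqcount_begin()/read_seqcount_retry(), so a walker can never combine a new root with a stale mode or vice versa. A minimal sketch of the protocol (hypothetical table struct; writers are assumed to already serialize against each other, as the domain lock does in the driver):

  #include <linux/seqlock.h>

  struct table {
  	seqcount_t seq;		/* guards root + mode as one unit */
  	u64 *root;
  	int mode;
  };

  static void table_grow(struct table *t, u64 *new_root)	/* writer */
  {
  	write_seqcount_begin(&t->seq);
  	t->root = new_root;
  	t->mode += 1;
  	write_seqcount_end(&t->seq);
  }

  static u64 *table_snapshot(struct table *t, int *mode)	/* lockless reader */
  {
  	unsigned int seq;
  	u64 *root;

  	do {
  		seq = read_seqcount_begin(&t->seq);
  		*mode = t->mode;
  		root = t->root;
  	} while (read_seqcount_retry(&t->seq, seq));

  	return root;
  }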
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 34dd175a331dc7..3dd4d73fcb5dc6 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1592,6 +1592,10 @@ static void switch_to_super_page(struct dmar_domain *domain,
+ 	unsigned long lvl_pages = lvl_to_nr_pages(level);
+ 	struct dma_pte *pte = NULL;
+ 
++	if (WARN_ON(!IS_ALIGNED(start_pfn, lvl_pages) ||
++		    !IS_ALIGNED(end_pfn + 1, lvl_pages)))
++		return;
++
+ 	while (start_pfn <= end_pfn) {
+ 		if (!pte)
+ 			pte = pfn_to_dma_pte(domain, start_pfn, &level,
+@@ -1667,7 +1671,8 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ 				unsigned long pages_to_remove;
+ 
+ 				pteval |= DMA_PTE_LARGE_PAGE;
+-				pages_to_remove = min_t(unsigned long, nr_pages,
++				pages_to_remove = min_t(unsigned long,
++							round_down(nr_pages, lvl_pages),
+ 							nr_pte_to_next_page(pte) * lvl_pages);
+ 				end_pfn = iov_pfn + pages_to_remove - 1;
+ 				switch_to_super_page(domain, iov_pfn, end_pfn, largepage_lvl);
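The Intel IOMMU change has two halves: switch_to_super_page() now refuses (with a WARN_ON) ranges whose ends are not superpage-aligned, and __domain_mapping() rounds the candidate run down to a whole multiple of lvl_pages so only complete superpages are converted; any remainder is mapped at a smaller level on the next loop iteration. The clamping arithmetic in isolation (a hedged, hypothetical helper):

  #include <linux/kernel.h>
  #include <linux/minmax.h>

  /*
   * lvl_pages = base pages per superpage PTE; ptes_to_page_end limits the
   * run to the current page-table page, as nr_pte_to_next_page() does above.
   */
  static unsigned long whole_superpages(unsigned long nr_pages,
  				      unsigned long lvl_pages,
  				      unsigned long ptes_to_page_end)
  {
  	return min(round_down(nr_pages, lvl_pages),
  		   ptes_to_page_end * lvl_pages);
  }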
+diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
+index 433b59f435302b..cc9ec636a7a21e 100644
+--- a/drivers/iommu/s390-iommu.c
++++ b/drivers/iommu/s390-iommu.c
+@@ -611,6 +611,23 @@ static u64 get_iota_region_flag(struct s390_domain *domain)
+ 	}
+ }
+ 
++static bool reg_ioat_propagate_error(int cc, u8 status)
++{
++	/*
++	 * If the device is in the error state the reset routine
++	 * will register the IOAT of the newly set domain on re-enable
++	 */
++	if (cc == ZPCI_CC_ERR && status == ZPCI_PCI_ST_FUNC_NOT_AVAIL)
++		return false;
++	/*
++	 * If the device was removed treat registration as success
++	 * and let the subsequent error event trigger tear down.
++	 */
++	if (cc == ZPCI_CC_INVAL_HANDLE)
++		return false;
++	return cc != ZPCI_CC_OK;
++}
++
+ static int s390_iommu_domain_reg_ioat(struct zpci_dev *zdev,
+ 				      struct iommu_domain *domain, u8 *status)
+ {
+@@ -695,7 +712,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
+ 
+ 	/* If we fail now DMA remains blocked via blocking domain */
+ 	cc = s390_iommu_domain_reg_ioat(zdev, domain, &status);
+-	if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL)
++	if (reg_ioat_propagate_error(cc, status))
+ 		return -EIO;
+ 	zdev->dma_table = s390_domain->dma_table;
+ 	zdev_s390_domain_update(zdev, domain);
+@@ -1031,7 +1048,8 @@ struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
+ 
+ 	lockdep_assert_held(&zdev->dom_lock);
+ 
+-	if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
++	if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED ||
++	    zdev->s390_domain->type == IOMMU_DOMAIN_IDENTITY)
+ 		return NULL;
+ 
+ 	s390_domain = to_s390_domain(zdev->s390_domain);
+@@ -1122,12 +1140,7 @@ static int s390_attach_dev_identity(struct iommu_domain *domain,
+ 
+ 	/* If we fail now DMA remains blocked via blocking domain */
+ 	cc = s390_iommu_domain_reg_ioat(zdev, domain, &status);
+-
+-	/*
+-	 * If the device is undergoing error recovery the reset code
+-	 * will re-establish the new domain.
+-	 */
+-	if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL)
++	if (reg_ioat_propagate_error(cc, status))
+ 		return -EIO;
+ 
+ 	zdev_s390_domain_update(zdev, domain);
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 9835f2fe26e99f..1e09b4860a4543 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3810,8 +3810,10 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	struct raid_set *rs = ti->private;
+ 	unsigned int chunk_size_bytes = to_bytes(rs->md.chunk_sectors);
+ 
+-	limits->io_min = chunk_size_bytes;
+-	limits->io_opt = chunk_size_bytes * mddev_data_stripes(rs);
++	if (chunk_size_bytes) {
++		limits->io_min = chunk_size_bytes;
++		limits->io_opt = chunk_size_bytes * mddev_data_stripes(rs);
++	}
+ }
+ 
+ static void raid_presuspend(struct dm_target *ti)
+diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
+index 5bbbdf8fc1bdef..2313dbaf642422 100644
+--- a/drivers/md/dm-stripe.c
++++ b/drivers/md/dm-stripe.c
+@@ -456,11 +456,15 @@ static void stripe_io_hints(struct dm_target *ti,
+ 			    struct queue_limits *limits)
+ {
+ 	struct stripe_c *sc = ti->private;
+-	unsigned int chunk_size = sc->chunk_size << SECTOR_SHIFT;
++	unsigned int io_min, io_opt;
+ 
+ 	limits->chunk_sectors = sc->chunk_size;
+-	limits->io_min = chunk_size;
+-	limits->io_opt = chunk_size * sc->stripes;
++
++	if (!check_shl_overflow(sc->chunk_size, SECTOR_SHIFT, &io_min) &&
++	    !check_mul_overflow(io_min, sc->stripes, &io_opt)) {
++		limits->io_min = io_min;
++		limits->io_opt = io_opt;
++	}
+ }
+ 
+ static struct target_type stripe_target = {
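
check_shl_overflow()/check_mul_overflow() only store a result when it fits, so io_min/io_opt keep their stacked defaults instead of a wrapped value when chunk_size << SECTOR_SHIFT overflows. The same idea with the compiler builtins those helpers wrap (chunk/stripe numbers are hypothetical, and the shift is modelled as a multiply):

#include <stdio.h>

#define SECTOR_SHIFT 9

int main(void)
{
	unsigned int chunk_sectors = 1U << 28;	/* pathologically large chunk */
	unsigned int stripes = 16;
	unsigned int io_min, io_opt;

	if (!__builtin_mul_overflow(chunk_sectors, 1U << SECTOR_SHIFT, &io_min) &&
	    !__builtin_mul_overflow(io_min, stripes, &io_opt))
		printf("io_min=%u io_opt=%u\n", io_min, io_opt);
	else
		printf("overflow: keeping default limits\n");	/* taken here */
	return 0;
}
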
+diff --git a/drivers/mmc/host/mvsdio.c b/drivers/mmc/host/mvsdio.c
+index 101f36de7b63bd..8165fa4d0d937a 100644
+--- a/drivers/mmc/host/mvsdio.c
++++ b/drivers/mmc/host/mvsdio.c
+@@ -292,7 +292,7 @@ static u32 mvsd_finish_data(struct mvsd_host *host, struct mmc_data *data,
+ 		host->pio_ptr = NULL;
+ 		host->pio_size = 0;
+ 	} else {
+-		dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->sg_frags,
++		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+ 			     mmc_get_dma_dir(data));
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index 3a1de477e9af8d..b0f91cc9e40e43 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -283,6 +283,8 @@
+ #define   PCIE_GLI_9767_UHS2_CTL2_ZC_VALUE	  0xb
+ #define   PCIE_GLI_9767_UHS2_CTL2_ZC_CTL	  BIT(6)
+ #define   PCIE_GLI_9767_UHS2_CTL2_ZC_CTL_VALUE	  0x1
++#define   PCIE_GLI_9767_UHS2_CTL2_FORCE_PHY_RESETN	BIT(13)
++#define   PCIE_GLI_9767_UHS2_CTL2_FORCE_RESETN_VALUE	BIT(14)
+ 
+ #define GLI_MAX_TUNING_LOOP 40
+ 
+@@ -1179,6 +1181,65 @@ static void gl9767_set_low_power_negotiation(struct pci_dev *pdev, bool enable)
+ 	gl9767_vhs_read(pdev);
+ }
+ 
++static void sdhci_gl9767_uhs2_phy_reset(struct sdhci_host *host, bool assert)
++{
++	struct sdhci_pci_slot *slot = sdhci_priv(host);
++	struct pci_dev *pdev = slot->chip->pdev;
++	u32 value, set, clr;
++
++	if (assert) {
++		/* Assert reset: set RESETN and clear RESETN_VALUE */
++		set = PCIE_GLI_9767_UHS2_CTL2_FORCE_PHY_RESETN;
++		clr = PCIE_GLI_9767_UHS2_CTL2_FORCE_RESETN_VALUE;
++	} else {
++		/* De-assert reset: clear RESETN and set RESETN_VALUE */
++		set = PCIE_GLI_9767_UHS2_CTL2_FORCE_RESETN_VALUE;
++		clr = PCIE_GLI_9767_UHS2_CTL2_FORCE_PHY_RESETN;
++	}
++
++	gl9767_vhs_write(pdev);
++	pci_read_config_dword(pdev, PCIE_GLI_9767_UHS2_CTL2, &value);
++	value |= set;
++	pci_write_config_dword(pdev, PCIE_GLI_9767_UHS2_CTL2, value);
++	value &= ~clr;
++	pci_write_config_dword(pdev, PCIE_GLI_9767_UHS2_CTL2, value);
++	gl9767_vhs_read(pdev);
++}
++
++static void __gl9767_uhs2_set_power(struct sdhci_host *host, unsigned char mode, unsigned short vdd)
++{
++	u8 pwr = 0;
++
++	if (mode != MMC_POWER_OFF) {
++		pwr = sdhci_get_vdd_value(vdd);
++		if (!pwr)
++			WARN(1, "%s: Invalid vdd %#x\n",
++			     mmc_hostname(host->mmc), vdd);
++		pwr |= SDHCI_VDD2_POWER_180;
++	}
++
++	if (host->pwr == pwr)
++		return;
++
++	host->pwr = pwr;
++
++	if (pwr == 0) {
++		sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
++	} else {
++		sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
++
++		pwr |= SDHCI_POWER_ON;
++		sdhci_writeb(host, pwr & 0xf, SDHCI_POWER_CONTROL);
++		usleep_range(5000, 6250);
++
++		/* Assert reset */
++		sdhci_gl9767_uhs2_phy_reset(host, true);
++		pwr |= SDHCI_VDD2_POWER_ON;
++		sdhci_writeb(host, pwr, SDHCI_POWER_CONTROL);
++		usleep_range(5000, 6250);
++	}
++}
++
+ static void sdhci_gl9767_set_clock(struct sdhci_host *host, unsigned int clock)
+ {
+ 	struct sdhci_pci_slot *slot = sdhci_priv(host);
+@@ -1205,6 +1266,11 @@ static void sdhci_gl9767_set_clock(struct sdhci_host *host, unsigned int clock)
+ 	}
+ 
+ 	sdhci_enable_clk(host, clk);
++
++	if (mmc_card_uhs2(host->mmc))
++		/* De-assert reset */
++		sdhci_gl9767_uhs2_phy_reset(host, false);
++
+ 	gl9767_set_low_power_negotiation(pdev, true);
+ }
+ 
+@@ -1476,7 +1542,7 @@ static void sdhci_gl9767_set_power(struct sdhci_host *host, unsigned char mode,
+ 		gl9767_vhs_read(pdev);
+ 
+ 		sdhci_gli_overcurrent_event_enable(host, false);
+-		sdhci_uhs2_set_power(host, mode, vdd);
++		__gl9767_uhs2_set_power(host, mode, vdd);
+ 		sdhci_gli_overcurrent_event_enable(host, true);
+ 	} else {
+ 		gl9767_vhs_write(pdev);
+diff --git a/drivers/mmc/host/sdhci-uhs2.c b/drivers/mmc/host/sdhci-uhs2.c
+index 0efeb9d0c3765a..c459a08d01da52 100644
+--- a/drivers/mmc/host/sdhci-uhs2.c
++++ b/drivers/mmc/host/sdhci-uhs2.c
+@@ -295,7 +295,8 @@ static void __sdhci_uhs2_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	else
+ 		sdhci_uhs2_set_power(host, ios->power_mode, ios->vdd);
+ 
+-	sdhci_set_clock(host, host->clock);
++	host->ops->set_clock(host, ios->clock);
++	host->clock = ios->clock;
+ }
+ 
+ static int sdhci_uhs2_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index e116f2db34d52b..be701eb25fec42 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2367,23 +2367,6 @@ void sdhci_set_ios_common(struct mmc_host *mmc, struct mmc_ios *ios)
+ 		(ios->power_mode == MMC_POWER_UP) &&
+ 		!(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN))
+ 		sdhci_enable_preset_value(host, false);
+-
+-	if (!ios->clock || ios->clock != host->clock) {
+-		host->ops->set_clock(host, ios->clock);
+-		host->clock = ios->clock;
+-
+-		if (host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK &&
+-		    host->clock) {
+-			host->timeout_clk = mmc->actual_clock ?
+-						mmc->actual_clock / 1000 :
+-						host->clock / 1000;
+-			mmc->max_busy_timeout =
+-				host->ops->get_max_timeout_count ?
+-				host->ops->get_max_timeout_count(host) :
+-				1 << 27;
+-			mmc->max_busy_timeout /= host->timeout_clk;
+-		}
+-	}
+ }
+ EXPORT_SYMBOL_GPL(sdhci_set_ios_common);
+ 
+@@ -2410,6 +2393,23 @@ void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 
+ 	sdhci_set_ios_common(mmc, ios);
+ 
++	if (!ios->clock || ios->clock != host->clock) {
++		host->ops->set_clock(host, ios->clock);
++		host->clock = ios->clock;
++
++		if (host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK &&
++		    host->clock) {
++			host->timeout_clk = mmc->actual_clock ?
++						mmc->actual_clock / 1000 :
++						host->clock / 1000;
++			mmc->max_busy_timeout =
++				host->ops->get_max_timeout_count ?
++				host->ops->get_max_timeout_count(host) :
++				1 << 27;
++			mmc->max_busy_timeout /= host->timeout_clk;
++		}
++	}
++
+ 	if (host->ops->set_power)
+ 		host->ops->set_power(host, ios->power_mode, ios->vdd);
+ 	else
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index c4d53e8e7c152d..e23195dd747769 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2115,6 +2115,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ 		memcpy(ss.__data, bond_dev->dev_addr, bond_dev->addr_len);
+ 	} else if (bond->params.fail_over_mac == BOND_FOM_FOLLOW &&
+ 		   BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP &&
++		   bond_has_slaves(bond) &&
+ 		   memcmp(slave_dev->dev_addr, bond_dev->dev_addr, bond_dev->addr_len) == 0) {
+ 		/* Set slave to random address to avoid duplicate mac
+ 		 * address in later fail over.
+@@ -3338,7 +3339,6 @@ static void bond_ns_send_all(struct bonding *bond, struct slave *slave)
+ 		/* Find out through which dev should the packet go */
+ 		memset(&fl6, 0, sizeof(struct flowi6));
+ 		fl6.daddr = targets[i];
+-		fl6.flowi6_oif = bond->dev->ifindex;
+ 
+ 		dst = ip6_route_output(dev_net(bond->dev), NULL, &fl6);
+ 		if (dst->error) {
+diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
+index a9040c42d2ff97..6e97a5a7daaf9c 100644
+--- a/drivers/net/ethernet/broadcom/cnic.c
++++ b/drivers/net/ethernet/broadcom/cnic.c
+@@ -4230,8 +4230,7 @@ static void cnic_cm_stop_bnx2x_hw(struct cnic_dev *dev)
+ 
+ 	cnic_bnx2x_delete_wait(dev, 0);
+ 
+-	cancel_delayed_work(&cp->delete_task);
+-	flush_workqueue(cnic_wq);
++	cancel_delayed_work_sync(&cp->delete_task);
+ 
+ 	if (atomic_read(&cp->iscsi_conn) != 0)
+ 		netdev_warn(dev->netdev, "%d iSCSI connections not destroyed\n",
+diff --git a/drivers/net/ethernet/cavium/liquidio/request_manager.c b/drivers/net/ethernet/cavium/liquidio/request_manager.c
+index de8a6ce86ad7e2..12105ffb5dac6d 100644
+--- a/drivers/net/ethernet/cavium/liquidio/request_manager.c
++++ b/drivers/net/ethernet/cavium/liquidio/request_manager.c
+@@ -126,7 +126,7 @@ int octeon_init_instr_queue(struct octeon_device *oct,
+ 	oct->io_qmask.iq |= BIT_ULL(iq_no);
+ 
+ 	/* Set the 32B/64B mode for each input queue */
+-	oct->io_qmask.iq64B |= ((conf->instr_type == 64) << iq_no);
++	oct->io_qmask.iq64B |= ((u64)(conf->instr_type == 64) << iq_no);
+ 	iq->iqcmd_64B = (conf->instr_type == 64);
+ 
+ 	oct->fn_list.setup_iq_regs(oct, iq_no);
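
Without the cast, the (conf->instr_type == 64) comparison is a plain int, so for iq_no >= 32 the shift happens in 32-bit arithmetic and is undefined before the result ever widens to the u64 mask. Standalone reproduction (queue number invented):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t mask = 0;
	int iq_no = 40;
	int is_64b = 1;		/* result of the == comparison: int, not u64 */

	/* buggy form would be: mask |= is_64b << iq_no;  (UB, 32-bit shift) */
	mask |= (uint64_t)is_64b << iq_no;	/* widen first, as in the fix */
	printf("mask = %#llx\n", (unsigned long long)mask);
	return 0;
}
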
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index 4643a338061820..b1e1ad9e4b48e6 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -2736,7 +2736,7 @@ static int dpaa2_switch_setup_dpbp(struct ethsw_core *ethsw)
+ 		dev_err(dev, "dpsw_ctrl_if_set_pools() failed\n");
+ 		goto err_get_attr;
+ 	}
+-	ethsw->bpid = dpbp_attrs.id;
++	ethsw->bpid = dpbp_attrs.bpid;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index c006f716a3bdbe..ca7517a68a2c32 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -947,9 +947,6 @@ static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
+ 		if (!eop_desc)
+ 			break;
+ 
+-		/* prevent any other reads prior to eop_desc */
+-		smp_rmb();
+-
+ 		i40e_trace(clean_tx_irq, tx_ring, tx_desc, tx_buf);
+ 		/* we have caught up to head, no work left to do */
+ 		if (tx_head == tx_desc)
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index c50cf3ad190e9a..4766597ac55509 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -865,10 +865,6 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
+ 				   rx_buf->page_offset, size);
+ 	sinfo->xdp_frags_size += size;
+-	/* remember frag count before XDP prog execution; bpf_xdp_adjust_tail()
+-	 * can pop off frags but driver has to handle it on its own
+-	 */
+-	rx_ring->nr_frags = sinfo->nr_frags;
+ 
+ 	if (page_is_pfmemalloc(rx_buf->page))
+ 		xdp_buff_set_frag_pfmemalloc(xdp);
+@@ -939,20 +935,20 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
+ /**
+  * ice_get_pgcnts - grab page_count() for gathered fragments
+  * @rx_ring: Rx descriptor ring to store the page counts on
++ * @ntc: the next to clean element (not included in this frame!)
+  *
+  * This function is intended to be called right before running XDP
+  * program so that the page recycling mechanism will be able to take
+  * a correct decision regarding underlying pages; this is done in such
+  * way as XDP program can change the refcount of page
+  */
+-static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
++static void ice_get_pgcnts(struct ice_rx_ring *rx_ring, unsigned int ntc)
+ {
+-	u32 nr_frags = rx_ring->nr_frags + 1;
+ 	u32 idx = rx_ring->first_desc;
+ 	struct ice_rx_buf *rx_buf;
+ 	u32 cnt = rx_ring->count;
+ 
+-	for (int i = 0; i < nr_frags; i++) {
++	while (idx != ntc) {
+ 		rx_buf = &rx_ring->rx_buf[idx];
+ 		rx_buf->pgcnt = page_count(rx_buf->page);
+ 
+@@ -1125,62 +1121,51 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
+ }
+ 
+ /**
+- * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
++ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all buffers in frame
+  * @rx_ring: Rx ring with all the auxiliary data
+  * @xdp: XDP buffer carrying linear + frags part
+- * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
+- * @ntc: a current next_to_clean value to be stored at rx_ring
++ * @ntc: the next to clean element (not included in this frame!)
+  * @verdict: return code from XDP program execution
+  *
+- * Walk through gathered fragments and satisfy internal page
+- * recycle mechanism; we take here an action related to verdict
+- * returned by XDP program;
++ * Called after the XDP program has completed, or on error with verdict set to
++ * ICE_XDP_CONSUMED.
++ *
++ * Walk through buffers from first_desc to the end of the frame, releasing
++ * buffers and satisfying internal page recycle mechanism. The action depends
++ * on verdict from XDP program.
+  */
+ static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+-			    u32 *xdp_xmit, u32 ntc, u32 verdict)
++			    u32 ntc, u32 verdict)
+ {
+-	u32 nr_frags = rx_ring->nr_frags + 1;
+ 	u32 idx = rx_ring->first_desc;
+ 	u32 cnt = rx_ring->count;
+-	u32 post_xdp_frags = 1;
+ 	struct ice_rx_buf *buf;
+-	int i;
++	u32 xdp_frags = 0;
++	int i = 0;
+ 
+ 	if (unlikely(xdp_buff_has_frags(xdp)))
+-		post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
++		xdp_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
+ 
+-	for (i = 0; i < post_xdp_frags; i++) {
++	while (idx != ntc) {
+ 		buf = &rx_ring->rx_buf[idx];
++		if (++idx == cnt)
++			idx = 0;
+ 
+-		if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
++		/* An XDP program could release fragments from the end of the
++		 * buffer. For these, we need to keep the pagecnt_bias as-is.
++		 * To do this, only adjust pagecnt_bias for fragments up to
++		 * the total remaining after the XDP program has run.
++		 */
++		if (verdict != ICE_XDP_CONSUMED)
+ 			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+-			*xdp_xmit |= verdict;
+-		} else if (verdict & ICE_XDP_CONSUMED) {
++		else if (i++ <= xdp_frags)
+ 			buf->pagecnt_bias++;
+-		} else if (verdict == ICE_XDP_PASS) {
+-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+-		}
+ 
+ 		ice_put_rx_buf(rx_ring, buf);
+-
+-		if (++idx == cnt)
+-			idx = 0;
+-	}
+-	/* handle buffers that represented frags released by XDP prog;
+-	 * for these we keep pagecnt_bias as-is; refcount from struct page
+-	 * has been decremented within XDP prog and we do not have to increase
+-	 * the biased refcnt
+-	 */
+-	for (; i < nr_frags; i++) {
+-		buf = &rx_ring->rx_buf[idx];
+-		ice_put_rx_buf(rx_ring, buf);
+-		if (++idx == cnt)
+-			idx = 0;
+ 	}
+ 
+ 	xdp->data = NULL;
+ 	rx_ring->first_desc = ntc;
+-	rx_ring->nr_frags = 0;
+ }
+ 
+ /**
+@@ -1260,6 +1245,10 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 		/* retrieve a buffer from the ring */
+ 		rx_buf = ice_get_rx_buf(rx_ring, size, ntc);
+ 
++		/* Increment ntc before calls to ice_put_rx_mbuf() */
++		if (++ntc == cnt)
++			ntc = 0;
++
+ 		if (!xdp->data) {
+ 			void *hard_start;
+ 
+@@ -1268,24 +1257,23 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
+ 			xdp_buff_clear_frags_flag(xdp);
+ 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
+-			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
++			ice_put_rx_mbuf(rx_ring, xdp, ntc, ICE_XDP_CONSUMED);
+ 			break;
+ 		}
+-		if (++ntc == cnt)
+-			ntc = 0;
+ 
+ 		/* skip if it is NOP desc */
+ 		if (ice_is_non_eop(rx_ring, rx_desc))
+ 			continue;
+ 
+-		ice_get_pgcnts(rx_ring);
++		ice_get_pgcnts(rx_ring, ntc);
+ 		xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
+ 		if (xdp_verdict == ICE_XDP_PASS)
+ 			goto construct_skb;
+ 		total_rx_bytes += xdp_get_buff_len(xdp);
+ 		total_rx_pkts++;
+ 
+-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
++		ice_put_rx_mbuf(rx_ring, xdp, ntc, xdp_verdict);
++		xdp_xmit |= xdp_verdict & (ICE_XDP_TX | ICE_XDP_REDIR);
+ 
+ 		continue;
+ construct_skb:
+@@ -1298,7 +1286,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ 			rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
+ 			xdp_verdict = ICE_XDP_CONSUMED;
+ 		}
+-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
++		ice_put_rx_mbuf(rx_ring, xdp, ntc, xdp_verdict);
+ 
+ 		if (!skb)
+ 			break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index a4b1e95146327d..07155e615f75ab 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -358,7 +358,6 @@ struct ice_rx_ring {
+ 	struct ice_tx_ring *xdp_ring;
+ 	struct ice_rx_ring *next;	/* pointer to next ring in q_vector */
+ 	struct xsk_buff_pool *xsk_pool;
+-	u32 nr_frags;
+ 	u16 max_frame;
+ 	u16 rx_buf_len;
+ 	dma_addr_t dma;			/* physical address of ring */
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 859a15e4ccbab5..1bbe7f72757c06 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -343,6 +343,7 @@ struct igc_adapter {
+ 	/* LEDs */
+ 	struct mutex led_mutex;
+ 	struct igc_led_classdev *leds;
++	bool leds_available;
+ };
+ 
+ void igc_up(struct igc_adapter *adapter);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 1b4465d6b2b726..5b8f9b51214897 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -7301,8 +7301,14 @@ static int igc_probe(struct pci_dev *pdev,
+ 
+ 	if (IS_ENABLED(CONFIG_IGC_LEDS)) {
+ 		err = igc_led_setup(adapter);
+-		if (err)
+-			goto err_register;
++		if (err) {
++			netdev_warn_once(netdev,
++					 "LED init failed (%d); continuing without LED support\n",
++					 err);
++			adapter->leds_available = false;
++		} else {
++			adapter->leds_available = true;
++		}
+ 	}
+ 
+ 	return 0;
+@@ -7358,7 +7364,7 @@ static void igc_remove(struct pci_dev *pdev)
+ 	cancel_work_sync(&adapter->watchdog_task);
+ 	hrtimer_cancel(&adapter->hrtimer);
+ 
+-	if (IS_ENABLED(CONFIG_IGC_LEDS))
++	if (IS_ENABLED(CONFIG_IGC_LEDS) && adapter->leds_available)
+ 		igc_led_free(adapter);
+ 
+ 	/* Release control of h/w to f/w.  If f/w is AMT enabled, this
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index cba860f0e1f154..d5c421451f3191 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -6801,6 +6801,13 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
+ 		break;
+ 	}
+ 
++	/* Make sure the SWFW semaphore is in a valid state */
++	if (hw->mac.ops.init_swfw_sync)
++		hw->mac.ops.init_swfw_sync(hw);
++
++	if (hw->mac.type == ixgbe_mac_e610)
++		mutex_init(&hw->aci.lock);
++
+ #ifdef IXGBE_FCOE
+ 	/* FCoE support exists, always init the FCoE lock */
+ 	spin_lock_init(&adapter->fcoe.lock);
+@@ -11474,10 +11481,6 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (err)
+ 		goto err_sw_init;
+ 
+-	/* Make sure the SWFW semaphore is in a valid state */
+-	if (hw->mac.ops.init_swfw_sync)
+-		hw->mac.ops.init_swfw_sync(hw);
+-
+ 	if (ixgbe_check_fw_error(adapter))
+ 		return ixgbe_recovery_probe(adapter);
+ 
+@@ -11681,8 +11684,6 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	ether_addr_copy(hw->mac.addr, hw->mac.perm_addr);
+ 	ixgbe_mac_set_default_filter(adapter);
+ 
+-	if (hw->mac.type == ixgbe_mac_e610)
+-		mutex_init(&hw->aci.lock);
+ 	timer_setup(&adapter->service_timer, ixgbe_service_timer, 0);
+ 
+ 	if (ixgbe_removed(hw->hw_addr)) {
+@@ -11838,9 +11839,9 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	devl_unlock(adapter->devlink);
+ 	ixgbe_release_hw_control(adapter);
+ 	ixgbe_clear_interrupt_scheme(adapter);
++err_sw_init:
+ 	if (hw->mac.type == ixgbe_mac_e610)
+ 		mutex_destroy(&adapter->hw.aci.lock);
+-err_sw_init:
+ 	ixgbe_disable_sriov(adapter);
+ 	adapter->flags2 &= ~IXGBE_FLAG2_SEARCH_FOR_SFP;
+ 	iounmap(adapter->io_addr);
+@@ -11891,10 +11892,8 @@ static void ixgbe_remove(struct pci_dev *pdev)
+ 	set_bit(__IXGBE_REMOVING, &adapter->state);
+ 	cancel_work_sync(&adapter->service_task);
+ 
+-	if (adapter->hw.mac.type == ixgbe_mac_e610) {
++	if (adapter->hw.mac.type == ixgbe_mac_e610)
+ 		ixgbe_disable_link_status_events(adapter);
+-		mutex_destroy(&adapter->hw.aci.lock);
+-	}
+ 
+ 	if (adapter->mii_bus)
+ 		mdiobus_unregister(adapter->mii_bus);
+@@ -11954,6 +11953,9 @@ static void ixgbe_remove(struct pci_dev *pdev)
+ 	disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
+ 	free_netdev(netdev);
+ 
++	if (adapter->hw.mac.type == ixgbe_mac_e610)
++		mutex_destroy(&adapter->hw.aci.lock);
++
+ 	if (disable_dev)
+ 		pci_disable_device(pdev);
+ }
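
The ixgbe hunks reorder lock lifetime so the aci.lock mutex is initialized before any code that can fail and destroyed only after the last possible user (here, after free_netdev()). A hypothetical userspace probe/remove pair sketching that shape:

#include <pthread.h>

static int open_resource(void) { return 1; }	/* stand-in for real setup */
static void close_resource(int r) { (void)r; }	/* stand-in for teardown */

struct dev {
	pthread_mutex_t lock;
	int resource;
};

static int dev_probe(struct dev *d)
{
	pthread_mutex_init(&d->lock, NULL);	/* before anything that can fail */

	d->resource = open_resource();
	if (d->resource < 0)
		goto err_resource;		/* lock is already valid here */
	return 0;

err_resource:
	pthread_mutex_destroy(&d->lock);	/* unwind in reverse order */
	return -1;
}

static void dev_remove(struct dev *d)
{
	close_resource(d->resource);		/* stop all users first */
	pthread_mutex_destroy(&d->lock);	/* destroy strictly last */
}
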
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 24499bb36c0057..bcea3fc26a8c7d 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -1124,11 +1124,24 @@ static int octep_set_features(struct net_device *dev, netdev_features_t features
+ 	return err;
+ }
+ 
++static bool octep_is_vf_valid(struct octep_device *oct, int vf)
++{
++	if (vf >= CFG_GET_ACTIVE_VFS(oct->conf)) {
++		netdev_err(oct->netdev, "Invalid VF ID %d\n", vf);
++		return false;
++	}
++
++	return true;
++}
++
+ static int octep_get_vf_config(struct net_device *dev, int vf,
+ 			       struct ifla_vf_info *ivi)
+ {
+ 	struct octep_device *oct = netdev_priv(dev);
+ 
++	if (!octep_is_vf_valid(oct, vf))
++		return -EINVAL;
++
+ 	ivi->vf = vf;
+ 	ether_addr_copy(ivi->mac, oct->vf_info[vf].mac_addr);
+ 	ivi->spoofchk = true;
+@@ -1143,6 +1156,9 @@ static int octep_set_vf_mac(struct net_device *dev, int vf, u8 *mac)
+ 	struct octep_device *oct = netdev_priv(dev);
+ 	int err;
+ 
++	if (!octep_is_vf_valid(oct, vf))
++		return -EINVAL;
++
+ 	if (!is_valid_ether_addr(mac)) {
+ 		dev_err(&oct->pdev->dev, "Invalid  MAC Address %pM\n", mac);
+ 		return -EADDRNOTAVAIL;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c b/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c
+index ebecdd29f3bd05..0867fab61b1905 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_pfvf_mbox.c
+@@ -196,6 +196,7 @@ static void octep_pfvf_get_mac_addr(struct octep_device *oct,  u32 vf_id,
+ 			vf_id);
+ 		return;
+ 	}
++	ether_addr_copy(oct->vf_info[vf_id].mac_addr, rsp->s_set_mac.mac_addr);
+ 	rsp->s_set_mac.type = OCTEP_PFVF_MBOX_TYPE_RSP_ACK;
+ }
+ 
+@@ -205,6 +206,8 @@ static void octep_pfvf_dev_remove(struct octep_device *oct,  u32 vf_id,
+ {
+ 	int err;
+ 
++	/* Reset VF-specific information maintained by the PF */
++	memset(&oct->vf_info[vf_id], 0, sizeof(struct octep_pfvf_info));
+ 	err = octep_ctrl_net_dev_remove(oct, vf_id);
+ 	if (err) {
+ 		rsp->s.type = OCTEP_PFVF_MBOX_TYPE_RSP_NACK;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
+index 63130ba37e9df1..69b435ed8fbbe9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
+@@ -491,7 +491,7 @@ void otx2_ptp_destroy(struct otx2_nic *pfvf)
+ 	if (!ptp)
+ 		return;
+ 
+-	cancel_delayed_work(&pfvf->ptp->synctstamp_work);
++	cancel_delayed_work_sync(&pfvf->ptp->synctstamp_work);
+ 
+ 	ptp_clock_unregister(ptp->ptp_clock);
+ 	kfree(ptp);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+index 9560fcba643f50..ac65e319148029 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+@@ -92,6 +92,7 @@ enum {
+ 	MLX5E_ACCEL_FS_ESP_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
+ 	MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL,
+ 	MLX5E_ACCEL_FS_POL_FT_LEVEL,
++	MLX5E_ACCEL_FS_POL_MISS_FT_LEVEL,
+ 	MLX5E_ACCEL_FS_ESP_FT_ROCE_LEVEL,
+ #endif
+ };
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+index ffcd0cdeb77544..23703f28386ad9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+@@ -185,6 +185,7 @@ struct mlx5e_ipsec_rx_create_attr {
+ 	u32 family;
+ 	int prio;
+ 	int pol_level;
++	int pol_miss_level;
+ 	int sa_level;
+ 	int status_level;
+ 	enum mlx5_flow_namespace_type chains_ns;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+index 98b6a3a623f995..65dc3529283b69 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+@@ -747,6 +747,7 @@ static void ipsec_rx_create_attr_set(struct mlx5e_ipsec *ipsec,
+ 	attr->family = family;
+ 	attr->prio = MLX5E_NIC_PRIO;
+ 	attr->pol_level = MLX5E_ACCEL_FS_POL_FT_LEVEL;
++	attr->pol_miss_level = MLX5E_ACCEL_FS_POL_MISS_FT_LEVEL;
+ 	attr->sa_level = MLX5E_ACCEL_FS_ESP_FT_LEVEL;
+ 	attr->status_level = MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL;
+ 	attr->chains_ns = MLX5_FLOW_NAMESPACE_KERNEL;
+@@ -833,7 +834,7 @@ static int ipsec_rx_chains_create_miss(struct mlx5e_ipsec *ipsec,
+ 
+ 	ft_attr.max_fte = 1;
+ 	ft_attr.autogroup.max_num_groups = 1;
+-	ft_attr.level = attr->pol_level;
++	ft_attr.level = attr->pol_miss_level;
+ 	ft_attr.prio = attr->prio;
+ 
+ 	ft = mlx5_create_auto_grouped_flow_table(attr->ns, &ft_attr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index e39c51cfc8e6c2..f0142d32b648f5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -136,8 +136,6 @@ void mlx5e_update_carrier(struct mlx5e_priv *priv)
+ 	if (up) {
+ 		netdev_info(priv->netdev, "Link up\n");
+ 		netif_carrier_on(priv->netdev);
+-		mlx5e_port_manual_buffer_config(priv, 0, priv->netdev->mtu,
+-						NULL, NULL, NULL);
+ 	} else {
+ 		netdev_info(priv->netdev, "Link down\n");
+ 		netif_carrier_off(priv->netdev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 63a7a788fb0db5..cd0242eb008c29 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1506,12 +1506,21 @@ static const struct mlx5e_profile mlx5e_uplink_rep_profile = {
+ static int
+ mlx5e_vport_uplink_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
+ {
+-	struct mlx5e_priv *priv = netdev_priv(mlx5_uplink_netdev_get(dev));
+ 	struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep);
++	struct net_device *netdev;
++	struct mlx5e_priv *priv;
++	int err;
++
++	netdev = mlx5_uplink_netdev_get(dev);
++	if (!netdev)
++		return 0;
+ 
++	priv = netdev_priv(netdev);
+ 	rpriv->netdev = priv->netdev;
+-	return mlx5e_netdev_change_profile(priv, &mlx5e_uplink_rep_profile,
+-					   rpriv);
++	err = mlx5e_netdev_change_profile(priv, &mlx5e_uplink_rep_profile,
++					  rpriv);
++	mlx5_uplink_netdev_put(dev, netdev);
++	return err;
+ }
+ 
+ static void
+@@ -1638,8 +1647,16 @@ mlx5e_vport_rep_unload(struct mlx5_eswitch_rep *rep)
+ {
+ 	struct mlx5e_rep_priv *rpriv = mlx5e_rep_to_rep_priv(rep);
+ 	struct net_device *netdev = rpriv->netdev;
+-	struct mlx5e_priv *priv = netdev_priv(netdev);
+-	void *ppriv = priv->ppriv;
++	struct mlx5e_priv *priv;
++	void *ppriv;
++
++	if (!netdev) {
++		ppriv = rpriv;
++		goto free_ppriv;
++	}
++
++	priv = netdev_priv(netdev);
++	ppriv = priv->ppriv;
+ 
+ 	if (rep->vport == MLX5_VPORT_UPLINK) {
+ 		mlx5e_vport_uplink_rep_unload(rpriv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+index ad9f6fca9b6a20..c6476e943e98d4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+@@ -743,6 +743,7 @@ static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev)
+ 		speed = lksettings.base.speed;
+ 
+ out:
++	mlx5_uplink_netdev_put(mdev, slave);
+ 	return speed;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 29ce09af59aef0..3b57ef6b3de383 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -114,9 +114,9 @@
+ #define ETHTOOL_NUM_PRIOS 11
+ #define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS)
+ /* Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy,
+- * {IPsec RoCE MPV,Alias table},IPsec RoCE policy
++ * IPsec policy miss, {IPsec RoCE MPV,Alias table},IPsec RoCE policy
+  */
+-#define KERNEL_NIC_PRIO_NUM_LEVELS 10
++#define KERNEL_NIC_PRIO_NUM_LEVELS 11
+ #define KERNEL_NIC_NUM_PRIOS 1
+ /* One more level for tc, and one more for promisc */
+ #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 2)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+index 37d5f445598c7b..a7486e6d0d5eff 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+@@ -52,7 +52,20 @@ static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
+ 
+ static inline struct net_device *mlx5_uplink_netdev_get(struct mlx5_core_dev *mdev)
+ {
+-	return mdev->mlx5e_res.uplink_netdev;
++	struct mlx5e_resources *mlx5e_res = &mdev->mlx5e_res;
++	struct net_device *netdev;
++
++	mutex_lock(&mlx5e_res->uplink_netdev_lock);
++	netdev = mlx5e_res->uplink_netdev;
++	netdev_hold(netdev, &mlx5e_res->tracker, GFP_KERNEL);
++	mutex_unlock(&mlx5e_res->uplink_netdev_lock);
++	return netdev;
++}
++
++static inline void mlx5_uplink_netdev_put(struct mlx5_core_dev *mdev,
++					  struct net_device *netdev)
++{
++	netdev_put(netdev, &mdev->mlx5e_res.tracker);
+ }
+ 
+ struct mlx5_sd;
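
mlx5_uplink_netdev_get() now pins the netdev with a tracked reference while holding uplink_netdev_lock, and every caller pairs it with the new mlx5_uplink_netdev_put() (see the esw/qos and en_rep hunks above); the en_rep hunk also grows a NULL check because the getter can legitimately return no netdev. A stripped-down sketch of that get/put discipline, with stand-in types:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct object {
	_Atomic int refcnt;
};

struct owner {
	pthread_mutex_t lock;
	struct object *obj;	/* may be cleared/swapped concurrently */
};

/* Getter: pin the object with a reference before dropping the lock. */
static struct object *owner_obj_get(struct owner *o)
{
	struct object *obj;

	pthread_mutex_lock(&o->lock);
	obj = o->obj;
	if (obj)
		atomic_fetch_add(&obj->refcnt, 1);
	pthread_mutex_unlock(&o->lock);
	return obj;		/* caller must call owner_obj_put() */
}

/* Put: drop the reference; free on the last one. */
static void owner_obj_put(struct object *obj)
{
	if (obj && atomic_fetch_sub(&obj->refcnt, 1) == 1)
		free(obj);
}
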
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index 2d7adf7444ba29..aa9f2b0a77d36f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -1170,7 +1170,11 @@ const struct mlx5_link_info *mlx5_port_ptys2info(struct mlx5_core_dev *mdev,
+ 	mlx5e_port_get_link_mode_info_arr(mdev, &table, &max_size,
+ 					  force_legacy);
+ 	i = find_first_bit(&temp, max_size);
+-	if (i < max_size)
++
++	/* mlx5e_link_info has holes. Treat a zero
++	 * speed as an indication of one.
++	 */
++	if (i < max_size && table[i].speed)
+ 		return &table[i];
+ 
+ 	return NULL;
+diff --git a/drivers/net/ethernet/natsemi/ns83820.c b/drivers/net/ethernet/natsemi/ns83820.c
+index 56d5464222d97a..cdbf82affa7bea 100644
+--- a/drivers/net/ethernet/natsemi/ns83820.c
++++ b/drivers/net/ethernet/natsemi/ns83820.c
+@@ -820,7 +820,7 @@ static void rx_irq(struct net_device *ndev)
+ 	struct ns83820 *dev = PRIV(ndev);
+ 	struct rx_info *info = &dev->rx_info;
+ 	unsigned next_rx;
+-	int rx_rc, len;
++	int len;
+ 	u32 cmdsts;
+ 	__le32 *desc;
+ 	unsigned long flags;
+@@ -881,8 +881,10 @@ static void rx_irq(struct net_device *ndev)
+ 		if (likely(CMDSTS_OK & cmdsts)) {
+ #endif
+ 			skb_put(skb, len);
+-			if (unlikely(!skb))
++			if (unlikely(!skb)) {
++				ndev->stats.rx_dropped++;
+ 				goto netdev_mangle_me_harder_failed;
++			}
+ 			if (cmdsts & CMDSTS_DEST_MULTI)
+ 				ndev->stats.multicast++;
+ 			ndev->stats.rx_packets++;
+@@ -901,15 +903,12 @@ static void rx_irq(struct net_device *ndev)
+ 				__vlan_hwaccel_put_tag(skb, htons(ETH_P_IPV6), tag);
+ 			}
+ #endif
+-			rx_rc = netif_rx(skb);
+-			if (NET_RX_DROP == rx_rc) {
+-netdev_mangle_me_harder_failed:
+-				ndev->stats.rx_dropped++;
+-			}
++			netif_rx(skb);
+ 		} else {
+ 			dev_kfree_skb_irq(skb);
+ 		}
+ 
++netdev_mangle_me_harder_failed:
+ 		nr++;
+ 		next_rx = info->next_rx;
+ 		desc = info->descs + (DESC_SIZE * next_rx);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index 9c3d3dd2f84753..1f0cea3cae92f5 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -4462,10 +4462,11 @@ static enum dbg_status qed_protection_override_dump(struct qed_hwfn *p_hwfn,
+ 		goto out;
+ 	}
+ 
+-	/* Add override window info to buffer */
++	/* Add override window info to buffer, preventing buffer overflow */
+ 	override_window_dwords =
+-		qed_rd(p_hwfn, p_ptt, GRC_REG_NUMBER_VALID_OVERRIDE_WINDOW) *
+-		PROTECTION_OVERRIDE_ELEMENT_DWORDS;
++		min(qed_rd(p_hwfn, p_ptt, GRC_REG_NUMBER_VALID_OVERRIDE_WINDOW) *
++		PROTECTION_OVERRIDE_ELEMENT_DWORDS,
++		PROTECTION_OVERRIDE_DEPTH_DWORDS);
+ 	if (override_window_dwords) {
+ 		addr = BYTES_TO_DWORDS(GRC_REG_PROTECTION_OVERRIDE_WINDOW);
+ 		offset += qed_grc_dump_addr_range(p_hwfn,
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 8e6ce16ab5b88f..c9e2dca3083123 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -1731,7 +1731,7 @@ EXPORT_SYMBOL_GPL(mt76_wcid_cleanup);
+ 
+ void mt76_wcid_add_poll(struct mt76_dev *dev, struct mt76_wcid *wcid)
+ {
+-	if (test_bit(MT76_MCU_RESET, &dev->phy.state))
++	if (test_bit(MT76_MCU_RESET, &dev->phy.state) || !wcid->sta)
+ 		return;
+ 
+ 	spin_lock_bh(&dev->sta_poll_lock);
+diff --git a/drivers/net/wireless/microchip/wilc1000/wlan_cfg.c b/drivers/net/wireless/microchip/wilc1000/wlan_cfg.c
+index 131388886acbfa..cfabd5aebb5400 100644
+--- a/drivers/net/wireless/microchip/wilc1000/wlan_cfg.c
++++ b/drivers/net/wireless/microchip/wilc1000/wlan_cfg.c
+@@ -41,10 +41,10 @@ static const struct wilc_cfg_word g_cfg_word[] = {
+ };
+ 
+ static const struct wilc_cfg_str g_cfg_str[] = {
+-	{WID_FIRMWARE_VERSION, NULL},
+-	{WID_MAC_ADDR, NULL},
+-	{WID_ASSOC_RES_INFO, NULL},
+-	{WID_NIL, NULL}
++	{WID_FIRMWARE_VERSION, 0, NULL},
++	{WID_MAC_ADDR, 0, NULL},
++	{WID_ASSOC_RES_INFO, 0, NULL},
++	{WID_NIL, 0, NULL}
+ };
+ 
+ #define WILC_RESP_MSG_TYPE_CONFIG_REPLY		'R'
+@@ -147,44 +147,58 @@ static void wilc_wlan_parse_response_frame(struct wilc *wl, u8 *info, int size)
+ 
+ 		switch (FIELD_GET(WILC_WID_TYPE, wid)) {
+ 		case WID_CHAR:
++			len = 3;
++			if (len + 2 > size)
++				return;
++
+ 			while (cfg->b[i].id != WID_NIL && cfg->b[i].id != wid)
+ 				i++;
+ 
+ 			if (cfg->b[i].id == wid)
+ 				cfg->b[i].val = info[4];
+ 
+-			len = 3;
+ 			break;
+ 
+ 		case WID_SHORT:
++			len = 4;
++			if (len + 2 > size)
++				return;
++
+ 			while (cfg->hw[i].id != WID_NIL && cfg->hw[i].id != wid)
+ 				i++;
+ 
+ 			if (cfg->hw[i].id == wid)
+ 				cfg->hw[i].val = get_unaligned_le16(&info[4]);
+ 
+-			len = 4;
+ 			break;
+ 
+ 		case WID_INT:
++			len = 6;
++			if (len + 2 > size)
++				return;
++
+ 			while (cfg->w[i].id != WID_NIL && cfg->w[i].id != wid)
+ 				i++;
+ 
+ 			if (cfg->w[i].id == wid)
+ 				cfg->w[i].val = get_unaligned_le32(&info[4]);
+ 
+-			len = 6;
+ 			break;
+ 
+ 		case WID_STR:
++			len = 2 + get_unaligned_le16(&info[2]);
++
+ 			while (cfg->s[i].id != WID_NIL && cfg->s[i].id != wid)
+ 				i++;
+ 
+-			if (cfg->s[i].id == wid)
++			if (cfg->s[i].id == wid) {
++				if (len > cfg->s[i].len || (len + 2 > size))
++					return;
++
+ 				memcpy(cfg->s[i].str, &info[2],
+-				       get_unaligned_le16(&info[2]) + 2);
++				       len);
++			}
+ 
+-			len = 2 + get_unaligned_le16(&info[2]);
+ 			break;
+ 
+ 		default:
+@@ -384,12 +398,15 @@ int wilc_wlan_cfg_init(struct wilc *wl)
+ 	/* store the string cfg parameters */
+ 	wl->cfg.s[i].id = WID_FIRMWARE_VERSION;
+ 	wl->cfg.s[i].str = str_vals->firmware_version;
++	wl->cfg.s[i].len = sizeof(str_vals->firmware_version);
+ 	i++;
+ 	wl->cfg.s[i].id = WID_MAC_ADDR;
+ 	wl->cfg.s[i].str = str_vals->mac_address;
++	wl->cfg.s[i].len = sizeof(str_vals->mac_address);
+ 	i++;
+ 	wl->cfg.s[i].id = WID_ASSOC_RES_INFO;
+ 	wl->cfg.s[i].str = str_vals->assoc_rsp;
++	wl->cfg.s[i].len = sizeof(str_vals->assoc_rsp);
+ 	i++;
+ 	wl->cfg.s[i].id = WID_NIL;
+ 	wl->cfg.s[i].str = NULL;
+diff --git a/drivers/net/wireless/microchip/wilc1000/wlan_cfg.h b/drivers/net/wireless/microchip/wilc1000/wlan_cfg.h
+index 7038b74f8e8ff6..5ae74bced7d748 100644
+--- a/drivers/net/wireless/microchip/wilc1000/wlan_cfg.h
++++ b/drivers/net/wireless/microchip/wilc1000/wlan_cfg.h
+@@ -24,12 +24,13 @@ struct wilc_cfg_word {
+ 
+ struct wilc_cfg_str {
+ 	u16 id;
++	u16 len;
+ 	u8 *str;
+ };
+ 
+ struct wilc_cfg_str_vals {
+-	u8 mac_address[7];
+-	u8 firmware_version[129];
++	u8 mac_address[8];
++	u8 firmware_version[130];
+ 	u8 assoc_rsp[WILC_MAX_ASSOC_RESP_FRAME_SIZE];
+ };
+ 
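The parser changes above follow one rule: never trust a length taken from the wire — check it against both the bytes actually received and the capacity of the destination before copying. A self-contained sketch of the same shape, with the record layout simplified to id/len/payload:

#include <stdint.h>
#include <string.h>

struct dest {
	uint8_t buf[128];
};

/* Parse id(2) | len(2) | payload(len) records; 0 on success, -1 on overrun. */
static int parse_records(const uint8_t *info, size_t size, struct dest *d)
{
	size_t off = 0;

	while (off + 4 <= size) {
		size_t len = info[off + 2] | (size_t)info[off + 3] << 8;

		if (off + 4 + len > size)	/* record runs past the input */
			return -1;
		if (len > sizeof(d->buf))	/* record larger than target */
			return -1;

		memcpy(d->buf, &info[off + 4], len);
		off += 4 + len;
	}
	return 0;
}
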
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 895fb163d48e6a..5395623d2ba6aa 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -903,6 +903,15 @@ static void nvme_set_ref_tag(struct nvme_ns *ns, struct nvme_command *cmnd,
+ 	u32 upper, lower;
+ 	u64 ref48;
+ 
++	/* only type1 and type 2 PI formats have a reftag */
++	switch (ns->head->pi_type) {
++	case NVME_NS_DPS_PI_TYPE1:
++	case NVME_NS_DPS_PI_TYPE2:
++		break;
++	default:
++		return;
++	}
++
+ 	/* both rw and write zeroes share the same reftag format */
+ 	switch (ns->head->guard_type) {
+ 	case NVME_NVM_NS_16B_GUARD:
+@@ -942,13 +951,7 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
+ 
+ 	if (nvme_ns_has_pi(ns->head)) {
+ 		cmnd->write_zeroes.control |= cpu_to_le16(NVME_RW_PRINFO_PRACT);
+-
+-		switch (ns->head->pi_type) {
+-		case NVME_NS_DPS_PI_TYPE1:
+-		case NVME_NS_DPS_PI_TYPE2:
+-			nvme_set_ref_tag(ns, cmnd, req);
+-			break;
+-		}
++		nvme_set_ref_tag(ns, cmnd, req);
+ 	}
+ 
+ 	return BLK_STS_OK;
+@@ -1039,6 +1042,7 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
+ 			if (WARN_ON_ONCE(!nvme_ns_has_pi(ns->head)))
+ 				return BLK_STS_NOTSUPP;
+ 			control |= NVME_RW_PRINFO_PRACT;
++			nvme_set_ref_tag(ns, cmnd, req);
+ 		}
+ 
+ 		if (bio_integrity_flagged(req->bio, BIP_CHECK_GUARD))
+diff --git a/drivers/pcmcia/omap_cf.c b/drivers/pcmcia/omap_cf.c
+index 441cdf83f5a449..d6f24c7d156227 100644
+--- a/drivers/pcmcia/omap_cf.c
++++ b/drivers/pcmcia/omap_cf.c
+@@ -304,7 +304,13 @@ static void __exit omap_cf_remove(struct platform_device *pdev)
+ 	kfree(cf);
+ }
+ 
+-static struct platform_driver omap_cf_driver = {
++/*
++ * omap_cf_remove() lives in .exit.text. For drivers registered via
++ * platform_driver_probe() this is ok because they cannot get unbound at
++ * runtime. So mark the driver struct with __refdata to prevent modpost
++ * triggering a section mismatch warning.
++ */
++static struct platform_driver omap_cf_driver __refdata = {
+ 	.driver = {
+ 		.name	= driver_name,
+ 	},
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index e6726be5890e7f..ce177c37994140 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -147,7 +147,12 @@ static struct quirk_entry quirk_asus_ignore_fan = {
+ };
+ 
+ static struct quirk_entry quirk_asus_zenbook_duo_kbd = {
+-	.ignore_key_wlan = true,
++	.key_wlan_event = ASUS_WMI_KEY_IGNORE,
++};
++
++static struct quirk_entry quirk_asus_z13 = {
++	.key_wlan_event = ASUS_WMI_KEY_ARMOURY,
++	.tablet_switch_mode = asus_wmi_kbd_dock_devid,
+ };
+ 
+ static int dmi_matched(const struct dmi_system_id *dmi)
+@@ -539,6 +544,15 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_zenbook_duo_kbd,
+ 	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS ROG Z13",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "ROG Flow Z13"),
++		},
++		.driver_data = &quirk_asus_z13,
++	},
+ 	{},
+ };
+ 
+@@ -636,6 +650,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_IGNORE, 0xCF, },	/* AC mode */
+ 	{ KE_KEY, 0xFA, { KEY_PROG2 } },           /* Lid flip action */
+ 	{ KE_KEY, 0xBD, { KEY_PROG2 } },           /* Lid flip action on ROG xflow laptops */
++	{ KE_KEY, ASUS_WMI_KEY_ARMOURY, { KEY_PROG3 } },
+ 	{ KE_END, 0},
+ };
+ 
+@@ -655,9 +670,11 @@ static void asus_nb_wmi_key_filter(struct asus_wmi_driver *asus_wmi, int *code,
+ 		if (atkbd_reports_vol_keys)
+ 			*code = ASUS_WMI_KEY_IGNORE;
+ 		break;
+-	case 0x5F: /* Wireless console Disable */
+-		if (quirks->ignore_key_wlan)
+-			*code = ASUS_WMI_KEY_IGNORE;
++	case 0x5D: /* Wireless console Toggle */
++	case 0x5E: /* Wireless console Enable / Keyboard Attach, Detach */
++	case 0x5F: /* Wireless console Disable / Special Key */
++		if (quirks->key_wlan_event)
++			*code = quirks->key_wlan_event;
+ 		break;
+ 	}
+ }
+diff --git a/drivers/platform/x86/asus-wmi.h b/drivers/platform/x86/asus-wmi.h
+index 018dfde4025e79..5cd4392b964eb8 100644
+--- a/drivers/platform/x86/asus-wmi.h
++++ b/drivers/platform/x86/asus-wmi.h
+@@ -18,6 +18,7 @@
+ #include <linux/i8042.h>
+ 
+ #define ASUS_WMI_KEY_IGNORE (-1)
++#define ASUS_WMI_KEY_ARMOURY	0xffff01
+ #define ASUS_WMI_BRN_DOWN	0x2e
+ #define ASUS_WMI_BRN_UP		0x2f
+ 
+@@ -40,7 +41,7 @@ struct quirk_entry {
+ 	bool wmi_force_als_set;
+ 	bool wmi_ignore_fan;
+ 	bool filter_i8042_e1_extended_codes;
+-	bool ignore_key_wlan;
++	int key_wlan_event;
+ 	enum asus_wmi_tablet_switch_mode tablet_switch_mode;
+ 	int wapf;
+ 	/*
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 93dcebbe114175..ad2d9ecf32a5ae 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1919,8 +1919,8 @@ static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
+ 	bool has_singe_flag = di->opts & BQ27XXX_O_ZERO;
+ 
+ 	cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag);
+-	if ((cache.flags & 0xff) == 0xff)
+-		cache.flags = -1; /* read error */
++	if (di->chip == BQ27000 && (cache.flags & 0xff) == 0xff)
++		cache.flags = -ENODEV; /* bq27000 hdq read error */
+ 	if (cache.flags >= 0) {
+ 		cache.capacity = bq27xxx_battery_read_soc(di);
+ 
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 8c597fa605233a..41e4b3d4f2b581 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1843,7 +1843,6 @@ static void fill_stack_inode_item(struct btrfs_trans_handle *trans,
+ 
+ int btrfs_fill_inode(struct btrfs_inode *inode, u32 *rdev)
+ {
+-	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ 	struct btrfs_delayed_node *delayed_node;
+ 	struct btrfs_inode_item *inode_item;
+ 	struct inode *vfs_inode = &inode->vfs_inode;
+@@ -1864,8 +1863,6 @@ int btrfs_fill_inode(struct btrfs_inode *inode, u32 *rdev)
+ 	i_uid_write(vfs_inode, btrfs_stack_inode_uid(inode_item));
+ 	i_gid_write(vfs_inode, btrfs_stack_inode_gid(inode_item));
+ 	btrfs_i_size_write(inode, btrfs_stack_inode_size(inode_item));
+-	btrfs_inode_set_file_extent_range(inode, 0,
+-			round_up(i_size_read(vfs_inode), fs_info->sectorsize));
+ 	vfs_inode->i_mode = btrfs_stack_inode_mode(inode_item);
+ 	set_nlink(vfs_inode, btrfs_stack_inode_nlink(inode_item));
+ 	inode_set_bytes(vfs_inode, btrfs_stack_inode_nbytes(inode_item));
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index e266a229484852..eb73025d4e4abb 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3881,10 +3881,6 @@ static int btrfs_read_locked_inode(struct btrfs_inode *inode, struct btrfs_path
+ 	bool filled = false;
+ 	int first_xattr_slot;
+ 
+-	ret = btrfs_init_file_extent_tree(inode);
+-	if (ret)
+-		goto out;
+-
+ 	ret = btrfs_fill_inode(inode, &rdev);
+ 	if (!ret)
+ 		filled = true;
+@@ -3916,8 +3912,6 @@ static int btrfs_read_locked_inode(struct btrfs_inode *inode, struct btrfs_path
+ 	i_uid_write(vfs_inode, btrfs_inode_uid(leaf, inode_item));
+ 	i_gid_write(vfs_inode, btrfs_inode_gid(leaf, inode_item));
+ 	btrfs_i_size_write(inode, btrfs_inode_size(leaf, inode_item));
+-	btrfs_inode_set_file_extent_range(inode, 0,
+-			round_up(i_size_read(vfs_inode), fs_info->sectorsize));
+ 
+ 	inode_set_atime(vfs_inode, btrfs_timespec_sec(leaf, &inode_item->atime),
+ 			btrfs_timespec_nsec(leaf, &inode_item->atime));
+@@ -3948,6 +3942,11 @@ static int btrfs_read_locked_inode(struct btrfs_inode *inode, struct btrfs_path
+ 	btrfs_update_inode_mapping_flags(inode);
+ 
+ cache_index:
++	ret = btrfs_init_file_extent_tree(inode);
++	if (ret)
++		goto out;
++	btrfs_inode_set_file_extent_range(inode, 0,
++			round_up(i_size_read(vfs_inode), fs_info->sectorsize));
+ 	/*
+ 	 * If we were modified in the current generation and evicted from memory
+ 	 * and then re-read we need to do a full sync since we don't have any
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 8f4703b488b71d..b59d01b976ff1d 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1756,10 +1756,10 @@ static int check_inode_ref(struct extent_buffer *leaf,
+ 	while (ptr < end) {
+ 		u16 namelen;
+ 
+-		if (unlikely(ptr + sizeof(iref) > end)) {
++		if (unlikely(ptr + sizeof(*iref) > end)) {
+ 			inode_ref_err(leaf, slot,
+ 			"inode ref overflow, ptr %lu end %lu inode_ref_size %zu",
+-				ptr, end, sizeof(iref));
++				ptr, end, sizeof(*iref));
+ 			return -EUCLEAN;
+ 		}
+ 
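The one-character fix above is the classic sizeof(ptr) vs sizeof(*ptr) slip: with the pointer, the bounds check tests for 8 bytes instead of the item's real size, so undersized inode refs pass validation. Reproduced standalone (the struct layout mirrors btrfs_inode_ref's 10 packed bytes):

#include <stdio.h>

struct inode_ref {
	unsigned long long index;
	unsigned short name_len;
} __attribute__((packed));

int main(void)
{
	struct inode_ref *iref = 0;	/* never dereferenced: sizeof only */

	printf("sizeof(iref)  = %zu\n", sizeof(iref));	/* 8: pointer size */
	printf("sizeof(*iref) = %zu\n", sizeof(*iref));	/* 10: struct size */
	return 0;
}
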
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 56d30ec0f52fca..5466a93a28f58f 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1933,7 +1933,7 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
+ 
+ 	search_key.objectid = log_key.objectid;
+ 	search_key.type = BTRFS_INODE_EXTREF_KEY;
+-	search_key.offset = key->objectid;
++	search_key.offset = btrfs_extref_hash(key->objectid, name.name, name.len);
+ 	ret = backref_in_log(root->log_root, &search_key, key->objectid, &name);
+ 	if (ret < 0) {
+ 		goto out;
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index d7a1193332d941..60937127a0bc63 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -2577,9 +2577,9 @@ void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info)
+ 			spin_lock(&space_info->lock);
+ 			space_info->total_bytes -= bg->length;
+ 			space_info->disk_total -= bg->length * factor;
++			space_info->disk_total -= bg->zone_unusable;
+ 			/* There is no allocation ever happened. */
+ 			ASSERT(bg->used == 0);
+-			ASSERT(bg->zone_unusable == 0);
+ 			/* No super block in a block group on the zoned setup. */
+ 			ASSERT(bg->bytes_super == 0);
+ 			spin_unlock(&space_info->lock);
+diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c
+index 14868a3dd592ca..bc52afbfc5c739 100644
+--- a/fs/nilfs2/sysfs.c
++++ b/fs/nilfs2/sysfs.c
+@@ -1075,7 +1075,7 @@ void nilfs_sysfs_delete_device_group(struct the_nilfs *nilfs)
+  ************************************************************************/
+ 
+ static ssize_t nilfs_feature_revision_show(struct kobject *kobj,
+-					    struct attribute *attr, char *buf)
++					    struct kobj_attribute *attr, char *buf)
+ {
+ 	return sysfs_emit(buf, "%d.%d\n",
+ 			NILFS_CURRENT_REV, NILFS_MINOR_REV);
+@@ -1087,7 +1087,7 @@ static const char features_readme_str[] =
+ 	"(1) revision\n\tshow current revision of NILFS file system driver.\n";
+ 
+ static ssize_t nilfs_feature_README_show(struct kobject *kobj,
+-					 struct attribute *attr,
++					 struct kobj_attribute *attr,
+ 					 char *buf)
+ {
+ 	return sysfs_emit(buf, features_readme_str);
+diff --git a/fs/nilfs2/sysfs.h b/fs/nilfs2/sysfs.h
+index 78a87a016928b7..d370cd5cce3f5d 100644
+--- a/fs/nilfs2/sysfs.h
++++ b/fs/nilfs2/sysfs.h
+@@ -50,16 +50,16 @@ struct nilfs_sysfs_dev_subgroups {
+ 	struct completion sg_segments_kobj_unregister;
+ };
+ 
+-#define NILFS_COMMON_ATTR_STRUCT(name) \
++#define NILFS_KOBJ_ATTR_STRUCT(name) \
+ struct nilfs_##name##_attr { \
+ 	struct attribute attr; \
+-	ssize_t (*show)(struct kobject *, struct attribute *, \
++	ssize_t (*show)(struct kobject *, struct kobj_attribute *, \
+ 			char *); \
+-	ssize_t (*store)(struct kobject *, struct attribute *, \
++	ssize_t (*store)(struct kobject *, struct kobj_attribute *, \
+ 			 const char *, size_t); \
+ }
+ 
+-NILFS_COMMON_ATTR_STRUCT(feature);
++NILFS_KOBJ_ATTR_STRUCT(feature);
+ 
+ #define NILFS_DEV_ATTR_STRUCT(name) \
+ struct nilfs_##name##_attr { \
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 045227ed4efc96..0dcea9acca5442 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -297,8 +297,8 @@ extern void cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode);
+ 
+ extern void cifs_close_all_deferred_files(struct cifs_tcon *cifs_tcon);
+ 
+-extern void cifs_close_deferred_file_under_dentry(struct cifs_tcon *cifs_tcon,
+-				const char *path);
++void cifs_close_deferred_file_under_dentry(struct cifs_tcon *cifs_tcon,
++					   struct dentry *dentry);
+ 
+ extern void cifs_mark_open_handles_for_deleted_file(struct inode *inode,
+ 				const char *path);
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 11d442e8b3d622..0f0d2dae6283ad 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1984,7 +1984,7 @@ static int __cifs_unlink(struct inode *dir, struct dentry *dentry, bool sillyren
+ 	}
+ 
+ 	netfs_wait_for_outstanding_io(inode);
+-	cifs_close_deferred_file_under_dentry(tcon, full_path);
++	cifs_close_deferred_file_under_dentry(tcon, dentry);
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ 	if (cap_unix(tcon->ses) && (CIFS_UNIX_POSIX_PATH_OPS_CAP &
+ 				le64_to_cpu(tcon->fsUnixInfo.Capability))) {
+@@ -2003,8 +2003,21 @@ static int __cifs_unlink(struct inode *dir, struct dentry *dentry, bool sillyren
+ 		goto psx_del_no_retry;
+ 	}
+ 
+-	if (sillyrename || (server->vals->protocol_id > SMB10_PROT_ID &&
+-			    d_is_positive(dentry) && d_count(dentry) > 2))
++	/* For SMB2+, if the file is open, we always perform a silly rename.
++	 *
++	 * We check for d_count() right after calling
++	 * cifs_close_deferred_file_under_dentry() to make sure that the
++	 * dentry's refcount gets dropped in case the file had any deferred
++	 * close.
++	 */
++	if (!sillyrename && server->vals->protocol_id > SMB10_PROT_ID) {
++		spin_lock(&dentry->d_lock);
++		if (d_count(dentry) > 1)
++			sillyrename = true;
++		spin_unlock(&dentry->d_lock);
++	}
++
++	if (sillyrename)
+ 		rc = -EBUSY;
+ 	else
+ 		rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
+@@ -2538,10 +2551,10 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ 		goto cifs_rename_exit;
+ 	}
+ 
+-	cifs_close_deferred_file_under_dentry(tcon, from_name);
++	cifs_close_deferred_file_under_dentry(tcon, source_dentry);
+ 	if (d_inode(target_dentry) != NULL) {
+ 		netfs_wait_for_outstanding_io(d_inode(target_dentry));
+-		cifs_close_deferred_file_under_dentry(tcon, to_name);
++		cifs_close_deferred_file_under_dentry(tcon, target_dentry);
+ 	}
+ 
+ 	rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry,
+diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c
+index da23cc12a52caa..dda6dece802ad2 100644
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -832,33 +832,28 @@ cifs_close_all_deferred_files(struct cifs_tcon *tcon)
+ 		kfree(tmp_list);
+ 	}
+ }
+-void
+-cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
++
++void cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon,
++					   struct dentry *dentry)
+ {
+-	struct cifsFileInfo *cfile;
+ 	struct file_list *tmp_list, *tmp_next_list;
+-	void *page;
+-	const char *full_path;
++	struct cifsFileInfo *cfile;
+ 	LIST_HEAD(file_head);
+ 
+-	page = alloc_dentry_path();
+ 	spin_lock(&tcon->open_file_lock);
+ 	list_for_each_entry(cfile, &tcon->openFileList, tlist) {
+-		full_path = build_path_from_dentry(cfile->dentry, page);
+-		if (strstr(full_path, path)) {
+-			if (delayed_work_pending(&cfile->deferred)) {
+-				if (cancel_delayed_work(&cfile->deferred)) {
+-					spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+-					cifs_del_deferred_close(cfile);
+-					spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
+-
+-					tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+-					if (tmp_list == NULL)
+-						break;
+-					tmp_list->cfile = cfile;
+-					list_add_tail(&tmp_list->list, &file_head);
+-				}
+-			}
++		if ((cfile->dentry == dentry) &&
++		    delayed_work_pending(&cfile->deferred) &&
++		    cancel_delayed_work(&cfile->deferred)) {
++			spin_lock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
++			cifs_del_deferred_close(cfile);
++			spin_unlock(&CIFS_I(d_inode(cfile->dentry))->deferred_lock);
++
++			tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
++			if (tmp_list == NULL)
++				break;
++			tmp_list->cfile = cfile;
++			list_add_tail(&tmp_list->list, &file_head);
+ 		}
+ 	}
+ 	spin_unlock(&tcon->open_file_lock);
+@@ -868,7 +863,6 @@ cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+ 		list_del(&tmp_list->list);
+ 		kfree(tmp_list);
+ 	}
+-	free_dentry_path(page);
+ }
+ 
+ /*
+diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
+index b9bb531717a651..6dd2a1c66df3db 100644
+--- a/fs/smb/client/smbdirect.c
++++ b/fs/smb/client/smbdirect.c
+@@ -13,23 +13,23 @@
+ #include "cifsproto.h"
+ #include "smb2proto.h"
+ 
+-static struct smbd_response *get_receive_buffer(
++static struct smbdirect_recv_io *get_receive_buffer(
+ 		struct smbd_connection *info);
+ static void put_receive_buffer(
+ 		struct smbd_connection *info,
+-		struct smbd_response *response);
++		struct smbdirect_recv_io *response);
+ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf);
+ static void destroy_receive_buffers(struct smbd_connection *info);
+ 
+ static void enqueue_reassembly(
+ 		struct smbd_connection *info,
+-		struct smbd_response *response, int data_length);
+-static struct smbd_response *_get_first_reassembly(
++		struct smbdirect_recv_io *response, int data_length);
++static struct smbdirect_recv_io *_get_first_reassembly(
+ 		struct smbd_connection *info);
+ 
+ static int smbd_post_recv(
+ 		struct smbd_connection *info,
+-		struct smbd_response *response);
++		struct smbdirect_recv_io *response);
+ 
+ static int smbd_post_send_empty(struct smbd_connection *info);
+ 
+@@ -260,7 +260,7 @@ static inline void *smbd_request_payload(struct smbd_request *request)
+ 	return (void *)request->packet;
+ }
+ 
+-static inline void *smbd_response_payload(struct smbd_response *response)
++static inline void *smbdirect_recv_io_payload(struct smbdirect_recv_io *response)
+ {
+ 	return (void *)response->packet;
+ }
+@@ -315,12 +315,13 @@ static void dump_smbdirect_negotiate_resp(struct smbdirect_negotiate_resp *resp)
+  * return value: true if negotiation is a success, false if failed
+  */
+ static bool process_negotiation_response(
+-		struct smbd_response *response, int packet_length)
++		struct smbdirect_recv_io *response, int packet_length)
+ {
+-	struct smbd_connection *info = response->info;
+-	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_socket *sc = response->socket;
++	struct smbd_connection *info =
++		container_of(sc, struct smbd_connection, socket);
+ 	struct smbdirect_socket_parameters *sp = &sc->parameters;
+-	struct smbdirect_negotiate_resp *packet = smbd_response_payload(response);
++	struct smbdirect_negotiate_resp *packet = smbdirect_recv_io_payload(response);
+ 
+ 	if (packet_length < sizeof(struct smbdirect_negotiate_resp)) {
+ 		log_rdma_event(ERR,
+@@ -383,6 +384,7 @@ static bool process_negotiation_response(
+ 			info->max_frmr_depth * PAGE_SIZE);
+ 	info->max_frmr_depth = sp->max_read_write_size / PAGE_SIZE;
+ 
++	sc->recv_io.expected = SMBDIRECT_EXPECT_DATA_TRANSFER;
+ 	return true;
+ }
+ 
+@@ -390,7 +392,7 @@ static void smbd_post_send_credits(struct work_struct *work)
+ {
+ 	int ret = 0;
+ 	int rc;
+-	struct smbd_response *response;
++	struct smbdirect_recv_io *response;
+ 	struct smbd_connection *info =
+ 		container_of(work, struct smbd_connection,
+ 			post_send_credits_work);
+@@ -408,7 +410,6 @@ static void smbd_post_send_credits(struct work_struct *work)
+ 			if (!response)
+ 				break;
+ 
+-			response->type = SMBD_TRANSFER_DATA;
+ 			response->first_segment = false;
+ 			rc = smbd_post_recv(info, response);
+ 			if (rc) {
+@@ -442,13 +443,18 @@ static void smbd_post_send_credits(struct work_struct *work)
+ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ 	struct smbdirect_data_transfer *data_transfer;
+-	struct smbd_response *response =
+-		container_of(wc->wr_cqe, struct smbd_response, cqe);
+-	struct smbd_connection *info = response->info;
+-	int data_length = 0;
++	struct smbdirect_recv_io *response =
++		container_of(wc->wr_cqe, struct smbdirect_recv_io, cqe);
++	struct smbdirect_socket *sc = response->socket;
++	struct smbdirect_socket_parameters *sp = &sc->parameters;
++	struct smbd_connection *info =
++		container_of(sc, struct smbd_connection, socket);
++	u32 data_offset = 0;
++	u32 data_length = 0;
++	u32 remaining_data_length = 0;
+ 
+ 	log_rdma_recv(INFO, "response=0x%p type=%d wc status=%d wc opcode %d byte_len=%d pkey_index=%u\n",
+-		      response, response->type, wc->status, wc->opcode,
++		      response, sc->recv_io.expected, wc->status, wc->opcode,
+ 		      wc->byte_len, wc->pkey_index);
+ 
+ 	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_RECV) {
+@@ -463,10 +469,10 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		response->sge.length,
+ 		DMA_FROM_DEVICE);
+ 
+-	switch (response->type) {
++	switch (sc->recv_io.expected) {
+ 	/* SMBD negotiation response */
+-	case SMBD_NEGOTIATE_RESP:
+-		dump_smbdirect_negotiate_resp(smbd_response_payload(response));
++	case SMBDIRECT_EXPECT_NEGOTIATE_REP:
++		dump_smbdirect_negotiate_resp(smbdirect_recv_io_payload(response));
+ 		info->full_packet_received = true;
+ 		info->negotiate_done =
+ 			process_negotiation_response(response, wc->byte_len);
+@@ -475,9 +481,24 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 		return;
+ 
+ 	/* SMBD data transfer packet */
+-	case SMBD_TRANSFER_DATA:
+-		data_transfer = smbd_response_payload(response);
++	case SMBDIRECT_EXPECT_DATA_TRANSFER:
++		data_transfer = smbdirect_recv_io_payload(response);
++
++		if (wc->byte_len <
++		    offsetof(struct smbdirect_data_transfer, padding))
++			goto error;
++
++		remaining_data_length = le32_to_cpu(data_transfer->remaining_data_length);
++		data_offset = le32_to_cpu(data_transfer->data_offset);
+ 		data_length = le32_to_cpu(data_transfer->data_length);
++		if (wc->byte_len < data_offset ||
++		    (u64)wc->byte_len < (u64)data_offset + data_length)
++			goto error;
++
++		if (remaining_data_length > sp->max_fragmented_recv_size ||
++		    data_length > sp->max_fragmented_recv_size ||
++		    (u64)remaining_data_length + (u64)data_length > (u64)sp->max_fragmented_recv_size)
++			goto error;
+ 
+ 		if (data_length) {
+ 			if (info->full_packet_received)
+@@ -526,13 +547,17 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			put_receive_buffer(info, response);
+ 
+ 		return;
++
++	case SMBDIRECT_EXPECT_NEGOTIATE_REQ:
++		/* Only server... */
++		break;
+ 	}
+ 
+ 	/*
+ 	 * This is an internal error!
+ 	 */
+-	log_rdma_recv(ERR, "unexpected response type=%d\n", response->type);
+-	WARN_ON_ONCE(response->type != SMBD_TRANSFER_DATA);
++	log_rdma_recv(ERR, "unexpected response type=%d\n", sc->recv_io.expected);
++	WARN_ON_ONCE(sc->recv_io.expected != SMBDIRECT_EXPECT_DATA_TRANSFER);
+ error:
+ 	put_receive_buffer(info, response);
+ 	smbd_disconnect_rdma_connection(info);
+@@ -1029,7 +1054,7 @@ static int smbd_post_send_full_iter(struct smbd_connection *info,
+  * The interaction is controlled by send/receive credit system
+  */
+ static int smbd_post_recv(
+-		struct smbd_connection *info, struct smbd_response *response)
++		struct smbd_connection *info, struct smbdirect_recv_io *response)
+ {
+ 	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbdirect_socket_parameters *sp = &sc->parameters;
+@@ -1067,16 +1092,19 @@ static int smbd_post_recv(
+ /* Perform SMBD negotiate according to [MS-SMBD] 3.1.5.2 */
+ static int smbd_negotiate(struct smbd_connection *info)
+ {
++	struct smbdirect_socket *sc = &info->socket;
+ 	int rc;
+-	struct smbd_response *response = get_receive_buffer(info);
++	struct smbdirect_recv_io *response = get_receive_buffer(info);
+ 
+-	response->type = SMBD_NEGOTIATE_RESP;
++	sc->recv_io.expected = SMBDIRECT_EXPECT_NEGOTIATE_REP;
+ 	rc = smbd_post_recv(info, response);
+ 	log_rdma_event(INFO, "smbd_post_recv rc=%d iov.addr=0x%llx iov.length=%u iov.lkey=0x%x\n",
+ 		       rc, response->sge.addr,
+ 		       response->sge.length, response->sge.lkey);
+-	if (rc)
++	if (rc) {
++		put_receive_buffer(info, response);
+ 		return rc;
++	}
+ 
+ 	init_completion(&info->negotiate_completion);
+ 	info->negotiate_done = false;
+@@ -1113,7 +1141,7 @@ static int smbd_negotiate(struct smbd_connection *info)
+  */
+ static void enqueue_reassembly(
+ 	struct smbd_connection *info,
+-	struct smbd_response *response,
++	struct smbdirect_recv_io *response,
+ 	int data_length)
+ {
+ 	spin_lock(&info->reassembly_queue_lock);
+@@ -1137,14 +1165,14 @@ static void enqueue_reassembly(
+  * Caller is responsible for locking
+  * return value: the first entry if any, NULL if queue is empty
+  */
+-static struct smbd_response *_get_first_reassembly(struct smbd_connection *info)
++static struct smbdirect_recv_io *_get_first_reassembly(struct smbd_connection *info)
+ {
+-	struct smbd_response *ret = NULL;
++	struct smbdirect_recv_io *ret = NULL;
+ 
+ 	if (!list_empty(&info->reassembly_queue)) {
+ 		ret = list_first_entry(
+ 			&info->reassembly_queue,
+-			struct smbd_response, list);
++			struct smbdirect_recv_io, list);
+ 	}
+ 	return ret;
+ }
+@@ -1155,16 +1183,16 @@ static struct smbd_response *_get_first_reassembly(struct smbd_connection *info)
+  * pre-allocated in advance.
+  * return value: the receive buffer, NULL if none is available
+  */
+-static struct smbd_response *get_receive_buffer(struct smbd_connection *info)
++static struct smbdirect_recv_io *get_receive_buffer(struct smbd_connection *info)
+ {
+-	struct smbd_response *ret = NULL;
++	struct smbdirect_recv_io *ret = NULL;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&info->receive_queue_lock, flags);
+ 	if (!list_empty(&info->receive_queue)) {
+ 		ret = list_first_entry(
+ 			&info->receive_queue,
+-			struct smbd_response, list);
++			struct smbdirect_recv_io, list);
+ 		list_del(&ret->list);
+ 		info->count_receive_queue--;
+ 		info->count_get_receive_buffer++;
+@@ -1181,7 +1209,7 @@ static struct smbd_response *get_receive_buffer(struct smbd_connection *info)
+  * receive buffer is returned.
+  */
+ static void put_receive_buffer(
+-	struct smbd_connection *info, struct smbd_response *response)
++	struct smbd_connection *info, struct smbdirect_recv_io *response)
+ {
+ 	struct smbdirect_socket *sc = &info->socket;
+ 	unsigned long flags;
+@@ -1206,8 +1234,9 @@ static void put_receive_buffer(
+ /* Preallocate all receive buffer on transport establishment */
+ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ {
++	struct smbdirect_socket *sc = &info->socket;
++	struct smbdirect_recv_io *response;
+ 	int i;
+-	struct smbd_response *response;
+ 
+ 	INIT_LIST_HEAD(&info->reassembly_queue);
+ 	spin_lock_init(&info->reassembly_queue_lock);
+@@ -1225,7 +1254,7 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 		if (!response)
+ 			goto allocate_failed;
+ 
+-		response->info = info;
++		response->socket = sc;
+ 		response->sge.length = 0;
+ 		list_add_tail(&response->list, &info->receive_queue);
+ 		info->count_receive_queue++;
+@@ -1237,7 +1266,7 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 	while (!list_empty(&info->receive_queue)) {
+ 		response = list_first_entry(
+ 				&info->receive_queue,
+-				struct smbd_response, list);
++				struct smbdirect_recv_io, list);
+ 		list_del(&response->list);
+ 		info->count_receive_queue--;
+ 
+@@ -1248,7 +1277,7 @@ static int allocate_receive_buffers(struct smbd_connection *info, int num_buf)
+ 
+ static void destroy_receive_buffers(struct smbd_connection *info)
+ {
+-	struct smbd_response *response;
++	struct smbdirect_recv_io *response;
+ 
+ 	while ((response = get_receive_buffer(info)))
+ 		mempool_free(response, info->response_mempool);
+@@ -1289,7 +1318,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 	struct smbd_connection *info = server->smbd_conn;
+ 	struct smbdirect_socket *sc;
+ 	struct smbdirect_socket_parameters *sp;
+-	struct smbd_response *response;
++	struct smbdirect_recv_io *response;
+ 	unsigned long flags;
+ 
+ 	if (!info) {
+@@ -1308,13 +1337,16 @@ void smbd_destroy(struct TCP_Server_Info *server)
+ 			sc->status == SMBDIRECT_SOCKET_DISCONNECTED);
+ 	}
+ 
++	log_rdma_event(INFO, "cancelling post_send_credits_work\n");
++	disable_work_sync(&info->post_send_credits_work);
++
+ 	log_rdma_event(INFO, "destroying qp\n");
+ 	ib_drain_qp(sc->ib.qp);
+ 	rdma_destroy_qp(sc->rdma.cm_id);
+ 	sc->ib.qp = NULL;
+ 
+ 	log_rdma_event(INFO, "cancelling idle timer\n");
+-	cancel_delayed_work_sync(&info->idle_timer_work);
++	disable_delayed_work_sync(&info->idle_timer_work);
+ 
+ 	/* It's not possible for upper layer to get to reassembly */
+ 	log_rdma_event(INFO, "drain the reassembly queue\n");
+@@ -1446,17 +1478,17 @@ static int allocate_caches_and_workqueue(struct smbd_connection *info)
+ 	if (!info->request_mempool)
+ 		goto out1;
+ 
+-	scnprintf(name, MAX_NAME_LEN, "smbd_response_%p", info);
++	scnprintf(name, MAX_NAME_LEN, "smbdirect_recv_io_%p", info);
+ 
+ 	struct kmem_cache_args response_args = {
+-		.align		= __alignof__(struct smbd_response),
+-		.useroffset	= (offsetof(struct smbd_response, packet) +
++		.align		= __alignof__(struct smbdirect_recv_io),
++		.useroffset	= (offsetof(struct smbdirect_recv_io, packet) +
+ 				   sizeof(struct smbdirect_data_transfer)),
+ 		.usersize	= sp->max_recv_size - sizeof(struct smbdirect_data_transfer),
+ 	};
+ 	info->response_cache =
+ 		kmem_cache_create(name,
+-				  sizeof(struct smbd_response) + sp->max_recv_size,
++				  sizeof(struct smbdirect_recv_io) + sp->max_recv_size,
+ 				  &response_args, SLAB_HWCACHE_ALIGN);
+ 	if (!info->response_cache)
+ 		goto out2;
+@@ -1686,7 +1718,7 @@ static struct smbd_connection *_smbd_get_connection(
+ 	return NULL;
+ 
+ negotiation_failed:
+-	cancel_delayed_work_sync(&info->idle_timer_work);
++	disable_delayed_work_sync(&info->idle_timer_work);
+ 	destroy_caches_and_workqueue(info);
+ 	sc->status = SMBDIRECT_SOCKET_NEGOTIATE_FAILED;
+ 	rdma_disconnect(sc->rdma.cm_id);
+@@ -1747,7 +1779,7 @@ struct smbd_connection *smbd_get_connection(
+ int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
+ {
+ 	struct smbdirect_socket *sc = &info->socket;
+-	struct smbd_response *response;
++	struct smbdirect_recv_io *response;
+ 	struct smbdirect_data_transfer *data_transfer;
+ 	size_t size = iov_iter_count(&msg->msg_iter);
+ 	int to_copy, to_read, data_read, offset;
+@@ -1783,7 +1815,7 @@ int smbd_recv(struct smbd_connection *info, struct msghdr *msg)
+ 		offset = info->first_entry_offset;
+ 		while (data_read < size) {
+ 			response = _get_first_reassembly(info);
+-			data_transfer = smbd_response_payload(response);
++			data_transfer = smbdirect_recv_io_payload(response);
+ 			data_length = le32_to_cpu(data_transfer->data_length);
+ 			remaining_data_length =
+ 				le32_to_cpu(
+@@ -2045,7 +2077,7 @@ static void destroy_mr_list(struct smbd_connection *info)
+ 	struct smbdirect_socket *sc = &info->socket;
+ 	struct smbd_mr *mr, *tmp;
+ 
+-	cancel_work_sync(&info->mr_recovery_work);
++	disable_work_sync(&info->mr_recovery_work);
+ 	list_for_each_entry_safe(mr, tmp, &info->mr_list, list) {
+ 		if (mr->state == MR_INVALIDATED)
+ 			ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl,
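
The recv_done() hunks above stop trusting the peer-supplied data_offset,
data_length and remaining_data_length fields before using them. A
standalone sketch of the overflow-safe pattern they apply (field and
limit names mirror the patch; the values in main() are made up):

    #include <stdint.h>
    #include <stdio.h>

    static int validate(uint32_t byte_len, uint32_t data_offset,
                        uint32_t data_length, uint32_t remaining,
                        uint32_t max_fragmented_recv_size)
    {
        /* widen to 64 bit so data_offset + data_length cannot wrap */
        if (byte_len < data_offset ||
            (uint64_t)byte_len < (uint64_t)data_offset + data_length)
            return -1;
        if (remaining > max_fragmented_recv_size ||
            data_length > max_fragmented_recv_size ||
            (uint64_t)remaining + (uint64_t)data_length >
            (uint64_t)max_fragmented_recv_size)
            return -1;
        return 0;
    }

    int main(void)
    {
        /* data_offset + data_length wraps in 32 bits: must be rejected */
        printf("%d\n", validate(0x1000, 0xfffffff0, 0x20, 0, 0x100000));
        return 0;
    }
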
+diff --git a/fs/smb/client/smbdirect.h b/fs/smb/client/smbdirect.h
+index ea04ce8a9763a6..d60e445da22563 100644
+--- a/fs/smb/client/smbdirect.h
++++ b/fs/smb/client/smbdirect.h
+@@ -157,11 +157,6 @@ struct smbd_connection {
+ 	unsigned int count_send_empty;
+ };
+ 
+-enum smbd_message_type {
+-	SMBD_NEGOTIATE_RESP,
+-	SMBD_TRANSFER_DATA,
+-};
+-
+ /* Maximum number of SGEs used by smbdirect.c in any send work request */
+ #define SMBDIRECT_MAX_SEND_SGE	6
+ 
+@@ -181,24 +176,6 @@ struct smbd_request {
+ /* Maximum number of SGEs used by smbdirect.c in any receive work request */
+ #define SMBDIRECT_MAX_RECV_SGE	1
+ 
+-/* The context for a SMBD response */
+-struct smbd_response {
+-	struct smbd_connection *info;
+-	struct ib_cqe cqe;
+-	struct ib_sge sge;
+-
+-	enum smbd_message_type type;
+-
+-	/* Link to receive queue or reassembly queue */
+-	struct list_head list;
+-
+-	/* Indicate if this is the 1st packet of a payload */
+-	bool first_segment;
+-
+-	/* SMBD packet header and payload follows this structure */
+-	u8 packet[];
+-};
+-
+ /* Create a SMBDirect session */
+ struct smbd_connection *smbd_get_connection(
+ 	struct TCP_Server_Info *server, struct sockaddr *dstaddr);
+diff --git a/fs/smb/common/smbdirect/smbdirect_socket.h b/fs/smb/common/smbdirect/smbdirect_socket.h
+index e5b15cc44a7ba5..a7ad31c471a7b6 100644
+--- a/fs/smb/common/smbdirect/smbdirect_socket.h
++++ b/fs/smb/common/smbdirect/smbdirect_socket.h
+@@ -38,6 +38,35 @@ struct smbdirect_socket {
+ 	} ib;
+ 
+ 	struct smbdirect_socket_parameters parameters;
++
++	/*
++	 * The state for posted receive buffers
++	 */
++	struct {
++		/*
++		 * The type of PDU we are expecting
++		 */
++		enum {
++			SMBDIRECT_EXPECT_NEGOTIATE_REQ = 1,
++			SMBDIRECT_EXPECT_NEGOTIATE_REP = 2,
++			SMBDIRECT_EXPECT_DATA_TRANSFER = 3,
++		} expected;
++	} recv_io;
++};
++
++struct smbdirect_recv_io {
++	struct smbdirect_socket *socket;
++	struct ib_cqe cqe;
++	struct ib_sge sge;
++
++	/* Link to free or reassembly list */
++	struct list_head list;
++
++	/* Indicate if this is the 1st packet of a payload */
++	bool first_segment;
++
++	/* SMBD packet header and payload follows this structure */
++	u8 packet[];
+ };
+ 
+ #endif /* __FS_SMB_COMMON_SMBDIRECT_SMBDIRECT_SOCKET_H__ */
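
The new recv_io.expected field replaces the per-response "type" member
removed from struct smbd_response: instead of tagging every receive
buffer, the socket itself records which PDU it expects next. A tiny
standalone sketch of the resulting client-side sequence (enum values
copied from the header above; the transitions paraphrase
smbd_negotiate() and process_negotiation_response()):

    #include <stdio.h>

    enum smbdirect_recv_io_expected {
        SMBDIRECT_EXPECT_NEGOTIATE_REQ = 1,  /* server side only */
        SMBDIRECT_EXPECT_NEGOTIATE_REP = 2,
        SMBDIRECT_EXPECT_DATA_TRANSFER = 3,
    };

    int main(void)
    {
        /* the client posts its first receive while negotiating */
        enum smbdirect_recv_io_expected expected =
            SMBDIRECT_EXPECT_NEGOTIATE_REP;
        printf("expect=%d\n", expected);

        /* a successful negotiation response flips the expectation */
        expected = SMBDIRECT_EXPECT_DATA_TRANSFER;
        printf("expect=%d\n", expected);
        return 0;
    }
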
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 5466aa8c39b1cd..6550bd9f002c27 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -554,7 +554,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 	case SMB_DIRECT_MSG_DATA_TRANSFER: {
+ 		struct smb_direct_data_transfer *data_transfer =
+ 			(struct smb_direct_data_transfer *)recvmsg->packet;
+-		unsigned int data_length;
++		u32 remaining_data_length, data_offset, data_length;
+ 		int avail_recvmsg_count, receive_credits;
+ 
+ 		if (wc->byte_len <
+@@ -564,15 +564,25 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			return;
+ 		}
+ 
++		remaining_data_length = le32_to_cpu(data_transfer->remaining_data_length);
+ 		data_length = le32_to_cpu(data_transfer->data_length);
+-		if (data_length) {
+-			if (wc->byte_len < sizeof(struct smb_direct_data_transfer) +
+-			    (u64)data_length) {
+-				put_recvmsg(t, recvmsg);
+-				smb_direct_disconnect_rdma_connection(t);
+-				return;
+-			}
++		data_offset = le32_to_cpu(data_transfer->data_offset);
++		if (wc->byte_len < data_offset ||
++		    wc->byte_len < (u64)data_offset + data_length) {
++			put_recvmsg(t, recvmsg);
++			smb_direct_disconnect_rdma_connection(t);
++			return;
++		}
++		if (remaining_data_length > t->max_fragmented_recv_size ||
++		    data_length > t->max_fragmented_recv_size ||
++		    (u64)remaining_data_length + (u64)data_length >
++		    (u64)t->max_fragmented_recv_size) {
++			put_recvmsg(t, recvmsg);
++			smb_direct_disconnect_rdma_connection(t);
++			return;
++		}
+ 
++		if (data_length) {
+ 			if (t->full_packet_received)
+ 				recvmsg->first_segment = true;
+ 
+@@ -1209,78 +1219,130 @@ static int smb_direct_writev(struct ksmbd_transport *t,
+ 			     bool need_invalidate, unsigned int remote_key)
+ {
+ 	struct smb_direct_transport *st = smb_trans_direct_transfort(t);
+-	int remaining_data_length;
+-	int start, i, j;
+-	int max_iov_size = st->max_send_size -
++	size_t remaining_data_length;
++	size_t iov_idx;
++	size_t iov_ofs;
++	size_t max_iov_size = st->max_send_size -
+ 			sizeof(struct smb_direct_data_transfer);
+ 	int ret;
+-	struct kvec vec;
+ 	struct smb_direct_send_ctx send_ctx;
++	int error = 0;
+ 
+ 	if (st->status != SMB_DIRECT_CS_CONNECTED)
+ 		return -ENOTCONN;
+ 
+ 	//FIXME: skip RFC1002 header..
++	if (WARN_ON_ONCE(niovs <= 1 || iov[0].iov_len != 4))
++		return -EINVAL;
+ 	buflen -= 4;
++	iov_idx = 1;
++	iov_ofs = 0;
+ 
+ 	remaining_data_length = buflen;
+ 	ksmbd_debug(RDMA, "Sending smb (RDMA): smb_len=%u\n", buflen);
+ 
+ 	smb_direct_send_ctx_init(st, &send_ctx, need_invalidate, remote_key);
+-	start = i = 1;
+-	buflen = 0;
+-	while (true) {
+-		buflen += iov[i].iov_len;
+-		if (buflen > max_iov_size) {
+-			if (i > start) {
+-				remaining_data_length -=
+-					(buflen - iov[i].iov_len);
+-				ret = smb_direct_post_send_data(st, &send_ctx,
+-								&iov[start], i - start,
+-								remaining_data_length);
+-				if (ret)
++	while (remaining_data_length) {
++		struct kvec vecs[SMB_DIRECT_MAX_SEND_SGES - 1]; /* minus smbdirect hdr */
++		size_t possible_bytes = max_iov_size;
++		size_t possible_vecs;
++		size_t bytes = 0;
++		size_t nvecs = 0;
++
++		/*
++		 * For the last message, remaining_data_length should
++		 * have been 0 already!
++		 */
++		if (WARN_ON_ONCE(iov_idx >= niovs)) {
++			error = -EINVAL;
++			goto done;
++		}
++
++		/*
++		 * We have 2 factors which limit the arguments we pass
++		 * to smb_direct_post_send_data():
++		 *
++		 * 1. The number of supported sges for the send,
++		 *    of which one is reserved for the smbdirect header.
++		 *    And we currently need one SGE per page.
++		 * 2. The number of negotiated payload bytes per send.
++		 */
++		possible_vecs = min_t(size_t, ARRAY_SIZE(vecs), niovs - iov_idx);
++
++		while (iov_idx < niovs && possible_vecs && possible_bytes) {
++			struct kvec *v = &vecs[nvecs];
++			int page_count;
++
++			v->iov_base = ((u8 *)iov[iov_idx].iov_base) + iov_ofs;
++			v->iov_len = min_t(size_t,
++					   iov[iov_idx].iov_len - iov_ofs,
++					   possible_bytes);
++			page_count = get_buf_page_count(v->iov_base, v->iov_len);
++			if (page_count > possible_vecs) {
++				/*
++				 * If the number of pages in the buffer
++				 * is too large (because we currently require
++				 * one SGE per page), we need to limit the
++				 * length.
++				 *
++				 * We know possible_vecs is at least 1,
++				 * so we always keep the first page.
++				 *
++				 * We need to calculate the number of extra
++				 * pages (epages) we can also keep.
++				 *
++				 * We calculate the number of bytes in the
++				 * first page (fplen); this should never be
++				 * larger than v->iov_len because page_count is
++				 * at least 2, but adding a limitation feels
++				 * better.
++				 *
++				 * Then we calculate the number of bytes (elen)
++				 * we can keep for the extra pages.
++				 */
++				size_t epages = possible_vecs - 1;
++				size_t fpofs = offset_in_page(v->iov_base);
++				size_t fplen = min_t(size_t, PAGE_SIZE - fpofs, v->iov_len);
++				size_t elen = min_t(size_t, v->iov_len - fplen, epages*PAGE_SIZE);
++
++				v->iov_len = fplen + elen;
++				page_count = get_buf_page_count(v->iov_base, v->iov_len);
++				if (WARN_ON_ONCE(page_count > possible_vecs)) {
++					/*
++					 * Something went wrong in the above
++					 * logic...
++					 */
++					error = -EINVAL;
+ 					goto done;
+-			} else {
+-				/* iov[start] is too big, break it */
+-				int nvec  = (buflen + max_iov_size - 1) /
+-						max_iov_size;
+-
+-				for (j = 0; j < nvec; j++) {
+-					vec.iov_base =
+-						(char *)iov[start].iov_base +
+-						j * max_iov_size;
+-					vec.iov_len =
+-						min_t(int, max_iov_size,
+-						      buflen - max_iov_size * j);
+-					remaining_data_length -= vec.iov_len;
+-					ret = smb_direct_post_send_data(st, &send_ctx, &vec, 1,
+-									remaining_data_length);
+-					if (ret)
+-						goto done;
+ 				}
+-				i++;
+-				if (i == niovs)
+-					break;
+ 			}
+-			start = i;
+-			buflen = 0;
+-		} else {
+-			i++;
+-			if (i == niovs) {
+-				/* send out all remaining vecs */
+-				remaining_data_length -= buflen;
+-				ret = smb_direct_post_send_data(st, &send_ctx,
+-								&iov[start], i - start,
+-								remaining_data_length);
+-				if (ret)
+-					goto done;
+-				break;
++			possible_vecs -= page_count;
++			nvecs += 1;
++			possible_bytes -= v->iov_len;
++			bytes += v->iov_len;
++
++			iov_ofs += v->iov_len;
++			if (iov_ofs >= iov[iov_idx].iov_len) {
++				iov_idx += 1;
++				iov_ofs = 0;
+ 			}
+ 		}
++
++		remaining_data_length -= bytes;
++
++		ret = smb_direct_post_send_data(st, &send_ctx,
++						vecs, nvecs,
++						remaining_data_length);
++		if (unlikely(ret)) {
++			error = ret;
++			goto done;
++		}
+ 	}
+ 
+ done:
+ 	ret = smb_direct_flush_send_list(st, &send_ctx, true);
++	if (unlikely(!ret && error))
++		ret = error;
+ 
+ 	/*
+ 	 * As an optimization, we don't wait for individual I/O to finish
+@@ -1744,6 +1806,11 @@ static int smb_direct_init_params(struct smb_direct_transport *t,
+ 		return -EINVAL;
+ 	}
+ 
++	if (device->attrs.max_send_sge < SMB_DIRECT_MAX_SEND_SGES) {
++		pr_err("warning: device max_send_sge = %d too small\n",
++		       device->attrs.max_send_sge);
++		return -EINVAL;
++	}
+ 	if (device->attrs.max_recv_sge < SMB_DIRECT_MAX_RECV_SGES) {
+ 		pr_err("warning: device max_recv_sge = %d too small\n",
+ 		       device->attrs.max_recv_sge);
+@@ -1767,7 +1834,7 @@ static int smb_direct_init_params(struct smb_direct_transport *t,
+ 
+ 	cap->max_send_wr = max_send_wrs;
+ 	cap->max_recv_wr = t->recv_credit_max;
+-	cap->max_send_sge = max_sge_per_wr;
++	cap->max_send_sge = SMB_DIRECT_MAX_SEND_SGES;
+ 	cap->max_recv_sge = SMB_DIRECT_MAX_RECV_SGES;
+ 	cap->max_inline_data = 0;
+ 	cap->max_rdma_ctxs = t->max_rw_credits;
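
The rewritten smb_direct_writev() above walks the iovec in chunks that
respect two budgets at once: the negotiated payload bytes per send and
the SGE count (one SGE per page, with one reserved for the smbdirect
header). A standalone sketch of the fplen/elen clamp it uses to keep a
buffer within the remaining page budget (PAGE_SIZE and the buffer in
main() are assumptions for illustration):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

    static size_t offset_in_page(const void *p)
    {
        return (uintptr_t)p & (PAGE_SIZE - 1);
    }

    static size_t page_count(const void *base, size_t len)
    {
        return (offset_in_page(base) + len + PAGE_SIZE - 1) / PAGE_SIZE;
    }

    /* clamp len so [base, base+len) spans at most possible_vecs pages */
    static size_t clamp_to_pages(const void *base, size_t len,
                                 size_t possible_vecs)
    {
        size_t epages, fplen, elen;

        if (page_count(base, len) <= possible_vecs)
            return len;
        epages = possible_vecs - 1;  /* pages beyond the first */
        fplen = min_sz(PAGE_SIZE - offset_in_page(base), len);
        elen = min_sz(len - fplen, epages * PAGE_SIZE);
        return fplen + elen;
    }

    int main(void)
    {
        static char buf[64 * 1024];
        /* misaligned start, budget of 5 SGEs: at most 5 pages kept */
        size_t got = clamp_to_pages(buf + 100, sizeof(buf) - 100, 5);

        printf("clamped len=%zu pages=%zu\n",
               got, page_count(buf + 100, got));
        return 0;
    }
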
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index f7b3b93f3a49a7..0c70f3a5557505 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -135,6 +135,7 @@ struct af_alg_async_req {
+  *			SG?
+  * @enc:		Cryptographic operation to be performed when
+  *			recvmsg is invoked.
++ * @write:		True if we are in the middle of a write.
+  * @init:		True if metadata has been sent.
+  * @len:		Length of memory allocated for this data structure.
+  * @inflight:		Non-zero when AIO requests are in flight.
+@@ -151,10 +152,11 @@ struct af_alg_ctx {
+ 	size_t used;
+ 	atomic_t rcvused;
+ 
+-	bool more;
+-	bool merge;
+-	bool enc;
+-	bool init;
++	u32		more:1,
++			merge:1,
++			enc:1,
++			write:1,
++			init:1;
+ 
+ 	unsigned int len;
+ 
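
For reference, the af_alg_ctx change above packs what used to be
separate bool members into single-bit fields of one u32, making room
for the new "write" flag without growing the structure. A minimal
standalone comparison (sizes are what a typical LP64 compiler produces;
the exact layout is implementation-defined):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct with_bools { bool more, merge, enc, write, init; };
    struct with_bits  { uint32_t more:1, merge:1, enc:1, write:1, init:1; };

    int main(void)
    {
        printf("bools: %zu bytes, bitfield: %zu bytes\n",
               sizeof(struct with_bools), sizeof(struct with_bits));
        return 0;
    }
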
+diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
+index a7efcec2e3d081..215ff20affa33f 100644
+--- a/include/linux/io_uring_types.h
++++ b/include/linux/io_uring_types.h
+@@ -418,9 +418,6 @@ struct io_ring_ctx {
+ 	struct list_head		defer_list;
+ 	unsigned			nr_drained;
+ 
+-	struct io_alloc_cache		msg_cache;
+-	spinlock_t			msg_lock;
+-
+ #ifdef CONFIG_NET_RX_BUSY_POLL
+ 	struct list_head	napi_list;	/* track busy poll napi_id */
+ 	spinlock_t		napi_lock;	/* napi_list lock */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index e6ba8f4f4bd1f4..27850ebb651b30 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -662,6 +662,7 @@ struct mlx5e_resources {
+ 		bool			   tisn_valid;
+ 	} hw_objs;
+ 	struct net_device *uplink_netdev;
++	netdevice_tracker tracker;
+ 	struct mutex uplink_netdev_lock;
+ 	struct mlx5_crypto_dek_priv *dek_priv;
+ };
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index bc0e1c275fc047..8b56b2bbaa4f07 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -384,6 +384,16 @@ void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
+ void mark_page_accessed(struct page *);
+ void folio_mark_accessed(struct folio *);
+ 
++static inline bool folio_may_be_lru_cached(struct folio *folio)
++{
++	/*
++	 * Holding PMD-sized folios in per-CPU LRU cache unbalances accounting.
++	 * Holding small numbers of low-order mTHP folios in per-CPU LRU cache
++	 * would be sensible, but nobody has implemented and tested that yet.
++	 */
++	return !folio_test_large(folio);
++}
++
+ extern atomic_t lru_disable_count;
+ 
+ static inline bool lru_cache_disabled(void)
+diff --git a/include/net/dst_metadata.h b/include/net/dst_metadata.h
+index 4160731dcb6e3a..1fc2fb03ce3f9a 100644
+--- a/include/net/dst_metadata.h
++++ b/include/net/dst_metadata.h
+@@ -3,6 +3,7 @@
+ #define __NET_DST_METADATA_H 1
+ 
+ #include <linux/skbuff.h>
++#include <net/ip.h>
+ #include <net/ip_tunnels.h>
+ #include <net/macsec.h>
+ #include <net/dst.h>
+@@ -220,9 +221,15 @@ static inline struct metadata_dst *ip_tun_rx_dst(struct sk_buff *skb,
+ 						 int md_size)
+ {
+ 	const struct iphdr *iph = ip_hdr(skb);
++	struct metadata_dst *tun_dst;
++
++	tun_dst = __ip_tun_set_dst(iph->saddr, iph->daddr, iph->tos, iph->ttl,
++				   0, flags, tunnel_id, md_size);
+ 
+-	return __ip_tun_set_dst(iph->saddr, iph->daddr, iph->tos, iph->ttl,
+-				0, flags, tunnel_id, md_size);
++	if (tun_dst && (iph->frag_off & htons(IP_DF)))
++		__set_bit(IP_TUNNEL_DONT_FRAGMENT_BIT,
++			  tun_dst->u.tun_info.key.tun_flags);
++	return tun_dst;
+ }
+ 
+ static inline struct metadata_dst *__ipv6_tun_set_dst(const struct in6_addr *saddr,
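
The ip_tun_rx_dst() change above propagates the IPv4 don't-fragment bit
from the inner header into the tunnel key. Since frag_off is carried
big-endian, the test compares against htons(IP_DF); a small userspace
sketch of just that check (IP_DF defined locally to keep the example
self-contained):

    #include <arpa/inet.h>
    #include <stdio.h>

    #define IP_DF 0x4000  /* don't-fragment flag, host byte order */

    int main(void)
    {
        unsigned short frag_off = htons(IP_DF);  /* as seen on the wire */

        if (frag_off & htons(IP_DF))
            printf("DF set: mark tunnel key DONT_FRAGMENT\n");
        return 0;
    }
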
+diff --git a/include/net/sock.h b/include/net/sock.h
+index a348ae145eda43..6e9f4c126672d0 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2061,6 +2061,9 @@ static inline void sk_set_socket(struct sock *sk, struct socket *sock)
+ 	if (sock) {
+ 		WRITE_ONCE(sk->sk_uid, SOCK_INODE(sock)->i_uid);
+ 		WRITE_ONCE(sk->sk_ino, SOCK_INODE(sock)->i_ino);
++	} else {
++		/* Note: sk_uid is unchanged. */
++		WRITE_ONCE(sk->sk_ino, 0);
+ 	}
+ }
+ 
+@@ -2082,8 +2085,6 @@ static inline void sock_orphan(struct sock *sk)
+ 	sock_set_flag(sk, SOCK_DEAD);
+ 	sk_set_socket(sk, NULL);
+ 	sk->sk_wq  = NULL;
+-	/* Note: sk_uid is unchanged. */
+-	WRITE_ONCE(sk->sk_ino, 0);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+diff --git a/include/sound/sdca.h b/include/sound/sdca.h
+index 5a5d6de78d7283..9c6a351c9d474f 100644
+--- a/include/sound/sdca.h
++++ b/include/sound/sdca.h
+@@ -46,6 +46,7 @@ struct sdca_device_data {
+ 
+ enum sdca_quirk {
+ 	SDCA_QUIRKS_RT712_VB,
++	SDCA_QUIRKS_SKIP_FUNC_TYPE_PATCHING,
+ };
+ 
+ #if IS_ENABLED(CONFIG_ACPI) && IS_ENABLED(CONFIG_SND_SOC_SDCA)
+diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
+index 67d015df8893cc..5fd5b4cf75ca1e 100644
+--- a/include/uapi/linux/mptcp.h
++++ b/include/uapi/linux/mptcp.h
+@@ -31,6 +31,8 @@
+ #define MPTCP_INFO_FLAG_FALLBACK		_BITUL(0)
+ #define MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED	_BITUL(1)
+ 
++#define MPTCP_PM_EV_FLAG_DENY_JOIN_ID0		_BITUL(0)
++
+ #define MPTCP_PM_ADDR_FLAG_SIGNAL                      (1 << 0)
+ #define MPTCP_PM_ADDR_FLAG_SUBFLOW                     (1 << 1)
+ #define MPTCP_PM_ADDR_FLAG_BACKUP                      (1 << 2)
+diff --git a/include/uapi/linux/mptcp_pm.h b/include/uapi/linux/mptcp_pm.h
+index 6ac84b2f636ca2..7359d34da446b9 100644
+--- a/include/uapi/linux/mptcp_pm.h
++++ b/include/uapi/linux/mptcp_pm.h
+@@ -16,10 +16,10 @@
+  *   good time to allocate memory and send ADD_ADDR if needed. Depending on the
+  *   traffic-patterns it can take a long time until the MPTCP_EVENT_ESTABLISHED
+  *   is sent. Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6,
+- *   sport, dport, server-side.
++ *   sport, dport, server-side, [flags].
+  * @MPTCP_EVENT_ESTABLISHED: A MPTCP connection is established (can start new
+  *   subflows). Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6,
+- *   sport, dport, server-side.
++ *   sport, dport, server-side, [flags].
+  * @MPTCP_EVENT_CLOSED: A MPTCP connection has stopped. Attribute: token.
+  * @MPTCP_EVENT_ANNOUNCED: A new address has been announced by the peer.
+  *   Attributes: token, rem_id, family, daddr4 | daddr6 [, dport].
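
The two uapi hunks above expose whether the peer denied joins to
address id 0 via a new bit in the optional MPTCP_ATTR_FLAGS attribute
of the created/established events. A hypothetical userspace fragment,
assuming a path-manager daemon has already parsed the u16 attribute
into "flags":

    #include <stdio.h>

    #define _BITUL(x) (1UL << (x))
    #define MPTCP_PM_EV_FLAG_DENY_JOIN_ID0 _BITUL(0)

    int main(void)
    {
        unsigned short flags = MPTCP_PM_EV_FLAG_DENY_JOIN_ID0; /* sample */

        if (flags & MPTCP_PM_EV_FLAG_DENY_JOIN_ID0)
            printf("peer denies joins to address id 0\n");
        return 0;
    }
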
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 17dfaa0395c46b..1d03b2fc4b2594 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -352,16 +352,16 @@ static void create_worker_cb(struct callback_head *cb)
+ 	struct io_wq *wq;
+ 
+ 	struct io_wq_acct *acct;
+-	bool do_create = false;
++	bool activated_free_worker, do_create = false;
+ 
+ 	worker = container_of(cb, struct io_worker, create_work);
+ 	wq = worker->wq;
+ 	acct = worker->acct;
+ 
+ 	rcu_read_lock();
+-	do_create = !io_acct_activate_free_worker(acct);
++	activated_free_worker = io_acct_activate_free_worker(acct);
+ 	rcu_read_unlock();
+-	if (!do_create)
++	if (activated_free_worker)
+ 		goto no_need_create;
+ 
+ 	raw_spin_lock(&acct->workers_lock);
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 5111ec040c5342..eaa5410e5a70a1 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -290,7 +290,6 @@ static void io_free_alloc_caches(struct io_ring_ctx *ctx)
+ 	io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
+ 	io_alloc_cache_free(&ctx->rw_cache, io_rw_cache_free);
+ 	io_alloc_cache_free(&ctx->cmd_cache, io_cmd_cache_free);
+-	io_alloc_cache_free(&ctx->msg_cache, kfree);
+ 	io_futex_cache_free(ctx);
+ 	io_rsrc_cache_free(ctx);
+ }
+@@ -337,9 +336,6 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 	ret |= io_alloc_cache_init(&ctx->cmd_cache, IO_ALLOC_CACHE_MAX,
+ 			    sizeof(struct io_async_cmd),
+ 			    sizeof(struct io_async_cmd));
+-	spin_lock_init(&ctx->msg_lock);
+-	ret |= io_alloc_cache_init(&ctx->msg_cache, IO_ALLOC_CACHE_MAX,
+-			    sizeof(struct io_kiocb), 0);
+ 	ret |= io_futex_cache_init(ctx);
+ 	ret |= io_rsrc_cache_init(ctx);
+ 	if (ret)
+@@ -1371,8 +1367,10 @@ static void io_req_task_cancel(struct io_kiocb *req, io_tw_token_t tw)
+ 
+ void io_req_task_submit(struct io_kiocb *req, io_tw_token_t tw)
+ {
+-	io_tw_lock(req->ctx, tw);
+-	if (unlikely(io_should_terminate_tw()))
++	struct io_ring_ctx *ctx = req->ctx;
++
++	io_tw_lock(ctx, tw);
++	if (unlikely(io_should_terminate_tw(ctx)))
+ 		io_req_defer_failed(req, -EFAULT);
+ 	else if (req->flags & REQ_F_FORCE_ASYNC)
+ 		io_queue_iowq(req);
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index 66c1ca73f55ee5..336689752d9fe1 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -470,9 +470,9 @@ static inline bool io_allowed_run_tw(struct io_ring_ctx *ctx)
+  * 2) PF_KTHREAD is set, in which case the invoker of the task_work is
+  *    our fallback task_work.
+  */
+-static inline bool io_should_terminate_tw(void)
++static inline bool io_should_terminate_tw(struct io_ring_ctx *ctx)
+ {
+-	return current->flags & (PF_KTHREAD | PF_EXITING);
++	return (current->flags & (PF_KTHREAD | PF_EXITING)) || percpu_ref_is_dying(&ctx->refs);
+ }
+ 
+ static inline void io_req_queue_tw_complete(struct io_kiocb *req, s32 res)
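
io_should_terminate_tw() now takes the ring context so task_work also
bails out when the ring's refs are already dying, not only when the
task itself is exiting. A standalone restatement of the widened
predicate (the flag values are stand-ins, not the kernel's):

    #include <stdbool.h>
    #include <stdio.h>

    #define PF_EXITING 0x1
    #define PF_KTHREAD 0x2

    static bool should_terminate_tw(unsigned int flags, bool ring_dying)
    {
        return (flags & (PF_KTHREAD | PF_EXITING)) || ring_dying;
    }

    int main(void)
    {
        /* healthy task, but the ring is going away: still terminate */
        printf("%d\n", should_terminate_tw(0, true));
        return 0;
    }
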
+diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
+index 4c2578f2efcb0e..5e5b94236d7204 100644
+--- a/io_uring/msg_ring.c
++++ b/io_uring/msg_ring.c
+@@ -11,7 +11,6 @@
+ #include "io_uring.h"
+ #include "rsrc.h"
+ #include "filetable.h"
+-#include "alloc_cache.h"
+ #include "msg_ring.h"
+ 
+ /* All valid masks for MSG_RING */
+@@ -76,13 +75,7 @@ static void io_msg_tw_complete(struct io_kiocb *req, io_tw_token_t tw)
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 
+ 	io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
+-	if (spin_trylock(&ctx->msg_lock)) {
+-		if (io_alloc_cache_put(&ctx->msg_cache, req))
+-			req = NULL;
+-		spin_unlock(&ctx->msg_lock);
+-	}
+-	if (req)
+-		kfree_rcu(req, rcu_head);
++	kfree_rcu(req, rcu_head);
+ 	percpu_ref_put(&ctx->refs);
+ }
+ 
+@@ -104,26 +97,13 @@ static int io_msg_remote_post(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 	return 0;
+ }
+ 
+-static struct io_kiocb *io_msg_get_kiocb(struct io_ring_ctx *ctx)
+-{
+-	struct io_kiocb *req = NULL;
+-
+-	if (spin_trylock(&ctx->msg_lock)) {
+-		req = io_alloc_cache_get(&ctx->msg_cache);
+-		spin_unlock(&ctx->msg_lock);
+-		if (req)
+-			return req;
+-	}
+-	return kmem_cache_alloc(req_cachep, GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
+-}
+-
+ static int io_msg_data_remote(struct io_ring_ctx *target_ctx,
+ 			      struct io_msg *msg)
+ {
+ 	struct io_kiocb *target;
+ 	u32 flags = 0;
+ 
+-	target = io_msg_get_kiocb(target_ctx);
++	target = kmem_cache_alloc(req_cachep, GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
+ 	if (unlikely(!target))
+ 		return -ENOMEM;
+ 
+diff --git a/io_uring/notif.c b/io_uring/notif.c
+index 9a6f6e92d74242..ea9c0116cec2df 100644
+--- a/io_uring/notif.c
++++ b/io_uring/notif.c
+@@ -85,7 +85,7 @@ static int io_link_skb(struct sk_buff *skb, struct ubuf_info *uarg)
+ 		return -EEXIST;
+ 
+ 	prev_nd = container_of(prev_uarg, struct io_notif_data, uarg);
+-	prev_notif = cmd_to_io_kiocb(nd);
++	prev_notif = cmd_to_io_kiocb(prev_nd);
+ 
+ 	/* make sure all notifications can be finished in the same task_work */
+ 	if (unlikely(notif->ctx != prev_notif->ctx ||
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 20e9b46a4adfd5..1b79c268725d47 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -224,7 +224,7 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)
+ {
+ 	int v;
+ 
+-	if (unlikely(io_should_terminate_tw()))
++	if (unlikely(io_should_terminate_tw(req->ctx)))
+ 		return -ECANCELED;
+ 
+ 	do {
+diff --git a/io_uring/timeout.c b/io_uring/timeout.c
+index 7f13bfa9f2b617..17e3aab0af3676 100644
+--- a/io_uring/timeout.c
++++ b/io_uring/timeout.c
+@@ -324,7 +324,7 @@ static void io_req_task_link_timeout(struct io_kiocb *req, io_tw_token_t tw)
+ 	int ret;
+ 
+ 	if (prev) {
+-		if (!io_should_terminate_tw()) {
++		if (!io_should_terminate_tw(req->ctx)) {
+ 			struct io_cancel_data cd = {
+ 				.ctx		= req->ctx,
+ 				.data		= prev->cqe.user_data,
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 929cad6ee32628..b2b4f62c90ce80 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -123,7 +123,7 @@ static void io_uring_cmd_work(struct io_kiocb *req, io_tw_token_t tw)
+ 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
+ 	unsigned int flags = IO_URING_F_COMPLETE_DEFER;
+ 
+-	if (io_should_terminate_tw())
++	if (io_should_terminate_tw(req->ctx))
+ 		flags |= IO_URING_F_TASK_DEAD;
+ 
+ 	/* task_work executor checks the deffered list completion */
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index a723b7dc6e4e28..20f76b21765016 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -126,8 +126,31 @@ DEFINE_PERCPU_RWSEM(cgroup_threadgroup_rwsem);
+  * of concurrent destructions.  Use a separate workqueue so that cgroup
+  * destruction work items don't end up filling up max_active of system_wq
+  * which may lead to deadlock.
++ *
++ * A cgroup destruction should enqueue work sequentially to:
++ * cgroup_offline_wq: used for css offline work
++ * cgroup_release_wq: used for css release work
++ * cgroup_free_wq: used for free work
++ *
++ * Rationale for using separate workqueues:
++ * The cgroup root free work may depend on completion of other css offline
++ * operations. If all tasks were enqueued to a single workqueue, this could
++ * create a deadlock scenario where:
++ * - Free work waits for other css offline work to complete.
++ * - But other css offline work is queued after free work in the same queue.
++ *
++ * Example deadlock scenario with single workqueue (cgroup_destroy_wq):
++ * 1. umount net_prio
++ * 2. net_prio root destruction enqueues work to cgroup_destroy_wq (CPUx)
++ * 3. perf_event CSS A offline enqueues work to same cgroup_destroy_wq (CPUx)
++ * 4. net_prio cgroup_destroy_root->cgroup_lock_and_drain_offline.
++ * 5. net_prio root destruction blocks waiting for perf_event CSS A offline,
++ *    which can never complete as it's behind in the same queue and
++ *    workqueue's max_active is 1.
+  */
+-static struct workqueue_struct *cgroup_destroy_wq;
++static struct workqueue_struct *cgroup_offline_wq;
++static struct workqueue_struct *cgroup_release_wq;
++static struct workqueue_struct *cgroup_free_wq;
+ 
+ /* generate an array of cgroup subsystem pointers */
+ #define SUBSYS(_x) [_x ## _cgrp_id] = &_x ## _cgrp_subsys,
+@@ -5553,7 +5576,7 @@ static void css_release_work_fn(struct work_struct *work)
+ 	cgroup_unlock();
+ 
+ 	INIT_RCU_WORK(&css->destroy_rwork, css_free_rwork_fn);
+-	queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);
++	queue_rcu_work(cgroup_free_wq, &css->destroy_rwork);
+ }
+ 
+ static void css_release(struct percpu_ref *ref)
+@@ -5562,7 +5585,7 @@ static void css_release(struct percpu_ref *ref)
+ 		container_of(ref, struct cgroup_subsys_state, refcnt);
+ 
+ 	INIT_WORK(&css->destroy_work, css_release_work_fn);
+-	queue_work(cgroup_destroy_wq, &css->destroy_work);
++	queue_work(cgroup_release_wq, &css->destroy_work);
+ }
+ 
+ static void init_and_link_css(struct cgroup_subsys_state *css,
+@@ -5696,7 +5719,7 @@ static struct cgroup_subsys_state *css_create(struct cgroup *cgrp,
+ 	list_del_rcu(&css->sibling);
+ err_free_css:
+ 	INIT_RCU_WORK(&css->destroy_rwork, css_free_rwork_fn);
+-	queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);
++	queue_rcu_work(cgroup_free_wq, &css->destroy_rwork);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -5934,7 +5957,7 @@ static void css_killed_ref_fn(struct percpu_ref *ref)
+ 
+ 	if (atomic_dec_and_test(&css->online_cnt)) {
+ 		INIT_WORK(&css->destroy_work, css_killed_work_fn);
+-		queue_work(cgroup_destroy_wq, &css->destroy_work);
++		queue_work(cgroup_offline_wq, &css->destroy_work);
+ 	}
+ }
+ 
+@@ -6320,8 +6343,14 @@ static int __init cgroup_wq_init(void)
+ 	 * We would prefer to do this in cgroup_init() above, but that
+ 	 * is called before init_workqueues(): so leave this until after.
+ 	 */
+-	cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+-	BUG_ON(!cgroup_destroy_wq);
++	cgroup_offline_wq = alloc_workqueue("cgroup_offline", 0, 1);
++	BUG_ON(!cgroup_offline_wq);
++
++	cgroup_release_wq = alloc_workqueue("cgroup_release", 0, 1);
++	BUG_ON(!cgroup_release_wq);
++
++	cgroup_free_wq = alloc_workqueue("cgroup_free", 0, 1);
++	BUG_ON(!cgroup_free_wq);
+ 	return 0;
+ }
+ core_initcall(cgroup_wq_init);
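
A toy model (not kernel code) of the single-queue deadlock the new
comment describes: with max_active == 1, work that waits on work queued
behind it in the same workqueue can never make progress, which is
exactly what splitting into offline/release/free queues avoids:

    #include <stdio.h>

    struct work { const char *name; int waits_on_next; };

    int main(void)
    {
        /* one queue, one worker: items run strictly in order */
        struct work queue[] = {
            { "net_prio root destruction (waits for css offline)", 1 },
            { "perf_event css offline", 0 },
        };

        /* item 0 blocks the only worker while waiting for item 1,
         * which can only run after item 0 completes: deadlock */
        if (queue[0].waits_on_next)
            printf("deadlock: '%s' never finishes\n", queue[0].name);
        return 0;
    }
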
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 717e3d1d6a2fa2..f3a97005713db7 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -6794,12 +6794,8 @@ __bpf_kfunc u32 scx_bpf_reenqueue_local(void)
+ 		 * CPUs disagree, they use %ENQUEUE_RESTORE which is bypassed to
+ 		 * the current local DSQ for running tasks and thus are not
+ 		 * visible to the BPF scheduler.
+-		 *
+-		 * Also skip re-enqueueing tasks that can only run on this
+-		 * CPU, as they would just be re-added to the same local
+-		 * DSQ without any benefit.
+ 		 */
+-		if (p->migration_pending || is_migration_disabled(p) || p->nr_cpus_allowed == 1)
++		if (p->migration_pending)
+ 			continue;
+ 
+ 		dispatch_dequeue(rq, p);
+diff --git a/mm/gup.c b/mm/gup.c
+index 3c39cbbeebef1f..10e46ff0904b05 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2300,6 +2300,31 @@ static void pofs_unpin(struct pages_or_folios *pofs)
+ 		unpin_user_pages(pofs->pages, pofs->nr_entries);
+ }
+ 
++static struct folio *pofs_next_folio(struct folio *folio,
++		struct pages_or_folios *pofs, long *index_ptr)
++{
++	long i = *index_ptr + 1;
++
++	if (!pofs->has_folios && folio_test_large(folio)) {
++		const unsigned long start_pfn = folio_pfn(folio);
++		const unsigned long end_pfn = start_pfn + folio_nr_pages(folio);
++
++		for (; i < pofs->nr_entries; i++) {
++			unsigned long pfn = page_to_pfn(pofs->pages[i]);
++
++			/* Is this page part of this folio? */
++			if (pfn < start_pfn || pfn >= end_pfn)
++				break;
++		}
++	}
++
++	if (unlikely(i == pofs->nr_entries))
++		return NULL;
++	*index_ptr = i;
++
++	return pofs_get_folio(pofs, i);
++}
++
+ /*
+  * Returns the number of collected folios. Return value is always >= 0.
+  */
+@@ -2307,16 +2332,13 @@ static unsigned long collect_longterm_unpinnable_folios(
+ 		struct list_head *movable_folio_list,
+ 		struct pages_or_folios *pofs)
+ {
+-	unsigned long i, collected = 0;
+-	struct folio *prev_folio = NULL;
+-	bool drain_allow = true;
+-
+-	for (i = 0; i < pofs->nr_entries; i++) {
+-		struct folio *folio = pofs_get_folio(pofs, i);
++	unsigned long collected = 0;
++	struct folio *folio;
++	int drained = 0;
++	long i = 0;
+ 
+-		if (folio == prev_folio)
+-			continue;
+-		prev_folio = folio;
++	for (folio = pofs_get_folio(pofs, i); folio;
++	     folio = pofs_next_folio(folio, pofs, &i)) {
+ 
+ 		if (folio_is_longterm_pinnable(folio))
+ 			continue;
+@@ -2331,9 +2353,17 @@ static unsigned long collect_longterm_unpinnable_folios(
+ 			continue;
+ 		}
+ 
+-		if (!folio_test_lru(folio) && drain_allow) {
++		if (drained == 0 && folio_may_be_lru_cached(folio) &&
++				folio_ref_count(folio) !=
++				folio_expected_ref_count(folio) + 1) {
++			lru_add_drain();
++			drained = 1;
++		}
++		if (drained == 1 && folio_may_be_lru_cached(folio) &&
++				folio_ref_count(folio) !=
++				folio_expected_ref_count(folio) + 1) {
+ 			lru_add_drain_all();
+-			drain_allow = false;
++			drained = 2;
+ 		}
+ 
+ 		if (!folio_isolate_lru(folio))
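
pofs_next_folio() above skips all pages that fall inside the current
large folio by pfn range instead of comparing each page's folio
pointer. A standalone sketch of that skip over a plain pfn array
(values invented for illustration):

    #include <stdio.h>

    int main(void)
    {
        unsigned long pfns[] = { 100, 101, 102, 103, 200 };
        long n = 5, i;

        unsigned long start_pfn = 100, nr_pages = 4; /* current folio */
        unsigned long end_pfn = start_pfn + nr_pages;

        for (i = 1; i < n; i++)
            if (pfns[i] < start_pfn || pfns[i] >= end_pfn)
                break; /* first page of the next folio */

        printf("next folio starts at index %ld (pfn %lu)\n", i, pfns[i]);
        return 0;
    }
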
+diff --git a/mm/mlock.c b/mm/mlock.c
+index 3cb72b579ffd33..2f454ed6e51041 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -255,7 +255,7 @@ void mlock_folio(struct folio *folio)
+ 
+ 	folio_get(folio);
+ 	if (!folio_batch_add(fbatch, mlock_lru(folio)) ||
+-	    folio_test_large(folio) || lru_cache_disabled())
++	    !folio_may_be_lru_cached(folio) || lru_cache_disabled())
+ 		mlock_folio_batch(fbatch);
+ 	local_unlock(&mlock_fbatch.lock);
+ }
+@@ -278,7 +278,7 @@ void mlock_new_folio(struct folio *folio)
+ 
+ 	folio_get(folio);
+ 	if (!folio_batch_add(fbatch, mlock_new(folio)) ||
+-	    folio_test_large(folio) || lru_cache_disabled())
++	    !folio_may_be_lru_cached(folio) || lru_cache_disabled())
+ 		mlock_folio_batch(fbatch);
+ 	local_unlock(&mlock_fbatch.lock);
+ }
+@@ -299,7 +299,7 @@ void munlock_folio(struct folio *folio)
+ 	 */
+ 	folio_get(folio);
+ 	if (!folio_batch_add(fbatch, folio) ||
+-	    folio_test_large(folio) || lru_cache_disabled())
++	    !folio_may_be_lru_cached(folio) || lru_cache_disabled())
+ 		mlock_folio_batch(fbatch);
+ 	local_unlock(&mlock_fbatch.lock);
+ }
+diff --git a/mm/swap.c b/mm/swap.c
+index 4fc322f7111a98..a524736323c8b7 100644
+--- a/mm/swap.c
++++ b/mm/swap.c
+@@ -164,6 +164,10 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
+ 	for (i = 0; i < folio_batch_count(fbatch); i++) {
+ 		struct folio *folio = fbatch->folios[i];
+ 
++		/* block memcg migration while the folio moves between LRU lists */
++		if (move_fn != lru_add && !folio_test_clear_lru(folio))
++			continue;
++
+ 		folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
+ 		move_fn(lruvec, folio);
+ 
+@@ -176,14 +180,10 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
+ }
+ 
+ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
+-		struct folio *folio, move_fn_t move_fn,
+-		bool on_lru, bool disable_irq)
++		struct folio *folio, move_fn_t move_fn, bool disable_irq)
+ {
+ 	unsigned long flags;
+ 
+-	if (on_lru && !folio_test_clear_lru(folio))
+-		return;
+-
+ 	folio_get(folio);
+ 
+ 	if (disable_irq)
+@@ -191,8 +191,8 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
+ 	else
+ 		local_lock(&cpu_fbatches.lock);
+ 
+-	if (!folio_batch_add(this_cpu_ptr(fbatch), folio) || folio_test_large(folio) ||
+-	    lru_cache_disabled())
++	if (!folio_batch_add(this_cpu_ptr(fbatch), folio) ||
++			!folio_may_be_lru_cached(folio) || lru_cache_disabled())
+ 		folio_batch_move_lru(this_cpu_ptr(fbatch), move_fn);
+ 
+ 	if (disable_irq)
+@@ -201,13 +201,13 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
+ 		local_unlock(&cpu_fbatches.lock);
+ }
+ 
+-#define folio_batch_add_and_move(folio, op, on_lru)						\
+-	__folio_batch_add_and_move(								\
+-		&cpu_fbatches.op,								\
+-		folio,										\
+-		op,										\
+-		on_lru,										\
+-		offsetof(struct cpu_fbatches, op) >= offsetof(struct cpu_fbatches, lock_irq)	\
++#define folio_batch_add_and_move(folio, op)		\
++	__folio_batch_add_and_move(			\
++		&cpu_fbatches.op,			\
++		folio,					\
++		op,					\
++		offsetof(struct cpu_fbatches, op) >=	\
++		offsetof(struct cpu_fbatches, lock_irq)	\
+ 	)
+ 
+ static void lru_move_tail(struct lruvec *lruvec, struct folio *folio)
+@@ -231,10 +231,10 @@ static void lru_move_tail(struct lruvec *lruvec, struct folio *folio)
+ void folio_rotate_reclaimable(struct folio *folio)
+ {
+ 	if (folio_test_locked(folio) || folio_test_dirty(folio) ||
+-	    folio_test_unevictable(folio))
++	    folio_test_unevictable(folio) || !folio_test_lru(folio))
+ 		return;
+ 
+-	folio_batch_add_and_move(folio, lru_move_tail, true);
++	folio_batch_add_and_move(folio, lru_move_tail);
+ }
+ 
+ void lru_note_cost(struct lruvec *lruvec, bool file,
+@@ -323,10 +323,11 @@ static void folio_activate_drain(int cpu)
+ 
+ void folio_activate(struct folio *folio)
+ {
+-	if (folio_test_active(folio) || folio_test_unevictable(folio))
++	if (folio_test_active(folio) || folio_test_unevictable(folio) ||
++	    !folio_test_lru(folio))
+ 		return;
+ 
+-	folio_batch_add_and_move(folio, lru_activate, true);
++	folio_batch_add_and_move(folio, lru_activate);
+ }
+ 
+ #else
+@@ -502,7 +503,7 @@ void folio_add_lru(struct folio *folio)
+ 	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
+ 		folio_set_active(folio);
+ 
+-	folio_batch_add_and_move(folio, lru_add, false);
++	folio_batch_add_and_move(folio, lru_add);
+ }
+ EXPORT_SYMBOL(folio_add_lru);
+ 
+@@ -680,13 +681,13 @@ void lru_add_drain_cpu(int cpu)
+ void deactivate_file_folio(struct folio *folio)
+ {
+ 	/* Deactivating an unevictable folio will not accelerate reclaim */
+-	if (folio_test_unevictable(folio))
++	if (folio_test_unevictable(folio) || !folio_test_lru(folio))
+ 		return;
+ 
+ 	if (lru_gen_enabled() && lru_gen_clear_refs(folio))
+ 		return;
+ 
+-	folio_batch_add_and_move(folio, lru_deactivate_file, true);
++	folio_batch_add_and_move(folio, lru_deactivate_file);
+ }
+ 
+ /*
+@@ -699,13 +700,13 @@ void deactivate_file_folio(struct folio *folio)
+  */
+ void folio_deactivate(struct folio *folio)
+ {
+-	if (folio_test_unevictable(folio))
++	if (folio_test_unevictable(folio) || !folio_test_lru(folio))
+ 		return;
+ 
+ 	if (lru_gen_enabled() ? lru_gen_clear_refs(folio) : !folio_test_active(folio))
+ 		return;
+ 
+-	folio_batch_add_and_move(folio, lru_deactivate, true);
++	folio_batch_add_and_move(folio, lru_deactivate);
+ }
+ 
+ /**
+@@ -718,10 +719,11 @@ void folio_deactivate(struct folio *folio)
+ void folio_mark_lazyfree(struct folio *folio)
+ {
+ 	if (!folio_test_anon(folio) || !folio_test_swapbacked(folio) ||
++	    !folio_test_lru(folio) ||
+ 	    folio_test_swapcache(folio) || folio_test_unevictable(folio))
+ 		return;
+ 
+-	folio_batch_add_and_move(folio, lru_lazyfree, true);
++	folio_batch_add_and_move(folio, lru_lazyfree);
+ }
+ 
+ void lru_add_drain(void)
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 424412680cfcc6..b7ed263e6dd701 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4505,7 +4505,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
+ 	}
+ 
+ 	/* ineligible */
+-	if (!folio_test_lru(folio) || zone > sc->reclaim_idx) {
++	if (zone > sc->reclaim_idx) {
+ 		gen = folio_inc_gen(lruvec, folio, false);
+ 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
+ 		return true;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 461a9ab540af02..98da33e0c308b0 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3330,6 +3330,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 	int old_state = sk->sk_state;
++	struct request_sock *req;
+ 	u32 seq;
+ 
+ 	if (old_state != TCP_CLOSE)
+@@ -3445,6 +3446,10 @@ int tcp_disconnect(struct sock *sk, int flags)
+ 
+ 
+ 	/* Clean up fastopen related fields */
++	req = rcu_dereference_protected(tp->fastopen_rsk,
++					lockdep_sock_is_held(sk));
++	if (req)
++		reqsk_fastopen_remove(sk, req, false);
+ 	tcp_free_fastopen_req(tp);
+ 	inet_clear_bit(DEFER_CONNECT, sk);
+ 	tp->fastopen_client_fail = 0;
+diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c
+index bbb8d5f0eae7d3..3338b6cc85c487 100644
+--- a/net/ipv4/tcp_ao.c
++++ b/net/ipv4/tcp_ao.c
+@@ -1178,7 +1178,9 @@ void tcp_ao_finish_connect(struct sock *sk, struct sk_buff *skb)
+ 	if (!ao)
+ 		return;
+ 
+-	WRITE_ONCE(ao->risn, tcp_hdr(skb)->seq);
++	/* sk with TCP_REPAIR_ON does not have skb in tcp_finish_connect */
++	if (skb)
++		WRITE_ONCE(ao->risn, tcp_hdr(skb)->seq);
+ 	ao->rcv_sne = 0;
+ 
+ 	hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index 307587c8a0037b..7964a7c5f0b2b5 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -1389,7 +1389,7 @@ drv_get_ftm_responder_stats(struct ieee80211_local *local,
+ 			    struct ieee80211_sub_if_data *sdata,
+ 			    struct cfg80211_ftm_responder_stats *ftm_stats)
+ {
+-	u32 ret = -EOPNOTSUPP;
++	int ret = -EOPNOTSUPP;
+ 
+ 	might_sleep();
+ 	lockdep_assert_wiphy(local->hw.wiphy);
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 1bad353d8a772b..35c6755b817a83 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1136,7 +1136,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	int result, i;
+ 	enum nl80211_band band;
+ 	int channels, max_bitrates;
+-	bool supp_ht, supp_vht, supp_he, supp_eht;
++	bool supp_ht, supp_vht, supp_he, supp_eht, supp_s1g;
+ 	struct cfg80211_chan_def dflt_chandef = {};
+ 
+ 	if (ieee80211_hw_check(hw, QUEUE_CONTROL) &&
+@@ -1252,6 +1252,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	supp_vht = false;
+ 	supp_he = false;
+ 	supp_eht = false;
++	supp_s1g = false;
+ 	for (band = 0; band < NUM_NL80211_BANDS; band++) {
+ 		const struct ieee80211_sband_iftype_data *iftd;
+ 		struct ieee80211_supported_band *sband;
+@@ -1299,6 +1300,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 			max_bitrates = sband->n_bitrates;
+ 		supp_ht = supp_ht || sband->ht_cap.ht_supported;
+ 		supp_vht = supp_vht || sband->vht_cap.vht_supported;
++		supp_s1g = supp_s1g || sband->s1g_cap.s1g;
+ 
+ 		for_each_sband_iftype_data(sband, i, iftd) {
+ 			u8 he_40_mhz_cap;
+@@ -1432,6 +1434,9 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 		local->scan_ies_len +=
+ 			2 + sizeof(struct ieee80211_vht_cap);
+ 
++	if (supp_s1g)
++		local->scan_ies_len += 2 + sizeof(struct ieee80211_s1g_cap);
++
+ 	/*
+ 	 * HE cap element is variable in size - set len to allow max size */
+ 	if (supp_he) {
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index c6983471dca552..bb4253aa675a61 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -984,13 +984,13 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ 		return false;
+ 	}
+ 
+-	if (mp_opt->deny_join_id0)
+-		WRITE_ONCE(msk->pm.remote_deny_join_id0, true);
+-
+ 	if (unlikely(!READ_ONCE(msk->pm.server_side)))
+ 		pr_warn_once("bogus mpc option on established client sk");
+ 
+ set_fully_established:
++	if (mp_opt->deny_join_id0)
++		WRITE_ONCE(msk->pm.remote_deny_join_id0, true);
++
+ 	mptcp_data_lock((struct sock *)msk);
+ 	__mptcp_subflow_fully_established(msk, subflow, mp_opt);
+ 	mptcp_data_unlock((struct sock *)msk);
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 50aaf259959aea..ce7d42d3bd007b 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -408,6 +408,7 @@ static int mptcp_event_created(struct sk_buff *skb,
+ 			       const struct sock *ssk)
+ {
+ 	int err = nla_put_u32(skb, MPTCP_ATTR_TOKEN, READ_ONCE(msk->token));
++	u16 flags = 0;
+ 
+ 	if (err)
+ 		return err;
+@@ -415,6 +416,12 @@ static int mptcp_event_created(struct sk_buff *skb,
+ 	if (nla_put_u8(skb, MPTCP_ATTR_SERVER_SIDE, READ_ONCE(msk->pm.server_side)))
+ 		return -EMSGSIZE;
+ 
++	if (READ_ONCE(msk->pm.remote_deny_join_id0))
++		flags |= MPTCP_PM_EV_FLAG_DENY_JOIN_ID0;
++
++	if (flags && nla_put_u16(skb, MPTCP_ATTR_FLAGS, flags))
++		return -EMSGSIZE;
++
+ 	return mptcp_event_add_subflow(skb, ssk);
+ }
+ 
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 1063c53850c057..b895e3ecdc9b7c 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -350,6 +350,20 @@ static void mptcp_close_wake_up(struct sock *sk)
+ 		sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
+ }
+ 
++static void mptcp_shutdown_subflows(struct mptcp_sock *msk)
++{
++	struct mptcp_subflow_context *subflow;
++
++	mptcp_for_each_subflow(msk, subflow) {
++		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++		bool slow;
++
++		slow = lock_sock_fast(ssk);
++		tcp_shutdown(ssk, SEND_SHUTDOWN);
++		unlock_sock_fast(ssk, slow);
++	}
++}
++
+ /* called under the msk socket lock */
+ static bool mptcp_pending_data_fin_ack(struct sock *sk)
+ {
+@@ -374,6 +388,7 @@ static void mptcp_check_data_fin_ack(struct sock *sk)
+ 			break;
+ 		case TCP_CLOSING:
+ 		case TCP_LAST_ACK:
++			mptcp_shutdown_subflows(msk);
+ 			mptcp_set_state(sk, TCP_CLOSE);
+ 			break;
+ 		}
+@@ -542,6 +557,7 @@ static bool mptcp_check_data_fin(struct sock *sk)
+ 			mptcp_set_state(sk, TCP_CLOSING);
+ 			break;
+ 		case TCP_FIN_WAIT2:
++			mptcp_shutdown_subflows(msk);
+ 			mptcp_set_state(sk, TCP_CLOSE);
+ 			break;
+ 		default:
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 1802bc5435a1aa..d77a2e374a7ae8 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -882,6 +882,10 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 
+ 			ctx->subflow_id = 1;
+ 			owner = mptcp_sk(ctx->conn);
++
++			if (mp_opt.deny_join_id0)
++				WRITE_ONCE(owner->pm.remote_deny_join_id0, true);
++
+ 			mptcp_pm_new_connection(owner, child, 1);
+ 
+ 			/* with OoO packets we can reach here without ingress
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index 28c1b00221780f..bd861191157b54 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -133,12 +133,15 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
+ 
+ 	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_dma_len,
+ 				&off, PAGE_SIZE);
+-	if (unlikely(ret != ibmr->sg_dma_len))
+-		return ret < 0 ? ret : -EINVAL;
++	if (unlikely(ret != ibmr->sg_dma_len)) {
++		ret = ret < 0 ? ret : -EINVAL;
++		goto out_inc;
++	}
+ 
+-	if (cmpxchg(&frmr->fr_state,
+-		    FRMR_IS_FREE, FRMR_IS_INUSE) != FRMR_IS_FREE)
+-		return -EBUSY;
++	if (cmpxchg(&frmr->fr_state, FRMR_IS_FREE, FRMR_IS_INUSE) != FRMR_IS_FREE) {
++		ret = -EBUSY;
++		goto out_inc;
++	}
+ 
+ 	atomic_inc(&ibmr->ic->i_fastreg_inuse_count);
+ 
+@@ -166,11 +169,10 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
+ 		/* Failure here can be because of -ENOMEM as well */
+ 		rds_transition_frwr_state(ibmr, FRMR_IS_INUSE, FRMR_IS_STALE);
+ 
+-		atomic_inc(&ibmr->ic->i_fastreg_wrs);
+ 		if (printk_ratelimit())
+ 			pr_warn("RDS/IB: %s returned error(%d)\n",
+ 				__func__, ret);
+-		goto out;
++		goto out_inc;
+ 	}
+ 
+ 	/* Wait for the registration to complete in order to prevent an invalid
+@@ -179,8 +181,10 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
+ 	 */
+ 	wait_event(frmr->fr_reg_done, !frmr->fr_reg);
+ 
+-out:
++	return ret;
+ 
++out_inc:
++	atomic_inc(&ibmr->ic->i_fastreg_wrs);
+ 	return ret;
+ }
+ 
+diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
+index 41e657e977618a..cf2dcec6ce5afc 100644
+--- a/net/rfkill/rfkill-gpio.c
++++ b/net/rfkill/rfkill-gpio.c
+@@ -94,10 +94,10 @@ static const struct dmi_system_id rfkill_gpio_deny_table[] = {
+ static int rfkill_gpio_probe(struct platform_device *pdev)
+ {
+ 	struct rfkill_gpio_data *rfkill;
+-	struct gpio_desc *gpio;
++	const char *type_name = NULL;
+ 	const char *name_property;
+ 	const char *type_property;
+-	const char *type_name;
++	struct gpio_desc *gpio;
+ 	int ret;
+ 
+ 	if (dmi_check_system(rfkill_gpio_deny_table))
+diff --git a/net/rxrpc/rxgk.c b/net/rxrpc/rxgk.c
+index 1e19c605bcc829..dce5a3d8a964f8 100644
+--- a/net/rxrpc/rxgk.c
++++ b/net/rxrpc/rxgk.c
+@@ -475,7 +475,7 @@ static int rxgk_verify_packet_integrity(struct rxrpc_call *call,
+ 	struct krb5_buffer metadata;
+ 	unsigned int offset = sp->offset, len = sp->len;
+ 	size_t data_offset = 0, data_len = len;
+-	u32 ac;
++	u32 ac = 0;
+ 	int ret = -ENOMEM;
+ 
+ 	_enter("");
+@@ -499,9 +499,10 @@ static int rxgk_verify_packet_integrity(struct rxrpc_call *call,
+ 	ret = rxgk_verify_mic_skb(gk->krb5, gk->rx_Kc, &metadata,
+ 				  skb, &offset, &len, &ac);
+ 	kfree(hdr);
+-	if (ret == -EPROTO) {
+-		rxrpc_abort_eproto(call, skb, ac,
+-				   rxgk_abort_1_verify_mic_eproto);
++	if (ret < 0) {
++		if (ret != -ENOMEM)
++			rxrpc_abort_eproto(call, skb, ac,
++					   rxgk_abort_1_verify_mic_eproto);
+ 	} else {
+ 		sp->offset = offset;
+ 		sp->len = len;
+@@ -524,15 +525,16 @@ static int rxgk_verify_packet_encrypted(struct rxrpc_call *call,
+ 	struct rxgk_header hdr;
+ 	unsigned int offset = sp->offset, len = sp->len;
+ 	int ret;
+-	u32 ac;
++	u32 ac = 0;
+ 
+ 	_enter("");
+ 
+ 	ret = rxgk_decrypt_skb(gk->krb5, gk->rx_enc, skb, &offset, &len, &ac);
+-	if (ret == -EPROTO)
+-		rxrpc_abort_eproto(call, skb, ac, rxgk_abort_2_decrypt_eproto);
+-	if (ret < 0)
++	if (ret < 0) {
++		if (ret != -ENOMEM)
++			rxrpc_abort_eproto(call, skb, ac, rxgk_abort_2_decrypt_eproto);
+ 		goto error;
++	}
+ 
+ 	if (len < sizeof(hdr)) {
+ 		ret = rxrpc_abort_eproto(call, skb, RXGK_PACKETSHORT,
+diff --git a/net/rxrpc/rxgk_app.c b/net/rxrpc/rxgk_app.c
+index b94b77a1c31780..30275cb5ba3e25 100644
+--- a/net/rxrpc/rxgk_app.c
++++ b/net/rxrpc/rxgk_app.c
+@@ -54,6 +54,10 @@ int rxgk_yfs_decode_ticket(struct rxrpc_connection *conn, struct sk_buff *skb,
+ 
+ 	_enter("");
+ 
++	if (ticket_len < 10 * sizeof(__be32))
++		return rxrpc_abort_conn(conn, skb, RXGK_INCONSISTENCY, -EPROTO,
++					rxgk_abort_resp_short_yfs_tkt);
++
+ 	/* Get the session key length */
+ 	ret = skb_copy_bits(skb, ticket_offset, tmp, sizeof(tmp));
+ 	if (ret < 0)
+@@ -187,7 +191,7 @@ int rxgk_extract_token(struct rxrpc_connection *conn, struct sk_buff *skb,
+ 	struct key *server_key;
+ 	unsigned int ticket_offset, ticket_len;
+ 	u32 kvno, enctype;
+-	int ret, ec;
++	int ret, ec = 0;
+ 
+ 	struct {
+ 		__be32 kvno;
+@@ -195,22 +199,23 @@ int rxgk_extract_token(struct rxrpc_connection *conn, struct sk_buff *skb,
+ 		__be32 token_len;
+ 	} container;
+ 
++	if (token_len < sizeof(container))
++		goto short_packet;
++
+ 	/* Decode the RXGK_TokenContainer object.  This tells us which server
+ 	 * key we should be using.  We can then fetch the key, get the secret
+ 	 * and set up the crypto to extract the token.
+ 	 */
+ 	if (skb_copy_bits(skb, token_offset, &container, sizeof(container)) < 0)
+-		return rxrpc_abort_conn(conn, skb, RXGK_PACKETSHORT, -EPROTO,
+-					rxgk_abort_resp_tok_short);
++		goto short_packet;
+ 
+ 	kvno		= ntohl(container.kvno);
+ 	enctype		= ntohl(container.enctype);
+ 	ticket_len	= ntohl(container.token_len);
+ 	ticket_offset	= token_offset + sizeof(container);
+ 
+-	if (xdr_round_up(ticket_len) > token_len - 3 * 4)
+-		return rxrpc_abort_conn(conn, skb, RXGK_PACKETSHORT, -EPROTO,
+-					rxgk_abort_resp_tok_short);
++	if (xdr_round_up(ticket_len) > token_len - sizeof(container))
++		goto short_packet;
+ 
+ 	_debug("KVNO %u", kvno);
+ 	_debug("ENC  %u", enctype);
+@@ -236,9 +241,11 @@ int rxgk_extract_token(struct rxrpc_connection *conn, struct sk_buff *skb,
+ 			       &ticket_offset, &ticket_len, &ec);
+ 	crypto_free_aead(token_enc);
+ 	token_enc = NULL;
+-	if (ret < 0)
+-		return rxrpc_abort_conn(conn, skb, ec, ret,
+-					rxgk_abort_resp_tok_dec);
++	if (ret < 0) {
++		if (ret != -ENOMEM)
++			return rxrpc_abort_conn(conn, skb, ec, ret,
++						rxgk_abort_resp_tok_dec);
++	}
+ 
+ 	ret = conn->security->default_decode_ticket(conn, skb, ticket_offset,
+ 						    ticket_len, _key);
+@@ -283,4 +290,8 @@ int rxgk_extract_token(struct rxrpc_connection *conn, struct sk_buff *skb,
+ 	 * also come out this way if the ticket decryption fails.
+ 	 */
+ 	return ret;
++
++short_packet:
++	return rxrpc_abort_conn(conn, skb, RXGK_PACKETSHORT, -EPROTO,
++				rxgk_abort_resp_tok_short);
+ }
+diff --git a/net/rxrpc/rxgk_common.h b/net/rxrpc/rxgk_common.h
+index 7370a56559853f..80164d89e19c03 100644
+--- a/net/rxrpc/rxgk_common.h
++++ b/net/rxrpc/rxgk_common.h
+@@ -88,11 +88,16 @@ int rxgk_decrypt_skb(const struct krb5_enctype *krb5,
+ 		*_offset += offset;
+ 		*_len = len;
+ 		break;
++	case -EBADMSG: /* Checksum mismatch. */
+ 	case -EPROTO:
+-	case -EBADMSG:
+ 		*_error_code = RXGK_SEALEDINCON;
+ 		break;
++	case -EMSGSIZE:
++		*_error_code = RXGK_PACKETSHORT;
++		break;
++	case -ENOPKG: /* Would prefer RXGK_BADETYPE, but not available for YFS. */
+ 	default:
++		*_error_code = RXGK_INCONSISTENCY;
+ 		break;
+ 	}
+ 
+@@ -127,11 +132,16 @@ int rxgk_verify_mic_skb(const struct krb5_enctype *krb5,
+ 		*_offset += offset;
+ 		*_len = len;
+ 		break;
++	case -EBADMSG: /* Checksum mismatch */
+ 	case -EPROTO:
+-	case -EBADMSG:
+ 		*_error_code = RXGK_SEALEDINCON;
+ 		break;
++	case -EMSGSIZE:
++		*_error_code = RXGK_PACKETSHORT;
++		break;
++	case -ENOPKG: /* Would prefer RXGK_BADETYPE, but not available for YFS. */
+ 	default:
++		*_error_code = RXGK_INCONSISTENCY;
+ 		break;
+ 	}
+ 
+diff --git a/net/tls/tls.h b/net/tls/tls.h
+index 4e077068e6d98a..e4c42731ce39ae 100644
+--- a/net/tls/tls.h
++++ b/net/tls/tls.h
+@@ -141,6 +141,7 @@ void update_sk_prot(struct sock *sk, struct tls_context *ctx);
+ 
+ int wait_on_pending_writer(struct sock *sk, long *timeo);
+ void tls_err_abort(struct sock *sk, int err);
++void tls_strp_abort_strp(struct tls_strparser *strp, int err);
+ 
+ int init_prot_info(struct tls_prot_info *prot,
+ 		   const struct tls_crypto_info *crypto_info,
+diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
+index d71643b494a1ae..98e12f0ff57e51 100644
+--- a/net/tls/tls_strp.c
++++ b/net/tls/tls_strp.c
+@@ -13,7 +13,7 @@
+ 
+ static struct workqueue_struct *tls_strp_wq;
+ 
+-static void tls_strp_abort_strp(struct tls_strparser *strp, int err)
++void tls_strp_abort_strp(struct tls_strparser *strp, int err)
+ {
+ 	if (strp->stopped)
+ 		return;
+@@ -211,11 +211,17 @@ static int tls_strp_copyin_frag(struct tls_strparser *strp, struct sk_buff *skb,
+ 				struct sk_buff *in_skb, unsigned int offset,
+ 				size_t in_len)
+ {
++	unsigned int nfrag = skb->len / PAGE_SIZE;
+ 	size_t len, chunk;
+ 	skb_frag_t *frag;
+ 	int sz;
+ 
+-	frag = &skb_shinfo(skb)->frags[skb->len / PAGE_SIZE];
++	if (unlikely(nfrag >= skb_shinfo(skb)->nr_frags)) {
++		DEBUG_NET_WARN_ON_ONCE(1);
++		return -EMSGSIZE;
++	}
++
++	frag = &skb_shinfo(skb)->frags[nfrag];
+ 
+ 	len = in_len;
+ 	/* First make sure we got the header */
+@@ -520,10 +526,8 @@ static int tls_strp_read_sock(struct tls_strparser *strp)
+ 	tls_strp_load_anchor_with_queue(strp, inq);
+ 	if (!strp->stm.full_len) {
+ 		sz = tls_rx_msg_size(strp, strp->anchor);
+-		if (sz < 0) {
+-			tls_strp_abort_strp(strp, sz);
++		if (sz < 0)
+ 			return sz;
+-		}
+ 
+ 		strp->stm.full_len = sz;
+ 
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index bac65d0d4e3e1e..daac9fd4be7eb5 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -2474,8 +2474,7 @@ int tls_rx_msg_size(struct tls_strparser *strp, struct sk_buff *skb)
+ 	return data_len + TLS_HEADER_SIZE;
+ 
+ read_failure:
+-	tls_err_abort(strp->sk, ret);
+-
++	tls_strp_abort_strp(strp, ret);
+ 	return ret;
+ }
+ 
+diff --git a/samples/damon/mtier.c b/samples/damon/mtier.c
+index ed6bed8b3d4d99..88156145172f17 100644
+--- a/samples/damon/mtier.c
++++ b/samples/damon/mtier.c
+@@ -27,14 +27,14 @@ module_param(node1_end_addr, ulong, 0600);
+ static int damon_sample_mtier_enable_store(
+ 		const char *val, const struct kernel_param *kp);
+ 
+-static const struct kernel_param_ops enable_param_ops = {
++static const struct kernel_param_ops enabled_param_ops = {
+ 	.set = damon_sample_mtier_enable_store,
+ 	.get = param_get_bool,
+ };
+ 
+-static bool enable __read_mostly;
+-module_param_cb(enable, &enable_param_ops, &enable, 0600);
+-MODULE_PARM_DESC(enable, "Enable of disable DAMON_SAMPLE_MTIER");
++static bool enabled __read_mostly;
++module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
++MODULE_PARM_DESC(enabled, "Enable or disable DAMON_SAMPLE_MTIER");
+ 
+ static struct damon_ctx *ctxs[2];
+ 
+@@ -156,20 +156,23 @@ static bool init_called;
+ static int damon_sample_mtier_enable_store(
+ 		const char *val, const struct kernel_param *kp)
+ {
+-	bool enabled = enable;
++	bool is_enabled = enabled;
+ 	int err;
+ 
+-	err = kstrtobool(val, &enable);
++	err = kstrtobool(val, &enabled);
+ 	if (err)
+ 		return err;
+ 
+-	if (enable == enabled)
++	if (enabled == is_enabled)
+ 		return 0;
+ 
+-	if (enable) {
++	if (!init_called)
++		return 0;
++
++	if (enabled) {
+ 		err = damon_sample_mtier_start();
+ 		if (err)
+-			enable = false;
++			enabled = false;
+ 		return err;
+ 	}
+ 	damon_sample_mtier_stop();
+@@ -181,10 +184,10 @@ static int __init damon_sample_mtier_init(void)
+ 	int err = 0;
+ 
+ 	init_called = true;
+-	if (enable) {
++	if (enabled) {
+ 		err = damon_sample_mtier_start();
+ 		if (err)
+-			enable = false;
++			enabled = false;
+ 	}
+ 	return 0;
+ }
+diff --git a/samples/damon/prcl.c b/samples/damon/prcl.c
+index 5597e6a08ab226..f971e61e6c5c00 100644
+--- a/samples/damon/prcl.c
++++ b/samples/damon/prcl.c
+@@ -17,14 +17,14 @@ module_param(target_pid, int, 0600);
+ static int damon_sample_prcl_enable_store(
+ 		const char *val, const struct kernel_param *kp);
+ 
+-static const struct kernel_param_ops enable_param_ops = {
++static const struct kernel_param_ops enabled_param_ops = {
+ 	.set = damon_sample_prcl_enable_store,
+ 	.get = param_get_bool,
+ };
+ 
+-static bool enable __read_mostly;
+-module_param_cb(enable, &enable_param_ops, &enable, 0600);
+-MODULE_PARM_DESC(enable, "Enable of disable DAMON_SAMPLE_WSSE");
++static bool enabled __read_mostly;
++module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
++MODULE_PARM_DESC(enabled, "Enable or disable DAMON_SAMPLE_PRCL");
+ 
+ static struct damon_ctx *ctx;
+ static struct pid *target_pidp;
+@@ -109,23 +109,28 @@ static void damon_sample_prcl_stop(void)
+ 		put_pid(target_pidp);
+ }
+ 
++static bool init_called;
++
+ static int damon_sample_prcl_enable_store(
+ 		const char *val, const struct kernel_param *kp)
+ {
+-	bool enabled = enable;
++	bool is_enabled = enabled;
+ 	int err;
+ 
+-	err = kstrtobool(val, &enable);
++	err = kstrtobool(val, &enabled);
+ 	if (err)
+ 		return err;
+ 
+-	if (enable == enabled)
++	if (enabled == is_enabled)
++		return 0;
++
++	if (!init_called)
+ 		return 0;
+ 
+-	if (enable) {
++	if (enabled) {
+ 		err = damon_sample_prcl_start();
+ 		if (err)
+-			enable = false;
++			enabled = false;
+ 		return err;
+ 	}
+ 	damon_sample_prcl_stop();
+@@ -134,6 +139,14 @@ static int damon_sample_prcl_enable_store(
+ 
+ static int __init damon_sample_prcl_init(void)
+ {
++	int err = 0;
++
++	init_called = true;
++	if (enabled) {
++		err = damon_sample_prcl_start();
++		if (err)
++			enabled = false;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
+index e941958b103249..d50730ee65a7e7 100644
+--- a/samples/damon/wsse.c
++++ b/samples/damon/wsse.c
+@@ -18,14 +18,14 @@ module_param(target_pid, int, 0600);
+ static int damon_sample_wsse_enable_store(
+ 		const char *val, const struct kernel_param *kp);
+ 
+-static const struct kernel_param_ops enable_param_ops = {
++static const struct kernel_param_ops enabled_param_ops = {
+ 	.set = damon_sample_wsse_enable_store,
+ 	.get = param_get_bool,
+ };
+ 
+-static bool enable __read_mostly;
+-module_param_cb(enable, &enable_param_ops, &enable, 0600);
+-MODULE_PARM_DESC(enable, "Enable or disable DAMON_SAMPLE_WSSE");
++static bool enabled __read_mostly;
++module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
++MODULE_PARM_DESC(enabled, "Enable or disable DAMON_SAMPLE_WSSE");
+ 
+ static struct damon_ctx *ctx;
+ static struct pid *target_pidp;
+@@ -94,20 +94,20 @@ static bool init_called;
+ static int damon_sample_wsse_enable_store(
+ 		const char *val, const struct kernel_param *kp)
+ {
+-	bool enabled = enable;
++	bool is_enabled = enabled;
+ 	int err;
+ 
+-	err = kstrtobool(val, &enable);
++	err = kstrtobool(val, &enabled);
+ 	if (err)
+ 		return err;
+ 
+-	if (enable == enabled)
++	if (enabled == is_enabled)
+ 		return 0;
+ 
+-	if (enable) {
++	if (enabled) {
+ 		err = damon_sample_wsse_start();
+ 		if (err)
+-			enable = false;
++			enabled = false;
+ 		return err;
+ 	}
+ 	damon_sample_wsse_stop();
+@@ -119,10 +119,10 @@ static int __init damon_sample_wsse_init(void)
+ 	int err = 0;
+ 
+ 	init_called = true;
+-	if (enable) {
++	if (enabled) {
+ 		err = damon_sample_wsse_start();
+ 		if (err)
+-			enable = false;
++			enabled = false;
+ 	}
+ 	return err;
+ }
+diff --git a/sound/firewire/motu/motu-hwdep.c b/sound/firewire/motu/motu-hwdep.c
+index 88d1f4b56e4be4..a220ac0c8eb831 100644
+--- a/sound/firewire/motu/motu-hwdep.c
++++ b/sound/firewire/motu/motu-hwdep.c
+@@ -111,7 +111,7 @@ static __poll_t hwdep_poll(struct snd_hwdep *hwdep, struct file *file,
+ 		events = 0;
+ 	spin_unlock_irq(&motu->lock);
+ 
+-	return events | EPOLLOUT;
++	return events;
+ }
+ 
+ static int hwdep_get_info(struct snd_motu *motu, void __user *arg)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8458ca4d8d9dab..4819bd332f0390 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10752,6 +10752,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8992, "HP EliteBook 845 G9", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8994, "HP EliteBook 855 G9", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8995, "HP EliteBook 855 G9", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x89a0, "HP Laptop 15-dw4xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ 	SND_PCI_QUIRK(0x103c, 0x89a4, "HP ProBook 440 G9", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x89a6, "HP ProBook 450 G9", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED),
+diff --git a/sound/soc/amd/acp/acp-i2s.c b/sound/soc/amd/acp/acp-i2s.c
+index 70fa54d568ef68..4d9589b67099ef 100644
+--- a/sound/soc/amd/acp/acp-i2s.c
++++ b/sound/soc/amd/acp/acp-i2s.c
+@@ -72,7 +72,7 @@ static int acp_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 			   unsigned int fmt)
+ {
+ 	struct device *dev = cpu_dai->component->dev;
+-	struct acp_chip_info *chip = dev_get_platdata(dev);
++	struct acp_chip_info *chip = dev_get_drvdata(dev->parent);
+ 	int mode;
+ 
+ 	mode = fmt & SND_SOC_DAIFMT_FORMAT_MASK;
+@@ -196,7 +196,7 @@ static int acp_i2s_hwparams(struct snd_pcm_substream *substream, struct snd_pcm_
+ 	u32 reg_val, fmt_reg, tdm_fmt;
+ 	u32 lrclk_div_val, bclk_div_val;
+ 
+-	chip = dev_get_platdata(dev);
++	chip = dev_get_drvdata(dev->parent);
+ 	rsrc = chip->rsrc;
+ 
+ 	/* These values are as per Hardware Spec */
+@@ -383,7 +383,7 @@ static int acp_i2s_trigger(struct snd_pcm_substream *substream, int cmd, struct
+ {
+ 	struct acp_stream *stream = substream->runtime->private_data;
+ 	struct device *dev = dai->component->dev;
+-	struct acp_chip_info *chip = dev_get_platdata(dev);
++	struct acp_chip_info *chip = dev_get_drvdata(dev->parent);
+ 	struct acp_resource *rsrc = chip->rsrc;
+ 	u32 val, period_bytes, reg_val, ier_val, water_val, buf_size, buf_reg;
+ 
+@@ -513,14 +513,13 @@ static int acp_i2s_trigger(struct snd_pcm_substream *substream, int cmd, struct
+ static int acp_i2s_prepare(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+ 	struct device *dev = dai->component->dev;
+-	struct acp_chip_info *chip = dev_get_platdata(dev);
++	struct acp_chip_info *chip = dev_get_drvdata(dev->parent);
+ 	struct acp_resource *rsrc = chip->rsrc;
+ 	struct acp_stream *stream = substream->runtime->private_data;
+ 	u32 reg_dma_size = 0, reg_fifo_size = 0, reg_fifo_addr = 0;
+ 	u32 phy_addr = 0, acp_fifo_addr = 0, ext_int_ctrl;
+ 	unsigned int dir = substream->stream;
+ 
+-	chip = dev_get_platdata(dev);
+ 	switch (dai->driver->id) {
+ 	case I2S_SP_INSTANCE:
+ 		if (dir == SNDRV_PCM_STREAM_PLAYBACK) {
+@@ -629,7 +628,7 @@ static int acp_i2s_startup(struct snd_pcm_substream *substream, struct snd_soc_d
+ {
+ 	struct acp_stream *stream = substream->runtime->private_data;
+ 	struct device *dev = dai->component->dev;
+-	struct acp_chip_info *chip = dev_get_platdata(dev);
++	struct acp_chip_info *chip = dev_get_drvdata(dev->parent);
+ 	struct acp_resource *rsrc = chip->rsrc;
+ 	unsigned int dir = substream->stream;
+ 	unsigned int irq_bit = 0;
+diff --git a/sound/soc/codecs/sma1307.c b/sound/soc/codecs/sma1307.c
+index b3d401ada17601..2d993428f87e3c 100644
+--- a/sound/soc/codecs/sma1307.c
++++ b/sound/soc/codecs/sma1307.c
+@@ -1737,9 +1737,10 @@ static void sma1307_setting_loaded(struct sma1307_priv *sma1307, const char *fil
+ 	sma1307->set.checksum = data[sma1307->set.header_size - 2];
+ 	sma1307->set.num_mode = data[sma1307->set.header_size - 1];
+ 	num_mode = sma1307->set.num_mode;
+-	sma1307->set.header = devm_kzalloc(sma1307->dev,
+-					   sma1307->set.header_size,
+-					   GFP_KERNEL);
++	sma1307->set.header = devm_kmalloc_array(sma1307->dev,
++						 sma1307->set.header_size,
++						 sizeof(int),
++						 GFP_KERNEL);
+ 	if (!sma1307->set.header) {
+ 		sma1307->set.status = false;
+ 		return;
+diff --git a/sound/soc/codecs/wm8940.c b/sound/soc/codecs/wm8940.c
+index 401ee20897b1ba..94873ea630146b 100644
+--- a/sound/soc/codecs/wm8940.c
++++ b/sound/soc/codecs/wm8940.c
+@@ -220,7 +220,7 @@ static const struct snd_kcontrol_new wm8940_snd_controls[] = {
+ 	SOC_SINGLE_TLV("Digital Capture Volume", WM8940_ADCVOL,
+ 		       0, 255, 0, wm8940_adc_tlv),
+ 	SOC_ENUM("Mic Bias Level", wm8940_mic_bias_level_enum),
+-	SOC_SINGLE_TLV("Capture Boost Volue", WM8940_ADCBOOST,
++	SOC_SINGLE_TLV("Capture Boost Volume", WM8940_ADCBOOST,
+ 		       8, 1, 0, wm8940_capture_boost_vol_tlv),
+ 	SOC_SINGLE_TLV("Speaker Playback Volume", WM8940_SPKVOL,
+ 		       0, 63, 0, wm8940_spk_vol_tlv),
+@@ -693,7 +693,12 @@ static int wm8940_update_clocks(struct snd_soc_dai *dai)
+ 	f = wm8940_get_mclkdiv(priv->mclk, fs256, &mclkdiv);
+ 	if (f != priv->mclk) {
+ 		/* The PLL performs best around 90MHz */
+-		fpll = wm8940_get_mclkdiv(22500000, fs256, &mclkdiv);
++		if (fs256 % 8000)
++			f = 22579200;
++		else
++			f = 24576000;
++
++		fpll = wm8940_get_mclkdiv(f, fs256, &mclkdiv);
+ 	}
+ 
+ 	wm8940_set_dai_pll(dai, 0, 0, priv->mclk, fpll);
+diff --git a/sound/soc/codecs/wm8974.c b/sound/soc/codecs/wm8974.c
+index bdf437a5403fe2..db16d893a23514 100644
+--- a/sound/soc/codecs/wm8974.c
++++ b/sound/soc/codecs/wm8974.c
+@@ -419,10 +419,14 @@ static int wm8974_update_clocks(struct snd_soc_dai *dai)
+ 	fs256 = 256 * priv->fs;
+ 
+ 	f = wm8974_get_mclkdiv(priv->mclk, fs256, &mclkdiv);
+-
+ 	if (f != priv->mclk) {
+ 		/* The PLL performs best around 90MHz */
+-		fpll = wm8974_get_mclkdiv(22500000, fs256, &mclkdiv);
++		if (fs256 % 8000)
++			f = 22579200;
++		else
++			f = 24576000;
++
++		fpll = wm8974_get_mclkdiv(f, fs256, &mclkdiv);
+ 	}
+ 
+ 	wm8974_set_dai_pll(dai, 0, 0, priv->mclk, fpll);
+diff --git a/sound/soc/intel/catpt/pcm.c b/sound/soc/intel/catpt/pcm.c
+index 81a2f0339e0552..ff1fa01acb85b2 100644
+--- a/sound/soc/intel/catpt/pcm.c
++++ b/sound/soc/intel/catpt/pcm.c
+@@ -568,8 +568,9 @@ static const struct snd_pcm_hardware catpt_pcm_hardware = {
+ 				  SNDRV_PCM_INFO_RESUME |
+ 				  SNDRV_PCM_INFO_NO_PERIOD_WAKEUP,
+ 	.formats		= SNDRV_PCM_FMTBIT_S16_LE |
+-				  SNDRV_PCM_FMTBIT_S24_LE |
+ 				  SNDRV_PCM_FMTBIT_S32_LE,
++	.subformats		= SNDRV_PCM_SUBFMTBIT_MSBITS_24 |
++				  SNDRV_PCM_SUBFMTBIT_MSBITS_MAX,
+ 	.period_bytes_min	= PAGE_SIZE,
+ 	.period_bytes_max	= CATPT_BUFFER_MAX_SIZE / CATPT_PCM_PERIODS_MIN,
+ 	.periods_min		= CATPT_PCM_PERIODS_MIN,
+@@ -699,14 +700,18 @@ static struct snd_soc_dai_driver dai_drivers[] = {
+ 		.channels_min = 2,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_48000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
++		.subformats = SNDRV_PCM_SUBFMTBIT_MSBITS_24 |
++			      SNDRV_PCM_SUBFMTBIT_MSBITS_MAX,
+ 	},
+ 	.capture = {
+ 		.stream_name = "Analog Capture",
+ 		.channels_min = 2,
+ 		.channels_max = 4,
+ 		.rates = SNDRV_PCM_RATE_48000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
++		.subformats = SNDRV_PCM_SUBFMTBIT_MSBITS_24 |
++			      SNDRV_PCM_SUBFMTBIT_MSBITS_MAX,
+ 	},
+ },
+ {
+@@ -718,7 +723,9 @@ static struct snd_soc_dai_driver dai_drivers[] = {
+ 		.channels_min = 2,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_8000_192000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
++		.subformats = SNDRV_PCM_SUBFMTBIT_MSBITS_24 |
++			      SNDRV_PCM_SUBFMTBIT_MSBITS_MAX,
+ 	},
+ },
+ {
+@@ -730,7 +737,9 @@ static struct snd_soc_dai_driver dai_drivers[] = {
+ 		.channels_min = 2,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_8000_192000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
++		.subformats = SNDRV_PCM_SUBFMTBIT_MSBITS_24 |
++			      SNDRV_PCM_SUBFMTBIT_MSBITS_MAX,
+ 	},
+ },
+ {
+@@ -742,7 +751,9 @@ static struct snd_soc_dai_driver dai_drivers[] = {
+ 		.channels_min = 2,
+ 		.channels_max = 2,
+ 		.rates = SNDRV_PCM_RATE_48000,
+-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
++		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
++		.subformats = SNDRV_PCM_SUBFMTBIT_MSBITS_24 |
++			      SNDRV_PCM_SUBFMTBIT_MSBITS_MAX,
+ 	},
+ },
+ {
+diff --git a/sound/soc/qcom/qdsp6/audioreach.c b/sound/soc/qcom/qdsp6/audioreach.c
+index 4ebaaf736fb98a..3f5eed5afce55e 100644
+--- a/sound/soc/qcom/qdsp6/audioreach.c
++++ b/sound/soc/qcom/qdsp6/audioreach.c
+@@ -971,6 +971,7 @@ static int audioreach_i2s_set_media_format(struct q6apm_graph *graph,
+ 	param_data->param_id = PARAM_ID_I2S_INTF_CFG;
+ 	param_data->param_size = ic_sz - APM_MODULE_PARAM_DATA_SIZE;
+ 
++	intf_cfg->cfg.lpaif_type = module->hw_interface_type;
+ 	intf_cfg->cfg.intf_idx = module->hw_interface_idx;
+ 	intf_cfg->cfg.sd_line_idx = module->sd_line_idx;
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c b/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
+index a0d90462fd6a38..528756f1332bcf 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
++++ b/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
+@@ -213,8 +213,10 @@ static int q6apm_lpass_dai_prepare(struct snd_pcm_substream *substream, struct s
+ 
+ 	return 0;
+ err:
+-	q6apm_graph_close(dai_data->graph[dai->id]);
+-	dai_data->graph[dai->id] = NULL;
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
++		q6apm_graph_close(dai_data->graph[dai->id]);
++		dai_data->graph[dai->id] = NULL;
++	}
+ 	return rc;
+ }
+ 
+@@ -260,6 +262,7 @@ static const struct snd_soc_dai_ops q6i2s_ops = {
+ 	.shutdown	= q6apm_lpass_dai_shutdown,
+ 	.set_channel_map  = q6dma_set_channel_map,
+ 	.hw_params        = q6dma_hw_params,
++	.set_fmt	= q6i2s_set_fmt,
+ };
+ 
+ static const struct snd_soc_dai_ops q6hdmi_ops = {
+diff --git a/sound/soc/sdca/sdca_device.c b/sound/soc/sdca/sdca_device.c
+index 0244cdcdd109a7..4798ce2c8f0b40 100644
+--- a/sound/soc/sdca/sdca_device.c
++++ b/sound/soc/sdca/sdca_device.c
+@@ -7,6 +7,7 @@
+  */
+ 
+ #include <linux/acpi.h>
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/property.h>
+ #include <linux/soundwire/sdw.h>
+@@ -55,11 +56,30 @@ static bool sdca_device_quirk_rt712_vb(struct sdw_slave *slave)
+ 	return false;
+ }
+ 
++static bool sdca_device_quirk_skip_func_type_patching(struct sdw_slave *slave)
++{
++	const char *vendor, *sku;
++
++	vendor = dmi_get_system_info(DMI_SYS_VENDOR);
++	sku = dmi_get_system_info(DMI_PRODUCT_SKU);
++
++	if (vendor && sku &&
++	    !strcmp(vendor, "Dell Inc.") &&
++	    (!strcmp(sku, "0C62") || !strcmp(sku, "0C63") || !strcmp(sku, "0C6B")) &&
++	    slave->sdca_data.interface_revision == 0x061c &&
++	    slave->id.mfg_id == 0x01fa && slave->id.part_id == 0x4243)
++		return true;
++
++	return false;
++}
++
+ bool sdca_device_quirk_match(struct sdw_slave *slave, enum sdca_quirk quirk)
+ {
+ 	switch (quirk) {
+ 	case SDCA_QUIRKS_RT712_VB:
+ 		return sdca_device_quirk_rt712_vb(slave);
++	case SDCA_QUIRKS_SKIP_FUNC_TYPE_PATCHING:
++		return sdca_device_quirk_skip_func_type_patching(slave);
+ 	default:
+ 		break;
+ 	}
+diff --git a/sound/soc/sdca/sdca_functions.c b/sound/soc/sdca/sdca_functions.c
+index 050f7338aca95a..ea793869c03858 100644
+--- a/sound/soc/sdca/sdca_functions.c
++++ b/sound/soc/sdca/sdca_functions.c
+@@ -89,6 +89,7 @@ static int find_sdca_function(struct acpi_device *adev, void *data)
+ {
+ 	struct fwnode_handle *function_node = acpi_fwnode_handle(adev);
+ 	struct sdca_device_data *sdca_data = data;
++	struct sdw_slave *slave = container_of(sdca_data, struct sdw_slave, sdca_data);
+ 	struct device *dev = &adev->dev;
+ 	struct fwnode_handle *control5; /* used to identify function type */
+ 	const char *function_name;
+@@ -136,11 +137,13 @@ static int find_sdca_function(struct acpi_device *adev, void *data)
+ 		return ret;
+ 	}
+ 
+-	ret = patch_sdca_function_type(sdca_data->interface_revision, &function_type);
+-	if (ret < 0) {
+-		dev_err(dev, "SDCA version %#x invalid function type %d\n",
+-			sdca_data->interface_revision, function_type);
+-		return ret;
++	if (!sdca_device_quirk_match(slave, SDCA_QUIRKS_SKIP_FUNC_TYPE_PATCHING)) {
++		ret = patch_sdca_function_type(sdca_data->interface_revision, &function_type);
++		if (ret < 0) {
++			dev_err(dev, "SDCA version %#x invalid function type %d\n",
++				sdca_data->interface_revision, function_type);
++			return ret;
++		}
+ 	}
+ 
+ 	function_name = get_sdca_function_name(function_type);
+diff --git a/sound/soc/sdca/sdca_regmap.c b/sound/soc/sdca/sdca_regmap.c
+index c41c67c2204a41..ff1f8fe2a39bb7 100644
+--- a/sound/soc/sdca/sdca_regmap.c
++++ b/sound/soc/sdca/sdca_regmap.c
+@@ -196,7 +196,7 @@ int sdca_regmap_mbq_size(struct sdca_function_data *function, unsigned int reg)
+ 
+ 	control = function_find_control(function, reg);
+ 	if (!control)
+-		return false;
++		return -EINVAL;
+ 
+ 	return clamp_val(control->nbits / BITS_PER_BYTE, sizeof(u8), sizeof(u32));
+ }
+diff --git a/sound/soc/sof/intel/hda-stream.c b/sound/soc/sof/intel/hda-stream.c
+index aa6b0247d5c99e..a34f472ef1751f 100644
+--- a/sound/soc/sof/intel/hda-stream.c
++++ b/sound/soc/sof/intel/hda-stream.c
+@@ -890,7 +890,7 @@ int hda_dsp_stream_init(struct snd_sof_dev *sdev)
+ 
+ 	if (num_capture >= SOF_HDA_CAPTURE_STREAMS) {
+ 		dev_err(sdev->dev, "error: too many capture streams %d\n",
+-			num_playback);
++			num_capture);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/sound/usb/qcom/qc_audio_offload.c b/sound/usb/qcom/qc_audio_offload.c
+index a25c5a5316901c..9ad76fff741b8d 100644
+--- a/sound/usb/qcom/qc_audio_offload.c
++++ b/sound/usb/qcom/qc_audio_offload.c
+@@ -538,38 +538,33 @@ static void uaudio_iommu_unmap(enum mem_type mtype, unsigned long iova,
+ 			umap_size, iova, mapped_iova_size);
+ }
+ 
++static int uaudio_iommu_map_prot(bool dma_coherent)
++{
++	int prot = IOMMU_READ | IOMMU_WRITE;
++
++	if (dma_coherent)
++		prot |= IOMMU_CACHE;
++	return prot;
++}
++
+ /**
+- * uaudio_iommu_map() - maps iommu memory for adsp
++ * uaudio_iommu_map_pa() - maps iommu memory for adsp
+  * @mtype: ring type
+  * @dma_coherent: dma coherent
+  * @pa: physical address for ring/buffer
+  * @size: size of memory region
+- * @sgt: sg table for memory region
+  *
+  * Maps the XHCI related resources to a memory region that is assigned to be
+  * used by the adsp.  This will be mapped to the domain, which is created by
+  * the ASoC USB backend driver.
+  *
+  */
+-static unsigned long uaudio_iommu_map(enum mem_type mtype, bool dma_coherent,
+-				      phys_addr_t pa, size_t size,
+-				      struct sg_table *sgt)
++static unsigned long uaudio_iommu_map_pa(enum mem_type mtype, bool dma_coherent,
++					 phys_addr_t pa, size_t size)
+ {
+-	struct scatterlist *sg;
+ 	unsigned long iova = 0;
+-	size_t total_len = 0;
+-	unsigned long iova_sg;
+-	phys_addr_t pa_sg;
+ 	bool map = true;
+-	size_t sg_len;
+-	int prot;
+-	int ret;
+-	int i;
+-
+-	prot = IOMMU_READ | IOMMU_WRITE;
+-
+-	if (dma_coherent)
+-		prot |= IOMMU_CACHE;
++	int prot = uaudio_iommu_map_prot(dma_coherent);
+ 
+ 	switch (mtype) {
+ 	case MEM_EVENT_RING:
+@@ -583,20 +578,41 @@ static unsigned long uaudio_iommu_map(enum mem_type mtype, bool dma_coherent,
+ 				     &uaudio_qdev->xfer_ring_iova_size,
+ 				     &uaudio_qdev->xfer_ring_list, size);
+ 		break;
+-	case MEM_XFER_BUF:
+-		iova = uaudio_get_iova(&uaudio_qdev->curr_xfer_buf_iova,
+-				     &uaudio_qdev->xfer_buf_iova_size,
+-				     &uaudio_qdev->xfer_buf_list, size);
+-		break;
+ 	default:
+ 		dev_err(uaudio_qdev->data->dev, "unknown mem type %d\n", mtype);
+ 	}
+ 
+ 	if (!iova || !map)
+-		goto done;
++		return 0;
++
++	iommu_map(uaudio_qdev->data->domain, iova, pa, size, prot, GFP_KERNEL);
+ 
+-	if (!sgt)
+-		goto skip_sgt_map;
++	return iova;
++}
++
++static unsigned long uaudio_iommu_map_xfer_buf(bool dma_coherent, size_t size,
++					       struct sg_table *sgt)
++{
++	struct scatterlist *sg;
++	unsigned long iova = 0;
++	size_t total_len = 0;
++	unsigned long iova_sg;
++	phys_addr_t pa_sg;
++	size_t sg_len;
++	int prot = uaudio_iommu_map_prot(dma_coherent);
++	int ret;
++	int i;
++
++	prot = IOMMU_READ | IOMMU_WRITE;
++
++	if (dma_coherent)
++		prot |= IOMMU_CACHE;
++
++	iova = uaudio_get_iova(&uaudio_qdev->curr_xfer_buf_iova,
++			       &uaudio_qdev->xfer_buf_iova_size,
++			       &uaudio_qdev->xfer_buf_list, size);
++	if (!iova)
++		goto done;
+ 
+ 	iova_sg = iova;
+ 	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
+@@ -618,11 +634,6 @@ static unsigned long uaudio_iommu_map(enum mem_type mtype, bool dma_coherent,
+ 		uaudio_iommu_unmap(MEM_XFER_BUF, iova, size, total_len);
+ 		iova = 0;
+ 	}
+-	return iova;
+-
+-skip_sgt_map:
+-	iommu_map(uaudio_qdev->data->domain, iova, pa, size, prot, GFP_KERNEL);
+-
+ done:
+ 	return iova;
+ }
+@@ -1020,7 +1031,6 @@ static int uaudio_transfer_buffer_setup(struct snd_usb_substream *subs,
+ 	struct sg_table xfer_buf_sgt;
+ 	dma_addr_t xfer_buf_dma;
+ 	void *xfer_buf;
+-	phys_addr_t xfer_buf_pa;
+ 	u32 len = xfer_buf_len;
+ 	bool dma_coherent;
+ 	dma_addr_t xfer_buf_dma_sysdev;
+@@ -1051,18 +1061,12 @@ static int uaudio_transfer_buffer_setup(struct snd_usb_substream *subs,
+ 	if (!xfer_buf)
+ 		return -ENOMEM;
+ 
+-	/* Remapping is not possible if xfer_buf is outside of linear map */
+-	xfer_buf_pa = virt_to_phys(xfer_buf);
+-	if (WARN_ON(!page_is_ram(PFN_DOWN(xfer_buf_pa)))) {
+-		ret = -ENXIO;
+-		goto unmap_sync;
+-	}
+ 	dma_get_sgtable(subs->dev->bus->sysdev, &xfer_buf_sgt, xfer_buf,
+ 			xfer_buf_dma, len);
+ 
+ 	/* map the physical buffer into sysdev as well */
+-	xfer_buf_dma_sysdev = uaudio_iommu_map(MEM_XFER_BUF, dma_coherent,
+-					       xfer_buf_pa, len, &xfer_buf_sgt);
++	xfer_buf_dma_sysdev = uaudio_iommu_map_xfer_buf(dma_coherent,
++							len, &xfer_buf_sgt);
+ 	if (!xfer_buf_dma_sysdev) {
+ 		ret = -ENOMEM;
+ 		goto unmap_sync;
+@@ -1143,8 +1147,8 @@ uaudio_endpoint_setup(struct snd_usb_substream *subs,
+ 	sg_free_table(sgt);
+ 
+ 	/* data transfer ring */
+-	iova = uaudio_iommu_map(MEM_XFER_RING, dma_coherent, tr_pa,
+-			      PAGE_SIZE, NULL);
++	iova = uaudio_iommu_map_pa(MEM_XFER_RING, dma_coherent, tr_pa,
++				   PAGE_SIZE);
+ 	if (!iova) {
+ 		ret = -ENOMEM;
+ 		goto clear_pa;
+@@ -1207,8 +1211,8 @@ static int uaudio_event_ring_setup(struct snd_usb_substream *subs,
+ 	mem_info->dma = sg_dma_address(sgt->sgl);
+ 	sg_free_table(sgt);
+ 
+-	iova = uaudio_iommu_map(MEM_EVENT_RING, dma_coherent, er_pa,
+-			      PAGE_SIZE, NULL);
++	iova = uaudio_iommu_map_pa(MEM_EVENT_RING, dma_coherent, er_pa,
++				   PAGE_SIZE);
+ 	if (!iova) {
+ 		ret = -ENOMEM;
+ 		goto clear_pa;
+diff --git a/tools/arch/loongarch/include/asm/inst.h b/tools/arch/loongarch/include/asm/inst.h
+index c25b5853181dba..d68fad63c8b732 100644
+--- a/tools/arch/loongarch/include/asm/inst.h
++++ b/tools/arch/loongarch/include/asm/inst.h
+@@ -51,6 +51,10 @@ enum reg2i16_op {
+ 	bgeu_op		= 0x1b,
+ };
+ 
++enum reg3_op {
++	amswapw_op	= 0x70c0,
++};
++
+ struct reg0i15_format {
+ 	unsigned int immediate : 15;
+ 	unsigned int opcode : 17;
+@@ -96,6 +100,13 @@ struct reg2i16_format {
+ 	unsigned int opcode : 6;
+ };
+ 
++struct reg3_format {
++	unsigned int rd : 5;
++	unsigned int rj : 5;
++	unsigned int rk : 5;
++	unsigned int opcode : 17;
++};
++
+ union loongarch_instruction {
+ 	unsigned int word;
+ 	struct reg0i15_format	reg0i15_format;
+@@ -105,6 +116,7 @@ union loongarch_instruction {
+ 	struct reg2i12_format	reg2i12_format;
+ 	struct reg2i14_format	reg2i14_format;
+ 	struct reg2i16_format	reg2i16_format;
++	struct reg3_format	reg3_format;
+ };
+ 
+ #define LOONGARCH_INSN_SIZE	sizeof(union loongarch_instruction)
+diff --git a/tools/objtool/arch/loongarch/decode.c b/tools/objtool/arch/loongarch/decode.c
+index b6fdc68053cc4e..2e555c4060c5e4 100644
+--- a/tools/objtool/arch/loongarch/decode.c
++++ b/tools/objtool/arch/loongarch/decode.c
+@@ -278,6 +278,25 @@ static bool decode_insn_reg2i16_fomat(union loongarch_instruction inst,
+ 	return true;
+ }
+ 
++static bool decode_insn_reg3_fomat(union loongarch_instruction inst,
++				   struct instruction *insn)
++{
++	switch (inst.reg3_format.opcode) {
++	case amswapw_op:
++		if (inst.reg3_format.rd == LOONGARCH_GPR_ZERO &&
++		    inst.reg3_format.rk == LOONGARCH_GPR_RA &&
++		    inst.reg3_format.rj == LOONGARCH_GPR_ZERO) {
++			/* amswap.w $zero, $ra, $zero */
++			insn->type = INSN_BUG;
++		}
++		break;
++	default:
++		return false;
++	}
++
++	return true;
++}
++
+ int arch_decode_instruction(struct objtool_file *file, const struct section *sec,
+ 			    unsigned long offset, unsigned int maxlen,
+ 			    struct instruction *insn)
+@@ -309,11 +328,19 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
+ 		return 0;
+ 	if (decode_insn_reg2i16_fomat(inst, insn))
+ 		return 0;
++	if (decode_insn_reg3_fomat(inst, insn))
++		return 0;
+ 
+-	if (inst.word == 0)
++	if (inst.word == 0) {
++		/* andi $zero, $zero, 0x0 */
+ 		insn->type = INSN_NOP;
+-	else if (inst.reg0i15_format.opcode == break_op) {
+-		/* break */
++	} else if (inst.reg0i15_format.opcode == break_op &&
++		   inst.reg0i15_format.immediate == 0x0) {
++		/* break 0x0 */
++		insn->type = INSN_TRAP;
++	} else if (inst.reg0i15_format.opcode == break_op &&
++		   inst.reg0i15_format.immediate == 0x1) {
++		/* break 0x1 */
+ 		insn->type = INSN_BUG;
+ 	} else if (inst.reg2_format.opcode == ertn_op) {
+ 		/* ertn */
+diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
+index 85b2a93a59ac65..779f6230130af2 100644
+--- a/tools/perf/util/maps.c
++++ b/tools/perf/util/maps.c
+@@ -477,6 +477,7 @@ static int __maps__insert(struct maps *maps, struct map *new)
+ 	}
+ 	/* Insert the value at the end. */
+ 	maps_by_address[nr_maps] = map__get(new);
++	map__set_kmap_maps(new, maps);
+ 	if (maps_by_name)
+ 		maps_by_name[nr_maps] = map__get(new);
+ 
+@@ -502,8 +503,6 @@ static int __maps__insert(struct maps *maps, struct map *new)
+ 	if (map__end(new) < map__start(new))
+ 		RC_CHK_ACCESS(maps)->ends_broken = true;
+ 
+-	map__set_kmap_maps(new, maps);
+-
+ 	return 0;
+ }
+ 
+@@ -891,6 +890,7 @@ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ 		if (before) {
+ 			map__put(maps_by_address[i]);
+ 			maps_by_address[i] = before;
++			map__set_kmap_maps(before, maps);
+ 
+ 			if (maps_by_name) {
+ 				map__put(maps_by_name[ni]);
+@@ -918,6 +918,7 @@ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ 			 */
+ 			map__put(maps_by_address[i]);
+ 			maps_by_address[i] = map__get(new);
++			map__set_kmap_maps(new, maps);
+ 
+ 			if (maps_by_name) {
+ 				map__put(maps_by_name[ni]);
+@@ -942,14 +943,13 @@ static int __maps__fixup_overlap_and_insert(struct maps *maps, struct map *new)
+ 				 */
+ 				map__put(maps_by_address[i]);
+ 				maps_by_address[i] = map__get(new);
++				map__set_kmap_maps(new, maps);
+ 
+ 				if (maps_by_name) {
+ 					map__put(maps_by_name[ni]);
+ 					maps_by_name[ni] = map__get(new);
+ 				}
+ 
+-				map__set_kmap_maps(new, maps);
+-
+ 				check_invariants(maps);
+ 				return err;
+ 			}
+@@ -1019,6 +1019,7 @@ int maps__copy_from(struct maps *dest, struct maps *parent)
+ 				err = unwind__prepare_access(dest, new, NULL);
+ 				if (!err) {
+ 					dest_maps_by_address[i] = new;
++					map__set_kmap_maps(new, dest);
+ 					if (dest_maps_by_name)
+ 						dest_maps_by_name[i] = map__get(new);
+ 					RC_CHK_ACCESS(dest)->nr_maps = i + 1;
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index 4f07ac9fa207cb..b148cadb96d0b7 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -1093,6 +1093,7 @@ int main_loop_s(int listensock)
+ 	struct pollfd polls;
+ 	socklen_t salen;
+ 	int remotesock;
++	int err = 0;
+ 	int fd = 0;
+ 
+ again:
+@@ -1125,7 +1126,7 @@ int main_loop_s(int listensock)
+ 		SOCK_TEST_TCPULP(remotesock, 0);
+ 
+ 		memset(&winfo, 0, sizeof(winfo));
+-		copyfd_io(fd, remotesock, 1, true, &winfo);
++		err = copyfd_io(fd, remotesock, 1, true, &winfo);
+ 	} else {
+ 		perror("accept");
+ 		return 1;
+@@ -1134,10 +1135,10 @@ int main_loop_s(int listensock)
+ 	if (cfg_input)
+ 		close(fd);
+ 
+-	if (--cfg_repeat > 0)
++	if (!err && --cfg_repeat > 0)
+ 		goto again;
+ 
+-	return 0;
++	return err;
+ }
+ 
+ static void init_rng(void)
+@@ -1247,7 +1248,7 @@ void xdisconnect(int fd)
+ 	else
+ 		xerror("bad family");
+ 
+-	strcpy(cmd, "ss -M | grep -q ");
++	strcpy(cmd, "ss -Mnt | grep -q ");
+ 	cmdlen = strlen(cmd);
+ 	if (!inet_ntop(addr.ss_family, raw_addr, &cmd[cmdlen],
+ 		       sizeof(cmd) - cmdlen))
+@@ -1257,7 +1258,7 @@ void xdisconnect(int fd)
+ 
+ 	/*
+ 	 * wait until the pending data is completely flushed and all
+-	 * the MPTCP sockets reached the closed status.
++	 * the sockets reached the closed status.
+ 	 * disconnect will bypass/ignore/drop any pending data.
+ 	 */
+ 	for (i = 0; ; i += msec_sleep) {
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+index e934dd26a59d9b..112c07c4c37a3c 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c
+@@ -667,22 +667,26 @@ static void process_one_client(int fd, int pipefd)
+ 
+ 	do_getsockopts(&s, fd, ret, ret2);
+ 	if (s.mptcpi_rcv_delta != (uint64_t)ret + 1)
+-		xerror("mptcpi_rcv_delta %" PRIu64 ", expect %" PRIu64, s.mptcpi_rcv_delta, ret + 1, s.mptcpi_rcv_delta - ret);
++		xerror("mptcpi_rcv_delta %" PRIu64 ", expect %" PRIu64 ", diff %" PRId64,
++		       s.mptcpi_rcv_delta, ret + 1, s.mptcpi_rcv_delta - (ret + 1));
+ 
+ 	/* be nice when running on top of older kernel */
+ 	if (s.pkt_stats_avail) {
+ 		if (s.last_sample.mptcpi_bytes_sent != ret2)
+-			xerror("mptcpi_bytes_sent %" PRIu64 ", expect %" PRIu64,
++			xerror("mptcpi_bytes_sent %" PRIu64 ", expect %" PRIu64
++			       ", diff %" PRId64,
+ 			       s.last_sample.mptcpi_bytes_sent, ret2,
+ 			       s.last_sample.mptcpi_bytes_sent - ret2);
+ 		if (s.last_sample.mptcpi_bytes_received != ret)
+-			xerror("mptcpi_bytes_received %" PRIu64 ", expect %" PRIu64,
++			xerror("mptcpi_bytes_received %" PRIu64 ", expect %" PRIu64
++			       ", diff %" PRId64,
+ 			       s.last_sample.mptcpi_bytes_received, ret,
+ 			       s.last_sample.mptcpi_bytes_received - ret);
+ 		if (s.last_sample.mptcpi_bytes_acked != ret)
+-			xerror("mptcpi_bytes_acked %" PRIu64 ", expect %" PRIu64,
+-			       s.last_sample.mptcpi_bytes_acked, ret2,
+-			       s.last_sample.mptcpi_bytes_acked - ret2);
++			xerror("mptcpi_bytes_acked %" PRIu64 ", expect %" PRIu64
++			       ", diff %" PRId64,
++			       s.last_sample.mptcpi_bytes_acked, ret,
++			       s.last_sample.mptcpi_bytes_acked - ret);
+ 	}
+ 
+ 	close(fd);
+diff --git a/tools/testing/selftests/net/mptcp/pm_nl_ctl.c b/tools/testing/selftests/net/mptcp/pm_nl_ctl.c
+index 994a556f46c151..93fea3442216c8 100644
+--- a/tools/testing/selftests/net/mptcp/pm_nl_ctl.c
++++ b/tools/testing/selftests/net/mptcp/pm_nl_ctl.c
+@@ -188,6 +188,13 @@ static int capture_events(int fd, int event_group)
+ 					fprintf(stderr, ",error:%u", *(__u8 *)RTA_DATA(attrs));
+ 				else if (attrs->rta_type == MPTCP_ATTR_SERVER_SIDE)
+ 					fprintf(stderr, ",server_side:%u", *(__u8 *)RTA_DATA(attrs));
++				else if (attrs->rta_type == MPTCP_ATTR_FLAGS) {
++					__u16 flags = *(__u16 *)RTA_DATA(attrs);
++
++					/* only print when present, easier */
++					if (flags & MPTCP_PM_EV_FLAG_DENY_JOIN_ID0)
++						fprintf(stderr, ",deny_join_id0:1");
++				}
+ 
+ 				attrs = RTA_NEXT(attrs, msg_len);
+ 			}
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index 333064b0b5ac03..97819e18578f4d 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -201,6 +201,9 @@ make_connection()
+ 		is_v6="v4"
+ 	fi
+ 
++	# set this on the client side only: will not affect the rest
++	ip netns exec "$ns2" sysctl -q net.mptcp.allow_join_initial_addr_port=0
++
+ 	:>"$client_evts"
+ 	:>"$server_evts"
+ 
+@@ -223,23 +226,28 @@ make_connection()
+ 	local client_token
+ 	local client_port
+ 	local client_serverside
++	local client_nojoin
+ 	local server_token
+ 	local server_serverside
++	local server_nojoin
+ 
+ 	client_token=$(mptcp_lib_evts_get_info token "$client_evts")
+ 	client_port=$(mptcp_lib_evts_get_info sport "$client_evts")
+ 	client_serverside=$(mptcp_lib_evts_get_info server_side "$client_evts")
++	client_nojoin=$(mptcp_lib_evts_get_info deny_join_id0 "$client_evts")
+ 	server_token=$(mptcp_lib_evts_get_info token "$server_evts")
+ 	server_serverside=$(mptcp_lib_evts_get_info server_side "$server_evts")
++	server_nojoin=$(mptcp_lib_evts_get_info deny_join_id0 "$server_evts")
+ 
+ 	print_test "Established IP${is_v6} MPTCP Connection ns2 => ns1"
+-	if [ "$client_token" != "" ] && [ "$server_token" != "" ] && [ "$client_serverside" = 0 ] &&
+-		   [ "$server_serverside" = 1 ]
++	if [ "${client_token}" != "" ] && [ "${server_token}" != "" ] &&
++	   [ "${client_serverside}" = 0 ] && [ "${server_serverside}" = 1 ] &&
++	   [ "${client_nojoin:-0}" = 0 ] && [ "${server_nojoin:-0}" = 1 ]
+ 	then
+ 		test_pass
+ 		print_title "Connection info: ${client_addr}:${client_port} -> ${connect_addr}:${app_port}"
+ 	else
+-		test_fail "Expected tokens (c:${client_token} - s:${server_token}) and server (c:${client_serverside} - s:${server_serverside})"
++		test_fail "Expected tokens (c:${client_token} - s:${server_token}), server (c:${client_serverside} - s:${server_serverside}), nojoin (c:${client_nojoin} - s:${server_nojoin})"
+ 		mptcp_lib_result_print_all_tap
+ 		exit ${KSFT_FAIL}
+ 	fi



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02  3:12 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02  3:12 UTC (permalink / raw
  To: gentoo-commits

commit:     a33d94814aa2669eceeecc62ffa12e2b1a73c783
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 03:04:31 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 03:11:03 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a33d9481

Add patch 2101 blk-mq: fix blk_mq_tags double free while nr_requests grown

Ref: https://lore.kernel.org/all/CAFj5m9K+ct=ioJUz8v78Wr_myC7pjVnB1SAKRXc-CLysHV_5ww <AT> mail.gmail.com/
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 ++
 ..._tags_double_free_while_nr_requests_grown.patch | 47 ++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/0000_README b/0000_README
index 48b50ad0..7c89ecd8 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2101_blk-mq_fix_blk_mq_tags_double_free_while_nr_requests_grown.patch
+From:   https://lore.kernel.org/all/CAFj5m9K+ct=ioJUz8v78Wr_myC7pjVnB1SAKRXc-CLysHV_5ww@mail.gmail.com/
+Desc:   blk-mq: fix blk_mq_tags double free while nr_requests grown
+
 Patch:  2901_permit-menuconfig-sorting.patch
 From:   https://lore.kernel.org/
 Desc:   menuconfig: Allow sorting the entries alphabetically

diff --git a/2101_blk-mq_fix_blk_mq_tags_double_free_while_nr_requests_grown.patch b/2101_blk-mq_fix_blk_mq_tags_double_free_while_nr_requests_grown.patch
new file mode 100644
index 00000000..e47b4b2a
--- /dev/null
+++ b/2101_blk-mq_fix_blk_mq_tags_double_free_while_nr_requests_grown.patch
@@ -0,0 +1,47 @@
+From ba28afbd9eff2a6370f23ef4e6a036ab0cfda409 Mon Sep 17 00:00:00 2001
+From: Yu Kuai <yukuai3@huawei.com>
+Date: Thu, 21 Aug 2025 14:06:12 +0800
+Subject: blk-mq: fix blk_mq_tags double free while nr_requests grown
+
+When the user triggers tags growth via the queue sysfs attribute
+nr_requests, hctx->sched_tags will be freed directly and replaced with
+newly allocated tags, see blk_mq_tag_update_depth().
+
+The problem is that hctx->sched_tags comes from elevator->et->tags, and
+et->tags still points to the freed tags, so a later elevator exit will
+try to free the tags again, causing a kernel panic.
+
+Fix this problem by replacing et->tags with the newly allocated tags as well.
+
+Note that there are still some long-term problems that will require
+some refactoring to be fixed thoroughly[1].
+
+[1] https://lore.kernel.org/all/20250815080216.410665-1-yukuai1@huaweicloud.com/
+Fixes: f5a6604f7a44 ("block: fix lockdep warning caused by lock dependency in elv_iosched_store")
+
+Signed-off-by: Yu Kuai <yukuai3@huawei.com>
+Reviewed-by: Ming Lei <ming.lei@redhat.com>
+Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
+Reviewed-by: Hannes Reinecke <hare@suse.de>
+Reviewed-by: Li Nan <linan122@huawei.com>
+Link: https://lore.kernel.org/r/20250821060612.1729939-3-yukuai1@huaweicloud.com
+Signed-off-by: Jens Axboe <axboe@kernel.dk>
+---
+ block/blk-mq-tag.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index d880c50629d612..5cffa5668d0c38 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -622,6 +622,7 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 			return -ENOMEM;
+ 
+ 		blk_mq_free_map_and_rqs(set, *tagsptr, hctx->queue_num);
++		hctx->queue->elevator->et->tags[hctx->queue_num] = new;
+ 		*tagsptr = new;
+ 	} else {
+ 		/*
+-- 
+cgit 1.2.3-korg
+
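To make the double-free pattern concrete, here is a minimal userspace
sketch of the stale-pointer bug the commit message describes; all of the
struct and function names below are illustrative stand-ins, not the real
block layer API:

    /* Sketch: the elevator table keeps its own pointer to the per-hctx
     * tags, so growing the tags must update both owners. */
    #include <stdlib.h>

    struct tags { int depth; };
    struct elevator_tables { struct tags *tags[1]; }; /* like et->tags[] */
    struct hctx { struct tags *sched_tags; };

    static int grow_tags(struct hctx *hctx, struct elevator_tables *et,
                         int new_depth)
    {
            struct tags *new = malloc(sizeof(*new));

            if (!new)
                    return -1;
            new->depth = new_depth;
            free(hctx->sched_tags);   /* the old tags are gone... */
            hctx->sched_tags = new;
            et->tags[0] = new;        /* ...without this update, elevator
                                       * exit would free the stale pointer
                                       * a second time */
            return 0;
    }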



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02  3:28 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02  3:28 UTC (permalink / raw
  To: gentoo-commits

commit:     ca080adda7285e5d7927b1c8a567018a99c55a25
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 03:27:13 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 03:27:13 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ca080add

Add patch crypto: af_alg - Fix incorrect boolean values in af_alg_ctx

Ref: https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-6.16/crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...ix-incorrect-boolean-values-in-af_alg_ctx.patch | 43 ++++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/0000_README b/0000_README
index 3f823104..d1517a09 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
 Desc:   btrfs: don't allow adding block device of less than 1 MB
 
+Patch:  1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-6.16/crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
+Desc:   crypto: af_alg - Fix incorrect boolean values in af_alg_ctx
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch b/1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
new file mode 100644
index 00000000..c505c6a6
--- /dev/null
+++ b/1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
@@ -0,0 +1,43 @@
+From d0ca0df179c4b21e2a6c4a4fb637aa8fa14575cb Mon Sep 17 00:00:00 2001
+From: Eric Biggers <ebiggers@kernel.org>
+Date: Wed, 24 Sep 2025 13:18:22 -0700
+Subject: crypto: af_alg - Fix incorrect boolean values in af_alg_ctx
+
+From: Eric Biggers <ebiggers@kernel.org>
+
+commit d0ca0df179c4b21e2a6c4a4fb637aa8fa14575cb upstream.
+
+Commit 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in
+af_alg_sendmsg") changed some fields from bool to 1-bit bitfields of
+type u32.
+
+However, some assignments to these fields, specifically 'more' and
+'merge', assign values greater than 1.  These relied on C's implicit
+conversion to bool, such that zero becomes false and nonzero becomes
+true.
+
+With 1-bit bitfields of type u32, the value is instead taken mod 2,
+resulting in 0 being assigned in some cases when 1 was intended.
+
+Fix this by restoring the bool type.
+
+Fixes: 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers <ebiggers@kernel.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/crypto/if_alg.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -152,7 +152,7 @@ struct af_alg_ctx {
+ 	size_t used;
+ 	atomic_t rcvused;
+ 
+-	u32		more:1,
++	bool		more:1,
+ 			merge:1,
+ 			enc:1,
+ 			write:1,
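The effect is easy to reproduce outside the kernel; a small standalone
C illustration (hypothetical struct names, not the af_alg ones) of the
difference between the two bitfield types:

    #include <stdbool.h>
    #include <stdio.h>

    struct u32_flags  { unsigned int more : 1; }; /* like the buggy field */
    struct bool_flags { bool more : 1; };         /* like the fixed field */

    int main(void)
    {
            struct u32_flags a = { 0 };
            struct bool_flags b = { 0 };

            a.more = 2; /* truncated to the low bit: 2 % 2 == 0 */
            b.more = 2; /* boolean conversion: nonzero becomes 1 */

            printf("u32:1 -> %u, bool:1 -> %u\n", a.more, b.more); /* 0, 1 */
            return 0;
    }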



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02  3:28 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02  3:28 UTC (permalink / raw
  To: gentoo-commits

commit:     a933280f70611c0dd026f824c5f52feaecd8803b
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 03:23:52 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 03:26:41 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a933280f

Add patch btrfs: don't allow adding block device of less than 1 MB

Ref: https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 ++
 ...-allow-adding-block-device-of-less-than-1.patch | 54 ++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/0000_README b/0000_README
index 7c89ecd8..3f823104 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-6.16.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.9
 
+Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
+Desc:   btrfs: don't allow adding block device of less than 1 MB
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch b/1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
new file mode 100644
index 00000000..e03e21a2
--- /dev/null
+++ b/1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
@@ -0,0 +1,54 @@
+From b4cbca440070641199920a2d73a6abd95c9a9ec7 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 2 Sep 2025 11:34:10 +0100
+Subject: btrfs: don't allow adding block device of less than 1 MB
+
+From: Mark Harmstone <mark@harmstone.com>
+
+[ Upstream commit 3d1267475b94b3df7a61e4ea6788c7c5d9e473c4 ]
+
+Commit 15ae0410c37a79 ("btrfs-progs: add error handling for
+device_get_partition_size_fd_stat()") in btrfs-progs inadvertently
+changed it so that if the BLKGETSIZE64 ioctl on a block device returned
+a size of 0, this was no longer seen as an error condition.
+
+Unfortunately this is how disconnected NBD devices behave, meaning that
+with btrfs-progs 6.16 it's now possible to add a device you can't
+remove:
+
+  # btrfs device add /dev/nbd0 /root/temp
+  # btrfs device remove /dev/nbd0 /root/temp
+  ERROR: error removing device '/dev/nbd0': Invalid argument
+
+This check should always have been done kernel-side anyway, so add a
+check in btrfs_init_new_device() that the new device doesn't have a size
+less than BTRFS_DEVICE_RANGE_RESERVED (i.e. 1 MB).
+
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Mark Harmstone <mark@harmstone.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ fs/btrfs/volumes.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f475b4b7c4578..817d3ef501ec4 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2714,6 +2714,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 		goto error;
+ 	}
+ 
++	if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) {
++		ret = -EINVAL;
++		goto error;
++	}
++
+ 	if (fs_devices->seeding) {
+ 		seeding_dev = true;
+ 		down_write(&sb->s_umount);
+-- 
+2.51.0
+
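The kernel-side guard boils down to a size comparison made before the
device is accepted; a hedged sketch of that shape, with stand-in names
for the btrfs constant and helper:

    #include <stdint.h>
    #include <errno.h>

    /* Mirrors BTRFS_DEVICE_RANGE_RESERVED: the first 1 MB is reserved. */
    #define DEVICE_RANGE_RESERVED (1024 * 1024)

    /* Reject devices too small to hold even the reserved range, e.g. a
     * disconnected NBD device reporting a size of 0. */
    static int check_new_device_size(uint64_t nr_bytes)
    {
            if (nr_bytes <= DEVICE_RANGE_RESERVED)
                    return -EINVAL;
            return 0;
    }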



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02 13:25 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02 13:25 UTC (permalink / raw
  To: gentoo-commits

commit:     e4f805bfd5c7a925273bf12a99dcb57f33f2ff0a
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 13:24:50 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 13:24:50 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e4f805bf

Linux patch 6.16.10

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README              |   10 +-
 1009_linux-6.16.10.patch | 6562 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6565 insertions(+), 7 deletions(-)

diff --git a/0000_README b/0000_README
index d1517a09..d6f79fa1 100644
--- a/0000_README
+++ b/0000_README
@@ -79,13 +79,9 @@ Patch:  1008_linux-6.16.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.9
 
-Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
-Desc:   btrfs: don't allow adding block device of less than 1 MB
-
-Patch:  1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-6.16/crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
-Desc:   crypto: af_alg - Fix incorrect boolean values in af_alg_ctx
+Patch:  1009_linux-6.16.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.10
 
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/

diff --git a/1009_linux-6.16.10.patch b/1009_linux-6.16.10.patch
new file mode 100644
index 00000000..4371790d
--- /dev/null
+++ b/1009_linux-6.16.10.patch
@@ -0,0 +1,6562 @@
+diff --git a/Documentation/admin-guide/laptops/lg-laptop.rst b/Documentation/admin-guide/laptops/lg-laptop.rst
+index 67fd6932cef4ff..c4dd534f91edd1 100644
+--- a/Documentation/admin-guide/laptops/lg-laptop.rst
++++ b/Documentation/admin-guide/laptops/lg-laptop.rst
+@@ -48,8 +48,8 @@ This value is reset to 100 when the kernel boots.
+ Fan mode
+ --------
+ 
+-Writing 1/0 to /sys/devices/platform/lg-laptop/fan_mode disables/enables
+-the fan silent mode.
++Writing 0/1/2 to /sys/devices/platform/lg-laptop/fan_mode sets fan mode to
++Optimal/Silent/Performance respectively.
+ 
+ 
+ USB charge
+diff --git a/Makefile b/Makefile
+index aef2cb6ea99d8b..19856af4819a53 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts b/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
+index ce0d6514eeb571..e4794ccb8e413f 100644
+--- a/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
++++ b/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
+@@ -66,8 +66,10 @@ &gmac1 {
+ 	mdio0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		phy0: ethernet-phy@0 {
+-			reg = <0>;
++		compatible = "snps,dwmac-mdio";
++
++		phy0: ethernet-phy@4 {
++			reg = <4>;
+ 			rxd0-skew-ps = <0>;
+ 			rxd1-skew-ps = <0>;
+ 			rxd2-skew-ps = <0>;
+diff --git a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
+index d4e0b8150a84ce..cf26e2ceaaa074 100644
+--- a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
++++ b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
+@@ -38,7 +38,7 @@ sound {
+ 		simple-audio-card,mclk-fs = <256>;
+ 
+ 		simple-audio-card,cpu {
+-			sound-dai = <&audio0 0>;
++			sound-dai = <&audio0>;
+ 		};
+ 
+ 		simple-audio-card,codec {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 948b88cf5e9dff..305c2912e90f74 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -298,7 +298,7 @@ thermal-zones {
+ 		cpu-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <2000>;
+-			thermal-sensors = <&tmu 0>;
++			thermal-sensors = <&tmu 1>;
+ 			trips {
+ 				cpu_alert0: trip0 {
+ 					temperature = <85000>;
+@@ -328,7 +328,7 @@ map0 {
+ 		soc-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <2000>;
+-			thermal-sensors = <&tmu 1>;
++			thermal-sensors = <&tmu 0>;
+ 			trips {
+ 				soc_alert0: trip0 {
+ 					temperature = <85000>;
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi b/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
+index ad0ab34b66028c..bd42bfbe408bbe 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
+@@ -152,11 +152,12 @@ expander0_pins: cp0-expander0-pins {
+ 
+ /* SRDS #0 - SATA on M.2 connector */
+ &cp0_sata0 {
+-	phys = <&cp0_comphy0 1>;
+ 	status = "okay";
+ 
+-	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
++	sata-port@1 {
++		phys = <&cp0_comphy0 1>;
++		status = "okay";
++	};
+ };
+ 
+ /* microSD */
+diff --git a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+index 47234d0858dd21..338853d3b179bb 100644
+--- a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
++++ b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+@@ -563,11 +563,13 @@ &cp1_rtc {
+ 
+ /* SRDS #1 - SATA on M.2 (J44) */
+ &cp1_sata0 {
+-	phys = <&cp1_comphy1 0>;
+ 	status = "okay";
+ 
+ 	/* only port 0 is available */
+-	/delete-node/ sata-port@1;
++	sata-port@0 {
++		phys = <&cp1_comphy1 0>;
++		status = "okay";
++	};
+ };
+ 
+ &cp1_syscon0 {
+diff --git a/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts b/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
+index 0f53745a6fa0d8..6f237d3542b910 100644
+--- a/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
++++ b/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
+@@ -413,7 +413,13 @@ fixed-link {
+ /* SRDS #0,#1,#2,#3 - PCIe */
+ &cp0_pcie0 {
+ 	num-lanes = <4>;
+-	phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, <&cp0_comphy2 0>, <&cp0_comphy3 0>;
++	/*
++	 * The mvebu-comphy driver does not currently know how to pass correct
++	 * lane-count to ATF while configuring the serdes lanes.
++	 * Rely on bootloader configuration only.
++	 *
++	 * phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, <&cp0_comphy2 0>, <&cp0_comphy3 0>;
++	 */
+ 	status = "okay";
+ };
+ 
+@@ -475,7 +481,13 @@ &cp1_eth0 {
+ /* SRDS #0,#1 - PCIe */
+ &cp1_pcie0 {
+ 	num-lanes = <2>;
+-	phys = <&cp1_comphy0 0>, <&cp1_comphy1 0>;
++	/*
++	 * The mvebu-comphy driver does not currently know how to pass correct
++	 * lane-count to ATF while configuring the serdes lanes.
++	 * Rely on bootloader configuration only.
++	 *
++	 * phys = <&cp1_comphy0 0>, <&cp1_comphy1 0>;
++	 */
+ 	status = "okay";
+ };
+ 
+@@ -512,10 +524,9 @@ &cp1_sata0 {
+ 	status = "okay";
+ 
+ 	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
+-
+ 	sata-port@1 {
+ 		phys = <&cp1_comphy3 1>;
++		status = "okay";
+ 	};
+ };
+ 
+@@ -631,9 +642,8 @@ &cp2_sata0 {
+ 	status = "okay";
+ 
+ 	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
+-
+ 	sata-port@1 {
++		status = "okay";
+ 		phys = <&cp2_comphy3 1>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi b/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
+index afc041c1c448c3..bb2bb47fd77c12 100644
+--- a/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
+@@ -137,6 +137,14 @@ &ap_sdhci0 {
+ 	pinctrl-0 = <&ap_mmc0_pins>;
+ 	pinctrl-names = "default";
+ 	vqmmc-supply = <&v_1_8>;
++	/*
++	 * Not stable in HS modes - phy needs "more calibration", so disable
++	 * UHS (by preventing voltage switch), SDR104, SDR50 and DDR50 modes.
++	 */
++	no-1-8-v;
++	no-sd;
++	no-sdio;
++	non-removable;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
+index 4fedc50cce8c86..11940c77f2bd01 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
+@@ -42,9 +42,8 @@ analog-sound {
+ 		simple-audio-card,bitclock-master = <&masterdai>;
+ 		simple-audio-card,format = "i2s";
+ 		simple-audio-card,frame-master = <&masterdai>;
+-		simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_LOW>;
++		simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_HIGH>;
+ 		simple-audio-card,mclk-fs = <256>;
+-		simple-audio-card,pin-switches = "Headphones";
+ 		simple-audio-card,routing =
+ 			"Headphones", "LOUT1",
+ 			"Headphones", "ROUT1",
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 5bd5aae60d5369..4d9fe4ea78affa 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -964,6 +964,23 @@ static inline int pudp_test_and_clear_young(struct vm_area_struct *vma,
+ 	return ptep_test_and_clear_young(vma, address, (pte_t *)pudp);
+ }
+ 
++#define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR
++static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
++					    unsigned long address,  pud_t *pudp)
++{
++#ifdef CONFIG_SMP
++	pud_t pud = __pud(xchg(&pudp->pud, 0));
++#else
++	pud_t pud = *pudp;
++
++	pud_clear(pudp);
++#endif
++
++	page_table_check_pud_clear(mm, pud);
++
++	return pud;
++}
++
+ static inline int pud_young(pud_t pud)
+ {
+ 	return pte_young(pud_pte(pud));
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 874c9b264d6f0c..a530806ec5b025 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -26,7 +26,6 @@ config X86_64
+ 	depends on 64BIT
+ 	# Options that are inherently 64-bit kernel only:
+ 	select ARCH_HAS_GIGANTIC_PAGE
+-	select ARCH_HAS_PTDUMP
+ 	select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
+ 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
+ 	select ARCH_SUPPORTS_PER_VMA_LOCK
+@@ -101,6 +100,7 @@ config X86
+ 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PMEM_API		if X86_64
+ 	select ARCH_HAS_PREEMPT_LAZY
++	select ARCH_HAS_PTDUMP
+ 	select ARCH_HAS_PTE_DEVMAP		if X86_64
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_HW_PTE_YOUNG
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index 6c79ee7c0957a7..21041898157a1f 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -231,6 +231,16 @@ static inline bool topology_is_primary_thread(unsigned int cpu)
+ }
+ #define topology_is_primary_thread topology_is_primary_thread
+ 
++int topology_get_primary_thread(unsigned int cpu);
++
++static inline bool topology_is_core_online(unsigned int cpu)
++{
++	int pcpu = topology_get_primary_thread(cpu);
++
++	return pcpu >= 0 ? cpu_online(pcpu) : false;
++}
++#define topology_is_core_online topology_is_core_online
++
+ #else /* CONFIG_SMP */
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index e35ccdc84910f5..6073a16628f9e4 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -372,6 +372,19 @@ unsigned int topology_unit_count(u32 apicid, enum x86_topology_domains which_uni
+ 	return topo_unit_count(lvlid, at_level, apic_maps[which_units].map);
+ }
+ 
++#ifdef CONFIG_SMP
++int topology_get_primary_thread(unsigned int cpu)
++{
++	u32 apic_id = cpuid_to_apicid[cpu];
++
++	/*
++	 * Get the core domain level APIC id, which is the primary thread
++	 * and return the CPU number assigned to it.
++	 */
++	return topo_lookup_cpuid(topo_apicid(apic_id, TOPO_CORE_DOMAIN));
++}
++#endif
++
+ #ifdef CONFIG_ACPI_HOTPLUG_CPU
+ /**
+  * topology_hotplug_apic - Handle a physical hotplugged APIC after boot
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 628f5b633b61fe..b2da1cda4cebd1 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2956,6 +2956,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 			goto err_null_driver;
+ 	}
+ 
++	/*
++	 * Mark support for the scheduler's frequency invariance engine for
++	 * drivers that implement target(), target_index() or fast_switch().
++	 */
++	if (!cpufreq_driver->setpolicy) {
++		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++		pr_debug("cpufreq: supports frequency invariance\n");
++	}
++
+ 	ret = subsys_interface_register(&cpufreq_interface);
+ 	if (ret)
+ 		goto err_boost_unreg;
+@@ -2977,21 +2986,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	hp_online = ret;
+ 	ret = 0;
+ 
+-	/*
+-	 * Mark support for the scheduler's frequency invariance engine for
+-	 * drivers that implement target(), target_index() or fast_switch().
+-	 */
+-	if (!cpufreq_driver->setpolicy) {
+-		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+-		pr_debug("supports frequency invariance");
+-	}
+-
+ 	pr_debug("driver %s up and running\n", driver_data->name);
+ 	goto out;
+ 
+ err_if_unreg:
+ 	subsys_interface_unregister(&cpufreq_interface);
+ err_boost_unreg:
++	if (!cpufreq_driver->setpolicy)
++		static_branch_disable_cpuslocked(&cpufreq_freq_invariance);
+ 	remove_boost_sysfs_file();
+ err_null_driver:
+ 	write_lock_irqsave(&cpufreq_driver_lock, flags);
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index bd04980009a467..6a81c3fd4c8609 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -41,7 +41,7 @@
+ /*
+  * ABI version history is documented in linux/firewire-cdev.h.
+  */
+-#define FW_CDEV_KERNEL_VERSION			5
++#define FW_CDEV_KERNEL_VERSION			6
+ #define FW_CDEV_VERSION_EVENT_REQUEST2		4
+ #define FW_CDEV_VERSION_ALLOCATE_REGION_END	4
+ #define FW_CDEV_VERSION_AUTO_FLUSH_ISO_OVERFLOW	5
+diff --git a/drivers/gpio/gpio-regmap.c b/drivers/gpio/gpio-regmap.c
+index 87c4225784cfae..b3b84a404485eb 100644
+--- a/drivers/gpio/gpio-regmap.c
++++ b/drivers/gpio/gpio-regmap.c
+@@ -274,7 +274,7 @@ struct gpio_regmap *gpio_regmap_register(const struct gpio_regmap_config *config
+ 	if (!chip->ngpio) {
+ 		ret = gpiochip_get_ngpios(chip, chip->parent);
+ 		if (ret)
+-			return ERR_PTR(ret);
++			goto err_free_gpio;
+ 	}
+ 
+ 	/* if not set, assume there is only one register */
+diff --git a/drivers/gpio/gpiolib-acpi-quirks.c b/drivers/gpio/gpiolib-acpi-quirks.c
+index c13545dce3492d..bfb04e67c4bc87 100644
+--- a/drivers/gpio/gpiolib-acpi-quirks.c
++++ b/drivers/gpio/gpiolib-acpi-quirks.c
+@@ -344,6 +344,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.ignore_interrupt = "AMDI0030:00@8",
+ 		},
+ 	},
++	{
++		/*
++		 * Spurious wakeups from TP_ATTN# pin
++		 * Found in BIOS 5.35
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/4482
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt PX13"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "ASCP1A00:00@8",
++		},
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 3a3eca5b4c40b6..01d611d7ee66ac 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -4605,6 +4605,23 @@ static struct gpio_desc *gpiod_find_by_fwnode(struct fwnode_handle *fwnode,
+ 	return desc;
+ }
+ 
++static struct gpio_desc *gpiod_fwnode_lookup(struct fwnode_handle *fwnode,
++					     struct device *consumer,
++					     const char *con_id,
++					     unsigned int idx,
++					     enum gpiod_flags *flags,
++					     unsigned long *lookupflags)
++{
++	struct gpio_desc *desc;
++
++	desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx, flags, lookupflags);
++	if (gpiod_not_found(desc) && !IS_ERR_OR_NULL(fwnode))
++		desc = gpiod_find_by_fwnode(fwnode->secondary, consumer, con_id,
++					    idx, flags, lookupflags);
++
++	return desc;
++}
++
+ struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+ 					 struct fwnode_handle *fwnode,
+ 					 const char *con_id,
+@@ -4623,8 +4640,8 @@ struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+ 	int ret = 0;
+ 
+ 	scoped_guard(srcu, &gpio_devices_srcu) {
+-		desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx,
+-					    &flags, &lookupflags);
++		desc = gpiod_fwnode_lookup(fwnode, consumer, con_id, idx,
++					   &flags, &lookupflags);
+ 		if (gpiod_not_found(desc) && platform_lookup_allowed) {
+ 			/*
+ 			 * Either we are not using DT or ACPI, or their lookup
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 260165bbe3736d..b16cce7c22c373 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -213,19 +213,35 @@ int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
+ 	spin_lock(&kfd_mem_limit.mem_limit_lock);
+ 
+ 	if (kfd_mem_limit.system_mem_used + system_mem_needed >
+-	    kfd_mem_limit.max_system_mem_limit)
++	    kfd_mem_limit.max_system_mem_limit) {
+ 		pr_debug("Set no_system_mem_limit=1 if using shared memory\n");
++		if (!no_system_mem_limit) {
++			ret = -ENOMEM;
++			goto release;
++		}
++	}
+ 
+-	if ((kfd_mem_limit.system_mem_used + system_mem_needed >
+-	     kfd_mem_limit.max_system_mem_limit && !no_system_mem_limit) ||
+-	    (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
+-	     kfd_mem_limit.max_ttm_mem_limit) ||
+-	    (adev && xcp_id >= 0 && adev->kfd.vram_used[xcp_id] + vram_needed >
+-	     vram_size - reserved_for_pt - reserved_for_ras - atomic64_read(&adev->vram_pin_size))) {
++	if (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
++		kfd_mem_limit.max_ttm_mem_limit) {
+ 		ret = -ENOMEM;
+ 		goto release;
+ 	}
+ 
++	/*if is_app_apu is false and apu_prefer_gtt is true, it is an APU with
++	 * carve out < gtt. In that case, VRAM allocation will go to gtt domain, skip
++	 * VRAM check since ttm_mem_limit check already cover this allocation
++	 */
++
++	if (adev && xcp_id >= 0 && (!adev->apu_prefer_gtt || adev->gmc.is_app_apu)) {
++		uint64_t vram_available =
++			vram_size - reserved_for_pt - reserved_for_ras -
++			atomic64_read(&adev->vram_pin_size);
++		if (adev->kfd.vram_used[xcp_id] + vram_needed > vram_available) {
++			ret = -ENOMEM;
++			goto release;
++		}
++	}
++
+ 	/* Update memory accounting by decreasing available system
+ 	 * memory, TTM memory and GPU memory as computed above
+ 	 */
+@@ -1626,11 +1642,15 @@ size_t amdgpu_amdkfd_get_available_memory(struct amdgpu_device *adev,
+ 	uint64_t vram_available, system_mem_available, ttm_mem_available;
+ 
+ 	spin_lock(&kfd_mem_limit.mem_limit_lock);
+-	vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
+-		- adev->kfd.vram_used_aligned[xcp_id]
+-		- atomic64_read(&adev->vram_pin_size)
+-		- reserved_for_pt
+-		- reserved_for_ras;
++	if (adev->apu_prefer_gtt && !adev->gmc.is_app_apu)
++		vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
++			- adev->kfd.vram_used_aligned[xcp_id];
++	else
++		vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
++			- adev->kfd.vram_used_aligned[xcp_id]
++			- atomic64_read(&adev->vram_pin_size)
++			- reserved_for_pt
++			- reserved_for_ras;
+ 
+ 	if (adev->apu_prefer_gtt) {
+ 		system_mem_available = no_system_mem_limit ?
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 4ec73f33535ebf..720b20e842ba43 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1587,7 +1587,8 @@ static int kfd_dev_create_p2p_links(void)
+ 			break;
+ 		if (!dev->gpu || !dev->gpu->adev ||
+ 		    (dev->gpu->kfd->hive_id &&
+-		     dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id))
++		     dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id &&
++		     amdgpu_xgmi_get_is_sharing_enabled(dev->gpu->adev, new_dev->gpu->adev)))
+ 			goto next;
+ 
+ 		/* check if node(s) is/are peer accessible in one direction or bi-direction */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 58ea351dd48b5d..fa24bcae3c5fc4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2035,6 +2035,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 
+ 	dc_hardware_init(adev->dm.dc);
+ 
++	adev->dm.restore_backlight = true;
++
+ 	adev->dm.hpd_rx_offload_wq = hpd_rx_irq_create_workqueue(adev);
+ 	if (!adev->dm.hpd_rx_offload_wq) {
+ 		drm_err(adev_to_drm(adev), "amdgpu: failed to create hpd rx offload workqueue.\n");
+@@ -3396,6 +3398,7 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 		dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
+ 
+ 		dc_resume(dm->dc);
++		adev->dm.restore_backlight = true;
+ 
+ 		amdgpu_dm_irq_resume_early(adev);
+ 
+@@ -9801,7 +9804,6 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 	bool mode_set_reset_required = false;
+ 	u32 i;
+ 	struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
+-	bool set_backlight_level = false;
+ 
+ 	/* Disable writeback */
+ 	for_each_old_connector_in_state(state, connector, old_con_state, i) {
+@@ -9921,7 +9923,6 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 			acrtc->hw_mode = new_crtc_state->mode;
+ 			crtc->hwmode = new_crtc_state->mode;
+ 			mode_set_reset_required = true;
+-			set_backlight_level = true;
+ 		} else if (modereset_required(new_crtc_state)) {
+ 			drm_dbg_atomic(dev,
+ 				       "Atomic commit: RESET. crtc id %d:[%p]\n",
+@@ -9978,13 +9979,16 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 	 * to fix a flicker issue.
+ 	 * It will cause the dm->actual_brightness is not the current panel brightness
+ 	 * level. (the dm->brightness is the correct panel level)
+-	 * So we set the backlight level with dm->brightness value after set mode
++	 * So we set the backlight level with dm->brightness value after initial
++	 * set mode. Use restore_backlight flag to avoid setting backlight level
++	 * for every subsequent mode set.
+ 	 */
+-	if (set_backlight_level) {
++	if (dm->restore_backlight) {
+ 		for (i = 0; i < dm->num_of_edps; i++) {
+ 			if (dm->backlight_dev[i])
+ 				amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
+ 		}
++		dm->restore_backlight = false;
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index d7d92f9911e465..47abef63686ea5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -610,6 +610,13 @@ struct amdgpu_display_manager {
+ 	 */
+ 	u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
+ 
++	/**
++	 * @restore_backlight:
++	 *
++	 * Flag to indicate whether to restore backlight after modeset.
++	 */
++	bool restore_backlight;
++
+ 	/**
+ 	 * @aux_hpd_discon_quirk:
+ 	 *
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 7dfbfb18593c12..f037f2d83400b9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1292,7 +1292,6 @@ union surface_update_flags {
+ 		uint32_t in_transfer_func_change:1;
+ 		uint32_t input_csc_change:1;
+ 		uint32_t coeff_reduction_change:1;
+-		uint32_t output_tf_change:1;
+ 		uint32_t pixel_format_change:1;
+ 		uint32_t plane_size_change:1;
+ 		uint32_t gamut_remap_change:1;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 454e362ff096aa..c0127d8b5b3961 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1990,10 +1990,8 @@ static void dcn20_program_pipe(
+ 	 * updating on slave planes
+ 	 */
+ 	if (pipe_ctx->update_flags.bits.enable ||
+-		pipe_ctx->update_flags.bits.plane_changed ||
+-		pipe_ctx->stream->update_flags.bits.out_tf ||
+-		(pipe_ctx->plane_state &&
+-			pipe_ctx->plane_state->update_flags.bits.output_tf_change))
++	    pipe_ctx->update_flags.bits.plane_changed ||
++	    pipe_ctx->stream->update_flags.bits.out_tf)
+ 		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+ 
+ 	/* If the pipe has been enabled or has a different opp, we
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index c4177a9a662fac..c68d01f3786026 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -2289,10 +2289,8 @@ void dcn401_program_pipe(
+ 	 * updating on slave planes
+ 	 */
+ 	if (pipe_ctx->update_flags.bits.enable ||
+-		pipe_ctx->update_flags.bits.plane_changed ||
+-		pipe_ctx->stream->update_flags.bits.out_tf ||
+-		(pipe_ctx->plane_state &&
+-			pipe_ctx->plane_state->update_flags.bits.output_tf_change))
++	    pipe_ctx->update_flags.bits.plane_changed ||
++	    pipe_ctx->stream->update_flags.bits.out_tf)
+ 		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+ 
+ 	/* If the pipe has been enabled or has a different opp, we
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index 19c04687b0fe1f..8e650a02c5287b 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -134,7 +134,7 @@ static int ast_astdp_read_edid_block(void *data, u8 *buf, unsigned int block, si
+ 			 * 3. The Delays are often longer a lot when system resume from S3/S4.
+ 			 */
+ 			if (j)
+-				mdelay(j + 1);
++				msleep(j + 1);
+ 
+ 			/* Wait for EDID offset to show up in mirror register */
+ 			vgacrd7 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd7);
+diff --git a/drivers/gpu/drm/gma500/oaktrail_hdmi.c b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+index 1cf39436912776..c0feca58511df3 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_hdmi.c
++++ b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+@@ -726,8 +726,8 @@ void oaktrail_hdmi_teardown(struct drm_device *dev)
+ 
+ 	if (hdmi_dev) {
+ 		pdev = hdmi_dev->dev;
+-		pci_set_drvdata(pdev, NULL);
+ 		oaktrail_hdmi_i2c_exit(pdev);
++		pci_set_drvdata(pdev, NULL);
+ 		iounmap(hdmi_dev->regs);
+ 		kfree(hdmi_dev);
+ 		pci_dev_put(pdev);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index d58f8fc3732658..55b8bfcf364aec 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -593,8 +593,9 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
+ 			enum transcoder master;
+ 
+ 			master = crtc_state->mst_master_transcoder;
+-			drm_WARN_ON(display->drm,
+-				    master == INVALID_TRANSCODER);
++			if (drm_WARN_ON(display->drm,
++					master == INVALID_TRANSCODER))
++				master = TRANSCODER_A;
+ 			temp |= TRANS_DDI_MST_TRANSPORT_SELECT(master);
+ 		}
+ 	} else {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index f1ec3b02f15a00..07cd67baa81bfc 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -789,6 +789,8 @@ static const struct panfrost_compatible amlogic_data = {
+ 	.vendor_quirk = panfrost_gpu_amlogic_quirk,
+ };
+ 
++static const char * const mediatek_pm_domains[] = { "core0", "core1", "core2",
++						    "core3", "core4" };
+ /*
+  * The old data with two power supplies for MT8183 is here only to
+  * keep retro-compatibility with older devicetrees, as DVFS will
+@@ -797,51 +799,53 @@ static const struct panfrost_compatible amlogic_data = {
+  * On new devicetrees please use the _b variant with a single and
+  * coupled regulators instead.
+  */
+-static const char * const mediatek_mt8183_supplies[] = { "mali", "sram", NULL };
+-static const char * const mediatek_mt8183_pm_domains[] = { "core0", "core1", "core2" };
++static const char * const legacy_supplies[] = { "mali", "sram", NULL };
+ static const struct panfrost_compatible mediatek_mt8183_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_supplies) - 1,
+-	.supply_names = mediatek_mt8183_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(legacy_supplies) - 1,
++	.supply_names = legacy_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ };
+ 
+-static const char * const mediatek_mt8183_b_supplies[] = { "mali", NULL };
+ static const struct panfrost_compatible mediatek_mt8183_b_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ };
+ 
+-static const char * const mediatek_mt8186_pm_domains[] = { "core0", "core1" };
+ static const struct panfrost_compatible mediatek_mt8186_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8186_pm_domains),
+-	.pm_domain_names = mediatek_mt8186_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 2,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ };
+ 
+-/* MT8188 uses the same power domains and power supplies as MT8183 */
+ static const struct panfrost_compatible mediatek_mt8188_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ 	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
+ };
+ 
+-static const char * const mediatek_mt8192_supplies[] = { "mali", NULL };
+-static const char * const mediatek_mt8192_pm_domains[] = { "core0", "core1", "core2",
+-							   "core3", "core4" };
+ static const struct panfrost_compatible mediatek_mt8192_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8192_supplies) - 1,
+-	.supply_names = mediatek_mt8192_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8192_pm_domains),
+-	.pm_domain_names = mediatek_mt8192_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 5,
++	.pm_domain_names = mediatek_pm_domains,
++	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
++	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
++};
++
++static const struct panfrost_compatible mediatek_mt8370_data = {
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 2,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ 	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
+ };
+@@ -868,6 +872,7 @@ static const struct of_device_id dt_match[] = {
+ 	{ .compatible = "mediatek,mt8186-mali", .data = &mediatek_mt8186_data },
+ 	{ .compatible = "mediatek,mt8188-mali", .data = &mediatek_mt8188_data },
+ 	{ .compatible = "mediatek,mt8192-mali", .data = &mediatek_mt8192_data },
++	{ .compatible = "mediatek,mt8370-mali", .data = &mediatek_mt8370_data },
+ 	{ .compatible = "allwinner,sun50i-h616-mali", .data = &allwinner_h616_data },
+ 	{}
+ };
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 43ee57728de543..e927d80d6a2af9 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -886,8 +886,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
+ 	if (IS_ERR_OR_NULL(queue))
+ 		return;
+ 
+-	if (queue->entity.fence_context)
+-		drm_sched_entity_destroy(&queue->entity);
++	drm_sched_entity_destroy(&queue->entity);
+ 
+ 	if (queue->scheduler.ops)
+ 		drm_sched_fini(&queue->scheduler);
+@@ -3558,11 +3557,6 @@ int panthor_group_destroy(struct panthor_file *pfile, u32 group_handle)
+ 	if (!group)
+ 		return -EINVAL;
+ 
+-	for (u32 i = 0; i < group->queue_count; i++) {
+-		if (group->queues[i])
+-			drm_sched_entity_destroy(&group->queues[i]->entity);
+-	}
+-
+ 	mutex_lock(&sched->reset.lock);
+ 	mutex_lock(&sched->lock);
+ 	group->destroyed = true;
+diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+index 4d9896e14649c0..448afb86e05c7d 100644
+--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+@@ -117,7 +117,6 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_ENTER_S_STATE = 0x501,
+ 	XE_GUC_ACTION_EXIT_S_STATE = 0x502,
+ 	XE_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
+-	XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV = 0x509,
+ 	XE_GUC_ACTION_SCHED_CONTEXT = 0x1000,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
+@@ -143,7 +142,6 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
+ 	XE_GUC_ACTION_SET_DEVICE_ENGINE_ACTIVITY_BUFFER = 0x550C,
+ 	XE_GUC_ACTION_SET_FUNCTION_ENGINE_ACTIVITY_BUFFER = 0x550D,
+-	XE_GUC_ACTION_OPT_IN_FEATURE_KLV = 0x550E,
+ 	XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR = 0x6000,
+ 	XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC = 0x6002,
+ 	XE_GUC_ACTION_PAGE_FAULT_RES_DESC = 0x6003,
+@@ -242,7 +240,4 @@ enum xe_guc_g2g_type {
+ #define XE_G2G_DEREGISTER_TILE	REG_GENMASK(15, 12)
+ #define XE_G2G_DEREGISTER_TYPE	REG_GENMASK(11, 8)
+ 
+-/* invalid type for XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR */
+-#define XE_GUC_CAT_ERR_TYPE_INVALID 0xdeadbeef
+-
+ #endif
+diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+index 89034bc97ec5a4..7de8f827281fcd 100644
+--- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+@@ -16,8 +16,6 @@
+  *  +===+=======+==============================================================+
+  *  | 0 | 31:16 | **KEY** - KLV key identifier                                 |
+  *  |   |       |   - `GuC Self Config KLVs`_                                  |
+- *  |   |       |   - `GuC Opt In Feature KLVs`_                               |
+- *  |   |       |   - `GuC Scheduling Policies KLVs`_                          |
+  *  |   |       |   - `GuC VGT Policy KLVs`_                                   |
+  *  |   |       |   - `GuC VF Configuration KLVs`_                             |
+  *  |   |       |                                                              |
+@@ -126,44 +124,6 @@ enum  {
+ 	GUC_CONTEXT_POLICIES_KLV_NUM_IDS = 5,
+ };
+ 
+-/**
+- * DOC: GuC Opt In Feature KLVs
+- *
+- * `GuC KLV`_ keys available for use with OPT_IN_FEATURE_KLV
+- *
+- *  _`GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE` : 0x4001
+- *      Adds an extra dword to the XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR G2H
+- *      containing the type of the CAT error. On HW that does not support
+- *      reporting the CAT error type, the extra dword is set to 0xdeadbeef.
+- */
+-
+-#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_KEY 0x4001
+-#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_LEN 0u
+-
+-/**
+- * DOC: GuC Scheduling Policies KLVs
+- *
+- * `GuC KLV`_ keys available for use with UPDATE_SCHEDULING_POLICIES_KLV.
+- *
+- * _`GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD` : 0x1001
+- *      Some platforms do not allow concurrent execution of RCS and CCS
+- *      workloads from different address spaces. By default, the GuC prioritizes
+- *      RCS submissions over CCS ones, which can lead to CCS workloads being
+- *      significantly (or completely) starved of execution time. This KLV allows
+- *      the driver to specify a quantum (in ms) and a ratio (percentage value
+- *      between 0 and 100), and the GuC will prioritize the CCS for that
+- *      percentage of each quantum. For example, specifying 100ms and 30% will
+- *      make the GuC prioritize the CCS for 30ms of every 100ms.
+- *      Note that this does not necessarly mean that RCS and CCS engines will
+- *      only be active for their percentage of the quantum, as the restriction
+- *      only kicks in if both classes are fully busy with non-compatible address
+- *      spaces; i.e., if one engine is idle or running the same address space,
+- *      a pending job on the other engine will still be submitted to the HW no
+- *      matter what the ratio is
+- */
+-#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_KEY	0x1001
+-#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_LEN	2u
+-
+ /**
+  * DOC: GuC VGT Policy KLVs
+  *
+diff --git a/drivers/gpu/drm/xe/xe_bo_evict.c b/drivers/gpu/drm/xe/xe_bo_evict.c
+index ed3746d32b27b1..4620201c72399d 100644
+--- a/drivers/gpu/drm/xe/xe_bo_evict.c
++++ b/drivers/gpu/drm/xe/xe_bo_evict.c
+@@ -158,8 +158,8 @@ int xe_bo_evict_all(struct xe_device *xe)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.kernel_bo_present,
+-				    &xe->pinned.late.evicted, xe_bo_evict_pinned);
++	ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.external,
++				    &xe->pinned.late.external, xe_bo_evict_pinned);
+ 
+ 	if (!ret)
+ 		ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.kernel_bo_present,
+diff --git a/drivers/gpu/drm/xe/xe_configfs.c b/drivers/gpu/drm/xe/xe_configfs.c
+index 9a2b96b111ef54..2b591ed055612a 100644
+--- a/drivers/gpu/drm/xe/xe_configfs.c
++++ b/drivers/gpu/drm/xe/xe_configfs.c
+@@ -244,7 +244,7 @@ int __init xe_configfs_init(void)
+ 	return 0;
+ }
+ 
+-void __exit xe_configfs_exit(void)
++void xe_configfs_exit(void)
+ {
+ 	configfs_unregister_subsystem(&xe_configfs);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
+index b9440f8c781e3b..652da4d294c0b2 100644
+--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
++++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
+@@ -166,7 +166,7 @@ int xe_device_sysfs_init(struct xe_device *xe)
+ 			return ret;
+ 	}
+ 
+-	if (xe->info.platform == XE_BATTLEMAGE) {
++	if (xe->info.platform == XE_BATTLEMAGE && !IS_SRIOV_VF(xe)) {
+ 		ret = sysfs_create_files(&dev->kobj, auto_link_downgrade_attrs);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index eaf7569a7c1d1e..e3517ce2e18c14 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -41,7 +41,6 @@
+ #include "xe_gt_topology.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_pc.h"
+-#include "xe_guc_submit.h"
+ #include "xe_hw_fence.h"
+ #include "xe_hw_engine_class_sysfs.h"
+ #include "xe_irq.h"
+@@ -98,7 +97,7 @@ void xe_gt_sanitize(struct xe_gt *gt)
+ 	 * FIXME: if xe_uc_sanitize is called here, on TGL driver will not
+ 	 * reload
+ 	 */
+-	xe_guc_submit_disable(&gt->uc.guc);
++	gt->uc.guc.submission_state.enabled = false;
+ }
+ 
+ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
+diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
+index b9d21fdaad48ba..bac5471a1a7806 100644
+--- a/drivers/gpu/drm/xe/xe_guc.c
++++ b/drivers/gpu/drm/xe/xe_guc.c
+@@ -29,7 +29,6 @@
+ #include "xe_guc_db_mgr.h"
+ #include "xe_guc_engine_activity.h"
+ #include "xe_guc_hwconfig.h"
+-#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_log.h"
+ #include "xe_guc_pc.h"
+ #include "xe_guc_relay.h"
+@@ -571,57 +570,6 @@ static int guc_g2g_start(struct xe_guc *guc)
+ 	return err;
+ }
+ 
+-static int __guc_opt_in_features_enable(struct xe_guc *guc, u64 addr, u32 num_dwords)
+-{
+-	u32 action[] = {
+-		XE_GUC_ACTION_OPT_IN_FEATURE_KLV,
+-		lower_32_bits(addr),
+-		upper_32_bits(addr),
+-		num_dwords
+-	};
+-
+-	return xe_guc_ct_send_block(&guc->ct, action, ARRAY_SIZE(action));
+-}
+-
+-#define OPT_IN_MAX_DWORDS 16
+-int xe_guc_opt_in_features_enable(struct xe_guc *guc)
+-{
+-	struct xe_device *xe = guc_to_xe(guc);
+-	CLASS(xe_guc_buf, buf)(&guc->buf, OPT_IN_MAX_DWORDS);
+-	u32 count = 0;
+-	u32 *klvs;
+-	int ret;
+-
+-	if (!xe_guc_buf_is_valid(buf))
+-		return -ENOBUFS;
+-
+-	klvs = xe_guc_buf_cpu_ptr(buf);
+-
+-	/*
+-	 * The extra CAT error type opt-in was added in GuC v70.17.0, which maps
+-	 * to compatibility version v1.7.0.
+-	 * Note that the GuC allows enabling this KLV even on platforms that do
+-	 * not support the extra type; in such case the returned type variable
+-	 * will be set to a known invalid value which we can check against.
+-	 */
+-	if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 7, 0))
+-		klvs[count++] = PREP_GUC_KLV_TAG(OPT_IN_FEATURE_EXT_CAT_ERR_TYPE);
+-
+-	if (count) {
+-		xe_assert(xe, count <= OPT_IN_MAX_DWORDS);
+-
+-		ret = __guc_opt_in_features_enable(guc, xe_guc_buf_flush(buf), count);
+-		if (ret < 0) {
+-			xe_gt_err(guc_to_gt(guc),
+-				  "failed to enable GuC opt-in features: %pe\n",
+-				  ERR_PTR(ret));
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ static void guc_fini_hw(void *arg)
+ {
+ 	struct xe_guc *guc = arg;
+@@ -815,17 +763,15 @@ int xe_guc_post_load_init(struct xe_guc *guc)
+ 
+ 	xe_guc_ads_populate_post_load(&guc->ads);
+ 
+-	ret = xe_guc_opt_in_features_enable(guc);
+-	if (ret)
+-		return ret;
+-
+ 	if (xe_guc_g2g_wanted(guc_to_xe(guc))) {
+ 		ret = guc_g2g_start(guc);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	return xe_guc_submit_enable(guc);
++	guc->submission_state.enabled = true;
++
++	return 0;
+ }
+ 
+ int xe_guc_reset(struct xe_guc *guc)
+@@ -1519,7 +1465,7 @@ void xe_guc_sanitize(struct xe_guc *guc)
+ {
+ 	xe_uc_fw_sanitize(&guc->fw);
+ 	xe_guc_ct_disable(&guc->ct);
+-	xe_guc_submit_disable(guc);
++	guc->submission_state.enabled = false;
+ }
+ 
+ int xe_guc_reset_prepare(struct xe_guc *guc)
+diff --git a/drivers/gpu/drm/xe/xe_guc.h b/drivers/gpu/drm/xe/xe_guc.h
+index 4a66575f017d2d..58338be4455856 100644
+--- a/drivers/gpu/drm/xe/xe_guc.h
++++ b/drivers/gpu/drm/xe/xe_guc.h
+@@ -33,7 +33,6 @@ int xe_guc_reset(struct xe_guc *guc);
+ int xe_guc_upload(struct xe_guc *guc);
+ int xe_guc_min_load_for_hwconfig(struct xe_guc *guc);
+ int xe_guc_enable_communication(struct xe_guc *guc);
+-int xe_guc_opt_in_features_enable(struct xe_guc *guc);
+ int xe_guc_suspend(struct xe_guc *guc);
+ void xe_guc_notify(struct xe_guc *guc);
+ int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr);
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 18ddbb7b98a15b..45a21af1269276 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -32,7 +32,6 @@
+ #include "xe_guc_ct.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_id_mgr.h"
+-#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_submit_types.h"
+ #include "xe_hw_engine.h"
+ #include "xe_hw_fence.h"
+@@ -317,71 +316,6 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
+ 	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+ }
+ 
+-/*
+- * Given that we want to guarantee enough RCS throughput to avoid missing
+- * frames, we set the yield policy to 20% of each 80ms interval.
+- */
+-#define RC_YIELD_DURATION	80	/* in ms */
+-#define RC_YIELD_RATIO		20	/* in percent */
+-static u32 *emit_render_compute_yield_klv(u32 *emit)
+-{
+-	*emit++ = PREP_GUC_KLV_TAG(SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD);
+-	*emit++ = RC_YIELD_DURATION;
+-	*emit++ = RC_YIELD_RATIO;
+-
+-	return emit;
+-}
+-
+-#define SCHEDULING_POLICY_MAX_DWORDS 16
+-static int guc_init_global_schedule_policy(struct xe_guc *guc)
+-{
+-	u32 data[SCHEDULING_POLICY_MAX_DWORDS];
+-	u32 *emit = data;
+-	u32 count = 0;
+-	int ret;
+-
+-	if (GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 1, 0))
+-		return 0;
+-
+-	*emit++ = XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV;
+-
+-	if (CCS_MASK(guc_to_gt(guc)))
+-		emit = emit_render_compute_yield_klv(emit);
+-
+-	count = emit - data;
+-	if (count > 1) {
+-		xe_assert(guc_to_xe(guc), count <= SCHEDULING_POLICY_MAX_DWORDS);
+-
+-		ret = xe_guc_ct_send_block(&guc->ct, data, count);
+-		if (ret < 0) {
+-			xe_gt_err(guc_to_gt(guc),
+-				  "failed to enable GuC sheduling policies: %pe\n",
+-				  ERR_PTR(ret));
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-int xe_guc_submit_enable(struct xe_guc *guc)
+-{
+-	int ret;
+-
+-	ret = guc_init_global_schedule_policy(guc);
+-	if (ret)
+-		return ret;
+-
+-	guc->submission_state.enabled = true;
+-
+-	return 0;
+-}
+-
+-void xe_guc_submit_disable(struct xe_guc *guc)
+-{
+-	guc->submission_state.enabled = false;
+-}
+-
+ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa_count)
+ {
+ 	int i;
+@@ -2154,16 +2088,12 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	struct xe_gt *gt = guc_to_gt(guc);
+ 	struct xe_exec_queue *q;
+ 	u32 guc_id;
+-	u32 type = XE_GUC_CAT_ERR_TYPE_INVALID;
+ 
+-	if (unlikely(!len || len > 2))
++	if (unlikely(len < 1))
+ 		return -EPROTO;
+ 
+ 	guc_id = msg[0];
+ 
+-	if (len == 2)
+-		type = msg[1];
+-
+ 	if (guc_id == GUC_ID_UNKNOWN) {
+ 		/*
+ 		 * GuC uses GUC_ID_UNKNOWN if it can not map the CAT fault to any PF/VF
+@@ -2177,19 +2107,8 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	if (unlikely(!q))
+ 		return -EPROTO;
+ 
+-	/*
+-	 * The type is HW-defined and changes based on platform, so we don't
+-	 * decode it in the kernel and only check if it is valid.
+-	 * See bspec 54047 and 72187 for details.
+-	 */
+-	if (type != XE_GUC_CAT_ERR_TYPE_INVALID)
+-		xe_gt_dbg(gt,
+-			  "Engine memory CAT error [%u]: class=%s, logical_mask: 0x%x, guc_id=%d",
+-			  type, xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+-	else
+-		xe_gt_dbg(gt,
+-			  "Engine memory CAT error: class=%s, logical_mask: 0x%x, guc_id=%d",
+-			  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
++	xe_gt_dbg(gt, "Engine memory cat error: engine_class=%s, logical_mask: 0x%x, guc_id=%d",
++		  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+ 
+ 	trace_xe_exec_queue_memory_cat_error(q);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
+index 0d126b807c1041..9b71a986c6ca69 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.h
++++ b/drivers/gpu/drm/xe/xe_guc_submit.h
+@@ -13,8 +13,6 @@ struct xe_exec_queue;
+ struct xe_guc;
+ 
+ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids);
+-int xe_guc_submit_enable(struct xe_guc *guc);
+-void xe_guc_submit_disable(struct xe_guc *guc);
+ 
+ int xe_guc_submit_reset_prepare(struct xe_guc *guc);
+ void xe_guc_submit_reset_wait(struct xe_guc *guc);
+diff --git a/drivers/gpu/drm/xe/xe_uc.c b/drivers/gpu/drm/xe/xe_uc.c
+index 5c45b0f072a4c2..3a8751a8b92dde 100644
+--- a/drivers/gpu/drm/xe/xe_uc.c
++++ b/drivers/gpu/drm/xe/xe_uc.c
+@@ -165,10 +165,6 @@ static int vf_uc_init_hw(struct xe_uc *uc)
+ 
+ 	uc->guc.submission_state.enabled = true;
+ 
+-	err = xe_guc_opt_in_features_enable(&uc->guc);
+-	if (err)
+-		return err;
+-
+ 	err = xe_gt_record_default_lrcs(uc_to_gt(uc));
+ 	if (err)
+ 		return err;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index 3438d392920fad..8dae9a77668536 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -39,8 +39,12 @@ int amd_sfh_get_report(struct hid_device *hid, int report_id, int report_type)
+ 	struct amdtp_hid_data *hid_data = hid->driver_data;
+ 	struct amdtp_cl_data *cli_data = hid_data->cli_data;
+ 	struct request_list *req_list = &cli_data->req_list;
++	struct amd_input_data *in_data = cli_data->in_data;
++	struct amd_mp2_dev *mp2;
+ 	int i;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	for (i = 0; i < cli_data->num_hid_devices; i++) {
+ 		if (cli_data->hid_sensor_hubs[i] == hid) {
+ 			struct request_list *new = kzalloc(sizeof(*new), GFP_KERNEL);
+@@ -75,6 +79,8 @@ void amd_sfh_work(struct work_struct *work)
+ 	u8 report_id, node_type;
+ 	u8 report_size = 0;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	req_node = list_last_entry(&req_list->list, struct request_list, list);
+ 	list_del(&req_node->list);
+ 	current_index = req_node->current_index;
+@@ -83,7 +89,6 @@ void amd_sfh_work(struct work_struct *work)
+ 	node_type = req_node->report_type;
+ 	kfree(req_node);
+ 
+-	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
+ 	mp2_ops = mp2->mp2_ops;
+ 	if (node_type == HID_FEATURE_REPORT) {
+ 		report_size = mp2_ops->get_feat_rep(sensor_index, report_id,
+@@ -107,6 +112,8 @@ void amd_sfh_work(struct work_struct *work)
+ 	cli_data->cur_hid_dev = current_index;
+ 	cli_data->sensor_requested_cnt[current_index] = 0;
+ 	amdtp_hid_wakeup(cli_data->hid_sensor_hubs[current_index]);
++	if (!list_empty(&req_list->list))
++		schedule_delayed_work(&cli_data->work, 0);
+ }
+ 
+ void amd_sfh_work_buffer(struct work_struct *work)
+@@ -117,9 +124,10 @@ void amd_sfh_work_buffer(struct work_struct *work)
+ 	u8 report_size;
+ 	int i;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	for (i = 0; i < cli_data->num_hid_devices; i++) {
+ 		if (cli_data->sensor_sts[i] == SENSOR_ENABLED) {
+-			mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
+ 			report_size = mp2->mp2_ops->get_in_rep(i, cli_data->sensor_idx[i],
+ 							       cli_data->report_id[i], in_data);
+ 			hid_input_report(cli_data->hid_sensor_hubs[i], HID_INPUT_REPORT,
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_common.h b/drivers/hid/amd-sfh-hid/amd_sfh_common.h
+index f44a3bb2fbd4fe..78f830c133e5cd 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_common.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_common.h
+@@ -10,6 +10,7 @@
+ #ifndef AMD_SFH_COMMON_H
+ #define AMD_SFH_COMMON_H
+ 
++#include <linux/mutex.h>
+ #include <linux/pci.h>
+ #include "amd_sfh_hid.h"
+ 
+@@ -59,6 +60,8 @@ struct amd_mp2_dev {
+ 	u32 mp2_acs;
+ 	struct sfh_dev_status dev_en;
+ 	struct work_struct work;
++	/* mp2 to protect data */
++	struct mutex lock;
+ 	u8 init_done;
+ 	u8 rver;
+ };
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 1c1fd63330c939..9a669c18a132fb 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -462,6 +462,10 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (!privdata->cl_data)
+ 		return -ENOMEM;
+ 
++	rc = devm_mutex_init(&pdev->dev, &privdata->lock);
++	if (rc)
++		return rc;
++
+ 	privdata->sfh1_1_ops = (const struct amd_sfh1_1_ops *)id->driver_data;
+ 	if (privdata->sfh1_1_ops) {
+ 		if (boot_cpu_data.x86 >= 0x1A)
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index d27dcfb2b9e4e1..8db9d4e7c3b0b2 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -974,7 +974,10 @@ static int asus_input_mapping(struct hid_device *hdev,
+ 		case 0xc4: asus_map_key_clear(KEY_KBDILLUMUP);		break;
+ 		case 0xc5: asus_map_key_clear(KEY_KBDILLUMDOWN);		break;
+ 		case 0xc7: asus_map_key_clear(KEY_KBDILLUMTOGGLE);	break;
++		case 0x4e: asus_map_key_clear(KEY_FN_ESC);		break;
++		case 0x7e: asus_map_key_clear(KEY_EMOJI_PICKER);	break;
+ 
++		case 0x8b: asus_map_key_clear(KEY_PROG1);	break; /* ProArt Creator Hub key */
+ 		case 0x6b: asus_map_key_clear(KEY_F21);		break; /* ASUS touchpad toggle */
+ 		case 0x38: asus_map_key_clear(KEY_PROG1);	break; /* ROG key */
+ 		case 0xba: asus_map_key_clear(KEY_PROG2);	break; /* Fn+C ASUS Splendid */
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index 234fa82eab0795..b5f2b6356f512a 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -229,10 +229,12 @@ static int cp2112_gpio_set_unlocked(struct cp2112_device *dev,
+ 	ret = hid_hw_raw_request(hdev, CP2112_GPIO_SET, buf,
+ 				 CP2112_GPIO_SET_LENGTH, HID_FEATURE_REPORT,
+ 				 HID_REQ_SET_REPORT);
+-	if (ret < 0)
++	if (ret != CP2112_GPIO_SET_LENGTH) {
+ 		hid_err(hdev, "error setting GPIO values: %d\n", ret);
++		return ret < 0 ? ret : -EIO;
++	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int cp2112_gpio_set(struct gpio_chip *chip, unsigned int offset,
+@@ -309,9 +311,7 @@ static int cp2112_gpio_direction_output(struct gpio_chip *chip,
+ 	 * Set gpio value when output direction is already set,
+ 	 * as specified in AN495, Rev. 0.2, cpt. 4.4
+ 	 */
+-	cp2112_gpio_set_unlocked(dev, offset, value);
+-
+-	return 0;
++	return cp2112_gpio_set_unlocked(dev, offset, value);
+ }
+ 
+ static int cp2112_hid_get(struct hid_device *hdev, unsigned char report_number,
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 4c22bd2ba17080..edb8da49d91670 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -73,6 +73,7 @@ MODULE_LICENSE("GPL");
+ #define MT_QUIRK_FORCE_MULTI_INPUT	BIT(20)
+ #define MT_QUIRK_DISABLE_WAKEUP		BIT(21)
+ #define MT_QUIRK_ORIENTATION_INVERT	BIT(22)
++#define MT_QUIRK_APPLE_TOUCHBAR		BIT(23)
+ 
+ #define MT_INPUTMODE_TOUCHSCREEN	0x02
+ #define MT_INPUTMODE_TOUCHPAD		0x03
+@@ -625,6 +626,7 @@ static struct mt_application *mt_find_application(struct mt_device *td,
+ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 						      struct hid_report *report)
+ {
++	struct mt_class *cls = &td->mtclass;
+ 	struct mt_report_data *rdata;
+ 	struct hid_field *field;
+ 	int r, n;
+@@ -649,7 +651,11 @@ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 
+ 		if (field->logical == HID_DG_FINGER || td->hdev->group != HID_GROUP_MULTITOUCH_WIN_8) {
+ 			for (n = 0; n < field->report_count; n++) {
+-				if (field->usage[n].hid == HID_DG_CONTACTID) {
++				unsigned int hid = field->usage[n].hid;
++
++				if (hid == HID_DG_CONTACTID ||
++				   (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR &&
++				   hid == HID_DG_TRANSDUCER_INDEX)) {
+ 					rdata->is_mt_collection = true;
+ 					break;
+ 				}
+@@ -821,12 +827,31 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 
+ 			MT_STORE_FIELD(confidence_state);
+ 			return 1;
++		case HID_DG_TOUCH:
++			/*
++			 * Legacy devices use TIPSWITCH and not TOUCH.
++			 * One special case here is of the Apple Touch Bars.
++			 * In these devices, the tip state is contained in
++			 * fields with the HID_DG_TOUCH usage.
++			 * Let's just ignore this field for other devices.
++			 */
++			if (!(cls->quirks & MT_QUIRK_APPLE_TOUCHBAR))
++				return -1;
++			fallthrough;
+ 		case HID_DG_TIPSWITCH:
+ 			if (field->application != HID_GD_SYSTEM_MULTIAXIS)
+ 				input_set_capability(hi->input,
+ 						     EV_KEY, BTN_TOUCH);
+ 			MT_STORE_FIELD(tip_state);
+ 			return 1;
++		case HID_DG_TRANSDUCER_INDEX:
++			/*
++			 * Contact ID in case of Apple Touch Bars is contained
++			 * in fields with HID_DG_TRANSDUCER_INDEX usage.
++			 */
++			if (!(cls->quirks & MT_QUIRK_APPLE_TOUCHBAR))
++				return 0;
++			fallthrough;
+ 		case HID_DG_CONTACTID:
+ 			MT_STORE_FIELD(contactid);
+ 			app->touches_by_report++;
+@@ -883,10 +908,6 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		case HID_DG_CONTACTMAX:
+ 			/* contact max are global to the report */
+ 			return -1;
+-		case HID_DG_TOUCH:
+-			/* Legacy devices use TIPSWITCH and not TOUCH.
+-			 * Let's just ignore this field. */
+-			return -1;
+ 		}
+ 		/* let hid-input decide for the others */
+ 		return 0;
+@@ -1314,6 +1335,13 @@ static int mt_touch_input_configured(struct hid_device *hdev,
+ 	struct input_dev *input = hi->input;
+ 	int ret;
+ 
++	/*
++	 * HID_DG_CONTACTMAX field is not present on Apple Touch Bars,
++	 * but the maximum contact count is greater than the default.
++	 */
++	if (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR && cls->maxcontacts)
++		td->maxcontacts = cls->maxcontacts;
++
+ 	if (!td->maxcontacts)
+ 		td->maxcontacts = MT_DEFAULT_MAXCONTACT;
+ 
+@@ -1321,6 +1349,13 @@ static int mt_touch_input_configured(struct hid_device *hdev,
+ 	if (td->serial_maybe)
+ 		mt_post_parse_default_settings(td, app);
+ 
++	/*
++	 * The application for Apple Touch Bars is HID_DG_TOUCHPAD,
++	 * but these devices are direct.
++	 */
++	if (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR)
++		app->mt_flags |= INPUT_MT_DIRECT;
++
+ 	if (cls->is_indirect)
+ 		app->mt_flags |= INPUT_MT_POINTER;
+ 
+diff --git a/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c b/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
+index d4f89f44c3b4d9..715480ef30cefa 100644
+--- a/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
++++ b/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
+@@ -961,6 +961,8 @@ static const struct pci_device_id quickspi_pci_tbl[] = {
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_H_DEVICE_ID_SPI_PORT2, &ptl), },
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT1, &ptl), },
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT2, &ptl), },
++	{PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT1, &ptl), },
++	{PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT2, &ptl), },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(pci, quickspi_pci_tbl);
+diff --git a/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h b/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
+index 6fdf674b21c5a6..f3532d866749ca 100644
+--- a/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
++++ b/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
+@@ -19,6 +19,8 @@
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_H_DEVICE_ID_SPI_PORT2	0xE34B
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT1	0xE449
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT2	0xE44B
++#define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT1 	0x4D49
++#define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT2 	0x4D4B
+ 
+ /* HIDSPI special ACPI parameters DSM methods */
+ #define ACPI_QUICKSPI_REVISION_NUM			2
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 879719e91df2a5..c1262df02cdb2e 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -101,7 +101,7 @@ static int bt1_i2c_request_regs(struct dw_i2c_dev *dev)
+ }
+ #endif
+ 
+-static int txgbe_i2c_request_regs(struct dw_i2c_dev *dev)
++static int dw_i2c_get_parent_regmap(struct dw_i2c_dev *dev)
+ {
+ 	dev->map = dev_get_regmap(dev->dev->parent, NULL);
+ 	if (!dev->map)
+@@ -123,12 +123,15 @@ static int dw_i2c_plat_request_regs(struct dw_i2c_dev *dev)
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+ 	int ret;
+ 
++	if (device_is_compatible(dev->dev, "intel,xe-i2c"))
++		return dw_i2c_get_parent_regmap(dev);
++
+ 	switch (dev->flags & MODEL_MASK) {
+ 	case MODEL_BAIKAL_BT1:
+ 		ret = bt1_i2c_request_regs(dev);
+ 		break;
+ 	case MODEL_WANGXUN_SP:
+-		ret = txgbe_i2c_request_regs(dev);
++		ret = dw_i2c_get_parent_regmap(dev);
+ 		break;
+ 	default:
+ 		dev->base = devm_platform_ioremap_resource(pdev, 0);
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index c369fee3356216..00727472c87381 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -233,6 +233,7 @@ static u16 get_legacy_obj_type(u16 opcode)
+ {
+ 	switch (opcode) {
+ 	case MLX5_CMD_OP_CREATE_RQ:
++	case MLX5_CMD_OP_CREATE_RMP:
+ 		return MLX5_EVENT_QUEUE_TYPE_RQ;
+ 	case MLX5_CMD_OP_CREATE_QP:
+ 		return MLX5_EVENT_QUEUE_TYPE_QP;
+diff --git a/drivers/iommu/iommufd/eventq.c b/drivers/iommu/iommufd/eventq.c
+index e373b9eec7f5f5..2afef30ce41f16 100644
+--- a/drivers/iommu/iommufd/eventq.c
++++ b/drivers/iommu/iommufd/eventq.c
+@@ -393,12 +393,12 @@ static int iommufd_eventq_init(struct iommufd_eventq *eventq, char *name,
+ 			       const struct file_operations *fops)
+ {
+ 	struct file *filep;
+-	int fdno;
+ 
+ 	spin_lock_init(&eventq->lock);
+ 	INIT_LIST_HEAD(&eventq->deliver);
+ 	init_waitqueue_head(&eventq->wait_queue);
+ 
++	/* The filep is fput() by the core code during failure */
+ 	filep = anon_inode_getfile(name, fops, eventq, O_RDWR);
+ 	if (IS_ERR(filep))
+ 		return PTR_ERR(filep);
+@@ -408,10 +408,7 @@ static int iommufd_eventq_init(struct iommufd_eventq *eventq, char *name,
+ 	eventq->filep = filep;
+ 	refcount_inc(&eventq->obj.users);
+ 
+-	fdno = get_unused_fd_flags(O_CLOEXEC);
+-	if (fdno < 0)
+-		fput(filep);
+-	return fdno;
++	return get_unused_fd_flags(O_CLOEXEC);
+ }
+ 
+ static const struct file_operations iommufd_fault_fops =
+@@ -455,7 +452,6 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ 	return 0;
+ out_put_fdno:
+ 	put_unused_fd(fdno);
+-	fput(fault->common.filep);
+ out_abort:
+ 	iommufd_object_abort_and_destroy(ucmd->ictx, &fault->common.obj);
+ 
+@@ -542,7 +538,6 @@ int iommufd_veventq_alloc(struct iommufd_ucmd *ucmd)
+ 
+ out_put_fdno:
+ 	put_unused_fd(fdno);
+-	fput(veventq->common.filep);
+ out_abort:
+ 	iommufd_object_abort_and_destroy(ucmd->ictx, &veventq->common.obj);
+ out_unlock_veventqs:
+diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
+index 3df468f64e7d9e..62a3469bbd37e7 100644
+--- a/drivers/iommu/iommufd/main.c
++++ b/drivers/iommu/iommufd/main.c
+@@ -23,6 +23,7 @@
+ #include "iommufd_test.h"
+ 
+ struct iommufd_object_ops {
++	size_t file_offset;
+ 	void (*destroy)(struct iommufd_object *obj);
+ 	void (*abort)(struct iommufd_object *obj);
+ };
+@@ -71,10 +72,30 @@ void iommufd_object_abort(struct iommufd_ctx *ictx, struct iommufd_object *obj)
+ void iommufd_object_abort_and_destroy(struct iommufd_ctx *ictx,
+ 				      struct iommufd_object *obj)
+ {
+-	if (iommufd_object_ops[obj->type].abort)
+-		iommufd_object_ops[obj->type].abort(obj);
++	const struct iommufd_object_ops *ops = &iommufd_object_ops[obj->type];
++
++	if (ops->file_offset) {
++		struct file **filep = ((void *)obj) + ops->file_offset;
++
++		/*
++		 * A file should hold a users refcount while the file is open
++		 * and put it back in its release. The file should hold a
++		 * pointer to obj in its private data. Normal fput() is
++		 * deferred to a workqueue and can get out of order with the
++		 * following kfree(obj). Using the sync version ensures the
++		 * release happens immediately. During abort we require the file
++		 * refcount to be one at this point - meaning the object alloc
++		 * function cannot do anything to allow another thread to take a
++		 * refcount prior to a guaranteed success.
++		 */
++		if (*filep)
++			__fput_sync(*filep);
++	}
++
++	if (ops->abort)
++		ops->abort(obj);
+ 	else
+-		iommufd_object_ops[obj->type].destroy(obj);
++		ops->destroy(obj);
+ 	iommufd_object_abort(ictx, obj);
+ }
+ 
+@@ -493,6 +514,12 @@ void iommufd_ctx_put(struct iommufd_ctx *ictx)
+ }
+ EXPORT_SYMBOL_NS_GPL(iommufd_ctx_put, "IOMMUFD");
+ 
++#define IOMMUFD_FILE_OFFSET(_struct, _filep, _obj)                           \
++	.file_offset = (offsetof(_struct, _filep) +                          \
++			BUILD_BUG_ON_ZERO(!__same_type(                      \
++				struct file *, ((_struct *)NULL)->_filep)) + \
++			BUILD_BUG_ON_ZERO(offsetof(_struct, _obj)))
++
+ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	[IOMMUFD_OBJ_ACCESS] = {
+ 		.destroy = iommufd_access_destroy_object,
+@@ -502,6 +529,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	},
+ 	[IOMMUFD_OBJ_FAULT] = {
+ 		.destroy = iommufd_fault_destroy,
++		IOMMUFD_FILE_OFFSET(struct iommufd_fault, common.filep, common.obj),
+ 	},
+ 	[IOMMUFD_OBJ_HWPT_PAGING] = {
+ 		.destroy = iommufd_hwpt_paging_destroy,
+@@ -520,6 +548,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	[IOMMUFD_OBJ_VEVENTQ] = {
+ 		.destroy = iommufd_veventq_destroy,
+ 		.abort = iommufd_veventq_abort,
++		IOMMUFD_FILE_OFFSET(struct iommufd_veventq, common.filep, common.obj),
+ 	},
+ 	[IOMMUFD_OBJ_VIOMMU] = {
+ 		.destroy = iommufd_viommu_destroy,
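The IOMMUFD_FILE_OFFSET() macro above records, per object type, where that object's struct file pointer lives, and its two BUILD_BUG_ON_ZERO() terms make the build fail unless the named member really is a struct file * and the object header sits at offset zero. A standalone C11 sketch of the same compile-time checks, using _Static_assert in place of the kernel's bitfield-based helper (hypothetical types throughout):

#include <stddef.h>
#include <stdio.h>

struct file { int fd; };
struct obj_header { int type; };

/* Each object type embeds the header first and a struct file * somewhere. */
struct fault_obj {
	struct obj_header hdr;
	int other_state;
	struct file *filep;
};

#define IS_FILE_PTR(p) _Generic((p), struct file *: 1, default: 0)

/* Compile-time shape checks, mirroring what BUILD_BUG_ON_ZERO() enforces
 * inline in the kernel macro. _Generic does not evaluate its operand, so
 * the null-pointer member access below is safe. */
_Static_assert(offsetof(struct fault_obj, hdr) == 0,
	       "object header must sit at offset zero");
_Static_assert(IS_FILE_PTR(((struct fault_obj *)0)->filep),
	       "filep must be a struct file *");

/* Generic teardown code can now reach the file pointer from the offset alone. */
static struct file *file_of(void *obj, size_t file_offset)
{
	return *(struct file **)((char *)obj + file_offset);
}

int main(void)
{
	struct file f = { .fd = 3 };
	struct fault_obj o = { .filep = &f };

	struct file *fp = file_of(&o, offsetof(struct fault_obj, filep));
	printf("fd=%d\n", fp->fd);
	return 0;
}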
+diff --git a/drivers/mmc/host/sdhci-cadence.c b/drivers/mmc/host/sdhci-cadence.c
+index a94b297fcf2a34..60ca09780da32d 100644
+--- a/drivers/mmc/host/sdhci-cadence.c
++++ b/drivers/mmc/host/sdhci-cadence.c
+@@ -433,6 +433,13 @@ static const struct sdhci_cdns_drv_data sdhci_elba_drv_data = {
+ 	},
+ };
+ 
++static const struct sdhci_cdns_drv_data sdhci_eyeq_drv_data = {
++	.pltfm_data = {
++		.ops = &sdhci_cdns_ops,
++		.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	},
++};
++
+ static const struct sdhci_cdns_drv_data sdhci_cdns_drv_data = {
+ 	.pltfm_data = {
+ 		.ops = &sdhci_cdns_ops,
+@@ -595,6 +602,10 @@ static const struct of_device_id sdhci_cdns_match[] = {
+ 		.compatible = "amd,pensando-elba-sd4hc",
+ 		.data = &sdhci_elba_drv_data,
+ 	},
++	{
++		.compatible = "mobileye,eyeq-sd4hc",
++		.data = &sdhci_eyeq_drv_data,
++	},
+ 	{ .compatible = "cdns,sd4hc" },
+ 	{ /* sentinel */ }
+ };
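The new "mobileye,eyeq-sd4hc" entry follows the usual of_device_id idiom: each compatible string carries per-SoC driver data, here a pltfm_data with SDHCI_QUIRK2_PRESET_VALUE_BROKEN. Roughly, the match works like this sentinel-terminated table scan (userspace sketch with illustrative names):

#include <stdio.h>
#include <string.h>

#define QUIRK_PRESET_VALUE_BROKEN 0x1

struct drv_data { unsigned int quirks; };

static const struct drv_data eyeq_data    = { .quirks = QUIRK_PRESET_VALUE_BROKEN };
static const struct drv_data default_data = { .quirks = 0 };

struct match_entry {
	const char *compatible;
	const struct drv_data *data;
};

static const struct match_entry match_table[] = {
	{ "mobileye,eyeq-sd4hc", &eyeq_data },
	{ "cdns,sd4hc",          &default_data },
	{ NULL, NULL }	/* sentinel, like the kernel table above */
};

static const struct drv_data *match(const char *compatible)
{
	for (const struct match_entry *e = match_table; e->compatible; e++)
		if (!strcmp(e->compatible, compatible))
			return e->data;
	return NULL;
}

int main(void)
{
	const struct drv_data *d = match("mobileye,eyeq-sd4hc");
	printf("quirks=%#x\n", d ? d->quirks : 0);	/* quirks=0x1 */
	return 0;
}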
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 2b7dd359f27b7d..8569178b66df7d 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -861,7 +861,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ {
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct rcar_can_priv *priv = netdev_priv(ndev);
+-	u16 ctlr;
+ 	int err;
+ 
+ 	if (!netif_running(ndev))
+@@ -873,12 +872,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 		return err;
+ 	}
+ 
+-	ctlr = readw(&priv->regs->ctlr);
+-	ctlr &= ~RCAR_CAN_CTLR_SLPM;
+-	writew(ctlr, &priv->regs->ctlr);
+-	ctlr &= ~RCAR_CAN_CTLR_CANM;
+-	writew(ctlr, &priv->regs->ctlr);
+-	priv->can.state = CAN_STATE_ERROR_ACTIVE;
++	rcar_can_start(ndev);
+ 
+ 	netif_device_attach(ndev);
+ 	netif_start_queue(ndev);
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 09ae218315d73d..6441ff3b419871 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -812,6 +812,7 @@ static const struct net_device_ops hi3110_netdev_ops = {
+ 	.ndo_open = hi3110_open,
+ 	.ndo_stop = hi3110_stop,
+ 	.ndo_start_xmit = hi3110_hard_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops hi3110_ethtool_ops = {
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 6fcb301ef611d0..53bfd873de9bde 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -768,6 +768,7 @@ static const struct net_device_ops sun4ican_netdev_ops = {
+ 	.ndo_open = sun4ican_open,
+ 	.ndo_stop = sun4ican_close,
+ 	.ndo_start_xmit = sun4ican_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops sun4ican_ethtool_ops = {
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index db1acf6d504cf3..adc91873c083f9 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -7,7 +7,7 @@
+  *
+  * Copyright (c) 2019 Robert Bosch Engineering and Business Solutions. All rights reserved.
+  * Copyright (c) 2020 ETAS K.K.. All rights reserved.
+- * Copyright (c) 2020-2022 Vincent Mailhol <mailhol.vincent@wanadoo.fr>
++ * Copyright (c) 2020-2025 Vincent Mailhol <mailhol@kernel.org>
+  */
+ 
+ #include <linux/unaligned.h>
+@@ -1977,6 +1977,7 @@ static const struct net_device_ops es58x_netdev_ops = {
+ 	.ndo_stop = es58x_stop,
+ 	.ndo_start_xmit = es58x_start_xmit,
+ 	.ndo_eth_ioctl = can_eth_ioctl_hwts,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops es58x_ethtool_ops = {
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 41c0a1c399bf36..1f9b915094e64d 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -761,6 +761,7 @@ static const struct net_device_ops mcba_netdev_ops = {
+ 	.ndo_open = mcba_usb_open,
+ 	.ndo_stop = mcba_usb_close,
+ 	.ndo_start_xmit = mcba_usb_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops mcba_ethtool_ops = {
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 117637b9b995b9..dd5caa1c302b99 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -111,7 +111,7 @@ void peak_usb_update_ts_now(struct peak_time_ref *time_ref, u32 ts_now)
+ 		u32 delta_ts = time_ref->ts_dev_2 - time_ref->ts_dev_1;
+ 
+ 		if (time_ref->ts_dev_2 < time_ref->ts_dev_1)
+-			delta_ts &= (1 << time_ref->adapter->ts_used_bits) - 1;
++			delta_ts &= (1ULL << time_ref->adapter->ts_used_bits) - 1;
+ 
+ 		time_ref->ts_total += delta_ts;
+ 	}
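The peak_usb one-character fix is about integer promotion: with ts_used_bits == 32, (1 << 32) is evaluated in 32-bit int, which is undefined behaviour, so the intended all-ones mask was never guaranteed; 1ULL keeps the shift in 64 bits before the result is truncated back. A small standalone demonstration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned int ts_used_bits = 32;

	/* Correct: shift done in 64 bits, then truncated to the 32-bit mask. */
	uint32_t mask = (uint32_t)((1ULL << ts_used_bits) - 1);
	printf("mask = 0x%08x\n", mask);	/* 0xffffffff */

	/* (1 << ts_used_bits) with ts_used_bits == 32 would be undefined
	 * behaviour on a 32-bit int, so the driver cannot rely on it
	 * producing anything useful. */
	return 0;
}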
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 6eb3140d404449..84dc6e517acf94 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -685,18 +685,27 @@ static int gswip_add_single_port_br(struct gswip_priv *priv, int port, bool add)
+ 	return 0;
+ }
+ 
+-static int gswip_port_enable(struct dsa_switch *ds, int port,
+-			     struct phy_device *phydev)
++static int gswip_port_setup(struct dsa_switch *ds, int port)
+ {
+ 	struct gswip_priv *priv = ds->priv;
+ 	int err;
+ 
+ 	if (!dsa_is_cpu_port(ds, port)) {
+-		u32 mdio_phy = 0;
+-
+ 		err = gswip_add_single_port_br(priv, port, true);
+ 		if (err)
+ 			return err;
++	}
++
++	return 0;
++}
++
++static int gswip_port_enable(struct dsa_switch *ds, int port,
++			     struct phy_device *phydev)
++{
++	struct gswip_priv *priv = ds->priv;
++
++	if (!dsa_is_cpu_port(ds, port)) {
++		u32 mdio_phy = 0;
+ 
+ 		if (phydev)
+ 			mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
+@@ -1359,8 +1368,9 @@ static int gswip_port_fdb(struct dsa_switch *ds, int port,
+ 	int i;
+ 	int err;
+ 
++	/* Operation not supported on the CPU port, don't throw errors */
+ 	if (!bridge)
+-		return -EINVAL;
++		return 0;
+ 
+ 	for (i = max_ports; i < ARRAY_SIZE(priv->vlans); i++) {
+ 		if (priv->vlans[i].bridge == bridge) {
+@@ -1829,6 +1839,7 @@ static const struct phylink_mac_ops gswip_phylink_mac_ops = {
+ static const struct dsa_switch_ops gswip_xrx200_switch_ops = {
+ 	.get_tag_protocol	= gswip_get_tag_protocol,
+ 	.setup			= gswip_setup,
++	.port_setup		= gswip_port_setup,
+ 	.port_enable		= gswip_port_enable,
+ 	.port_disable		= gswip_port_disable,
+ 	.port_bridge_join	= gswip_port_bridge_join,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index d2ca90407cce76..8057350236c5ef 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -244,7 +244,7 @@ bnxt_tc_parse_pedit(struct bnxt *bp, struct bnxt_tc_actions *actions,
+ 			   offset < offset_of_ip6_daddr + 16) {
+ 			actions->nat.src_xlate = false;
+ 			idx = (offset - offset_of_ip6_daddr) / 4;
+-			actions->nat.l3.ipv6.saddr.s6_addr32[idx] = htonl(val);
++			actions->nat.l3.ipv6.daddr.s6_addr32[idx] = htonl(val);
+ 		} else {
+ 			netdev_err(bp->dev,
+ 				   "%s: IPv6_hdr: Invalid pedit field\n",
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 5f15f42070c539..e8b37dfd5cc1d6 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -131,7 +131,7 @@ static const struct fec_devinfo fec_mvf600_info = {
+ 		  FEC_QUIRK_HAS_MDIO_C45,
+ };
+ 
+-static const struct fec_devinfo fec_imx6x_info = {
++static const struct fec_devinfo fec_imx6sx_info = {
+ 	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+ 		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+ 		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+@@ -196,7 +196,7 @@ static const struct of_device_id fec_dt_ids[] = {
+ 	{ .compatible = "fsl,imx28-fec", .data = &fec_imx28_info, },
+ 	{ .compatible = "fsl,imx6q-fec", .data = &fec_imx6q_info, },
+ 	{ .compatible = "fsl,mvf600-fec", .data = &fec_mvf600_info, },
+-	{ .compatible = "fsl,imx6sx-fec", .data = &fec_imx6x_info, },
++	{ .compatible = "fsl,imx6sx-fec", .data = &fec_imx6sx_info, },
+ 	{ .compatible = "fsl,imx6ul-fec", .data = &fec_imx6ul_info, },
+ 	{ .compatible = "fsl,imx8mq-fec", .data = &fec_imx8mq_info, },
+ 	{ .compatible = "fsl,imx8qm-fec", .data = &fec_imx8qm_info, },
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 7c600d6e66ba7c..fa9bb6f2786847 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -1277,7 +1277,8 @@ struct i40e_mac_filter *i40e_add_mac_filter(struct i40e_vsi *vsi,
+ 					    const u8 *macaddr);
+ int i40e_del_mac_filter(struct i40e_vsi *vsi, const u8 *macaddr);
+ bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi);
+-int i40e_count_filters(struct i40e_vsi *vsi);
++int i40e_count_all_filters(struct i40e_vsi *vsi);
++int i40e_count_active_filters(struct i40e_vsi *vsi);
+ struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, const u8 *macaddr);
+ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi);
+ static inline bool i40e_is_sw_dcb(struct i40e_pf *pf)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 26dcdceae741e4..ec1e3fffb59269 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1241,12 +1241,30 @@ void i40e_update_stats(struct i40e_vsi *vsi)
+ }
+ 
+ /**
+- * i40e_count_filters - counts VSI mac filters
++ * i40e_count_all_filters - counts VSI MAC filters
+  * @vsi: the VSI to be searched
+  *
+- * Returns count of mac filters
+- **/
+-int i40e_count_filters(struct i40e_vsi *vsi)
++ * Return: count of MAC filters in any state.
++ */
++int i40e_count_all_filters(struct i40e_vsi *vsi)
++{
++	struct i40e_mac_filter *f;
++	struct hlist_node *h;
++	int bkt, cnt = 0;
++
++	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist)
++		cnt++;
++
++	return cnt;
++}
++
++/**
++ * i40e_count_active_filters - counts VSI MAC filters
++ * @vsi: the VSI to be searched
++ *
++ * Return: count of active MAC filters.
++ */
++int i40e_count_active_filters(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_mac_filter *f;
+ 	struct hlist_node *h;
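i40e_count_filters() is split above into two walkers over the VSI MAC filter hash: i40e_count_all_filters() counts every entry, while i40e_count_active_filters() (whose shared body continues as context beyond this excerpt) counts only entries still in use. The permission check further down uses both, since a VF replacing its whole filter set briefly keeps the old filters in a to-be-removed state. The shape of the split, sketched over a plain list with illustrative states:

#include <stdio.h>

enum filter_state { FILTER_ACTIVE, FILTER_REMOVE };	/* reduced state set */

struct mac_filter {
	enum filter_state state;
	struct mac_filter *next;
};

static int count_all(const struct mac_filter *head)
{
	int cnt = 0;

	for (; head; head = head->next)
		cnt++;
	return cnt;
}

static int count_active(const struct mac_filter *head)
{
	int cnt = 0;

	for (; head; head = head->next)
		if (head->state != FILTER_REMOVE)
			cnt++;
	return cnt;
}

int main(void)
{
	struct mac_filter c = { FILTER_REMOVE, NULL };
	struct mac_filter b = { FILTER_ACTIVE, &c };
	struct mac_filter a = { FILTER_ACTIVE, &b };

	printf("all=%d active=%d\n", count_all(&a), count_active(&a));
	return 0;	/* all=3 active=2 */
}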
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 7ccfc1191ae56f..59881846e8e3fa 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -448,7 +448,7 @@ static void i40e_config_irq_link_list(struct i40e_vf *vf, u16 vsi_id,
+ 		    (qtype << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT) |
+ 		    (pf_queue_id << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) |
+ 		    BIT(I40E_QINT_RQCTL_CAUSE_ENA_SHIFT) |
+-		    (itr_idx << I40E_QINT_RQCTL_ITR_INDX_SHIFT);
++		    FIELD_PREP(I40E_QINT_RQCTL_ITR_INDX_MASK, itr_idx);
+ 		wr32(hw, reg_idx, reg);
+ 	}
+ 
+@@ -653,6 +653,13 @@ static int i40e_config_vsi_tx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 
+ 	/* only set the required fields */
+ 	tx_ctx.base = info->dma_ring_addr / 128;
++
++	/* ring_len has to be multiple of 8 */
++	if (!IS_ALIGNED(info->ring_len, 8) ||
++	    info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) {
++		ret = -EINVAL;
++		goto error_context;
++	}
+ 	tx_ctx.qlen = info->ring_len;
+ 	tx_ctx.rdylist = le16_to_cpu(vsi->info.qs_handle[0]);
+ 	tx_ctx.rdylist_act = 0;
+@@ -716,6 +723,13 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 
+ 	/* only set the required fields */
+ 	rx_ctx.base = info->dma_ring_addr / 128;
++
++	/* ring_len has to be multiple of 32 */
++	if (!IS_ALIGNED(info->ring_len, 32) ||
++	    info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) {
++		ret = -EINVAL;
++		goto error_param;
++	}
+ 	rx_ctx.qlen = info->ring_len;
+ 
+ 	if (info->splithdr_enabled) {
+@@ -1453,6 +1467,7 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	 * functions that may still be running at this point.
+ 	 */
+ 	clear_bit(I40E_VF_STATE_INIT, &vf->vf_states);
++	clear_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states);
+ 
+ 	/* In the case of a VFLR, the HW has already reset the VF and we
+ 	 * just need to clean up, so don't hit the VFRTRIG register.
+@@ -2119,7 +2134,10 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 	size_t len = 0;
+ 	int ret;
+ 
+-	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
++	i40e_sync_vf_state(vf, I40E_VF_STATE_INIT);
++
++	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states) ||
++	    test_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states)) {
+ 		aq_ret = -EINVAL;
+ 		goto err;
+ 	}
+@@ -2222,6 +2240,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 				vf->default_lan_addr.addr);
+ 	}
+ 	set_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
++	set_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states);
+ 
+ err:
+ 	/* send the response back to the VF */
+@@ -2384,7 +2403,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		}
+ 
+ 		if (vf->adq_enabled) {
+-			if (idx >= ARRAY_SIZE(vf->ch)) {
++			if (idx >= vf->num_tc) {
+ 				aq_ret = -ENODEV;
+ 				goto error_param;
+ 			}
+@@ -2405,7 +2424,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		 * to its appropriate VSIs based on TC mapping
+ 		 */
+ 		if (vf->adq_enabled) {
+-			if (idx >= ARRAY_SIZE(vf->ch)) {
++			if (idx >= vf->num_tc) {
+ 				aq_ret = -ENODEV;
+ 				goto error_param;
+ 			}
+@@ -2455,8 +2474,10 @@ static int i40e_validate_queue_map(struct i40e_vf *vf, u16 vsi_id,
+ 	u16 vsi_queue_id, queue_id;
+ 
+ 	for_each_set_bit(vsi_queue_id, &queuemap, I40E_MAX_VSI_QP) {
+-		if (vf->adq_enabled) {
+-			vsi_id = vf->ch[vsi_queue_id / I40E_MAX_VF_VSI].vsi_id;
++		u16 idx = vsi_queue_id / I40E_MAX_VF_VSI;
++
++		if (vf->adq_enabled && idx < vf->num_tc) {
++			vsi_id = vf->ch[idx].vsi_id;
+ 			queue_id = (vsi_queue_id % I40E_DEFAULT_QUEUES_PER_VF);
+ 		} else {
+ 			queue_id = vsi_queue_id;
+@@ -2844,24 +2865,6 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
+ 				      (u8 *)&stats, sizeof(stats));
+ }
+ 
+-/**
+- * i40e_can_vf_change_mac
+- * @vf: pointer to the VF info
+- *
+- * Return true if the VF is allowed to change its MAC filters, false otherwise
+- */
+-static bool i40e_can_vf_change_mac(struct i40e_vf *vf)
+-{
+-	/* If the VF MAC address has been set administratively (via the
+-	 * ndo_set_vf_mac command), then deny permission to the VF to
+-	 * add/delete unicast MAC addresses, unless the VF is trusted
+-	 */
+-	if (vf->pf_set_mac && !vf->trusted)
+-		return false;
+-
+-	return true;
+-}
+-
+ #define I40E_MAX_MACVLAN_PER_HW 3072
+ #define I40E_MAX_MACVLAN_PER_PF(num_ports) (I40E_MAX_MACVLAN_PER_HW /	\
+ 	(num_ports))
+@@ -2900,8 +2903,10 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	struct i40e_hw *hw = &pf->hw;
+-	int mac2add_cnt = 0;
+-	int i;
++	int i, mac_add_max, mac_add_cnt = 0;
++	bool vf_trusted;
++
++	vf_trusted = test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+ 
+ 	for (i = 0; i < al->num_elements; i++) {
+ 		struct i40e_mac_filter *f;
+@@ -2921,9 +2926,8 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		 * The VF may request to set the MAC address filter already
+ 		 * assigned to it so do not return an error in that case.
+ 		 */
+-		if (!i40e_can_vf_change_mac(vf) &&
+-		    !is_multicast_ether_addr(addr) &&
+-		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
++		if (!vf_trusted && !is_multicast_ether_addr(addr) &&
++		    vf->pf_set_mac && !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
+ 			return -EPERM;
+@@ -2932,29 +2936,33 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		/*count filters that really will be added*/
+ 		f = i40e_find_mac(vsi, addr);
+ 		if (!f)
+-			++mac2add_cnt;
++			++mac_add_cnt;
+ 	}
+ 
+ 	/* If this VF is not privileged, then we can't add more than a limited
+-	 * number of addresses. Check to make sure that the additions do not
+-	 * push us over the limit.
+-	 */
+-	if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+-		if ((i40e_count_filters(vsi) + mac2add_cnt) >
+-		    I40E_VC_MAX_MAC_ADDR_PER_VF) {
+-			dev_err(&pf->pdev->dev,
+-				"Cannot add more MAC addresses, VF is not trusted, switch the VF to trusted to add more functionality\n");
+-			return -EPERM;
+-		}
+-	/* If this VF is trusted, it can use more resources than untrusted.
++	 * number of addresses.
++	 *
++	 * If this VF is trusted, it can use more resources than untrusted.
+ 	 * However to ensure that every trusted VF has appropriate number of
+ 	 * resources, divide whole pool of resources per port and then across
+ 	 * all VFs.
+ 	 */
+-	} else {
+-		if ((i40e_count_filters(vsi) + mac2add_cnt) >
+-		    I40E_VC_MAX_MACVLAN_PER_TRUSTED_VF(pf->num_alloc_vfs,
+-						       hw->num_ports)) {
++	if (!vf_trusted)
++		mac_add_max = I40E_VC_MAX_MAC_ADDR_PER_VF;
++	else
++		mac_add_max = I40E_VC_MAX_MACVLAN_PER_TRUSTED_VF(pf->num_alloc_vfs, hw->num_ports);
++
++	/* A VF can replace all its filters in one step; in that case mac_add_max
++	 * will be added as active and another mac_add_max will be in
++	 * a to-be-removed state. Account for that.
++	 */
++	if ((i40e_count_active_filters(vsi) + mac_add_cnt) > mac_add_max ||
++	    (i40e_count_all_filters(vsi) + mac_add_cnt) > 2 * mac_add_max) {
++		if (!vf_trusted) {
++			dev_err(&pf->pdev->dev,
++				"Cannot add more MAC addresses, VF is not trusted, switch the VF to trusted to add more functionality\n");
++			return -EPERM;
++		} else {
+ 			dev_err(&pf->pdev->dev,
+ 				"Cannot add more MAC addresses, trusted VF exhausted its resources\n");
+ 			return -EPERM;
+@@ -3589,7 +3597,7 @@ static int i40e_validate_cloud_filter(struct i40e_vf *vf,
+ 
+ 	/* action_meta is TC number here to which the filter is applied */
+ 	if (!tc_filter->action_meta ||
+-	    tc_filter->action_meta > vf->num_tc) {
++	    tc_filter->action_meta >= vf->num_tc) {
+ 		dev_info(&pf->pdev->dev, "VF %d: Invalid TC number %u\n",
+ 			 vf->vf_id, tc_filter->action_meta);
+ 		goto err;
+@@ -3887,6 +3895,8 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 				       aq_ret);
+ }
+ 
++#define I40E_MAX_VF_CLOUD_FILTER 0xFF00
++
+ /**
+  * i40e_vc_add_cloud_filter
+  * @vf: pointer to the VF info
+@@ -3926,6 +3936,14 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 		goto err_out;
+ 	}
+ 
++	if (vf->num_cloud_filters >= I40E_MAX_VF_CLOUD_FILTER) {
++		dev_warn(&pf->pdev->dev,
++			 "VF %d: Max number of filters reached, can't apply cloud filter\n",
++			 vf->vf_id);
++		aq_ret = -ENOSPC;
++		goto err_out;
++	}
++
+ 	cfilter = kzalloc(sizeof(*cfilter), GFP_KERNEL);
+ 	if (!cfilter) {
+ 		aq_ret = -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 5cf74f16f433f3..f558b45725c816 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -41,7 +41,8 @@ enum i40e_vf_states {
+ 	I40E_VF_STATE_MC_PROMISC,
+ 	I40E_VF_STATE_UC_PROMISC,
+ 	I40E_VF_STATE_PRE_ENABLE,
+-	I40E_VF_STATE_RESETTING
++	I40E_VF_STATE_RESETTING,
++	I40E_VF_STATE_RESOURCES_LOADED,
+ };
+ 
+ /* VF capabilities */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 442305463cc0ae..21161711c579fb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -21,8 +21,7 @@
+ #include "rvu.h"
+ #include "lmac_common.h"
+ 
+-#define DRV_NAME	"Marvell-CGX/RPM"
+-#define DRV_STRING      "Marvell CGX/RPM Driver"
++#define DRV_NAME	"Marvell-CGX-RPM"
+ 
+ #define CGX_RX_STAT_GLOBAL_INDEX	9
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 5f80b23c5335cd..26a08d2cfbb1b6 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -1326,7 +1326,6 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
+ 
+ free_leaf:
+ 	otx2_tc_del_from_flow_list(flow_cfg, new_node);
+-	kfree_rcu(new_node, rcu);
+ 	if (new_node->is_act_police) {
+ 		mutex_lock(&nic->mbox.lock);
+ 
+@@ -1346,6 +1345,7 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
+ 
+ 		mutex_unlock(&nic->mbox.lock);
+ 	}
++	kfree_rcu(new_node, rcu);
+ 
+ 	return rc;
+ }
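The otx2_tc hunk is an ordering fix: kfree_rcu() was issued while the error path still read new_node->is_act_police a few lines later, a use-after-free. Moving the call restores the basic rule that the free must follow the last access, as in this reduced sketch (plain free() standing in for kfree_rcu()):

#include <stdio.h>
#include <stdlib.h>

struct node { int is_act_police; };

static void err_path_fixed(struct node *n)
{
	if (n->is_act_police)	/* last read of *n ... */
		printf("tearing down policer\n");
	free(n);		/* ... so the free moves after it */
}

int main(void)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return 1;
	n->is_act_police = 1;
	err_path_fixed(n);
	return 0;
}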
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index 19664fa7f21713..46d6dd05fb814d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -1466,6 +1466,7 @@ static void fec_set_block_stats(struct mlx5e_priv *priv,
+ 	case MLX5E_FEC_RS_528_514:
+ 	case MLX5E_FEC_RS_544_514:
+ 	case MLX5E_FEC_LLRS_272_257_1:
++	case MLX5E_FEC_RS_544_514_INTERLEAVED_QUAD:
+ 		fec_set_rs_stats(fec_stats, out);
+ 		return;
+ 	case MLX5E_FEC_FIRECODE:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 3b57ef6b3de383..93fb4e861b6916 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -663,7 +663,7 @@ static void del_sw_hw_rule(struct fs_node *node)
+ 			BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) |
+ 			BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS);
+ 		fte->act_dests.action.action &= ~MLX5_FLOW_CONTEXT_ACTION_COUNT;
+-		mlx5_fc_local_destroy(rule->dest_attr.counter);
++		mlx5_fc_local_put(rule->dest_attr.counter);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+index 500826229b0beb..e6a95b310b5554 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+@@ -343,6 +343,7 @@ struct mlx5_fc {
+ 	enum mlx5_fc_type type;
+ 	struct mlx5_fc_bulk *bulk;
+ 	struct mlx5_fc_cache cache;
++	refcount_t fc_local_refcount;
+ 	/* last{packets,bytes} are used for calculating deltas since last reading. */
+ 	u64 lastpackets;
+ 	u64 lastbytes;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+index 492775d3d193a3..83001eda38842a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+@@ -562,17 +562,36 @@ mlx5_fc_local_create(u32 counter_id, u32 offset, u32 bulk_size)
+ 	counter->id = counter_id;
+ 	fc_bulk->base_id = counter_id - offset;
+ 	fc_bulk->fs_bulk.bulk_len = bulk_size;
++	refcount_set(&fc_bulk->hws_data.hws_action_refcount, 0);
++	mutex_init(&fc_bulk->hws_data.lock);
+ 	counter->bulk = fc_bulk;
++	refcount_set(&counter->fc_local_refcount, 1);
+ 	return counter;
+ }
+ EXPORT_SYMBOL(mlx5_fc_local_create);
+ 
+ void mlx5_fc_local_destroy(struct mlx5_fc *counter)
+ {
+-	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
+-		return;
+-
+ 	kfree(counter->bulk);
+ 	kfree(counter);
+ }
+ EXPORT_SYMBOL(mlx5_fc_local_destroy);
++
++void mlx5_fc_local_get(struct mlx5_fc *counter)
++{
++	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
++		return;
++
++	refcount_inc(&counter->fc_local_refcount);
++}
++
++void mlx5_fc_local_put(struct mlx5_fc *counter)
++{
++	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
++		return;
++
++	if (!refcount_dec_and_test(&counter->fc_local_refcount))
++		return;
++
++	mlx5_fc_local_destroy(counter);
++}
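mlx5_fc_local_get()/mlx5_fc_local_put() above turn the destroy path into a reference-counted release: the counter is created holding one reference, each user takes another, and the last put performs the destroy. The same lifecycle in C11 atomics (a sketch only; the kernel's refcount_t additionally saturates instead of wrapping on over/underflow):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct counter {
	atomic_int refcount;
	unsigned int id;
};

static struct counter *counter_create(unsigned int id)
{
	struct counter *c = malloc(sizeof(*c));

	if (!c)
		return NULL;
	atomic_init(&c->refcount, 1);	/* creator holds the first reference */
	c->id = id;
	return c;
}

static void counter_get(struct counter *c)
{
	atomic_fetch_add(&c->refcount, 1);
}

static void counter_put(struct counter *c)
{
	/* fetch_sub returns the old value; 1 means we dropped the last ref */
	if (atomic_fetch_sub(&c->refcount, 1) == 1)
		free(c);
}

int main(void)
{
	struct counter *c = counter_create(7);

	if (!c)
		return 1;
	counter_get(c);	/* second user, e.g. the HWS action */
	counter_put(c);	/* second user released */
	counter_put(c);	/* creator's ref: frees the counter */
	return 0;
}

The fs_hws_pools.c hunk further down pairs the get with the action lookup and drops it again when the lookup fails, so the counter cannot disappear under an in-flight action.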
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+index 8e4a085f4a2ec9..fe56b59e24c59c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+@@ -1358,11 +1358,8 @@ mlx5hws_action_create_modify_header(struct mlx5hws_context *ctx,
+ }
+ 
+ struct mlx5hws_action *
+-mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+-				 size_t num_dest,
++mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx, size_t num_dest,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 bool ignore_flow_level,
+-				 u32 flow_source,
+ 				 u32 flags)
+ {
+ 	struct mlx5hws_cmd_set_fte_dest *dest_list = NULL;
+@@ -1400,7 +1397,7 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 				MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ 			dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
+ 			fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+-			fte_attr.ignore_flow_level = ignore_flow_level;
++			fte_attr.ignore_flow_level = 1;
+ 			if (dests[i].is_wire_ft)
+ 				last_dest_idx = i;
+ 			break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+index 47e3947e7b512f..6a4c4cccd64342 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+@@ -572,14 +572,12 @@ static void mlx5_fs_put_dest_action_sampler(struct mlx5_fs_hws_context *fs_ctx,
+ static struct mlx5hws_action *
+ mlx5_fs_create_action_dest_array(struct mlx5hws_context *ctx,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 u32 num_of_dests, bool ignore_flow_level,
+-				 u32 flow_source)
++				 u32 num_of_dests)
+ {
+ 	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+ 
+ 	return mlx5hws_action_create_dest_array(ctx, num_of_dests, dests,
+-						ignore_flow_level,
+-						flow_source, flags);
++						flags);
+ }
+ 
+ static struct mlx5hws_action *
+@@ -1016,20 +1014,14 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
+ 		}
+ 		(*ractions)[num_actions++].action = dest_actions->dest;
+ 	} else if (num_dest_actions > 1) {
+-		u32 flow_source = fte->act_dests.flow_context.flow_source;
+-		bool ignore_flow_level;
+-
+ 		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ 		    num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ 			err = -EOPNOTSUPP;
+ 			goto free_actions;
+ 		}
+-		ignore_flow_level =
+-			!!(fte_action->flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+-		tmp_action = mlx5_fs_create_action_dest_array(ctx, dest_actions,
+-							      num_dest_actions,
+-							      ignore_flow_level,
+-							      flow_source);
++		tmp_action =
++			mlx5_fs_create_action_dest_array(ctx, dest_actions,
++							 num_dest_actions);
+ 		if (!tmp_action) {
+ 			err = -EOPNOTSUPP;
+ 			goto free_actions;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+index f1ecdba74e1f46..839d71bd42164f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+@@ -407,15 +407,21 @@ struct mlx5hws_action *mlx5_fc_get_hws_action(struct mlx5hws_context *ctx,
+ {
+ 	struct mlx5_fs_hws_create_action_ctx create_ctx;
+ 	struct mlx5_fc_bulk *fc_bulk = counter->bulk;
++	struct mlx5hws_action *hws_action;
+ 
+ 	create_ctx.hws_ctx = ctx;
+ 	create_ctx.id = fc_bulk->base_id;
+ 	create_ctx.actions_type = MLX5HWS_ACTION_TYP_CTR;
+ 
+-	return mlx5_fs_get_hws_action(&fc_bulk->hws_data, &create_ctx);
++	mlx5_fc_local_get(counter);
++	hws_action = mlx5_fs_get_hws_action(&fc_bulk->hws_data, &create_ctx);
++	if (!hws_action)
++		mlx5_fc_local_put(counter);
++	return hws_action;
+ }
+ 
+ void mlx5_fc_put_hws_action(struct mlx5_fc *counter)
+ {
+ 	mlx5_fs_put_hws_action(&counter->bulk->hws_data);
++	mlx5_fc_local_put(counter);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+index a2fe2f9e832d26..e6ba5a2129075a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+@@ -727,18 +727,13 @@ mlx5hws_action_create_push_vlan(struct mlx5hws_context *ctx, u32 flags);
+  * @num_dest: The number of dests attributes.
+  * @dests: The destination array. Each contains a destination action and can
+  *	   have additional actions.
+- * @ignore_flow_level: Whether to turn on 'ignore_flow_level' for this dest.
+- * @flow_source: Source port of the traffic for this actions.
+  * @flags: Action creation flags (enum mlx5hws_action_flags).
+  *
+  * Return: pointer to mlx5hws_action on success NULL otherwise.
+  */
+ struct mlx5hws_action *
+-mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+-				 size_t num_dest,
++mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx, size_t num_dest,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 bool ignore_flow_level,
+-				 u32 flow_source,
+ 				 u32 flags);
+ 
+ /**
+diff --git a/drivers/net/phy/bcm-phy-ptp.c b/drivers/net/phy/bcm-phy-ptp.c
+index eba8b5fb1365f4..d3501f8487d964 100644
+--- a/drivers/net/phy/bcm-phy-ptp.c
++++ b/drivers/net/phy/bcm-phy-ptp.c
+@@ -597,10 +597,6 @@ static int bcm_ptp_perout_locked(struct bcm_ptp_private *priv,
+ 
+ 	period = BCM_MAX_PERIOD_8NS;	/* write nonzero value */
+ 
+-	/* Reject unsupported flags */
+-	if (req->flags & ~PTP_PEROUT_DUTY_CYCLE)
+-		return -EOPNOTSUPP;
+-
+ 	if (req->flags & PTP_PEROUT_DUTY_CYCLE)
+ 		pulse = ktime_to_ns(ktime_set(req->on.sec, req->on.nsec));
+ 	else
+@@ -741,6 +737,8 @@ static const struct ptp_clock_info bcm_ptp_clock_info = {
+ 	.n_pins		= 1,
+ 	.n_per_out	= 1,
+ 	.n_ext_ts	= 1,
++	.supported_perout_flags = PTP_PEROUT_DUTY_CYCLE,
++	.supported_extts_flags = PTP_STRICT_FLAGS | PTP_RISING_EDGE,
+ };
+ 
+ static void bcm_ptp_txtstamp(struct mii_timestamper *mii_ts,
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 347c1e0e94d951..4cd1d6c51dc2a0 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -361,6 +361,11 @@ static void sfp_fixup_ignore_tx_fault(struct sfp *sfp)
+ 	sfp->state_ignore_mask |= SFP_F_TX_FAULT;
+ }
+ 
++static void sfp_fixup_ignore_hw(struct sfp *sfp, unsigned int mask)
++{
++	sfp->state_hw_mask &= ~mask;
++}
++
+ static void sfp_fixup_nokia(struct sfp *sfp)
+ {
+ 	sfp_fixup_long_startup(sfp);
+@@ -409,7 +414,19 @@ static void sfp_fixup_halny_gsfp(struct sfp *sfp)
+ 	 * these are possibly used for other purposes on this
+ 	 * module, e.g. a serial port.
+ 	 */
+-	sfp->state_hw_mask &= ~(SFP_F_TX_FAULT | SFP_F_LOS);
++	sfp_fixup_ignore_hw(sfp, SFP_F_TX_FAULT | SFP_F_LOS);
++}
++
++static void sfp_fixup_potron(struct sfp *sfp)
++{
++	/*
++	 * The TX_FAULT and LOS pins on this device are used for serial
++	 * communication, so ignore them. Additionally, provide extra
++	 * time for this device to fully start up.
++	 */
++
++	sfp_fixup_long_startup(sfp);
++	sfp_fixup_ignore_hw(sfp, SFP_F_TX_FAULT | SFP_F_LOS);
+ }
+ 
+ static void sfp_fixup_rollball_cc(struct sfp *sfp)
+@@ -475,6 +492,9 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 	SFP_QUIRK("ALCATELLUCENT", "3FE46541AA", sfp_quirk_2500basex,
+ 		  sfp_fixup_nokia),
+ 
++	// FLYPRO SFP-10GT-CS-30M uses Rollball protocol to talk to the PHY.
++	SFP_QUIRK_F("FLYPRO", "SFP-10GT-CS-30M", sfp_fixup_rollball),
++
+ 	// Fiberstore SFP-10G-T doesn't identify as copper, uses the Rollball
+ 	// protocol to talk to the PHY and needs 4 sec wait before probing the
+ 	// PHY.
+@@ -512,6 +532,8 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 	SFP_QUIRK_F("Walsun", "HXSX-ATRC-1", sfp_fixup_fs_10gt),
+ 	SFP_QUIRK_F("Walsun", "HXSX-ATRI-1", sfp_fixup_fs_10gt),
+ 
++	SFP_QUIRK_F("YV", "SFP+ONU-XGSPON", sfp_fixup_potron),
++
+ 	// OEM SFP-GE-T is a 1000Base-T module with broken TX_FAULT indicator
+ 	SFP_QUIRK_F("OEM", "SFP-GE-T", sfp_fixup_ignore_tx_fault),
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f8c5e2fd04dfa0..0fffa023cb736b 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1863,6 +1863,9 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 				local_bh_enable();
+ 				goto unlock_frags;
+ 			}
++
++			if (frags && skb != tfile->napi.skb)
++				tfile->napi.skb = skb;
+ 		}
+ 		rcu_read_unlock();
+ 		local_bh_enable();
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index eee55428749c92..de5005815ee706 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -2093,7 +2093,8 @@ static void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,
+ 		break;
+ 	}
+ 
+-	if (trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
++	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_7000 &&
++	    trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ 		len = DIV_ROUND_UP(len, 4);
+ 
+ 	if (WARN_ON(len > 0xFFF || write_ptr >= TFD_QUEUE_SIZE_MAX))
+diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c
+index 1fffeff2190ca8..4eae89376feb55 100644
+--- a/drivers/net/wireless/virtual/virt_wifi.c
++++ b/drivers/net/wireless/virtual/virt_wifi.c
+@@ -277,7 +277,9 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 		priv->is_connected = true;
+ 
+ 	/* Schedules an event that acquires the rtnl lock. */
+-	cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0,
++	cfg80211_connect_result(priv->upperdev,
++				priv->is_connected ? fake_router_bssid : NULL,
++				NULL, 0, NULL, 0,
+ 				status, GFP_KERNEL);
+ 	netif_carrier_on(priv->upperdev);
+ }
+diff --git a/drivers/pinctrl/mediatek/pinctrl-airoha.c b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+index 3fa5131d81e525..1dc26e898d787a 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-airoha.c
++++ b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+@@ -108,6 +108,9 @@
+ #define JTAG_UDI_EN_MASK			BIT(4)
+ #define JTAG_DFD_EN_MASK			BIT(3)
+ 
++#define REG_FORCE_GPIO_EN			0x0228
++#define FORCE_GPIO_EN(n)			BIT(n)
++
+ /* LED MAP */
+ #define REG_LAN_LED0_MAPPING			0x027c
+ #define REG_LAN_LED1_MAPPING			0x0280
+@@ -718,17 +721,17 @@ static const struct airoha_pinctrl_func_group mdio_func_group[] = {
+ 	{
+ 		.name = "mdio",
+ 		.regmap[0] = {
+-			AIROHA_FUNC_MUX,
+-			REG_GPIO_PON_MODE,
+-			GPIO_SGMII_MDIO_MODE_MASK,
+-			GPIO_SGMII_MDIO_MODE_MASK
+-		},
+-		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+ 			GPIO_MDC_IO_MASTER_MODE_MODE,
+ 			GPIO_MDC_IO_MASTER_MODE_MODE
+ 		},
++		.regmap[1] = {
++			AIROHA_FUNC_MUX,
++			REG_FORCE_GPIO_EN,
++			FORCE_GPIO_EN(1) | FORCE_GPIO_EN(2),
++			FORCE_GPIO_EN(1) | FORCE_GPIO_EN(2)
++		},
+ 		.regmap_size = 2,
+ 	},
+ };
+@@ -1752,8 +1755,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1816,8 +1819,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1880,8 +1883,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1944,8 +1947,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+diff --git a/drivers/platform/x86/lg-laptop.c b/drivers/platform/x86/lg-laptop.c
+index 4b57102c7f6270..6af6cf477c5b5b 100644
+--- a/drivers/platform/x86/lg-laptop.c
++++ b/drivers/platform/x86/lg-laptop.c
+@@ -8,6 +8,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <linux/acpi.h>
++#include <linux/bitfield.h>
+ #include <linux/bits.h>
+ #include <linux/device.h>
+ #include <linux/dev_printk.h>
+@@ -75,6 +76,9 @@ MODULE_PARM_DESC(fw_debug, "Enable printing of firmware debug messages");
+ #define WMBB_USB_CHARGE 0x10B
+ #define WMBB_BATT_LIMIT 0x10C
+ 
++#define FAN_MODE_LOWER GENMASK(1, 0)
++#define FAN_MODE_UPPER GENMASK(5, 4)
++
+ #define PLATFORM_NAME   "lg-laptop"
+ 
+ MODULE_ALIAS("wmi:" WMI_EVENT_GUID0);
+@@ -274,29 +278,19 @@ static ssize_t fan_mode_store(struct device *dev,
+ 			      struct device_attribute *attr,
+ 			      const char *buffer, size_t count)
+ {
+-	bool value;
++	unsigned long value;
+ 	union acpi_object *r;
+-	u32 m;
+ 	int ret;
+ 
+-	ret = kstrtobool(buffer, &value);
++	ret = kstrtoul(buffer, 10, &value);
+ 	if (ret)
+ 		return ret;
++	if (value >= 3)
++		return -EINVAL;
+ 
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_GET, 0);
+-	if (!r)
+-		return -EIO;
+-
+-	if (r->type != ACPI_TYPE_INTEGER) {
+-		kfree(r);
+-		return -EIO;
+-	}
+-
+-	m = r->integer.value;
+-	kfree(r);
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_SET, (m & 0xffffff0f) | (value << 4));
+-	kfree(r);
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_SET, (m & 0xfffffff0) | value);
++	r = lg_wmab(dev, WM_FAN_MODE, WM_SET,
++		FIELD_PREP(FAN_MODE_LOWER, value) |
++		FIELD_PREP(FAN_MODE_UPPER, value));
+ 	kfree(r);
+ 
+ 	return count;
+@@ -305,7 +299,7 @@ static ssize_t fan_mode_store(struct device *dev,
+ static ssize_t fan_mode_show(struct device *dev,
+ 			     struct device_attribute *attr, char *buffer)
+ {
+-	unsigned int status;
++	unsigned int mode;
+ 	union acpi_object *r;
+ 
+ 	r = lg_wmab(dev, WM_FAN_MODE, WM_GET, 0);
+@@ -317,10 +311,10 @@ static ssize_t fan_mode_show(struct device *dev,
+ 		return -EIO;
+ 	}
+ 
+-	status = r->integer.value & 0x01;
++	mode = FIELD_GET(FAN_MODE_LOWER, r->integer.value);
+ 	kfree(r);
+ 
+-	return sysfs_emit(buffer, "%d\n", status);
++	return sysfs_emit(buffer, "%d\n", mode);
+ }
+ 
+ static ssize_t usb_charge_store(struct device *dev,
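The lg-laptop rewrite replaces hand-rolled shifts and masks with GENMASK()/FIELD_PREP()/FIELD_GET(), writing the same fan mode into bits 1:0 and 5:4 in a single WMI call. Simplified userspace equivalents of those helpers, reduced to 32-bit masks and GCC/Clang builtins (the kernel versions are type-generic and validate constant masks at compile time):

#include <stdio.h>

#define GENMASK(h, l)		(((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define FIELD_PREP(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> __builtin_ctz(mask))

#define FAN_MODE_LOWER GENMASK(1, 0)
#define FAN_MODE_UPPER GENMASK(5, 4)

int main(void)
{
	unsigned int mode = 2;
	unsigned int reg = FIELD_PREP(FAN_MODE_LOWER, mode) |
			   FIELD_PREP(FAN_MODE_UPPER, mode);

	printf("reg=0x%02x lower=%u\n", reg, FIELD_GET(FAN_MODE_LOWER, reg));
	/* reg=0x22 lower=2 */
	return 0;
}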
+diff --git a/drivers/platform/x86/oxpec.c b/drivers/platform/x86/oxpec.c
+index 9839e8cb82ce4c..eb076bb4099bed 100644
+--- a/drivers/platform/x86/oxpec.c
++++ b/drivers/platform/x86/oxpec.c
+@@ -292,6 +292,13 @@ static const struct dmi_system_id dmi_table[] = {
+ 		},
+ 		.driver_data = (void *)oxp_x1,
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1Mini Pro"),
++		},
++		.driver_data = (void *)oxp_x1,
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index d3c78f59b22cd9..c9eb8004fcc271 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -46,6 +46,7 @@ static_assert(CQSPI_MAX_CHIPSELECT <= SPI_CS_CNT_MAX);
+ #define CQSPI_DMA_SET_MASK		BIT(7)
+ #define CQSPI_SUPPORT_DEVICE_RESET	BIT(8)
+ #define CQSPI_DISABLE_STIG_MODE		BIT(9)
++#define CQSPI_DISABLE_RUNTIME_PM	BIT(10)
+ 
+ /* Capabilities */
+ #define CQSPI_SUPPORTS_OCTAL		BIT(0)
+@@ -108,6 +109,8 @@ struct cqspi_st {
+ 
+ 	bool			is_jh7110; /* Flag for StarFive JH7110 SoC */
+ 	bool			disable_stig_mode;
++	refcount_t		refcount;
++	refcount_t		inflight_ops;
+ 
+ 	const struct cqspi_driver_platdata *ddata;
+ };
+@@ -735,6 +738,9 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
+ 	u8 *rxbuf_end = rxbuf + n_rx;
+ 	int ret = 0;
+ 
++	if (!refcount_read(&cqspi->refcount))
++		return -ENODEV;
++
+ 	writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
+ 	writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
+ 
+@@ -1071,6 +1077,9 @@ static int cqspi_indirect_write_execute(struct cqspi_flash_pdata *f_pdata,
+ 	unsigned int write_bytes;
+ 	int ret;
+ 
++	if (!refcount_read(&cqspi->refcount))
++		return -ENODEV;
++
+ 	writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
+ 	writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
+ 
+@@ -1460,21 +1469,43 @@ static int cqspi_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ 	int ret;
+ 	struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller);
+ 	struct device *dev = &cqspi->pdev->dev;
++	const struct cqspi_driver_platdata *ddata = of_device_get_match_data(dev);
+ 
+-	ret = pm_runtime_resume_and_get(dev);
+-	if (ret) {
+-		dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
+-		return ret;
++	if (refcount_read(&cqspi->inflight_ops) == 0)
++		return -ENODEV;
++
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		ret = pm_runtime_resume_and_get(dev);
++		if (ret) {
++			dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
++			return ret;
++		}
++	}
++
++	if (!refcount_read(&cqspi->refcount))
++		return -EBUSY;
++
++	refcount_inc(&cqspi->inflight_ops);
++
++	if (!refcount_read(&cqspi->refcount)) {
++		if (refcount_read(&cqspi->inflight_ops))
++			refcount_dec(&cqspi->inflight_ops);
++		return -EBUSY;
+ 	}
+ 
+ 	ret = cqspi_mem_process(mem, op);
+ 
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_mark_last_busy(dev);
++		pm_runtime_put_autosuspend(dev);
++	}
+ 
+ 	if (ret)
+ 		dev_err(&mem->spi->dev, "operation failed with %d\n", ret);
+ 
++	if (refcount_read(&cqspi->inflight_ops) > 1)
++		refcount_dec(&cqspi->inflight_ops);
++
+ 	return ret;
+ }
+ 
+@@ -1926,6 +1957,9 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	refcount_set(&cqspi->refcount, 1);
++	refcount_set(&cqspi->inflight_ops, 1);
++
+ 	ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+ 			       pdev->name, cqspi);
+ 	if (ret) {
+@@ -1958,11 +1992,12 @@ static int cqspi_probe(struct platform_device *pdev)
+ 			goto probe_setup_failed;
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-
+-	pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
+-	pm_runtime_use_autosuspend(dev);
+-	pm_runtime_get_noresume(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_enable(dev);
++		pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
++		pm_runtime_use_autosuspend(dev);
++		pm_runtime_get_noresume(dev);
++	}
+ 
+ 	ret = spi_register_controller(host);
+ 	if (ret) {
+@@ -1970,13 +2005,17 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		goto probe_setup_failed;
+ 	}
+ 
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_mark_last_busy(dev);
++		pm_runtime_put_autosuspend(dev);
++	}
+ 
+ 	return 0;
+ probe_setup_failed:
+ 	cqspi_controller_enable(cqspi, 0);
+-	pm_runtime_disable(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))
++		pm_runtime_disable(dev);
+ probe_reset_failed:
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+@@ -1987,7 +2026,16 @@ static int cqspi_probe(struct platform_device *pdev)
+ 
+ static void cqspi_remove(struct platform_device *pdev)
+ {
++	const struct cqspi_driver_platdata *ddata;
+ 	struct cqspi_st *cqspi = platform_get_drvdata(pdev);
++	struct device *dev = &pdev->dev;
++
++	ddata = of_device_get_match_data(dev);
++
++	refcount_set(&cqspi->refcount, 0);
++
++	if (!refcount_dec_and_test(&cqspi->inflight_ops))
++		cqspi_wait_idle(cqspi);
+ 
+ 	spi_unregister_controller(cqspi->host);
+ 	cqspi_controller_enable(cqspi, 0);
+@@ -1995,14 +2043,17 @@ static void cqspi_remove(struct platform_device *pdev)
+ 	if (cqspi->rx_chan)
+ 		dma_release_channel(cqspi->rx_chan);
+ 
+-	if (pm_runtime_get_sync(&pdev->dev) >= 0)
+-		clk_disable(cqspi->clk);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))
++		if (pm_runtime_get_sync(&pdev->dev) >= 0)
++			clk_disable(cqspi->clk);
+ 
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+ 
+-	pm_runtime_put_sync(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_put_sync(&pdev->dev);
++		pm_runtime_disable(&pdev->dev);
++	}
+ }
+ 
+ static int cqspi_runtime_suspend(struct device *dev)
+@@ -2081,7 +2132,8 @@ static const struct cqspi_driver_platdata socfpga_qspi = {
+ 	.quirks = CQSPI_DISABLE_DAC_MODE
+ 			| CQSPI_NO_SUPPORT_WR_COMPLETION
+ 			| CQSPI_SLOW_SRAM
+-			| CQSPI_DISABLE_STIG_MODE,
++			| CQSPI_DISABLE_STIG_MODE
++			| CQSPI_DISABLE_RUNTIME_PM,
+ };
+ 
+ static const struct cqspi_driver_platdata versal_ospi = {
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 1e50675772febb..cc88aaa106da30 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -243,7 +243,7 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 		hwq->sqe_base_addr = dmam_alloc_coherent(hba->dev, utrdl_size,
+ 							 &hwq->sqe_dma_addr,
+ 							 GFP_KERNEL);
+-		if (!hwq->sqe_dma_addr) {
++		if (!hwq->sqe_base_addr) {
+ 			dev_err(hba->dev, "SQE allocation failed\n");
+ 			return -ENOMEM;
+ 		}
+@@ -252,7 +252,7 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 		hwq->cqe_base_addr = dmam_alloc_coherent(hba->dev, cqe_size,
+ 							 &hwq->cqe_dma_addr,
+ 							 GFP_KERNEL);
+-		if (!hwq->cqe_dma_addr) {
++		if (!hwq->cqe_base_addr) {
+ 			dev_err(hba->dev, "CQE allocation failed\n");
+ 			return -ENOMEM;
+ 		}
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index d6daad39491b75..f5bc5387533012 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -737,7 +737,7 @@ void usb_detect_quirks(struct usb_device *udev)
+ 	udev->quirks ^= usb_detect_dynamic_quirks(udev);
+ 
+ 	if (udev->quirks)
+-		dev_dbg(&udev->dev, "USB quirks for this device: %x\n",
++		dev_dbg(&udev->dev, "USB quirks for this device: 0x%x\n",
+ 			udev->quirks);
+ 
+ #ifdef CONFIG_USB_DEFAULT_PERSIST
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 53551aafd47027..c5902cc261e5b0 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -760,10 +760,10 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 	int err;
+ 	int sent_pkts = 0;
+ 	bool sock_can_batch = (sock->sk->sk_sndbuf == INT_MAX);
+-	bool busyloop_intr;
+ 
+ 	do {
+-		busyloop_intr = false;
++		bool busyloop_intr = false;
++
+ 		if (nvq->done_idx == VHOST_NET_BATCH)
+ 			vhost_tx_batch(net, nvq, sock, &msg);
+ 
+@@ -774,10 +774,18 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 			break;
+ 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
+ 		if (head == vq->num) {
+-			/* Kicks are disabled at this point, break loop and
+-			 * process any remaining batched packets. Queue will
+-			 * be re-enabled afterwards.
++			/* Flush batched packets to handle pending RX
++			 * work (if busyloop_intr is set) and to avoid
++			 * unnecessary virtqueue kicks.
+ 			 */
++			vhost_tx_batch(net, nvq, sock, &msg);
++			if (unlikely(busyloop_intr)) {
++				vhost_poll_queue(&vq->poll);
++			} else if (unlikely(vhost_enable_notify(&net->dev,
++								vq))) {
++				vhost_disable_notify(&net->dev, vq);
++				continue;
++			}
+ 			break;
+ 		}
+ 
+@@ -827,22 +835,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 		++nvq->done_idx;
+ 	} while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
+ 
+-	/* Kicks are still disabled, dispatch any remaining batched msgs. */
+ 	vhost_tx_batch(net, nvq, sock, &msg);
+-
+-	if (unlikely(busyloop_intr))
+-		/* If interrupted while doing busy polling, requeue the
+-		 * handler to be fair handle_rx as well as other tasks
+-		 * waiting on cpu.
+-		 */
+-		vhost_poll_queue(&vq->poll);
+-	else
+-		/* All of our work has been completed; however, before
+-		 * leaving the TX handler, do one last check for work,
+-		 * and requeue handler if necessary. If there is no work,
+-		 * queue will be reenabled.
+-		 */
+-		vhost_net_busy_poll_try_queue(net, vq);
+ }
+ 
+ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 9eb880a11fd8ce..a786b0c1940b4c 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2491,7 +2491,7 @@ static int fbcon_set_font(struct vc_data *vc, const struct console_font *font,
+ 	unsigned charcount = font->charcount;
+ 	int w = font->width;
+ 	int h = font->height;
+-	int size;
++	int size, alloc_size;
+ 	int i, csum;
+ 	u8 *new_data, *data = font->data;
+ 	int pitch = PITCH(font->width);
+@@ -2518,9 +2518,16 @@ static int fbcon_set_font(struct vc_data *vc, const struct console_font *font,
+ 	if (fbcon_invalid_charcount(info, charcount))
+ 		return -EINVAL;
+ 
+-	size = CALC_FONTSZ(h, pitch, charcount);
++	/* Check for integer overflow in font size calculation */
++	if (check_mul_overflow(h, pitch, &size) ||
++	    check_mul_overflow(size, charcount, &size))
++		return -EINVAL;
++
++	/* Check for overflow in allocation size calculation */
++	if (check_add_overflow(FONT_EXTRA_WORDS * sizeof(int), size, &alloc_size))
++		return -EINVAL;
+ 
+-	new_data = kmalloc(FONT_EXTRA_WORDS * sizeof(int) + size, GFP_USER);
++	new_data = kmalloc(alloc_size, GFP_USER);
+ 
+ 	if (!new_data)
+ 		return -ENOMEM;
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index a97562f831eb5a..c4428ebddb1da6 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -331,13 +331,14 @@ struct afs_server *afs_use_server(struct afs_server *server, bool activate,
+ void afs_put_server(struct afs_net *net, struct afs_server *server,
+ 		    enum afs_server_trace reason)
+ {
+-	unsigned int a, debug_id = server->debug_id;
++	unsigned int a, debug_id;
+ 	bool zero;
+ 	int r;
+ 
+ 	if (!server)
+ 		return;
+ 
++	debug_id = server->debug_id;
+ 	a = atomic_read(&server->active);
+ 	zero = __refcount_dec_and_test(&server->ref, &r);
+ 	trace_afs_server(debug_id, r - 1, a, reason);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f475b4b7c45782..817d3ef501ec44 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2714,6 +2714,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 		goto error;
+ 	}
+ 
++	if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) {
++		ret = -EINVAL;
++		goto error;
++	}
++
+ 	if (fs_devices->seeding) {
+ 		seeding_dev = true;
+ 		down_write(&sb->s_umount);
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index e4de5425838d92..6040e54082777d 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -520,14 +520,16 @@ static bool remove_inode_single_folio(struct hstate *h, struct inode *inode,
+ 
+ 	/*
+ 	 * If folio is mapped, it was faulted in after being
+-	 * unmapped in caller.  Unmap (again) while holding
+-	 * the fault mutex.  The mutex will prevent faults
+-	 * until we finish removing the folio.
++	 * unmapped in caller or hugetlb_vmdelete_list() skips
++	 * unmapping it due to failing to grab the lock.  Unmap (again)
++	 * while holding the fault mutex.  The mutex will prevent
++	 * faults until we finish removing the folio.  Hold folio
++	 * lock to guarantee no concurrent migration.
+ 	 */
++	folio_lock(folio);
+ 	if (unlikely(folio_mapped(folio)))
+ 		hugetlb_unmap_file_folio(h, mapping, folio, index);
+ 
+-	folio_lock(folio);
+ 	/*
+ 	 * We must remove the folio from page cache before removing
+ 	 * the region/ reserve map (hugetlb_unreserve_pages).  In
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 18b3dc74c70e41..37ab6f28b5ad0e 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -369,7 +369,7 @@ void netfs_readahead(struct readahead_control *ractl)
+ 	return netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ cleanup_free:
+-	return netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	return netfs_put_failed_request(rreq);
+ }
+ EXPORT_SYMBOL(netfs_readahead);
+ 
+@@ -472,7 +472,7 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -532,7 +532,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -699,7 +699,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	return 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(rreq);
+ error:
+ 	if (folio) {
+ 		folio_unlock(folio);
+@@ -754,7 +754,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 	return ret < 0 ? ret : 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ error:
+ 	_leave(" = %d", ret);
+ 	return ret;
+diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
+index a05e13472bafb2..a498ee8d66745f 100644
+--- a/fs/netfs/direct_read.c
++++ b/fs/netfs/direct_read.c
+@@ -131,6 +131,7 @@ static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ 
+ 	if (rreq->len == 0) {
+ 		pr_err("Zero-sized read [R=%x]\n", rreq->debug_id);
++		netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ 		return -EIO;
+ 	}
+ 
+@@ -205,7 +206,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	if (user_backed_iter(iter)) {
+ 		ret = netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0);
+ 		if (ret < 0)
+-			goto out;
++			goto error_put;
+ 		rreq->direct_bv = (struct bio_vec *)rreq->buffer.iter.bvec;
+ 		rreq->direct_bv_count = ret;
+ 		rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
+@@ -238,6 +239,10 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	if (ret > 0)
+ 		orig_count -= ret;
+ 	return ret;
++
++error_put:
++	netfs_put_failed_request(rreq);
++	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_read_iter_locked);
+ 
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index a16660ab7f8385..a9d1c3b2c08426 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -57,7 +57,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 			n = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);
+ 			if (n < 0) {
+ 				ret = n;
+-				goto out;
++				goto error_put;
+ 			}
+ 			wreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;
+ 			wreq->direct_bv_count = n;
+@@ -101,6 +101,10 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ out:
+ 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
++
++error_put:
++	netfs_put_failed_request(wreq);
++	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_write_iter_locked);
+ 
+diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
+index d4f16fefd96518..4319611f535449 100644
+--- a/fs/netfs/internal.h
++++ b/fs/netfs/internal.h
+@@ -87,6 +87,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+ void netfs_clear_subrequests(struct netfs_io_request *rreq);
+ void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
++void netfs_put_failed_request(struct netfs_io_request *rreq);
+ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+ 
+ static inline void netfs_see_request(struct netfs_io_request *rreq,
+diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
+index e8c99738b5bbf2..40a1c7d6f6e03e 100644
+--- a/fs/netfs/objects.c
++++ b/fs/netfs/objects.c
+@@ -116,10 +116,8 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)
+ 	netfs_stat_d(&netfs_n_rh_rreq);
+ }
+ 
+-static void netfs_free_request(struct work_struct *work)
++static void netfs_deinit_request(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_request *rreq =
+-		container_of(work, struct netfs_io_request, cleanup_work);
+ 	struct netfs_inode *ictx = netfs_inode(rreq->inode);
+ 	unsigned int i;
+ 
+@@ -149,6 +147,14 @@ static void netfs_free_request(struct work_struct *work)
+ 
+ 	if (atomic_dec_and_test(&ictx->io_count))
+ 		wake_up_var(&ictx->io_count);
++}
++
++static void netfs_free_request(struct work_struct *work)
++{
++	struct netfs_io_request *rreq =
++		container_of(work, struct netfs_io_request, cleanup_work);
++
++	netfs_deinit_request(rreq);
+ 	call_rcu(&rreq->rcu, netfs_free_request_rcu);
+ }
+ 
+@@ -167,6 +173,24 @@ void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace
+ 	}
+ }
+ 
++/*
++ * Free a request (synchronously) that was just allocated but has
++ * failed before it could be submitted.
++ */
++void netfs_put_failed_request(struct netfs_io_request *rreq)
++{
++	int r = refcount_read(&rreq->ref);
++
++	/* New requests have two references (see
++	 * netfs_alloc_request()), and this function is only allowed on
++	 * new request objects.
++	 */
++	WARN_ON_ONCE(r != 2);
++
++	trace_netfs_rreq_ref(rreq->debug_id, r, netfs_rreq_trace_put_failed);
++	netfs_free_request(&rreq->cleanup_work);
++}
++
+ /*
+  * Allocate and partially initialise an I/O request structure.
+  */
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index 8097bc069c1de6..a1489aa29f782a 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -118,7 +118,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
+ 	return creq;
+ 
+ cancel_put:
+-	netfs_put_request(creq, netfs_rreq_trace_put_return);
++	netfs_put_failed_request(creq);
+ cancel:
+ 	rreq->copy_to_cache = ERR_PTR(-ENOBUFS);
+ 	clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
+diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
+index fa622a6cd56da3..5c0dc4efc79227 100644
+--- a/fs/netfs/read_single.c
++++ b/fs/netfs/read_single.c
+@@ -189,7 +189,7 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
+ 	return ret;
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(rreq);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_read_single);
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 0584cba1a04392..dd8743bc8d7fe3 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -133,8 +133,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 
+ 	return wreq;
+ nomem:
+-	wreq->error = -ENOMEM;
+-	netfs_put_request(wreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(wreq);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ 
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index a16a619fb8c33b..8cc39a73faff85 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -28,6 +28,7 @@
+ #include <linux/mm.h>
+ #include <linux/pagemap.h>
+ #include <linux/gfp.h>
++#include <linux/rmap.h>
+ #include <linux/swap.h>
+ #include <linux/compaction.h>
+ 
+@@ -279,6 +280,37 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ }
+ EXPORT_SYMBOL_GPL(nfs_file_fsync);
+ 
++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
++			     loff_t to)
++{
++	struct folio *folio;
++
++	if (from >= to)
++		return;
++
++	folio = filemap_lock_folio(mapping, from >> PAGE_SHIFT);
++	if (IS_ERR(folio))
++		return;
++
++	if (folio_mkclean(folio))
++		folio_mark_dirty(folio);
++
++	if (folio_test_uptodate(folio)) {
++		loff_t fpos = folio_pos(folio);
++		size_t offset = from - fpos;
++		size_t end = folio_size(folio);
++
++		if (to - fpos < end)
++			end = to - fpos;
++		folio_zero_segment(folio, offset, end);
++		trace_nfs_size_truncate_folio(mapping->host, to);
++	}
++
++	folio_unlock(folio);
++	folio_put(folio);
++}
++EXPORT_SYMBOL_GPL(nfs_truncate_last_folio);
++
+ /*
+  * Decide whether a read/modify/write cycle may be more efficient
+  * then a modify/write/read cycle when writing to a page in the
+@@ -353,6 +385,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping,
+ 
+ 	dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%lu), %u@%lld)\n",
+ 		file, mapping->host->i_ino, len, (long long) pos);
++	nfs_truncate_last_folio(mapping, i_size_read(mapping->host), pos);
+ 
+ 	fgp |= fgf_set_order(len);
+ start:
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a32cc45425e287..f6b448666d4195 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -710,6 +710,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 	struct nfs_fattr *fattr;
++	loff_t oldsize = i_size_read(inode);
+ 	int error = 0;
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSSETATTR);
+@@ -725,7 +726,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		if (error)
+ 			return error;
+ 
+-		if (attr->ia_size == i_size_read(inode))
++		if (attr->ia_size == oldsize)
+ 			attr->ia_valid &= ~ATTR_SIZE;
+ 	}
+ 
+@@ -773,8 +774,12 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	}
+ 
+ 	error = NFS_PROTO(inode)->setattr(dentry, fattr, attr);
+-	if (error == 0)
++	if (error == 0) {
++		if (attr->ia_valid & ATTR_SIZE)
++			nfs_truncate_last_folio(inode->i_mapping, oldsize,
++						attr->ia_size);
+ 		error = nfs_refresh_inode(inode, fattr);
++	}
+ 	nfs_free_fattr(fattr);
+ out:
+ 	trace_nfs_setattr_exit(inode, error);
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 0ef0fc6aba3b3c..ae4d039c10d3a4 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -438,6 +438,8 @@ int nfs_file_release(struct inode *, struct file *);
+ int nfs_lock(struct file *, int, struct file_lock *);
+ int nfs_flock(struct file *, int, struct file_lock *);
+ int nfs_check_flags(int);
++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
++			     loff_t to);
+ 
+ /* inode.c */
+ extern struct workqueue_struct *nfsiod_workqueue;
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 48ee3d5d89c4ae..6a0b5871ba3b09 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -138,6 +138,7 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len)
+ 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE],
+ 	};
+ 	struct inode *inode = file_inode(filep);
++	loff_t oldsize = i_size_read(inode);
+ 	int err;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE))
+@@ -146,7 +147,11 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len)
+ 	inode_lock(inode);
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+-	if (err == -EOPNOTSUPP)
++
++	if (err == 0)
++		nfs_truncate_last_folio(inode->i_mapping, oldsize,
++					offset + len);
++	else if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~(NFS_CAP_ALLOCATE |
+ 					     NFS_CAP_ZERO_RANGE);
+ 
+@@ -184,6 +189,7 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len)
+ 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ZERO_RANGE],
+ 	};
+ 	struct inode *inode = file_inode(filep);
++	loff_t oldsize = i_size_read(inode);
+ 	int err;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_ZERO_RANGE))
+@@ -192,9 +198,11 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len)
+ 	inode_lock(inode);
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+-	if (err == 0)
++	if (err == 0) {
++		nfs_truncate_last_folio(inode->i_mapping, oldsize,
++					offset + len);
+ 		truncate_pagecache_range(inode, offset, (offset + len) -1);
+-	if (err == -EOPNOTSUPP)
++	} else if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~NFS_CAP_ZERO_RANGE;
+ 
+ 	inode_unlock(inode);
+@@ -355,22 +363,27 @@ static int process_copy_commit(struct file *dst, loff_t pos_dst,
+ 
+ /**
+  * nfs42_copy_dest_done - perform inode cache updates after clone/copy offload
+- * @inode: pointer to destination inode
++ * @file: pointer to destination file
+  * @pos: destination offset
+  * @len: copy length
++ * @oldsize: length of the file prior to clone/copy
+  *
+  * Punch a hole in the inode page cache, so that the NFS client will
+  * know to retrieve new data.
+  * Update the file size if necessary, and then mark the inode as having
+  * invalid cached values for change attribute, ctime, mtime and space used.
+  */
+-static void nfs42_copy_dest_done(struct inode *inode, loff_t pos, loff_t len)
++static void nfs42_copy_dest_done(struct file *file, loff_t pos, loff_t len,
++				 loff_t oldsize)
+ {
++	struct inode *inode = file_inode(file);
++	struct address_space *mapping = file->f_mapping;
+ 	loff_t newsize = pos + len;
+ 	loff_t end = newsize - 1;
+ 
+-	WARN_ON_ONCE(invalidate_inode_pages2_range(inode->i_mapping,
+-				pos >> PAGE_SHIFT, end >> PAGE_SHIFT));
++	nfs_truncate_last_folio(mapping, oldsize, pos);
++	WARN_ON_ONCE(invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
++						   end >> PAGE_SHIFT));
+ 
+ 	spin_lock(&inode->i_lock);
+ 	if (newsize > i_size_read(inode))
+@@ -403,6 +416,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 	struct nfs_server *src_server = NFS_SERVER(src_inode);
+ 	loff_t pos_src = args->src_pos;
+ 	loff_t pos_dst = args->dst_pos;
++	loff_t oldsize_dst = i_size_read(dst_inode);
+ 	size_t count = args->count;
+ 	ssize_t status;
+ 
+@@ -477,7 +491,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 			goto out;
+ 	}
+ 
+-	nfs42_copy_dest_done(dst_inode, pos_dst, res->write_res.count);
++	nfs42_copy_dest_done(dst, pos_dst, res->write_res.count, oldsize_dst);
+ 	nfs_invalidate_atime(src_inode);
+ 	status = res->write_res.count;
+ out:
+@@ -1244,6 +1258,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
+ 	struct nfs42_clone_res res = {
+ 		.server	= server,
+ 	};
++	loff_t oldsize_dst = i_size_read(dst_inode);
+ 	int status;
+ 
+ 	msg->rpc_argp = &args;
+@@ -1278,7 +1293,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
+ 		/* a zero-length count means clone to EOF in src */
+ 		if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE)
+ 			count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset;
+-		nfs42_copy_dest_done(dst_inode, dst_offset, count);
++		nfs42_copy_dest_done(dst_f, dst_offset, count, oldsize_dst);
+ 		status = nfs_post_op_update_inode(dst_inode, res.dst_fattr);
+ 	}
+ 
+diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h
+index 7a058bd8c566e2..1e4dc632f1800c 100644
+--- a/fs/nfs/nfstrace.h
++++ b/fs/nfs/nfstrace.h
+@@ -267,6 +267,7 @@ DECLARE_EVENT_CLASS(nfs_update_size_class,
+ 			TP_ARGS(inode, new_size))
+ 
+ DEFINE_NFS_UPDATE_SIZE_EVENT(truncate);
++DEFINE_NFS_UPDATE_SIZE_EVENT(truncate_folio);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(wcc);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(update);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(grow);
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index bdc3c2e9334ad5..ea4ff22976c5c8 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -2286,6 +2286,9 @@ static void pagemap_scan_backout_range(struct pagemap_scan_private *p,
+ {
+ 	struct page_region *cur_buf = &p->vec_buf[p->vec_buf_index];
+ 
++	if (!p->vec_buf)
++		return;
++
+ 	if (cur_buf->start != addr)
+ 		cur_buf->end = addr;
+ 	else
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 86cad8ee8e6f3b..ac3ce183bd59a9 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -687,7 +687,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	for (i = 0; i < num_cmds; i++) {
+-		char *buf = rsp_iov[i + i].iov_base;
++		char *buf = rsp_iov[i + 1].iov_base;
+ 
+ 		if (buf && resp_buftype[i + 1] != CIFS_NO_BUFFER)
+ 			rc = server->ops->map_error(buf, false);
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 6550bd9f002c27..74dfb6496095db 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -148,7 +148,7 @@ struct smb_direct_transport {
+ 	wait_queue_head_t	wait_send_pending;
+ 	atomic_t		send_pending;
+ 
+-	struct delayed_work	post_recv_credits_work;
++	struct work_struct	post_recv_credits_work;
+ 	struct work_struct	send_immediate_work;
+ 	struct work_struct	disconnect_work;
+ 
+@@ -367,8 +367,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ 
+ 	spin_lock_init(&t->lock_new_recv_credits);
+ 
+-	INIT_DELAYED_WORK(&t->post_recv_credits_work,
+-			  smb_direct_post_recv_credits);
++	INIT_WORK(&t->post_recv_credits_work,
++		  smb_direct_post_recv_credits);
+ 	INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work);
+ 	INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work);
+ 
+@@ -399,9 +399,9 @@ static void free_transport(struct smb_direct_transport *t)
+ 	wait_event(t->wait_send_pending,
+ 		   atomic_read(&t->send_pending) == 0);
+ 
+-	cancel_work_sync(&t->disconnect_work);
+-	cancel_delayed_work_sync(&t->post_recv_credits_work);
+-	cancel_work_sync(&t->send_immediate_work);
++	disable_work_sync(&t->disconnect_work);
++	disable_work_sync(&t->post_recv_credits_work);
++	disable_work_sync(&t->send_immediate_work);
+ 
+ 	if (t->qp) {
+ 		ib_drain_qp(t->qp);
+@@ -615,8 +615,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			wake_up_interruptible(&t->wait_send_credits);
+ 
+ 		if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count))
+-			mod_delayed_work(smb_direct_wq,
+-					 &t->post_recv_credits_work, 0);
++			queue_work(smb_direct_wq, &t->post_recv_credits_work);
+ 
+ 		if (data_length) {
+ 			enqueue_reassembly(t, recvmsg, (int)data_length);
+@@ -773,8 +772,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf,
+ 		st->count_avail_recvmsg += queue_removed;
+ 		if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) {
+ 			spin_unlock(&st->receive_credit_lock);
+-			mod_delayed_work(smb_direct_wq,
+-					 &st->post_recv_credits_work, 0);
++			queue_work(smb_direct_wq, &st->post_recv_credits_work);
+ 		} else {
+ 			spin_unlock(&st->receive_credit_lock);
+ 		}
+@@ -801,7 +799,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf,
+ static void smb_direct_post_recv_credits(struct work_struct *work)
+ {
+ 	struct smb_direct_transport *t = container_of(work,
+-		struct smb_direct_transport, post_recv_credits_work.work);
++		struct smb_direct_transport, post_recv_credits_work);
+ 	struct smb_direct_recvmsg *recvmsg;
+ 	int receive_credits, credits = 0;
+ 	int ret;
+@@ -1734,7 +1732,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t)
+ 		goto out_err;
+ 	}
+ 
+-	smb_direct_post_recv_credits(&t->post_recv_credits_work.work);
++	smb_direct_post_recv_credits(&t->post_recv_credits_work);
+ 	return 0;
+ out_err:
+ 	put_recvmsg(t, recvmsg);
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 0c70f3a5557505..107b797c33ecf7 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -152,7 +152,7 @@ struct af_alg_ctx {
+ 	size_t used;
+ 	atomic_t rcvused;
+ 
+-	u32		more:1,
++	bool		more:1,
+ 			merge:1,
+ 			enc:1,
+ 			write:1,
+diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h
+index a8a17eeb7d907e..1817df9aceac80 100644
+--- a/include/linux/firmware/imx/sm.h
++++ b/include/linux/firmware/imx/sm.h
+@@ -18,13 +18,43 @@
+ #define SCMI_IMX_CTRL_SAI4_MCLK		4	/* WAKE SAI4 MCLK */
+ #define SCMI_IMX_CTRL_SAI5_MCLK		5	/* WAKE SAI5 MCLK */
+ 
++#if IS_ENABLED(CONFIG_IMX_SCMI_MISC_DRV)
+ int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val);
+ int scmi_imx_misc_ctrl_set(u32 id, u32 val);
++#else
++static inline int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val)
++{
++	return -EOPNOTSUPP;
++}
+ 
++static inline int scmi_imx_misc_ctrl_set(u32 id, u32 val)
++{
++	return -EOPNOTSUPP;
++}
++#endif
++
++#if IS_ENABLED(CONFIG_IMX_SCMI_CPU_DRV)
+ int scmi_imx_cpu_start(u32 cpuid, bool start);
+ int scmi_imx_cpu_started(u32 cpuid, bool *started);
+ int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start, bool boot,
+ 				  bool resume);
++#else
++static inline int scmi_imx_cpu_start(u32 cpuid, bool start)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_cpu_started(u32 cpuid, bool *started)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start,
++						bool boot, bool resume)
++{
++	return -EOPNOTSUPP;
++}
++#endif
+ 
+ enum scmi_imx_lmm_op {
+ 	SCMI_IMX_LMM_BOOT,
+@@ -36,7 +66,24 @@ enum scmi_imx_lmm_op {
+ #define SCMI_IMX_LMM_OP_FORCEFUL	0
+ #define SCMI_IMX_LMM_OP_GRACEFUL	BIT(0)
+ 
++#if IS_ENABLED(CONFIG_IMX_SCMI_LMM_DRV)
+ int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags);
+ int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info);
+ int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector);
++#else
++static inline int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector)
++{
++	return -EOPNOTSUPP;
++}
++#endif
+ #endif
+diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
+index 939e58c2f3865f..fb5f98fcc72692 100644
+--- a/include/linux/mlx5/fs.h
++++ b/include/linux/mlx5/fs.h
+@@ -308,6 +308,8 @@ struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging);
+ void mlx5_fc_destroy(struct mlx5_core_dev *dev, struct mlx5_fc *counter);
+ struct mlx5_fc *mlx5_fc_local_create(u32 counter_id, u32 offset, u32 bulk_size);
+ void mlx5_fc_local_destroy(struct mlx5_fc *counter);
++void mlx5_fc_local_get(struct mlx5_fc *counter);
++void mlx5_fc_local_put(struct mlx5_fc *counter);
+ u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter);
+ void mlx5_fc_query_cached(struct mlx5_fc *counter,
+ 			  u64 *bytes, u64 *packets, u64 *lastuse);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 439bc124ce7098..1347ae13dd0a1c 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1245,6 +1245,27 @@ static inline struct hci_conn *hci_conn_hash_lookup_ba(struct hci_dev *hdev,
+ 	return NULL;
+ }
+ 
++static inline struct hci_conn *hci_conn_hash_lookup_role(struct hci_dev *hdev,
++							 __u8 type, __u8 role,
++							 bdaddr_t *ba)
++{
++	struct hci_conn_hash *h = &hdev->conn_hash;
++	struct hci_conn  *c;
++
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(c, &h->list, list) {
++		if (c->type == type && c->role == role && !bacmp(&c->dst, ba)) {
++			rcu_read_unlock();
++			return c;
++		}
++	}
++
++	rcu_read_unlock();
++
++	return NULL;
++}
++
+ static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev,
+ 						       bdaddr_t *ba,
+ 						       __u8 ba_type)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 829f0792d8d831..17e5cf18da1efb 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -3013,7 +3013,10 @@ EXPORT_SYMBOL_GPL(bpf_event_output);
+ 
+ /* Always built-in helper functions. */
+ const struct bpf_func_proto bpf_tail_call_proto = {
+-	.func		= NULL,
++	/* func is unused for tail_call; we set it to pass the
++	 * get_helper_proto check.
++	 */
++	.func		= BPF_PTR_POISON,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_VOID,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4fd89659750b25..a6338936085ae8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8405,6 +8405,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno,
+ 		verifier_bug(env, "Two map pointers in a timer helper");
+ 		return -EFAULT;
+ 	}
++	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++		verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
++		return -EOPNOTSUPP;
++	}
+ 	meta->map_uid = reg->map_uid;
+ 	meta->map_ptr = map;
+ 	return 0;
+@@ -11206,7 +11210,7 @@ static int get_helper_proto(struct bpf_verifier_env *env, int func_id,
+ 		return -EINVAL;
+ 
+ 	*ptr = env->ops->get_func_proto(func_id, env->prog);
+-	return *ptr ? 0 : -EINVAL;
++	return *ptr && (*ptr)->func ? 0 : -EINVAL;
+ }
+ 
+ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 1ee8eb11f38bae..0cbc174da76ace 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2289,7 +2289,7 @@ __latent_entropy struct task_struct *copy_process(
+ 	if (need_futex_hash_allocate_default(clone_flags)) {
+ 		retval = futex_hash_allocate_default();
+ 		if (retval)
+-			goto bad_fork_core_free;
++			goto bad_fork_cancel_cgroup;
+ 		/*
+ 		 * If we fail beyond this point we don't free the allocated
+ 		 * futex hash map. We assume that another thread will be created
+diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
+index c716a66f86929c..d818b4d47f1bad 100644
+--- a/kernel/futex/requeue.c
++++ b/kernel/futex/requeue.c
+@@ -230,8 +230,9 @@ static inline
+ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ 			   struct futex_hash_bucket *hb)
+ {
+-	q->key = *key;
++	struct task_struct *task;
+ 
++	q->key = *key;
+ 	__futex_unqueue(q);
+ 
+ 	WARN_ON(!q->rt_waiter);
+@@ -243,10 +244,11 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ 	futex_hash_get(hb);
+ 	q->drop_hb_ref = true;
+ 	q->lock_ptr = &hb->lock;
++	task = READ_ONCE(q->task);
+ 
+ 	/* Signal locked state to the waiter */
+ 	futex_requeue_pi_complete(q, 1);
+-	wake_up_state(q->task, TASK_NORMAL);
++	wake_up_state(task, TASK_NORMAL);
+ }
+ 
+ /**
+diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
+index 001fb88a8481d8..edd6cdd9aadcac 100644
+--- a/kernel/sched/ext_idle.c
++++ b/kernel/sched/ext_idle.c
+@@ -75,7 +75,7 @@ static int scx_cpu_node_if_enabled(int cpu)
+ 	return cpu_to_node(cpu);
+ }
+ 
+-bool scx_idle_test_and_clear_cpu(int cpu)
++static bool scx_idle_test_and_clear_cpu(int cpu)
+ {
+ 	int node = scx_cpu_node_if_enabled(cpu);
+ 	struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+@@ -198,7 +198,7 @@ pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, int node, u6
+ /*
+  * Find an idle CPU in the system, starting from @node.
+  */
+-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
++static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+ {
+ 	s32 cpu;
+ 
+@@ -794,6 +794,16 @@ static void reset_idle_masks(struct sched_ext_ops *ops)
+ 		cpumask_and(idle_cpumask(node)->smt, cpu_online_mask, node_mask);
+ 	}
+ }
++#else	/* !CONFIG_SMP */
++static bool scx_idle_test_and_clear_cpu(int cpu)
++{
++	return -EBUSY;
++}
++
++static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
++{
++	return -EBUSY;
++}
+ #endif	/* CONFIG_SMP */
+ 
+ void scx_idle_enable(struct sched_ext_ops *ops)
+@@ -860,8 +870,34 @@ static bool check_builtin_idle_enabled(void)
+ 	return false;
+ }
+ 
+-s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+-			  const struct cpumask *allowed, u64 flags)
++/*
++ * Determine whether @p is a migration-disabled task in the context of BPF
++ * code.
++ *
++ * We can't simply check whether @p->migration_disabled is set in a
++ * sched_ext callback, because migration is always disabled for the current
++ * task while running BPF code.
++ *
++ * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively
++ * disable and re-enable migration. For this reason, the current task
++ * inside a sched_ext callback is always a migration-disabled task.
++ *
++ * Therefore, when @p->migration_disabled == 1, check whether @p is the
++ * current task or not: if it is, then migration was not disabled before
++ * entering the callback, otherwise migration was disabled.
++ *
++ * Returns true if @p is migration-disabled, false otherwise.
++ */
++static bool is_bpf_migration_disabled(const struct task_struct *p)
++{
++	if (p->migration_disabled == 1)
++		return p != current;
++	else
++		return p->migration_disabled;
++}
++
++static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
++				 const struct cpumask *allowed, u64 flags)
+ {
+ 	struct rq *rq;
+ 	struct rq_flags rf;
+@@ -903,7 +939,7 @@ s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ 	 * selection optimizations and simply check whether the previously
+ 	 * used CPU is idle and within the allowed cpumask.
+ 	 */
+-	if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
++	if (p->nr_cpus_allowed == 1 || is_bpf_migration_disabled(p)) {
+ 		if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) &&
+ 		    scx_idle_test_and_clear_cpu(prev_cpu))
+ 			cpu = prev_cpu;
+@@ -1125,10 +1161,10 @@ __bpf_kfunc bool scx_bpf_test_and_clear_cpu_idle(s32 cpu)
+ 	if (!check_builtin_idle_enabled())
+ 		return false;
+ 
+-	if (kf_cpu_valid(cpu, NULL))
+-		return scx_idle_test_and_clear_cpu(cpu);
+-	else
++	if (!kf_cpu_valid(cpu, NULL))
+ 		return false;
++
++	return scx_idle_test_and_clear_cpu(cpu);
+ }
+ 
+ /**
+diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
+index 37be78a7502b32..05e389ed72e4c1 100644
+--- a/kernel/sched/ext_idle.h
++++ b/kernel/sched/ext_idle.h
+@@ -15,16 +15,9 @@ struct sched_ext_ops;
+ #ifdef CONFIG_SMP
+ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
+ void scx_idle_init_masks(void);
+-bool scx_idle_test_and_clear_cpu(int cpu);
+-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags);
+ #else /* !CONFIG_SMP */
+ static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
+ static inline void scx_idle_init_masks(void) {}
+-static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
+-static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+-{
+-	return -EBUSY;
+-}
+ #endif /* CONFIG_SMP */
+ 
+ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index db40ec5cc9d731..9fb1a8c107ec35 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -815,6 +815,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 	unsigned long bitmap;
+ 	unsigned long ret;
+ 	int offset;
++	int bit;
+ 	int i;
+ 
+ 	ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &offset);
+@@ -829,6 +830,15 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 	if (fregs)
+ 		ftrace_regs_set_instruction_pointer(fregs, ret);
+ 
++	bit = ftrace_test_recursion_trylock(trace.func, ret);
++	/*
++	 * This can fail because ftrace_test_recursion_trylock() allows one nested
++	 * call. If we are already in a nested call, then we don't probe this and
++	 * just return the original return address.
++	 */
++	if (unlikely(bit < 0))
++		goto out;
++
+ #ifdef CONFIG_FUNCTION_GRAPH_RETVAL
+ 	trace.retval = ftrace_regs_get_return_value(fregs);
+ #endif
+@@ -852,6 +862,8 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 		}
+ 	}
+ 
++	ftrace_test_recursion_unlock(bit);
++out:
+ 	/*
+ 	 * The ftrace_graph_return() may still access the current
+ 	 * ret_stack structure, we need to make sure the update of
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index f9b3aa9afb1784..342e84f8a40e24 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -428,8 +428,9 @@ static int fprobe_addr_list_add(struct fprobe_addr_list *alist, unsigned long ad
+ {
+ 	unsigned long *addrs;
+ 
+-	if (alist->index >= alist->size)
+-		return -ENOMEM;
++	/* Previously we failed to expand the list. */
++	if (alist->index == alist->size)
++		return -ENOSPC;
+ 
+ 	alist->addrs[alist->index++] = addr;
+ 	if (alist->index < alist->size)
+@@ -489,7 +490,7 @@ static int fprobe_module_callback(struct notifier_block *nb,
+ 	for (i = 0; i < FPROBE_IP_TABLE_SIZE; i++)
+ 		fprobe_remove_node_in_module(mod, &fprobe_ip_table[i], &alist);
+ 
+-	if (alist.index < alist.size && alist.index > 0)
++	if (alist.index > 0)
+ 		ftrace_set_filter_ips(&fprobe_graph_ops.ops,
+ 				      alist.addrs, alist.index, 1, 0);
+ 	mutex_unlock(&fprobe_mutex);
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index 5d64a18cacacc6..d06854bd32b357 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -230,6 +230,10 @@ static int dyn_event_open(struct inode *inode, struct file *file)
+ {
+ 	int ret;
+ 
++	ret = security_locked_down(LOCKDOWN_TRACEFS);
++	if (ret)
++		return ret;
++
+ 	ret = tracing_check_open_get_tr(NULL);
+ 	if (ret)
+ 		return ret;
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 337bc0eb5d71bf..dc734867f0fc44 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2325,12 +2325,13 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
+ 	if (count < 1)
+ 		return 0;
+ 
+-	buf = kmalloc(count, GFP_KERNEL);
++	buf = kmalloc(count + 1, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	if (copy_from_user(buf, ubuf, count))
+ 		return -EFAULT;
++	buf[count] = '\0';
+ 
+ 	if (!zalloc_cpumask_var(&osnoise_cpumask_new, GFP_KERNEL))
+ 		return -ENOMEM;
+diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
+index 2f844c279a3e01..7f24ccc896c649 100644
+--- a/kernel/vhost_task.c
++++ b/kernel/vhost_task.c
+@@ -100,6 +100,7 @@ void vhost_task_stop(struct vhost_task *vtsk)
+ 	 * freeing it below.
+ 	 */
+ 	wait_for_completion(&vtsk->exited);
++	put_task_struct(vtsk->task);
+ 	kfree(vtsk);
+ }
+ EXPORT_SYMBOL_GPL(vhost_task_stop);
+@@ -148,7 +149,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *),
+ 		return ERR_PTR(PTR_ERR(tsk));
+ 	}
+ 
+-	vtsk->task = tsk;
++	vtsk->task = get_task_struct(tsk);
+ 	return vtsk;
+ }
+ EXPORT_SYMBOL_GPL(vhost_task_create);
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 57d4ec256682ce..04f121a756d478 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1576,12 +1576,14 @@ static int damon_sysfs_damon_call(int (*fn)(void *data),
+ 		struct damon_sysfs_kdamond *kdamond)
+ {
+ 	struct damon_call_control call_control = {};
++	int err;
+ 
+ 	if (!kdamond->damon_ctx)
+ 		return -EINVAL;
+ 	call_control.fn = fn;
+ 	call_control.data = kdamond;
+-	return damon_call(kdamond->damon_ctx, &call_control);
++	err = damon_call(kdamond->damon_ctx, &call_control);
++	return err ? err : call_control.return_code;
+ }
+ 
+ struct damon_sysfs_schemes_walk_data {
+diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
+index 1ea711786c522d..8bca7fece47f0e 100644
+--- a/mm/kmsan/core.c
++++ b/mm/kmsan/core.c
+@@ -195,7 +195,8 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 				      u32 origin, bool checked)
+ {
+ 	u64 address = (u64)addr;
+-	u32 *shadow_start, *origin_start;
++	void *shadow_start;
++	u32 *aligned_shadow, *origin_start;
+ 	size_t pad = 0;
+ 
+ 	KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size));
+@@ -214,9 +215,12 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 	}
+ 	__memset(shadow_start, b, size);
+ 
+-	if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) {
++	if (IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) {
++		aligned_shadow = shadow_start;
++	} else {
+ 		pad = address % KMSAN_ORIGIN_SIZE;
+ 		address -= pad;
++		aligned_shadow = shadow_start - pad;
+ 		size += pad;
+ 	}
+ 	size = ALIGN(size, KMSAN_ORIGIN_SIZE);
+@@ -230,7 +234,7 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 	 * corresponding shadow slot is zero.
+ 	 */
+ 	for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) {
+-		if (origin || !shadow_start[i])
++		if (origin || !aligned_shadow[i])
+ 			origin_start[i] = origin;
+ 	}
+ }
+diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
+index c6c5b2bbede0cc..902ec48b1e3e6a 100644
+--- a/mm/kmsan/kmsan_test.c
++++ b/mm/kmsan/kmsan_test.c
+@@ -556,6 +556,21 @@ DEFINE_TEST_MEMSETXX(16)
+ DEFINE_TEST_MEMSETXX(32)
+ DEFINE_TEST_MEMSETXX(64)
+ 
++/* Test case: ensure that KMSAN does not access shadow memory out of bounds. */
++static void test_memset_on_guarded_buffer(struct kunit *test)
++{
++	void *buf = vmalloc(PAGE_SIZE);
++
++	kunit_info(test,
++		   "memset() on ends of guarded buffer should not crash\n");
++
++	for (size_t size = 0; size <= 128; size++) {
++		memset(buf, 0xff, size);
++		memset(buf + PAGE_SIZE - size, 0xff, size);
++	}
++	vfree(buf);
++}
++
+ static noinline void fibonacci(int *array, int size, int start)
+ {
+ 	if (start < 2 || (start == size))
+@@ -677,6 +692,7 @@ static struct kunit_case kmsan_test_cases[] = {
+ 	KUNIT_CASE(test_memset16),
+ 	KUNIT_CASE(test_memset32),
+ 	KUNIT_CASE(test_memset64),
++	KUNIT_CASE(test_memset_on_guarded_buffer),
+ 	KUNIT_CASE(test_long_origin_chain),
+ 	KUNIT_CASE(test_stackdepot_roundtrip),
+ 	KUNIT_CASE(test_unpoison_memory),
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 090c7ffa515252..2ef5b3004197b8 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3087,8 +3087,18 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
++	/* Check for existing connection:
++	 *
++	 * 1. If it doesn't exist, then it must be the receiver/slave role.
++	 * 2. If it does exist, confirm that it is connecting/BT_CONNECT in case
++	 *    of initiator/master role, since there could be a collision where
++	 *    either side is attempting to connect, or something like fuzz
++	 *    testing is trying to play tricks to destroy the hcon object before
++	 *    it even attempts to connect (e.g. hcon->state == BT_OPEN).
++	 */
+ 	conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr);
+-	if (!conn) {
++	if (!conn ||
++	    (conn->role == HCI_ROLE_MASTER && conn->state != BT_CONNECT)) {
+ 		/* In case of error status and there is no connection pending
+ 		 * just unlock as there is nothing to cleanup.
+ 		 */
+@@ -4391,6 +4401,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ 	bt_dev_dbg(hdev, "num %d", ev->num);
+ 
++	hci_dev_lock(hdev);
++
+ 	for (i = 0; i < ev->num; i++) {
+ 		struct hci_comp_pkts_info *info = &ev->handles[i];
+ 		struct hci_conn *conn;
+@@ -4472,6 +4484,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 	}
+ 
+ 	queue_work(hdev->workqueue, &hdev->tx_work);
++
++	hci_dev_unlock(hdev);
+ }
+ 
+ static void hci_mode_change_evt(struct hci_dev *hdev, void *data,
+@@ -5634,8 +5648,18 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 	 */
+ 	hci_dev_clear_flag(hdev, HCI_LE_ADV);
+ 
+-	conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, bdaddr);
+-	if (!conn) {
++	/* Check for existing connection:
++	 *
++	 * 1. If it doesn't exist, then use the role to create a new object.
++	 * 2. If it does exist, confirm that it is connecting/BT_CONNECT in case
++	 *    of initiator/master role, since there could be a collision where
++	 *    either side is attempting to connect, or something like fuzz
++	 *    testing is trying to play tricks to destroy the hcon object before
++	 *    it even attempts to connect (e.g. hcon->state == BT_OPEN).
++	 */
++	conn = hci_conn_hash_lookup_role(hdev, LE_LINK, role, bdaddr);
++	if (!conn ||
++	    (conn->role == HCI_ROLE_MASTER && conn->state != BT_CONNECT)) {
+ 		/* In case of error status and there is no connection pending
+ 		 * just unlock as there is nothing to cleanup.
+ 		 */
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index a25439f1eeac28..7ca544d7791f43 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2594,6 +2594,13 @@ static int hci_resume_advertising_sync(struct hci_dev *hdev)
+ 			hci_remove_ext_adv_instance_sync(hdev, adv->instance,
+ 							 NULL);
+ 		}
++
++		/* If the current advertising instance is set to instance 0x00,
++		 * then we need to re-enable it.
++		 */
++		if (!hdev->cur_adv_instance)
++			err = hci_enable_ext_advertising_sync(hdev,
++							      hdev->cur_adv_instance);
+ 	} else {
+ 		/* Schedule for most recent instance to be restarted and begin
+ 		 * the software rotation loop
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 50634ef5c8b707..225140fcb3d6c8 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1323,8 +1323,7 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ 	struct mgmt_mode *cp;
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	cp = cmd->param;
+@@ -1351,23 +1350,29 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ 				mgmt_status(err));
+ 	}
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_powered_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp;
++	struct mgmt_mode cp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
+ 		return -ECANCELED;
++	}
+ 
+-	cp = cmd->param;
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	return hci_set_powered_sync(hdev, cp->val);
++	return hci_set_powered_sync(hdev, cp.val);
+ }
+ 
+ static int set_powered(struct sock *sk, struct hci_dev *hdev, void *data,
+@@ -1516,8 +1521,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -1539,12 +1543,15 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 	new_settings(hdev, cmd->sk);
+ 
+ done:
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 	hci_dev_unlock(hdev);
+ }
+ 
+ static int set_discoverable_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	return hci_update_discoverable_sync(hdev);
+@@ -1691,8 +1698,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -1707,7 +1713,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 	new_settings(hdev, cmd->sk);
+ 
+ done:
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_dev_unlock(hdev);
+ }
+@@ -1743,6 +1749,9 @@ static int set_connectable_update_settings(struct hci_dev *hdev,
+ 
+ static int set_connectable_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	return hci_update_connectable_sync(hdev);
+@@ -1919,14 +1928,17 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 enable = cp->val;
++	struct mgmt_mode *cp;
++	u8 enable;
+ 	bool changed;
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
++	cp = cmd->param;
++	enable = cp->val;
++
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+ 
+@@ -1935,8 +1947,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 			new_settings(hdev, NULL);
+ 		}
+ 
+-		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
+-				     cmd_status_rsp, &mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		return;
+ 	}
+ 
+@@ -1946,7 +1957,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);
++	settings_rsp(cmd, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -1960,14 +1971,25 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_ssp_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
++	struct mgmt_mode cp;
+ 	bool changed = false;
+ 	int err;
+ 
+-	if (cp->val)
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	if (cp.val)
+ 		changed = !hci_dev_test_and_set_flag(hdev, HCI_SSP_ENABLED);
+ 
+-	err = hci_write_ssp_mode_sync(hdev, cp->val);
++	err = hci_write_ssp_mode_sync(hdev, cp.val);
+ 
+ 	if (!err && changed)
+ 		hci_dev_clear_flag(hdev, HCI_SSP_ENABLED);
+@@ -2060,32 +2082,50 @@ static int set_hs(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 
+ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct mgmt_pending_cmd *cmd = data;
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 status = mgmt_status(err);
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
+-				     &status);
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, data))
+ 		return;
++
++	if (status) {
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, status);
++		goto done;
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);
++	settings_rsp(cmd, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+ 	if (match.sk)
+ 		sock_put(match.sk);
++
++done:
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_le_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 val = !!cp->val;
++	struct mgmt_mode cp;
++	u8 val;
+ 	int err;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++	val = !!cp.val;
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
+ 	if (!val) {
+ 		hci_clear_adv_instance_sync(hdev, NULL, 0x00, true);
+ 
+@@ -2127,7 +2167,12 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	u8 status = mgmt_status(err);
+-	struct sock *sk = cmd->sk;
++	struct sock *sk;
++
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
++		return;
++
++	sk = cmd->sk;
+ 
+ 	if (status) {
+ 		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
+@@ -2142,24 +2187,37 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_mesh_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_mesh *cp = cmd->param;
+-	size_t len = cmd->param_len;
++	struct mgmt_cp_set_mesh cp;
++	size_t len;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	len = cmd->param_len;
+ 
+ 	memset(hdev->mesh_ad_types, 0, sizeof(hdev->mesh_ad_types));
+ 
+-	if (cp->enable)
++	if (cp.enable)
+ 		hci_dev_set_flag(hdev, HCI_MESH);
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_MESH);
+ 
+-	hdev->le_scan_interval = __le16_to_cpu(cp->period);
+-	hdev->le_scan_window = __le16_to_cpu(cp->window);
++	hdev->le_scan_interval = __le16_to_cpu(cp.period);
++	hdev->le_scan_window = __le16_to_cpu(cp.window);
+ 
+-	len -= sizeof(*cp);
++	len -= sizeof(cp);
+ 
+ 	/* If filters don't fit, forward all adv pkts */
+ 	if (len <= sizeof(hdev->mesh_ad_types))
+-		memcpy(hdev->mesh_ad_types, cp->ad_types, len);
++		memcpy(hdev->mesh_ad_types, cp.ad_types, len);
+ 
+ 	hci_update_passive_scan_sync(hdev);
+ 	return 0;
+@@ -3867,15 +3925,16 @@ static int name_changed_sync(struct hci_dev *hdev, void *data)
+ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_local_name *cp = cmd->param;
++	struct mgmt_cp_set_local_name *cp;
+ 	u8 status = mgmt_status(err);
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
++	cp = cmd->param;
++
+ 	if (status) {
+ 		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_LOCAL_NAME,
+ 				status);
+@@ -3887,16 +3946,27 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ 			hci_cmd_sync_queue(hdev, name_changed_sync, NULL, NULL);
+ 	}
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_name_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_local_name *cp = cmd->param;
++	struct mgmt_cp_set_local_name cp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	if (lmp_bredr_capable(hdev)) {
+-		hci_update_name_sync(hdev, cp->name);
++		hci_update_name_sync(hdev, cp.name);
+ 		hci_update_eir_sync(hdev);
+ 	}
+ 
+@@ -4048,12 +4118,10 @@ int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip)
+ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct sk_buff *skb = cmd->skb;
++	struct sk_buff *skb;
+ 	u8 status = mgmt_status(err);
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+-		return;
++	skb = cmd->skb;
+ 
+ 	if (!status) {
+ 		if (!skb)
+@@ -4080,7 +4148,7 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ 	if (skb && !IS_ERR(skb))
+ 		kfree_skb(skb);
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_default_phy_sync(struct hci_dev *hdev, void *data)
+@@ -4088,7 +4156,9 @@ static int set_default_phy_sync(struct hci_dev *hdev, void *data)
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	struct mgmt_cp_set_phy_configuration *cp = cmd->param;
+ 	struct hci_cp_le_set_default_phy cp_phy;
+-	u32 selected_phys = __le32_to_cpu(cp->selected_phys);
++	u32 selected_phys;
++
++	selected_phys = __le32_to_cpu(cp->selected_phys);
+ 
+ 	memset(&cp_phy, 0, sizeof(cp_phy));
+ 
+@@ -4228,7 +4298,7 @@ static int set_phy_configuration(struct sock *sk, struct hci_dev *hdev,
+ 		goto unlock;
+ 	}
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_SET_PHY_CONFIGURATION, hdev, data,
++	cmd = mgmt_pending_new(sk, MGMT_OP_SET_PHY_CONFIGURATION, hdev, data,
+ 			       len);
+ 	if (!cmd)
+ 		err = -ENOMEM;
+@@ -5189,7 +5259,17 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ {
+ 	struct mgmt_rp_add_adv_patterns_monitor rp;
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct adv_monitor *monitor = cmd->user_data;
++	struct adv_monitor *monitor;
++
++	/* This is likely the result of hdev being closed and mgmt_index_removed
++	 * attempting to clean up any pending command, so
++	 * hci_adv_monitors_clear is about to be called, which will take care of
++	 * freeing the adv_monitor instances.
++	 */
++	if (status == -ECANCELED && !mgmt_pending_valid(hdev, cmd))
++		return;
++
++	monitor = cmd->user_data;
+ 
+ 	hci_dev_lock(hdev);
+ 
+@@ -5215,9 +5295,20 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_add_adv_patterns_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct adv_monitor *monitor = cmd->user_data;
++	struct adv_monitor *mon;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	mon = cmd->user_data;
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+-	return hci_add_adv_monitor(hdev, monitor);
++	return hci_add_adv_monitor(hdev, mon);
+ }
+ 
+ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,
+@@ -5484,7 +5575,8 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 			       status);
+ }
+ 
+-static void read_local_oob_data_complete(struct hci_dev *hdev, void *data, int err)
++static void read_local_oob_data_complete(struct hci_dev *hdev, void *data,
++					 int err)
+ {
+ 	struct mgmt_rp_read_local_oob_data mgmt_rp;
+ 	size_t rp_size = sizeof(mgmt_rp);
+@@ -5504,7 +5596,8 @@ static void read_local_oob_data_complete(struct hci_dev *hdev, void *data, int e
+ 	bt_dev_dbg(hdev, "status %d", status);
+ 
+ 	if (status) {
+-		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA, status);
++		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA,
++				status);
+ 		goto remove;
+ 	}
+ 
+@@ -5786,17 +5879,12 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (err == -ECANCELED)
+-		return;
+-
+-	if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
+-	    cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
+-	    cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_discovery_set_state(hdev, err ? DISCOVERY_STOPPED:
+ 				DISCOVERY_FINDING);
+@@ -5804,6 +5892,9 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int start_discovery_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	return hci_start_discovery_sync(hdev);
+ }
+ 
+@@ -6009,15 +6100,14 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	if (!err)
+ 		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+@@ -6025,6 +6115,9 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int stop_discovery_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	return hci_stop_discovery_sync(hdev);
+ }
+ 
+@@ -6234,14 +6327,18 @@ static void enable_advertising_instance(struct hci_dev *hdev, int err)
+ 
+ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct mgmt_pending_cmd *cmd = data;
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 instance;
+ 	struct adv_info *adv_instance;
+ 	u8 status = mgmt_status(err);
+ 
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, data))
++		return;
++
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
+-				     cmd_status_rsp, &status);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, status);
++		mgmt_pending_free(cmd);
+ 		return;
+ 	}
+ 
+@@ -6250,8 +6347,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
+-			     &match);
++	settings_rsp(cmd, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -6283,10 +6379,23 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_adv_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 val = !!cp->val;
++	struct mgmt_mode cp;
++	u8 val;
+ 
+-	if (cp->val == 0x02)
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	val = !!cp.val;
++
++	if (cp.val == 0x02)
+ 		hci_dev_set_flag(hdev, HCI_ADVERTISING_CONNECTABLE);
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING_CONNECTABLE);
+@@ -8039,10 +8148,6 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ 	u8 status = mgmt_status(err);
+ 	u16 eir_len;
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+-		return;
+-
+ 	if (!status) {
+ 		if (!skb)
+ 			status = MGMT_STATUS_FAILED;
+@@ -8149,7 +8254,7 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ 		kfree_skb(skb);
+ 
+ 	kfree(mgmt_rp);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int read_local_ssp_oob_req(struct hci_dev *hdev, struct sock *sk,
+@@ -8158,7 +8263,7 @@ static int read_local_ssp_oob_req(struct hci_dev *hdev, struct sock *sk,
+ 	struct mgmt_pending_cmd *cmd;
+ 	int err;
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev,
++	cmd = mgmt_pending_new(sk, MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev,
+ 			       cp, sizeof(*cp));
+ 	if (!cmd)
+ 		return -ENOMEM;
+diff --git a/net/bluetooth/mgmt_util.c b/net/bluetooth/mgmt_util.c
+index a88a07da394734..aa7b5585cb268b 100644
+--- a/net/bluetooth/mgmt_util.c
++++ b/net/bluetooth/mgmt_util.c
+@@ -320,6 +320,52 @@ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
+ 	mgmt_pending_free(cmd);
+ }
+ 
++bool __mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	struct mgmt_pending_cmd *tmp;
++
++	lockdep_assert_held(&hdev->mgmt_pending_lock);
++
++	if (!cmd)
++		return false;
++
++	list_for_each_entry(tmp, &hdev->mgmt_pending, list) {
++		if (cmd == tmp)
++			return true;
++	}
++
++	return false;
++}
++
++bool mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	bool listed;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++	listed = __mgmt_pending_listed(hdev, cmd);
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	return listed;
++}
++
++bool mgmt_pending_valid(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	bool listed;
++
++	if (!cmd)
++		return false;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	listed = __mgmt_pending_listed(hdev, cmd);
++	if (listed)
++		list_del(&cmd->list);
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	return listed;
++}
++
+ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ 		       void (*cb)(struct mgmt_mesh_tx *mesh_tx, void *data),
+ 		       void *data, struct sock *sk)
+diff --git a/net/bluetooth/mgmt_util.h b/net/bluetooth/mgmt_util.h
+index 024e51dd693756..bcba8c9d895285 100644
+--- a/net/bluetooth/mgmt_util.h
++++ b/net/bluetooth/mgmt_util.h
+@@ -65,6 +65,9 @@ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+ 					  void *data, u16 len);
+ void mgmt_pending_free(struct mgmt_pending_cmd *cmd);
+ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd);
++bool __mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
++bool mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
++bool mgmt_pending_valid(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
+ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ 		       void (*cb)(struct mgmt_mesh_tx *mesh_tx, void *data),
+ 		       void *data, struct sock *sk);
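
The mgmt_pending_listed()/mgmt_pending_valid() helpers added above close a
race between command completion and mgmt_index_removed(): mgmt_pending_valid()
checks under mgmt_pending_lock that a command is still linked on
hdev->mgmt_pending and unlinks it in the same critical section, so exactly one
path ends up owning the command. Because the command is already unlinked on
success, callers pair the check with mgmt_pending_free() rather than
mgmt_pending_remove(). A minimal sketch of the intended calling pattern, using
a hypothetical callback name:

	/* Hypothetical completion callback showing the new lifetime rules. */
	static void example_complete(struct hci_dev *hdev, void *data, int err)
	{
		struct mgmt_pending_cmd *cmd = data;

		/* Bail out if the command was cancelled or already reclaimed
		 * by mgmt_index_removed(); on success mgmt_pending_valid()
		 * unlinks cmd, making this path its sole owner.
		 */
		if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
			return;

		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
				  mgmt_status(err), cmd->param, 1);

		/* cmd is no longer listed, so free it directly; calling
		 * mgmt_pending_remove() here would unlink it a second time.
		 */
		mgmt_pending_free(cmd);
	}
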
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index d6420b74ea9c6a..cb77bb84371bd3 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6667,7 +6667,7 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
+ 		return NULL;
+ 
+ 	while (data_len) {
+-		if (nr_frags == MAX_SKB_FRAGS - 1)
++		if (nr_frags == MAX_SKB_FRAGS)
+ 			goto failure;
+ 		while (order && PAGE_ALIGN(data_len) < (PAGE_SIZE << order))
+ 			order--;
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 4397e89d3123a0..423f876d14c6a8 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -2400,6 +2400,13 @@ static int replace_nexthop_single(struct net *net, struct nexthop *old,
+ 		return -EINVAL;
+ 	}
+ 
++	if (!list_empty(&old->grp_list) &&
++	    rtnl_dereference(new->nh_info)->fdb_nh !=
++	    rtnl_dereference(old->nh_info)->fdb_nh) {
++		NL_SET_ERR_MSG(extack, "Cannot change nexthop FDB status while in a group");
++		return -EINVAL;
++	}
++
+ 	err = call_nexthop_notifiers(net, NEXTHOP_EVENT_REPLACE, new, extack);
+ 	if (err)
+ 		return err;
+diff --git a/net/smc/smc_loopback.c b/net/smc/smc_loopback.c
+index 3c5f64ca41153f..85f0b7853b1737 100644
+--- a/net/smc/smc_loopback.c
++++ b/net/smc/smc_loopback.c
+@@ -56,6 +56,7 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
+ {
+ 	struct smc_lo_dmb_node *dmb_node, *tmp_node;
+ 	struct smc_lo_dev *ldev = smcd->priv;
++	struct folio *folio;
+ 	int sba_idx, rc;
+ 
+ 	/* check space for new dmb */
+@@ -74,13 +75,16 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
+ 
+ 	dmb_node->sba_idx = sba_idx;
+ 	dmb_node->len = dmb->dmb_len;
+-	dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL |
+-				     __GFP_NOWARN | __GFP_NORETRY |
+-				     __GFP_NOMEMALLOC);
+-	if (!dmb_node->cpu_addr) {
++
++	/* not critical; fail under memory pressure and fall back to TCP */
++	folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC |
++			    __GFP_NORETRY | __GFP_ZERO,
++			    get_order(dmb_node->len));
++	if (!folio) {
+ 		rc = -ENOMEM;
+ 		goto err_node;
+ 	}
++	dmb_node->cpu_addr = folio_address(folio);
+ 	dmb_node->dma_addr = SMC_DMA_ADDR_INVALID;
+ 	refcount_set(&dmb_node->refcnt, 1);
+ 
+@@ -122,7 +126,7 @@ static void __smc_lo_unregister_dmb(struct smc_lo_dev *ldev,
+ 	write_unlock_bh(&ldev->dmb_ht_lock);
+ 
+ 	clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask);
+-	kvfree(dmb_node->cpu_addr);
++	folio_put(virt_to_folio(dmb_node->cpu_addr));
+ 	kfree(dmb_node);
+ 
+ 	if (atomic_dec_and_test(&ldev->dmb_cnt))
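
The smc_lo_register_dmb()/__smc_lo_unregister_dmb() changes above replace
kzalloc()/kvfree() with a physically contiguous folio allocation. The DMB is
not critical: the GFP flags let the allocation fail quietly under memory
pressure, and SMC then falls back to TCP. A sketch of the alloc/free pairing
the patch relies on (kernel-context fragment, not standalone code):

	/* Zeroed, physically contiguous buffer of 'len' bytes. */
	struct folio *folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN |
					  __GFP_NOMEMALLOC | __GFP_NORETRY |
					  __GFP_ZERO, get_order(len));
	if (!folio)
		return -ENOMEM;			/* caller falls back to TCP */
	cpu_addr = folio_address(folio);	/* kernel virtual address */

	/* ... use cpu_addr as the DMB backing store ... */

	folio_put(virt_to_folio(cpu_addr));	/* recover folio from vaddr */
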
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index c7a1f080d2de3a..44b9de6e4e7788 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -438,7 +438,7 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 
+ 	check_tunnel_size = x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+ 			    x->props.mode == XFRM_MODE_TUNNEL;
+-	switch (x->props.family) {
++	switch (x->inner_mode.family) {
+ 	case AF_INET:
+ 		/* Check for IPv4 options */
+ 		if (ip_hdr(skb)->ihl != 5)
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 86337453709bad..4b2eb260f5c2e9 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2578,6 +2578,8 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 
+ 	for (h = 0; h < range; h++) {
+ 		u32 spi = (low == high) ? low : get_random_u32_inclusive(low, high);
++		if (spi == 0)
++			goto next;
+ 		newspi = htonl(spi);
+ 
+ 		spin_lock_bh(&net->xfrm.xfrm_state_lock);
+@@ -2593,6 +2595,7 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 		xfrm_state_put(x0);
+ 		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
++next:
+ 		if (signal_pending(current)) {
+ 			err = -ERESTARTSYS;
+ 			goto unlock;
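
The xfrm_alloc_spi() change skips a randomly drawn SPI of zero: SPI 0 is
reserved (RFC 4303 forbids it on the wire, and the kernel uses a zero SPI to
mean "not yet allocated"), so when the caller's range starts at low == 0 the
random draw must not hand it out. Simplified shape of the loop after the fix:

	for (h = 0; h < range; h++) {
		u32 spi = (low == high) ? low :
			  get_random_u32_inclusive(low, high);

		if (spi == 0)		/* reserved; never assign to an SA */
			continue;	/* the patch uses goto next, which
					 * also re-runs the signal check */

		/* ... hash lookup; claim spi if no other state holds it ... */
	}
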
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4819bd332f0390..fa28e3e85861c2 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7298,6 +7298,11 @@ static void cs35l41_fixup_spi_two(struct hda_codec *codec, const struct hda_fixu
+ 	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 2);
+ }
+ 
++static void cs35l41_fixup_spi_one(struct hda_codec *codec, const struct hda_fixup *fix, int action)
++{
++	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 1);
++}
++
+ static void cs35l41_fixup_spi_four(struct hda_codec *codec, const struct hda_fixup *fix, int action)
+ {
+ 	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 4);
+@@ -7991,6 +7996,7 @@ enum {
+ 	ALC287_FIXUP_CS35L41_I2C_2,
+ 	ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED,
+ 	ALC287_FIXUP_CS35L41_I2C_4,
++	ALC245_FIXUP_CS35L41_SPI_1,
+ 	ALC245_FIXUP_CS35L41_SPI_2,
+ 	ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED,
+ 	ALC245_FIXUP_CS35L41_SPI_4,
+@@ -10120,6 +10126,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+ 	},
++	[ALC245_FIXUP_CS35L41_SPI_1] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_spi_one,
++	},
+ 	[ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+@@ -11099,6 +11109,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
++	SND_PCI_QUIRK(0x1043, 0x88f4, "ASUS NUC14LNS", ALC245_FIXUP_CS35L41_SPI_1),
+ 	SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
+ 	SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index a0b3679b17b423..1211a2b8a2a2c7 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -826,6 +826,16 @@ static const struct platform_device_id board_ids[] = {
+ 					SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK |
+ 					SOF_ES8336_JD_INVERTED),
+ 	},
++	{
++		.name = "ptl_es83x6_c1_h02",
++		.driver_data = (kernel_ulong_t)(SOF_ES8336_SSP_CODEC(1) |
++					SOF_NO_OF_HDMI_CAPTURE_SSP(2) |
++					SOF_HDMI_CAPTURE_1_SSP(0) |
++					SOF_HDMI_CAPTURE_2_SSP(2) |
++					SOF_SSP_HDMI_CAPTURE_PRESENT |
++					SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK |
++					SOF_ES8336_JD_INVERTED),
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(platform, board_ids);
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index f5925bd0a3fc67..4994aaccc583ae 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -892,6 +892,13 @@ static const struct platform_device_id board_ids[] = {
+ 					SOF_SSP_PORT_BT_OFFLOAD(2) |
+ 					SOF_BT_OFFLOAD_PRESENT),
+ 	},
++	{
++		.name = "ptl_rt5682_c1_h02",
++		.driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
++					SOF_SSP_PORT_CODEC(1) |
++					/* SSP 0 and SSP 2 are used for HDMI IN */
++					SOF_SSP_MASK_HDMI_CAPTURE(0x5)),
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(platform, board_ids);
+diff --git a/sound/soc/intel/common/soc-acpi-intel-ptl-match.c b/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
+index eae75f3f0fa40d..d90d8672ab77d1 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
+@@ -21,7 +21,24 @@ static const struct snd_soc_acpi_codecs ptl_rt5682_rt5682s_hp = {
+ 	.codecs = {RT5682_ACPI_HID, RT5682S_ACPI_HID},
+ };
+ 
++static const struct snd_soc_acpi_codecs ptl_essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
++static const struct snd_soc_acpi_codecs ptl_lt6911_hdmi = {
++	.num_codecs = 1,
++	.codecs = {"INTC10B0"}
++};
++
+ struct snd_soc_acpi_mach snd_soc_acpi_intel_ptl_machines[] = {
++	{
++		.comp_ids = &ptl_rt5682_rt5682s_hp,
++		.drv_name = "ptl_rt5682_c1_h02",
++		.machine_quirk = snd_soc_acpi_codec_list,
++		.quirk_data = &ptl_lt6911_hdmi,
++		.sof_tplg_filename = "sof-ptl-rt5682-ssp1-hdmi-ssp02.tplg",
++	},
+ 	{
+ 		.comp_ids = &ptl_rt5682_rt5682s_hp,
+ 		.drv_name = "ptl_rt5682_def",
+@@ -29,6 +46,21 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_ptl_machines[] = {
+ 		.tplg_quirk_mask = SND_SOC_ACPI_TPLG_INTEL_AMP_NAME |
+ 					SND_SOC_ACPI_TPLG_INTEL_CODEC_NAME,
+ 	},
++	{
++		.comp_ids = &ptl_essx_83x6,
++		.drv_name = "ptl_es83x6_c1_h02",
++		.machine_quirk = snd_soc_acpi_codec_list,
++		.quirk_data = &ptl_lt6911_hdmi,
++		.sof_tplg_filename = "sof-ptl-es83x6-ssp1-hdmi-ssp02.tplg",
++	},
++	{
++		.comp_ids = &ptl_essx_83x6,
++		.drv_name = "sof-essx8336",
++		.sof_tplg_filename = "sof-ptl-es8336", /* the tplg suffix is added at run time */
++		.tplg_quirk_mask = SND_SOC_ACPI_TPLG_INTEL_SSP_NUMBER |
++					SND_SOC_ACPI_TPLG_INTEL_SSP_MSB |
++					SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER,
++	},
+ 	{},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_ptl_machines);
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 9530c59b3cf4c8..3df537fdb9f1c7 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/hid.h>
+ #include <linux/init.h>
++#include <linux/input.h>
+ #include <linux/math64.h>
+ #include <linux/slab.h>
+ #include <linux/usb.h>
+@@ -55,13 +56,13 @@ struct std_mono_table {
+  * version, we keep it mono for simplicity.
+  */
+ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+-				unsigned int unitid,
+-				unsigned int control,
+-				unsigned int cmask,
+-				int val_type,
+-				unsigned int idx_off,
+-				const char *name,
+-				snd_kcontrol_tlv_rw_t *tlv_callback)
++					  unsigned int unitid,
++					  unsigned int control,
++					  unsigned int cmask,
++					  int val_type,
++					  unsigned int idx_off,
++					  const char *name,
++					  snd_kcontrol_tlv_rw_t *tlv_callback)
+ {
+ 	struct usb_mixer_elem_info *cval;
+ 	struct snd_kcontrol *kctl;
+@@ -78,7 +79,8 @@ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+ 	cval->idx_off = idx_off;
+ 
+ 	/* get_min_max() is called only for integer volumes later,
+-	 * so provide a short-cut for booleans */
++	 * so provide a short-cut for booleans
++	 */
+ 	cval->min = 0;
+ 	cval->max = 1;
+ 	cval->res = 0;
+@@ -108,15 +110,16 @@ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+ }
+ 
+ static int snd_create_std_mono_ctl(struct usb_mixer_interface *mixer,
+-				unsigned int unitid,
+-				unsigned int control,
+-				unsigned int cmask,
+-				int val_type,
+-				const char *name,
+-				snd_kcontrol_tlv_rw_t *tlv_callback)
++				   unsigned int unitid,
++				   unsigned int control,
++				   unsigned int cmask,
++				   int val_type,
++				   const char *name,
++				   snd_kcontrol_tlv_rw_t *tlv_callback)
+ {
+ 	return snd_create_std_mono_ctl_offset(mixer, unitid, control, cmask,
+-		val_type, 0 /* Offset */, name, tlv_callback);
++					      val_type, 0 /* Offset */,
++					      name, tlv_callback);
+ }
+ 
+ /*
+@@ -127,9 +130,10 @@ static int snd_create_std_mono_table(struct usb_mixer_interface *mixer,
+ {
+ 	int err;
+ 
+-	while (t->name != NULL) {
++	while (t->name) {
+ 		err = snd_create_std_mono_ctl(mixer, t->unitid, t->control,
+-				t->cmask, t->val_type, t->name, t->tlv_callback);
++					      t->cmask, t->val_type, t->name,
++					      t->tlv_callback);
+ 		if (err < 0)
+ 			return err;
+ 		t++;
+@@ -209,12 +213,11 @@ static void snd_usb_soundblaster_remote_complete(struct urb *urb)
+ 	if (code == rc->mute_code)
+ 		snd_usb_mixer_notify_id(mixer, rc->mute_mixer_id);
+ 	mixer->rc_code = code;
+-	wmb();
+ 	wake_up(&mixer->rc_waitq);
+ }
+ 
+ static long snd_usb_sbrc_hwdep_read(struct snd_hwdep *hw, char __user *buf,
+-				     long count, loff_t *offset)
++				    long count, loff_t *offset)
+ {
+ 	struct usb_mixer_interface *mixer = hw->private_data;
+ 	int err;
+@@ -234,7 +237,7 @@ static long snd_usb_sbrc_hwdep_read(struct snd_hwdep *hw, char __user *buf,
+ }
+ 
+ static __poll_t snd_usb_sbrc_hwdep_poll(struct snd_hwdep *hw, struct file *file,
+-					    poll_table *wait)
++					poll_table *wait)
+ {
+ 	struct usb_mixer_interface *mixer = hw->private_data;
+ 
+@@ -285,7 +288,7 @@ static int snd_usb_soundblaster_remote_init(struct usb_mixer_interface *mixer)
+ 	mixer->rc_setup_packet->wLength = cpu_to_le16(len);
+ 	usb_fill_control_urb(mixer->rc_urb, mixer->chip->dev,
+ 			     usb_rcvctrlpipe(mixer->chip->dev, 0),
+-			     (u8*)mixer->rc_setup_packet, mixer->rc_buffer, len,
++			     (u8 *)mixer->rc_setup_packet, mixer->rc_buffer, len,
+ 			     snd_usb_soundblaster_remote_complete, mixer);
+ 	return 0;
+ }
+@@ -310,20 +313,20 @@ static int snd_audigy2nx_led_update(struct usb_mixer_interface *mixer,
+ 
+ 	if (chip->usb_id == USB_ID(0x041e, 0x3042))
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      !value, 0, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      !value, 0, NULL, 0);
+ 	/* USB X-Fi S51 Pro */
+ 	if (chip->usb_id == USB_ID(0x041e, 0x30df))
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      !value, 0, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      !value, 0, NULL, 0);
+ 	else
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      value, index + 2, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      value, index + 2, NULL, 0);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -377,17 +380,17 @@ static int snd_audigy2nx_controls_create(struct usb_mixer_interface *mixer)
+ 		struct snd_kcontrol_new knew;
+ 
+ 		/* USB X-Fi S51 doesn't have a CMSS LED */
+-		if ((mixer->chip->usb_id == USB_ID(0x041e, 0x3042)) && i == 0)
++		if (mixer->chip->usb_id == USB_ID(0x041e, 0x3042) && i == 0)
+ 			continue;
+ 		/* USB X-Fi S51 Pro doesn't have one either */
+-		if ((mixer->chip->usb_id == USB_ID(0x041e, 0x30df)) && i == 0)
++		if (mixer->chip->usb_id == USB_ID(0x041e, 0x30df) && i == 0)
+ 			continue;
+ 		if (i > 1 && /* Live24ext has 2 LEDs only */
+ 			(mixer->chip->usb_id == USB_ID(0x041e, 0x3040) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x3042) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x30df) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x3048)))
+-			break; 
++			break;
+ 
+ 		knew = snd_audigy2nx_control;
+ 		knew.name = snd_audigy2nx_led_names[i];
+@@ -481,9 +484,9 @@ static int snd_emu0204_ch_switch_update(struct usb_mixer_interface *mixer,
+ 	buf[0] = 0x01;
+ 	buf[1] = value ? 0x02 : 0x01;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-		      usb_sndctrlpipe(chip->dev, 0), UAC_SET_CUR,
+-		      USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT,
+-		      0x0400, 0x0e00, buf, 2);
++			      usb_sndctrlpipe(chip->dev, 0), UAC_SET_CUR,
++			      USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT,
++			      0x0400, 0x0e00, buf, 2);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -529,6 +532,265 @@ static int snd_emu0204_controls_create(struct usb_mixer_interface *mixer)
+ 					  &snd_emu0204_control, NULL);
+ }
+ 
++#if IS_REACHABLE(CONFIG_INPUT)
++/*
++ * Sony DualSense controller (PS5) jack detection
++ *
++ * Since this is a UAC 1 device, it doesn't support jack detection.
++ * However, the controller hid-playstation driver reports HP & MIC
++ * insert events through a dedicated input device.
++ */
++
++#define SND_DUALSENSE_JACK_OUT_TERM_ID 3
++#define SND_DUALSENSE_JACK_IN_TERM_ID 4
++
++struct dualsense_mixer_elem_info {
++	struct usb_mixer_elem_info info;
++	struct input_handler ih;
++	struct input_device_id id_table[2];
++	bool connected;
++};
++
++static void snd_dualsense_ih_event(struct input_handle *handle,
++				   unsigned int type, unsigned int code,
++				   int value)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_mixer_elem_list *me;
++
++	if (type != EV_SW)
++		return;
++
++	mei = container_of(handle->handler, struct dualsense_mixer_elem_info, ih);
++	me = &mei->info.head;
++
++	if ((me->id == SND_DUALSENSE_JACK_OUT_TERM_ID && code == SW_HEADPHONE_INSERT) ||
++	    (me->id == SND_DUALSENSE_JACK_IN_TERM_ID && code == SW_MICROPHONE_INSERT)) {
++		mei->connected = !!value;
++		snd_ctl_notify(me->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++			       &me->kctl->id);
++	}
++}
++
++static bool snd_dualsense_ih_match(struct input_handler *handler,
++				   struct input_dev *dev)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_device *snd_dev;
++	char *input_dev_path, *usb_dev_path;
++	size_t usb_dev_path_len;
++	bool match = false;
++
++	mei = container_of(handler, struct dualsense_mixer_elem_info, ih);
++	snd_dev = mei->info.head.mixer->chip->dev;
++
++	input_dev_path = kobject_get_path(&dev->dev.kobj, GFP_KERNEL);
++	if (!input_dev_path) {
++		dev_warn(&snd_dev->dev, "Failed to get input dev path\n");
++		return false;
++	}
++
++	usb_dev_path = kobject_get_path(&snd_dev->dev.kobj, GFP_KERNEL);
++	if (!usb_dev_path) {
++		dev_warn(&snd_dev->dev, "Failed to get USB dev path\n");
++		goto free_paths;
++	}
++
++	/*
++	 * Ensure that the VID:PID-matched input device, presumably owned by
++	 * the hid-playstation driver, belongs to the actual hardware handled
++	 * by the current USB audio device, which implies that input_dev_path
++	 * is a subpath of usb_dev_path.
++	 *
++	 * This verification is necessary when there is more than one identical
++	 * controller attached to the host system.
++	 */
++	usb_dev_path_len = strlen(usb_dev_path);
++	if (usb_dev_path_len >= strlen(input_dev_path))
++		goto free_paths;
++
++	usb_dev_path[usb_dev_path_len] = '/';
++	match = !memcmp(input_dev_path, usb_dev_path, usb_dev_path_len + 1);
++
++free_paths:
++	kfree(input_dev_path);
++	kfree(usb_dev_path);
++
++	return match;
++}
++
++static int snd_dualsense_ih_connect(struct input_handler *handler,
++				    struct input_dev *dev,
++				    const struct input_device_id *id)
++{
++	struct input_handle *handle;
++	int err;
++
++	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++	if (!handle)
++		return -ENOMEM;
++
++	handle->dev = dev;
++	handle->handler = handler;
++	handle->name = handler->name;
++
++	err = input_register_handle(handle);
++	if (err)
++		goto err_free;
++
++	err = input_open_device(handle);
++	if (err)
++		goto err_unregister;
++
++	return 0;
++
++err_unregister:
++	input_unregister_handle(handle);
++err_free:
++	kfree(handle);
++	return err;
++}
++
++static void snd_dualsense_ih_disconnect(struct input_handle *handle)
++{
++	input_close_device(handle);
++	input_unregister_handle(handle);
++	kfree(handle);
++}
++
++static void snd_dualsense_ih_start(struct input_handle *handle)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_mixer_elem_list *me;
++	int status = -1;
++
++	mei = container_of(handle->handler, struct dualsense_mixer_elem_info, ih);
++	me = &mei->info.head;
++
++	if (me->id == SND_DUALSENSE_JACK_OUT_TERM_ID &&
++	    test_bit(SW_HEADPHONE_INSERT, handle->dev->swbit))
++		status = test_bit(SW_HEADPHONE_INSERT, handle->dev->sw);
++	else if (me->id == SND_DUALSENSE_JACK_IN_TERM_ID &&
++		 test_bit(SW_MICROPHONE_INSERT, handle->dev->swbit))
++		status = test_bit(SW_MICROPHONE_INSERT, handle->dev->sw);
++
++	if (status >= 0) {
++		mei->connected = !!status;
++		snd_ctl_notify(me->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++			       &me->kctl->id);
++	}
++}
++
++static int snd_dualsense_jack_get(struct snd_kcontrol *kctl,
++				  struct snd_ctl_elem_value *ucontrol)
++{
++	struct dualsense_mixer_elem_info *mei = snd_kcontrol_chip(kctl);
++
++	ucontrol->value.integer.value[0] = mei->connected;
++
++	return 0;
++}
++
++static const struct snd_kcontrol_new snd_dualsense_jack_control = {
++	.iface = SNDRV_CTL_ELEM_IFACE_CARD,
++	.access = SNDRV_CTL_ELEM_ACCESS_READ,
++	.info = snd_ctl_boolean_mono_info,
++	.get = snd_dualsense_jack_get,
++};
++
++static int snd_dualsense_resume_jack(struct usb_mixer_elem_list *list)
++{
++	snd_ctl_notify(list->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++		       &list->kctl->id);
++	return 0;
++}
++
++static void snd_dualsense_mixer_elem_free(struct snd_kcontrol *kctl)
++{
++	struct dualsense_mixer_elem_info *mei = snd_kcontrol_chip(kctl);
++
++	if (mei->ih.event)
++		input_unregister_handler(&mei->ih);
++
++	snd_usb_mixer_elem_free(kctl);
++}
++
++static int snd_dualsense_jack_create(struct usb_mixer_interface *mixer,
++				     const char *name, bool is_output)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct input_device_id *idev_id;
++	struct snd_kcontrol *kctl;
++	int err;
++
++	mei = kzalloc(sizeof(*mei), GFP_KERNEL);
++	if (!mei)
++		return -ENOMEM;
++
++	snd_usb_mixer_elem_init_std(&mei->info.head, mixer,
++				    is_output ? SND_DUALSENSE_JACK_OUT_TERM_ID :
++						SND_DUALSENSE_JACK_IN_TERM_ID);
++
++	mei->info.head.resume = snd_dualsense_resume_jack;
++	mei->info.val_type = USB_MIXER_BOOLEAN;
++	mei->info.channels = 1;
++	mei->info.min = 0;
++	mei->info.max = 1;
++
++	kctl = snd_ctl_new1(&snd_dualsense_jack_control, mei);
++	if (!kctl) {
++		kfree(mei);
++		return -ENOMEM;
++	}
++
++	strscpy(kctl->id.name, name, sizeof(kctl->id.name));
++	kctl->private_free = snd_dualsense_mixer_elem_free;
++
++	err = snd_usb_mixer_add_control(&mei->info.head, kctl);
++	if (err)
++		return err;
++
++	idev_id = &mei->id_table[0];
++	idev_id->flags = INPUT_DEVICE_ID_MATCH_VENDOR | INPUT_DEVICE_ID_MATCH_PRODUCT |
++			 INPUT_DEVICE_ID_MATCH_EVBIT | INPUT_DEVICE_ID_MATCH_SWBIT;
++	idev_id->vendor = USB_ID_VENDOR(mixer->chip->usb_id);
++	idev_id->product = USB_ID_PRODUCT(mixer->chip->usb_id);
++	idev_id->evbit[BIT_WORD(EV_SW)] = BIT_MASK(EV_SW);
++	if (is_output)
++		idev_id->swbit[BIT_WORD(SW_HEADPHONE_INSERT)] = BIT_MASK(SW_HEADPHONE_INSERT);
++	else
++		idev_id->swbit[BIT_WORD(SW_MICROPHONE_INSERT)] = BIT_MASK(SW_MICROPHONE_INSERT);
++
++	mei->ih.event = snd_dualsense_ih_event;
++	mei->ih.match = snd_dualsense_ih_match;
++	mei->ih.connect = snd_dualsense_ih_connect;
++	mei->ih.disconnect = snd_dualsense_ih_disconnect;
++	mei->ih.start = snd_dualsense_ih_start;
++	mei->ih.name = name;
++	mei->ih.id_table = mei->id_table;
++
++	err = input_register_handler(&mei->ih);
++	if (err) {
++		dev_warn(&mixer->chip->dev->dev,
++			 "Could not register input handler: %d\n", err);
++		mei->ih.event = NULL;
++	}
++
++	return 0;
++}
++
++static int snd_dualsense_controls_create(struct usb_mixer_interface *mixer)
++{
++	int err;
++
++	err = snd_dualsense_jack_create(mixer, "Headphone Jack", true);
++	if (err < 0)
++		return err;
++
++	return snd_dualsense_jack_create(mixer, "Headset Mic Jack", false);
++}
++#endif /* IS_REACHABLE(CONFIG_INPUT) */
++
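
snd_dualsense_ih_match() has to decide whether an input device and the USB
audio interface belong to the same physical controller when several identical
pads are attached. It compares sysfs kobject paths: the input device's path
must lie strictly below the USB device's path. Instead of building a joined
string, it overwrites the USB path's terminating NUL with '/' and memcmp()s
the extended prefix. A small standalone illustration of that trick (the paths
below are made up):

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	/* Same prefix-plus-'/' test the quirk performs on kobject paths. */
	static bool is_subpath(char *usb_path, const char *input_path)
	{
		size_t len = strlen(usb_path);
		bool match;

		if (len >= strlen(input_path))
			return false;

		usb_path[len] = '/';	/* temporarily extend the prefix */
		match = !memcmp(input_path, usb_path, len + 1);
		usb_path[len] = '\0';	/* restore (the kernel code frees the
					 * buffers instead) */
		return match;
	}

	int main(void)
	{
		char usb[] = "/devices/pci0000:00/usb1/1-2";	/* made up */

		/* prints 1: the input device sits under the USB device */
		printf("%d\n", is_subpath(usb,
			"/devices/pci0000:00/usb1/1-2/1-2:1.3/input/input7"));
		/* prints 0: "1-20" shares only the "1-2" prefix */
		printf("%d\n", is_subpath(usb,
			"/devices/pci0000:00/usb1/1-20/input/input9"));
		return 0;
	}
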
+ /* ASUS Xonar U1 / U3 controls */
+ 
+ static int snd_xonar_u1_switch_get(struct snd_kcontrol *kcontrol,
+@@ -856,6 +1118,7 @@ static const struct snd_kcontrol_new snd_mbox1_src_switch = {
+ static int snd_mbox1_controls_create(struct usb_mixer_interface *mixer)
+ {
+ 	int err;
++
+ 	err = add_single_ctl_with_resume(mixer, 0,
+ 					 snd_mbox1_clk_switch_resume,
+ 					 &snd_mbox1_clk_switch, NULL);
+@@ -869,7 +1132,7 @@ static int snd_mbox1_controls_create(struct usb_mixer_interface *mixer)
+ 
+ /* Native Instruments device quirks */
+ 
+-#define _MAKE_NI_CONTROL(bRequest,wIndex) ((bRequest) << 16 | (wIndex))
++#define _MAKE_NI_CONTROL(bRequest, wIndex) ((bRequest) << 16 | (wIndex))
+ 
+ static int snd_ni_control_init_val(struct usb_mixer_interface *mixer,
+ 				   struct snd_kcontrol *kctl)
+@@ -1021,7 +1284,7 @@ static int snd_nativeinstruments_create_mixer(struct usb_mixer_interface *mixer,
+ /* M-Audio FastTrack Ultra quirks */
+ /* FTU Effect switch (also used by C400/C600) */
+ static int snd_ftu_eff_switch_info(struct snd_kcontrol *kcontrol,
+-					struct snd_ctl_elem_info *uinfo)
++				   struct snd_ctl_elem_info *uinfo)
+ {
+ 	static const char *const texts[8] = {
+ 		"Room 1", "Room 2", "Room 3", "Hall 1",
+@@ -1055,7 +1318,7 @@ static int snd_ftu_eff_switch_init(struct usb_mixer_interface *mixer,
+ }
+ 
+ static int snd_ftu_eff_switch_get(struct snd_kcontrol *kctl,
+-					struct snd_ctl_elem_value *ucontrol)
++				  struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.enumerated.item[0] = kctl->private_value >> 24;
+ 	return 0;
+@@ -1086,7 +1349,7 @@ static int snd_ftu_eff_switch_update(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_ftu_eff_switch_put(struct snd_kcontrol *kctl,
+-					struct snd_ctl_elem_value *ucontrol)
++				  struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kctl);
+ 	unsigned int pval = list->kctl->private_value;
+@@ -1104,7 +1367,7 @@ static int snd_ftu_eff_switch_put(struct snd_kcontrol *kctl,
+ }
+ 
+ static int snd_ftu_create_effect_switch(struct usb_mixer_interface *mixer,
+-	int validx, int bUnitID)
++					int validx, int bUnitID)
+ {
+ 	static struct snd_kcontrol_new template = {
+ 		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+@@ -1143,22 +1406,22 @@ static int snd_ftu_create_volume_ctls(struct usb_mixer_interface *mixer)
+ 		for (in = 0; in < 8; in++) {
+ 			cmask = BIT(in);
+ 			snprintf(name, sizeof(name),
+-				"AIn%d - Out%d Capture Volume",
+-				in  + 1, out + 1);
++				 "AIn%d - Out%d Capture Volume",
++				 in  + 1, out + 1);
+ 			err = snd_create_std_mono_ctl(mixer, id, control,
+-							cmask, val_type, name,
+-							&snd_usb_mixer_vol_tlv);
++						      cmask, val_type, name,
++						      &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+ 		for (in = 8; in < 16; in++) {
+ 			cmask = BIT(in);
+ 			snprintf(name, sizeof(name),
+-				"DIn%d - Out%d Playback Volume",
+-				in - 7, out + 1);
++				 "DIn%d - Out%d Playback Volume",
++				 in - 7, out + 1);
+ 			err = snd_create_std_mono_ctl(mixer, id, control,
+-							cmask, val_type, name,
+-							&snd_usb_mixer_vol_tlv);
++						      cmask, val_type, name,
++						      &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+@@ -1219,10 +1482,10 @@ static int snd_ftu_create_effect_return_ctls(struct usb_mixer_interface *mixer)
+ 	for (ch = 0; ch < 4; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Return %d Volume", ch + 1);
++			 "Effect Return %d Volume", ch + 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control,
+-						cmask, val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      cmask, val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1243,20 +1506,20 @@ static int snd_ftu_create_effect_send_ctls(struct usb_mixer_interface *mixer)
+ 	for (ch = 0; ch < 8; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Send AIn%d Volume", ch + 1);
++			 "Effect Send AIn%d Volume", ch + 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control, cmask,
+-						val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+ 	for (ch = 8; ch < 16; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Send DIn%d Volume", ch - 7);
++			 "Effect Send DIn%d Volume", ch - 7);
+ 		err = snd_create_std_mono_ctl(mixer, id, control, cmask,
+-						val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1346,19 +1609,19 @@ static int snd_c400_create_vol_ctls(struct usb_mixer_interface *mixer)
+ 		for (out = 0; out < num_outs; out++) {
+ 			if (chan < num_outs) {
+ 				snprintf(name, sizeof(name),
+-					"PCM%d-Out%d Playback Volume",
+-					chan + 1, out + 1);
++					 "PCM%d-Out%d Playback Volume",
++					 chan + 1, out + 1);
+ 			} else {
+ 				snprintf(name, sizeof(name),
+-					"In%d-Out%d Playback Volume",
+-					chan - num_outs + 1, out + 1);
++					 "In%d-Out%d Playback Volume",
++					 chan - num_outs + 1, out + 1);
+ 			}
+ 
+ 			cmask = (out == 0) ? 0 : BIT(out - 1);
+ 			offset = chan * num_outs;
+ 			err = snd_create_std_mono_ctl_offset(mixer, id, control,
+-						cmask, val_type, offset, name,
+-						&snd_usb_mixer_vol_tlv);
++							     cmask, val_type, offset, name,
++							     &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+@@ -1377,7 +1640,7 @@ static int snd_c400_create_effect_volume_ctl(struct usb_mixer_interface *mixer)
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, snd_usb_mixer_vol_tlv);
++				       name, snd_usb_mixer_vol_tlv);
+ }
+ 
+ /* This control needs a volume quirk, see mixer.c */
+@@ -1390,7 +1653,7 @@ static int snd_c400_create_effect_duration_ctl(struct usb_mixer_interface *mixer
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, snd_usb_mixer_vol_tlv);
++				       name, snd_usb_mixer_vol_tlv);
+ }
+ 
+ /* This control needs a volume quirk, see mixer.c */
+@@ -1403,7 +1666,7 @@ static int snd_c400_create_effect_feedback_ctl(struct usb_mixer_interface *mixer
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, NULL);
++				       name, NULL);
+ }
+ 
+ static int snd_c400_create_effect_vol_ctls(struct usb_mixer_interface *mixer)
+@@ -1432,18 +1695,18 @@ static int snd_c400_create_effect_vol_ctls(struct usb_mixer_interface *mixer)
+ 	for (chan = 0; chan < num_outs + num_ins; chan++) {
+ 		if (chan < num_outs) {
+ 			snprintf(name, sizeof(name),
+-				"Effect Send DOut%d",
+-				chan + 1);
++				 "Effect Send DOut%d",
++				 chan + 1);
+ 		} else {
+ 			snprintf(name, sizeof(name),
+-				"Effect Send AIn%d",
+-				chan - num_outs + 1);
++				 "Effect Send AIn%d",
++				 chan - num_outs + 1);
+ 		}
+ 
+ 		cmask = (chan == 0) ? 0 : BIT(chan - 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control,
+-						cmask, val_type, name,
+-						&snd_usb_mixer_vol_tlv);
++					      cmask, val_type, name,
++					      &snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1478,14 +1741,14 @@ static int snd_c400_create_effect_ret_vol_ctls(struct usb_mixer_interface *mixer
+ 
+ 	for (chan = 0; chan < num_outs; chan++) {
+ 		snprintf(name, sizeof(name),
+-			"Effect Return %d",
+-			chan + 1);
++			 "Effect Return %d",
++			 chan + 1);
+ 
+ 		cmask = (chan == 0) ? 0 :
+ 			BIT(chan + (chan % 2) * num_outs - 1);
+ 		err = snd_create_std_mono_ctl_offset(mixer, id, control,
+-						cmask, val_type, offset, name,
+-						&snd_usb_mixer_vol_tlv);
++						     cmask, val_type, offset, name,
++						     &snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1626,7 +1889,7 @@ static const struct std_mono_table ebox44_table[] = {
+  *
+  */
+ static int snd_microii_spdif_info(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_info *uinfo)
++				  struct snd_ctl_elem_info *uinfo)
+ {
+ 	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
+ 	uinfo->count = 1;
+@@ -1634,7 +1897,7 @@ static int snd_microii_spdif_info(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	struct snd_usb_audio *chip = list->mixer->chip;
+@@ -1667,13 +1930,13 @@ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_rcvctrlpipe(chip->dev, 0),
+-			UAC_GET_CUR,
+-			USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN,
+-			UAC_EP_CS_ATTR_SAMPLE_RATE << 8,
+-			ep,
+-			data,
+-			sizeof(data));
++			      usb_rcvctrlpipe(chip->dev, 0),
++			      UAC_GET_CUR,
++			      USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN,
++			      UAC_EP_CS_ATTR_SAMPLE_RATE << 8,
++			      ep,
++			      data,
++			      sizeof(data));
+ 	if (err < 0)
+ 		goto end;
+ 
+@@ -1700,26 +1963,26 @@ static int snd_microii_spdif_default_update(struct usb_mixer_elem_list *list)
+ 
+ 	reg = ((pval >> 4) & 0xf0) | (pval & 0x0f);
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			2,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      2,
++			      NULL,
++			      0);
+ 	if (err < 0)
+ 		goto end;
+ 
+ 	reg = (pval & IEC958_AES0_NONAUDIO) ? 0xa0 : 0x20;
+ 	reg |= (pval >> 12) & 0x0f;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			3,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      3,
++			      NULL,
++			      0);
+ 	if (err < 0)
+ 		goto end;
+ 
+@@ -1729,13 +1992,14 @@ static int snd_microii_spdif_default_update(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_microii_spdif_default_put(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	unsigned int pval, pval_old;
+ 	int err;
+ 
+-	pval = pval_old = kcontrol->private_value;
++	pval = kcontrol->private_value;
++	pval_old = pval;
+ 	pval &= 0xfffff0f0;
+ 	pval |= (ucontrol->value.iec958.status[1] & 0x0f) << 8;
+ 	pval |= (ucontrol->value.iec958.status[0] & 0x0f);
+@@ -1756,7 +2020,7 @@ static int snd_microii_spdif_default_put(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_mask_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++				      struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.iec958.status[0] = 0x0f;
+ 	ucontrol->value.iec958.status[1] = 0xff;
+@@ -1767,7 +2031,7 @@ static int snd_microii_spdif_mask_get(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_switch_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.integer.value[0] = !(kcontrol->private_value & 0x02);
+ 
+@@ -1785,20 +2049,20 @@ static int snd_microii_spdif_switch_update(struct usb_mixer_elem_list *list)
+ 		return err;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			9,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      9,
++			      NULL,
++			      0);
+ 
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+ 
+ static int snd_microii_spdif_switch_put(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	u8 reg;
+@@ -1883,9 +2147,9 @@ static int snd_soundblaster_e1_switch_update(struct usb_mixer_interface *mixer,
+ 	if (err < 0)
+ 		return err;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0), HID_REQ_SET_REPORT,
+-			USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
+-			0x0202, 3, buff, 2);
++			      usb_sndctrlpipe(chip->dev, 0), HID_REQ_SET_REPORT,
++			      USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
++			      0x0202, 3, buff, 2);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -2181,6 +2445,7 @@ static const u32 snd_rme_rate_table[] = {
+ 	256000,	352800, 384000, 400000,
+ 	512000, 705600, 768000, 800000
+ };
++
+ /* maximum number of items for AES and S/PDIF rates for above table */
+ #define SND_RME_RATE_IDX_AES_SPDIF_NUM		12
+ 
+@@ -3235,7 +3500,7 @@ static int snd_rme_digiface_enum_put(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_rme_digiface_current_sync_get(struct snd_kcontrol *kcontrol,
+-				     struct snd_ctl_elem_value *ucontrol)
++					     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	int ret = snd_rme_digiface_enum_get(kcontrol, ucontrol);
+ 
+@@ -3269,7 +3534,6 @@ static int snd_rme_digiface_sync_state_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+ 
+-
+ static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
+ 					struct snd_ctl_elem_info *uinfo)
+ {
+@@ -3281,7 +3545,6 @@ static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
+ 				 ARRAY_SIZE(format), format);
+ }
+ 
+-
+ static int snd_rme_digiface_sync_source_info(struct snd_kcontrol *kcontrol,
+ 					     struct snd_ctl_elem_info *uinfo)
+ {
+@@ -3564,7 +3827,6 @@ static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
+ #define SND_DJM_A9_IDX		0x6
+ #define SND_DJM_V10_IDX	0x7
+ 
+-
+ #define SND_DJM_CTL(_name, suffix, _default_value, _windex) { \
+ 	.name = _name, \
+ 	.options = snd_djm_opts_##suffix, \
+@@ -3576,7 +3838,6 @@ static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
+ 	.controls = snd_djm_ctls_##suffix, \
+ 	.ncontrols = ARRAY_SIZE(snd_djm_ctls_##suffix) }
+ 
+-
+ struct snd_djm_device {
+ 	const char *name;
+ 	const struct snd_djm_ctl *controls;
+@@ -3722,7 +3983,6 @@ static const struct snd_djm_ctl snd_djm_ctls_250mk2[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 250mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-450
+ static const u16 snd_djm_opts_450_cap1[] = {
+ 	0x0103, 0x0100, 0x0106, 0x0107, 0x0108, 0x0109, 0x010d, 0x010a };
+@@ -3747,7 +4007,6 @@ static const struct snd_djm_ctl snd_djm_ctls_450[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 450_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-750
+ static const u16 snd_djm_opts_750_cap1[] = {
+ 	0x0101, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
+@@ -3766,7 +4025,6 @@ static const struct snd_djm_ctl snd_djm_ctls_750[] = {
+ 	SND_DJM_CTL("Input 4 Capture Switch", 750_cap4, 0, SND_DJM_WINDEX_CAP)
+ };
+ 
+-
+ // DJM-850
+ static const u16 snd_djm_opts_850_cap1[] = {
+ 	0x0100, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
+@@ -3785,7 +4043,6 @@ static const struct snd_djm_ctl snd_djm_ctls_850[] = {
+ 	SND_DJM_CTL("Input 4 Capture Switch", 850_cap4, 1, SND_DJM_WINDEX_CAP)
+ };
+ 
+-
+ // DJM-900NXS2
+ static const u16 snd_djm_opts_900nxs2_cap1[] = {
+ 	0x0100, 0x0102, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a };
+@@ -3823,7 +4080,6 @@ static const u16 snd_djm_opts_750mk2_pb1[] = { 0x0100, 0x0101, 0x0104 };
+ static const u16 snd_djm_opts_750mk2_pb2[] = { 0x0200, 0x0201, 0x0204 };
+ static const u16 snd_djm_opts_750mk2_pb3[] = { 0x0300, 0x0301, 0x0304 };
+ 
+-
+ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+ 	SND_DJM_CTL("Master Input Level Capture Switch", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
+ 	SND_DJM_CTL("Input 1 Capture Switch",   750mk2_cap1, 2, SND_DJM_WINDEX_CAP),
+@@ -3836,7 +4092,6 @@ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 750mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-A9
+ static const u16 snd_djm_opts_a9_cap_level[] = {
+ 	0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500 };
+@@ -3865,29 +4120,35 @@ static const struct snd_djm_ctl snd_djm_ctls_a9[] = {
+ static const u16 snd_djm_opts_v10_cap_level[] = {
+ 	0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500
+ };
++
+ static const u16 snd_djm_opts_v10_cap1[] = {
+ 	0x0103,
+ 	0x0100, 0x0102, 0x0106, 0x0110, 0x0107,
+ 	0x0108, 0x0109, 0x010a, 0x0121, 0x0122
+ };
++
+ static const u16 snd_djm_opts_v10_cap2[] = {
+ 	0x0200, 0x0202, 0x0206, 0x0210, 0x0207,
+ 	0x0208, 0x0209, 0x020a, 0x0221, 0x0222
+ };
++
+ static const u16 snd_djm_opts_v10_cap3[] = {
+ 	0x0303,
+ 	0x0300, 0x0302, 0x0306, 0x0310, 0x0307,
+ 	0x0308, 0x0309, 0x030a, 0x0321, 0x0322
+ };
++
+ static const u16 snd_djm_opts_v10_cap4[] = {
+ 	0x0403,
+ 	0x0400, 0x0402, 0x0406, 0x0410, 0x0407,
+ 	0x0408, 0x0409, 0x040a, 0x0421, 0x0422
+ };
++
+ static const u16 snd_djm_opts_v10_cap5[] = {
+ 	0x0500, 0x0502, 0x0506, 0x0510, 0x0507,
+ 	0x0508, 0x0509, 0x050a, 0x0521, 0x0522
+ };
++
+ static const u16 snd_djm_opts_v10_cap6[] = {
+ 	0x0603,
+ 	0x0600, 0x0602, 0x0606, 0x0610, 0x0607,
+@@ -3916,9 +4177,8 @@ static const struct snd_djm_device snd_djm_devices[] = {
+ 	[SND_DJM_V10_IDX] = SND_DJM_DEVICE(v10),
+ };
+ 
+-
+ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+-				struct snd_ctl_elem_info *info)
++				 struct snd_ctl_elem_info *info)
+ {
+ 	unsigned long private_value = kctl->private_value;
+ 	u8 device_idx = (private_value & SND_DJM_DEVICE_MASK) >> SND_DJM_DEVICE_SHIFT;
+@@ -3937,8 +4197,8 @@ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+ 		info->value.enumerated.item = noptions - 1;
+ 
+ 	name = snd_djm_get_label(device_idx,
+-				ctl->options[info->value.enumerated.item],
+-				ctl->wIndex);
++				 ctl->options[info->value.enumerated.item],
++				 ctl->wIndex);
+ 	if (!name)
+ 		return -EINVAL;
+ 
+@@ -3950,25 +4210,25 @@ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+ }
+ 
+ static int snd_djm_controls_update(struct usb_mixer_interface *mixer,
+-				u8 device_idx, u8 group, u16 value)
++				   u8 device_idx, u8 group, u16 value)
+ {
+ 	int err;
+ 	const struct snd_djm_device *device = &snd_djm_devices[device_idx];
+ 
+-	if ((group >= device->ncontrols) || value >= device->controls[group].noptions)
++	if (group >= device->ncontrols || value >= device->controls[group].noptions)
+ 		return -EINVAL;
+ 
+ 	err = snd_usb_lock_shutdown(mixer->chip);
+ 	if (err)
+ 		return err;
+ 
+-	err = snd_usb_ctl_msg(
+-		mixer->chip->dev, usb_sndctrlpipe(mixer->chip->dev, 0),
+-		USB_REQ_SET_FEATURE,
+-		USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-		device->controls[group].options[value],
+-		device->controls[group].wIndex,
+-		NULL, 0);
++	err = snd_usb_ctl_msg(mixer->chip->dev,
++			      usb_sndctrlpipe(mixer->chip->dev, 0),
++			      USB_REQ_SET_FEATURE,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++			      device->controls[group].options[value],
++			      device->controls[group].wIndex,
++			      NULL, 0);
+ 
+ 	snd_usb_unlock_shutdown(mixer->chip);
+ 	return err;
+@@ -4009,7 +4269,7 @@ static int snd_djm_controls_resume(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+-		const u8 device_idx)
++				   const u8 device_idx)
+ {
+ 	int err, i;
+ 	u16 value;
+@@ -4028,10 +4288,10 @@ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+ 	for (i = 0; i < device->ncontrols; i++) {
+ 		value = device->controls[i].default_value;
+ 		knew.name = device->controls[i].name;
+-		knew.private_value = (
++		knew.private_value =
+ 			((unsigned long)device_idx << SND_DJM_DEVICE_SHIFT) |
+ 			(i << SND_DJM_GROUP_SHIFT) |
+-			value);
++			value;
+ 		err = snd_djm_controls_update(mixer, device_idx, i, value);
+ 		if (err)
+ 			return err;
+@@ -4073,6 +4333,13 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		err = snd_emu0204_controls_create(mixer);
+ 		break;
+ 
++#if IS_REACHABLE(CONFIG_INPUT)
++	case USB_ID(0x054c, 0x0ce6): /* Sony DualSense controller (PS5) */
++	case USB_ID(0x054c, 0x0df2): /* Sony DualSense Edge controller (PS5) */
++		err = snd_dualsense_controls_create(mixer);
++		break;
++#endif /* IS_REACHABLE(CONFIG_INPUT) */
++
+ 	case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */
+ 	case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C400 */
+ 		err = snd_c400_create_mixer(mixer);
+@@ -4098,13 +4365,15 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		break;
+ 
+ 	case USB_ID(0x17cc, 0x1011): /* Traktor Audio 6 */
+-		err = snd_nativeinstruments_create_mixer(mixer,
++		err = snd_nativeinstruments_create_mixer(/* checkpatch hack */
++				mixer,
+ 				snd_nativeinstruments_ta6_mixers,
+ 				ARRAY_SIZE(snd_nativeinstruments_ta6_mixers));
+ 		break;
+ 
+ 	case USB_ID(0x17cc, 0x1021): /* Traktor Audio 10 */
+-		err = snd_nativeinstruments_create_mixer(mixer,
++		err = snd_nativeinstruments_create_mixer(/* checkpatch hack */
++				mixer,
+ 				snd_nativeinstruments_ta10_mixers,
+ 				ARRAY_SIZE(snd_nativeinstruments_ta10_mixers));
+ 		break;
+@@ -4254,7 +4523,8 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+ 					 struct snd_kcontrol *kctl)
+ {
+ 	/* Approximation using 10 ranges based on output measurement on hw v1.2.
+-	 * This seems close to the cubic mapping e.g. alsamixer uses. */
++	 * This seems close to the cubic mapping e.g. alsamixer uses.
++	 */
+ 	static const DECLARE_TLV_DB_RANGE(scale,
+ 		 0,  1, TLV_DB_MINMAX_ITEM(-5300, -4970),
+ 		 2,  5, TLV_DB_MINMAX_ITEM(-4710, -4160),
+@@ -4338,20 +4608,15 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ 		if (unitid == 7 && cval->control == UAC_FU_VOLUME)
+ 			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
++	}
++
+ 	/* lowest playback value is muted on some devices */
+-	case USB_ID(0x0572, 0x1b09): /* Conexant Systems (Rockwell), Inc. */
+-	case USB_ID(0x0d8c, 0x000c): /* C-Media */
+-	case USB_ID(0x0d8c, 0x0014): /* C-Media */
+-	case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
+-	case USB_ID(0x2d99, 0x0026): /* HECATE G2 GAMING HEADSET */
++	if (mixer->chip->quirk_flags & QUIRK_FLAG_MIXER_MIN_MUTE)
+ 		if (strstr(kctl->id.name, "Playback"))
+ 			cval->min_mute = 1;
+-		break;
+-	}
+ 
+ 	/* ALSA-ify some Plantronics headset control names */
+ 	if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
+ 	    (cval->control == UAC_FU_MUTE || cval->control == UAC_FU_VOLUME))
+ 		snd_fix_plt_name(mixer->chip, &kctl->id);
+ }
+-
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index bd24f3a78ea9db..766db7d00cbc95 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2199,6 +2199,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_SET_IFACE_FIRST),
+ 	DEVICE_FLG(0x0556, 0x0014, /* Phoenix Audio TMX320VC */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x0572, 0x1b08, /* Conexant Systems (Rockwell), Inc. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
++	DEVICE_FLG(0x0572, 0x1b09, /* Conexant Systems (Rockwell), Inc. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x05a3, 0x9420, /* ELP HD USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x05a7, 0x1020, /* Bose Companion 5 */
+@@ -2241,12 +2245,16 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x0b0e, 0x0349, /* Jabra 550a */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0bda, 0x498a, /* Realtek Semiconductor Corp. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x0c45, 0x636b, /* Microdia JP001 USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+-	DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
+-		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0d8c, 0x000c, /* C-Media */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
++	DEVICE_FLG(0x0d8c, 0x0014, /* C-Media */
++		   QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+ 		   QUIRK_FLAG_FIXED_RATE),
+ 	DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
+@@ -2255,6 +2263,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ 	DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x12d1, 0x3a07, /* Huawei Technologies Co., Ltd. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ 	DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+@@ -2293,6 +2303,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x19f7, 0x0003, /* RODE NT-USB */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */
+@@ -2343,6 +2355,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x2a70, 0x1881, /* OnePlus Technology (Shenzhen) Co., Ltd. BE02T */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */
+@@ -2353,10 +2367,14 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x2d99, 0x0026, /* HECATE G2 GAMING HEADSET */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
++	DEVICE_FLG(0x339b, 0x3a07, /* Synaptics HONOR USB-C HEADSET */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */
+@@ -2408,6 +2426,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x2d87, /* Cayin device */
+ 		   QUIRK_FLAG_DSD_RAW),
++	VENDOR_FLG(0x2fc6, /* Comture-inc devices */
++		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3336, /* HEM devices */
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3353, /* Khadas devices */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 158ec053dc44dd..1ef4d39978df36 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -196,6 +196,9 @@ extern bool snd_usb_skip_validation;
+  *  for the given endpoint.
+  * QUIRK_FLAG_MIC_RES_16 and QUIRK_FLAG_MIC_RES_384
+  *  Set the fixed resolution for Mic Capture Volume (mostly for webcams)
++ * QUIRK_FLAG_MIXER_MIN_MUTE
++ *  Set minimum volume control value as mute for devices where the lowest
++ *  playback value represents muted state instead of minimum audible volume
+  */
+ 
+ #define QUIRK_FLAG_GET_SAMPLE_RATE	(1U << 0)
+@@ -222,5 +225,6 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_FIXED_RATE		(1U << 21)
+ #define QUIRK_FLAG_MIC_RES_16		(1U << 22)
+ #define QUIRK_FLAG_MIC_RES_384		(1U << 23)
++#define QUIRK_FLAG_MIXER_MIN_MUTE	(1U << 24)
+ 
+ #endif /* __USBAUDIO_H */
+diff --git a/tools/testing/selftests/bpf/prog_tests/free_timer.c b/tools/testing/selftests/bpf/prog_tests/free_timer.c
+index b7b77a6b29799c..0de8facca4c5bc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/free_timer.c
++++ b/tools/testing/selftests/bpf/prog_tests/free_timer.c
+@@ -124,6 +124,10 @@ void test_free_timer(void)
+ 	int err;
+ 
+ 	skel = free_timer__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "open_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c
+index d66687f1ee6a8d..56f660ca567ba1 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer.c
+@@ -86,6 +86,10 @@ void serial_test_timer(void)
+ 	int err;
+ 
+ 	timer_skel = timer__open_and_load();
++	if (!timer_skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+index f74b82305da8c8..b841597c8a3a31 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_crash.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+@@ -12,6 +12,10 @@ static void test_timer_crash_mode(int mode)
+ 	struct timer_crash *skel;
+ 
+ 	skel = timer_crash__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
+ 		return;
+ 	skel->bss->pid = getpid();
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+index 1a2f99596916fb..eb303fa1e09af9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+@@ -59,6 +59,10 @@ void test_timer_lockup(void)
+ 	}
+ 
+ 	skel = timer_lockup__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_mim.c b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+index 9ff7843909e7d3..c930c7d7105b9f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_mim.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+@@ -65,6 +65,10 @@ void serial_test_timer_mim(void)
+ 		goto cleanup;
+ 
+ 	timer_skel = timer_mim__open_and_load();
++	if (!timer_skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+ 		goto cleanup;
+ 
+diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
+index 63ce708d93ed06..e4b7c2b457ee7a 100644
+--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
+@@ -2,6 +2,13 @@
+ // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu>
+ 
+ #define _GNU_SOURCE
++
++// Needed for linux/fanotify.h
++typedef struct {
++	int	val[2];
++} __kernel_fsid_t;
++#define __kernel_fsid_t __kernel_fsid_t
++
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+@@ -10,20 +17,12 @@
+ #include <sys/mount.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
++#include <sys/fanotify.h>
+ 
+ #include "../../kselftest_harness.h"
+ #include "../statmount/statmount.h"
+ #include "../utils.h"
+ 
+-// Needed for linux/fanotify.h
+-#ifndef __kernel_fsid_t
+-typedef struct {
+-	int	val[2];
+-} __kernel_fsid_t;
+-#endif
+-
+-#include <sys/fanotify.h>
+-
+ static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX";
+ 
+ static const int mark_cmds[] = {
+diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
+index 090a5ca65004a0..9f57ca46e3afa0 100644
+--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
+@@ -2,6 +2,13 @@
+ // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu>
+ 
+ #define _GNU_SOURCE
++
++// Needed for linux/fanotify.h
++typedef struct {
++	int	val[2];
++} __kernel_fsid_t;
++#define __kernel_fsid_t __kernel_fsid_t
++
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+@@ -10,21 +17,12 @@
+ #include <sys/mount.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
++#include <sys/fanotify.h>
+ 
+ #include "../../kselftest_harness.h"
+-#include "../../pidfd/pidfd.h"
+ #include "../statmount/statmount.h"
+ #include "../utils.h"
+ 
+-// Needed for linux/fanotify.h
+-#ifndef __kernel_fsid_t
+-typedef struct {
+-	int	val[2];
+-} __kernel_fsid_t;
+-#endif
+-
+-#include <sys/fanotify.h>
+-
+ static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX";
+ 
+ static const int mark_types[] = {
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index b39f748c25722a..2ac394c99d0183 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -467,8 +467,8 @@ ipv6_fdb_grp_fcnal()
+ 	log_test $? 0 "Get Fdb nexthop group by id"
+ 
+ 	# fdb nexthop group can only contain fdb nexthops
+-	run_cmd "$IP nexthop add id 63 via 2001:db8:91::4"
+-	run_cmd "$IP nexthop add id 64 via 2001:db8:91::5"
++	run_cmd "$IP nexthop add id 63 via 2001:db8:91::4 dev veth1"
++	run_cmd "$IP nexthop add id 64 via 2001:db8:91::5 dev veth1"
+ 	run_cmd "$IP nexthop add id 103 group 63/64 fdb"
+ 	log_test $? 2 "Fdb Nexthop group with non-fdb nexthops"
+ 
+@@ -547,15 +547,15 @@ ipv4_fdb_grp_fcnal()
+ 	log_test $? 0 "Get Fdb nexthop group by id"
+ 
+ 	# fdb nexthop group can only contain fdb nexthops
+-	run_cmd "$IP nexthop add id 14 via 172.16.1.2"
+-	run_cmd "$IP nexthop add id 15 via 172.16.1.3"
++	run_cmd "$IP nexthop add id 14 via 172.16.1.2 dev veth1"
++	run_cmd "$IP nexthop add id 15 via 172.16.1.3 dev veth1"
+ 	run_cmd "$IP nexthop add id 103 group 14/15 fdb"
+ 	log_test $? 2 "Fdb Nexthop group with non-fdb nexthops"
+ 
+ 	# Non fdb nexthop group can not contain fdb nexthops
+ 	run_cmd "$IP nexthop add id 16 via 172.16.1.2 fdb"
+ 	run_cmd "$IP nexthop add id 17 via 172.16.1.3 fdb"
+-	run_cmd "$IP nexthop add id 104 group 14/15"
++	run_cmd "$IP nexthop add id 104 group 16/17"
+ 	log_test $? 2 "Non-Fdb Nexthop group with fdb nexthops"
+ 
+ 	# fdb nexthop cannot have blackhole
+@@ -582,7 +582,7 @@ ipv4_fdb_grp_fcnal()
+ 	run_cmd "$BRIDGE fdb add 02:02:00:00:00:14 dev vx10 nhid 12 self"
+ 	log_test $? 255 "Fdb mac add with nexthop"
+ 
+-	run_cmd "$IP ro add 172.16.0.0/22 nhid 15"
++	run_cmd "$IP ro add 172.16.0.0/22 nhid 16"
+ 	log_test $? 2 "Route add with fdb nexthop"
+ 
+ 	run_cmd "$IP ro add 172.16.0.0/22 nhid 103"



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02 13:30 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02 13:30 UTC (permalink / raw
  To: gentoo-commits

commit:     bb13a225747cbfce6c3622ae1772cc4bb6ee4994
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 13:30:10 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 13:30:10 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bb13a225

Linux patch 6.16.10

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README              |   10 +-
 1009_linux-6.16.10.patch | 6562 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6565 insertions(+), 7 deletions(-)

diff --git a/0000_README b/0000_README
index d1517a09..d6f79fa1 100644
--- a/0000_README
+++ b/0000_README
@@ -79,13 +79,9 @@ Patch:  1008_linux-6.16.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.9
 
-Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
-Desc:   btrfs: don't allow adding block device of less than 1 MB
-
-Patch:  1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-6.16/crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
-Desc:   crypto: af_alg - Fix incorrect boolean values in af_alg_ctx
+Patch:  1009_linux-6.16.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.10
 
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/

diff --git a/1009_linux-6.16.10.patch b/1009_linux-6.16.10.patch
new file mode 100644
index 00000000..4371790d
--- /dev/null
+++ b/1009_linux-6.16.10.patch
@@ -0,0 +1,6562 @@
+diff --git a/Documentation/admin-guide/laptops/lg-laptop.rst b/Documentation/admin-guide/laptops/lg-laptop.rst
+index 67fd6932cef4ff..c4dd534f91edd1 100644
+--- a/Documentation/admin-guide/laptops/lg-laptop.rst
++++ b/Documentation/admin-guide/laptops/lg-laptop.rst
+@@ -48,8 +48,8 @@ This value is reset to 100 when the kernel boots.
+ Fan mode
+ --------
+ 
+-Writing 1/0 to /sys/devices/platform/lg-laptop/fan_mode disables/enables
+-the fan silent mode.
++Writing 0/1/2 to /sys/devices/platform/lg-laptop/fan_mode sets fan mode to
++Optimal/Silent/Performance respectively.
+ 
+ 
+ USB charge
+diff --git a/Makefile b/Makefile
+index aef2cb6ea99d8b..19856af4819a53 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts b/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
+index ce0d6514eeb571..e4794ccb8e413f 100644
+--- a/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
++++ b/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
+@@ -66,8 +66,10 @@ &gmac1 {
+ 	mdio0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		phy0: ethernet-phy@0 {
+-			reg = <0>;
++		compatible = "snps,dwmac-mdio";
++
++		phy0: ethernet-phy@4 {
++			reg = <4>;
+ 			rxd0-skew-ps = <0>;
+ 			rxd1-skew-ps = <0>;
+ 			rxd2-skew-ps = <0>;
+diff --git a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
+index d4e0b8150a84ce..cf26e2ceaaa074 100644
+--- a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
++++ b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
+@@ -38,7 +38,7 @@ sound {
+ 		simple-audio-card,mclk-fs = <256>;
+ 
+ 		simple-audio-card,cpu {
+-			sound-dai = <&audio0 0>;
++			sound-dai = <&audio0>;
+ 		};
+ 
+ 		simple-audio-card,codec {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 948b88cf5e9dff..305c2912e90f74 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -298,7 +298,7 @@ thermal-zones {
+ 		cpu-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <2000>;
+-			thermal-sensors = <&tmu 0>;
++			thermal-sensors = <&tmu 1>;
+ 			trips {
+ 				cpu_alert0: trip0 {
+ 					temperature = <85000>;
+@@ -328,7 +328,7 @@ map0 {
+ 		soc-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <2000>;
+-			thermal-sensors = <&tmu 1>;
++			thermal-sensors = <&tmu 0>;
+ 			trips {
+ 				soc_alert0: trip0 {
+ 					temperature = <85000>;
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi b/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
+index ad0ab34b66028c..bd42bfbe408bbe 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
+@@ -152,11 +152,12 @@ expander0_pins: cp0-expander0-pins {
+ 
+ /* SRDS #0 - SATA on M.2 connector */
+ &cp0_sata0 {
+-	phys = <&cp0_comphy0 1>;
+ 	status = "okay";
+ 
+-	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
++	sata-port@1 {
++		phys = <&cp0_comphy0 1>;
++		status = "okay";
++	};
+ };
+ 
+ /* microSD */
+diff --git a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+index 47234d0858dd21..338853d3b179bb 100644
+--- a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
++++ b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+@@ -563,11 +563,13 @@ &cp1_rtc {
+ 
+ /* SRDS #1 - SATA on M.2 (J44) */
+ &cp1_sata0 {
+-	phys = <&cp1_comphy1 0>;
+ 	status = "okay";
+ 
+ 	/* only port 0 is available */
+-	/delete-node/ sata-port@1;
++	sata-port@0 {
++		phys = <&cp1_comphy1 0>;
++		status = "okay";
++	};
+ };
+ 
+ &cp1_syscon0 {
+diff --git a/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts b/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
+index 0f53745a6fa0d8..6f237d3542b910 100644
+--- a/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
++++ b/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
+@@ -413,7 +413,13 @@ fixed-link {
+ /* SRDS #0,#1,#2,#3 - PCIe */
+ &cp0_pcie0 {
+ 	num-lanes = <4>;
+-	phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, <&cp0_comphy2 0>, <&cp0_comphy3 0>;
++	/*
++	 * The mvebu-comphy driver does not currently know how to pass correct
++	 * lane-count to ATF while configuring the serdes lanes.
++	 * Rely on bootloader configuration only.
++	 *
++	 * phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, <&cp0_comphy2 0>, <&cp0_comphy3 0>;
++	 */
+ 	status = "okay";
+ };
+ 
+@@ -475,7 +481,13 @@ &cp1_eth0 {
+ /* SRDS #0,#1 - PCIe */
+ &cp1_pcie0 {
+ 	num-lanes = <2>;
+-	phys = <&cp1_comphy0 0>, <&cp1_comphy1 0>;
++	/*
++	 * The mvebu-comphy driver does not currently know how to pass correct
++	 * lane-count to ATF while configuring the serdes lanes.
++	 * Rely on bootloader configuration only.
++	 *
++	 * phys = <&cp1_comphy0 0>, <&cp1_comphy1 0>;
++	 */
+ 	status = "okay";
+ };
+ 
+@@ -512,10 +524,9 @@ &cp1_sata0 {
+ 	status = "okay";
+ 
+ 	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
+-
+ 	sata-port@1 {
+ 		phys = <&cp1_comphy3 1>;
++		status = "okay";
+ 	};
+ };
+ 
+@@ -631,9 +642,8 @@ &cp2_sata0 {
+ 	status = "okay";
+ 
+ 	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
+-
+ 	sata-port@1 {
++		status = "okay";
+ 		phys = <&cp2_comphy3 1>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi b/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
+index afc041c1c448c3..bb2bb47fd77c12 100644
+--- a/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
+@@ -137,6 +137,14 @@ &ap_sdhci0 {
+ 	pinctrl-0 = <&ap_mmc0_pins>;
+ 	pinctrl-names = "default";
+ 	vqmmc-supply = <&v_1_8>;
++	/*
++	 * Not stable in HS modes - phy needs "more calibration", so disable
++	 * UHS (by preventing voltage switch), SDR104, SDR50 and DDR50 modes.
++	 */
++	no-1-8-v;
++	no-sd;
++	no-sdio;
++	non-removable;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
+index 4fedc50cce8c86..11940c77f2bd01 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
+@@ -42,9 +42,8 @@ analog-sound {
+ 		simple-audio-card,bitclock-master = <&masterdai>;
+ 		simple-audio-card,format = "i2s";
+ 		simple-audio-card,frame-master = <&masterdai>;
+-		simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_LOW>;
++		simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_HIGH>;
+ 		simple-audio-card,mclk-fs = <256>;
+-		simple-audio-card,pin-switches = "Headphones";
+ 		simple-audio-card,routing =
+ 			"Headphones", "LOUT1",
+ 			"Headphones", "ROUT1",
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 5bd5aae60d5369..4d9fe4ea78affa 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -964,6 +964,23 @@ static inline int pudp_test_and_clear_young(struct vm_area_struct *vma,
+ 	return ptep_test_and_clear_young(vma, address, (pte_t *)pudp);
+ }
+ 
++#define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR
++static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
++					    unsigned long address,  pud_t *pudp)
++{
++#ifdef CONFIG_SMP
++	pud_t pud = __pud(xchg(&pudp->pud, 0));
++#else
++	pud_t pud = *pudp;
++
++	pud_clear(pudp);
++#endif
++
++	page_table_check_pud_clear(mm, pud);
++
++	return pud;
++}
++
+ static inline int pud_young(pud_t pud)
+ {
+ 	return pte_young(pud_pte(pud));
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 874c9b264d6f0c..a530806ec5b025 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -26,7 +26,6 @@ config X86_64
+ 	depends on 64BIT
+ 	# Options that are inherently 64-bit kernel only:
+ 	select ARCH_HAS_GIGANTIC_PAGE
+-	select ARCH_HAS_PTDUMP
+ 	select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
+ 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
+ 	select ARCH_SUPPORTS_PER_VMA_LOCK
+@@ -101,6 +100,7 @@ config X86
+ 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PMEM_API		if X86_64
+ 	select ARCH_HAS_PREEMPT_LAZY
++	select ARCH_HAS_PTDUMP
+ 	select ARCH_HAS_PTE_DEVMAP		if X86_64
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_HW_PTE_YOUNG
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index 6c79ee7c0957a7..21041898157a1f 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -231,6 +231,16 @@ static inline bool topology_is_primary_thread(unsigned int cpu)
+ }
+ #define topology_is_primary_thread topology_is_primary_thread
+ 
++int topology_get_primary_thread(unsigned int cpu);
++
++static inline bool topology_is_core_online(unsigned int cpu)
++{
++	int pcpu = topology_get_primary_thread(cpu);
++
++	return pcpu >= 0 ? cpu_online(pcpu) : false;
++}
++#define topology_is_core_online topology_is_core_online
++
+ #else /* CONFIG_SMP */
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index e35ccdc84910f5..6073a16628f9e4 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -372,6 +372,19 @@ unsigned int topology_unit_count(u32 apicid, enum x86_topology_domains which_uni
+ 	return topo_unit_count(lvlid, at_level, apic_maps[which_units].map);
+ }
+ 
++#ifdef CONFIG_SMP
++int topology_get_primary_thread(unsigned int cpu)
++{
++	u32 apic_id = cpuid_to_apicid[cpu];
++
++	/*
++	 * Get the core domain level APIC id, which is the primary thread
++	 * and return the CPU number assigned to it.
++	 */
++	return topo_lookup_cpuid(topo_apicid(apic_id, TOPO_CORE_DOMAIN));
++}
++#endif
++
+ #ifdef CONFIG_ACPI_HOTPLUG_CPU
+ /**
+  * topology_hotplug_apic - Handle a physical hotplugged APIC after boot
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 628f5b633b61fe..b2da1cda4cebd1 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2956,6 +2956,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 			goto err_null_driver;
+ 	}
+ 
++	/*
++	 * Mark support for the scheduler's frequency invariance engine for
++	 * drivers that implement target(), target_index() or fast_switch().
++	 */
++	if (!cpufreq_driver->setpolicy) {
++		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++		pr_debug("cpufreq: supports frequency invariance\n");
++	}
++
+ 	ret = subsys_interface_register(&cpufreq_interface);
+ 	if (ret)
+ 		goto err_boost_unreg;
+@@ -2977,21 +2986,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	hp_online = ret;
+ 	ret = 0;
+ 
+-	/*
+-	 * Mark support for the scheduler's frequency invariance engine for
+-	 * drivers that implement target(), target_index() or fast_switch().
+-	 */
+-	if (!cpufreq_driver->setpolicy) {
+-		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+-		pr_debug("supports frequency invariance");
+-	}
+-
+ 	pr_debug("driver %s up and running\n", driver_data->name);
+ 	goto out;
+ 
+ err_if_unreg:
+ 	subsys_interface_unregister(&cpufreq_interface);
+ err_boost_unreg:
++	if (!cpufreq_driver->setpolicy)
++		static_branch_disable_cpuslocked(&cpufreq_freq_invariance);
+ 	remove_boost_sysfs_file();
+ err_null_driver:
+ 	write_lock_irqsave(&cpufreq_driver_lock, flags);
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index bd04980009a467..6a81c3fd4c8609 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -41,7 +41,7 @@
+ /*
+  * ABI version history is documented in linux/firewire-cdev.h.
+  */
+-#define FW_CDEV_KERNEL_VERSION			5
++#define FW_CDEV_KERNEL_VERSION			6
+ #define FW_CDEV_VERSION_EVENT_REQUEST2		4
+ #define FW_CDEV_VERSION_ALLOCATE_REGION_END	4
+ #define FW_CDEV_VERSION_AUTO_FLUSH_ISO_OVERFLOW	5
+diff --git a/drivers/gpio/gpio-regmap.c b/drivers/gpio/gpio-regmap.c
+index 87c4225784cfae..b3b84a404485eb 100644
+--- a/drivers/gpio/gpio-regmap.c
++++ b/drivers/gpio/gpio-regmap.c
+@@ -274,7 +274,7 @@ struct gpio_regmap *gpio_regmap_register(const struct gpio_regmap_config *config
+ 	if (!chip->ngpio) {
+ 		ret = gpiochip_get_ngpios(chip, chip->parent);
+ 		if (ret)
+-			return ERR_PTR(ret);
++			goto err_free_gpio;
+ 	}
+ 
+ 	/* if not set, assume there is only one register */
+diff --git a/drivers/gpio/gpiolib-acpi-quirks.c b/drivers/gpio/gpiolib-acpi-quirks.c
+index c13545dce3492d..bfb04e67c4bc87 100644
+--- a/drivers/gpio/gpiolib-acpi-quirks.c
++++ b/drivers/gpio/gpiolib-acpi-quirks.c
+@@ -344,6 +344,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.ignore_interrupt = "AMDI0030:00@8",
+ 		},
+ 	},
++	{
++		/*
++		 * Spurious wakeups from TP_ATTN# pin
++		 * Found in BIOS 5.35
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/4482
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt PX13"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "ASCP1A00:00@8",
++		},
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 3a3eca5b4c40b6..01d611d7ee66ac 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -4605,6 +4605,23 @@ static struct gpio_desc *gpiod_find_by_fwnode(struct fwnode_handle *fwnode,
+ 	return desc;
+ }
+ 
++static struct gpio_desc *gpiod_fwnode_lookup(struct fwnode_handle *fwnode,
++					     struct device *consumer,
++					     const char *con_id,
++					     unsigned int idx,
++					     enum gpiod_flags *flags,
++					     unsigned long *lookupflags)
++{
++	struct gpio_desc *desc;
++
++	desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx, flags, lookupflags);
++	if (gpiod_not_found(desc) && !IS_ERR_OR_NULL(fwnode))
++		desc = gpiod_find_by_fwnode(fwnode->secondary, consumer, con_id,
++					    idx, flags, lookupflags);
++
++	return desc;
++}
++
+ struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+ 					 struct fwnode_handle *fwnode,
+ 					 const char *con_id,
+@@ -4623,8 +4640,8 @@ struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+ 	int ret = 0;
+ 
+ 	scoped_guard(srcu, &gpio_devices_srcu) {
+-		desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx,
+-					    &flags, &lookupflags);
++		desc = gpiod_fwnode_lookup(fwnode, consumer, con_id, idx,
++					   &flags, &lookupflags);
+ 		if (gpiod_not_found(desc) && platform_lookup_allowed) {
+ 			/*
+ 			 * Either we are not using DT or ACPI, or their lookup
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 260165bbe3736d..b16cce7c22c373 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -213,19 +213,35 @@ int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
+ 	spin_lock(&kfd_mem_limit.mem_limit_lock);
+ 
+ 	if (kfd_mem_limit.system_mem_used + system_mem_needed >
+-	    kfd_mem_limit.max_system_mem_limit)
++	    kfd_mem_limit.max_system_mem_limit) {
+ 		pr_debug("Set no_system_mem_limit=1 if using shared memory\n");
++		if (!no_system_mem_limit) {
++			ret = -ENOMEM;
++			goto release;
++		}
++	}
+ 
+-	if ((kfd_mem_limit.system_mem_used + system_mem_needed >
+-	     kfd_mem_limit.max_system_mem_limit && !no_system_mem_limit) ||
+-	    (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
+-	     kfd_mem_limit.max_ttm_mem_limit) ||
+-	    (adev && xcp_id >= 0 && adev->kfd.vram_used[xcp_id] + vram_needed >
+-	     vram_size - reserved_for_pt - reserved_for_ras - atomic64_read(&adev->vram_pin_size))) {
++	if (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
++		kfd_mem_limit.max_ttm_mem_limit) {
+ 		ret = -ENOMEM;
+ 		goto release;
+ 	}
+ 
++	/* if is_app_apu is false and apu_prefer_gtt is true, it is an APU with
++	 * carve out < gtt. In that case, VRAM allocation will go to gtt domain, skip
++	 * VRAM check since ttm_mem_limit check already covers this allocation
++	 */
++
++	if (adev && xcp_id >= 0 && (!adev->apu_prefer_gtt || adev->gmc.is_app_apu)) {
++		uint64_t vram_available =
++			vram_size - reserved_for_pt - reserved_for_ras -
++			atomic64_read(&adev->vram_pin_size);
++		if (adev->kfd.vram_used[xcp_id] + vram_needed > vram_available) {
++			ret = -ENOMEM;
++			goto release;
++		}
++	}
++
+ 	/* Update memory accounting by decreasing available system
+ 	 * memory, TTM memory and GPU memory as computed above
+ 	 */
+@@ -1626,11 +1642,15 @@ size_t amdgpu_amdkfd_get_available_memory(struct amdgpu_device *adev,
+ 	uint64_t vram_available, system_mem_available, ttm_mem_available;
+ 
+ 	spin_lock(&kfd_mem_limit.mem_limit_lock);
+-	vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
+-		- adev->kfd.vram_used_aligned[xcp_id]
+-		- atomic64_read(&adev->vram_pin_size)
+-		- reserved_for_pt
+-		- reserved_for_ras;
++	if (adev->apu_prefer_gtt && !adev->gmc.is_app_apu)
++		vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
++			- adev->kfd.vram_used_aligned[xcp_id];
++	else
++		vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
++			- adev->kfd.vram_used_aligned[xcp_id]
++			- atomic64_read(&adev->vram_pin_size)
++			- reserved_for_pt
++			- reserved_for_ras;
+ 
+ 	if (adev->apu_prefer_gtt) {
+ 		system_mem_available = no_system_mem_limit ?
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 4ec73f33535ebf..720b20e842ba43 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1587,7 +1587,8 @@ static int kfd_dev_create_p2p_links(void)
+ 			break;
+ 		if (!dev->gpu || !dev->gpu->adev ||
+ 		    (dev->gpu->kfd->hive_id &&
+-		     dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id))
++		     dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id &&
++		     amdgpu_xgmi_get_is_sharing_enabled(dev->gpu->adev, new_dev->gpu->adev)))
+ 			goto next;
+ 
+ 		/* check if node(s) is/are peer accessible in one direction or bi-direction */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 58ea351dd48b5d..fa24bcae3c5fc4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2035,6 +2035,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 
+ 	dc_hardware_init(adev->dm.dc);
+ 
++	adev->dm.restore_backlight = true;
++
+ 	adev->dm.hpd_rx_offload_wq = hpd_rx_irq_create_workqueue(adev);
+ 	if (!adev->dm.hpd_rx_offload_wq) {
+ 		drm_err(adev_to_drm(adev), "amdgpu: failed to create hpd rx offload workqueue.\n");
+@@ -3396,6 +3398,7 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 		dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
+ 
+ 		dc_resume(dm->dc);
++		adev->dm.restore_backlight = true;
+ 
+ 		amdgpu_dm_irq_resume_early(adev);
+ 
+@@ -9801,7 +9804,6 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 	bool mode_set_reset_required = false;
+ 	u32 i;
+ 	struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
+-	bool set_backlight_level = false;
+ 
+ 	/* Disable writeback */
+ 	for_each_old_connector_in_state(state, connector, old_con_state, i) {
+@@ -9921,7 +9923,6 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 			acrtc->hw_mode = new_crtc_state->mode;
+ 			crtc->hwmode = new_crtc_state->mode;
+ 			mode_set_reset_required = true;
+-			set_backlight_level = true;
+ 		} else if (modereset_required(new_crtc_state)) {
+ 			drm_dbg_atomic(dev,
+ 				       "Atomic commit: RESET. crtc id %d:[%p]\n",
+@@ -9978,13 +9979,16 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 	 * to fix a flicker issue.
+ 	 * It will cause the dm->actual_brightness is not the current panel brightness
+ 	 * level. (the dm->brightness is the correct panel level)
+-	 * So we set the backlight level with dm->brightness value after set mode
++	 * So we set the backlight level with dm->brightness value after initial
++	 * set mode. Use restore_backlight flag to avoid setting backlight level
++	 * for every subsequent mode set.
+ 	 */
+-	if (set_backlight_level) {
++	if (dm->restore_backlight) {
+ 		for (i = 0; i < dm->num_of_edps; i++) {
+ 			if (dm->backlight_dev[i])
+ 				amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
+ 		}
++		dm->restore_backlight = false;
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index d7d92f9911e465..47abef63686ea5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -610,6 +610,13 @@ struct amdgpu_display_manager {
+ 	 */
+ 	u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
+ 
++	/**
++	 * @restore_backlight:
++	 *
++	 * Flag to indicate whether to restore backlight after modeset.
++	 */
++	bool restore_backlight;
++
+ 	/**
+ 	 * @aux_hpd_discon_quirk:
+ 	 *
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 7dfbfb18593c12..f037f2d83400b9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1292,7 +1292,6 @@ union surface_update_flags {
+ 		uint32_t in_transfer_func_change:1;
+ 		uint32_t input_csc_change:1;
+ 		uint32_t coeff_reduction_change:1;
+-		uint32_t output_tf_change:1;
+ 		uint32_t pixel_format_change:1;
+ 		uint32_t plane_size_change:1;
+ 		uint32_t gamut_remap_change:1;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 454e362ff096aa..c0127d8b5b3961 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1990,10 +1990,8 @@ static void dcn20_program_pipe(
+ 	 * updating on slave planes
+ 	 */
+ 	if (pipe_ctx->update_flags.bits.enable ||
+-		pipe_ctx->update_flags.bits.plane_changed ||
+-		pipe_ctx->stream->update_flags.bits.out_tf ||
+-		(pipe_ctx->plane_state &&
+-			pipe_ctx->plane_state->update_flags.bits.output_tf_change))
++	    pipe_ctx->update_flags.bits.plane_changed ||
++	    pipe_ctx->stream->update_flags.bits.out_tf)
+ 		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+ 
+ 	/* If the pipe has been enabled or has a different opp, we
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index c4177a9a662fac..c68d01f3786026 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -2289,10 +2289,8 @@ void dcn401_program_pipe(
+ 	 * updating on slave planes
+ 	 */
+ 	if (pipe_ctx->update_flags.bits.enable ||
+-		pipe_ctx->update_flags.bits.plane_changed ||
+-		pipe_ctx->stream->update_flags.bits.out_tf ||
+-		(pipe_ctx->plane_state &&
+-			pipe_ctx->plane_state->update_flags.bits.output_tf_change))
++	    pipe_ctx->update_flags.bits.plane_changed ||
++	    pipe_ctx->stream->update_flags.bits.out_tf)
+ 		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+ 
+ 	/* If the pipe has been enabled or has a different opp, we
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index 19c04687b0fe1f..8e650a02c5287b 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -134,7 +134,7 @@ static int ast_astdp_read_edid_block(void *data, u8 *buf, unsigned int block, si
+ 			 * 3. The Delays are often longer a lot when system resume from S3/S4.
+ 			 */
+ 			if (j)
+-				mdelay(j + 1);
++				msleep(j + 1);
+ 
+ 			/* Wait for EDID offset to show up in mirror register */
+ 			vgacrd7 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd7);
+diff --git a/drivers/gpu/drm/gma500/oaktrail_hdmi.c b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+index 1cf39436912776..c0feca58511df3 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_hdmi.c
++++ b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+@@ -726,8 +726,8 @@ void oaktrail_hdmi_teardown(struct drm_device *dev)
+ 
+ 	if (hdmi_dev) {
+ 		pdev = hdmi_dev->dev;
+-		pci_set_drvdata(pdev, NULL);
+ 		oaktrail_hdmi_i2c_exit(pdev);
++		pci_set_drvdata(pdev, NULL);
+ 		iounmap(hdmi_dev->regs);
+ 		kfree(hdmi_dev);
+ 		pci_dev_put(pdev);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index d58f8fc3732658..55b8bfcf364aec 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -593,8 +593,9 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
+ 			enum transcoder master;
+ 
+ 			master = crtc_state->mst_master_transcoder;
+-			drm_WARN_ON(display->drm,
+-				    master == INVALID_TRANSCODER);
++			if (drm_WARN_ON(display->drm,
++					master == INVALID_TRANSCODER))
++				master = TRANSCODER_A;
+ 			temp |= TRANS_DDI_MST_TRANSPORT_SELECT(master);
+ 		}
+ 	} else {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index f1ec3b02f15a00..07cd67baa81bfc 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -789,6 +789,8 @@ static const struct panfrost_compatible amlogic_data = {
+ 	.vendor_quirk = panfrost_gpu_amlogic_quirk,
+ };
+ 
++static const char * const mediatek_pm_domains[] = { "core0", "core1", "core2",
++						    "core3", "core4" };
+ /*
+  * The old data with two power supplies for MT8183 is here only to
+  * keep retro-compatibility with older devicetrees, as DVFS will
+@@ -797,51 +799,53 @@ static const struct panfrost_compatible amlogic_data = {
+  * On new devicetrees please use the _b variant with a single and
+  * coupled regulators instead.
+  */
+-static const char * const mediatek_mt8183_supplies[] = { "mali", "sram", NULL };
+-static const char * const mediatek_mt8183_pm_domains[] = { "core0", "core1", "core2" };
++static const char * const legacy_supplies[] = { "mali", "sram", NULL };
+ static const struct panfrost_compatible mediatek_mt8183_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_supplies) - 1,
+-	.supply_names = mediatek_mt8183_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(legacy_supplies) - 1,
++	.supply_names = legacy_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ };
+ 
+-static const char * const mediatek_mt8183_b_supplies[] = { "mali", NULL };
+ static const struct panfrost_compatible mediatek_mt8183_b_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ };
+ 
+-static const char * const mediatek_mt8186_pm_domains[] = { "core0", "core1" };
+ static const struct panfrost_compatible mediatek_mt8186_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8186_pm_domains),
+-	.pm_domain_names = mediatek_mt8186_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 2,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ };
+ 
+-/* MT8188 uses the same power domains and power supplies as MT8183 */
+ static const struct panfrost_compatible mediatek_mt8188_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ 	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
+ };
+ 
+-static const char * const mediatek_mt8192_supplies[] = { "mali", NULL };
+-static const char * const mediatek_mt8192_pm_domains[] = { "core0", "core1", "core2",
+-							   "core3", "core4" };
+ static const struct panfrost_compatible mediatek_mt8192_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8192_supplies) - 1,
+-	.supply_names = mediatek_mt8192_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8192_pm_domains),
+-	.pm_domain_names = mediatek_mt8192_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 5,
++	.pm_domain_names = mediatek_pm_domains,
++	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
++	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
++};
++
++static const struct panfrost_compatible mediatek_mt8370_data = {
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 2,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ 	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
+ };
+@@ -868,6 +872,7 @@ static const struct of_device_id dt_match[] = {
+ 	{ .compatible = "mediatek,mt8186-mali", .data = &mediatek_mt8186_data },
+ 	{ .compatible = "mediatek,mt8188-mali", .data = &mediatek_mt8188_data },
+ 	{ .compatible = "mediatek,mt8192-mali", .data = &mediatek_mt8192_data },
++	{ .compatible = "mediatek,mt8370-mali", .data = &mediatek_mt8370_data },
+ 	{ .compatible = "allwinner,sun50i-h616-mali", .data = &allwinner_h616_data },
+ 	{}
+ };
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 43ee57728de543..e927d80d6a2af9 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -886,8 +886,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
+ 	if (IS_ERR_OR_NULL(queue))
+ 		return;
+ 
+-	if (queue->entity.fence_context)
+-		drm_sched_entity_destroy(&queue->entity);
++	drm_sched_entity_destroy(&queue->entity);
+ 
+ 	if (queue->scheduler.ops)
+ 		drm_sched_fini(&queue->scheduler);
+@@ -3558,11 +3557,6 @@ int panthor_group_destroy(struct panthor_file *pfile, u32 group_handle)
+ 	if (!group)
+ 		return -EINVAL;
+ 
+-	for (u32 i = 0; i < group->queue_count; i++) {
+-		if (group->queues[i])
+-			drm_sched_entity_destroy(&group->queues[i]->entity);
+-	}
+-
+ 	mutex_lock(&sched->reset.lock);
+ 	mutex_lock(&sched->lock);
+ 	group->destroyed = true;
+diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+index 4d9896e14649c0..448afb86e05c7d 100644
+--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+@@ -117,7 +117,6 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_ENTER_S_STATE = 0x501,
+ 	XE_GUC_ACTION_EXIT_S_STATE = 0x502,
+ 	XE_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
+-	XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV = 0x509,
+ 	XE_GUC_ACTION_SCHED_CONTEXT = 0x1000,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
+@@ -143,7 +142,6 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
+ 	XE_GUC_ACTION_SET_DEVICE_ENGINE_ACTIVITY_BUFFER = 0x550C,
+ 	XE_GUC_ACTION_SET_FUNCTION_ENGINE_ACTIVITY_BUFFER = 0x550D,
+-	XE_GUC_ACTION_OPT_IN_FEATURE_KLV = 0x550E,
+ 	XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR = 0x6000,
+ 	XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC = 0x6002,
+ 	XE_GUC_ACTION_PAGE_FAULT_RES_DESC = 0x6003,
+@@ -242,7 +240,4 @@ enum xe_guc_g2g_type {
+ #define XE_G2G_DEREGISTER_TILE	REG_GENMASK(15, 12)
+ #define XE_G2G_DEREGISTER_TYPE	REG_GENMASK(11, 8)
+ 
+-/* invalid type for XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR */
+-#define XE_GUC_CAT_ERR_TYPE_INVALID 0xdeadbeef
+-
+ #endif
+diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+index 89034bc97ec5a4..7de8f827281fcd 100644
+--- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+@@ -16,8 +16,6 @@
+  *  +===+=======+==============================================================+
+  *  | 0 | 31:16 | **KEY** - KLV key identifier                                 |
+  *  |   |       |   - `GuC Self Config KLVs`_                                  |
+- *  |   |       |   - `GuC Opt In Feature KLVs`_                               |
+- *  |   |       |   - `GuC Scheduling Policies KLVs`_                          |
+  *  |   |       |   - `GuC VGT Policy KLVs`_                                   |
+  *  |   |       |   - `GuC VF Configuration KLVs`_                             |
+  *  |   |       |                                                              |
+@@ -126,44 +124,6 @@ enum  {
+ 	GUC_CONTEXT_POLICIES_KLV_NUM_IDS = 5,
+ };
+ 
+-/**
+- * DOC: GuC Opt In Feature KLVs
+- *
+- * `GuC KLV`_ keys available for use with OPT_IN_FEATURE_KLV
+- *
+- *  _`GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE` : 0x4001
+- *      Adds an extra dword to the XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR G2H
+- *      containing the type of the CAT error. On HW that does not support
+- *      reporting the CAT error type, the extra dword is set to 0xdeadbeef.
+- */
+-
+-#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_KEY 0x4001
+-#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_LEN 0u
+-
+-/**
+- * DOC: GuC Scheduling Policies KLVs
+- *
+- * `GuC KLV`_ keys available for use with UPDATE_SCHEDULING_POLICIES_KLV.
+- *
+- * _`GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD` : 0x1001
+- *      Some platforms do not allow concurrent execution of RCS and CCS
+- *      workloads from different address spaces. By default, the GuC prioritizes
+- *      RCS submissions over CCS ones, which can lead to CCS workloads being
+- *      significantly (or completely) starved of execution time. This KLV allows
+- *      the driver to specify a quantum (in ms) and a ratio (percentage value
+- *      between 0 and 100), and the GuC will prioritize the CCS for that
+- *      percentage of each quantum. For example, specifying 100ms and 30% will
+- *      make the GuC prioritize the CCS for 30ms of every 100ms.
+- *      Note that this does not necessarly mean that RCS and CCS engines will
+- *      only be active for their percentage of the quantum, as the restriction
+- *      only kicks in if both classes are fully busy with non-compatible address
+- *      spaces; i.e., if one engine is idle or running the same address space,
+- *      a pending job on the other engine will still be submitted to the HW no
+- *      matter what the ratio is
+- */
+-#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_KEY	0x1001
+-#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_LEN	2u
+-
+ /**
+  * DOC: GuC VGT Policy KLVs
+  *
+diff --git a/drivers/gpu/drm/xe/xe_bo_evict.c b/drivers/gpu/drm/xe/xe_bo_evict.c
+index ed3746d32b27b1..4620201c72399d 100644
+--- a/drivers/gpu/drm/xe/xe_bo_evict.c
++++ b/drivers/gpu/drm/xe/xe_bo_evict.c
+@@ -158,8 +158,8 @@ int xe_bo_evict_all(struct xe_device *xe)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.kernel_bo_present,
+-				    &xe->pinned.late.evicted, xe_bo_evict_pinned);
++	ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.external,
++				    &xe->pinned.late.external, xe_bo_evict_pinned);
+ 
+ 	if (!ret)
+ 		ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.kernel_bo_present,
+diff --git a/drivers/gpu/drm/xe/xe_configfs.c b/drivers/gpu/drm/xe/xe_configfs.c
+index 9a2b96b111ef54..2b591ed055612a 100644
+--- a/drivers/gpu/drm/xe/xe_configfs.c
++++ b/drivers/gpu/drm/xe/xe_configfs.c
+@@ -244,7 +244,7 @@ int __init xe_configfs_init(void)
+ 	return 0;
+ }
+ 
+-void __exit xe_configfs_exit(void)
++void xe_configfs_exit(void)
+ {
+ 	configfs_unregister_subsystem(&xe_configfs);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
+index b9440f8c781e3b..652da4d294c0b2 100644
+--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
++++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
+@@ -166,7 +166,7 @@ int xe_device_sysfs_init(struct xe_device *xe)
+ 			return ret;
+ 	}
+ 
+-	if (xe->info.platform == XE_BATTLEMAGE) {
++	if (xe->info.platform == XE_BATTLEMAGE && !IS_SRIOV_VF(xe)) {
+ 		ret = sysfs_create_files(&dev->kobj, auto_link_downgrade_attrs);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index eaf7569a7c1d1e..e3517ce2e18c14 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -41,7 +41,6 @@
+ #include "xe_gt_topology.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_pc.h"
+-#include "xe_guc_submit.h"
+ #include "xe_hw_fence.h"
+ #include "xe_hw_engine_class_sysfs.h"
+ #include "xe_irq.h"
+@@ -98,7 +97,7 @@ void xe_gt_sanitize(struct xe_gt *gt)
+ 	 * FIXME: if xe_uc_sanitize is called here, on TGL driver will not
+ 	 * reload
+ 	 */
+-	xe_guc_submit_disable(&gt->uc.guc);
++	gt->uc.guc.submission_state.enabled = false;
+ }
+ 
+ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
+diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
+index b9d21fdaad48ba..bac5471a1a7806 100644
+--- a/drivers/gpu/drm/xe/xe_guc.c
++++ b/drivers/gpu/drm/xe/xe_guc.c
+@@ -29,7 +29,6 @@
+ #include "xe_guc_db_mgr.h"
+ #include "xe_guc_engine_activity.h"
+ #include "xe_guc_hwconfig.h"
+-#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_log.h"
+ #include "xe_guc_pc.h"
+ #include "xe_guc_relay.h"
+@@ -571,57 +570,6 @@ static int guc_g2g_start(struct xe_guc *guc)
+ 	return err;
+ }
+ 
+-static int __guc_opt_in_features_enable(struct xe_guc *guc, u64 addr, u32 num_dwords)
+-{
+-	u32 action[] = {
+-		XE_GUC_ACTION_OPT_IN_FEATURE_KLV,
+-		lower_32_bits(addr),
+-		upper_32_bits(addr),
+-		num_dwords
+-	};
+-
+-	return xe_guc_ct_send_block(&guc->ct, action, ARRAY_SIZE(action));
+-}
+-
+-#define OPT_IN_MAX_DWORDS 16
+-int xe_guc_opt_in_features_enable(struct xe_guc *guc)
+-{
+-	struct xe_device *xe = guc_to_xe(guc);
+-	CLASS(xe_guc_buf, buf)(&guc->buf, OPT_IN_MAX_DWORDS);
+-	u32 count = 0;
+-	u32 *klvs;
+-	int ret;
+-
+-	if (!xe_guc_buf_is_valid(buf))
+-		return -ENOBUFS;
+-
+-	klvs = xe_guc_buf_cpu_ptr(buf);
+-
+-	/*
+-	 * The extra CAT error type opt-in was added in GuC v70.17.0, which maps
+-	 * to compatibility version v1.7.0.
+-	 * Note that the GuC allows enabling this KLV even on platforms that do
+-	 * not support the extra type; in such case the returned type variable
+-	 * will be set to a known invalid value which we can check against.
+-	 */
+-	if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 7, 0))
+-		klvs[count++] = PREP_GUC_KLV_TAG(OPT_IN_FEATURE_EXT_CAT_ERR_TYPE);
+-
+-	if (count) {
+-		xe_assert(xe, count <= OPT_IN_MAX_DWORDS);
+-
+-		ret = __guc_opt_in_features_enable(guc, xe_guc_buf_flush(buf), count);
+-		if (ret < 0) {
+-			xe_gt_err(guc_to_gt(guc),
+-				  "failed to enable GuC opt-in features: %pe\n",
+-				  ERR_PTR(ret));
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ static void guc_fini_hw(void *arg)
+ {
+ 	struct xe_guc *guc = arg;
+@@ -815,17 +763,15 @@ int xe_guc_post_load_init(struct xe_guc *guc)
+ 
+ 	xe_guc_ads_populate_post_load(&guc->ads);
+ 
+-	ret = xe_guc_opt_in_features_enable(guc);
+-	if (ret)
+-		return ret;
+-
+ 	if (xe_guc_g2g_wanted(guc_to_xe(guc))) {
+ 		ret = guc_g2g_start(guc);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	return xe_guc_submit_enable(guc);
++	guc->submission_state.enabled = true;
++
++	return 0;
+ }
+ 
+ int xe_guc_reset(struct xe_guc *guc)
+@@ -1519,7 +1465,7 @@ void xe_guc_sanitize(struct xe_guc *guc)
+ {
+ 	xe_uc_fw_sanitize(&guc->fw);
+ 	xe_guc_ct_disable(&guc->ct);
+-	xe_guc_submit_disable(guc);
++	guc->submission_state.enabled = false;
+ }
+ 
+ int xe_guc_reset_prepare(struct xe_guc *guc)
+diff --git a/drivers/gpu/drm/xe/xe_guc.h b/drivers/gpu/drm/xe/xe_guc.h
+index 4a66575f017d2d..58338be4455856 100644
+--- a/drivers/gpu/drm/xe/xe_guc.h
++++ b/drivers/gpu/drm/xe/xe_guc.h
+@@ -33,7 +33,6 @@ int xe_guc_reset(struct xe_guc *guc);
+ int xe_guc_upload(struct xe_guc *guc);
+ int xe_guc_min_load_for_hwconfig(struct xe_guc *guc);
+ int xe_guc_enable_communication(struct xe_guc *guc);
+-int xe_guc_opt_in_features_enable(struct xe_guc *guc);
+ int xe_guc_suspend(struct xe_guc *guc);
+ void xe_guc_notify(struct xe_guc *guc);
+ int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr);
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 18ddbb7b98a15b..45a21af1269276 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -32,7 +32,6 @@
+ #include "xe_guc_ct.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_id_mgr.h"
+-#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_submit_types.h"
+ #include "xe_hw_engine.h"
+ #include "xe_hw_fence.h"
+@@ -317,71 +316,6 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
+ 	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+ }
+ 
+-/*
+- * Given that we want to guarantee enough RCS throughput to avoid missing
+- * frames, we set the yield policy to 20% of each 80ms interval.
+- */
+-#define RC_YIELD_DURATION	80	/* in ms */
+-#define RC_YIELD_RATIO		20	/* in percent */
+-static u32 *emit_render_compute_yield_klv(u32 *emit)
+-{
+-	*emit++ = PREP_GUC_KLV_TAG(SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD);
+-	*emit++ = RC_YIELD_DURATION;
+-	*emit++ = RC_YIELD_RATIO;
+-
+-	return emit;
+-}
+-
+-#define SCHEDULING_POLICY_MAX_DWORDS 16
+-static int guc_init_global_schedule_policy(struct xe_guc *guc)
+-{
+-	u32 data[SCHEDULING_POLICY_MAX_DWORDS];
+-	u32 *emit = data;
+-	u32 count = 0;
+-	int ret;
+-
+-	if (GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 1, 0))
+-		return 0;
+-
+-	*emit++ = XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV;
+-
+-	if (CCS_MASK(guc_to_gt(guc)))
+-		emit = emit_render_compute_yield_klv(emit);
+-
+-	count = emit - data;
+-	if (count > 1) {
+-		xe_assert(guc_to_xe(guc), count <= SCHEDULING_POLICY_MAX_DWORDS);
+-
+-		ret = xe_guc_ct_send_block(&guc->ct, data, count);
+-		if (ret < 0) {
+-			xe_gt_err(guc_to_gt(guc),
+-				  "failed to enable GuC sheduling policies: %pe\n",
+-				  ERR_PTR(ret));
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-int xe_guc_submit_enable(struct xe_guc *guc)
+-{
+-	int ret;
+-
+-	ret = guc_init_global_schedule_policy(guc);
+-	if (ret)
+-		return ret;
+-
+-	guc->submission_state.enabled = true;
+-
+-	return 0;
+-}
+-
+-void xe_guc_submit_disable(struct xe_guc *guc)
+-{
+-	guc->submission_state.enabled = false;
+-}
+-
+ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa_count)
+ {
+ 	int i;
+@@ -2154,16 +2088,12 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	struct xe_gt *gt = guc_to_gt(guc);
+ 	struct xe_exec_queue *q;
+ 	u32 guc_id;
+-	u32 type = XE_GUC_CAT_ERR_TYPE_INVALID;
+ 
+-	if (unlikely(!len || len > 2))
++	if (unlikely(len < 1))
+ 		return -EPROTO;
+ 
+ 	guc_id = msg[0];
+ 
+-	if (len == 2)
+-		type = msg[1];
+-
+ 	if (guc_id == GUC_ID_UNKNOWN) {
+ 		/*
+ 		 * GuC uses GUC_ID_UNKNOWN if it can not map the CAT fault to any PF/VF
+@@ -2177,19 +2107,8 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	if (unlikely(!q))
+ 		return -EPROTO;
+ 
+-	/*
+-	 * The type is HW-defined and changes based on platform, so we don't
+-	 * decode it in the kernel and only check if it is valid.
+-	 * See bspec 54047 and 72187 for details.
+-	 */
+-	if (type != XE_GUC_CAT_ERR_TYPE_INVALID)
+-		xe_gt_dbg(gt,
+-			  "Engine memory CAT error [%u]: class=%s, logical_mask: 0x%x, guc_id=%d",
+-			  type, xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+-	else
+-		xe_gt_dbg(gt,
+-			  "Engine memory CAT error: class=%s, logical_mask: 0x%x, guc_id=%d",
+-			  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
++	xe_gt_dbg(gt, "Engine memory cat error: engine_class=%s, logical_mask: 0x%x, guc_id=%d",
++		  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+ 
+ 	trace_xe_exec_queue_memory_cat_error(q);
+ 
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
+index 0d126b807c1041..9b71a986c6ca69 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.h
++++ b/drivers/gpu/drm/xe/xe_guc_submit.h
+@@ -13,8 +13,6 @@ struct xe_exec_queue;
+ struct xe_guc;
+ 
+ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids);
+-int xe_guc_submit_enable(struct xe_guc *guc);
+-void xe_guc_submit_disable(struct xe_guc *guc);
+ 
+ int xe_guc_submit_reset_prepare(struct xe_guc *guc);
+ void xe_guc_submit_reset_wait(struct xe_guc *guc);
+diff --git a/drivers/gpu/drm/xe/xe_uc.c b/drivers/gpu/drm/xe/xe_uc.c
+index 5c45b0f072a4c2..3a8751a8b92dde 100644
+--- a/drivers/gpu/drm/xe/xe_uc.c
++++ b/drivers/gpu/drm/xe/xe_uc.c
+@@ -165,10 +165,6 @@ static int vf_uc_init_hw(struct xe_uc *uc)
+ 
+ 	uc->guc.submission_state.enabled = true;
+ 
+-	err = xe_guc_opt_in_features_enable(&uc->guc);
+-	if (err)
+-		return err;
+-
+ 	err = xe_gt_record_default_lrcs(uc_to_gt(uc));
+ 	if (err)
+ 		return err;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index 3438d392920fad..8dae9a77668536 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -39,8 +39,12 @@ int amd_sfh_get_report(struct hid_device *hid, int report_id, int report_type)
+ 	struct amdtp_hid_data *hid_data = hid->driver_data;
+ 	struct amdtp_cl_data *cli_data = hid_data->cli_data;
+ 	struct request_list *req_list = &cli_data->req_list;
++	struct amd_input_data *in_data = cli_data->in_data;
++	struct amd_mp2_dev *mp2;
+ 	int i;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	for (i = 0; i < cli_data->num_hid_devices; i++) {
+ 		if (cli_data->hid_sensor_hubs[i] == hid) {
+ 			struct request_list *new = kzalloc(sizeof(*new), GFP_KERNEL);
+@@ -75,6 +79,8 @@ void amd_sfh_work(struct work_struct *work)
+ 	u8 report_id, node_type;
+ 	u8 report_size = 0;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	req_node = list_last_entry(&req_list->list, struct request_list, list);
+ 	list_del(&req_node->list);
+ 	current_index = req_node->current_index;
+@@ -83,7 +89,6 @@ void amd_sfh_work(struct work_struct *work)
+ 	node_type = req_node->report_type;
+ 	kfree(req_node);
+ 
+-	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
+ 	mp2_ops = mp2->mp2_ops;
+ 	if (node_type == HID_FEATURE_REPORT) {
+ 		report_size = mp2_ops->get_feat_rep(sensor_index, report_id,
+@@ -107,6 +112,8 @@ void amd_sfh_work(struct work_struct *work)
+ 	cli_data->cur_hid_dev = current_index;
+ 	cli_data->sensor_requested_cnt[current_index] = 0;
+ 	amdtp_hid_wakeup(cli_data->hid_sensor_hubs[current_index]);
++	if (!list_empty(&req_list->list))
++		schedule_delayed_work(&cli_data->work, 0);
+ }
+ 
+ void amd_sfh_work_buffer(struct work_struct *work)
+@@ -117,9 +124,10 @@ void amd_sfh_work_buffer(struct work_struct *work)
+ 	u8 report_size;
+ 	int i;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	for (i = 0; i < cli_data->num_hid_devices; i++) {
+ 		if (cli_data->sensor_sts[i] == SENSOR_ENABLED) {
+-			mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
+ 			report_size = mp2->mp2_ops->get_in_rep(i, cli_data->sensor_idx[i],
+ 							       cli_data->report_id[i], in_data);
+ 			hid_input_report(cli_data->hid_sensor_hubs[i], HID_INPUT_REPORT,
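
[Note: the amd_sfh hunks above serialize the client paths with the kernel's
scope-based lock guards: guard(mutex)(&mp2->lock) acquires the mutex and
releases it automatically when the enclosing scope is left, so no unlock is
needed on early returns. A minimal sketch of the idiom, assuming only
<linux/cleanup.h> and <linux/mutex.h>; the function and counter are
hypothetical:

	#include <linux/cleanup.h>
	#include <linux/errno.h>
	#include <linux/mutex.h>

	static DEFINE_MUTEX(demo_lock);
	static int demo_count;

	static int demo_increment(void)
	{
		guard(mutex)(&demo_lock);	/* held from here to any return */

		if (demo_count >= 100)
			return -ENOSPC;		/* mutex released automatically */
		demo_count++;
		return 0;			/* released here as well */
	}
]
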
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_common.h b/drivers/hid/amd-sfh-hid/amd_sfh_common.h
+index f44a3bb2fbd4fe..78f830c133e5cd 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_common.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_common.h
+@@ -10,6 +10,7 @@
+ #ifndef AMD_SFH_COMMON_H
+ #define AMD_SFH_COMMON_H
+ 
++#include <linux/mutex.h>
+ #include <linux/pci.h>
+ #include "amd_sfh_hid.h"
+ 
+@@ -59,6 +60,8 @@ struct amd_mp2_dev {
+ 	u32 mp2_acs;
+ 	struct sfh_dev_status dev_en;
+ 	struct work_struct work;
++	/* protects shared mp2 data */
++	struct mutex lock;
+ 	u8 init_done;
+ 	u8 rver;
+ };
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 1c1fd63330c939..9a669c18a132fb 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -462,6 +462,10 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (!privdata->cl_data)
+ 		return -ENOMEM;
+ 
++	rc = devm_mutex_init(&pdev->dev, &privdata->lock);
++	if (rc)
++		return rc;
++
+ 	privdata->sfh1_1_ops = (const struct amd_sfh1_1_ops *)id->driver_data;
+ 	if (privdata->sfh1_1_ops) {
+ 		if (boot_cpu_data.x86 >= 0x1A)
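
[Note: devm_mutex_init(), used in the probe hunk above, registers the mutex
with the device's resource list so its (debug) teardown happens automatically
on driver detach and on every probe error path. A hedged sketch of the
pattern; struct demo_priv and demo_probe() are hypothetical stand-ins:

	#include <linux/mutex.h>
	#include <linux/platform_device.h>
	#include <linux/slab.h>

	struct demo_priv {
		struct mutex lock;
	};

	static int demo_probe(struct platform_device *pdev)
	{
		struct demo_priv *priv;
		int ret;

		priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
		if (!priv)
			return -ENOMEM;

		/* No mutex_destroy() needed in remove or error paths. */
		ret = devm_mutex_init(&pdev->dev, &priv->lock);
		if (ret)
			return ret;

		platform_set_drvdata(pdev, priv);
		return 0;
	}
]
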
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index d27dcfb2b9e4e1..8db9d4e7c3b0b2 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -974,7 +974,10 @@ static int asus_input_mapping(struct hid_device *hdev,
+ 		case 0xc4: asus_map_key_clear(KEY_KBDILLUMUP);		break;
+ 		case 0xc5: asus_map_key_clear(KEY_KBDILLUMDOWN);		break;
+ 		case 0xc7: asus_map_key_clear(KEY_KBDILLUMTOGGLE);	break;
++		case 0x4e: asus_map_key_clear(KEY_FN_ESC);		break;
++		case 0x7e: asus_map_key_clear(KEY_EMOJI_PICKER);	break;
+ 
++		case 0x8b: asus_map_key_clear(KEY_PROG1);	break; /* ProArt Creator Hub key */
+ 		case 0x6b: asus_map_key_clear(KEY_F21);		break; /* ASUS touchpad toggle */
+ 		case 0x38: asus_map_key_clear(KEY_PROG1);	break; /* ROG key */
+ 		case 0xba: asus_map_key_clear(KEY_PROG2);	break; /* Fn+C ASUS Splendid */
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index 234fa82eab0795..b5f2b6356f512a 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -229,10 +229,12 @@ static int cp2112_gpio_set_unlocked(struct cp2112_device *dev,
+ 	ret = hid_hw_raw_request(hdev, CP2112_GPIO_SET, buf,
+ 				 CP2112_GPIO_SET_LENGTH, HID_FEATURE_REPORT,
+ 				 HID_REQ_SET_REPORT);
+-	if (ret < 0)
++	if (ret != CP2112_GPIO_SET_LENGTH) {
+ 		hid_err(hdev, "error setting GPIO values: %d\n", ret);
++		return ret < 0 ? ret : -EIO;
++	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int cp2112_gpio_set(struct gpio_chip *chip, unsigned int offset,
+@@ -309,9 +311,7 @@ static int cp2112_gpio_direction_output(struct gpio_chip *chip,
+ 	 * Set gpio value when output direction is already set,
+ 	 * as specified in AN495, Rev. 0.2, cpt. 4.4
+ 	 */
+-	cp2112_gpio_set_unlocked(dev, offset, value);
+-
+-	return 0;
++	return cp2112_gpio_set_unlocked(dev, offset, value);
+ }
+ 
+ static int cp2112_hid_get(struct hid_device *hdev, unsigned char report_number,
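
[Note: hid_hw_raw_request() returns the number of bytes transferred on
success, so checking only for ret < 0 silently accepts short transfers; the
hunk above compares against the expected length and maps a short write to
-EIO. The same normalization pattern in isolation, assuming a transfer
callback with those read()-style semantics:

	/* xfer() is assumed to return bytes transferred or a negative errno. */
	static int send_exact(int (*xfer)(void *buf, int len), void *buf, int len)
	{
		int ret = xfer(buf, len);

		if (ret != len)
			return ret < 0 ? ret : -EIO;	/* short transfer -> error */
		return 0;				/* full transfer */
	}
]
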
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 4c22bd2ba17080..edb8da49d91670 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -73,6 +73,7 @@ MODULE_LICENSE("GPL");
+ #define MT_QUIRK_FORCE_MULTI_INPUT	BIT(20)
+ #define MT_QUIRK_DISABLE_WAKEUP		BIT(21)
+ #define MT_QUIRK_ORIENTATION_INVERT	BIT(22)
++#define MT_QUIRK_APPLE_TOUCHBAR		BIT(23)
+ 
+ #define MT_INPUTMODE_TOUCHSCREEN	0x02
+ #define MT_INPUTMODE_TOUCHPAD		0x03
+@@ -625,6 +626,7 @@ static struct mt_application *mt_find_application(struct mt_device *td,
+ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 						      struct hid_report *report)
+ {
++	struct mt_class *cls = &td->mtclass;
+ 	struct mt_report_data *rdata;
+ 	struct hid_field *field;
+ 	int r, n;
+@@ -649,7 +651,11 @@ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 
+ 		if (field->logical == HID_DG_FINGER || td->hdev->group != HID_GROUP_MULTITOUCH_WIN_8) {
+ 			for (n = 0; n < field->report_count; n++) {
+-				if (field->usage[n].hid == HID_DG_CONTACTID) {
++				unsigned int hid = field->usage[n].hid;
++
++				if (hid == HID_DG_CONTACTID ||
++				   (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR &&
++				   hid == HID_DG_TRANSDUCER_INDEX)) {
+ 					rdata->is_mt_collection = true;
+ 					break;
+ 				}
+@@ -821,12 +827,31 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 
+ 			MT_STORE_FIELD(confidence_state);
+ 			return 1;
++		case HID_DG_TOUCH:
++			/*
++			 * Legacy devices use TIPSWITCH and not TOUCH.
++			 * One special case here is of the Apple Touch Bars.
++			 * In these devices, the tip state is contained in
++			 * fields with the HID_DG_TOUCH usage.
++			 * Let's just ignore this field for other devices.
++			 */
++			if (!(cls->quirks & MT_QUIRK_APPLE_TOUCHBAR))
++				return -1;
++			fallthrough;
+ 		case HID_DG_TIPSWITCH:
+ 			if (field->application != HID_GD_SYSTEM_MULTIAXIS)
+ 				input_set_capability(hi->input,
+ 						     EV_KEY, BTN_TOUCH);
+ 			MT_STORE_FIELD(tip_state);
+ 			return 1;
++		case HID_DG_TRANSDUCER_INDEX:
++			/*
++			 * Contact ID in case of Apple Touch Bars is contained
++			 * in fields with HID_DG_TRANSDUCER_INDEX usage.
++			 */
++			if (!(cls->quirks & MT_QUIRK_APPLE_TOUCHBAR))
++				return 0;
++			fallthrough;
+ 		case HID_DG_CONTACTID:
+ 			MT_STORE_FIELD(contactid);
+ 			app->touches_by_report++;
+@@ -883,10 +908,6 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		case HID_DG_CONTACTMAX:
+ 			/* contact max are global to the report */
+ 			return -1;
+-		case HID_DG_TOUCH:
+-			/* Legacy devices use TIPSWITCH and not TOUCH.
+-			 * Let's just ignore this field. */
+-			return -1;
+ 		}
+ 		/* let hid-input decide for the others */
+ 		return 0;
+@@ -1314,6 +1335,13 @@ static int mt_touch_input_configured(struct hid_device *hdev,
+ 	struct input_dev *input = hi->input;
+ 	int ret;
+ 
++	/*
++	 * HID_DG_CONTACTMAX field is not present on Apple Touch Bars,
++	 * but the maximum contact count is greater than the default.
++	 */
++	if (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR && cls->maxcontacts)
++		td->maxcontacts = cls->maxcontacts;
++
+ 	if (!td->maxcontacts)
+ 		td->maxcontacts = MT_DEFAULT_MAXCONTACT;
+ 
+@@ -1321,6 +1349,13 @@ static int mt_touch_input_configured(struct hid_device *hdev,
+ 	if (td->serial_maybe)
+ 		mt_post_parse_default_settings(td, app);
+ 
++	/*
++	 * The application for Apple Touch Bars is HID_DG_TOUCHPAD,
++	 * but these devices are direct.
++	 */
++	if (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR)
++		app->mt_flags |= INPUT_MT_DIRECT;
++
+ 	if (cls->is_indirect)
+ 		app->mt_flags |= INPUT_MT_POINTER;
+ 
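
[Note: the Touch Bar mapping above reuses the existing switch arms by placing
a quirk gate before each and falling through: HID_DG_TOUCH is treated like
HID_DG_TIPSWITCH, and HID_DG_TRANSDUCER_INDEX like HID_DG_CONTACTID, only when
MT_QUIRK_APPLE_TOUCHBAR is set. A reduced sketch of that control flow; the
store helpers are hypothetical:

	switch (usage) {
	case HID_DG_TOUCH:
		if (!(quirks & MT_QUIRK_APPLE_TOUCHBAR))
			return -1;		/* ignore the field elsewhere */
		fallthrough;
	case HID_DG_TIPSWITCH:
		store_tip_state();		/* hypothetical */
		return 1;
	case HID_DG_TRANSDUCER_INDEX:
		if (!(quirks & MT_QUIRK_APPLE_TOUCHBAR))
			return 0;		/* let hid-input decide */
		fallthrough;
	case HID_DG_CONTACTID:
		store_contact_id();		/* hypothetical */
		return 1;
	}
]
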
+diff --git a/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c b/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
+index d4f89f44c3b4d9..715480ef30cefa 100644
+--- a/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
++++ b/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
+@@ -961,6 +961,8 @@ static const struct pci_device_id quickspi_pci_tbl[] = {
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_H_DEVICE_ID_SPI_PORT2, &ptl), },
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT1, &ptl), },
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT2, &ptl), },
++	{PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT1, &ptl), },
++	{PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT2, &ptl), },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(pci, quickspi_pci_tbl);
+diff --git a/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h b/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
+index 6fdf674b21c5a6..f3532d866749ca 100644
+--- a/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
++++ b/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
+@@ -19,6 +19,8 @@
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_H_DEVICE_ID_SPI_PORT2	0xE34B
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT1	0xE449
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT2	0xE44B
++#define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT1 	0x4D49
++#define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT2 	0x4D4B
+ 
+ /* HIDSPI special ACPI parameters DSM methods */
+ #define ACPI_QUICKSPI_REVISION_NUM			2
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 879719e91df2a5..c1262df02cdb2e 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -101,7 +101,7 @@ static int bt1_i2c_request_regs(struct dw_i2c_dev *dev)
+ }
+ #endif
+ 
+-static int txgbe_i2c_request_regs(struct dw_i2c_dev *dev)
++static int dw_i2c_get_parent_regmap(struct dw_i2c_dev *dev)
+ {
+ 	dev->map = dev_get_regmap(dev->dev->parent, NULL);
+ 	if (!dev->map)
+@@ -123,12 +123,15 @@ static int dw_i2c_plat_request_regs(struct dw_i2c_dev *dev)
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+ 	int ret;
+ 
++	if (device_is_compatible(dev->dev, "intel,xe-i2c"))
++		return dw_i2c_get_parent_regmap(dev);
++
+ 	switch (dev->flags & MODEL_MASK) {
+ 	case MODEL_BAIKAL_BT1:
+ 		ret = bt1_i2c_request_regs(dev);
+ 		break;
+ 	case MODEL_WANGXUN_SP:
+-		ret = txgbe_i2c_request_regs(dev);
++		ret = dw_i2c_get_parent_regmap(dev);
+ 		break;
+ 	default:
+ 		dev->base = devm_platform_ioremap_resource(pdev, 0);
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index c369fee3356216..00727472c87381 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -233,6 +233,7 @@ static u16 get_legacy_obj_type(u16 opcode)
+ {
+ 	switch (opcode) {
+ 	case MLX5_CMD_OP_CREATE_RQ:
++	case MLX5_CMD_OP_CREATE_RMP:
+ 		return MLX5_EVENT_QUEUE_TYPE_RQ;
+ 	case MLX5_CMD_OP_CREATE_QP:
+ 		return MLX5_EVENT_QUEUE_TYPE_QP;
+diff --git a/drivers/iommu/iommufd/eventq.c b/drivers/iommu/iommufd/eventq.c
+index e373b9eec7f5f5..2afef30ce41f16 100644
+--- a/drivers/iommu/iommufd/eventq.c
++++ b/drivers/iommu/iommufd/eventq.c
+@@ -393,12 +393,12 @@ static int iommufd_eventq_init(struct iommufd_eventq *eventq, char *name,
+ 			       const struct file_operations *fops)
+ {
+ 	struct file *filep;
+-	int fdno;
+ 
+ 	spin_lock_init(&eventq->lock);
+ 	INIT_LIST_HEAD(&eventq->deliver);
+ 	init_waitqueue_head(&eventq->wait_queue);
+ 
++	/* The filep is fput() by the core code during failure */
+ 	filep = anon_inode_getfile(name, fops, eventq, O_RDWR);
+ 	if (IS_ERR(filep))
+ 		return PTR_ERR(filep);
+@@ -408,10 +408,7 @@ static int iommufd_eventq_init(struct iommufd_eventq *eventq, char *name,
+ 	eventq->filep = filep;
+ 	refcount_inc(&eventq->obj.users);
+ 
+-	fdno = get_unused_fd_flags(O_CLOEXEC);
+-	if (fdno < 0)
+-		fput(filep);
+-	return fdno;
++	return get_unused_fd_flags(O_CLOEXEC);
+ }
+ 
+ static const struct file_operations iommufd_fault_fops =
+@@ -455,7 +452,6 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ 	return 0;
+ out_put_fdno:
+ 	put_unused_fd(fdno);
+-	fput(fault->common.filep);
+ out_abort:
+ 	iommufd_object_abort_and_destroy(ucmd->ictx, &fault->common.obj);
+ 
+@@ -542,7 +538,6 @@ int iommufd_veventq_alloc(struct iommufd_ucmd *ucmd)
+ 
+ out_put_fdno:
+ 	put_unused_fd(fdno);
+-	fput(veventq->common.filep);
+ out_abort:
+ 	iommufd_object_abort_and_destroy(ucmd->ictx, &veventq->common.obj);
+ out_unlock_veventqs:
+diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
+index 3df468f64e7d9e..62a3469bbd37e7 100644
+--- a/drivers/iommu/iommufd/main.c
++++ b/drivers/iommu/iommufd/main.c
+@@ -23,6 +23,7 @@
+ #include "iommufd_test.h"
+ 
+ struct iommufd_object_ops {
++	size_t file_offset;
+ 	void (*destroy)(struct iommufd_object *obj);
+ 	void (*abort)(struct iommufd_object *obj);
+ };
+@@ -71,10 +72,30 @@ void iommufd_object_abort(struct iommufd_ctx *ictx, struct iommufd_object *obj)
+ void iommufd_object_abort_and_destroy(struct iommufd_ctx *ictx,
+ 				      struct iommufd_object *obj)
+ {
+-	if (iommufd_object_ops[obj->type].abort)
+-		iommufd_object_ops[obj->type].abort(obj);
++	const struct iommufd_object_ops *ops = &iommufd_object_ops[obj->type];
++
++	if (ops->file_offset) {
++		struct file **filep = ((void *)obj) + ops->file_offset;
++
++		/*
++		 * A file should hold a users refcount while the file is open
++		 * and put it back in its release. The file should hold a
++		 * pointer to obj in its private data. Normal fput() is
++		 * deferred to a workqueue and can get out of order with the
++		 * following kfree(obj). Using the sync version ensures the
++		 * release happens immediately. During abort we require the
++		 * file refcount to be one at this point, meaning the object
++		 * alloc function cannot do anything that would allow another
++		 * thread to take a refcount before success is guaranteed.
++		 */
++		if (*filep)
++			__fput_sync(*filep);
++	}
++
++	if (ops->abort)
++		ops->abort(obj);
+ 	else
+-		iommufd_object_ops[obj->type].destroy(obj);
++		ops->destroy(obj);
+ 	iommufd_object_abort(ictx, obj);
+ }
+ 
+@@ -493,6 +514,12 @@ void iommufd_ctx_put(struct iommufd_ctx *ictx)
+ }
+ EXPORT_SYMBOL_NS_GPL(iommufd_ctx_put, "IOMMUFD");
+ 
++#define IOMMUFD_FILE_OFFSET(_struct, _filep, _obj)                           \
++	.file_offset = (offsetof(_struct, _filep) +                          \
++			BUILD_BUG_ON_ZERO(!__same_type(                      \
++				struct file *, ((_struct *)NULL)->_filep)) + \
++			BUILD_BUG_ON_ZERO(offsetof(_struct, _obj)))
++
+ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	[IOMMUFD_OBJ_ACCESS] = {
+ 		.destroy = iommufd_access_destroy_object,
+@@ -502,6 +529,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	},
+ 	[IOMMUFD_OBJ_FAULT] = {
+ 		.destroy = iommufd_fault_destroy,
++		IOMMUFD_FILE_OFFSET(struct iommufd_fault, common.filep, common.obj),
+ 	},
+ 	[IOMMUFD_OBJ_HWPT_PAGING] = {
+ 		.destroy = iommufd_hwpt_paging_destroy,
+@@ -520,6 +548,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	[IOMMUFD_OBJ_VEVENTQ] = {
+ 		.destroy = iommufd_veventq_destroy,
+ 		.abort = iommufd_veventq_abort,
++		IOMMUFD_FILE_OFFSET(struct iommufd_veventq, common.filep, common.obj),
+ 	},
+ 	[IOMMUFD_OBJ_VIOMMU] = {
+ 		.destroy = iommufd_viommu_destroy,
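
[Note: IOMMUFD_FILE_OFFSET() lets the generic abort path find each object
type's struct file * without knowing the concrete layout: the table stores
offsetof() of the filep member, and the two BUILD_BUG_ON_ZERO() terms verify
at compile time that the member really is a struct file * and that the
embedded object is the first member (offset 0), since the runtime lookup adds
the offset to the obj pointer itself. A standalone sketch of the idiom with
hypothetical demo types:

	#include <linux/build_bug.h>
	#include <linux/fs.h>
	#include <linux/stddef.h>

	struct demo_obj { int type; };

	struct demo_fault {
		struct demo_obj obj;	/* must sit at offset 0 */
		struct file *filep;
	};

	#define DEMO_FILE_OFFSET(_struct, _filep, _obj)                       \
		(offsetof(_struct, _filep) +                                  \
		 BUILD_BUG_ON_ZERO(!__same_type(struct file *,                \
						((_struct *)NULL)->_filep)) + \
		 BUILD_BUG_ON_ZERO(offsetof(_struct, _obj)))

	/* Later: struct file **filep = (void *)obj + DEMO_FILE_OFFSET(...); */
]
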
+diff --git a/drivers/mmc/host/sdhci-cadence.c b/drivers/mmc/host/sdhci-cadence.c
+index a94b297fcf2a34..60ca09780da32d 100644
+--- a/drivers/mmc/host/sdhci-cadence.c
++++ b/drivers/mmc/host/sdhci-cadence.c
+@@ -433,6 +433,13 @@ static const struct sdhci_cdns_drv_data sdhci_elba_drv_data = {
+ 	},
+ };
+ 
++static const struct sdhci_cdns_drv_data sdhci_eyeq_drv_data = {
++	.pltfm_data = {
++		.ops = &sdhci_cdns_ops,
++		.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	},
++};
++
+ static const struct sdhci_cdns_drv_data sdhci_cdns_drv_data = {
+ 	.pltfm_data = {
+ 		.ops = &sdhci_cdns_ops,
+@@ -595,6 +602,10 @@ static const struct of_device_id sdhci_cdns_match[] = {
+ 		.compatible = "amd,pensando-elba-sd4hc",
+ 		.data = &sdhci_elba_drv_data,
+ 	},
++	{
++		.compatible = "mobileye,eyeq-sd4hc",
++		.data = &sdhci_eyeq_drv_data,
++	},
+ 	{ .compatible = "cdns,sd4hc" },
+ 	{ /* sentinel */ }
+ };
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 2b7dd359f27b7d..8569178b66df7d 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -861,7 +861,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ {
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct rcar_can_priv *priv = netdev_priv(ndev);
+-	u16 ctlr;
+ 	int err;
+ 
+ 	if (!netif_running(ndev))
+@@ -873,12 +872,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 		return err;
+ 	}
+ 
+-	ctlr = readw(&priv->regs->ctlr);
+-	ctlr &= ~RCAR_CAN_CTLR_SLPM;
+-	writew(ctlr, &priv->regs->ctlr);
+-	ctlr &= ~RCAR_CAN_CTLR_CANM;
+-	writew(ctlr, &priv->regs->ctlr);
+-	priv->can.state = CAN_STATE_ERROR_ACTIVE;
++	rcar_can_start(ndev);
+ 
+ 	netif_device_attach(ndev);
+ 	netif_start_queue(ndev);
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 09ae218315d73d..6441ff3b419871 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -812,6 +812,7 @@ static const struct net_device_ops hi3110_netdev_ops = {
+ 	.ndo_open = hi3110_open,
+ 	.ndo_stop = hi3110_stop,
+ 	.ndo_start_xmit = hi3110_hard_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops hi3110_ethtool_ops = {
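
[Note: this series wires .ndo_change_mtu = can_change_mtu into several CAN
drivers (hi311x here, sun4i_can, es58x and mcba_usb below). Without it the
generic netdev MTU path would accept values that are invalid on a CAN
interface; can_change_mtu() limits them to CAN_MTU (or CANFD_MTU where FD is
supported). The wiring is one line in the netdev ops, e.g.:

	static const struct net_device_ops demo_can_netdev_ops = {
		.ndo_open	= demo_open,		/* hypothetical callbacks */
		.ndo_stop	= demo_stop,
		.ndo_start_xmit	= demo_start_xmit,
		.ndo_change_mtu	= can_change_mtu,	/* validates CAN(-FD) MTU */
	};
]
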
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 6fcb301ef611d0..53bfd873de9bde 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -768,6 +768,7 @@ static const struct net_device_ops sun4ican_netdev_ops = {
+ 	.ndo_open = sun4ican_open,
+ 	.ndo_stop = sun4ican_close,
+ 	.ndo_start_xmit = sun4ican_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops sun4ican_ethtool_ops = {
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index db1acf6d504cf3..adc91873c083f9 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -7,7 +7,7 @@
+  *
+  * Copyright (c) 2019 Robert Bosch Engineering and Business Solutions. All rights reserved.
+  * Copyright (c) 2020 ETAS K.K.. All rights reserved.
+- * Copyright (c) 2020-2022 Vincent Mailhol <mailhol.vincent@wanadoo.fr>
++ * Copyright (c) 2020-2025 Vincent Mailhol <mailhol@kernel.org>
+  */
+ 
+ #include <linux/unaligned.h>
+@@ -1977,6 +1977,7 @@ static const struct net_device_ops es58x_netdev_ops = {
+ 	.ndo_stop = es58x_stop,
+ 	.ndo_start_xmit = es58x_start_xmit,
+ 	.ndo_eth_ioctl = can_eth_ioctl_hwts,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops es58x_ethtool_ops = {
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 41c0a1c399bf36..1f9b915094e64d 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -761,6 +761,7 @@ static const struct net_device_ops mcba_netdev_ops = {
+ 	.ndo_open = mcba_usb_open,
+ 	.ndo_stop = mcba_usb_close,
+ 	.ndo_start_xmit = mcba_usb_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops mcba_ethtool_ops = {
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 117637b9b995b9..dd5caa1c302b99 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -111,7 +111,7 @@ void peak_usb_update_ts_now(struct peak_time_ref *time_ref, u32 ts_now)
+ 		u32 delta_ts = time_ref->ts_dev_2 - time_ref->ts_dev_1;
+ 
+ 		if (time_ref->ts_dev_2 < time_ref->ts_dev_1)
+-			delta_ts &= (1 << time_ref->adapter->ts_used_bits) - 1;
++			delta_ts &= (1ULL << time_ref->adapter->ts_used_bits) - 1;
+ 
+ 		time_ref->ts_total += delta_ts;
+ 	}
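
[Note: the peak_usb change fixes an integer-promotion bug: with ts_used_bits
equal to 32, 1 << 32 shifts a 32-bit int by its full width, which is
undefined behaviour and cannot yield the intended wrap mask. Promoting to
1ULL keeps the arithmetic in 64 bits before the result is masked into the
u32. Sketched under that assumption:

	#include <linux/types.h>

	/* Mask covering the low 'bits' bits; valid for bits up to 63. */
	static inline u64 ts_wrap_mask(unsigned int bits)
	{
		return (1ULL << bits) - 1;	/* 1 << 32 on int would be UB */
	}
	/* e.g. delta_ts &= ts_wrap_mask(32) keeps the full 32-bit delta. */
]
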
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 6eb3140d404449..84dc6e517acf94 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -685,18 +685,27 @@ static int gswip_add_single_port_br(struct gswip_priv *priv, int port, bool add)
+ 	return 0;
+ }
+ 
+-static int gswip_port_enable(struct dsa_switch *ds, int port,
+-			     struct phy_device *phydev)
++static int gswip_port_setup(struct dsa_switch *ds, int port)
+ {
+ 	struct gswip_priv *priv = ds->priv;
+ 	int err;
+ 
+ 	if (!dsa_is_cpu_port(ds, port)) {
+-		u32 mdio_phy = 0;
+-
+ 		err = gswip_add_single_port_br(priv, port, true);
+ 		if (err)
+ 			return err;
++	}
++
++	return 0;
++}
++
++static int gswip_port_enable(struct dsa_switch *ds, int port,
++			     struct phy_device *phydev)
++{
++	struct gswip_priv *priv = ds->priv;
++
++	if (!dsa_is_cpu_port(ds, port)) {
++		u32 mdio_phy = 0;
+ 
+ 		if (phydev)
+ 			mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
+@@ -1359,8 +1368,9 @@ static int gswip_port_fdb(struct dsa_switch *ds, int port,
+ 	int i;
+ 	int err;
+ 
++	/* Operation not supported on the CPU port, don't throw errors */
+ 	if (!bridge)
+-		return -EINVAL;
++		return 0;
+ 
+ 	for (i = max_ports; i < ARRAY_SIZE(priv->vlans); i++) {
+ 		if (priv->vlans[i].bridge == bridge) {
+@@ -1829,6 +1839,7 @@ static const struct phylink_mac_ops gswip_phylink_mac_ops = {
+ static const struct dsa_switch_ops gswip_xrx200_switch_ops = {
+ 	.get_tag_protocol	= gswip_get_tag_protocol,
+ 	.setup			= gswip_setup,
++	.port_setup		= gswip_port_setup,
+ 	.port_enable		= gswip_port_enable,
+ 	.port_disable		= gswip_port_disable,
+ 	.port_bridge_join	= gswip_port_bridge_join,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index d2ca90407cce76..8057350236c5ef 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -244,7 +244,7 @@ bnxt_tc_parse_pedit(struct bnxt *bp, struct bnxt_tc_actions *actions,
+ 			   offset < offset_of_ip6_daddr + 16) {
+ 			actions->nat.src_xlate = false;
+ 			idx = (offset - offset_of_ip6_daddr) / 4;
+-			actions->nat.l3.ipv6.saddr.s6_addr32[idx] = htonl(val);
++			actions->nat.l3.ipv6.daddr.s6_addr32[idx] = htonl(val);
+ 		} else {
+ 			netdev_err(bp->dev,
+ 				   "%s: IPv6_hdr: Invalid pedit field\n",
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 5f15f42070c539..e8b37dfd5cc1d6 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -131,7 +131,7 @@ static const struct fec_devinfo fec_mvf600_info = {
+ 		  FEC_QUIRK_HAS_MDIO_C45,
+ };
+ 
+-static const struct fec_devinfo fec_imx6x_info = {
++static const struct fec_devinfo fec_imx6sx_info = {
+ 	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+ 		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+ 		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+@@ -196,7 +196,7 @@ static const struct of_device_id fec_dt_ids[] = {
+ 	{ .compatible = "fsl,imx28-fec", .data = &fec_imx28_info, },
+ 	{ .compatible = "fsl,imx6q-fec", .data = &fec_imx6q_info, },
+ 	{ .compatible = "fsl,mvf600-fec", .data = &fec_mvf600_info, },
+-	{ .compatible = "fsl,imx6sx-fec", .data = &fec_imx6x_info, },
++	{ .compatible = "fsl,imx6sx-fec", .data = &fec_imx6sx_info, },
+ 	{ .compatible = "fsl,imx6ul-fec", .data = &fec_imx6ul_info, },
+ 	{ .compatible = "fsl,imx8mq-fec", .data = &fec_imx8mq_info, },
+ 	{ .compatible = "fsl,imx8qm-fec", .data = &fec_imx8qm_info, },
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 7c600d6e66ba7c..fa9bb6f2786847 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -1277,7 +1277,8 @@ struct i40e_mac_filter *i40e_add_mac_filter(struct i40e_vsi *vsi,
+ 					    const u8 *macaddr);
+ int i40e_del_mac_filter(struct i40e_vsi *vsi, const u8 *macaddr);
+ bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi);
+-int i40e_count_filters(struct i40e_vsi *vsi);
++int i40e_count_all_filters(struct i40e_vsi *vsi);
++int i40e_count_active_filters(struct i40e_vsi *vsi);
+ struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, const u8 *macaddr);
+ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi);
+ static inline bool i40e_is_sw_dcb(struct i40e_pf *pf)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 26dcdceae741e4..ec1e3fffb59269 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1241,12 +1241,30 @@ void i40e_update_stats(struct i40e_vsi *vsi)
+ }
+ 
+ /**
+- * i40e_count_filters - counts VSI mac filters
++ * i40e_count_all_filters - counts VSI MAC filters
+  * @vsi: the VSI to be searched
+  *
+- * Returns count of mac filters
+- **/
+-int i40e_count_filters(struct i40e_vsi *vsi)
++ * Return: count of MAC filters in any state.
++ */
++int i40e_count_all_filters(struct i40e_vsi *vsi)
++{
++	struct i40e_mac_filter *f;
++	struct hlist_node *h;
++	int bkt, cnt = 0;
++
++	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist)
++		cnt++;
++
++	return cnt;
++}
++
++/**
++ * i40e_count_active_filters - counts VSI MAC filters
++ * @vsi: the VSI to be searched
++ *
++ * Return: count of active MAC filters.
++ */
++int i40e_count_active_filters(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_mac_filter *f;
+ 	struct hlist_node *h;
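
[Note: i40e_count_all_filters() walks the VSI's MAC-filter hashtable and
counts entries in any state; the active-only variant (its body continues past
this hunk) is what the permission check further below compares against the
per-VF limit. The counting idiom against a generic kernel hashtable, as a
hedged standalone sketch:

	#include <linux/hashtable.h>

	struct demo_filter {			/* hypothetical entry type */
		struct hlist_node hlist;
		bool to_be_removed;
	};

	static DEFINE_HASHTABLE(demo_tbl, 8);	/* 256 buckets */

	static int demo_count_all(void)
	{
		struct demo_filter *f;
		struct hlist_node *h;
		int bkt, cnt = 0;

		/* The _safe variant tolerates deletion during the walk. */
		hash_for_each_safe(demo_tbl, bkt, h, f, hlist)
			cnt++;
		return cnt;
	}
]
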
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 7ccfc1191ae56f..59881846e8e3fa 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -448,7 +448,7 @@ static void i40e_config_irq_link_list(struct i40e_vf *vf, u16 vsi_id,
+ 		    (qtype << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT) |
+ 		    (pf_queue_id << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) |
+ 		    BIT(I40E_QINT_RQCTL_CAUSE_ENA_SHIFT) |
+-		    (itr_idx << I40E_QINT_RQCTL_ITR_INDX_SHIFT);
++		    FIELD_PREP(I40E_QINT_RQCTL_ITR_INDX_MASK, itr_idx);
+ 		wr32(hw, reg_idx, reg);
+ 	}
+ 
+@@ -653,6 +653,13 @@ static int i40e_config_vsi_tx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 
+ 	/* only set the required fields */
+ 	tx_ctx.base = info->dma_ring_addr / 128;
++
++	/* ring_len has to be multiple of 8 */
++	if (!IS_ALIGNED(info->ring_len, 8) ||
++	    info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) {
++		ret = -EINVAL;
++		goto error_context;
++	}
+ 	tx_ctx.qlen = info->ring_len;
+ 	tx_ctx.rdylist = le16_to_cpu(vsi->info.qs_handle[0]);
+ 	tx_ctx.rdylist_act = 0;
+@@ -716,6 +723,13 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 
+ 	/* only set the required fields */
+ 	rx_ctx.base = info->dma_ring_addr / 128;
++
++	/* ring_len has to be multiple of 32 */
++	if (!IS_ALIGNED(info->ring_len, 32) ||
++	    info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) {
++		ret = -EINVAL;
++		goto error_param;
++	}
+ 	rx_ctx.qlen = info->ring_len;
+ 
+ 	if (info->splithdr_enabled) {
+@@ -1453,6 +1467,7 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	 * functions that may still be running at this point.
+ 	 */
+ 	clear_bit(I40E_VF_STATE_INIT, &vf->vf_states);
++	clear_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states);
+ 
+ 	/* In the case of a VFLR, the HW has already reset the VF and we
+ 	 * just need to clean up, so don't hit the VFRTRIG register.
+@@ -2119,7 +2134,10 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 	size_t len = 0;
+ 	int ret;
+ 
+-	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
++	i40e_sync_vf_state(vf, I40E_VF_STATE_INIT);
++
++	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states) ||
++	    test_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states)) {
+ 		aq_ret = -EINVAL;
+ 		goto err;
+ 	}
+@@ -2222,6 +2240,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 				vf->default_lan_addr.addr);
+ 	}
+ 	set_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
++	set_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states);
+ 
+ err:
+ 	/* send the response back to the VF */
+@@ -2384,7 +2403,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		}
+ 
+ 		if (vf->adq_enabled) {
+-			if (idx >= ARRAY_SIZE(vf->ch)) {
++			if (idx >= vf->num_tc) {
+ 				aq_ret = -ENODEV;
+ 				goto error_param;
+ 			}
+@@ -2405,7 +2424,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		 * to its appropriate VSIs based on TC mapping
+ 		 */
+ 		if (vf->adq_enabled) {
+-			if (idx >= ARRAY_SIZE(vf->ch)) {
++			if (idx >= vf->num_tc) {
+ 				aq_ret = -ENODEV;
+ 				goto error_param;
+ 			}
+@@ -2455,8 +2474,10 @@ static int i40e_validate_queue_map(struct i40e_vf *vf, u16 vsi_id,
+ 	u16 vsi_queue_id, queue_id;
+ 
+ 	for_each_set_bit(vsi_queue_id, &queuemap, I40E_MAX_VSI_QP) {
+-		if (vf->adq_enabled) {
+-			vsi_id = vf->ch[vsi_queue_id / I40E_MAX_VF_VSI].vsi_id;
++		u16 idx = vsi_queue_id / I40E_MAX_VF_VSI;
++
++		if (vf->adq_enabled && idx < vf->num_tc) {
++			vsi_id = vf->ch[idx].vsi_id;
+ 			queue_id = (vsi_queue_id % I40E_DEFAULT_QUEUES_PER_VF);
+ 		} else {
+ 			queue_id = vsi_queue_id;
+@@ -2844,24 +2865,6 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
+ 				      (u8 *)&stats, sizeof(stats));
+ }
+ 
+-/**
+- * i40e_can_vf_change_mac
+- * @vf: pointer to the VF info
+- *
+- * Return true if the VF is allowed to change its MAC filters, false otherwise
+- */
+-static bool i40e_can_vf_change_mac(struct i40e_vf *vf)
+-{
+-	/* If the VF MAC address has been set administratively (via the
+-	 * ndo_set_vf_mac command), then deny permission to the VF to
+-	 * add/delete unicast MAC addresses, unless the VF is trusted
+-	 */
+-	if (vf->pf_set_mac && !vf->trusted)
+-		return false;
+-
+-	return true;
+-}
+-
+ #define I40E_MAX_MACVLAN_PER_HW 3072
+ #define I40E_MAX_MACVLAN_PER_PF(num_ports) (I40E_MAX_MACVLAN_PER_HW /	\
+ 	(num_ports))
+@@ -2900,8 +2903,10 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	struct i40e_hw *hw = &pf->hw;
+-	int mac2add_cnt = 0;
+-	int i;
++	int i, mac_add_max, mac_add_cnt = 0;
++	bool vf_trusted;
++
++	vf_trusted = test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+ 
+ 	for (i = 0; i < al->num_elements; i++) {
+ 		struct i40e_mac_filter *f;
+@@ -2921,9 +2926,8 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		 * The VF may request to set the MAC address filter already
+ 		 * assigned to it so do not return an error in that case.
+ 		 */
+-		if (!i40e_can_vf_change_mac(vf) &&
+-		    !is_multicast_ether_addr(addr) &&
+-		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
++		if (!vf_trusted && !is_multicast_ether_addr(addr) &&
++		    vf->pf_set_mac && !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
+ 			return -EPERM;
+@@ -2932,29 +2936,33 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		/*count filters that really will be added*/
+ 		f = i40e_find_mac(vsi, addr);
+ 		if (!f)
+-			++mac2add_cnt;
++			++mac_add_cnt;
+ 	}
+ 
+ 	/* If this VF is not privileged, then we can't add more than a limited
+-	 * number of addresses. Check to make sure that the additions do not
+-	 * push us over the limit.
+-	 */
+-	if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+-		if ((i40e_count_filters(vsi) + mac2add_cnt) >
+-		    I40E_VC_MAX_MAC_ADDR_PER_VF) {
+-			dev_err(&pf->pdev->dev,
+-				"Cannot add more MAC addresses, VF is not trusted, switch the VF to trusted to add more functionality\n");
+-			return -EPERM;
+-		}
+-	/* If this VF is trusted, it can use more resources than untrusted.
++	 * number of addresses.
++	 *
++	 * If this VF is trusted, it can use more resources than untrusted.
+ 	 * However to ensure that every trusted VF has appropriate number of
+ 	 * resources, divide whole pool of resources per port and then across
+ 	 * all VFs.
+ 	 */
+-	} else {
+-		if ((i40e_count_filters(vsi) + mac2add_cnt) >
+-		    I40E_VC_MAX_MACVLAN_PER_TRUSTED_VF(pf->num_alloc_vfs,
+-						       hw->num_ports)) {
++	if (!vf_trusted)
++		mac_add_max = I40E_VC_MAX_MAC_ADDR_PER_VF;
++	else
++		mac_add_max = I40E_VC_MAX_MACVLAN_PER_TRUSTED_VF(pf->num_alloc_vfs, hw->num_ports);
++
++	/* A VF can replace all its filters in one step; in that case up to
++	 * mac_add_max filters are counted as active while another mac_add_max
++	 * are still in a to-be-removed state. Account for both.
++	 */
++	if ((i40e_count_active_filters(vsi) + mac_add_cnt) > mac_add_max ||
++	    (i40e_count_all_filters(vsi) + mac_add_cnt) > 2 * mac_add_max) {
++		if (!vf_trusted) {
++			dev_err(&pf->pdev->dev,
++				"Cannot add more MAC addresses, VF is not trusted, switch the VF to trusted to add more functionality\n");
++			return -EPERM;
++		} else {
+ 			dev_err(&pf->pdev->dev,
+ 				"Cannot add more MAC addresses, trusted VF exhausted it's resources\n");
+ 			return -EPERM;
+@@ -3589,7 +3597,7 @@ static int i40e_validate_cloud_filter(struct i40e_vf *vf,
+ 
+ 	/* action_meta is TC number here to which the filter is applied */
+ 	if (!tc_filter->action_meta ||
+-	    tc_filter->action_meta > vf->num_tc) {
++	    tc_filter->action_meta >= vf->num_tc) {
+ 		dev_info(&pf->pdev->dev, "VF %d: Invalid TC number %u\n",
+ 			 vf->vf_id, tc_filter->action_meta);
+ 		goto err;
+@@ -3887,6 +3895,8 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 				       aq_ret);
+ }
+ 
++#define I40E_MAX_VF_CLOUD_FILTER 0xFF00
++
+ /**
+  * i40e_vc_add_cloud_filter
+  * @vf: pointer to the VF info
+@@ -3926,6 +3936,14 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 		goto err_out;
+ 	}
+ 
++	if (vf->num_cloud_filters >= I40E_MAX_VF_CLOUD_FILTER) {
++		dev_warn(&pf->pdev->dev,
++			 "VF %d: Max number of filters reached, can't apply cloud filter\n",
++			 vf->vf_id);
++		aq_ret = -ENOSPC;
++		goto err_out;
++	}
++
+ 	cfilter = kzalloc(sizeof(*cfilter), GFP_KERNEL);
+ 	if (!cfilter) {
+ 		aq_ret = -ENOMEM;
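
[Note: the new queue-configuration checks reject VF-supplied ring lengths
before they are programmed into hardware: Tx rings must be a multiple of 8
and Rx rings a multiple of 32, both bounded by
I40E_MAX_NUM_DESCRIPTORS_XL710. IS_ALIGNED() is the standard power-of-two
alignment test; the validation reduces to a helper like this (hypothetical
name, limit taken from the patch):

	/* Validate an untrusted ring length before writing the queue context. */
	static int demo_check_ring_len(u32 ring_len, u32 align, u32 max)
	{
		if (!IS_ALIGNED(ring_len, align) || ring_len > max)
			return -EINVAL;
		return 0;
	}
	/* Tx: demo_check_ring_len(len, 8, max); Rx uses align == 32. */
]
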
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 5cf74f16f433f3..f558b45725c816 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -41,7 +41,8 @@ enum i40e_vf_states {
+ 	I40E_VF_STATE_MC_PROMISC,
+ 	I40E_VF_STATE_UC_PROMISC,
+ 	I40E_VF_STATE_PRE_ENABLE,
+-	I40E_VF_STATE_RESETTING
++	I40E_VF_STATE_RESETTING,
++	I40E_VF_STATE_RESOURCES_LOADED,
+ };
+ 
+ /* VF capabilities */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 442305463cc0ae..21161711c579fb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -21,8 +21,7 @@
+ #include "rvu.h"
+ #include "lmac_common.h"
+ 
+-#define DRV_NAME	"Marvell-CGX/RPM"
+-#define DRV_STRING      "Marvell CGX/RPM Driver"
++#define DRV_NAME	"Marvell-CGX-RPM"
+ 
+ #define CGX_RX_STAT_GLOBAL_INDEX	9
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 5f80b23c5335cd..26a08d2cfbb1b6 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -1326,7 +1326,6 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
+ 
+ free_leaf:
+ 	otx2_tc_del_from_flow_list(flow_cfg, new_node);
+-	kfree_rcu(new_node, rcu);
+ 	if (new_node->is_act_police) {
+ 		mutex_lock(&nic->mbox.lock);
+ 
+@@ -1346,6 +1345,7 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
+ 
+ 		mutex_unlock(&nic->mbox.lock);
+ 	}
++	kfree_rcu(new_node, rcu);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index 19664fa7f21713..46d6dd05fb814d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -1466,6 +1466,7 @@ static void fec_set_block_stats(struct mlx5e_priv *priv,
+ 	case MLX5E_FEC_RS_528_514:
+ 	case MLX5E_FEC_RS_544_514:
+ 	case MLX5E_FEC_LLRS_272_257_1:
++	case MLX5E_FEC_RS_544_514_INTERLEAVED_QUAD:
+ 		fec_set_rs_stats(fec_stats, out);
+ 		return;
+ 	case MLX5E_FEC_FIRECODE:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 3b57ef6b3de383..93fb4e861b6916 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -663,7 +663,7 @@ static void del_sw_hw_rule(struct fs_node *node)
+ 			BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) |
+ 			BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS);
+ 		fte->act_dests.action.action &= ~MLX5_FLOW_CONTEXT_ACTION_COUNT;
+-		mlx5_fc_local_destroy(rule->dest_attr.counter);
++		mlx5_fc_local_put(rule->dest_attr.counter);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+index 500826229b0beb..e6a95b310b5554 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+@@ -343,6 +343,7 @@ struct mlx5_fc {
+ 	enum mlx5_fc_type type;
+ 	struct mlx5_fc_bulk *bulk;
+ 	struct mlx5_fc_cache cache;
++	refcount_t fc_local_refcount;
+ 	/* last{packets,bytes} are used for calculating deltas since last reading. */
+ 	u64 lastpackets;
+ 	u64 lastbytes;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+index 492775d3d193a3..83001eda38842a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+@@ -562,17 +562,36 @@ mlx5_fc_local_create(u32 counter_id, u32 offset, u32 bulk_size)
+ 	counter->id = counter_id;
+ 	fc_bulk->base_id = counter_id - offset;
+ 	fc_bulk->fs_bulk.bulk_len = bulk_size;
++	refcount_set(&fc_bulk->hws_data.hws_action_refcount, 0);
++	mutex_init(&fc_bulk->hws_data.lock);
+ 	counter->bulk = fc_bulk;
++	refcount_set(&counter->fc_local_refcount, 1);
+ 	return counter;
+ }
+ EXPORT_SYMBOL(mlx5_fc_local_create);
+ 
+ void mlx5_fc_local_destroy(struct mlx5_fc *counter)
+ {
+-	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
+-		return;
+-
+ 	kfree(counter->bulk);
+ 	kfree(counter);
+ }
+ EXPORT_SYMBOL(mlx5_fc_local_destroy);
++
++void mlx5_fc_local_get(struct mlx5_fc *counter)
++{
++	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
++		return;
++
++	refcount_inc(&counter->fc_local_refcount);
++}
++
++void mlx5_fc_local_put(struct mlx5_fc *counter)
++{
++	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
++		return;
++
++	if (!refcount_dec_and_test(&counter->fc_local_refcount))
++		return;
++
++	mlx5_fc_local_destroy(counter);
++}
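
[Note: mlx5_fc_local_get()/mlx5_fc_local_put() give the local counter a
conventional refcount lifecycle: creation initializes the count to 1, every
additional user takes a reference, and the final put is what actually calls
mlx5_fc_local_destroy(). The generic shape of that pattern with
<linux/refcount.h>, using hypothetical demo names:

	#include <linux/refcount.h>
	#include <linux/slab.h>

	struct demo_counter {
		refcount_t ref;
		/* ... counter payload ... */
	};

	static void demo_get(struct demo_counter *c)
	{
		refcount_inc(&c->ref);
	}

	static void demo_put(struct demo_counter *c)
	{
		/* Free exactly once, when the last reference is dropped. */
		if (refcount_dec_and_test(&c->ref))
			kfree(c);
	}
]
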
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+index 8e4a085f4a2ec9..fe56b59e24c59c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+@@ -1358,11 +1358,8 @@ mlx5hws_action_create_modify_header(struct mlx5hws_context *ctx,
+ }
+ 
+ struct mlx5hws_action *
+-mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+-				 size_t num_dest,
++mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx, size_t num_dest,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 bool ignore_flow_level,
+-				 u32 flow_source,
+ 				 u32 flags)
+ {
+ 	struct mlx5hws_cmd_set_fte_dest *dest_list = NULL;
+@@ -1400,7 +1397,7 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 				MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ 			dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
+ 			fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+-			fte_attr.ignore_flow_level = ignore_flow_level;
++			fte_attr.ignore_flow_level = 1;
+ 			if (dests[i].is_wire_ft)
+ 				last_dest_idx = i;
+ 			break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+index 47e3947e7b512f..6a4c4cccd64342 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+@@ -572,14 +572,12 @@ static void mlx5_fs_put_dest_action_sampler(struct mlx5_fs_hws_context *fs_ctx,
+ static struct mlx5hws_action *
+ mlx5_fs_create_action_dest_array(struct mlx5hws_context *ctx,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 u32 num_of_dests, bool ignore_flow_level,
+-				 u32 flow_source)
++				 u32 num_of_dests)
+ {
+ 	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+ 
+ 	return mlx5hws_action_create_dest_array(ctx, num_of_dests, dests,
+-						ignore_flow_level,
+-						flow_source, flags);
++						flags);
+ }
+ 
+ static struct mlx5hws_action *
+@@ -1016,20 +1014,14 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
+ 		}
+ 		(*ractions)[num_actions++].action = dest_actions->dest;
+ 	} else if (num_dest_actions > 1) {
+-		u32 flow_source = fte->act_dests.flow_context.flow_source;
+-		bool ignore_flow_level;
+-
+ 		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ 		    num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ 			err = -EOPNOTSUPP;
+ 			goto free_actions;
+ 		}
+-		ignore_flow_level =
+-			!!(fte_action->flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+-		tmp_action = mlx5_fs_create_action_dest_array(ctx, dest_actions,
+-							      num_dest_actions,
+-							      ignore_flow_level,
+-							      flow_source);
++		tmp_action =
++			mlx5_fs_create_action_dest_array(ctx, dest_actions,
++							 num_dest_actions);
+ 		if (!tmp_action) {
+ 			err = -EOPNOTSUPP;
+ 			goto free_actions;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+index f1ecdba74e1f46..839d71bd42164f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+@@ -407,15 +407,21 @@ struct mlx5hws_action *mlx5_fc_get_hws_action(struct mlx5hws_context *ctx,
+ {
+ 	struct mlx5_fs_hws_create_action_ctx create_ctx;
+ 	struct mlx5_fc_bulk *fc_bulk = counter->bulk;
++	struct mlx5hws_action *hws_action;
+ 
+ 	create_ctx.hws_ctx = ctx;
+ 	create_ctx.id = fc_bulk->base_id;
+ 	create_ctx.actions_type = MLX5HWS_ACTION_TYP_CTR;
+ 
+-	return mlx5_fs_get_hws_action(&fc_bulk->hws_data, &create_ctx);
++	mlx5_fc_local_get(counter);
++	hws_action = mlx5_fs_get_hws_action(&fc_bulk->hws_data, &create_ctx);
++	if (!hws_action)
++		mlx5_fc_local_put(counter);
++	return hws_action;
+ }
+ 
+ void mlx5_fc_put_hws_action(struct mlx5_fc *counter)
+ {
+ 	mlx5_fs_put_hws_action(&counter->bulk->hws_data);
++	mlx5_fc_local_put(counter);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+index a2fe2f9e832d26..e6ba5a2129075a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+@@ -727,18 +727,13 @@ mlx5hws_action_create_push_vlan(struct mlx5hws_context *ctx, u32 flags);
+  * @num_dest: The number of dests attributes.
+  * @dests: The destination array. Each contains a destination action and can
+  *	   have additional actions.
+- * @ignore_flow_level: Whether to turn on 'ignore_flow_level' for this dest.
+- * @flow_source: Source port of the traffic for this actions.
+  * @flags: Action creation flags (enum mlx5hws_action_flags).
+  *
+  * Return: pointer to mlx5hws_action on success NULL otherwise.
+  */
+ struct mlx5hws_action *
+-mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+-				 size_t num_dest,
++mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx, size_t num_dest,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 bool ignore_flow_level,
+-				 u32 flow_source,
+ 				 u32 flags);
+ 
+ /**
+diff --git a/drivers/net/phy/bcm-phy-ptp.c b/drivers/net/phy/bcm-phy-ptp.c
+index eba8b5fb1365f4..d3501f8487d964 100644
+--- a/drivers/net/phy/bcm-phy-ptp.c
++++ b/drivers/net/phy/bcm-phy-ptp.c
+@@ -597,10 +597,6 @@ static int bcm_ptp_perout_locked(struct bcm_ptp_private *priv,
+ 
+ 	period = BCM_MAX_PERIOD_8NS;	/* write nonzero value */
+ 
+-	/* Reject unsupported flags */
+-	if (req->flags & ~PTP_PEROUT_DUTY_CYCLE)
+-		return -EOPNOTSUPP;
+-
+ 	if (req->flags & PTP_PEROUT_DUTY_CYCLE)
+ 		pulse = ktime_to_ns(ktime_set(req->on.sec, req->on.nsec));
+ 	else
+@@ -741,6 +737,8 @@ static const struct ptp_clock_info bcm_ptp_clock_info = {
+ 	.n_pins		= 1,
+ 	.n_per_out	= 1,
+ 	.n_ext_ts	= 1,
++	.supported_perout_flags = PTP_PEROUT_DUTY_CYCLE,
++	.supported_extts_flags = PTP_STRICT_FLAGS | PTP_RISING_EDGE,
+ };
+ 
+ static void bcm_ptp_txtstamp(struct mii_timestamper *mii_ts,
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 347c1e0e94d951..4cd1d6c51dc2a0 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -361,6 +361,11 @@ static void sfp_fixup_ignore_tx_fault(struct sfp *sfp)
+ 	sfp->state_ignore_mask |= SFP_F_TX_FAULT;
+ }
+ 
++static void sfp_fixup_ignore_hw(struct sfp *sfp, unsigned int mask)
++{
++	sfp->state_hw_mask &= ~mask;
++}
++
+ static void sfp_fixup_nokia(struct sfp *sfp)
+ {
+ 	sfp_fixup_long_startup(sfp);
+@@ -409,7 +414,19 @@ static void sfp_fixup_halny_gsfp(struct sfp *sfp)
+ 	 * these are possibly used for other purposes on this
+ 	 * module, e.g. a serial port.
+ 	 */
+-	sfp->state_hw_mask &= ~(SFP_F_TX_FAULT | SFP_F_LOS);
++	sfp_fixup_ignore_hw(sfp, SFP_F_TX_FAULT | SFP_F_LOS);
++}
++
++static void sfp_fixup_potron(struct sfp *sfp)
++{
++	/*
++	 * The TX_FAULT and LOS pins on this device are used for serial
++	 * communication, so ignore them. Additionally, provide extra
++	 * time for this device to fully start up.
++	 */
++
++	sfp_fixup_long_startup(sfp);
++	sfp_fixup_ignore_hw(sfp, SFP_F_TX_FAULT | SFP_F_LOS);
+ }
+ 
+ static void sfp_fixup_rollball_cc(struct sfp *sfp)
+@@ -475,6 +492,9 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 	SFP_QUIRK("ALCATELLUCENT", "3FE46541AA", sfp_quirk_2500basex,
+ 		  sfp_fixup_nokia),
+ 
++	// FLYPRO SFP-10GT-CS-30M uses Rollball protocol to talk to the PHY.
++	SFP_QUIRK_F("FLYPRO", "SFP-10GT-CS-30M", sfp_fixup_rollball),
++
+ 	// Fiberstore SFP-10G-T doesn't identify as copper, uses the Rollball
+ 	// protocol to talk to the PHY and needs 4 sec wait before probing the
+ 	// PHY.
+@@ -512,6 +532,8 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 	SFP_QUIRK_F("Walsun", "HXSX-ATRC-1", sfp_fixup_fs_10gt),
+ 	SFP_QUIRK_F("Walsun", "HXSX-ATRI-1", sfp_fixup_fs_10gt),
+ 
++	SFP_QUIRK_F("YV", "SFP+ONU-XGSPON", sfp_fixup_potron),
++
+ 	// OEM SFP-GE-T is a 1000Base-T module with broken TX_FAULT indicator
+ 	SFP_QUIRK_F("OEM", "SFP-GE-T", sfp_fixup_ignore_tx_fault),
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f8c5e2fd04dfa0..0fffa023cb736b 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1863,6 +1863,9 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 				local_bh_enable();
+ 				goto unlock_frags;
+ 			}
++
++			if (frags && skb != tfile->napi.skb)
++				tfile->napi.skb = skb;
+ 		}
+ 		rcu_read_unlock();
+ 		local_bh_enable();
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index eee55428749c92..de5005815ee706 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -2093,7 +2093,8 @@ static void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,
+ 		break;
+ 	}
+ 
+-	if (trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
++	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_7000 &&
++	    trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ 		len = DIV_ROUND_UP(len, 4);
+ 
+ 	if (WARN_ON(len > 0xFFF || write_ptr >= TFD_QUEUE_SIZE_MAX))
+diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c
+index 1fffeff2190ca8..4eae89376feb55 100644
+--- a/drivers/net/wireless/virtual/virt_wifi.c
++++ b/drivers/net/wireless/virtual/virt_wifi.c
+@@ -277,7 +277,9 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 		priv->is_connected = true;
+ 
+ 	/* Schedules an event that acquires the rtnl lock. */
+-	cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0,
++	cfg80211_connect_result(priv->upperdev,
++				priv->is_connected ? fake_router_bssid : NULL,
++				NULL, 0, NULL, 0,
+ 				status, GFP_KERNEL);
+ 	netif_carrier_on(priv->upperdev);
+ }
+diff --git a/drivers/pinctrl/mediatek/pinctrl-airoha.c b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+index 3fa5131d81e525..1dc26e898d787a 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-airoha.c
++++ b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+@@ -108,6 +108,9 @@
+ #define JTAG_UDI_EN_MASK			BIT(4)
+ #define JTAG_DFD_EN_MASK			BIT(3)
+ 
++#define REG_FORCE_GPIO_EN			0x0228
++#define FORCE_GPIO_EN(n)			BIT(n)
++
+ /* LED MAP */
+ #define REG_LAN_LED0_MAPPING			0x027c
+ #define REG_LAN_LED1_MAPPING			0x0280
+@@ -718,17 +721,17 @@ static const struct airoha_pinctrl_func_group mdio_func_group[] = {
+ 	{
+ 		.name = "mdio",
+ 		.regmap[0] = {
+-			AIROHA_FUNC_MUX,
+-			REG_GPIO_PON_MODE,
+-			GPIO_SGMII_MDIO_MODE_MASK,
+-			GPIO_SGMII_MDIO_MODE_MASK
+-		},
+-		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+ 			GPIO_MDC_IO_MASTER_MODE_MODE,
+ 			GPIO_MDC_IO_MASTER_MODE_MODE
+ 		},
++		.regmap[1] = {
++			AIROHA_FUNC_MUX,
++			REG_FORCE_GPIO_EN,
++			FORCE_GPIO_EN(1) | FORCE_GPIO_EN(2),
++			FORCE_GPIO_EN(1) | FORCE_GPIO_EN(2)
++		},
+ 		.regmap_size = 2,
+ 	},
+ };
+@@ -1752,8 +1755,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1816,8 +1819,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1880,8 +1883,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1944,8 +1947,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+diff --git a/drivers/platform/x86/lg-laptop.c b/drivers/platform/x86/lg-laptop.c
+index 4b57102c7f6270..6af6cf477c5b5b 100644
+--- a/drivers/platform/x86/lg-laptop.c
++++ b/drivers/platform/x86/lg-laptop.c
+@@ -8,6 +8,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <linux/acpi.h>
++#include <linux/bitfield.h>
+ #include <linux/bits.h>
+ #include <linux/device.h>
+ #include <linux/dev_printk.h>
+@@ -75,6 +76,9 @@ MODULE_PARM_DESC(fw_debug, "Enable printing of firmware debug messages");
+ #define WMBB_USB_CHARGE 0x10B
+ #define WMBB_BATT_LIMIT 0x10C
+ 
++#define FAN_MODE_LOWER GENMASK(1, 0)
++#define FAN_MODE_UPPER GENMASK(5, 4)
++
+ #define PLATFORM_NAME   "lg-laptop"
+ 
+ MODULE_ALIAS("wmi:" WMI_EVENT_GUID0);
+@@ -274,29 +278,19 @@ static ssize_t fan_mode_store(struct device *dev,
+ 			      struct device_attribute *attr,
+ 			      const char *buffer, size_t count)
+ {
+-	bool value;
++	unsigned long value;
+ 	union acpi_object *r;
+-	u32 m;
+ 	int ret;
+ 
+-	ret = kstrtobool(buffer, &value);
++	ret = kstrtoul(buffer, 10, &value);
+ 	if (ret)
+ 		return ret;
++	if (value >= 3)
++		return -EINVAL;
+ 
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_GET, 0);
+-	if (!r)
+-		return -EIO;
+-
+-	if (r->type != ACPI_TYPE_INTEGER) {
+-		kfree(r);
+-		return -EIO;
+-	}
+-
+-	m = r->integer.value;
+-	kfree(r);
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_SET, (m & 0xffffff0f) | (value << 4));
+-	kfree(r);
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_SET, (m & 0xfffffff0) | value);
++	r = lg_wmab(dev, WM_FAN_MODE, WM_SET,
++		FIELD_PREP(FAN_MODE_LOWER, value) |
++		FIELD_PREP(FAN_MODE_UPPER, value));
+ 	kfree(r);
+ 
+ 	return count;
+@@ -305,7 +299,7 @@ static ssize_t fan_mode_store(struct device *dev,
+ static ssize_t fan_mode_show(struct device *dev,
+ 			     struct device_attribute *attr, char *buffer)
+ {
+-	unsigned int status;
++	unsigned int mode;
+ 	union acpi_object *r;
+ 
+ 	r = lg_wmab(dev, WM_FAN_MODE, WM_GET, 0);
+@@ -317,10 +311,10 @@ static ssize_t fan_mode_show(struct device *dev,
+ 		return -EIO;
+ 	}
+ 
+-	status = r->integer.value & 0x01;
++	mode = FIELD_GET(FAN_MODE_LOWER, r->integer.value);
+ 	kfree(r);
+ 
+-	return sysfs_emit(buffer, "%d\n", status);
++	return sysfs_emit(buffer, "%d\n", mode);
+ }
+ 
+ static ssize_t usb_charge_store(struct device *dev,
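
[Note: the lg-laptop rewrite replaces hand-rolled mask-and-shift arithmetic
with GENMASK() plus FIELD_PREP()/FIELD_GET(), so each field's position is
encoded once in its mask definition. With the masks from the patch, packing
and unpacking the fan mode (0..2) reduces to:

	#include <linux/bitfield.h>

	#define FAN_MODE_LOWER GENMASK(1, 0)
	#define FAN_MODE_UPPER GENMASK(5, 4)

	/* Mirror the mode into both fields of the WMI word. */
	static u32 demo_pack_fan_mode(unsigned long mode)
	{
		return FIELD_PREP(FAN_MODE_LOWER, mode) |
		       FIELD_PREP(FAN_MODE_UPPER, mode);
	}

	/* FIELD_GET(FAN_MODE_LOWER, word) recovers the mode on read. */
]
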
+diff --git a/drivers/platform/x86/oxpec.c b/drivers/platform/x86/oxpec.c
+index 9839e8cb82ce4c..eb076bb4099bed 100644
+--- a/drivers/platform/x86/oxpec.c
++++ b/drivers/platform/x86/oxpec.c
+@@ -292,6 +292,13 @@ static const struct dmi_system_id dmi_table[] = {
+ 		},
+ 		.driver_data = (void *)oxp_x1,
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1Mini Pro"),
++		},
++		.driver_data = (void *)oxp_x1,
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index d3c78f59b22cd9..c9eb8004fcc271 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -46,6 +46,7 @@ static_assert(CQSPI_MAX_CHIPSELECT <= SPI_CS_CNT_MAX);
+ #define CQSPI_DMA_SET_MASK		BIT(7)
+ #define CQSPI_SUPPORT_DEVICE_RESET	BIT(8)
+ #define CQSPI_DISABLE_STIG_MODE		BIT(9)
++#define CQSPI_DISABLE_RUNTIME_PM	BIT(10)
+ 
+ /* Capabilities */
+ #define CQSPI_SUPPORTS_OCTAL		BIT(0)
+@@ -108,6 +109,8 @@ struct cqspi_st {
+ 
+ 	bool			is_jh7110; /* Flag for StarFive JH7110 SoC */
+ 	bool			disable_stig_mode;
++	refcount_t		refcount;
++	refcount_t		inflight_ops;
+ 
+ 	const struct cqspi_driver_platdata *ddata;
+ };
+@@ -735,6 +738,9 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
+ 	u8 *rxbuf_end = rxbuf + n_rx;
+ 	int ret = 0;
+ 
++	if (!refcount_read(&cqspi->refcount))
++		return -ENODEV;
++
+ 	writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
+ 	writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
+ 
+@@ -1071,6 +1077,9 @@ static int cqspi_indirect_write_execute(struct cqspi_flash_pdata *f_pdata,
+ 	unsigned int write_bytes;
+ 	int ret;
+ 
++	if (!refcount_read(&cqspi->refcount))
++		return -ENODEV;
++
+ 	writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
+ 	writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
+ 
+@@ -1460,21 +1469,43 @@ static int cqspi_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ 	int ret;
+ 	struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller);
+ 	struct device *dev = &cqspi->pdev->dev;
++	const struct cqspi_driver_platdata *ddata = of_device_get_match_data(dev);
+ 
+-	ret = pm_runtime_resume_and_get(dev);
+-	if (ret) {
+-		dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
+-		return ret;
++	if (refcount_read(&cqspi->inflight_ops) == 0)
++		return -ENODEV;
++
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		ret = pm_runtime_resume_and_get(dev);
++		if (ret) {
++			dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
++			return ret;
++		}
++	}
++
++	if (!refcount_read(&cqspi->refcount))
++		return -EBUSY;
++
++	refcount_inc(&cqspi->inflight_ops);
++
++	if (!refcount_read(&cqspi->refcount)) {
++		if (refcount_read(&cqspi->inflight_ops))
++			refcount_dec(&cqspi->inflight_ops);
++		return -EBUSY;
+ 	}
+ 
+ 	ret = cqspi_mem_process(mem, op);
+ 
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_mark_last_busy(dev);
++		pm_runtime_put_autosuspend(dev);
++	}
+ 
+ 	if (ret)
+ 		dev_err(&mem->spi->dev, "operation failed with %d\n", ret);
+ 
++	if (refcount_read(&cqspi->inflight_ops) > 1)
++		refcount_dec(&cqspi->inflight_ops);
++
+ 	return ret;
+ }
+ 
+@@ -1926,6 +1957,9 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	refcount_set(&cqspi->refcount, 1);
++	refcount_set(&cqspi->inflight_ops, 1);
++
+ 	ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+ 			       pdev->name, cqspi);
+ 	if (ret) {
+@@ -1958,11 +1992,12 @@ static int cqspi_probe(struct platform_device *pdev)
+ 			goto probe_setup_failed;
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-
+-	pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
+-	pm_runtime_use_autosuspend(dev);
+-	pm_runtime_get_noresume(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_enable(dev);
++		pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
++		pm_runtime_use_autosuspend(dev);
++		pm_runtime_get_noresume(dev);
++	}
+ 
+ 	ret = spi_register_controller(host);
+ 	if (ret) {
+@@ -1970,13 +2005,16 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		goto probe_setup_failed;
+ 	}
+ 
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_mark_last_busy(dev);
++		pm_runtime_put_autosuspend(dev);
++	}
+ 
+ 	return 0;
+ probe_setup_failed:
+ 	cqspi_controller_enable(cqspi, 0);
+-	pm_runtime_disable(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))
++		pm_runtime_disable(dev);
+ probe_reset_failed:
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+@@ -1987,7 +2026,16 @@ static int cqspi_probe(struct platform_device *pdev)
+ 
+ static void cqspi_remove(struct platform_device *pdev)
+ {
++	const struct cqspi_driver_platdata *ddata;
+ 	struct cqspi_st *cqspi = platform_get_drvdata(pdev);
++	struct device *dev = &pdev->dev;
++
++	ddata = of_device_get_match_data(dev);
++
++	refcount_set(&cqspi->refcount, 0);
++
++	if (!refcount_dec_and_test(&cqspi->inflight_ops))
++		cqspi_wait_idle(cqspi);
+ 
+ 	spi_unregister_controller(cqspi->host);
+ 	cqspi_controller_enable(cqspi, 0);
+@@ -1995,14 +2043,17 @@ static void cqspi_remove(struct platform_device *pdev)
+ 	if (cqspi->rx_chan)
+ 		dma_release_channel(cqspi->rx_chan);
+ 
+-	if (pm_runtime_get_sync(&pdev->dev) >= 0)
+-		clk_disable(cqspi->clk);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))
++		if (pm_runtime_get_sync(&pdev->dev) >= 0)
++			clk_disable(cqspi->clk);
+ 
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+ 
+-	pm_runtime_put_sync(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_put_sync(&pdev->dev);
++		pm_runtime_disable(&pdev->dev);
++	}
+ }
+ 
+ static int cqspi_runtime_suspend(struct device *dev)
+@@ -2081,7 +2132,8 @@ static const struct cqspi_driver_platdata socfpga_qspi = {
+ 	.quirks = CQSPI_DISABLE_DAC_MODE
+ 			| CQSPI_NO_SUPPORT_WR_COMPLETION
+ 			| CQSPI_SLOW_SRAM
+-			| CQSPI_DISABLE_STIG_MODE,
++			| CQSPI_DISABLE_STIG_MODE
++			| CQSPI_DISABLE_RUNTIME_PM,
+ };
+ 
+ static const struct cqspi_driver_platdata versal_ospi = {
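
The CQSPI_DISABLE_RUNTIME_PM test above is open-coded at every runtime-PM
call site. As a hedged sketch, the checks could be centralized in a small
helper; cqspi_rpm_enabled() is hypothetical and not part of the patch:

static bool cqspi_rpm_enabled(struct device *dev)
{
	const struct cqspi_driver_platdata *ddata =
		of_device_get_match_data(dev);

	/* Runtime PM stays enabled unless the platform sets the quirk. */
	return !(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM));
}

Each "if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))" block
above would then read "if (cqspi_rpm_enabled(dev))".
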
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 1e50675772febb..cc88aaa106da30 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -243,7 +243,7 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 		hwq->sqe_base_addr = dmam_alloc_coherent(hba->dev, utrdl_size,
+ 							 &hwq->sqe_dma_addr,
+ 							 GFP_KERNEL);
+-		if (!hwq->sqe_dma_addr) {
++		if (!hwq->sqe_base_addr) {
+ 			dev_err(hba->dev, "SQE allocation failed\n");
+ 			return -ENOMEM;
+ 		}
+@@ -252,7 +252,7 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 		hwq->cqe_base_addr = dmam_alloc_coherent(hba->dev, cqe_size,
+ 							 &hwq->cqe_dma_addr,
+ 							 GFP_KERNEL);
+-		if (!hwq->cqe_dma_addr) {
++		if (!hwq->cqe_base_addr) {
+ 			dev_err(hba->dev, "CQE allocation failed\n");
+ 			return -ENOMEM;
+ 		}
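
The two ufs-mcq hunks fix error checks that tested the DMA handle instead
of the pointer the allocator actually returns. A self-contained sketch of
the corrected pattern; alloc_hwq_ring() is a hypothetical wrapper:

#include <linux/dma-mapping.h>

static int alloc_hwq_ring(struct device *dev, size_t size,
			  void **cpu_addr, dma_addr_t *dma_addr)
{
	/*
	 * dmam_alloc_coherent() reports failure through its return value
	 * (the CPU address). The DMA handle is only meaningful on success,
	 * and 0 can be a valid bus address, so never test *dma_addr here.
	 */
	*cpu_addr = dmam_alloc_coherent(dev, size, dma_addr, GFP_KERNEL);
	if (!*cpu_addr)
		return -ENOMEM;
	return 0;
}
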
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index d6daad39491b75..f5bc5387533012 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -737,7 +737,7 @@ void usb_detect_quirks(struct usb_device *udev)
+ 	udev->quirks ^= usb_detect_dynamic_quirks(udev);
+ 
+ 	if (udev->quirks)
+-		dev_dbg(&udev->dev, "USB quirks for this device: %x\n",
++		dev_dbg(&udev->dev, "USB quirks for this device: 0x%x\n",
+ 			udev->quirks);
+ 
+ #ifdef CONFIG_USB_DEFAULT_PERSIST
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 53551aafd47027..c5902cc261e5b0 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -760,10 +760,10 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 	int err;
+ 	int sent_pkts = 0;
+ 	bool sock_can_batch = (sock->sk->sk_sndbuf == INT_MAX);
+-	bool busyloop_intr;
+ 
+ 	do {
+-		busyloop_intr = false;
++		bool busyloop_intr = false;
++
+ 		if (nvq->done_idx == VHOST_NET_BATCH)
+ 			vhost_tx_batch(net, nvq, sock, &msg);
+ 
+@@ -774,10 +774,18 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 			break;
+ 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
+ 		if (head == vq->num) {
+-			/* Kicks are disabled at this point, break loop and
+-			 * process any remaining batched packets. Queue will
+-			 * be re-enabled afterwards.
++			/* Flush batched packets to handle pending RX
++			 * work (if busyloop_intr is set) and to avoid
++			 * unnecessary virtqueue kicks.
+ 			 */
++			vhost_tx_batch(net, nvq, sock, &msg);
++			if (unlikely(busyloop_intr)) {
++				vhost_poll_queue(&vq->poll);
++			} else if (unlikely(vhost_enable_notify(&net->dev,
++								vq))) {
++				vhost_disable_notify(&net->dev, vq);
++				continue;
++			}
+ 			break;
+ 		}
+ 
+@@ -827,22 +835,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 		++nvq->done_idx;
+ 	} while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
+ 
+-	/* Kicks are still disabled, dispatch any remaining batched msgs. */
+ 	vhost_tx_batch(net, nvq, sock, &msg);
+-
+-	if (unlikely(busyloop_intr))
+-		/* If interrupted while doing busy polling, requeue the
+-		 * handler to be fair handle_rx as well as other tasks
+-		 * waiting on cpu.
+-		 */
+-		vhost_poll_queue(&vq->poll);
+-	else
+-		/* All of our work has been completed; however, before
+-		 * leaving the TX handler, do one last check for work,
+-		 * and requeue handler if necessary. If there is no work,
+-		 * queue will be reenabled.
+-		 */
+-		vhost_net_busy_poll_try_queue(net, vq);
+ }
+ 
+ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
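
Distilled from the handle_tx_copy() hunks above, an annotated excerpt of
the kick re-enable idiom (the surrounding loop is elided; identifiers are
those of the driver): the batch is flushed before stopping, and a true
return from vhost_enable_notify() means new buffers raced in between the
empty check and re-enabling guest kicks, so kicks are disabled again and
the loop continues.

	if (head == vq->num) {				/* ring empty */
		vhost_tx_batch(net, nvq, sock, &msg);	/* flush first */
		if (unlikely(busyloop_intr)) {
			/* Interrupted busy-poll: requeue, be fair to RX. */
			vhost_poll_queue(&vq->poll);
		} else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
			/* Raced with the guest: more work arrived. */
			vhost_disable_notify(&net->dev, vq);
			continue;
		}
		break;
	}
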
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 9eb880a11fd8ce..a786b0c1940b4c 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2491,7 +2491,7 @@ static int fbcon_set_font(struct vc_data *vc, const struct console_font *font,
+ 	unsigned charcount = font->charcount;
+ 	int w = font->width;
+ 	int h = font->height;
+-	int size;
++	int size, alloc_size;
+ 	int i, csum;
+ 	u8 *new_data, *data = font->data;
+ 	int pitch = PITCH(font->width);
+@@ -2518,9 +2518,16 @@ static int fbcon_set_font(struct vc_data *vc, const struct console_font *font,
+ 	if (fbcon_invalid_charcount(info, charcount))
+ 		return -EINVAL;
+ 
+-	size = CALC_FONTSZ(h, pitch, charcount);
++	/* Check for integer overflow in font size calculation */
++	if (check_mul_overflow(h, pitch, &size) ||
++	    check_mul_overflow(size, charcount, &size))
++		return -EINVAL;
++
++	/* Check for overflow in allocation size calculation */
++	if (check_add_overflow(FONT_EXTRA_WORDS * sizeof(int), size, &alloc_size))
++		return -EINVAL;
+ 
+-	new_data = kmalloc(FONT_EXTRA_WORDS * sizeof(int) + size, GFP_USER);
++	new_data = kmalloc(alloc_size, GFP_USER);
+ 
+ 	if (!new_data)
+ 		return -ENOMEM;
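
The fbcon hunk replaces unchecked CALC_FONTSZ() arithmetic with the
generic helpers from <linux/overflow.h>. A self-contained sketch of the
same pattern, assuming int-sized operands as in the hunk:

#include <linux/overflow.h>
#include <linux/slab.h>

static void *alloc_font_data(int height, int pitch, int charcount,
			     int header)
{
	int size, alloc_size;

	/* height * pitch * charcount, rejecting intermediate overflow */
	if (check_mul_overflow(height, pitch, &size) ||
	    check_mul_overflow(size, charcount, &size))
		return NULL;
	/* header + size for the final allocation length */
	if (check_add_overflow(header, size, &alloc_size))
		return NULL;

	return kmalloc(alloc_size, GFP_USER);
}
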
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index a97562f831eb5a..c4428ebddb1da6 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -331,13 +331,14 @@ struct afs_server *afs_use_server(struct afs_server *server, bool activate,
+ void afs_put_server(struct afs_net *net, struct afs_server *server,
+ 		    enum afs_server_trace reason)
+ {
+-	unsigned int a, debug_id = server->debug_id;
++	unsigned int a, debug_id;
+ 	bool zero;
+ 	int r;
+ 
+ 	if (!server)
+ 		return;
+ 
++	debug_id = server->debug_id;
+ 	a = atomic_read(&server->active);
+ 	zero = __refcount_dec_and_test(&server->ref, &r);
+ 	trace_afs_server(debug_id, r - 1, a, reason);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f475b4b7c45782..817d3ef501ec44 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2714,6 +2714,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 		goto error;
+ 	}
+ 
++	if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) {
++		ret = -EINVAL;
++		goto error;
++	}
++
+ 	if (fs_devices->seeding) {
+ 		seeding_dev = true;
+ 		down_write(&sb->s_umount);
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index e4de5425838d92..6040e54082777d 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -520,14 +520,16 @@ static bool remove_inode_single_folio(struct hstate *h, struct inode *inode,
+ 
+ 	/*
+ 	 * If folio is mapped, it was faulted in after being
+-	 * unmapped in caller.  Unmap (again) while holding
+-	 * the fault mutex.  The mutex will prevent faults
+-	 * until we finish removing the folio.
++	 * unmapped in the caller, or hugetlb_vmdelete_list() skipped
++	 * unmapping it after failing to grab the lock.  Unmap (again)
++	 * while holding the fault mutex.  The mutex will prevent
++	 * faults until we finish removing the folio.  Hold folio
++	 * lock to guarantee no concurrent migration.
+ 	 */
++	folio_lock(folio);
+ 	if (unlikely(folio_mapped(folio)))
+ 		hugetlb_unmap_file_folio(h, mapping, folio, index);
+ 
+-	folio_lock(folio);
+ 	/*
+ 	 * We must remove the folio from page cache before removing
+ 	 * the region/ reserve map (hugetlb_unreserve_pages).  In
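
The hugetlbfs change is purely a lock-ordering fix. The essence, as a
short sketch of the reordered sequence:

	/*
	 * Take the folio lock *before* the mapped check: with the lock
	 * held, a concurrent migration cannot re-map the folio between
	 * folio_mapped() and the page-cache removal that follows.
	 */
	folio_lock(folio);
	if (unlikely(folio_mapped(folio)))
		hugetlb_unmap_file_folio(h, mapping, folio, index);
	/* ... remove from page cache, then folio_unlock(folio) ... */
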
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 18b3dc74c70e41..37ab6f28b5ad0e 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -369,7 +369,7 @@ void netfs_readahead(struct readahead_control *ractl)
+ 	return netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ cleanup_free:
+-	return netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	return netfs_put_failed_request(rreq);
+ }
+ EXPORT_SYMBOL(netfs_readahead);
+ 
+@@ -472,7 +472,7 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -532,7 +532,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -699,7 +699,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	return 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(rreq);
+ error:
+ 	if (folio) {
+ 		folio_unlock(folio);
+@@ -754,7 +754,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 	return ret < 0 ? ret : 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ error:
+ 	_leave(" = %d", ret);
+ 	return ret;
+diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
+index a05e13472bafb2..a498ee8d66745f 100644
+--- a/fs/netfs/direct_read.c
++++ b/fs/netfs/direct_read.c
+@@ -131,6 +131,7 @@ static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ 
+ 	if (rreq->len == 0) {
+ 		pr_err("Zero-sized read [R=%x]\n", rreq->debug_id);
++		netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ 		return -EIO;
+ 	}
+ 
+@@ -205,7 +206,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	if (user_backed_iter(iter)) {
+ 		ret = netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0);
+ 		if (ret < 0)
+-			goto out;
++			goto error_put;
+ 		rreq->direct_bv = (struct bio_vec *)rreq->buffer.iter.bvec;
+ 		rreq->direct_bv_count = ret;
+ 		rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
+@@ -238,6 +239,10 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	if (ret > 0)
+ 		orig_count -= ret;
+ 	return ret;
++
++error_put:
++	netfs_put_failed_request(rreq);
++	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_read_iter_locked);
+ 
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index a16660ab7f8385..a9d1c3b2c08426 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -57,7 +57,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 			n = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);
+ 			if (n < 0) {
+ 				ret = n;
+-				goto out;
++				goto error_put;
+ 			}
+ 			wreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;
+ 			wreq->direct_bv_count = n;
+@@ -101,6 +101,10 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ out:
+ 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
++
++error_put:
++	netfs_put_failed_request(wreq);
++	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_write_iter_locked);
+ 
+diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
+index d4f16fefd96518..4319611f535449 100644
+--- a/fs/netfs/internal.h
++++ b/fs/netfs/internal.h
+@@ -87,6 +87,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+ void netfs_clear_subrequests(struct netfs_io_request *rreq);
+ void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
++void netfs_put_failed_request(struct netfs_io_request *rreq);
+ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+ 
+ static inline void netfs_see_request(struct netfs_io_request *rreq,
+diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
+index e8c99738b5bbf2..40a1c7d6f6e03e 100644
+--- a/fs/netfs/objects.c
++++ b/fs/netfs/objects.c
+@@ -116,10 +116,8 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)
+ 	netfs_stat_d(&netfs_n_rh_rreq);
+ }
+ 
+-static void netfs_free_request(struct work_struct *work)
++static void netfs_deinit_request(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_request *rreq =
+-		container_of(work, struct netfs_io_request, cleanup_work);
+ 	struct netfs_inode *ictx = netfs_inode(rreq->inode);
+ 	unsigned int i;
+ 
+@@ -149,6 +147,14 @@ static void netfs_free_request(struct work_struct *work)
+ 
+ 	if (atomic_dec_and_test(&ictx->io_count))
+ 		wake_up_var(&ictx->io_count);
++}
++
++static void netfs_free_request(struct work_struct *work)
++{
++	struct netfs_io_request *rreq =
++		container_of(work, struct netfs_io_request, cleanup_work);
++
++	netfs_deinit_request(rreq);
+ 	call_rcu(&rreq->rcu, netfs_free_request_rcu);
+ }
+ 
+@@ -167,6 +173,24 @@ void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace
+ 	}
+ }
+ 
++/*
++ * Free a request (synchronously) that was just allocated but has
++ * failed before it could be submitted.
++ */
++void netfs_put_failed_request(struct netfs_io_request *rreq)
++{
++	int r = refcount_read(&rreq->ref);
++
++	/* New requests have two references (see netfs_alloc_request()),
++	 * and this function is only allowed on new request objects that
++	 * have not yet been submitted.
++	 */
++	WARN_ON_ONCE(r != 2);
++
++	trace_netfs_rreq_ref(rreq->debug_id, r, netfs_rreq_trace_put_failed);
++	netfs_free_request(&rreq->cleanup_work);
++}
++
+ /*
+  * Allocate and partially initialise an I/O request structure.
+  */
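
A hedged usage sketch of the new helper: a request that fails before it
is submitted still holds the two references netfs_alloc_request() set up,
so it must be torn down synchronously with netfs_put_failed_request()
rather than via the refcounted async path. The allocation arguments are
abridged and the failing setup step is hypothetical:

	struct netfs_io_request *rreq;

	rreq = netfs_alloc_request(mapping, file, /* ... */);
	if (IS_ERR(rreq))
		return PTR_ERR(rreq);

	if (setup_fails) {
		/* Never submitted: free synchronously, both refs included. */
		netfs_put_failed_request(rreq);
		return -EIO;
	}
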
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index 8097bc069c1de6..a1489aa29f782a 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -118,7 +118,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
+ 	return creq;
+ 
+ cancel_put:
+-	netfs_put_request(creq, netfs_rreq_trace_put_return);
++	netfs_put_failed_request(creq);
+ cancel:
+ 	rreq->copy_to_cache = ERR_PTR(-ENOBUFS);
+ 	clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
+diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
+index fa622a6cd56da3..5c0dc4efc79227 100644
+--- a/fs/netfs/read_single.c
++++ b/fs/netfs/read_single.c
+@@ -189,7 +189,7 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
+ 	return ret;
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(rreq);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_read_single);
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 0584cba1a04392..dd8743bc8d7fe3 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -133,8 +133,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 
+ 	return wreq;
+ nomem:
+-	wreq->error = -ENOMEM;
+-	netfs_put_request(wreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(wreq);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ 
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index a16a619fb8c33b..8cc39a73faff85 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -28,6 +28,7 @@
+ #include <linux/mm.h>
+ #include <linux/pagemap.h>
+ #include <linux/gfp.h>
++#include <linux/rmap.h>
+ #include <linux/swap.h>
+ #include <linux/compaction.h>
+ 
+@@ -279,6 +280,37 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ }
+ EXPORT_SYMBOL_GPL(nfs_file_fsync);
+ 
++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
++			     loff_t to)
++{
++	struct folio *folio;
++
++	if (from >= to)
++		return;
++
++	folio = filemap_lock_folio(mapping, from >> PAGE_SHIFT);
++	if (IS_ERR(folio))
++		return;
++
++	if (folio_mkclean(folio))
++		folio_mark_dirty(folio);
++
++	if (folio_test_uptodate(folio)) {
++		loff_t fpos = folio_pos(folio);
++		size_t offset = from - fpos;
++		size_t end = folio_size(folio);
++
++		if (to - fpos < end)
++			end = to - fpos;
++		folio_zero_segment(folio, offset, end);
++		trace_nfs_size_truncate_folio(mapping->host, to);
++	}
++
++	folio_unlock(folio);
++	folio_put(folio);
++}
++EXPORT_SYMBOL_GPL(nfs_truncate_last_folio);
++
+ /*
+  * Decide whether a read/modify/write cycle may be more efficient
+  * then a modify/write/read cycle when writing to a page in the
+@@ -353,6 +385,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping,
+ 
+ 	dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%lu), %u@%lld)\n",
+ 		file, mapping->host->i_ino, len, (long long) pos);
++	nfs_truncate_last_folio(mapping, i_size_read(mapping->host), pos);
+ 
+ 	fgp |= fgf_set_order(len);
+ start:
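
A worked instance of the zeroing window nfs_truncate_last_folio()
computes, assuming a 4 KiB folio at file offset 0x1000, from = 0x1a00
(the old size) and to = 0x3000 (the new size):

	/*
	 *   fpos   = folio_pos(folio)   = 0x1000
	 *   offset = from - fpos        = 0x0a00
	 *   end    = folio_size(folio)  = 0x1000
	 *   to - fpos = 0x2000 >= end, so end stays 0x1000
	 *
	 * folio_zero_segment(folio, 0x0a00, 0x1000) then clears the stale
	 * bytes between the old EOF and the end of its folio, so nothing
	 * written there before can leak into the newly exposed range.
	 */
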
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a32cc45425e287..f6b448666d4195 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -710,6 +710,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 	struct nfs_fattr *fattr;
++	loff_t oldsize = i_size_read(inode);
+ 	int error = 0;
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSSETATTR);
+@@ -725,7 +726,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		if (error)
+ 			return error;
+ 
+-		if (attr->ia_size == i_size_read(inode))
++		if (attr->ia_size == oldsize)
+ 			attr->ia_valid &= ~ATTR_SIZE;
+ 	}
+ 
+@@ -773,8 +774,12 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	}
+ 
+ 	error = NFS_PROTO(inode)->setattr(dentry, fattr, attr);
+-	if (error == 0)
++	if (error == 0) {
++		if (attr->ia_valid & ATTR_SIZE)
++			nfs_truncate_last_folio(inode->i_mapping, oldsize,
++						attr->ia_size);
+ 		error = nfs_refresh_inode(inode, fattr);
++	}
+ 	nfs_free_fattr(fattr);
+ out:
+ 	trace_nfs_setattr_exit(inode, error);
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 0ef0fc6aba3b3c..ae4d039c10d3a4 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -438,6 +438,8 @@ int nfs_file_release(struct inode *, struct file *);
+ int nfs_lock(struct file *, int, struct file_lock *);
+ int nfs_flock(struct file *, int, struct file_lock *);
+ int nfs_check_flags(int);
++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
++			     loff_t to);
+ 
+ /* inode.c */
+ extern struct workqueue_struct *nfsiod_workqueue;
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 48ee3d5d89c4ae..6a0b5871ba3b09 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -138,6 +138,7 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len)
+ 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE],
+ 	};
+ 	struct inode *inode = file_inode(filep);
++	loff_t oldsize = i_size_read(inode);
+ 	int err;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE))
+@@ -146,7 +147,11 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len)
+ 	inode_lock(inode);
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+-	if (err == -EOPNOTSUPP)
++
++	if (err == 0)
++		nfs_truncate_last_folio(inode->i_mapping, oldsize,
++					offset + len);
++	else if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~(NFS_CAP_ALLOCATE |
+ 					     NFS_CAP_ZERO_RANGE);
+ 
+@@ -184,6 +189,7 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len)
+ 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ZERO_RANGE],
+ 	};
+ 	struct inode *inode = file_inode(filep);
++	loff_t oldsize = i_size_read(inode);
+ 	int err;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_ZERO_RANGE))
+@@ -192,9 +198,11 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len)
+ 	inode_lock(inode);
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+-	if (err == 0)
++	if (err == 0) {
++		nfs_truncate_last_folio(inode->i_mapping, oldsize,
++					offset + len);
+ 		truncate_pagecache_range(inode, offset, (offset + len) -1);
+-	if (err == -EOPNOTSUPP)
++	} else if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~NFS_CAP_ZERO_RANGE;
+ 
+ 	inode_unlock(inode);
+@@ -355,22 +363,27 @@ static int process_copy_commit(struct file *dst, loff_t pos_dst,
+ 
+ /**
+  * nfs42_copy_dest_done - perform inode cache updates after clone/copy offload
+- * @inode: pointer to destination inode
++ * @file: pointer to destination file
+  * @pos: destination offset
+  * @len: copy length
++ * @oldsize: length of the file prior to clone/copy
+  *
+  * Punch a hole in the inode page cache, so that the NFS client will
+  * know to retrieve new data.
+  * Update the file size if necessary, and then mark the inode as having
+  * invalid cached values for change attribute, ctime, mtime and space used.
+  */
+-static void nfs42_copy_dest_done(struct inode *inode, loff_t pos, loff_t len)
++static void nfs42_copy_dest_done(struct file *file, loff_t pos, loff_t len,
++				 loff_t oldsize)
+ {
++	struct inode *inode = file_inode(file);
++	struct address_space *mapping = file->f_mapping;
+ 	loff_t newsize = pos + len;
+ 	loff_t end = newsize - 1;
+ 
+-	WARN_ON_ONCE(invalidate_inode_pages2_range(inode->i_mapping,
+-				pos >> PAGE_SHIFT, end >> PAGE_SHIFT));
++	nfs_truncate_last_folio(mapping, oldsize, pos);
++	WARN_ON_ONCE(invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
++						   end >> PAGE_SHIFT));
+ 
+ 	spin_lock(&inode->i_lock);
+ 	if (newsize > i_size_read(inode))
+@@ -403,6 +416,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 	struct nfs_server *src_server = NFS_SERVER(src_inode);
+ 	loff_t pos_src = args->src_pos;
+ 	loff_t pos_dst = args->dst_pos;
++	loff_t oldsize_dst = i_size_read(dst_inode);
+ 	size_t count = args->count;
+ 	ssize_t status;
+ 
+@@ -477,7 +491,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 			goto out;
+ 	}
+ 
+-	nfs42_copy_dest_done(dst_inode, pos_dst, res->write_res.count);
++	nfs42_copy_dest_done(dst, pos_dst, res->write_res.count, oldsize_dst);
+ 	nfs_invalidate_atime(src_inode);
+ 	status = res->write_res.count;
+ out:
+@@ -1244,6 +1258,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
+ 	struct nfs42_clone_res res = {
+ 		.server	= server,
+ 	};
++	loff_t oldsize_dst = i_size_read(dst_inode);
+ 	int status;
+ 
+ 	msg->rpc_argp = &args;
+@@ -1278,7 +1293,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
+ 		/* a zero-length count means clone to EOF in src */
+ 		if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE)
+ 			count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset;
+-		nfs42_copy_dest_done(dst_inode, dst_offset, count);
++		nfs42_copy_dest_done(dst_f, dst_offset, count, oldsize_dst);
+ 		status = nfs_post_op_update_inode(dst_inode, res.dst_fattr);
+ 	}
+ 
+diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h
+index 7a058bd8c566e2..1e4dc632f1800c 100644
+--- a/fs/nfs/nfstrace.h
++++ b/fs/nfs/nfstrace.h
+@@ -267,6 +267,7 @@ DECLARE_EVENT_CLASS(nfs_update_size_class,
+ 			TP_ARGS(inode, new_size))
+ 
+ DEFINE_NFS_UPDATE_SIZE_EVENT(truncate);
++DEFINE_NFS_UPDATE_SIZE_EVENT(truncate_folio);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(wcc);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(update);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(grow);
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index bdc3c2e9334ad5..ea4ff22976c5c8 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -2286,6 +2286,9 @@ static void pagemap_scan_backout_range(struct pagemap_scan_private *p,
+ {
+ 	struct page_region *cur_buf = &p->vec_buf[p->vec_buf_index];
+ 
++	if (!p->vec_buf)
++		return;
++
+ 	if (cur_buf->start != addr)
+ 		cur_buf->end = addr;
+ 	else
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 86cad8ee8e6f3b..ac3ce183bd59a9 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -687,7 +687,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	for (i = 0; i < num_cmds; i++) {
+-		char *buf = rsp_iov[i + i].iov_base;
++		char *buf = rsp_iov[i + 1].iov_base;
+ 
+ 		if (buf && resp_buftype[i + 1] != CIFS_NO_BUFFER)
+ 			rc = server->ops->map_error(buf, false);
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 6550bd9f002c27..74dfb6496095db 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -148,7 +148,7 @@ struct smb_direct_transport {
+ 	wait_queue_head_t	wait_send_pending;
+ 	atomic_t		send_pending;
+ 
+-	struct delayed_work	post_recv_credits_work;
++	struct work_struct	post_recv_credits_work;
+ 	struct work_struct	send_immediate_work;
+ 	struct work_struct	disconnect_work;
+ 
+@@ -367,8 +367,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ 
+ 	spin_lock_init(&t->lock_new_recv_credits);
+ 
+-	INIT_DELAYED_WORK(&t->post_recv_credits_work,
+-			  smb_direct_post_recv_credits);
++	INIT_WORK(&t->post_recv_credits_work,
++		  smb_direct_post_recv_credits);
+ 	INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work);
+ 	INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work);
+ 
+@@ -399,9 +399,9 @@ static void free_transport(struct smb_direct_transport *t)
+ 	wait_event(t->wait_send_pending,
+ 		   atomic_read(&t->send_pending) == 0);
+ 
+-	cancel_work_sync(&t->disconnect_work);
+-	cancel_delayed_work_sync(&t->post_recv_credits_work);
+-	cancel_work_sync(&t->send_immediate_work);
++	disable_work_sync(&t->disconnect_work);
++	disable_work_sync(&t->post_recv_credits_work);
++	disable_work_sync(&t->send_immediate_work);
+ 
+ 	if (t->qp) {
+ 		ib_drain_qp(t->qp);
+@@ -615,8 +615,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			wake_up_interruptible(&t->wait_send_credits);
+ 
+ 		if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count))
+-			mod_delayed_work(smb_direct_wq,
+-					 &t->post_recv_credits_work, 0);
++			queue_work(smb_direct_wq, &t->post_recv_credits_work);
+ 
+ 		if (data_length) {
+ 			enqueue_reassembly(t, recvmsg, (int)data_length);
+@@ -773,8 +772,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf,
+ 		st->count_avail_recvmsg += queue_removed;
+ 		if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) {
+ 			spin_unlock(&st->receive_credit_lock);
+-			mod_delayed_work(smb_direct_wq,
+-					 &st->post_recv_credits_work, 0);
++			queue_work(smb_direct_wq, &st->post_recv_credits_work);
+ 		} else {
+ 			spin_unlock(&st->receive_credit_lock);
+ 		}
+@@ -801,7 +799,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf,
+ static void smb_direct_post_recv_credits(struct work_struct *work)
+ {
+ 	struct smb_direct_transport *t = container_of(work,
+-		struct smb_direct_transport, post_recv_credits_work.work);
++		struct smb_direct_transport, post_recv_credits_work);
+ 	struct smb_direct_recvmsg *recvmsg;
+ 	int receive_credits, credits = 0;
+ 	int ret;
+@@ -1734,7 +1732,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t)
+ 		goto out_err;
+ 	}
+ 
+-	smb_direct_post_recv_credits(&t->post_recv_credits_work.work);
++	smb_direct_post_recv_credits(&t->post_recv_credits_work);
+ 	return 0;
+ out_err:
+ 	put_recvmsg(t, recvmsg);
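
The ksmbd hunks make two independent conversions; a condensed sketch of
both, reusing the transport t from above:

	/*
	 * 1. Every scheduling site used a delay of 0, so delayed_work
	 * bought nothing over a plain work item:
	 */
	INIT_WORK(&t->post_recv_credits_work, smb_direct_post_recv_credits);
	queue_work(smb_direct_wq, &t->post_recv_credits_work);

	/*
	 * 2. On teardown, disable instead of cancel: disable_work_sync()
	 * waits like cancel_work_sync() but also marks the item disabled,
	 * so a late queue_work() from a straggling completion becomes a
	 * no-op instead of touching a freed transport.
	 */
	disable_work_sync(&t->post_recv_credits_work);
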
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 0c70f3a5557505..107b797c33ecf7 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -152,7 +152,7 @@ struct af_alg_ctx {
+ 	size_t used;
+ 	atomic_t rcvused;
+ 
+-	u32		more:1,
++	bool		more:1,
+ 			merge:1,
+ 			enc:1,
+ 			write:1,
+diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h
+index a8a17eeb7d907e..1817df9aceac80 100644
+--- a/include/linux/firmware/imx/sm.h
++++ b/include/linux/firmware/imx/sm.h
+@@ -18,13 +18,43 @@
+ #define SCMI_IMX_CTRL_SAI4_MCLK		4	/* WAKE SAI4 MCLK */
+ #define SCMI_IMX_CTRL_SAI5_MCLK		5	/* WAKE SAI5 MCLK */
+ 
++#if IS_ENABLED(CONFIG_IMX_SCMI_MISC_DRV)
+ int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val);
+ int scmi_imx_misc_ctrl_set(u32 id, u32 val);
++#else
++static inline int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val)
++{
++	return -EOPNOTSUPP;
++}
+ 
++static inline int scmi_imx_misc_ctrl_set(u32 id, u32 val)
++{
++	return -EOPNOTSUPP;
++}
++#endif
++
++#if IS_ENABLED(CONFIG_IMX_SCMI_CPU_DRV)
+ int scmi_imx_cpu_start(u32 cpuid, bool start);
+ int scmi_imx_cpu_started(u32 cpuid, bool *started);
+ int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start, bool boot,
+ 				  bool resume);
++#else
++static inline int scmi_imx_cpu_start(u32 cpuid, bool start)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_cpu_started(u32 cpuid, bool *started)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start,
++						bool boot, bool resume)
++{
++	return -EOPNOTSUPP;
++}
++#endif
+ 
+ enum scmi_imx_lmm_op {
+ 	SCMI_IMX_LMM_BOOT,
+@@ -36,7 +66,24 @@ enum scmi_imx_lmm_op {
+ #define SCMI_IMX_LMM_OP_FORCEFUL	0
+ #define SCMI_IMX_LMM_OP_GRACEFUL	BIT(0)
+ 
++#if IS_ENABLED(CONFIG_IMX_SCMI_LMM_DRV)
+ int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags);
+ int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info);
+ int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector);
++#else
++static inline int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector)
++{
++	return -EOPNOTSUPP;
++}
++#endif
+ #endif
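
The header change is an instance of the standard compile-out stub
pattern. A generic sketch, with CONFIG_FOO_DRV and foo_ctrl_get() as
hypothetical names:

#if IS_ENABLED(CONFIG_FOO_DRV)
int foo_ctrl_get(u32 id, u32 *val);	/* real symbol from the driver */
#else
static inline int foo_ctrl_get(u32 id, u32 *val)
{
	/*
	 * Callers still compile and link when the driver is disabled,
	 * and get a uniform "not supported" error; IS_ENABLED() treats
	 * both =y and =m as enabled.
	 */
	return -EOPNOTSUPP;
}
#endif
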
+diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
+index 939e58c2f3865f..fb5f98fcc72692 100644
+--- a/include/linux/mlx5/fs.h
++++ b/include/linux/mlx5/fs.h
+@@ -308,6 +308,8 @@ struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging);
+ void mlx5_fc_destroy(struct mlx5_core_dev *dev, struct mlx5_fc *counter);
+ struct mlx5_fc *mlx5_fc_local_create(u32 counter_id, u32 offset, u32 bulk_size);
+ void mlx5_fc_local_destroy(struct mlx5_fc *counter);
++void mlx5_fc_local_get(struct mlx5_fc *counter);
++void mlx5_fc_local_put(struct mlx5_fc *counter);
+ u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter);
+ void mlx5_fc_query_cached(struct mlx5_fc *counter,
+ 			  u64 *bytes, u64 *packets, u64 *lastuse);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 439bc124ce7098..1347ae13dd0a1c 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1245,6 +1245,27 @@ static inline struct hci_conn *hci_conn_hash_lookup_ba(struct hci_dev *hdev,
+ 	return NULL;
+ }
+ 
++static inline struct hci_conn *hci_conn_hash_lookup_role(struct hci_dev *hdev,
++							 __u8 type, __u8 role,
++							 bdaddr_t *ba)
++{
++	struct hci_conn_hash *h = &hdev->conn_hash;
++	struct hci_conn  *c;
++
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(c, &h->list, list) {
++		if (c->type == type && c->role == role && !bacmp(&c->dst, ba)) {
++			rcu_read_unlock();
++			return c;
++		}
++	}
++
++	rcu_read_unlock();
++
++	return NULL;
++}
++
+ static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev,
+ 						       bdaddr_t *ba,
+ 						       __u8 ba_type)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 829f0792d8d831..17e5cf18da1efb 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -3013,7 +3013,10 @@ EXPORT_SYMBOL_GPL(bpf_event_output);
+ 
+ /* Always built-in helper functions. */
+ const struct bpf_func_proto bpf_tail_call_proto = {
+-	.func		= NULL,
++	/* func is unused for tail_call; set it to a poison value so that
++	 * the proto still passes the get_helper_proto() check.
++	 */
++	.func		= BPF_PTR_POISON,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_VOID,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4fd89659750b25..a6338936085ae8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8405,6 +8405,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno,
+ 		verifier_bug(env, "Two map pointers in a timer helper");
+ 		return -EFAULT;
+ 	}
++	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++		verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
++		return -EOPNOTSUPP;
++	}
+ 	meta->map_uid = reg->map_uid;
+ 	meta->map_ptr = map;
+ 	return 0;
+@@ -11206,7 +11210,7 @@ static int get_helper_proto(struct bpf_verifier_env *env, int func_id,
+ 		return -EINVAL;
+ 
+ 	*ptr = env->ops->get_func_proto(func_id, env->prog);
+-	return *ptr ? 0 : -EINVAL;
++	return *ptr && (*ptr)->func ? 0 : -EINVAL;
+ }
+ 
+ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
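
The two bpf hunks work as a pair: get_helper_proto() now also rejects a
proto whose .func is NULL, and bpf_tail_call_proto (which has no runtime
helper because the verifier rewrites tail calls) is given BPF_PTR_POISON
so it still passes. A simplified sketch of the resulting check:

	const struct bpf_func_proto *proto;

	proto = env->ops->get_func_proto(func_id, env->prog);
	if (!proto || !proto->func)	/* missing proto or NULL .func */
		return -EINVAL;

	/*
	 * bpf_tail_call_proto.func == BPF_PTR_POISON is non-NULL, so it
	 * passes the check above, yet an accidental indirect call through
	 * it would fault loudly rather than jump to address 0.
	 */
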
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 1ee8eb11f38bae..0cbc174da76ace 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2289,7 +2289,7 @@ __latent_entropy struct task_struct *copy_process(
+ 	if (need_futex_hash_allocate_default(clone_flags)) {
+ 		retval = futex_hash_allocate_default();
+ 		if (retval)
+-			goto bad_fork_core_free;
++			goto bad_fork_cancel_cgroup;
+ 		/*
+ 		 * If we fail beyond this point we don't free the allocated
+ 		 * futex hash map. We assume that another thread will be created
+diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
+index c716a66f86929c..d818b4d47f1bad 100644
+--- a/kernel/futex/requeue.c
++++ b/kernel/futex/requeue.c
+@@ -230,8 +230,9 @@ static inline
+ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ 			   struct futex_hash_bucket *hb)
+ {
+-	q->key = *key;
++	struct task_struct *task;
+ 
++	q->key = *key;
+ 	__futex_unqueue(q);
+ 
+ 	WARN_ON(!q->rt_waiter);
+@@ -243,10 +244,11 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ 	futex_hash_get(hb);
+ 	q->drop_hb_ref = true;
+ 	q->lock_ptr = &hb->lock;
++	task = READ_ONCE(q->task);
+ 
+ 	/* Signal locked state to the waiter */
+ 	futex_requeue_pi_complete(q, 1);
+-	wake_up_state(q->task, TASK_NORMAL);
++	wake_up_state(task, TASK_NORMAL);
+ }
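
Distilled from the futex hunk: the task pointer must be snapshotted
before completion is published, since publishing hands the futex_q back
to the waiter. A short sketch with the ordering spelled out:

	struct task_struct *task;

	task = READ_ONCE(q->task);	/* snapshot while q is still ours */

	/*
	 * After this call the waiter may observe completion, return from
	 * the wait path and free or reuse q, so q must not be touched
	 * again; wake through the snapshot instead.
	 */
	futex_requeue_pi_complete(q, 1);
	wake_up_state(task, TASK_NORMAL);
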
+ 
+ /**
+diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
+index 001fb88a8481d8..edd6cdd9aadcac 100644
+--- a/kernel/sched/ext_idle.c
++++ b/kernel/sched/ext_idle.c
+@@ -75,7 +75,7 @@ static int scx_cpu_node_if_enabled(int cpu)
+ 	return cpu_to_node(cpu);
+ }
+ 
+-bool scx_idle_test_and_clear_cpu(int cpu)
++static bool scx_idle_test_and_clear_cpu(int cpu)
+ {
+ 	int node = scx_cpu_node_if_enabled(cpu);
+ 	struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+@@ -198,7 +198,7 @@ pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, int node, u6
+ /*
+  * Find an idle CPU in the system, starting from @node.
+  */
+-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
++static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+ {
+ 	s32 cpu;
+ 
+@@ -794,6 +794,16 @@ static void reset_idle_masks(struct sched_ext_ops *ops)
+ 		cpumask_and(idle_cpumask(node)->smt, cpu_online_mask, node_mask);
+ 	}
+ }
++#else	/* !CONFIG_SMP */
++static bool scx_idle_test_and_clear_cpu(int cpu)
++{
++	return false;	/* no idle-CPU state to clear on UP */
++}
++
++static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
++{
++	return -EBUSY;
++}
+ #endif	/* CONFIG_SMP */
+ 
+ void scx_idle_enable(struct sched_ext_ops *ops)
+@@ -860,8 +870,34 @@ static bool check_builtin_idle_enabled(void)
+ 	return false;
+ }
+ 
+-s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+-			  const struct cpumask *allowed, u64 flags)
++/*
++ * Determine whether @p is a migration-disabled task in the context of BPF
++ * code.
++ *
++ * We can't simply check whether @p->migration_disabled is set in a
++ * sched_ext callback, because migration is always disabled for the current
++ * task while running BPF code.
++ *
++ * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively
++ * disable and re-enable migration. For this reason, the current task
++ * inside a sched_ext callback is always a migration-disabled task.
++ *
++ * Therefore, when @p->migration_disabled == 1, check whether @p is the
++ * current task or not: if it is, then migration was not disabled before
++ * entering the callback, otherwise migration was disabled.
++ *
++ * Returns true if @p is migration-disabled, false otherwise.
++ */
++static bool is_bpf_migration_disabled(const struct task_struct *p)
++{
++	if (p->migration_disabled == 1)
++		return p != current;
++	else
++		return p->migration_disabled;
++}
++
++static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
++				 const struct cpumask *allowed, u64 flags)
+ {
+ 	struct rq *rq;
+ 	struct rq_flags rf;
+@@ -903,7 +939,7 @@ s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ 	 * selection optimizations and simply check whether the previously
+ 	 * used CPU is idle and within the allowed cpumask.
+ 	 */
+-	if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
++	if (p->nr_cpus_allowed == 1 || is_bpf_migration_disabled(p)) {
+ 		if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) &&
+ 		    scx_idle_test_and_clear_cpu(prev_cpu))
+ 			cpu = prev_cpu;
+@@ -1125,10 +1161,10 @@ __bpf_kfunc bool scx_bpf_test_and_clear_cpu_idle(s32 cpu)
+ 	if (!check_builtin_idle_enabled())
+ 		return false;
+ 
+-	if (kf_cpu_valid(cpu, NULL))
+-		return scx_idle_test_and_clear_cpu(cpu);
+-	else
++	if (!kf_cpu_valid(cpu, NULL))
+ 		return false;
++
++	return scx_idle_test_and_clear_cpu(cpu);
+ }
+ 
+ /**
+diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
+index 37be78a7502b32..05e389ed72e4c1 100644
+--- a/kernel/sched/ext_idle.h
++++ b/kernel/sched/ext_idle.h
+@@ -15,16 +15,9 @@ struct sched_ext_ops;
+ #ifdef CONFIG_SMP
+ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
+ void scx_idle_init_masks(void);
+-bool scx_idle_test_and_clear_cpu(int cpu);
+-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags);
+ #else /* !CONFIG_SMP */
+ static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
+ static inline void scx_idle_init_masks(void) {}
+-static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
+-static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+-{
+-	return -EBUSY;
+-}
+ #endif /* CONFIG_SMP */
+ 
+ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index db40ec5cc9d731..9fb1a8c107ec35 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -815,6 +815,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 	unsigned long bitmap;
+ 	unsigned long ret;
+ 	int offset;
++	int bit;
+ 	int i;
+ 
+ 	ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &offset);
+@@ -829,6 +830,15 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 	if (fregs)
+ 		ftrace_regs_set_instruction_pointer(fregs, ret);
+ 
++	bit = ftrace_test_recursion_trylock(trace.func, ret);
++	/*
++	 * This can fail because ftrace_test_recursion_trylock() allows only
++	 * one level of nesting. If we are already in a nested call, skip the
++	 * probe and just return the original return address.
++	 */
++	if (unlikely(bit < 0))
++		goto out;
++
+ #ifdef CONFIG_FUNCTION_GRAPH_RETVAL
+ 	trace.retval = ftrace_regs_get_return_value(fregs);
+ #endif
+@@ -852,6 +862,8 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 		}
+ 	}
+ 
++	ftrace_test_recursion_unlock(bit);
++out:
+ 	/*
+ 	 * The ftrace_graph_return() may still access the current
+ 	 * ret_stack structure, we need to make sure the update of
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index f9b3aa9afb1784..342e84f8a40e24 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -428,8 +428,9 @@ static int fprobe_addr_list_add(struct fprobe_addr_list *alist, unsigned long ad
+ {
+ 	unsigned long *addrs;
+ 
+-	if (alist->index >= alist->size)
+-		return -ENOMEM;
++	/* The list is full: a previous attempt to expand it failed. */
++	if (alist->index == alist->size)
++		return -ENOSPC;
+ 
+ 	alist->addrs[alist->index++] = addr;
+ 	if (alist->index < alist->size)
+@@ -489,7 +490,7 @@ static int fprobe_module_callback(struct notifier_block *nb,
+ 	for (i = 0; i < FPROBE_IP_TABLE_SIZE; i++)
+ 		fprobe_remove_node_in_module(mod, &fprobe_ip_table[i], &alist);
+ 
+-	if (alist.index < alist.size && alist.index > 0)
++	if (alist.index > 0)
+ 		ftrace_set_filter_ips(&fprobe_graph_ops.ops,
+ 				      alist.addrs, alist.index, 1, 0);
+ 	mutex_unlock(&fprobe_mutex);
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index 5d64a18cacacc6..d06854bd32b357 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -230,6 +230,10 @@ static int dyn_event_open(struct inode *inode, struct file *file)
+ {
+ 	int ret;
+ 
++	ret = security_locked_down(LOCKDOWN_TRACEFS);
++	if (ret)
++		return ret;
++
+ 	ret = tracing_check_open_get_tr(NULL);
+ 	if (ret)
+ 		return ret;
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 337bc0eb5d71bf..dc734867f0fc44 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2325,12 +2325,13 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
+ 	if (count < 1)
+ 		return 0;
+ 
+-	buf = kmalloc(count, GFP_KERNEL);
++	buf = kmalloc(count + 1, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	if (copy_from_user(buf, ubuf, count))
+ 		return -EFAULT;
++	buf[count] = '\0';
+ 
+ 	if (!zalloc_cpumask_var(&osnoise_cpumask_new, GFP_KERNEL))
+ 		return -ENOMEM;
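
The osnoise fix is the standard NUL-termination pattern for user-supplied
strings. A self-contained sketch; the kfree() on the copy failure is an
addition here for completeness, not part of the hunk:

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static char *copy_user_string(const char __user *ubuf, size_t count)
{
	char *buf = kmalloc(count + 1, GFP_KERNEL);	/* +1 for '\0' */

	if (!buf)
		return ERR_PTR(-ENOMEM);
	if (copy_from_user(buf, ubuf, count)) {
		kfree(buf);
		return ERR_PTR(-EFAULT);
	}
	buf[count] = '\0';	/* parsers can now treat buf as a string */
	return buf;
}
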
+diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
+index 2f844c279a3e01..7f24ccc896c649 100644
+--- a/kernel/vhost_task.c
++++ b/kernel/vhost_task.c
+@@ -100,6 +100,7 @@ void vhost_task_stop(struct vhost_task *vtsk)
+ 	 * freeing it below.
+ 	 */
+ 	wait_for_completion(&vtsk->exited);
++	put_task_struct(vtsk->task);
+ 	kfree(vtsk);
+ }
+ EXPORT_SYMBOL_GPL(vhost_task_stop);
+@@ -148,7 +149,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *),
+ 		return ERR_PTR(PTR_ERR(tsk));
+ 	}
+ 
+-	vtsk->task = tsk;
++	vtsk->task = get_task_struct(tsk);
+ 	return vtsk;
+ }
+ EXPORT_SYMBOL_GPL(vhost_task_create);
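
The vhost_task pair of changes pins the task_struct for the lifetime of
the vhost_task, closing a window where the worker thread could exit and
be reaped before vhost_task_stop() runs. The rule, as a sketch of the
two sides:

	/* create: take our own reference on the new thread */
	vtsk->task = get_task_struct(tsk);

	/*
	 * stop: the thread may have exited long ago, but the reference
	 * keeps vtsk->task valid until we are finished with it
	 */
	wait_for_completion(&vtsk->exited);
	put_task_struct(vtsk->task);
	kfree(vtsk);
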
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 57d4ec256682ce..04f121a756d478 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1576,12 +1576,14 @@ static int damon_sysfs_damon_call(int (*fn)(void *data),
+ 		struct damon_sysfs_kdamond *kdamond)
+ {
+ 	struct damon_call_control call_control = {};
++	int err;
+ 
+ 	if (!kdamond->damon_ctx)
+ 		return -EINVAL;
+ 	call_control.fn = fn;
+ 	call_control.data = kdamond;
+-	return damon_call(kdamond->damon_ctx, &call_control);
++	err = damon_call(kdamond->damon_ctx, &call_control);
++	return err ? err : call_control.return_code;
+ }
+ 
+ struct damon_sysfs_schemes_walk_data {
+diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
+index 1ea711786c522d..8bca7fece47f0e 100644
+--- a/mm/kmsan/core.c
++++ b/mm/kmsan/core.c
+@@ -195,7 +195,8 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 				      u32 origin, bool checked)
+ {
+ 	u64 address = (u64)addr;
+-	u32 *shadow_start, *origin_start;
++	void *shadow_start;
++	u32 *aligned_shadow, *origin_start;
+ 	size_t pad = 0;
+ 
+ 	KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size));
+@@ -214,9 +215,12 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 	}
+ 	__memset(shadow_start, b, size);
+ 
+-	if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) {
++	if (IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) {
++		aligned_shadow = shadow_start;
++	} else {
+ 		pad = address % KMSAN_ORIGIN_SIZE;
+ 		address -= pad;
++		aligned_shadow = shadow_start - pad;
+ 		size += pad;
+ 	}
+ 	size = ALIGN(size, KMSAN_ORIGIN_SIZE);
+@@ -230,7 +234,7 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 	 * corresponding shadow slot is zero.
+ 	 */
+ 	for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) {
+-		if (origin || !shadow_start[i])
++		if (origin || !aligned_shadow[i])
+ 			origin_start[i] = origin;
+ 	}
+ }
+diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
+index c6c5b2bbede0cc..902ec48b1e3e6a 100644
+--- a/mm/kmsan/kmsan_test.c
++++ b/mm/kmsan/kmsan_test.c
+@@ -556,6 +556,21 @@ DEFINE_TEST_MEMSETXX(16)
+ DEFINE_TEST_MEMSETXX(32)
+ DEFINE_TEST_MEMSETXX(64)
+ 
++/* Test case: ensure that KMSAN does not access shadow memory out of bounds. */
++static void test_memset_on_guarded_buffer(struct kunit *test)
++{
++	void *buf = vmalloc(PAGE_SIZE);
++
++	kunit_info(test,
++		   "memset() on ends of guarded buffer should not crash\n");
++
++	for (size_t size = 0; size <= 128; size++) {
++		memset(buf, 0xff, size);
++		memset(buf + PAGE_SIZE - size, 0xff, size);
++	}
++	vfree(buf);
++}
++
+ static noinline void fibonacci(int *array, int size, int start)
+ {
+ 	if (start < 2 || (start == size))
+@@ -677,6 +692,7 @@ static struct kunit_case kmsan_test_cases[] = {
+ 	KUNIT_CASE(test_memset16),
+ 	KUNIT_CASE(test_memset32),
+ 	KUNIT_CASE(test_memset64),
++	KUNIT_CASE(test_memset_on_guarded_buffer),
+ 	KUNIT_CASE(test_long_origin_chain),
+ 	KUNIT_CASE(test_stackdepot_roundtrip),
+ 	KUNIT_CASE(test_unpoison_memory),
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 090c7ffa515252..2ef5b3004197b8 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3087,8 +3087,18 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
++	/* Check for existing connection:
++	 *
++	 * 1. If it doesn't exist then it must be receiver/slave role.
++	 * 2. If it does exist, confirm that it is connecting (BT_CONNECT) in
++	 *    the initiator/master case, since there could be a collision where
++	 *    either side is attempting to connect, or something like a fuzzing
++	 *    test is playing tricks to destroy the hcon object before it even
++	 *    attempts to connect (e.g. hcon->state == BT_OPEN).
++	 */
+ 	conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr);
+-	if (!conn) {
++	if (!conn ||
++	    (conn->role == HCI_ROLE_MASTER && conn->state != BT_CONNECT)) {
+ 		/* In case of error status and there is no connection pending
+ 		 * just unlock as there is nothing to cleanup.
+ 		 */
+@@ -4391,6 +4401,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ 	bt_dev_dbg(hdev, "num %d", ev->num);
+ 
++	hci_dev_lock(hdev);
++
+ 	for (i = 0; i < ev->num; i++) {
+ 		struct hci_comp_pkts_info *info = &ev->handles[i];
+ 		struct hci_conn *conn;
+@@ -4472,6 +4484,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 	}
+ 
+ 	queue_work(hdev->workqueue, &hdev->tx_work);
++
++	hci_dev_unlock(hdev);
+ }
+ 
+ static void hci_mode_change_evt(struct hci_dev *hdev, void *data,
+@@ -5634,8 +5648,18 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 	 */
+ 	hci_dev_clear_flag(hdev, HCI_LE_ADV);
+ 
+-	conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, bdaddr);
+-	if (!conn) {
++	/* Check for existing connection:
++	 *
++	 * 1. If it doesn't exist then use the role to create a new object.
++	 * 2. If it does exist, confirm that it is connecting (BT_CONNECT) in
++	 *    the initiator/master case, since there could be a collision where
++	 *    either side is attempting to connect, or something like a fuzzing
++	 *    test is playing tricks to destroy the hcon object before it even
++	 *    attempts to connect (e.g. hcon->state == BT_OPEN).
++	 */
++	conn = hci_conn_hash_lookup_role(hdev, LE_LINK, role, bdaddr);
++	if (!conn ||
++	    (conn->role == HCI_ROLE_MASTER && conn->state != BT_CONNECT)) {
+ 		/* In case of error status and there is no connection pending
+ 		 * just unlock as there is nothing to cleanup.
+ 		 */
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index a25439f1eeac28..7ca544d7791f43 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2594,6 +2594,13 @@ static int hci_resume_advertising_sync(struct hci_dev *hdev)
+ 			hci_remove_ext_adv_instance_sync(hdev, adv->instance,
+ 							 NULL);
+ 		}
++
++		/* If current advertising instance is set to instance 0x00
++		 * then we need to re-enable it.
++		 */
++		if (!hdev->cur_adv_instance)
++			err = hci_enable_ext_advertising_sync(hdev,
++							      hdev->cur_adv_instance);
+ 	} else {
+ 		/* Schedule for most recent instance to be restarted and begin
+ 		 * the software rotation loop
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 50634ef5c8b707..225140fcb3d6c8 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1323,8 +1323,7 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ 	struct mgmt_mode *cp;
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	cp = cmd->param;
+@@ -1351,23 +1350,29 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ 				mgmt_status(err));
+ 	}
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_powered_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp;
++	struct mgmt_mode cp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
+ 		return -ECANCELED;
++	}
+ 
+-	cp = cmd->param;
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	return hci_set_powered_sync(hdev, cp->val);
++	return hci_set_powered_sync(hdev, cp.val);
+ }
+ 
+ static int set_powered(struct sock *sk, struct hci_dev *hdev, void *data,
+@@ -1516,8 +1521,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -1539,12 +1543,15 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 	new_settings(hdev, cmd->sk);
+ 
+ done:
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 	hci_dev_unlock(hdev);
+ }
+ 
+ static int set_discoverable_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	return hci_update_discoverable_sync(hdev);
+@@ -1691,8 +1698,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -1707,7 +1713,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 	new_settings(hdev, cmd->sk);
+ 
+ done:
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_dev_unlock(hdev);
+ }
+@@ -1743,6 +1749,9 @@ static int set_connectable_update_settings(struct hci_dev *hdev,
+ 
+ static int set_connectable_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	return hci_update_connectable_sync(hdev);
+@@ -1919,14 +1928,17 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 enable = cp->val;
++	struct mgmt_mode *cp;
++	u8 enable;
+ 	bool changed;
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
++	cp = cmd->param;
++	enable = cp->val;
++
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+ 
+@@ -1935,8 +1947,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 			new_settings(hdev, NULL);
+ 		}
+ 
+-		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
+-				     cmd_status_rsp, &mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		return;
+ 	}
+ 
+@@ -1946,7 +1957,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);
++	settings_rsp(cmd, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -1960,14 +1971,25 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_ssp_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
++	struct mgmt_mode cp;
+ 	bool changed = false;
+ 	int err;
+ 
+-	if (cp->val)
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	if (cp.val)
+ 		changed = !hci_dev_test_and_set_flag(hdev, HCI_SSP_ENABLED);
+ 
+-	err = hci_write_ssp_mode_sync(hdev, cp->val);
++	err = hci_write_ssp_mode_sync(hdev, cp.val);
+ 
+ 	if (!err && changed)
+ 		hci_dev_clear_flag(hdev, HCI_SSP_ENABLED);
+@@ -2060,32 +2082,50 @@ static int set_hs(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 
+ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct mgmt_pending_cmd *cmd = data;
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 status = mgmt_status(err);
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
+-				     &status);
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, data))
+ 		return;
++
++	if (status) {
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, status);
++		goto done;
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);
++	settings_rsp(cmd, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+ 	if (match.sk)
+ 		sock_put(match.sk);
++
++done:
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_le_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 val = !!cp->val;
++	struct mgmt_mode cp;
++	u8 val;
+ 	int err;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++	val = !!cp.val;
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
+ 	if (!val) {
+ 		hci_clear_adv_instance_sync(hdev, NULL, 0x00, true);
+ 
+@@ -2127,7 +2167,12 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	u8 status = mgmt_status(err);
+-	struct sock *sk = cmd->sk;
++	struct sock *sk;
++
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
++		return;
++
++	sk = cmd->sk;
+ 
+ 	if (status) {
+ 		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
+@@ -2142,24 +2187,37 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_mesh_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_mesh *cp = cmd->param;
+-	size_t len = cmd->param_len;
++	struct mgmt_cp_set_mesh cp;
++	size_t len;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	len = cmd->param_len;
+ 
+ 	memset(hdev->mesh_ad_types, 0, sizeof(hdev->mesh_ad_types));
+ 
+-	if (cp->enable)
++	if (cp.enable)
+ 		hci_dev_set_flag(hdev, HCI_MESH);
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_MESH);
+ 
+-	hdev->le_scan_interval = __le16_to_cpu(cp->period);
+-	hdev->le_scan_window = __le16_to_cpu(cp->window);
++	hdev->le_scan_interval = __le16_to_cpu(cp.period);
++	hdev->le_scan_window = __le16_to_cpu(cp.window);
+ 
+-	len -= sizeof(*cp);
++	len -= sizeof(cp);
+ 
+ 	/* If filters don't fit, forward all adv pkts */
+ 	if (len <= sizeof(hdev->mesh_ad_types))
+-		memcpy(hdev->mesh_ad_types, cp->ad_types, len);
++		memcpy(hdev->mesh_ad_types, cp.ad_types, len);
+ 
+ 	hci_update_passive_scan_sync(hdev);
+ 	return 0;
+@@ -3867,15 +3925,16 @@ static int name_changed_sync(struct hci_dev *hdev, void *data)
+ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_local_name *cp = cmd->param;
++	struct mgmt_cp_set_local_name *cp;
+ 	u8 status = mgmt_status(err);
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
++	cp = cmd->param;
++
+ 	if (status) {
+ 		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_LOCAL_NAME,
+ 				status);
+@@ -3887,16 +3946,27 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ 			hci_cmd_sync_queue(hdev, name_changed_sync, NULL, NULL);
+ 	}
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_name_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_local_name *cp = cmd->param;
++	struct mgmt_cp_set_local_name cp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	if (lmp_bredr_capable(hdev)) {
+-		hci_update_name_sync(hdev, cp->name);
++		hci_update_name_sync(hdev, cp.name);
+ 		hci_update_eir_sync(hdev);
+ 	}
+ 
+@@ -4048,12 +4118,10 @@ int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip)
+ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct sk_buff *skb = cmd->skb;
++	struct sk_buff *skb;
+ 	u8 status = mgmt_status(err);
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+-		return;
++	skb = cmd->skb;
+ 
+ 	if (!status) {
+ 		if (!skb)
+@@ -4080,7 +4148,7 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ 	if (skb && !IS_ERR(skb))
+ 		kfree_skb(skb);
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_default_phy_sync(struct hci_dev *hdev, void *data)
+@@ -4088,7 +4156,9 @@ static int set_default_phy_sync(struct hci_dev *hdev, void *data)
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	struct mgmt_cp_set_phy_configuration *cp = cmd->param;
+ 	struct hci_cp_le_set_default_phy cp_phy;
+-	u32 selected_phys = __le32_to_cpu(cp->selected_phys);
++	u32 selected_phys;
++
++	selected_phys = __le32_to_cpu(cp->selected_phys);
+ 
+ 	memset(&cp_phy, 0, sizeof(cp_phy));
+ 
+@@ -4228,7 +4298,7 @@ static int set_phy_configuration(struct sock *sk, struct hci_dev *hdev,
+ 		goto unlock;
+ 	}
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_SET_PHY_CONFIGURATION, hdev, data,
++	cmd = mgmt_pending_new(sk, MGMT_OP_SET_PHY_CONFIGURATION, hdev, data,
+ 			       len);
+ 	if (!cmd)
+ 		err = -ENOMEM;
+@@ -5189,7 +5259,17 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ {
+ 	struct mgmt_rp_add_adv_patterns_monitor rp;
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct adv_monitor *monitor = cmd->user_data;
++	struct adv_monitor *monitor;
++
++	/* This is likely the result of hdev being closed and mgmt_index_removed
++	 * is attempting to clean up any pending command so
++	 * hci_adv_monitors_clear is about to be called which will take care of
++	 * freeing the adv_monitor instances.
++	 */
++	if (status == -ECANCELED && !mgmt_pending_valid(hdev, cmd))
++		return;
++
++	monitor = cmd->user_data;
+ 
+ 	hci_dev_lock(hdev);
+ 
+@@ -5215,9 +5295,20 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_add_adv_patterns_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct adv_monitor *monitor = cmd->user_data;
++	struct adv_monitor *mon;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	mon = cmd->user_data;
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+-	return hci_add_adv_monitor(hdev, monitor);
++	return hci_add_adv_monitor(hdev, mon);
+ }
+ 
+ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,
+@@ -5484,7 +5575,8 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 			       status);
+ }
+ 
+-static void read_local_oob_data_complete(struct hci_dev *hdev, void *data, int err)
++static void read_local_oob_data_complete(struct hci_dev *hdev, void *data,
++					 int err)
+ {
+ 	struct mgmt_rp_read_local_oob_data mgmt_rp;
+ 	size_t rp_size = sizeof(mgmt_rp);
+@@ -5504,7 +5596,8 @@ static void read_local_oob_data_complete(struct hci_dev *hdev, void *data, int e
+ 	bt_dev_dbg(hdev, "status %d", status);
+ 
+ 	if (status) {
+-		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA, status);
++		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA,
++				status);
+ 		goto remove;
+ 	}
+ 
+@@ -5786,17 +5879,12 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (err == -ECANCELED)
+-		return;
+-
+-	if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
+-	    cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
+-	    cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_discovery_set_state(hdev, err ? DISCOVERY_STOPPED:
+ 				DISCOVERY_FINDING);
+@@ -5804,6 +5892,9 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int start_discovery_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	return hci_start_discovery_sync(hdev);
+ }
+ 
+@@ -6009,15 +6100,14 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	if (!err)
+ 		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+@@ -6025,6 +6115,9 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int stop_discovery_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	return hci_stop_discovery_sync(hdev);
+ }
+ 
+@@ -6234,14 +6327,18 @@ static void enable_advertising_instance(struct hci_dev *hdev, int err)
+ 
+ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct mgmt_pending_cmd *cmd = data;
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 instance;
+ 	struct adv_info *adv_instance;
+ 	u8 status = mgmt_status(err);
+ 
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, data))
++		return;
++
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
+-				     cmd_status_rsp, &status);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, status);
++		mgmt_pending_free(cmd);
+ 		return;
+ 	}
+ 
+@@ -6250,8 +6347,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
+-			     &match);
++	settings_rsp(cmd, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -6283,10 +6379,23 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_adv_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 val = !!cp->val;
++	struct mgmt_mode cp;
++	u8 val;
+ 
+-	if (cp->val == 0x02)
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	val = !!cp.val;
++
++	if (cp.val == 0x02)
+ 		hci_dev_set_flag(hdev, HCI_ADVERTISING_CONNECTABLE);
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING_CONNECTABLE);
+@@ -8039,10 +8148,6 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ 	u8 status = mgmt_status(err);
+ 	u16 eir_len;
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+-		return;
+-
+ 	if (!status) {
+ 		if (!skb)
+ 			status = MGMT_STATUS_FAILED;
+@@ -8149,7 +8254,7 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ 		kfree_skb(skb);
+ 
+ 	kfree(mgmt_rp);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int read_local_ssp_oob_req(struct hci_dev *hdev, struct sock *sk,
+@@ -8158,7 +8263,7 @@ static int read_local_ssp_oob_req(struct hci_dev *hdev, struct sock *sk,
+ 	struct mgmt_pending_cmd *cmd;
+ 	int err;
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev,
++	cmd = mgmt_pending_new(sk, MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev,
+ 			       cp, sizeof(*cp));
+ 	if (!cmd)
+ 		return -ENOMEM;
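
A note on the mgmt.c hunks above: they all apply one pattern. Completion
handlers validate their pending command with mgmt_pending_valid() (which
also unlinks it) instead of comparing against pending_find(), and the
*_sync work handlers snapshot cmd->param onto the stack while holding
hdev->mgmt_pending_lock, because the command can be cancelled and freed
concurrently (e.g. by mgmt_index_removed during teardown, per the comment
in the adv-monitor hunk). Below is a minimal user-space sketch of the
snapshot-under-lock idiom; pending_cmd, still_listed and run_sync are
illustrative names, not kernel APIs.

        /* Minimal sketch of the copy-under-lock pattern used above.
         * All names here are illustrative, not the kernel's API.
         */
        #include <pthread.h>
        #include <stdbool.h>
        #include <string.h>

        struct pending_cmd {
                struct pending_cmd *next;   /* linked into a pending list */
                unsigned char param[16];    /* request parameters */
        };

        struct dev {
                pthread_mutex_t pending_lock;
                struct pending_cmd *pending;    /* head of pending list */
        };

        /* True iff cmd is still listed; caller holds pending_lock. */
        static bool still_listed(struct dev *d, struct pending_cmd *cmd)
        {
                for (struct pending_cmd *t = d->pending; t; t = t->next)
                        if (t == cmd)
                                return true;
                return false;
        }

        /* Snapshot the parameters while the command is known alive. */
        static int run_sync(struct dev *d, struct pending_cmd *cmd)
        {
                unsigned char param[16];

                pthread_mutex_lock(&d->pending_lock);
                if (!still_listed(d, cmd)) {    /* cancelled meanwhile */
                        pthread_mutex_unlock(&d->pending_lock);
                        return -1;              /* -ECANCELED upstream */
                }
                memcpy(param, cmd->param, sizeof(param));
                pthread_mutex_unlock(&d->pending_lock);

                /* work on the stack copy only; cmd may vanish from here */
                (void)param;
                return 0;
        }
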
+diff --git a/net/bluetooth/mgmt_util.c b/net/bluetooth/mgmt_util.c
+index a88a07da394734..aa7b5585cb268b 100644
+--- a/net/bluetooth/mgmt_util.c
++++ b/net/bluetooth/mgmt_util.c
+@@ -320,6 +320,52 @@ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
+ 	mgmt_pending_free(cmd);
+ }
+ 
++bool __mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	struct mgmt_pending_cmd *tmp;
++
++	lockdep_assert_held(&hdev->mgmt_pending_lock);
++
++	if (!cmd)
++		return false;
++
++	list_for_each_entry(tmp, &hdev->mgmt_pending, list) {
++		if (cmd == tmp)
++			return true;
++	}
++
++	return false;
++}
++
++bool mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	bool listed;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++	listed = __mgmt_pending_listed(hdev, cmd);
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	return listed;
++}
++
++bool mgmt_pending_valid(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	bool listed;
++
++	if (!cmd)
++		return false;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	listed = __mgmt_pending_listed(hdev, cmd);
++	if (listed)
++		list_del(&cmd->list);
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	return listed;
++}
++
+ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ 		       void (*cb)(struct mgmt_mesh_tx *mesh_tx, void *data),
+ 		       void *data, struct sock *sk)
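
The three new helpers split responsibilities: __mgmt_pending_listed() is
the bare membership test and asserts mgmt_pending_lock is already held,
mgmt_pending_listed() wraps it in the lock, and mgmt_pending_valid()
additionally unlinks the command on success, handing ownership to the
caller. That ownership transfer is why the converted handlers above end
with mgmt_pending_free() rather than mgmt_pending_remove(), which would
try to unlink a second time. A sketch of a completion handler following
this contract, mirroring the converted handlers (example_complete is a
hypothetical name):

        /* Hypothetical handler built on the new helpers. */
        static void example_complete(struct hci_dev *hdev, void *data,
                                     int err)
        {
                struct mgmt_pending_cmd *cmd = data;

                if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
                        return; /* cancelled; cleanup happens elsewhere */

                /* ... send the reply on cmd->sk ... */

                mgmt_pending_free(cmd); /* already unlinked, free directly */
        }
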
+diff --git a/net/bluetooth/mgmt_util.h b/net/bluetooth/mgmt_util.h
+index 024e51dd693756..bcba8c9d895285 100644
+--- a/net/bluetooth/mgmt_util.h
++++ b/net/bluetooth/mgmt_util.h
+@@ -65,6 +65,9 @@ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+ 					  void *data, u16 len);
+ void mgmt_pending_free(struct mgmt_pending_cmd *cmd);
+ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd);
++bool __mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
++bool mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
++bool mgmt_pending_valid(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
+ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ 		       void (*cb)(struct mgmt_mesh_tx *mesh_tx, void *data),
+ 		       void *data, struct sock *sk);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index d6420b74ea9c6a..cb77bb84371bd3 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6667,7 +6667,7 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
+ 		return NULL;
+ 
+ 	while (data_len) {
+-		if (nr_frags == MAX_SKB_FRAGS - 1)
++		if (nr_frags == MAX_SKB_FRAGS)
+ 			goto failure;
+ 		while (order && PAGE_ALIGN(data_len) < (PAGE_SIZE << order))
+ 			order--;
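
This one-character change reads as an off-by-one fix: bailing out once
nr_frags reached MAX_SKB_FRAGS - 1 left the final fragment slot unusable,
so an allocation needing exactly MAX_SKB_FRAGS fragments failed even
though it would fit. "Array full" is equality with the capacity itself,
tested before each insertion. Schematically (standalone illustration,
not kernel code):

        #include <assert.h>

        #define CAP 4

        static int slots_used;

        /* 0 on success, -1 only when genuinely full. */
        static int take_slot(void)
        {
                if (slots_used == CAP)  /* was CAP - 1: lost one slot */
                        return -1;
                slots_used++;
                return 0;
        }

        int main(void)
        {
                for (int i = 0; i < CAP; i++)
                        assert(take_slot() == 0);  /* all CAP slots usable */
                assert(take_slot() == -1);         /* only now full */
                return 0;
        }
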
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 4397e89d3123a0..423f876d14c6a8 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -2400,6 +2400,13 @@ static int replace_nexthop_single(struct net *net, struct nexthop *old,
+ 		return -EINVAL;
+ 	}
+ 
++	if (!list_empty(&old->grp_list) &&
++	    rtnl_dereference(new->nh_info)->fdb_nh !=
++	    rtnl_dereference(old->nh_info)->fdb_nh) {
++		NL_SET_ERR_MSG(extack, "Cannot change nexthop FDB status while in a group");
++		return -EINVAL;
++	}
++
+ 	err = call_nexthop_notifiers(net, NEXTHOP_EVENT_REPLACE, new, extack);
+ 	if (err)
+ 		return err;
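
As the error message states, a nexthop's FDB status may not change while
it is a member of a group: FDB and non-FDB nexthops cannot be mixed in
one group, and while group creation enforces that, the replace path could
previously swap a mismatched entry into an existing group. The invariant,
schematically (illustrative types only):

        #include <stdbool.h>

        struct nh { bool fdb_nh; bool in_group; };

        static int can_replace(const struct nh *old, const struct nh *new)
        {
                if (old->in_group && old->fdb_nh != new->fdb_nh)
                        return -1;  /* -EINVAL: group stays homogeneous */
                return 0;
        }
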
+diff --git a/net/smc/smc_loopback.c b/net/smc/smc_loopback.c
+index 3c5f64ca41153f..85f0b7853b1737 100644
+--- a/net/smc/smc_loopback.c
++++ b/net/smc/smc_loopback.c
+@@ -56,6 +56,7 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
+ {
+ 	struct smc_lo_dmb_node *dmb_node, *tmp_node;
+ 	struct smc_lo_dev *ldev = smcd->priv;
++	struct folio *folio;
+ 	int sba_idx, rc;
+ 
+ 	/* check space for new dmb */
+@@ -74,13 +75,16 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
+ 
+ 	dmb_node->sba_idx = sba_idx;
+ 	dmb_node->len = dmb->dmb_len;
+-	dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL |
+-				     __GFP_NOWARN | __GFP_NORETRY |
+-				     __GFP_NOMEMALLOC);
+-	if (!dmb_node->cpu_addr) {
++
++	/* not critical; fail under memory pressure and fallback to TCP */
++	folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC |
++			    __GFP_NORETRY | __GFP_ZERO,
++			    get_order(dmb_node->len));
++	if (!folio) {
+ 		rc = -ENOMEM;
+ 		goto err_node;
+ 	}
++	dmb_node->cpu_addr = folio_address(folio);
+ 	dmb_node->dma_addr = SMC_DMA_ADDR_INVALID;
+ 	refcount_set(&dmb_node->refcnt, 1);
+ 
+@@ -122,7 +126,7 @@ static void __smc_lo_unregister_dmb(struct smc_lo_dev *ldev,
+ 	write_unlock_bh(&ldev->dmb_ht_lock);
+ 
+ 	clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask);
+-	kvfree(dmb_node->cpu_addr);
++	folio_put(virt_to_folio(dmb_node->cpu_addr));
+ 	kfree(dmb_node);
+ 
+ 	if (atomic_dec_and_test(&ldev->dmb_cnt))
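
The DMB buffer moves from kzalloc() to a physically contiguous folio
allocation. The GFP flags carry the intent spelled out in the hunk's
comment: __GFP_NORETRY and __GFP_NOMEMALLOC make the allocation give up
early under memory pressure instead of hammering reclaim, which is safe
because a failed SMC-D registration just falls back to TCP. The free
side switches accordingly from kvfree() to folio_put(virt_to_folio()).
A minimal kernel-style sketch of the pairing (buf_alloc/buf_free are
hypothetical names):

        #include <linux/gfp.h>
        #include <linux/mm.h>

        /* Contiguous, zeroed buffer that may fail fast under pressure. */
        static void *buf_alloc(size_t len)
        {
                struct folio *folio;

                folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN |
                                    __GFP_NOMEMALLOC | __GFP_NORETRY |
                                    __GFP_ZERO, get_order(len));
                if (!folio)
                        return NULL;    /* caller falls back, e.g. to TCP */
                return folio_address(folio);
        }

        static void buf_free(void *addr)
        {
                folio_put(virt_to_folio(addr)); /* pairs with folio_alloc */
        }
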
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index c7a1f080d2de3a..44b9de6e4e7788 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -438,7 +438,7 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 
+ 	check_tunnel_size = x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+ 			    x->props.mode == XFRM_MODE_TUNNEL;
+-	switch (x->props.family) {
++	switch (x->inner_mode.family) {
+ 	case AF_INET:
+ 		/* Check for IPv4 options */
+ 		if (ip_hdr(skb)->ihl != 5)
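
The switch now keys on x->inner_mode.family rather than x->props.family:
the header inspected in the switch body belongs to the packet being
encapsulated, and with inter-family tunnel offload (IPv4 in IPv6 or the
reverse) the inner family differs from the outer SA family, so the old
test could parse the wrong header layout.
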
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 86337453709bad..4b2eb260f5c2e9 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2578,6 +2578,8 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 
+ 	for (h = 0; h < range; h++) {
+ 		u32 spi = (low == high) ? low : get_random_u32_inclusive(low, high);
++		if (spi == 0)
++			goto next;
+ 		newspi = htonl(spi);
+ 
+ 		spin_lock_bh(&net->xfrm.xfrm_state_lock);
+@@ -2593,6 +2595,7 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 		xfrm_state_put(x0);
+ 		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
++next:
+ 		if (signal_pending(current)) {
+ 			err = -ERESTARTSYS;
+ 			goto unlock;
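
get_random_u32_inclusive(low, high) can legitimately return 0 when the
requested range includes it, and SPI 0 is reserved (it means "no SA" and
must not go on the wire), so the hunk redraws instead of installing an
unusable state; the goto lands just before the signal_pending() test so
signals are still serviced every iteration. The resulting loop shape, as
a kernel-context sketch (pick_spi is a hypothetical helper):

        #include <linux/random.h>

        /* Pick a nonzero SPI in [low, high], redrawing on 0. */
        static u32 pick_spi(u32 low, u32 high, unsigned int attempts)
        {
                while (attempts--) {
                        u32 spi = (low == high) ? low :
                                  get_random_u32_inclusive(low, high);
                        if (spi)
                                return spi; /* caller applies htonl() */
                }
                return 0;                   /* 0 signals failure */
        }
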
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4819bd332f0390..fa28e3e85861c2 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7298,6 +7298,11 @@ static void cs35l41_fixup_spi_two(struct hda_codec *codec, const struct hda_fixu
+ 	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 2);
+ }
+ 
++static void cs35l41_fixup_spi_one(struct hda_codec *codec, const struct hda_fixup *fix, int action)
++{
++	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 1);
++}
++
+ static void cs35l41_fixup_spi_four(struct hda_codec *codec, const struct hda_fixup *fix, int action)
+ {
+ 	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 4);
+@@ -7991,6 +7996,7 @@ enum {
+ 	ALC287_FIXUP_CS35L41_I2C_2,
+ 	ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED,
+ 	ALC287_FIXUP_CS35L41_I2C_4,
++	ALC245_FIXUP_CS35L41_SPI_1,
+ 	ALC245_FIXUP_CS35L41_SPI_2,
+ 	ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED,
+ 	ALC245_FIXUP_CS35L41_SPI_4,
+@@ -10120,6 +10126,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+ 	},
++	[ALC245_FIXUP_CS35L41_SPI_1] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_spi_one,
++	},
+ 	[ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+@@ -11099,6 +11109,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
++	SND_PCI_QUIRK(0x1043, 0x88f4, "ASUS NUC14LNS", ALC245_FIXUP_CS35L41_SPI_1),
+ 	SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
+ 	SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index a0b3679b17b423..1211a2b8a2a2c7 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -826,6 +826,16 @@ static const struct platform_device_id board_ids[] = {
+ 					SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK |
+ 					SOF_ES8336_JD_INVERTED),
+ 	},
++	{
++		.name = "ptl_es83x6_c1_h02",
++		.driver_data = (kernel_ulong_t)(SOF_ES8336_SSP_CODEC(1) |
++					SOF_NO_OF_HDMI_CAPTURE_SSP(2) |
++					SOF_HDMI_CAPTURE_1_SSP(0) |
++					SOF_HDMI_CAPTURE_2_SSP(2) |
++					SOF_SSP_HDMI_CAPTURE_PRESENT |
++					SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK |
++					SOF_ES8336_JD_INVERTED),
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(platform, board_ids);
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index f5925bd0a3fc67..4994aaccc583ae 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -892,6 +892,13 @@ static const struct platform_device_id board_ids[] = {
+ 					SOF_SSP_PORT_BT_OFFLOAD(2) |
+ 					SOF_BT_OFFLOAD_PRESENT),
+ 	},
++	{
++		.name = "ptl_rt5682_c1_h02",
++		.driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
++					SOF_SSP_PORT_CODEC(1) |
++					/* SSP 0 and SSP 2 are used for HDMI IN */
++					SOF_SSP_MASK_HDMI_CAPTURE(0x5)),
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(platform, board_ids);
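
In the new ptl_rt5682_c1_h02 entry, SOF_SSP_MASK_HDMI_CAPTURE(0x5) is a
bit-per-SSP mask: 0x5 sets bits 0 and 2, i.e. SSP0 and SSP2, matching
the inline comment about HDMI IN, while the codec itself sits on SSP1
via SOF_SSP_PORT_CODEC(1).
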
+diff --git a/sound/soc/intel/common/soc-acpi-intel-ptl-match.c b/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
+index eae75f3f0fa40d..d90d8672ab77d1 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
+@@ -21,7 +21,24 @@ static const struct snd_soc_acpi_codecs ptl_rt5682_rt5682s_hp = {
+ 	.codecs = {RT5682_ACPI_HID, RT5682S_ACPI_HID},
+ };
+ 
++static const struct snd_soc_acpi_codecs ptl_essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
++static const struct snd_soc_acpi_codecs ptl_lt6911_hdmi = {
++	.num_codecs = 1,
++	.codecs = {"INTC10B0"}
++};
++
+ struct snd_soc_acpi_mach snd_soc_acpi_intel_ptl_machines[] = {
++	{
++		.comp_ids = &ptl_rt5682_rt5682s_hp,
++		.drv_name = "ptl_rt5682_c1_h02",
++		.machine_quirk = snd_soc_acpi_codec_list,
++		.quirk_data = &ptl_lt6911_hdmi,
++		.sof_tplg_filename = "sof-ptl-rt5682-ssp1-hdmi-ssp02.tplg",
++	},
+ 	{
+ 		.comp_ids = &ptl_rt5682_rt5682s_hp,
+ 		.drv_name = "ptl_rt5682_def",
+@@ -29,6 +46,21 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_ptl_machines[] = {
+ 		.tplg_quirk_mask = SND_SOC_ACPI_TPLG_INTEL_AMP_NAME |
+ 					SND_SOC_ACPI_TPLG_INTEL_CODEC_NAME,
+ 	},
++	{
++		.comp_ids = &ptl_essx_83x6,
++		.drv_name = "ptl_es83x6_c1_h02",
++		.machine_quirk = snd_soc_acpi_codec_list,
++		.quirk_data = &ptl_lt6911_hdmi,
++		.sof_tplg_filename = "sof-ptl-es83x6-ssp1-hdmi-ssp02.tplg",
++	},
++	{
++		.comp_ids = &ptl_essx_83x6,
++		.drv_name = "sof-essx8336",
++		.sof_tplg_filename = "sof-ptl-es8336", /* the tplg suffix is added at run time */
++		.tplg_quirk_mask = SND_SOC_ACPI_TPLG_INTEL_SSP_NUMBER |
++					SND_SOC_ACPI_TPLG_INTEL_SSP_MSB |
++					SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER,
++	},
+ 	{},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_ptl_machines);
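
Ordering matters in snd_soc_acpi_intel_ptl_machines[]: the table is
scanned first-match-wins, and the new *_h02 entries use machine_quirk =
snd_soc_acpi_codec_list with quirk_data naming INTC10B0 (the LT6911 HDMI
bridge), so they match only when that bridge is also present in ACPI and
otherwise fall through to the generic rt5682/es8336 entries placed after
them.
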
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 9530c59b3cf4c8..3df537fdb9f1c7 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/hid.h>
+ #include <linux/init.h>
++#include <linux/input.h>
+ #include <linux/math64.h>
+ #include <linux/slab.h>
+ #include <linux/usb.h>
+@@ -55,13 +56,13 @@ struct std_mono_table {
+  * version, we keep it mono for simplicity.
+  */
+ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+-				unsigned int unitid,
+-				unsigned int control,
+-				unsigned int cmask,
+-				int val_type,
+-				unsigned int idx_off,
+-				const char *name,
+-				snd_kcontrol_tlv_rw_t *tlv_callback)
++					  unsigned int unitid,
++					  unsigned int control,
++					  unsigned int cmask,
++					  int val_type,
++					  unsigned int idx_off,
++					  const char *name,
++					  snd_kcontrol_tlv_rw_t *tlv_callback)
+ {
+ 	struct usb_mixer_elem_info *cval;
+ 	struct snd_kcontrol *kctl;
+@@ -78,7 +79,8 @@ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+ 	cval->idx_off = idx_off;
+ 
+ 	/* get_min_max() is called only for integer volumes later,
+-	 * so provide a short-cut for booleans */
++	 * so provide a short-cut for booleans
++	 */
+ 	cval->min = 0;
+ 	cval->max = 1;
+ 	cval->res = 0;
+@@ -108,15 +110,16 @@ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+ }
+ 
+ static int snd_create_std_mono_ctl(struct usb_mixer_interface *mixer,
+-				unsigned int unitid,
+-				unsigned int control,
+-				unsigned int cmask,
+-				int val_type,
+-				const char *name,
+-				snd_kcontrol_tlv_rw_t *tlv_callback)
++				   unsigned int unitid,
++				   unsigned int control,
++				   unsigned int cmask,
++				   int val_type,
++				   const char *name,
++				   snd_kcontrol_tlv_rw_t *tlv_callback)
+ {
+ 	return snd_create_std_mono_ctl_offset(mixer, unitid, control, cmask,
+-		val_type, 0 /* Offset */, name, tlv_callback);
++					      val_type, 0 /* Offset */,
++					      name, tlv_callback);
+ }
+ 
+ /*
+@@ -127,9 +130,10 @@ static int snd_create_std_mono_table(struct usb_mixer_interface *mixer,
+ {
+ 	int err;
+ 
+-	while (t->name != NULL) {
++	while (t->name) {
+ 		err = snd_create_std_mono_ctl(mixer, t->unitid, t->control,
+-				t->cmask, t->val_type, t->name, t->tlv_callback);
++					      t->cmask, t->val_type, t->name,
++					      t->tlv_callback);
+ 		if (err < 0)
+ 			return err;
+ 		t++;
+@@ -209,12 +213,11 @@ static void snd_usb_soundblaster_remote_complete(struct urb *urb)
+ 	if (code == rc->mute_code)
+ 		snd_usb_mixer_notify_id(mixer, rc->mute_mixer_id);
+ 	mixer->rc_code = code;
+-	wmb();
+ 	wake_up(&mixer->rc_waitq);
+ }
+ 
+ static long snd_usb_sbrc_hwdep_read(struct snd_hwdep *hw, char __user *buf,
+-				     long count, loff_t *offset)
++				    long count, loff_t *offset)
+ {
+ 	struct usb_mixer_interface *mixer = hw->private_data;
+ 	int err;
+@@ -234,7 +237,7 @@ static long snd_usb_sbrc_hwdep_read(struct snd_hwdep *hw, char __user *buf,
+ }
+ 
+ static __poll_t snd_usb_sbrc_hwdep_poll(struct snd_hwdep *hw, struct file *file,
+-					    poll_table *wait)
++					poll_table *wait)
+ {
+ 	struct usb_mixer_interface *mixer = hw->private_data;
+ 
+@@ -285,7 +288,7 @@ static int snd_usb_soundblaster_remote_init(struct usb_mixer_interface *mixer)
+ 	mixer->rc_setup_packet->wLength = cpu_to_le16(len);
+ 	usb_fill_control_urb(mixer->rc_urb, mixer->chip->dev,
+ 			     usb_rcvctrlpipe(mixer->chip->dev, 0),
+-			     (u8*)mixer->rc_setup_packet, mixer->rc_buffer, len,
++			     (u8 *)mixer->rc_setup_packet, mixer->rc_buffer, len,
+ 			     snd_usb_soundblaster_remote_complete, mixer);
+ 	return 0;
+ }
+@@ -310,20 +313,20 @@ static int snd_audigy2nx_led_update(struct usb_mixer_interface *mixer,
+ 
+ 	if (chip->usb_id == USB_ID(0x041e, 0x3042))
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      !value, 0, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      !value, 0, NULL, 0);
+ 	/* USB X-Fi S51 Pro */
+ 	if (chip->usb_id == USB_ID(0x041e, 0x30df))
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      !value, 0, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      !value, 0, NULL, 0);
+ 	else
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      value, index + 2, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      value, index + 2, NULL, 0);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -377,17 +380,17 @@ static int snd_audigy2nx_controls_create(struct usb_mixer_interface *mixer)
+ 		struct snd_kcontrol_new knew;
+ 
+ 		/* USB X-Fi S51 doesn't have a CMSS LED */
+-		if ((mixer->chip->usb_id == USB_ID(0x041e, 0x3042)) && i == 0)
++		if (mixer->chip->usb_id == USB_ID(0x041e, 0x3042) && i == 0)
+ 			continue;
+ 		/* USB X-Fi S51 Pro doesn't have one either */
+-		if ((mixer->chip->usb_id == USB_ID(0x041e, 0x30df)) && i == 0)
++		if (mixer->chip->usb_id == USB_ID(0x041e, 0x30df) && i == 0)
+ 			continue;
+ 		if (i > 1 && /* Live24ext has 2 LEDs only */
+ 			(mixer->chip->usb_id == USB_ID(0x041e, 0x3040) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x3042) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x30df) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x3048)))
+-			break; 
++			break;
+ 
+ 		knew = snd_audigy2nx_control;
+ 		knew.name = snd_audigy2nx_led_names[i];
+@@ -481,9 +484,9 @@ static int snd_emu0204_ch_switch_update(struct usb_mixer_interface *mixer,
+ 	buf[0] = 0x01;
+ 	buf[1] = value ? 0x02 : 0x01;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-		      usb_sndctrlpipe(chip->dev, 0), UAC_SET_CUR,
+-		      USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT,
+-		      0x0400, 0x0e00, buf, 2);
++			      usb_sndctrlpipe(chip->dev, 0), UAC_SET_CUR,
++			      USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT,
++			      0x0400, 0x0e00, buf, 2);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -529,6 +532,265 @@ static int snd_emu0204_controls_create(struct usb_mixer_interface *mixer)
+ 					  &snd_emu0204_control, NULL);
+ }
+ 
++#if IS_REACHABLE(CONFIG_INPUT)
++/*
++ * Sony DualSense controller (PS5) jack detection
++ *
++ * Since this is an UAC 1 device, it doesn't support jack detection.
++ * However, the controller hid-playstation driver reports HP & MIC
++ * insert events through a dedicated input device.
++ */
++
++#define SND_DUALSENSE_JACK_OUT_TERM_ID 3
++#define SND_DUALSENSE_JACK_IN_TERM_ID 4
++
++struct dualsense_mixer_elem_info {
++	struct usb_mixer_elem_info info;
++	struct input_handler ih;
++	struct input_device_id id_table[2];
++	bool connected;
++};
++
++static void snd_dualsense_ih_event(struct input_handle *handle,
++				   unsigned int type, unsigned int code,
++				   int value)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_mixer_elem_list *me;
++
++	if (type != EV_SW)
++		return;
++
++	mei = container_of(handle->handler, struct dualsense_mixer_elem_info, ih);
++	me = &mei->info.head;
++
++	if ((me->id == SND_DUALSENSE_JACK_OUT_TERM_ID && code == SW_HEADPHONE_INSERT) ||
++	    (me->id == SND_DUALSENSE_JACK_IN_TERM_ID && code == SW_MICROPHONE_INSERT)) {
++		mei->connected = !!value;
++		snd_ctl_notify(me->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++			       &me->kctl->id);
++	}
++}
++
++static bool snd_dualsense_ih_match(struct input_handler *handler,
++				   struct input_dev *dev)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_device *snd_dev;
++	char *input_dev_path, *usb_dev_path;
++	size_t usb_dev_path_len;
++	bool match = false;
++
++	mei = container_of(handler, struct dualsense_mixer_elem_info, ih);
++	snd_dev = mei->info.head.mixer->chip->dev;
++
++	input_dev_path = kobject_get_path(&dev->dev.kobj, GFP_KERNEL);
++	if (!input_dev_path) {
++		dev_warn(&snd_dev->dev, "Failed to get input dev path\n");
++		return false;
++	}
++
++	usb_dev_path = kobject_get_path(&snd_dev->dev.kobj, GFP_KERNEL);
++	if (!usb_dev_path) {
++		dev_warn(&snd_dev->dev, "Failed to get USB dev path\n");
++		goto free_paths;
++	}
++
++	/*
++	 * Ensure the VID:PID matched input device supposedly owned by the
++	 * hid-playstation driver belongs to the actual hardware handled by
++	 * the current USB audio device, which implies input_dev_path being
++	 * a subpath of usb_dev_path.
++	 *
++	 * This verification is necessary when there is more than one identical
++	 * controller attached to the host system.
++	 */
++	usb_dev_path_len = strlen(usb_dev_path);
++	if (usb_dev_path_len >= strlen(input_dev_path))
++		goto free_paths;
++
++	usb_dev_path[usb_dev_path_len] = '/';
++	match = !memcmp(input_dev_path, usb_dev_path, usb_dev_path_len + 1);
++
++free_paths:
++	kfree(input_dev_path);
++	kfree(usb_dev_path);
++
++	return match;
++}
++
++static int snd_dualsense_ih_connect(struct input_handler *handler,
++				    struct input_dev *dev,
++				    const struct input_device_id *id)
++{
++	struct input_handle *handle;
++	int err;
++
++	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++	if (!handle)
++		return -ENOMEM;
++
++	handle->dev = dev;
++	handle->handler = handler;
++	handle->name = handler->name;
++
++	err = input_register_handle(handle);
++	if (err)
++		goto err_free;
++
++	err = input_open_device(handle);
++	if (err)
++		goto err_unregister;
++
++	return 0;
++
++err_unregister:
++	input_unregister_handle(handle);
++err_free:
++	kfree(handle);
++	return err;
++}
++
++static void snd_dualsense_ih_disconnect(struct input_handle *handle)
++{
++	input_close_device(handle);
++	input_unregister_handle(handle);
++	kfree(handle);
++}
++
++static void snd_dualsense_ih_start(struct input_handle *handle)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_mixer_elem_list *me;
++	int status = -1;
++
++	mei = container_of(handle->handler, struct dualsense_mixer_elem_info, ih);
++	me = &mei->info.head;
++
++	if (me->id == SND_DUALSENSE_JACK_OUT_TERM_ID &&
++	    test_bit(SW_HEADPHONE_INSERT, handle->dev->swbit))
++		status = test_bit(SW_HEADPHONE_INSERT, handle->dev->sw);
++	else if (me->id == SND_DUALSENSE_JACK_IN_TERM_ID &&
++		 test_bit(SW_MICROPHONE_INSERT, handle->dev->swbit))
++		status = test_bit(SW_MICROPHONE_INSERT, handle->dev->sw);
++
++	if (status >= 0) {
++		mei->connected = !!status;
++		snd_ctl_notify(me->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++			       &me->kctl->id);
++	}
++}
++
++static int snd_dualsense_jack_get(struct snd_kcontrol *kctl,
++				  struct snd_ctl_elem_value *ucontrol)
++{
++	struct dualsense_mixer_elem_info *mei = snd_kcontrol_chip(kctl);
++
++	ucontrol->value.integer.value[0] = mei->connected;
++
++	return 0;
++}
++
++static const struct snd_kcontrol_new snd_dualsense_jack_control = {
++	.iface = SNDRV_CTL_ELEM_IFACE_CARD,
++	.access = SNDRV_CTL_ELEM_ACCESS_READ,
++	.info = snd_ctl_boolean_mono_info,
++	.get = snd_dualsense_jack_get,
++};
++
++static int snd_dualsense_resume_jack(struct usb_mixer_elem_list *list)
++{
++	snd_ctl_notify(list->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++		       &list->kctl->id);
++	return 0;
++}
++
++static void snd_dualsense_mixer_elem_free(struct snd_kcontrol *kctl)
++{
++	struct dualsense_mixer_elem_info *mei = snd_kcontrol_chip(kctl);
++
++	if (mei->ih.event)
++		input_unregister_handler(&mei->ih);
++
++	snd_usb_mixer_elem_free(kctl);
++}
++
++static int snd_dualsense_jack_create(struct usb_mixer_interface *mixer,
++				     const char *name, bool is_output)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct input_device_id *idev_id;
++	struct snd_kcontrol *kctl;
++	int err;
++
++	mei = kzalloc(sizeof(*mei), GFP_KERNEL);
++	if (!mei)
++		return -ENOMEM;
++
++	snd_usb_mixer_elem_init_std(&mei->info.head, mixer,
++				    is_output ? SND_DUALSENSE_JACK_OUT_TERM_ID :
++						SND_DUALSENSE_JACK_IN_TERM_ID);
++
++	mei->info.head.resume = snd_dualsense_resume_jack;
++	mei->info.val_type = USB_MIXER_BOOLEAN;
++	mei->info.channels = 1;
++	mei->info.min = 0;
++	mei->info.max = 1;
++
++	kctl = snd_ctl_new1(&snd_dualsense_jack_control, mei);
++	if (!kctl) {
++		kfree(mei);
++		return -ENOMEM;
++	}
++
++	strscpy(kctl->id.name, name, sizeof(kctl->id.name));
++	kctl->private_free = snd_dualsense_mixer_elem_free;
++
++	err = snd_usb_mixer_add_control(&mei->info.head, kctl);
++	if (err)
++		return err;
++
++	idev_id = &mei->id_table[0];
++	idev_id->flags = INPUT_DEVICE_ID_MATCH_VENDOR | INPUT_DEVICE_ID_MATCH_PRODUCT |
++			 INPUT_DEVICE_ID_MATCH_EVBIT | INPUT_DEVICE_ID_MATCH_SWBIT;
++	idev_id->vendor = USB_ID_VENDOR(mixer->chip->usb_id);
++	idev_id->product = USB_ID_PRODUCT(mixer->chip->usb_id);
++	idev_id->evbit[BIT_WORD(EV_SW)] = BIT_MASK(EV_SW);
++	if (is_output)
++		idev_id->swbit[BIT_WORD(SW_HEADPHONE_INSERT)] = BIT_MASK(SW_HEADPHONE_INSERT);
++	else
++		idev_id->swbit[BIT_WORD(SW_MICROPHONE_INSERT)] = BIT_MASK(SW_MICROPHONE_INSERT);
++
++	mei->ih.event = snd_dualsense_ih_event;
++	mei->ih.match = snd_dualsense_ih_match;
++	mei->ih.connect = snd_dualsense_ih_connect;
++	mei->ih.disconnect = snd_dualsense_ih_disconnect;
++	mei->ih.start = snd_dualsense_ih_start;
++	mei->ih.name = name;
++	mei->ih.id_table = mei->id_table;
++
++	err = input_register_handler(&mei->ih);
++	if (err) {
++		dev_warn(&mixer->chip->dev->dev,
++			 "Could not register input handler: %d\n", err);
++		mei->ih.event = NULL;
++	}
++
++	return 0;
++}
++
++static int snd_dualsense_controls_create(struct usb_mixer_interface *mixer)
++{
++	int err;
++
++	err = snd_dualsense_jack_create(mixer, "Headphone Jack", true);
++	if (err < 0)
++		return err;
++
++	return snd_dualsense_jack_create(mixer, "Headset Mic Jack", false);
++}
++#endif /* IS_REACHABLE(CONFIG_INPUT) */
++
+ /* ASUS Xonar U1 / U3 controls */
+ 
+ static int snd_xonar_u1_switch_get(struct snd_kcontrol *kcontrol,
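
A design note on the DualSense jack-detection block added in the hunk
above: since this UAC1 device cannot report jack state itself, the mixer
registers an input_handler whose id_table matches the controller's
VID/PID plus EV_SW capability, and whose ->match callback compares
kobject sysfs paths to confirm the input device published by
hid-playstation hangs off the same physical USB device, which matters
when two identical controllers are attached. Lifetime is tied to the
mixer control: kctl->private_free (snd_dualsense_mixer_elem_free)
unregisters the handler, and mei->ih.event doubles as the "handler
registered" flag when registration fails.
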
+@@ -856,6 +1118,7 @@ static const struct snd_kcontrol_new snd_mbox1_src_switch = {
+ static int snd_mbox1_controls_create(struct usb_mixer_interface *mixer)
+ {
+ 	int err;
++
+ 	err = add_single_ctl_with_resume(mixer, 0,
+ 					 snd_mbox1_clk_switch_resume,
+ 					 &snd_mbox1_clk_switch, NULL);
+@@ -869,7 +1132,7 @@ static int snd_mbox1_controls_create(struct usb_mixer_interface *mixer)
+ 
+ /* Native Instruments device quirks */
+ 
+-#define _MAKE_NI_CONTROL(bRequest,wIndex) ((bRequest) << 16 | (wIndex))
++#define _MAKE_NI_CONTROL(bRequest, wIndex) ((bRequest) << 16 | (wIndex))
+ 
+ static int snd_ni_control_init_val(struct usb_mixer_interface *mixer,
+ 				   struct snd_kcontrol *kctl)
+@@ -1021,7 +1284,7 @@ static int snd_nativeinstruments_create_mixer(struct usb_mixer_interface *mixer,
+ /* M-Audio FastTrack Ultra quirks */
+ /* FTU Effect switch (also used by C400/C600) */
+ static int snd_ftu_eff_switch_info(struct snd_kcontrol *kcontrol,
+-					struct snd_ctl_elem_info *uinfo)
++				   struct snd_ctl_elem_info *uinfo)
+ {
+ 	static const char *const texts[8] = {
+ 		"Room 1", "Room 2", "Room 3", "Hall 1",
+@@ -1055,7 +1318,7 @@ static int snd_ftu_eff_switch_init(struct usb_mixer_interface *mixer,
+ }
+ 
+ static int snd_ftu_eff_switch_get(struct snd_kcontrol *kctl,
+-					struct snd_ctl_elem_value *ucontrol)
++				  struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.enumerated.item[0] = kctl->private_value >> 24;
+ 	return 0;
+@@ -1086,7 +1349,7 @@ static int snd_ftu_eff_switch_update(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_ftu_eff_switch_put(struct snd_kcontrol *kctl,
+-					struct snd_ctl_elem_value *ucontrol)
++				  struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kctl);
+ 	unsigned int pval = list->kctl->private_value;
+@@ -1104,7 +1367,7 @@ static int snd_ftu_eff_switch_put(struct snd_kcontrol *kctl,
+ }
+ 
+ static int snd_ftu_create_effect_switch(struct usb_mixer_interface *mixer,
+-	int validx, int bUnitID)
++					int validx, int bUnitID)
+ {
+ 	static struct snd_kcontrol_new template = {
+ 		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+@@ -1143,22 +1406,22 @@ static int snd_ftu_create_volume_ctls(struct usb_mixer_interface *mixer)
+ 		for (in = 0; in < 8; in++) {
+ 			cmask = BIT(in);
+ 			snprintf(name, sizeof(name),
+-				"AIn%d - Out%d Capture Volume",
+-				in  + 1, out + 1);
++				 "AIn%d - Out%d Capture Volume",
++				 in  + 1, out + 1);
+ 			err = snd_create_std_mono_ctl(mixer, id, control,
+-							cmask, val_type, name,
+-							&snd_usb_mixer_vol_tlv);
++						      cmask, val_type, name,
++						      &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+ 		for (in = 8; in < 16; in++) {
+ 			cmask = BIT(in);
+ 			snprintf(name, sizeof(name),
+-				"DIn%d - Out%d Playback Volume",
+-				in - 7, out + 1);
++				 "DIn%d - Out%d Playback Volume",
++				 in - 7, out + 1);
+ 			err = snd_create_std_mono_ctl(mixer, id, control,
+-							cmask, val_type, name,
+-							&snd_usb_mixer_vol_tlv);
++						      cmask, val_type, name,
++						      &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+@@ -1219,10 +1482,10 @@ static int snd_ftu_create_effect_return_ctls(struct usb_mixer_interface *mixer)
+ 	for (ch = 0; ch < 4; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Return %d Volume", ch + 1);
++			 "Effect Return %d Volume", ch + 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control,
+-						cmask, val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      cmask, val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1243,20 +1506,20 @@ static int snd_ftu_create_effect_send_ctls(struct usb_mixer_interface *mixer)
+ 	for (ch = 0; ch < 8; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Send AIn%d Volume", ch + 1);
++			 "Effect Send AIn%d Volume", ch + 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control, cmask,
+-						val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+ 	for (ch = 8; ch < 16; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Send DIn%d Volume", ch - 7);
++			 "Effect Send DIn%d Volume", ch - 7);
+ 		err = snd_create_std_mono_ctl(mixer, id, control, cmask,
+-						val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1346,19 +1609,19 @@ static int snd_c400_create_vol_ctls(struct usb_mixer_interface *mixer)
+ 		for (out = 0; out < num_outs; out++) {
+ 			if (chan < num_outs) {
+ 				snprintf(name, sizeof(name),
+-					"PCM%d-Out%d Playback Volume",
+-					chan + 1, out + 1);
++					 "PCM%d-Out%d Playback Volume",
++					 chan + 1, out + 1);
+ 			} else {
+ 				snprintf(name, sizeof(name),
+-					"In%d-Out%d Playback Volume",
+-					chan - num_outs + 1, out + 1);
++					 "In%d-Out%d Playback Volume",
++					 chan - num_outs + 1, out + 1);
+ 			}
+ 
+ 			cmask = (out == 0) ? 0 : BIT(out - 1);
+ 			offset = chan * num_outs;
+ 			err = snd_create_std_mono_ctl_offset(mixer, id, control,
+-						cmask, val_type, offset, name,
+-						&snd_usb_mixer_vol_tlv);
++							     cmask, val_type, offset, name,
++							     &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+@@ -1377,7 +1640,7 @@ static int snd_c400_create_effect_volume_ctl(struct usb_mixer_interface *mixer)
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, snd_usb_mixer_vol_tlv);
++				       name, snd_usb_mixer_vol_tlv);
+ }
+ 
+ /* This control needs a volume quirk, see mixer.c */
+@@ -1390,7 +1653,7 @@ static int snd_c400_create_effect_duration_ctl(struct usb_mixer_interface *mixer
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, snd_usb_mixer_vol_tlv);
++				       name, snd_usb_mixer_vol_tlv);
+ }
+ 
+ /* This control needs a volume quirk, see mixer.c */
+@@ -1403,7 +1666,7 @@ static int snd_c400_create_effect_feedback_ctl(struct usb_mixer_interface *mixer
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, NULL);
++				       name, NULL);
+ }
+ 
+ static int snd_c400_create_effect_vol_ctls(struct usb_mixer_interface *mixer)
+@@ -1432,18 +1695,18 @@ static int snd_c400_create_effect_vol_ctls(struct usb_mixer_interface *mixer)
+ 	for (chan = 0; chan < num_outs + num_ins; chan++) {
+ 		if (chan < num_outs) {
+ 			snprintf(name, sizeof(name),
+-				"Effect Send DOut%d",
+-				chan + 1);
++				 "Effect Send DOut%d",
++				 chan + 1);
+ 		} else {
+ 			snprintf(name, sizeof(name),
+-				"Effect Send AIn%d",
+-				chan - num_outs + 1);
++				 "Effect Send AIn%d",
++				 chan - num_outs + 1);
+ 		}
+ 
+ 		cmask = (chan == 0) ? 0 : BIT(chan - 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control,
+-						cmask, val_type, name,
+-						&snd_usb_mixer_vol_tlv);
++					      cmask, val_type, name,
++					      &snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1478,14 +1741,14 @@ static int snd_c400_create_effect_ret_vol_ctls(struct usb_mixer_interface *mixer
+ 
+ 	for (chan = 0; chan < num_outs; chan++) {
+ 		snprintf(name, sizeof(name),
+-			"Effect Return %d",
+-			chan + 1);
++			 "Effect Return %d",
++			 chan + 1);
+ 
+ 		cmask = (chan == 0) ? 0 :
+ 			BIT(chan + (chan % 2) * num_outs - 1);
+ 		err = snd_create_std_mono_ctl_offset(mixer, id, control,
+-						cmask, val_type, offset, name,
+-						&snd_usb_mixer_vol_tlv);
++						     cmask, val_type, offset, name,
++						     &snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1626,7 +1889,7 @@ static const struct std_mono_table ebox44_table[] = {
+  *
+  */
+ static int snd_microii_spdif_info(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_info *uinfo)
++				  struct snd_ctl_elem_info *uinfo)
+ {
+ 	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
+ 	uinfo->count = 1;
+@@ -1634,7 +1897,7 @@ static int snd_microii_spdif_info(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	struct snd_usb_audio *chip = list->mixer->chip;
+@@ -1667,13 +1930,13 @@ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_rcvctrlpipe(chip->dev, 0),
+-			UAC_GET_CUR,
+-			USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN,
+-			UAC_EP_CS_ATTR_SAMPLE_RATE << 8,
+-			ep,
+-			data,
+-			sizeof(data));
++			      usb_rcvctrlpipe(chip->dev, 0),
++			      UAC_GET_CUR,
++			      USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN,
++			      UAC_EP_CS_ATTR_SAMPLE_RATE << 8,
++			      ep,
++			      data,
++			      sizeof(data));
+ 	if (err < 0)
+ 		goto end;
+ 
+@@ -1700,26 +1963,26 @@ static int snd_microii_spdif_default_update(struct usb_mixer_elem_list *list)
+ 
+ 	reg = ((pval >> 4) & 0xf0) | (pval & 0x0f);
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			2,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      2,
++			      NULL,
++			      0);
+ 	if (err < 0)
+ 		goto end;
+ 
+ 	reg = (pval & IEC958_AES0_NONAUDIO) ? 0xa0 : 0x20;
+ 	reg |= (pval >> 12) & 0x0f;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			3,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      3,
++			      NULL,
++			      0);
+ 	if (err < 0)
+ 		goto end;
+ 
+@@ -1729,13 +1992,14 @@ static int snd_microii_spdif_default_update(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_microii_spdif_default_put(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	unsigned int pval, pval_old;
+ 	int err;
+ 
+-	pval = pval_old = kcontrol->private_value;
++	pval = kcontrol->private_value;
++	pval_old = pval;
+ 	pval &= 0xfffff0f0;
+ 	pval |= (ucontrol->value.iec958.status[1] & 0x0f) << 8;
+ 	pval |= (ucontrol->value.iec958.status[0] & 0x0f);
+@@ -1756,7 +2020,7 @@ static int snd_microii_spdif_default_put(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_mask_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++				      struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.iec958.status[0] = 0x0f;
+ 	ucontrol->value.iec958.status[1] = 0xff;
+@@ -1767,7 +2031,7 @@ static int snd_microii_spdif_mask_get(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_switch_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.integer.value[0] = !(kcontrol->private_value & 0x02);
+ 
+@@ -1785,20 +2049,20 @@ static int snd_microii_spdif_switch_update(struct usb_mixer_elem_list *list)
+ 		return err;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			9,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      9,
++			      NULL,
++			      0);
+ 
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+ 
+ static int snd_microii_spdif_switch_put(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	u8 reg;
+@@ -1883,9 +2147,9 @@ static int snd_soundblaster_e1_switch_update(struct usb_mixer_interface *mixer,
+ 	if (err < 0)
+ 		return err;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0), HID_REQ_SET_REPORT,
+-			USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
+-			0x0202, 3, buff, 2);
++			      usb_sndctrlpipe(chip->dev, 0), HID_REQ_SET_REPORT,
++			      USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
++			      0x0202, 3, buff, 2);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -2181,6 +2445,7 @@ static const u32 snd_rme_rate_table[] = {
+ 	256000,	352800, 384000, 400000,
+ 	512000, 705600, 768000, 800000
+ };
++
+ /* maximum number of items for AES and S/PDIF rates for above table */
+ #define SND_RME_RATE_IDX_AES_SPDIF_NUM		12
+ 
+@@ -3235,7 +3500,7 @@ static int snd_rme_digiface_enum_put(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_rme_digiface_current_sync_get(struct snd_kcontrol *kcontrol,
+-				     struct snd_ctl_elem_value *ucontrol)
++					     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	int ret = snd_rme_digiface_enum_get(kcontrol, ucontrol);
+ 
+@@ -3269,7 +3534,6 @@ static int snd_rme_digiface_sync_state_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+ 
+-
+ static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
+ 					struct snd_ctl_elem_info *uinfo)
+ {
+@@ -3281,7 +3545,6 @@ static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
+ 				 ARRAY_SIZE(format), format);
+ }
+ 
+-
+ static int snd_rme_digiface_sync_source_info(struct snd_kcontrol *kcontrol,
+ 					     struct snd_ctl_elem_info *uinfo)
+ {
+@@ -3564,7 +3827,6 @@ static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
+ #define SND_DJM_A9_IDX		0x6
+ #define SND_DJM_V10_IDX	0x7
+ 
+-
+ #define SND_DJM_CTL(_name, suffix, _default_value, _windex) { \
+ 	.name = _name, \
+ 	.options = snd_djm_opts_##suffix, \
+@@ -3576,7 +3838,6 @@ static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
+ 	.controls = snd_djm_ctls_##suffix, \
+ 	.ncontrols = ARRAY_SIZE(snd_djm_ctls_##suffix) }
+ 
+-
+ struct snd_djm_device {
+ 	const char *name;
+ 	const struct snd_djm_ctl *controls;
+@@ -3722,7 +3983,6 @@ static const struct snd_djm_ctl snd_djm_ctls_250mk2[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 250mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-450
+ static const u16 snd_djm_opts_450_cap1[] = {
+ 	0x0103, 0x0100, 0x0106, 0x0107, 0x0108, 0x0109, 0x010d, 0x010a };
+@@ -3747,7 +4007,6 @@ static const struct snd_djm_ctl snd_djm_ctls_450[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 450_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-750
+ static const u16 snd_djm_opts_750_cap1[] = {
+ 	0x0101, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
+@@ -3766,7 +4025,6 @@ static const struct snd_djm_ctl snd_djm_ctls_750[] = {
+ 	SND_DJM_CTL("Input 4 Capture Switch", 750_cap4, 0, SND_DJM_WINDEX_CAP)
+ };
+ 
+-
+ // DJM-850
+ static const u16 snd_djm_opts_850_cap1[] = {
+ 	0x0100, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
+@@ -3785,7 +4043,6 @@ static const struct snd_djm_ctl snd_djm_ctls_850[] = {
+ 	SND_DJM_CTL("Input 4 Capture Switch", 850_cap4, 1, SND_DJM_WINDEX_CAP)
+ };
+ 
+-
+ // DJM-900NXS2
+ static const u16 snd_djm_opts_900nxs2_cap1[] = {
+ 	0x0100, 0x0102, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a };
+@@ -3823,7 +4080,6 @@ static const u16 snd_djm_opts_750mk2_pb1[] = { 0x0100, 0x0101, 0x0104 };
+ static const u16 snd_djm_opts_750mk2_pb2[] = { 0x0200, 0x0201, 0x0204 };
+ static const u16 snd_djm_opts_750mk2_pb3[] = { 0x0300, 0x0301, 0x0304 };
+ 
+-
+ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+ 	SND_DJM_CTL("Master Input Level Capture Switch", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
+ 	SND_DJM_CTL("Input 1 Capture Switch",   750mk2_cap1, 2, SND_DJM_WINDEX_CAP),
+@@ -3836,7 +4092,6 @@ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 750mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-A9
+ static const u16 snd_djm_opts_a9_cap_level[] = {
+ 	0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500 };
+@@ -3865,29 +4120,35 @@ static const struct snd_djm_ctl snd_djm_ctls_a9[] = {
+ static const u16 snd_djm_opts_v10_cap_level[] = {
+ 	0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500
+ };
++
+ static const u16 snd_djm_opts_v10_cap1[] = {
+ 	0x0103,
+ 	0x0100, 0x0102, 0x0106, 0x0110, 0x0107,
+ 	0x0108, 0x0109, 0x010a, 0x0121, 0x0122
+ };
++
+ static const u16 snd_djm_opts_v10_cap2[] = {
+ 	0x0200, 0x0202, 0x0206, 0x0210, 0x0207,
+ 	0x0208, 0x0209, 0x020a, 0x0221, 0x0222
+ };
++
+ static const u16 snd_djm_opts_v10_cap3[] = {
+ 	0x0303,
+ 	0x0300, 0x0302, 0x0306, 0x0310, 0x0307,
+ 	0x0308, 0x0309, 0x030a, 0x0321, 0x0322
+ };
++
+ static const u16 snd_djm_opts_v10_cap4[] = {
+ 	0x0403,
+ 	0x0400, 0x0402, 0x0406, 0x0410, 0x0407,
+ 	0x0408, 0x0409, 0x040a, 0x0421, 0x0422
+ };
++
+ static const u16 snd_djm_opts_v10_cap5[] = {
+ 	0x0500, 0x0502, 0x0506, 0x0510, 0x0507,
+ 	0x0508, 0x0509, 0x050a, 0x0521, 0x0522
+ };
++
+ static const u16 snd_djm_opts_v10_cap6[] = {
+ 	0x0603,
+ 	0x0600, 0x0602, 0x0606, 0x0610, 0x0607,
+@@ -3916,9 +4177,8 @@ static const struct snd_djm_device snd_djm_devices[] = {
+ 	[SND_DJM_V10_IDX] = SND_DJM_DEVICE(v10),
+ };
+ 
+-
+ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+-				struct snd_ctl_elem_info *info)
++				 struct snd_ctl_elem_info *info)
+ {
+ 	unsigned long private_value = kctl->private_value;
+ 	u8 device_idx = (private_value & SND_DJM_DEVICE_MASK) >> SND_DJM_DEVICE_SHIFT;
+@@ -3937,8 +4197,8 @@ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+ 		info->value.enumerated.item = noptions - 1;
+ 
+ 	name = snd_djm_get_label(device_idx,
+-				ctl->options[info->value.enumerated.item],
+-				ctl->wIndex);
++				 ctl->options[info->value.enumerated.item],
++				 ctl->wIndex);
+ 	if (!name)
+ 		return -EINVAL;
+ 
+@@ -3950,25 +4210,25 @@ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+ }
+ 
+ static int snd_djm_controls_update(struct usb_mixer_interface *mixer,
+-				u8 device_idx, u8 group, u16 value)
++				   u8 device_idx, u8 group, u16 value)
+ {
+ 	int err;
+ 	const struct snd_djm_device *device = &snd_djm_devices[device_idx];
+ 
+-	if ((group >= device->ncontrols) || value >= device->controls[group].noptions)
++	if (group >= device->ncontrols || value >= device->controls[group].noptions)
+ 		return -EINVAL;
+ 
+ 	err = snd_usb_lock_shutdown(mixer->chip);
+ 	if (err)
+ 		return err;
+ 
+-	err = snd_usb_ctl_msg(
+-		mixer->chip->dev, usb_sndctrlpipe(mixer->chip->dev, 0),
+-		USB_REQ_SET_FEATURE,
+-		USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-		device->controls[group].options[value],
+-		device->controls[group].wIndex,
+-		NULL, 0);
++	err = snd_usb_ctl_msg(mixer->chip->dev,
++			      usb_sndctrlpipe(mixer->chip->dev, 0),
++			      USB_REQ_SET_FEATURE,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++			      device->controls[group].options[value],
++			      device->controls[group].wIndex,
++			      NULL, 0);
+ 
+ 	snd_usb_unlock_shutdown(mixer->chip);
+ 	return err;
+@@ -4009,7 +4269,7 @@ static int snd_djm_controls_resume(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+-		const u8 device_idx)
++				   const u8 device_idx)
+ {
+ 	int err, i;
+ 	u16 value;
+@@ -4028,10 +4288,10 @@ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+ 	for (i = 0; i < device->ncontrols; i++) {
+ 		value = device->controls[i].default_value;
+ 		knew.name = device->controls[i].name;
+-		knew.private_value = (
++		knew.private_value =
+ 			((unsigned long)device_idx << SND_DJM_DEVICE_SHIFT) |
+ 			(i << SND_DJM_GROUP_SHIFT) |
+-			value);
++			value;
+ 		err = snd_djm_controls_update(mixer, device_idx, i, value);
+ 		if (err)
+ 			return err;
+@@ -4073,6 +4333,13 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		err = snd_emu0204_controls_create(mixer);
+ 		break;
+ 
++#if IS_REACHABLE(CONFIG_INPUT)
++	case USB_ID(0x054c, 0x0ce6): /* Sony DualSense controller (PS5) */
++	case USB_ID(0x054c, 0x0df2): /* Sony DualSense Edge controller (PS5) */
++		err = snd_dualsense_controls_create(mixer);
++		break;
++#endif /* IS_REACHABLE(CONFIG_INPUT) */
++
+ 	case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */
+ 	case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C400 */
+ 		err = snd_c400_create_mixer(mixer);
+@@ -4098,13 +4365,15 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		break;
+ 
+ 	case USB_ID(0x17cc, 0x1011): /* Traktor Audio 6 */
+-		err = snd_nativeinstruments_create_mixer(mixer,
++		err = snd_nativeinstruments_create_mixer(/* checkpatch hack */
++				mixer,
+ 				snd_nativeinstruments_ta6_mixers,
+ 				ARRAY_SIZE(snd_nativeinstruments_ta6_mixers));
+ 		break;
+ 
+ 	case USB_ID(0x17cc, 0x1021): /* Traktor Audio 10 */
+-		err = snd_nativeinstruments_create_mixer(mixer,
++		err = snd_nativeinstruments_create_mixer(/* checkpatch hack */
++				mixer,
+ 				snd_nativeinstruments_ta10_mixers,
+ 				ARRAY_SIZE(snd_nativeinstruments_ta10_mixers));
+ 		break;
+@@ -4254,7 +4523,8 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+ 					 struct snd_kcontrol *kctl)
+ {
+ 	/* Approximation using 10 ranges based on output measurement on hw v1.2.
+-	 * This seems close to the cubic mapping e.g. alsamixer uses. */
++	 * This seems close to the cubic mapping e.g. alsamixer uses.
++	 */
+ 	static const DECLARE_TLV_DB_RANGE(scale,
+ 		 0,  1, TLV_DB_MINMAX_ITEM(-5300, -4970),
+ 		 2,  5, TLV_DB_MINMAX_ITEM(-4710, -4160),
+@@ -4338,20 +4608,15 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ 		if (unitid == 7 && cval->control == UAC_FU_VOLUME)
+ 			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
++	}
++
+ 	/* lowest playback value is muted on some devices */
+-	case USB_ID(0x0572, 0x1b09): /* Conexant Systems (Rockwell), Inc. */
+-	case USB_ID(0x0d8c, 0x000c): /* C-Media */
+-	case USB_ID(0x0d8c, 0x0014): /* C-Media */
+-	case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
+-	case USB_ID(0x2d99, 0x0026): /* HECATE G2 GAMING HEADSET */
++	if (mixer->chip->quirk_flags & QUIRK_FLAG_MIXER_MIN_MUTE)
+ 		if (strstr(kctl->id.name, "Playback"))
+ 			cval->min_mute = 1;
+-		break;
+-	}
+ 
+ 	/* ALSA-ify some Plantronics headset control names */
+ 	if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
+ 	    (cval->control == UAC_FU_MUTE || cval->control == UAC_FU_VOLUME))
+ 		snd_fix_plt_name(mixer->chip, &kctl->id);
+ }
+-
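
The hunk above replaces the per-device case list in snd_usb_mixer_fu_apply_quirk() with a single behavioral test, moving device matching out into the generic quirk-flags table. A simplified sketch of the before/after shape (not the literal driver code):

/* Before: every affected device enumerated at the call site. */
switch (mixer->chip->usb_id) {
case USB_ID(0x0d8c, 0x000c):	/* C-Media */
case USB_ID(0x19f7, 0x0003):	/* RODE NT-USB */
	if (strstr(kctl->id.name, "Playback"))
		cval->min_mute = 1;
	break;
}

/* After: one flag check; device matching lives in quirk_flags_table[]. */
if (mixer->chip->quirk_flags & QUIRK_FLAG_MIXER_MIN_MUTE)
	if (strstr(kctl->id.name, "Playback"))
		cval->min_mute = 1;
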
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index bd24f3a78ea9db..766db7d00cbc95 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2199,6 +2199,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_SET_IFACE_FIRST),
+ 	DEVICE_FLG(0x0556, 0x0014, /* Phoenix Audio TMX320VC */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x0572, 0x1b08, /* Conexant Systems (Rockwell), Inc. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
++	DEVICE_FLG(0x0572, 0x1b09, /* Conexant Systems (Rockwell), Inc. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x05a3, 0x9420, /* ELP HD USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x05a7, 0x1020, /* Bose Companion 5 */
+@@ -2241,12 +2245,16 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x0b0e, 0x0349, /* Jabra 550a */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0bda, 0x498a, /* Realtek Semiconductor Corp. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x0c45, 0x636b, /* Microdia JP001 USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+-	DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
+-		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0d8c, 0x000c, /* C-Media */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
++	DEVICE_FLG(0x0d8c, 0x0014, /* C-Media */
++		   QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+ 		   QUIRK_FLAG_FIXED_RATE),
+ 	DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
+@@ -2255,6 +2263,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ 	DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x12d1, 0x3a07, /* Huawei Technologies Co., Ltd. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ 	DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+@@ -2293,6 +2303,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x19f7, 0x0003, /* RODE NT-USB */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */
+@@ -2343,6 +2355,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x2a70, 0x1881, /* OnePlus Technology (Shenzhen) Co., Ltd. BE02T */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */
+@@ -2353,10 +2367,14 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x2d99, 0x0026, /* HECATE G2 GAMING HEADSET */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
++	DEVICE_FLG(0x339b, 0x3a07, /* Synaptics HONOR USB-C HEADSET */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */
+@@ -2408,6 +2426,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x2d87, /* Cayin device */
+ 		   QUIRK_FLAG_DSD_RAW),
++	VENDOR_FLG(0x2fc6, /* Comture-inc devices */
++		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3336, /* HEM devices */
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3353, /* Khadas devices */
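
The DEVICE_FLG()/VENDOR_FLG() rows added above are consulted once at probe time, when the driver folds every matching entry into chip->quirk_flags. A hypothetical sketch of that walk (the real helper lives elsewhere in quirks.c; this assumes DEVICE_FLG rows carry a full vid:pid and VENDOR_FLG rows match on vendor ID alone):

/* Hypothetical sketch of the probe-time table walk. */
static void example_init_quirk_flags(struct snd_usb_audio *chip)
{
	const struct usb_audio_quirk_flags_table *p;

	for (p = quirk_flags_table; p->id; p++)
		if (chip->usb_id == p->id ||
		    (!USB_ID_PRODUCT(p->id) &&		/* vendor-wide row */
		     USB_ID_VENDOR(chip->usb_id) == USB_ID_VENDOR(p->id)))
			chip->quirk_flags |= p->flags;
}
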
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 158ec053dc44dd..1ef4d39978df36 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -196,6 +196,9 @@ extern bool snd_usb_skip_validation;
+  *  for the given endpoint.
+  * QUIRK_FLAG_MIC_RES_16 and QUIRK_FLAG_MIC_RES_384
+  *  Set the fixed resolution for Mic Capture Volume (mostly for webcams)
++ * QUIRK_FLAG_MIXER_MIN_MUTE
++ *  Set minimum volume control value as mute for devices where the lowest
++ *  playback value represents muted state instead of minimum audible volume
+  */
+ 
+ #define QUIRK_FLAG_GET_SAMPLE_RATE	(1U << 0)
+@@ -222,5 +225,6 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_FIXED_RATE		(1U << 21)
+ #define QUIRK_FLAG_MIC_RES_16		(1U << 22)
+ #define QUIRK_FLAG_MIC_RES_384		(1U << 23)
++#define QUIRK_FLAG_MIXER_MIN_MUTE	(1U << 24)
+ 
+ #endif /* __USBAUDIO_H */
+diff --git a/tools/testing/selftests/bpf/prog_tests/free_timer.c b/tools/testing/selftests/bpf/prog_tests/free_timer.c
+index b7b77a6b29799c..0de8facca4c5bc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/free_timer.c
++++ b/tools/testing/selftests/bpf/prog_tests/free_timer.c
+@@ -124,6 +124,10 @@ void test_free_timer(void)
+ 	int err;
+ 
+ 	skel = free_timer__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "open_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c
+index d66687f1ee6a8d..56f660ca567ba1 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer.c
+@@ -86,6 +86,10 @@ void serial_test_timer(void)
+ 	int err;
+ 
+ 	timer_skel = timer__open_and_load();
++	if (!timer_skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+index f74b82305da8c8..b841597c8a3a31 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_crash.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+@@ -12,6 +12,10 @@ static void test_timer_crash_mode(int mode)
+ 	struct timer_crash *skel;
+ 
+ 	skel = timer_crash__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
+ 		return;
+ 	skel->bss->pid = getpid();
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+index 1a2f99596916fb..eb303fa1e09af9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+@@ -59,6 +59,10 @@ void test_timer_lockup(void)
+ 	}
+ 
+ 	skel = timer_lockup__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_mim.c b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+index 9ff7843909e7d3..c930c7d7105b9f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_mim.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+@@ -65,6 +65,10 @@ void serial_test_timer_mim(void)
+ 		goto cleanup;
+ 
+ 	timer_skel = timer_mim__open_and_load();
++	if (!timer_skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+ 		goto cleanup;
+ 
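
All five timer selftests above gain the same guard: on kernels without bpf_timer support the skeleton's __open_and_load() fails and leaves errno set to EOPNOTSUPP, and the correct outcome is a skip rather than a failure. The shared shape, as a sketch (the skeleton name is a generic stand-in):

/* Sketch of the skip-on-unsupported pattern. */
skel = example_timer__open_and_load();
if (!skel && errno == EOPNOTSUPP) {
	test__skip();		/* report SKIP, not FAIL */
	return;
}
if (!ASSERT_OK_PTR(skel, "open_and_load"))
	return;
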
+diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
+index 63ce708d93ed06..e4b7c2b457ee7a 100644
+--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
+@@ -2,6 +2,13 @@
+ // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu>
+ 
+ #define _GNU_SOURCE
++
++// Needed for linux/fanotify.h
++typedef struct {
++	int	val[2];
++} __kernel_fsid_t;
++#define __kernel_fsid_t __kernel_fsid_t
++
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+@@ -10,20 +17,12 @@
+ #include <sys/mount.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
++#include <sys/fanotify.h>
+ 
+ #include "../../kselftest_harness.h"
+ #include "../statmount/statmount.h"
+ #include "../utils.h"
+ 
+-// Needed for linux/fanotify.h
+-#ifndef __kernel_fsid_t
+-typedef struct {
+-	int	val[2];
+-} __kernel_fsid_t;
+-#endif
+-
+-#include <sys/fanotify.h>
+-
+ static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX";
+ 
+ static const int mark_cmds[] = {
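
This test (and its _ns sibling below) previously defined __kernel_fsid_t behind an #ifndef guard just before including <sys/fanotify.h>; the fix hoists the definition above all includes so every header that needs the type sees it. The trailing self-referential #define is the usual marker idiom: it makes any later preprocessor test on __kernel_fsid_t observe the type as already provided, while ordinary uses still resolve to the typedef. In isolation:

/* Sketch of the provide-early-and-mark idiom used above. */
typedef struct {
	int	val[2];
} __kernel_fsid_t;
#define __kernel_fsid_t __kernel_fsid_t	/* visible to preprocessor guards */

#include <sys/fanotify.h>		/* pulls in linux/fanotify.h safely */
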
+diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
+index 090a5ca65004a0..9f57ca46e3afa0 100644
+--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
+@@ -2,6 +2,13 @@
+ // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu>
+ 
+ #define _GNU_SOURCE
++
++// Needed for linux/fanotify.h
++typedef struct {
++	int	val[2];
++} __kernel_fsid_t;
++#define __kernel_fsid_t __kernel_fsid_t
++
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+@@ -10,21 +17,12 @@
+ #include <sys/mount.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
++#include <sys/fanotify.h>
+ 
+ #include "../../kselftest_harness.h"
+-#include "../../pidfd/pidfd.h"
+ #include "../statmount/statmount.h"
+ #include "../utils.h"
+ 
+-// Needed for linux/fanotify.h
+-#ifndef __kernel_fsid_t
+-typedef struct {
+-	int	val[2];
+-} __kernel_fsid_t;
+-#endif
+-
+-#include <sys/fanotify.h>
+-
+ static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX";
+ 
+ static const int mark_types[] = {
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index b39f748c25722a..2ac394c99d0183 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -467,8 +467,8 @@ ipv6_fdb_grp_fcnal()
+ 	log_test $? 0 "Get Fdb nexthop group by id"
+ 
+ 	# fdb nexthop group can only contain fdb nexthops
+-	run_cmd "$IP nexthop add id 63 via 2001:db8:91::4"
+-	run_cmd "$IP nexthop add id 64 via 2001:db8:91::5"
++	run_cmd "$IP nexthop add id 63 via 2001:db8:91::4 dev veth1"
++	run_cmd "$IP nexthop add id 64 via 2001:db8:91::5 dev veth1"
+ 	run_cmd "$IP nexthop add id 103 group 63/64 fdb"
+ 	log_test $? 2 "Fdb Nexthop group with non-fdb nexthops"
+ 
+@@ -547,15 +547,15 @@ ipv4_fdb_grp_fcnal()
+ 	log_test $? 0 "Get Fdb nexthop group by id"
+ 
+ 	# fdb nexthop group can only contain fdb nexthops
+-	run_cmd "$IP nexthop add id 14 via 172.16.1.2"
+-	run_cmd "$IP nexthop add id 15 via 172.16.1.3"
++	run_cmd "$IP nexthop add id 14 via 172.16.1.2 dev veth1"
++	run_cmd "$IP nexthop add id 15 via 172.16.1.3 dev veth1"
+ 	run_cmd "$IP nexthop add id 103 group 14/15 fdb"
+ 	log_test $? 2 "Fdb Nexthop group with non-fdb nexthops"
+ 
+ 	# Non fdb nexthop group can not contain fdb nexthops
+ 	run_cmd "$IP nexthop add id 16 via 172.16.1.2 fdb"
+ 	run_cmd "$IP nexthop add id 17 via 172.16.1.3 fdb"
+-	run_cmd "$IP nexthop add id 104 group 14/15"
++	run_cmd "$IP nexthop add id 104 group 16/17"
+ 	log_test $? 2 "Non-Fdb Nexthop group with fdb nexthops"
+ 
+ 	# fdb nexthop cannot have blackhole
+@@ -582,7 +582,7 @@ ipv4_fdb_grp_fcnal()
+ 	run_cmd "$BRIDGE fdb add 02:02:00:00:00:14 dev vx10 nhid 12 self"
+ 	log_test $? 255 "Fdb mac add with nexthop"
+ 
+-	run_cmd "$IP ro add 172.16.0.0/22 nhid 15"
++	run_cmd "$IP ro add 172.16.0.0/22 nhid 16"
+ 	log_test $? 2 "Route add with fdb nexthop"
+ 
+ 	run_cmd "$IP ro add 172.16.0.0/22 nhid 103"




* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02 13:42 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02 13:42 UTC (permalink / raw
  To: gentoo-commits

commit:     01315893db7b9ea97479ea8dad74ce942bc9acc8
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 13:41:51 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 13:41:51 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=01315893

Linux patch 6.16.10

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-6.16.10.patch | 6562 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6566 insertions(+)

diff --git a/0000_README b/0000_README
index d1517a09..8a2d4ad0 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-6.16.9.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.9
 
+Patch:  1009_linux-6.16.10.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.16.10
+
 Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
 Desc:   btrfs: don't allow adding block device of less than 1 MB

diff --git a/1009_linux-6.16.10.patch b/1009_linux-6.16.10.patch
new file mode 100644
index 00000000..4371790d
--- /dev/null
+++ b/1009_linux-6.16.10.patch
@@ -0,0 +1,6562 @@
+diff --git a/Documentation/admin-guide/laptops/lg-laptop.rst b/Documentation/admin-guide/laptops/lg-laptop.rst
+index 67fd6932cef4ff..c4dd534f91edd1 100644
+--- a/Documentation/admin-guide/laptops/lg-laptop.rst
++++ b/Documentation/admin-guide/laptops/lg-laptop.rst
+@@ -48,8 +48,8 @@ This value is reset to 100 when the kernel boots.
+ Fan mode
+ --------
+ 
+-Writing 1/0 to /sys/devices/platform/lg-laptop/fan_mode disables/enables
+-the fan silent mode.
++Writing 0/1/2 to /sys/devices/platform/lg-laptop/fan_mode sets fan mode to
++Optimal/Silent/Performance respectively.
+ 
+ 
+ USB charge
+diff --git a/Makefile b/Makefile
+index aef2cb6ea99d8b..19856af4819a53 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts b/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
+index ce0d6514eeb571..e4794ccb8e413f 100644
+--- a/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
++++ b/arch/arm/boot/dts/intel/socfpga/socfpga_cyclone5_sodia.dts
+@@ -66,8 +66,10 @@ &gmac1 {
+ 	mdio0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+-		phy0: ethernet-phy@0 {
+-			reg = <0>;
++		compatible = "snps,dwmac-mdio";
++
++		phy0: ethernet-phy@4 {
++			reg = <4>;
+ 			rxd0-skew-ps = <0>;
+ 			rxd1-skew-ps = <0>;
+ 			rxd2-skew-ps = <0>;
+diff --git a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
+index d4e0b8150a84ce..cf26e2ceaaa074 100644
+--- a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
++++ b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts
+@@ -38,7 +38,7 @@ sound {
+ 		simple-audio-card,mclk-fs = <256>;
+ 
+ 		simple-audio-card,cpu {
+-			sound-dai = <&audio0 0>;
++			sound-dai = <&audio0>;
+ 		};
+ 
+ 		simple-audio-card,codec {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 948b88cf5e9dff..305c2912e90f74 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -298,7 +298,7 @@ thermal-zones {
+ 		cpu-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <2000>;
+-			thermal-sensors = <&tmu 0>;
++			thermal-sensors = <&tmu 1>;
+ 			trips {
+ 				cpu_alert0: trip0 {
+ 					temperature = <85000>;
+@@ -328,7 +328,7 @@ map0 {
+ 		soc-thermal {
+ 			polling-delay-passive = <250>;
+ 			polling-delay = <2000>;
+-			thermal-sensors = <&tmu 1>;
++			thermal-sensors = <&tmu 0>;
+ 			trips {
+ 				soc_alert0: trip0 {
+ 					temperature = <85000>;
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi b/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
+index ad0ab34b66028c..bd42bfbe408bbe 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9130-cf.dtsi
+@@ -152,11 +152,12 @@ expander0_pins: cp0-expander0-pins {
+ 
+ /* SRDS #0 - SATA on M.2 connector */
+ &cp0_sata0 {
+-	phys = <&cp0_comphy0 1>;
+ 	status = "okay";
+ 
+-	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
++	sata-port@1 {
++		phys = <&cp0_comphy0 1>;
++		status = "okay";
++	};
+ };
+ 
+ /* microSD */
+diff --git a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+index 47234d0858dd21..338853d3b179bb 100644
+--- a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
++++ b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+@@ -563,11 +563,13 @@ &cp1_rtc {
+ 
+ /* SRDS #1 - SATA on M.2 (J44) */
+ &cp1_sata0 {
+-	phys = <&cp1_comphy1 0>;
+ 	status = "okay";
+ 
+ 	/* only port 0 is available */
+-	/delete-node/ sata-port@1;
++	sata-port@0 {
++		phys = <&cp1_comphy1 0>;
++		status = "okay";
++	};
+ };
+ 
+ &cp1_syscon0 {
+diff --git a/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts b/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
+index 0f53745a6fa0d8..6f237d3542b910 100644
+--- a/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
++++ b/arch/arm64/boot/dts/marvell/cn9132-clearfog.dts
+@@ -413,7 +413,13 @@ fixed-link {
+ /* SRDS #0,#1,#2,#3 - PCIe */
+ &cp0_pcie0 {
+ 	num-lanes = <4>;
+-	phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, <&cp0_comphy2 0>, <&cp0_comphy3 0>;
++	/*
++	 * The mvebu-comphy driver does not currently know how to pass correct
++	 * lane-count to ATF while configuring the serdes lanes.
++	 * Rely on bootloader configuration only.
++	 *
++	 * phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>, <&cp0_comphy2 0>, <&cp0_comphy3 0>;
++	 */
+ 	status = "okay";
+ };
+ 
+@@ -475,7 +481,13 @@ &cp1_eth0 {
+ /* SRDS #0,#1 - PCIe */
+ &cp1_pcie0 {
+ 	num-lanes = <2>;
+-	phys = <&cp1_comphy0 0>, <&cp1_comphy1 0>;
++	/*
++	 * The mvebu-comphy driver does not currently know how to pass correct
++	 * lane-count to ATF while configuring the serdes lanes.
++	 * Rely on bootloader configuration only.
++	 *
++	 * phys = <&cp1_comphy0 0>, <&cp1_comphy1 0>;
++	 */
+ 	status = "okay";
+ };
+ 
+@@ -512,10 +524,9 @@ &cp1_sata0 {
+ 	status = "okay";
+ 
+ 	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
+-
+ 	sata-port@1 {
+ 		phys = <&cp1_comphy3 1>;
++		status = "okay";
+ 	};
+ };
+ 
+@@ -631,9 +642,8 @@ &cp2_sata0 {
+ 	status = "okay";
+ 
+ 	/* only port 1 is available */
+-	/delete-node/ sata-port@0;
+-
+ 	sata-port@1 {
++		status = "okay";
+ 		phys = <&cp2_comphy3 1>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi b/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
+index afc041c1c448c3..bb2bb47fd77c12 100644
+--- a/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
++++ b/arch/arm64/boot/dts/marvell/cn9132-sr-cex7.dtsi
+@@ -137,6 +137,14 @@ &ap_sdhci0 {
+ 	pinctrl-0 = <&ap_mmc0_pins>;
+ 	pinctrl-names = "default";
+ 	vqmmc-supply = <&v_1_8>;
++	/*
++	 * Not stable in HS modes - phy needs "more calibration", so disable
++	 * UHS (by preventing voltage switch), SDR104, SDR50 and DDR50 modes.
++	 */
++	no-1-8-v;
++	no-sd;
++	no-sdio;
++	non-removable;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
+index 4fedc50cce8c86..11940c77f2bd01 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi
+@@ -42,9 +42,8 @@ analog-sound {
+ 		simple-audio-card,bitclock-master = <&masterdai>;
+ 		simple-audio-card,format = "i2s";
+ 		simple-audio-card,frame-master = <&masterdai>;
+-		simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_LOW>;
++		simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_HIGH>;
+ 		simple-audio-card,mclk-fs = <256>;
+-		simple-audio-card,pin-switches = "Headphones";
+ 		simple-audio-card,routing =
+ 			"Headphones", "LOUT1",
+ 			"Headphones", "ROUT1",
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 5bd5aae60d5369..4d9fe4ea78affa 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -964,6 +964,23 @@ static inline int pudp_test_and_clear_young(struct vm_area_struct *vma,
+ 	return ptep_test_and_clear_young(vma, address, (pte_t *)pudp);
+ }
+ 
++#define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR
++static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
++					    unsigned long address,  pud_t *pudp)
++{
++#ifdef CONFIG_SMP
++	pud_t pud = __pud(xchg(&pudp->pud, 0));
++#else
++	pud_t pud = *pudp;
++
++	pud_clear(pudp);
++#endif
++
++	page_table_check_pud_clear(mm, pud);
++
++	return pud;
++}
++
+ static inline int pud_young(pud_t pud)
+ {
+ 	return pte_young(pud_pte(pud));
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 874c9b264d6f0c..a530806ec5b025 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -26,7 +26,6 @@ config X86_64
+ 	depends on 64BIT
+ 	# Options that are inherently 64-bit kernel only:
+ 	select ARCH_HAS_GIGANTIC_PAGE
+-	select ARCH_HAS_PTDUMP
+ 	select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
+ 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
+ 	select ARCH_SUPPORTS_PER_VMA_LOCK
+@@ -101,6 +100,7 @@ config X86
+ 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PMEM_API		if X86_64
+ 	select ARCH_HAS_PREEMPT_LAZY
++	select ARCH_HAS_PTDUMP
+ 	select ARCH_HAS_PTE_DEVMAP		if X86_64
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_HW_PTE_YOUNG
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index 6c79ee7c0957a7..21041898157a1f 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -231,6 +231,16 @@ static inline bool topology_is_primary_thread(unsigned int cpu)
+ }
+ #define topology_is_primary_thread topology_is_primary_thread
+ 
++int topology_get_primary_thread(unsigned int cpu);
++
++static inline bool topology_is_core_online(unsigned int cpu)
++{
++	int pcpu = topology_get_primary_thread(cpu);
++
++	return pcpu >= 0 ? cpu_online(pcpu) : false;
++}
++#define topology_is_core_online topology_is_core_online
++
+ #else /* CONFIG_SMP */
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index e35ccdc84910f5..6073a16628f9e4 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -372,6 +372,19 @@ unsigned int topology_unit_count(u32 apicid, enum x86_topology_domains which_uni
+ 	return topo_unit_count(lvlid, at_level, apic_maps[which_units].map);
+ }
+ 
++#ifdef CONFIG_SMP
++int topology_get_primary_thread(unsigned int cpu)
++{
++	u32 apic_id = cpuid_to_apicid[cpu];
++
++	/*
++	 * Get the core domain level APIC id, which is the primary thread
++	 * and return the CPU number assigned to it.
++	 */
++	return topo_lookup_cpuid(topo_apicid(apic_id, TOPO_CORE_DOMAIN));
++}
++#endif
++
+ #ifdef CONFIG_ACPI_HOTPLUG_CPU
+ /**
+  * topology_hotplug_apic - Handle a physical hotplugged APIC after boot
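
topology_is_core_online() gives the generic hotplug code a way to ask, for any CPU, whether the core it belongs to is already up: the APIC ID is masked to the core domain to find the primary thread, and that thread's online state stands in for the whole core. A hypothetical caller (not from the patch):

/* Hypothetical use in a hotplug policy check. */
static bool example_may_online_sibling(unsigned int cpu)
{
	/* A secondary SMT thread may come up only if its core's
	 * primary thread is already online.
	 */
	return topology_is_primary_thread(cpu) || topology_is_core_online(cpu);
}
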
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 628f5b633b61fe..b2da1cda4cebd1 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2956,6 +2956,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 			goto err_null_driver;
+ 	}
+ 
++	/*
++	 * Mark support for the scheduler's frequency invariance engine for
++	 * drivers that implement target(), target_index() or fast_switch().
++	 */
++	if (!cpufreq_driver->setpolicy) {
++		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
++		pr_debug("cpufreq: supports frequency invariance\n");
++	}
++
+ 	ret = subsys_interface_register(&cpufreq_interface);
+ 	if (ret)
+ 		goto err_boost_unreg;
+@@ -2977,21 +2986,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ 	hp_online = ret;
+ 	ret = 0;
+ 
+-	/*
+-	 * Mark support for the scheduler's frequency invariance engine for
+-	 * drivers that implement target(), target_index() or fast_switch().
+-	 */
+-	if (!cpufreq_driver->setpolicy) {
+-		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
+-		pr_debug("supports frequency invariance");
+-	}
+-
+ 	pr_debug("driver %s up and running\n", driver_data->name);
+ 	goto out;
+ 
+ err_if_unreg:
+ 	subsys_interface_unregister(&cpufreq_interface);
+ err_boost_unreg:
++	if (!cpufreq_driver->setpolicy)
++		static_branch_disable_cpuslocked(&cpufreq_freq_invariance);
+ 	remove_boost_sysfs_file();
+ err_null_driver:
+ 	write_lock_irqsave(&cpufreq_driver_lock, flags);
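
Two things change in cpufreq_register_driver(): the frequency-invariance static branch is now enabled before subsys_interface_register(), so it is already set when per-policy initialization runs, and the error path gains the matching disable so a failed registration cannot leave the global key permanently elevated. The balanced pattern, sketched with a hypothetical stand-in for the registration steps:

/* Sketch of the paired enable/unwind established by the fix. */
static int example_register_tail(struct cpufreq_driver *drv)
{
	int ret;

	if (!drv->setpolicy)
		static_branch_enable_cpuslocked(&cpufreq_freq_invariance);

	ret = example_register_interfaces();	/* hypothetical stand-in */
	if (ret)
		goto err_unwind;
	return 0;

err_unwind:
	if (!drv->setpolicy)
		static_branch_disable_cpuslocked(&cpufreq_freq_invariance);
	return ret;
}
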
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index bd04980009a467..6a81c3fd4c8609 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -41,7 +41,7 @@
+ /*
+  * ABI version history is documented in linux/firewire-cdev.h.
+  */
+-#define FW_CDEV_KERNEL_VERSION			5
++#define FW_CDEV_KERNEL_VERSION			6
+ #define FW_CDEV_VERSION_EVENT_REQUEST2		4
+ #define FW_CDEV_VERSION_ALLOCATE_REGION_END	4
+ #define FW_CDEV_VERSION_AUTO_FLUSH_ISO_OVERFLOW	5
+diff --git a/drivers/gpio/gpio-regmap.c b/drivers/gpio/gpio-regmap.c
+index 87c4225784cfae..b3b84a404485eb 100644
+--- a/drivers/gpio/gpio-regmap.c
++++ b/drivers/gpio/gpio-regmap.c
+@@ -274,7 +274,7 @@ struct gpio_regmap *gpio_regmap_register(const struct gpio_regmap_config *config
+ 	if (!chip->ngpio) {
+ 		ret = gpiochip_get_ngpios(chip, chip->parent);
+ 		if (ret)
+-			return ERR_PTR(ret);
++			goto err_free_gpio;
+ 	}
+ 
+ 	/* if not set, assume there is only one register */
+diff --git a/drivers/gpio/gpiolib-acpi-quirks.c b/drivers/gpio/gpiolib-acpi-quirks.c
+index c13545dce3492d..bfb04e67c4bc87 100644
+--- a/drivers/gpio/gpiolib-acpi-quirks.c
++++ b/drivers/gpio/gpiolib-acpi-quirks.c
+@@ -344,6 +344,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ 			.ignore_interrupt = "AMDI0030:00@8",
+ 		},
+ 	},
++	{
++		/*
++		 * Spurious wakeups from TP_ATTN# pin
++		 * Found in BIOS 5.35
++		 * https://gitlab.freedesktop.org/drm/amd/-/issues/4482
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt PX13"),
++		},
++		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++			.ignore_wake = "ASCP1A00:00@8",
++		},
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 3a3eca5b4c40b6..01d611d7ee66ac 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -4605,6 +4605,23 @@ static struct gpio_desc *gpiod_find_by_fwnode(struct fwnode_handle *fwnode,
+ 	return desc;
+ }
+ 
++static struct gpio_desc *gpiod_fwnode_lookup(struct fwnode_handle *fwnode,
++					     struct device *consumer,
++					     const char *con_id,
++					     unsigned int idx,
++					     enum gpiod_flags *flags,
++					     unsigned long *lookupflags)
++{
++	struct gpio_desc *desc;
++
++	desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx, flags, lookupflags);
++	if (gpiod_not_found(desc) && !IS_ERR_OR_NULL(fwnode))
++		desc = gpiod_find_by_fwnode(fwnode->secondary, consumer, con_id,
++					    idx, flags, lookupflags);
++
++	return desc;
++}
++
+ struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+ 					 struct fwnode_handle *fwnode,
+ 					 const char *con_id,
+@@ -4623,8 +4640,8 @@ struct gpio_desc *gpiod_find_and_request(struct device *consumer,
+ 	int ret = 0;
+ 
+ 	scoped_guard(srcu, &gpio_devices_srcu) {
+-		desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx,
+-					    &flags, &lookupflags);
++		desc = gpiod_fwnode_lookup(fwnode, consumer, con_id, idx,
++					   &flags, &lookupflags);
+ 		if (gpiod_not_found(desc) && platform_lookup_allowed) {
+ 			/*
+ 			 * Either we are not using DT or ACPI, or their lookup
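
gpiod_fwnode_lookup() adds a second chance to GPIO resolution: if the primary firmware node (DT or ACPI) has no matching entry, the lookup is retried against fwnode->secondary, which is where software nodes attached to a device are linked. The shape of the fallback, mirroring the new helper:

/* Sketch: consult both halves of the primary/secondary fwnode pair. */
desc = gpiod_find_by_fwnode(fwnode, consumer, con_id, idx,
			    &flags, &lookupflags);
if (gpiod_not_found(desc) && !IS_ERR_OR_NULL(fwnode))
	desc = gpiod_find_by_fwnode(fwnode->secondary,	/* e.g. software node */
				    consumer, con_id, idx,
				    &flags, &lookupflags);
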
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 260165bbe3736d..b16cce7c22c373 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -213,19 +213,35 @@ int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev,
+ 	spin_lock(&kfd_mem_limit.mem_limit_lock);
+ 
+ 	if (kfd_mem_limit.system_mem_used + system_mem_needed >
+-	    kfd_mem_limit.max_system_mem_limit)
++	    kfd_mem_limit.max_system_mem_limit) {
+ 		pr_debug("Set no_system_mem_limit=1 if using shared memory\n");
++		if (!no_system_mem_limit) {
++			ret = -ENOMEM;
++			goto release;
++		}
++	}
+ 
+-	if ((kfd_mem_limit.system_mem_used + system_mem_needed >
+-	     kfd_mem_limit.max_system_mem_limit && !no_system_mem_limit) ||
+-	    (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
+-	     kfd_mem_limit.max_ttm_mem_limit) ||
+-	    (adev && xcp_id >= 0 && adev->kfd.vram_used[xcp_id] + vram_needed >
+-	     vram_size - reserved_for_pt - reserved_for_ras - atomic64_read(&adev->vram_pin_size))) {
++	if (kfd_mem_limit.ttm_mem_used + ttm_mem_needed >
++		kfd_mem_limit.max_ttm_mem_limit) {
+ 		ret = -ENOMEM;
+ 		goto release;
+ 	}
+ 
++	/* if is_app_apu is false and apu_prefer_gtt is true, it is an APU with
++	 * carve out < gtt. In that case, VRAM allocation will go to gtt domain; skip
++	 * VRAM check since ttm_mem_limit check already covers this allocation
++	 */
++
++	if (adev && xcp_id >= 0 && (!adev->apu_prefer_gtt || adev->gmc.is_app_apu)) {
++		uint64_t vram_available =
++			vram_size - reserved_for_pt - reserved_for_ras -
++			atomic64_read(&adev->vram_pin_size);
++		if (adev->kfd.vram_used[xcp_id] + vram_needed > vram_available) {
++			ret = -ENOMEM;
++			goto release;
++		}
++	}
++
+ 	/* Update memory accounting by decreasing available system
+ 	 * memory, TTM memory and GPU memory as computed above
+ 	 */
+@@ -1626,11 +1642,15 @@ size_t amdgpu_amdkfd_get_available_memory(struct amdgpu_device *adev,
+ 	uint64_t vram_available, system_mem_available, ttm_mem_available;
+ 
+ 	spin_lock(&kfd_mem_limit.mem_limit_lock);
+-	vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
+-		- adev->kfd.vram_used_aligned[xcp_id]
+-		- atomic64_read(&adev->vram_pin_size)
+-		- reserved_for_pt
+-		- reserved_for_ras;
++	if (adev->apu_prefer_gtt && !adev->gmc.is_app_apu)
++		vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
++			- adev->kfd.vram_used_aligned[xcp_id];
++	else
++		vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id)
++			- adev->kfd.vram_used_aligned[xcp_id]
++			- atomic64_read(&adev->vram_pin_size)
++			- reserved_for_pt
++			- reserved_for_ras;
+ 
+ 	if (adev->apu_prefer_gtt) {
+ 		system_mem_available = no_system_mem_limit ?
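
The accounting rework above splits one compound condition into three independent budget checks, which also lets the APU special case be expressed cleanly: when apu_prefer_gtt is set and the device is not an app APU, nominal VRAM allocations are actually GTT-backed, so the TTM limit already covers them and the VRAM budget test is skipped. Decomposed, with illustrative variable names:

/* Sketch of the decomposed limit checks (variables are illustrative). */
if (system_used + system_needed > system_limit && !no_system_mem_limit)
	return -ENOMEM;				/* system RAM budget */
if (ttm_used + ttm_needed > ttm_limit)
	return -ENOMEM;				/* TTM (GTT) budget */
if (!apu_prefer_gtt || is_app_apu) {		/* real VRAM only */
	if (vram_used + vram_needed > vram_available)
		return -ENOMEM;			/* VRAM budget */
}
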
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 4ec73f33535ebf..720b20e842ba43 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1587,7 +1587,8 @@ static int kfd_dev_create_p2p_links(void)
+ 			break;
+ 		if (!dev->gpu || !dev->gpu->adev ||
+ 		    (dev->gpu->kfd->hive_id &&
+-		     dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id))
++		     dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id &&
++		     amdgpu_xgmi_get_is_sharing_enabled(dev->gpu->adev, new_dev->gpu->adev)))
+ 			goto next;
+ 
+ 		/* check if node(s) is/are peer accessible in one direction or bi-direction */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 58ea351dd48b5d..fa24bcae3c5fc4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2035,6 +2035,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 
+ 	dc_hardware_init(adev->dm.dc);
+ 
++	adev->dm.restore_backlight = true;
++
+ 	adev->dm.hpd_rx_offload_wq = hpd_rx_irq_create_workqueue(adev);
+ 	if (!adev->dm.hpd_rx_offload_wq) {
+ 		drm_err(adev_to_drm(adev), "amdgpu: failed to create hpd rx offload workqueue.\n");
+@@ -3396,6 +3398,7 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ 		dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
+ 
+ 		dc_resume(dm->dc);
++		adev->dm.restore_backlight = true;
+ 
+ 		amdgpu_dm_irq_resume_early(adev);
+ 
+@@ -9801,7 +9804,6 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 	bool mode_set_reset_required = false;
+ 	u32 i;
+ 	struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
+-	bool set_backlight_level = false;
+ 
+ 	/* Disable writeback */
+ 	for_each_old_connector_in_state(state, connector, old_con_state, i) {
+@@ -9921,7 +9923,6 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 			acrtc->hw_mode = new_crtc_state->mode;
+ 			crtc->hwmode = new_crtc_state->mode;
+ 			mode_set_reset_required = true;
+-			set_backlight_level = true;
+ 		} else if (modereset_required(new_crtc_state)) {
+ 			drm_dbg_atomic(dev,
+ 				       "Atomic commit: RESET. crtc id %d:[%p]\n",
+@@ -9978,13 +9979,16 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
+ 	 * to fix a flicker issue.
+ 	 * It will cause the dm->actual_brightness is not the current panel brightness
+ 	 * level. (the dm->brightness is the correct panel level)
+-	 * So we set the backlight level with dm->brightness value after set mode
++	 * So we set the backlight level with dm->brightness value after initial
++	 * set mode. Use restore_backlight flag to avoid setting backlight level
++	 * for every subsequent mode set.
+ 	 */
+-	if (set_backlight_level) {
++	if (dm->restore_backlight) {
+ 		for (i = 0; i < dm->num_of_edps; i++) {
+ 			if (dm->backlight_dev[i])
+ 				amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
+ 		}
++		dm->restore_backlight = false;
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index d7d92f9911e465..47abef63686ea5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -610,6 +610,13 @@ struct amdgpu_display_manager {
+ 	 */
+ 	u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
+ 
++	/**
++	 * @restore_backlight:
++	 *
++	 * Flag to indicate whether to restore backlight after modeset.
++	 */
++	bool restore_backlight;
++
+ 	/**
+ 	 * @aux_hpd_discon_quirk:
+ 	 *
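
The restore_backlight flag turns the old per-commit set_backlight_level local into one-shot state: it is armed at driver init and on resume, and the first atomic commit that programs streams consumes it, restoring the cached brightness exactly once instead of on every full modeset. The lifecycle, sketched (per-eDP guards from the real code elided):

/* Sketch of the one-shot flag lifecycle. */
adev->dm.restore_backlight = true;	/* dm_init / dm_resume */

/* ...later, in amdgpu_dm_commit_streams(): */
if (dm->restore_backlight) {
	for (i = 0; i < dm->num_of_edps; i++)
		amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
	dm->restore_backlight = false;	/* skip on subsequent modesets */
}
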
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 7dfbfb18593c12..f037f2d83400b9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1292,7 +1292,6 @@ union surface_update_flags {
+ 		uint32_t in_transfer_func_change:1;
+ 		uint32_t input_csc_change:1;
+ 		uint32_t coeff_reduction_change:1;
+-		uint32_t output_tf_change:1;
+ 		uint32_t pixel_format_change:1;
+ 		uint32_t plane_size_change:1;
+ 		uint32_t gamut_remap_change:1;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 454e362ff096aa..c0127d8b5b3961 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1990,10 +1990,8 @@ static void dcn20_program_pipe(
+ 	 * updating on slave planes
+ 	 */
+ 	if (pipe_ctx->update_flags.bits.enable ||
+-		pipe_ctx->update_flags.bits.plane_changed ||
+-		pipe_ctx->stream->update_flags.bits.out_tf ||
+-		(pipe_ctx->plane_state &&
+-			pipe_ctx->plane_state->update_flags.bits.output_tf_change))
++	    pipe_ctx->update_flags.bits.plane_changed ||
++	    pipe_ctx->stream->update_flags.bits.out_tf)
+ 		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+ 
+ 	/* If the pipe has been enabled or has a different opp, we
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index c4177a9a662fac..c68d01f3786026 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -2289,10 +2289,8 @@ void dcn401_program_pipe(
+ 	 * updating on slave planes
+ 	 */
+ 	if (pipe_ctx->update_flags.bits.enable ||
+-		pipe_ctx->update_flags.bits.plane_changed ||
+-		pipe_ctx->stream->update_flags.bits.out_tf ||
+-		(pipe_ctx->plane_state &&
+-			pipe_ctx->plane_state->update_flags.bits.output_tf_change))
++	    pipe_ctx->update_flags.bits.plane_changed ||
++	    pipe_ctx->stream->update_flags.bits.out_tf)
+ 		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+ 
+ 	/* If the pipe has been enabled or has a different opp, we
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index 19c04687b0fe1f..8e650a02c5287b 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -134,7 +134,7 @@ static int ast_astdp_read_edid_block(void *data, u8 *buf, unsigned int block, si
+ 			 * 3. The Delays are often longer a lot when system resume from S3/S4.
+ 			 */
+ 			if (j)
+-				mdelay(j + 1);
++				msleep(j + 1);
+ 
+ 			/* Wait for EDID offset to show up in mirror register */
+ 			vgacrd7 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd7);
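
The one-liner above matters because mdelay() busy-waits, pinning the CPU for the full interval, whereas this EDID retry loop runs in process context where sleeping is allowed; msleep() yields the CPU to other work during waits that, per the comment, can stretch considerably on resume from S3/S4. As a rule of thumb:

/* Rule of thumb (sketch): */
mdelay(5);	/* spins: only for atomic context or very short waits */
msleep(5);	/* sleeps: preferred in process context, as in this driver */
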
+diff --git a/drivers/gpu/drm/gma500/oaktrail_hdmi.c b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+index 1cf39436912776..c0feca58511df3 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_hdmi.c
++++ b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+@@ -726,8 +726,8 @@ void oaktrail_hdmi_teardown(struct drm_device *dev)
+ 
+ 	if (hdmi_dev) {
+ 		pdev = hdmi_dev->dev;
+-		pci_set_drvdata(pdev, NULL);
+ 		oaktrail_hdmi_i2c_exit(pdev);
++		pci_set_drvdata(pdev, NULL);
+ 		iounmap(hdmi_dev->regs);
+ 		kfree(hdmi_dev);
+ 		pci_dev_put(pdev);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index d58f8fc3732658..55b8bfcf364aec 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -593,8 +593,9 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
+ 			enum transcoder master;
+ 
+ 			master = crtc_state->mst_master_transcoder;
+-			drm_WARN_ON(display->drm,
+-				    master == INVALID_TRANSCODER);
++			if (drm_WARN_ON(display->drm,
++					master == INVALID_TRANSCODER))
++				master = TRANSCODER_A;
+ 			temp |= TRANS_DDI_MST_TRANSPORT_SELECT(master);
+ 		}
+ 	} else {
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index f1ec3b02f15a00..07cd67baa81bfc 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -789,6 +789,8 @@ static const struct panfrost_compatible amlogic_data = {
+ 	.vendor_quirk = panfrost_gpu_amlogic_quirk,
+ };
+ 
++static const char * const mediatek_pm_domains[] = { "core0", "core1", "core2",
++						    "core3", "core4" };
+ /*
+  * The old data with two power supplies for MT8183 is here only to
+  * keep retro-compatibility with older devicetrees, as DVFS will
+@@ -797,51 +799,53 @@ static const struct panfrost_compatible amlogic_data = {
+  * On new devicetrees please use the _b variant with a single and
+  * coupled regulators instead.
+  */
+-static const char * const mediatek_mt8183_supplies[] = { "mali", "sram", NULL };
+-static const char * const mediatek_mt8183_pm_domains[] = { "core0", "core1", "core2" };
++static const char * const legacy_supplies[] = { "mali", "sram", NULL };
+ static const struct panfrost_compatible mediatek_mt8183_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_supplies) - 1,
+-	.supply_names = mediatek_mt8183_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(legacy_supplies) - 1,
++	.supply_names = legacy_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ };
+ 
+-static const char * const mediatek_mt8183_b_supplies[] = { "mali", NULL };
+ static const struct panfrost_compatible mediatek_mt8183_b_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ };
+ 
+-static const char * const mediatek_mt8186_pm_domains[] = { "core0", "core1" };
+ static const struct panfrost_compatible mediatek_mt8186_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8186_pm_domains),
+-	.pm_domain_names = mediatek_mt8186_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 2,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ };
+ 
+-/* MT8188 uses the same power domains and power supplies as MT8183 */
+ static const struct panfrost_compatible mediatek_mt8188_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8183_b_supplies) - 1,
+-	.supply_names = mediatek_mt8183_b_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8183_pm_domains),
+-	.pm_domain_names = mediatek_mt8183_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 3,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ 	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
+ };
+ 
+-static const char * const mediatek_mt8192_supplies[] = { "mali", NULL };
+-static const char * const mediatek_mt8192_pm_domains[] = { "core0", "core1", "core2",
+-							   "core3", "core4" };
+ static const struct panfrost_compatible mediatek_mt8192_data = {
+-	.num_supplies = ARRAY_SIZE(mediatek_mt8192_supplies) - 1,
+-	.supply_names = mediatek_mt8192_supplies,
+-	.num_pm_domains = ARRAY_SIZE(mediatek_mt8192_pm_domains),
+-	.pm_domain_names = mediatek_mt8192_pm_domains,
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 5,
++	.pm_domain_names = mediatek_pm_domains,
++	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
++	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
++};
++
++static const struct panfrost_compatible mediatek_mt8370_data = {
++	.num_supplies = ARRAY_SIZE(default_supplies) - 1,
++	.supply_names = default_supplies,
++	.num_pm_domains = 2,
++	.pm_domain_names = mediatek_pm_domains,
+ 	.pm_features = BIT(GPU_PM_CLK_DIS) | BIT(GPU_PM_VREG_OFF),
+ 	.gpu_quirks = BIT(GPU_QUIRK_FORCE_AARCH64_PGTABLE),
+ };
+@@ -868,6 +872,7 @@ static const struct of_device_id dt_match[] = {
+ 	{ .compatible = "mediatek,mt8186-mali", .data = &mediatek_mt8186_data },
+ 	{ .compatible = "mediatek,mt8188-mali", .data = &mediatek_mt8188_data },
+ 	{ .compatible = "mediatek,mt8192-mali", .data = &mediatek_mt8192_data },
++	{ .compatible = "mediatek,mt8370-mali", .data = &mediatek_mt8370_data },
+ 	{ .compatible = "allwinner,sun50i-h616-mali", .data = &allwinner_h616_data },
+ 	{}
+ };
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 43ee57728de543..e927d80d6a2af9 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -886,8 +886,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
+ 	if (IS_ERR_OR_NULL(queue))
+ 		return;
+ 
+-	if (queue->entity.fence_context)
+-		drm_sched_entity_destroy(&queue->entity);
++	drm_sched_entity_destroy(&queue->entity);
+ 
+ 	if (queue->scheduler.ops)
+ 		drm_sched_fini(&queue->scheduler);
+@@ -3558,11 +3557,6 @@ int panthor_group_destroy(struct panthor_file *pfile, u32 group_handle)
+ 	if (!group)
+ 		return -EINVAL;
+ 
+-	for (u32 i = 0; i < group->queue_count; i++) {
+-		if (group->queues[i])
+-			drm_sched_entity_destroy(&group->queues[i]->entity);
+-	}
+-
+ 	mutex_lock(&sched->reset.lock);
+ 	mutex_lock(&sched->lock);
+ 	group->destroyed = true;
+diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+index 4d9896e14649c0..448afb86e05c7d 100644
+--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+@@ -117,7 +117,6 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_ENTER_S_STATE = 0x501,
+ 	XE_GUC_ACTION_EXIT_S_STATE = 0x502,
+ 	XE_GUC_ACTION_GLOBAL_SCHED_POLICY_CHANGE = 0x506,
+-	XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV = 0x509,
+ 	XE_GUC_ACTION_SCHED_CONTEXT = 0x1000,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET = 0x1001,
+ 	XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE = 0x1002,
+@@ -143,7 +142,6 @@ enum xe_guc_action {
+ 	XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
+ 	XE_GUC_ACTION_SET_DEVICE_ENGINE_ACTIVITY_BUFFER = 0x550C,
+ 	XE_GUC_ACTION_SET_FUNCTION_ENGINE_ACTIVITY_BUFFER = 0x550D,
+-	XE_GUC_ACTION_OPT_IN_FEATURE_KLV = 0x550E,
+ 	XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR = 0x6000,
+ 	XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC = 0x6002,
+ 	XE_GUC_ACTION_PAGE_FAULT_RES_DESC = 0x6003,
+@@ -242,7 +240,4 @@ enum xe_guc_g2g_type {
+ #define XE_G2G_DEREGISTER_TILE	REG_GENMASK(15, 12)
+ #define XE_G2G_DEREGISTER_TYPE	REG_GENMASK(11, 8)
+ 
+-/* invalid type for XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR */
+-#define XE_GUC_CAT_ERR_TYPE_INVALID 0xdeadbeef
+-
+ #endif
+diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+index 89034bc97ec5a4..7de8f827281fcd 100644
+--- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
++++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+@@ -16,8 +16,6 @@
+  *  +===+=======+==============================================================+
+  *  | 0 | 31:16 | **KEY** - KLV key identifier                                 |
+  *  |   |       |   - `GuC Self Config KLVs`_                                  |
+- *  |   |       |   - `GuC Opt In Feature KLVs`_                               |
+- *  |   |       |   - `GuC Scheduling Policies KLVs`_                          |
+  *  |   |       |   - `GuC VGT Policy KLVs`_                                   |
+  *  |   |       |   - `GuC VF Configuration KLVs`_                             |
+  *  |   |       |                                                              |
+@@ -126,44 +124,6 @@ enum  {
+ 	GUC_CONTEXT_POLICIES_KLV_NUM_IDS = 5,
+ };
+ 
+-/**
+- * DOC: GuC Opt In Feature KLVs
+- *
+- * `GuC KLV`_ keys available for use with OPT_IN_FEATURE_KLV
+- *
+- *  _`GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE` : 0x4001
+- *      Adds an extra dword to the XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR G2H
+- *      containing the type of the CAT error. On HW that does not support
+- *      reporting the CAT error type, the extra dword is set to 0xdeadbeef.
+- */
+-
+-#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_KEY 0x4001
+-#define GUC_KLV_OPT_IN_FEATURE_EXT_CAT_ERR_TYPE_LEN 0u
+-
+-/**
+- * DOC: GuC Scheduling Policies KLVs
+- *
+- * `GuC KLV`_ keys available for use with UPDATE_SCHEDULING_POLICIES_KLV.
+- *
+- * _`GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD` : 0x1001
+- *      Some platforms do not allow concurrent execution of RCS and CCS
+- *      workloads from different address spaces. By default, the GuC prioritizes
+- *      RCS submissions over CCS ones, which can lead to CCS workloads being
+- *      significantly (or completely) starved of execution time. This KLV allows
+- *      the driver to specify a quantum (in ms) and a ratio (percentage value
+- *      between 0 and 100), and the GuC will prioritize the CCS for that
+- *      percentage of each quantum. For example, specifying 100ms and 30% will
+- *      make the GuC prioritize the CCS for 30ms of every 100ms.
+- *      Note that this does not necessarily mean that RCS and CCS engines will
+- *      only be active for their percentage of the quantum, as the restriction
+- *      only kicks in if both classes are fully busy with non-compatible address
+- *      spaces; i.e., if one engine is idle or running the same address space,
+- *      a pending job on the other engine will still be submitted to the HW no
+- *      matter what the ratio is
+- */
+-#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_KEY	0x1001
+-#define GUC_KLV_SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD_LEN	2u
+-
+ /**
+  * DOC: GuC VGT Policy KLVs
+  *
+diff --git a/drivers/gpu/drm/xe/xe_bo_evict.c b/drivers/gpu/drm/xe/xe_bo_evict.c
+index ed3746d32b27b1..4620201c72399d 100644
+--- a/drivers/gpu/drm/xe/xe_bo_evict.c
++++ b/drivers/gpu/drm/xe/xe_bo_evict.c
+@@ -158,8 +158,8 @@ int xe_bo_evict_all(struct xe_device *xe)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.kernel_bo_present,
+-				    &xe->pinned.late.evicted, xe_bo_evict_pinned);
++	ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.external,
++				    &xe->pinned.late.external, xe_bo_evict_pinned);
+ 
+ 	if (!ret)
+ 		ret = xe_bo_apply_to_pinned(xe, &xe->pinned.late.kernel_bo_present,
+diff --git a/drivers/gpu/drm/xe/xe_configfs.c b/drivers/gpu/drm/xe/xe_configfs.c
+index 9a2b96b111ef54..2b591ed055612a 100644
+--- a/drivers/gpu/drm/xe/xe_configfs.c
++++ b/drivers/gpu/drm/xe/xe_configfs.c
+@@ -244,7 +244,7 @@ int __init xe_configfs_init(void)
+ 	return 0;
+ }
+ 
+-void __exit xe_configfs_exit(void)
++void xe_configfs_exit(void)
+ {
+ 	configfs_unregister_subsystem(&xe_configfs);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
+index b9440f8c781e3b..652da4d294c0b2 100644
+--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
++++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
+@@ -166,7 +166,7 @@ int xe_device_sysfs_init(struct xe_device *xe)
+ 			return ret;
+ 	}
+ 
+-	if (xe->info.platform == XE_BATTLEMAGE) {
++	if (xe->info.platform == XE_BATTLEMAGE && !IS_SRIOV_VF(xe)) {
+ 		ret = sysfs_create_files(&dev->kobj, auto_link_downgrade_attrs);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index eaf7569a7c1d1e..e3517ce2e18c14 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -41,7 +41,6 @@
+ #include "xe_gt_topology.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_pc.h"
+-#include "xe_guc_submit.h"
+ #include "xe_hw_fence.h"
+ #include "xe_hw_engine_class_sysfs.h"
+ #include "xe_irq.h"
+@@ -98,7 +97,7 @@ void xe_gt_sanitize(struct xe_gt *gt)
+ 	 * FIXME: if xe_uc_sanitize is called here, on TGL driver will not
+ 	 * reload
+ 	 */
+-	xe_guc_submit_disable(&gt->uc.guc);
++	gt->uc.guc.submission_state.enabled = false;
+ }
+ 
+ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
+diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
+index b9d21fdaad48ba..bac5471a1a7806 100644
+--- a/drivers/gpu/drm/xe/xe_guc.c
++++ b/drivers/gpu/drm/xe/xe_guc.c
+@@ -29,7 +29,6 @@
+ #include "xe_guc_db_mgr.h"
+ #include "xe_guc_engine_activity.h"
+ #include "xe_guc_hwconfig.h"
+-#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_log.h"
+ #include "xe_guc_pc.h"
+ #include "xe_guc_relay.h"
+@@ -571,57 +570,6 @@ static int guc_g2g_start(struct xe_guc *guc)
+ 	return err;
+ }
+ 
+-static int __guc_opt_in_features_enable(struct xe_guc *guc, u64 addr, u32 num_dwords)
+-{
+-	u32 action[] = {
+-		XE_GUC_ACTION_OPT_IN_FEATURE_KLV,
+-		lower_32_bits(addr),
+-		upper_32_bits(addr),
+-		num_dwords
+-	};
+-
+-	return xe_guc_ct_send_block(&guc->ct, action, ARRAY_SIZE(action));
+-}
+-
+-#define OPT_IN_MAX_DWORDS 16
+-int xe_guc_opt_in_features_enable(struct xe_guc *guc)
+-{
+-	struct xe_device *xe = guc_to_xe(guc);
+-	CLASS(xe_guc_buf, buf)(&guc->buf, OPT_IN_MAX_DWORDS);
+-	u32 count = 0;
+-	u32 *klvs;
+-	int ret;
+-
+-	if (!xe_guc_buf_is_valid(buf))
+-		return -ENOBUFS;
+-
+-	klvs = xe_guc_buf_cpu_ptr(buf);
+-
+-	/*
+-	 * The extra CAT error type opt-in was added in GuC v70.17.0, which maps
+-	 * to compatibility version v1.7.0.
+-	 * Note that the GuC allows enabling this KLV even on platforms that do
+-	 * not support the extra type; in such case the returned type variable
+-	 * will be set to a known invalid value which we can check against.
+-	 */
+-	if (GUC_SUBMIT_VER(guc) >= MAKE_GUC_VER(1, 7, 0))
+-		klvs[count++] = PREP_GUC_KLV_TAG(OPT_IN_FEATURE_EXT_CAT_ERR_TYPE);
+-
+-	if (count) {
+-		xe_assert(xe, count <= OPT_IN_MAX_DWORDS);
+-
+-		ret = __guc_opt_in_features_enable(guc, xe_guc_buf_flush(buf), count);
+-		if (ret < 0) {
+-			xe_gt_err(guc_to_gt(guc),
+-				  "failed to enable GuC opt-in features: %pe\n",
+-				  ERR_PTR(ret));
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ static void guc_fini_hw(void *arg)
+ {
+ 	struct xe_guc *guc = arg;
+@@ -815,17 +763,15 @@ int xe_guc_post_load_init(struct xe_guc *guc)
+ 
+ 	xe_guc_ads_populate_post_load(&guc->ads);
+ 
+-	ret = xe_guc_opt_in_features_enable(guc);
+-	if (ret)
+-		return ret;
+-
+ 	if (xe_guc_g2g_wanted(guc_to_xe(guc))) {
+ 		ret = guc_g2g_start(guc);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	return xe_guc_submit_enable(guc);
++	guc->submission_state.enabled = true;
++
++	return 0;
+ }
+ 
+ int xe_guc_reset(struct xe_guc *guc)
+@@ -1519,7 +1465,7 @@ void xe_guc_sanitize(struct xe_guc *guc)
+ {
+ 	xe_uc_fw_sanitize(&guc->fw);
+ 	xe_guc_ct_disable(&guc->ct);
+-	xe_guc_submit_disable(guc);
++	guc->submission_state.enabled = false;
+ }
+ 
+ int xe_guc_reset_prepare(struct xe_guc *guc)
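
Both removals above (the opt-in feature enable here and the render/compute yield policy in xe_guc_submit.c below) built their H2G payloads as GuC KLV streams: each entry packs a 16-bit key and a 16-bit value length (in dwords) into one header dword, followed by the value dwords. Below is a minimal standalone sketch of that packing, reusing the 0x1001 yield key and the 80 ms / 20 % values from the removed code; the bit layout follows the xe KLV helper macros but should be read as illustrative, not as the authoritative ABI.

/* Minimal sketch of GuC KLV (key/length/value) packing: key in bits
 * 31:16, value length in dwords in bits 15:0, value dwords following.
 * The key and values are the removed yield policy's; everything else
 * is illustrative. */
#include <stdint.h>
#include <stdio.h>

static uint32_t *emit_klv(uint32_t *p, uint16_t key,
			  const uint32_t *vals, uint16_t n_dwords)
{
	*p++ = ((uint32_t)key << 16) | n_dwords;	/* header dword */
	for (uint16_t i = 0; i < n_dwords; i++)
		*p++ = vals[i];				/* value dwords */
	return p;
}

int main(void)
{
	uint32_t buf[8];
	uint32_t yield[2] = { 80 /* duration, ms */, 20 /* ratio, % */ };
	uint32_t *end = emit_klv(buf, 0x1001, yield, 2);

	printf("emitted %td dwords, header %#010x\n",
	       end - buf, (unsigned int)buf[0]);
	return 0;
}

The removed xe code bounded such streams with OPT_IN_MAX_DWORDS / SCHEDULING_POLICY_MAX_DWORDS asserts before handing the buffer to xe_guc_ct_send_block().
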
+diff --git a/drivers/gpu/drm/xe/xe_guc.h b/drivers/gpu/drm/xe/xe_guc.h
+index 4a66575f017d2d..58338be4455856 100644
+--- a/drivers/gpu/drm/xe/xe_guc.h
++++ b/drivers/gpu/drm/xe/xe_guc.h
+@@ -33,7 +33,6 @@ int xe_guc_reset(struct xe_guc *guc);
+ int xe_guc_upload(struct xe_guc *guc);
+ int xe_guc_min_load_for_hwconfig(struct xe_guc *guc);
+ int xe_guc_enable_communication(struct xe_guc *guc);
+-int xe_guc_opt_in_features_enable(struct xe_guc *guc);
+ int xe_guc_suspend(struct xe_guc *guc);
+ void xe_guc_notify(struct xe_guc *guc);
+ int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr);
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 18ddbb7b98a15b..45a21af1269276 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -32,7 +32,6 @@
+ #include "xe_guc_ct.h"
+ #include "xe_guc_exec_queue_types.h"
+ #include "xe_guc_id_mgr.h"
+-#include "xe_guc_klv_helpers.h"
+ #include "xe_guc_submit_types.h"
+ #include "xe_hw_engine.h"
+ #include "xe_hw_fence.h"
+@@ -317,71 +316,6 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
+ 	return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc);
+ }
+ 
+-/*
+- * Given that we want to guarantee enough RCS throughput to avoid missing
+- * frames, we set the yield policy to 20% of each 80ms interval.
+- */
+-#define RC_YIELD_DURATION	80	/* in ms */
+-#define RC_YIELD_RATIO		20	/* in percent */
+-static u32 *emit_render_compute_yield_klv(u32 *emit)
+-{
+-	*emit++ = PREP_GUC_KLV_TAG(SCHEDULING_POLICIES_RENDER_COMPUTE_YIELD);
+-	*emit++ = RC_YIELD_DURATION;
+-	*emit++ = RC_YIELD_RATIO;
+-
+-	return emit;
+-}
+-
+-#define SCHEDULING_POLICY_MAX_DWORDS 16
+-static int guc_init_global_schedule_policy(struct xe_guc *guc)
+-{
+-	u32 data[SCHEDULING_POLICY_MAX_DWORDS];
+-	u32 *emit = data;
+-	u32 count = 0;
+-	int ret;
+-
+-	if (GUC_SUBMIT_VER(guc) < MAKE_GUC_VER(1, 1, 0))
+-		return 0;
+-
+-	*emit++ = XE_GUC_ACTION_UPDATE_SCHEDULING_POLICIES_KLV;
+-
+-	if (CCS_MASK(guc_to_gt(guc)))
+-		emit = emit_render_compute_yield_klv(emit);
+-
+-	count = emit - data;
+-	if (count > 1) {
+-		xe_assert(guc_to_xe(guc), count <= SCHEDULING_POLICY_MAX_DWORDS);
+-
+-		ret = xe_guc_ct_send_block(&guc->ct, data, count);
+-		if (ret < 0) {
+-			xe_gt_err(guc_to_gt(guc),
+-				  "failed to enable GuC scheduling policies: %pe\n",
+-				  ERR_PTR(ret));
+-			return ret;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+-int xe_guc_submit_enable(struct xe_guc *guc)
+-{
+-	int ret;
+-
+-	ret = guc_init_global_schedule_policy(guc);
+-	if (ret)
+-		return ret;
+-
+-	guc->submission_state.enabled = true;
+-
+-	return 0;
+-}
+-
+-void xe_guc_submit_disable(struct xe_guc *guc)
+-{
+-	guc->submission_state.enabled = false;
+-}
+-
+ static void __release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q, u32 xa_count)
+ {
+ 	int i;
+@@ -2154,16 +2088,12 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	struct xe_gt *gt = guc_to_gt(guc);
+ 	struct xe_exec_queue *q;
+ 	u32 guc_id;
+-	u32 type = XE_GUC_CAT_ERR_TYPE_INVALID;
+ 
+-	if (unlikely(!len || len > 2))
++	if (unlikely(len < 1))
+ 		return -EPROTO;
+ 
+ 	guc_id = msg[0];
+ 
+-	if (len == 2)
+-		type = msg[1];
+-
+ 	if (guc_id == GUC_ID_UNKNOWN) {
+ 		/*
+ 		 * GuC uses GUC_ID_UNKNOWN if it can not map the CAT fault to any PF/VF
+@@ -2177,19 +2107,8 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
+ 	if (unlikely(!q))
+ 		return -EPROTO;
+ 
+-	/*
+-	 * The type is HW-defined and changes based on platform, so we don't
+-	 * decode it in the kernel and only check if it is valid.
+-	 * See bspec 54047 and 72187 for details.
+-	 */
+-	if (type != XE_GUC_CAT_ERR_TYPE_INVALID)
+-		xe_gt_dbg(gt,
+-			  "Engine memory CAT error [%u]: class=%s, logical_mask: 0x%x, guc_id=%d",
+-			  type, xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+-	else
+-		xe_gt_dbg(gt,
+-			  "Engine memory CAT error: class=%s, logical_mask: 0x%x, guc_id=%d",
+-			  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
++	xe_gt_dbg(gt, "Engine memory cat error: engine_class=%s, logical_mask: 0x%x, guc_id=%d",
++		  xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id);
+ 
+ 	trace_xe_exec_queue_memory_cat_error(q);
+ 
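
After this rework the CAT-error G2H handler performs only minimal validation: at least one dword (the guc_id) must be present, and the optional error-type dword that newer GuC firmware may append is no longer decoded. A standalone sketch of the resulting shape, with invented names:

/* Standalone sketch of the trimmed G2H validation: only the guc_id
 * dword is required; extra payload dwords are tolerated and ignored.
 * Names are illustrative, not the xe ones. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

static int handle_cat_error(const uint32_t *msg, uint32_t len)
{
	if (len < 1)
		return -EPROTO;			/* need at least the guc_id */

	printf("engine memory cat error: guc_id=%u\n", msg[0]);
	return 0;
}

int main(void)
{
	const uint32_t msg[2] = { 42, 7 };	/* trailing dword ignored */
	return handle_cat_error(msg, 2);
}
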
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
+index 0d126b807c1041..9b71a986c6ca69 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.h
++++ b/drivers/gpu/drm/xe/xe_guc_submit.h
+@@ -13,8 +13,6 @@ struct xe_exec_queue;
+ struct xe_guc;
+ 
+ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids);
+-int xe_guc_submit_enable(struct xe_guc *guc);
+-void xe_guc_submit_disable(struct xe_guc *guc);
+ 
+ int xe_guc_submit_reset_prepare(struct xe_guc *guc);
+ void xe_guc_submit_reset_wait(struct xe_guc *guc);
+diff --git a/drivers/gpu/drm/xe/xe_uc.c b/drivers/gpu/drm/xe/xe_uc.c
+index 5c45b0f072a4c2..3a8751a8b92dde 100644
+--- a/drivers/gpu/drm/xe/xe_uc.c
++++ b/drivers/gpu/drm/xe/xe_uc.c
+@@ -165,10 +165,6 @@ static int vf_uc_init_hw(struct xe_uc *uc)
+ 
+ 	uc->guc.submission_state.enabled = true;
+ 
+-	err = xe_guc_opt_in_features_enable(&uc->guc);
+-	if (err)
+-		return err;
+-
+ 	err = xe_gt_record_default_lrcs(uc_to_gt(uc));
+ 	if (err)
+ 		return err;
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index 3438d392920fad..8dae9a77668536 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -39,8 +39,12 @@ int amd_sfh_get_report(struct hid_device *hid, int report_id, int report_type)
+ 	struct amdtp_hid_data *hid_data = hid->driver_data;
+ 	struct amdtp_cl_data *cli_data = hid_data->cli_data;
+ 	struct request_list *req_list = &cli_data->req_list;
++	struct amd_input_data *in_data = cli_data->in_data;
++	struct amd_mp2_dev *mp2;
+ 	int i;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	for (i = 0; i < cli_data->num_hid_devices; i++) {
+ 		if (cli_data->hid_sensor_hubs[i] == hid) {
+ 			struct request_list *new = kzalloc(sizeof(*new), GFP_KERNEL);
+@@ -75,6 +79,8 @@ void amd_sfh_work(struct work_struct *work)
+ 	u8 report_id, node_type;
+ 	u8 report_size = 0;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	req_node = list_last_entry(&req_list->list, struct request_list, list);
+ 	list_del(&req_node->list);
+ 	current_index = req_node->current_index;
+@@ -83,7 +89,6 @@ void amd_sfh_work(struct work_struct *work)
+ 	node_type = req_node->report_type;
+ 	kfree(req_node);
+ 
+-	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
+ 	mp2_ops = mp2->mp2_ops;
+ 	if (node_type == HID_FEATURE_REPORT) {
+ 		report_size = mp2_ops->get_feat_rep(sensor_index, report_id,
+@@ -107,6 +112,8 @@ void amd_sfh_work(struct work_struct *work)
+ 	cli_data->cur_hid_dev = current_index;
+ 	cli_data->sensor_requested_cnt[current_index] = 0;
+ 	amdtp_hid_wakeup(cli_data->hid_sensor_hubs[current_index]);
++	if (!list_empty(&req_list->list))
++		schedule_delayed_work(&cli_data->work, 0);
+ }
+ 
+ void amd_sfh_work_buffer(struct work_struct *work)
+@@ -117,9 +124,10 @@ void amd_sfh_work_buffer(struct work_struct *work)
+ 	u8 report_size;
+ 	int i;
+ 
++	mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
++	guard(mutex)(&mp2->lock);
+ 	for (i = 0; i < cli_data->num_hid_devices; i++) {
+ 		if (cli_data->sensor_sts[i] == SENSOR_ENABLED) {
+-			mp2 = container_of(in_data, struct amd_mp2_dev, in_data);
+ 			report_size = mp2->mp2_ops->get_in_rep(i, cli_data->sensor_idx[i],
+ 							       cli_data->report_id[i], in_data);
+ 			hid_input_report(cli_data->hid_sensor_hubs[i], HID_INPUT_REPORT,
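
These amd_sfh hunks serialize the request list and report processing with guard(mutex)(&mp2->lock), the scope-based lock guard from <linux/cleanup.h>: the mutex is acquired at the declaration and released automatically when the enclosing scope exits, so none of these functions need explicit unlock paths, and early returns are covered for free. The probe-side hunk further down pairs with this by registering the mutex via devm_mutex_init(). A rough userspace approximation of the guard, built on the same compiler cleanup attribute the kernel macro uses:

/* Userspace approximation of guard(mutex) from <linux/cleanup.h>; the
 * kernel macro is more general, this only shows the unlock-at-scope-exit
 * mechanism via the cleanup attribute (GCC/Clang). One guard per scope
 * in this sketch. */
#include <pthread.h>
#include <stdio.h>

static void unlock_cleanup(pthread_mutex_t **m) { pthread_mutex_unlock(*m); }

#define GUARD_MUTEX(m) \
	pthread_mutex_t *guard_ __attribute__((cleanup(unlock_cleanup))) = \
		(pthread_mutex_lock(m), (m))

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void do_work(void)
{
	GUARD_MUTEX(&lock);		/* locked from here ... */
	puts("inside critical section");
}					/* ... unlocked automatically here */

int main(void)
{
	do_work();
	do_work();			/* would deadlock if the guard leaked the lock */
	return 0;
}
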
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_common.h b/drivers/hid/amd-sfh-hid/amd_sfh_common.h
+index f44a3bb2fbd4fe..78f830c133e5cd 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_common.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_common.h
+@@ -10,6 +10,7 @@
+ #ifndef AMD_SFH_COMMON_H
+ #define AMD_SFH_COMMON_H
+ 
++#include <linux/mutex.h>
+ #include <linux/pci.h>
+ #include "amd_sfh_hid.h"
+ 
+@@ -59,6 +60,8 @@ struct amd_mp2_dev {
+ 	u32 mp2_acs;
+ 	struct sfh_dev_status dev_en;
+ 	struct work_struct work;
++	/* lock to protect mp2 data */
++	struct mutex lock;
+ 	u8 init_done;
+ 	u8 rver;
+ };
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 1c1fd63330c939..9a669c18a132fb 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -462,6 +462,10 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (!privdata->cl_data)
+ 		return -ENOMEM;
+ 
++	rc = devm_mutex_init(&pdev->dev, &privdata->lock);
++	if (rc)
++		return rc;
++
+ 	privdata->sfh1_1_ops = (const struct amd_sfh1_1_ops *)id->driver_data;
+ 	if (privdata->sfh1_1_ops) {
+ 		if (boot_cpu_data.x86 >= 0x1A)
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index d27dcfb2b9e4e1..8db9d4e7c3b0b2 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -974,7 +974,10 @@ static int asus_input_mapping(struct hid_device *hdev,
+ 		case 0xc4: asus_map_key_clear(KEY_KBDILLUMUP);		break;
+ 		case 0xc5: asus_map_key_clear(KEY_KBDILLUMDOWN);		break;
+ 		case 0xc7: asus_map_key_clear(KEY_KBDILLUMTOGGLE);	break;
++		case 0x4e: asus_map_key_clear(KEY_FN_ESC);		break;
++		case 0x7e: asus_map_key_clear(KEY_EMOJI_PICKER);	break;
+ 
++		case 0x8b: asus_map_key_clear(KEY_PROG1);	break; /* ProArt Creator Hub key */
+ 		case 0x6b: asus_map_key_clear(KEY_F21);		break; /* ASUS touchpad toggle */
+ 		case 0x38: asus_map_key_clear(KEY_PROG1);	break; /* ROG key */
+ 		case 0xba: asus_map_key_clear(KEY_PROG2);	break; /* Fn+C ASUS Splendid */
+diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
+index 234fa82eab0795..b5f2b6356f512a 100644
+--- a/drivers/hid/hid-cp2112.c
++++ b/drivers/hid/hid-cp2112.c
+@@ -229,10 +229,12 @@ static int cp2112_gpio_set_unlocked(struct cp2112_device *dev,
+ 	ret = hid_hw_raw_request(hdev, CP2112_GPIO_SET, buf,
+ 				 CP2112_GPIO_SET_LENGTH, HID_FEATURE_REPORT,
+ 				 HID_REQ_SET_REPORT);
+-	if (ret < 0)
++	if (ret != CP2112_GPIO_SET_LENGTH) {
+ 		hid_err(hdev, "error setting GPIO values: %d\n", ret);
++		return ret < 0 ? ret : -EIO;
++	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int cp2112_gpio_set(struct gpio_chip *chip, unsigned int offset,
+@@ -309,9 +311,7 @@ static int cp2112_gpio_direction_output(struct gpio_chip *chip,
+ 	 * Set gpio value when output direction is already set,
+ 	 * as specified in AN495, Rev. 0.2, cpt. 4.4
+ 	 */
+-	cp2112_gpio_set_unlocked(dev, offset, value);
+-
+-	return 0;
++	return cp2112_gpio_set_unlocked(dev, offset, value);
+ }
+ 
+ static int cp2112_hid_get(struct hid_device *hdev, unsigned char report_number,
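
hid_hw_raw_request() returns the number of bytes transferred on success, so the old "ret < 0" test in cp2112_gpio_set_unlocked() silently accepted short writes. The rewritten helper treats anything other than the full report length as a failure and maps a non-negative short count to -EIO; the direction_output path then propagates that result instead of discarding it. The shape of the check, as a standalone sketch:

/* Standalone sketch of the short-transfer check: a transfer API that
 * returns bytes written on success, where anything short of the full
 * length must become an error (-EIO if no errno was reported). */
#include <errno.h>
#include <stdio.h>

#define REPORT_LEN 5

static int raw_request(void) { return 3; }	/* pretend: short write */

static int gpio_set(void)
{
	int ret = raw_request();

	if (ret != REPORT_LEN) {
		fprintf(stderr, "error setting GPIO values: %d\n", ret);
		return ret < 0 ? ret : -EIO;
	}
	return 0;
}

int main(void)
{
	printf("gpio_set() = %d\n", gpio_set());	/* -5 (-EIO) on Linux */
	return 0;
}
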
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 4c22bd2ba17080..edb8da49d91670 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -73,6 +73,7 @@ MODULE_LICENSE("GPL");
+ #define MT_QUIRK_FORCE_MULTI_INPUT	BIT(20)
+ #define MT_QUIRK_DISABLE_WAKEUP		BIT(21)
+ #define MT_QUIRK_ORIENTATION_INVERT	BIT(22)
++#define MT_QUIRK_APPLE_TOUCHBAR		BIT(23)
+ 
+ #define MT_INPUTMODE_TOUCHSCREEN	0x02
+ #define MT_INPUTMODE_TOUCHPAD		0x03
+@@ -625,6 +626,7 @@ static struct mt_application *mt_find_application(struct mt_device *td,
+ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 						      struct hid_report *report)
+ {
++	struct mt_class *cls = &td->mtclass;
+ 	struct mt_report_data *rdata;
+ 	struct hid_field *field;
+ 	int r, n;
+@@ -649,7 +651,11 @@ static struct mt_report_data *mt_allocate_report_data(struct mt_device *td,
+ 
+ 		if (field->logical == HID_DG_FINGER || td->hdev->group != HID_GROUP_MULTITOUCH_WIN_8) {
+ 			for (n = 0; n < field->report_count; n++) {
+-				if (field->usage[n].hid == HID_DG_CONTACTID) {
++				unsigned int hid = field->usage[n].hid;
++
++				if (hid == HID_DG_CONTACTID ||
++				   (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR &&
++				   hid == HID_DG_TRANSDUCER_INDEX)) {
+ 					rdata->is_mt_collection = true;
+ 					break;
+ 				}
+@@ -821,12 +827,31 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 
+ 			MT_STORE_FIELD(confidence_state);
+ 			return 1;
++		case HID_DG_TOUCH:
++			/*
++			 * Legacy devices use TIPSWITCH and not TOUCH.
++			 * One special case here is the Apple Touch Bars.
++			 * In these devices, the tip state is contained in
++			 * fields with the HID_DG_TOUCH usage.
++			 * Let's just ignore this field for other devices.
++			 */
++			if (!(cls->quirks & MT_QUIRK_APPLE_TOUCHBAR))
++				return -1;
++			fallthrough;
+ 		case HID_DG_TIPSWITCH:
+ 			if (field->application != HID_GD_SYSTEM_MULTIAXIS)
+ 				input_set_capability(hi->input,
+ 						     EV_KEY, BTN_TOUCH);
+ 			MT_STORE_FIELD(tip_state);
+ 			return 1;
++		case HID_DG_TRANSDUCER_INDEX:
++			/*
++			 * Contact ID in case of Apple Touch Bars is contained
++			 * in fields with HID_DG_TRANSDUCER_INDEX usage.
++			 */
++			if (!(cls->quirks & MT_QUIRK_APPLE_TOUCHBAR))
++				return 0;
++			fallthrough;
+ 		case HID_DG_CONTACTID:
+ 			MT_STORE_FIELD(contactid);
+ 			app->touches_by_report++;
+@@ -883,10 +908,6 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		case HID_DG_CONTACTMAX:
+ 			/* contact max are global to the report */
+ 			return -1;
+-		case HID_DG_TOUCH:
+-			/* Legacy devices use TIPSWITCH and not TOUCH.
+-			 * Let's just ignore this field. */
+-			return -1;
+ 		}
+ 		/* let hid-input decide for the others */
+ 		return 0;
+@@ -1314,6 +1335,13 @@ static int mt_touch_input_configured(struct hid_device *hdev,
+ 	struct input_dev *input = hi->input;
+ 	int ret;
+ 
++	/*
++	 * HID_DG_CONTACTMAX field is not present on Apple Touch Bars,
++	 * but the maximum contact count is greater than the default.
++	 */
++	if (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR && cls->maxcontacts)
++		td->maxcontacts = cls->maxcontacts;
++
+ 	if (!td->maxcontacts)
+ 		td->maxcontacts = MT_DEFAULT_MAXCONTACT;
+ 
+@@ -1321,6 +1349,13 @@ static int mt_touch_input_configured(struct hid_device *hdev,
+ 	if (td->serial_maybe)
+ 		mt_post_parse_default_settings(td, app);
+ 
++	/*
++	 * The application for Apple Touch Bars is HID_DG_TOUCHPAD,
++	 * but these devices are direct.
++	 */
++	if (cls->quirks & MT_QUIRK_APPLE_TOUCHBAR)
++		app->mt_flags |= INPUT_MT_DIRECT;
++
+ 	if (cls->is_indirect)
+ 		app->mt_flags |= INPUT_MT_POINTER;
+ 
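
The Touch Bar support above hinges on quirk-gated usage remapping: these devices report the contact identifier through HID_DG_TRANSDUCER_INDEX and the tip state through HID_DG_TOUCH, so both usages are folded into the standard CONTACTID/TIPSWITCH handling only when MT_QUIRK_APPLE_TOUCHBAR is set. A standalone sketch of the contact-ID decision; the usage codes follow include/linux/hid.h and the quirk bit is the BIT(23) value from this patch:

/* Sketch of the quirk-gated contact-ID test added above. */
#include <stdbool.h>
#include <stdio.h>

#define MT_QUIRK_APPLE_TOUCHBAR	(1u << 23)
#define HID_DG_CONTACTID	0x000d0051u
#define HID_DG_TRANSDUCER_INDEX	0x000d0038u

static bool is_contact_id(unsigned int usage, unsigned int quirks)
{
	return usage == HID_DG_CONTACTID ||
	       ((quirks & MT_QUIRK_APPLE_TOUCHBAR) &&
		usage == HID_DG_TRANSDUCER_INDEX);
}

int main(void)
{
	printf("%d\n", is_contact_id(HID_DG_TRANSDUCER_INDEX,
				     MT_QUIRK_APPLE_TOUCHBAR));	/* 1 */
	printf("%d\n", is_contact_id(HID_DG_TRANSDUCER_INDEX, 0));	/* 0 */
	return 0;
}
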
+diff --git a/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c b/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
+index d4f89f44c3b4d9..715480ef30cefa 100644
+--- a/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
++++ b/drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
+@@ -961,6 +961,8 @@ static const struct pci_device_id quickspi_pci_tbl[] = {
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_H_DEVICE_ID_SPI_PORT2, &ptl), },
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT1, &ptl), },
+ 	{PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT2, &ptl), },
++	{PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT1, &ptl), },
++	{PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT2, &ptl), },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(pci, quickspi_pci_tbl);
+diff --git a/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h b/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
+index 6fdf674b21c5a6..f3532d866749ca 100644
+--- a/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
++++ b/drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
+@@ -19,6 +19,8 @@
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_H_DEVICE_ID_SPI_PORT2	0xE34B
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT1	0xE449
+ #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT2	0xE44B
++#define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT1 	0x4D49
++#define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT2 	0x4D4B
+ 
+ /* HIDSPI special ACPI parameters DSM methods */
+ #define ACPI_QUICKSPI_REVISION_NUM			2
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 879719e91df2a5..c1262df02cdb2e 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -101,7 +101,7 @@ static int bt1_i2c_request_regs(struct dw_i2c_dev *dev)
+ }
+ #endif
+ 
+-static int txgbe_i2c_request_regs(struct dw_i2c_dev *dev)
++static int dw_i2c_get_parent_regmap(struct dw_i2c_dev *dev)
+ {
+ 	dev->map = dev_get_regmap(dev->dev->parent, NULL);
+ 	if (!dev->map)
+@@ -123,12 +123,15 @@ static int dw_i2c_plat_request_regs(struct dw_i2c_dev *dev)
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+ 	int ret;
+ 
++	if (device_is_compatible(dev->dev, "intel,xe-i2c"))
++		return dw_i2c_get_parent_regmap(dev);
++
+ 	switch (dev->flags & MODEL_MASK) {
+ 	case MODEL_BAIKAL_BT1:
+ 		ret = bt1_i2c_request_regs(dev);
+ 		break;
+ 	case MODEL_WANGXUN_SP:
+-		ret = txgbe_i2c_request_regs(dev);
++		ret = dw_i2c_get_parent_regmap(dev);
+ 		break;
+ 	default:
+ 		dev->base = devm_platform_ioremap_resource(pdev, 0);
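
The designware change generalizes the old txgbe helper: dev_get_regmap() on the parent lets a child I2C block reuse register access set up by its parent device, and a device matching "intel,xe-i2c" now takes that path before the model switch is consulted, while the Wangxun model keeps using the same helper under its existing flag. A control-flow sketch with stand-in function bodies (the MFD arrangement implied for the Xe-hosted block is an assumption here):

/* Control-flow sketch of the request_regs dispatch after this change.
 * All function bodies are stand-ins, not the driver code. */
#include <stdio.h>
#include <string.h>

enum model { MODEL_DEFAULT, MODEL_BAIKAL_BT1, MODEL_WANGXUN_SP };

static int get_parent_regmap(void) { puts("regmap borrowed from parent"); return 0; }
static int bt1_request_regs(void)  { puts("Baikal-T1 system controller"); return 0; }
static int ioremap_resource(void)  { puts("own MMIO resource"); return 0; }

static int request_regs(const char *compatible, enum model m)
{
	if (!strcmp(compatible, "intel,xe-i2c"))
		return get_parent_regmap();

	switch (m) {
	case MODEL_BAIKAL_BT1: return bt1_request_regs();
	case MODEL_WANGXUN_SP: return get_parent_regmap();
	default:               return ioremap_resource();
	}
}

int main(void)
{
	return request_regs("intel,xe-i2c", MODEL_DEFAULT);
}
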
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index c369fee3356216..00727472c87381 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -233,6 +233,7 @@ static u16 get_legacy_obj_type(u16 opcode)
+ {
+ 	switch (opcode) {
+ 	case MLX5_CMD_OP_CREATE_RQ:
++	case MLX5_CMD_OP_CREATE_RMP:
+ 		return MLX5_EVENT_QUEUE_TYPE_RQ;
+ 	case MLX5_CMD_OP_CREATE_QP:
+ 		return MLX5_EVENT_QUEUE_TYPE_QP;
+diff --git a/drivers/iommu/iommufd/eventq.c b/drivers/iommu/iommufd/eventq.c
+index e373b9eec7f5f5..2afef30ce41f16 100644
+--- a/drivers/iommu/iommufd/eventq.c
++++ b/drivers/iommu/iommufd/eventq.c
+@@ -393,12 +393,12 @@ static int iommufd_eventq_init(struct iommufd_eventq *eventq, char *name,
+ 			       const struct file_operations *fops)
+ {
+ 	struct file *filep;
+-	int fdno;
+ 
+ 	spin_lock_init(&eventq->lock);
+ 	INIT_LIST_HEAD(&eventq->deliver);
+ 	init_waitqueue_head(&eventq->wait_queue);
+ 
++	/* The filep is fput() by the core code during failure */
+ 	filep = anon_inode_getfile(name, fops, eventq, O_RDWR);
+ 	if (IS_ERR(filep))
+ 		return PTR_ERR(filep);
+@@ -408,10 +408,7 @@ static int iommufd_eventq_init(struct iommufd_eventq *eventq, char *name,
+ 	eventq->filep = filep;
+ 	refcount_inc(&eventq->obj.users);
+ 
+-	fdno = get_unused_fd_flags(O_CLOEXEC);
+-	if (fdno < 0)
+-		fput(filep);
+-	return fdno;
++	return get_unused_fd_flags(O_CLOEXEC);
+ }
+ 
+ static const struct file_operations iommufd_fault_fops =
+@@ -455,7 +452,6 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ 	return 0;
+ out_put_fdno:
+ 	put_unused_fd(fdno);
+-	fput(fault->common.filep);
+ out_abort:
+ 	iommufd_object_abort_and_destroy(ucmd->ictx, &fault->common.obj);
+ 
+@@ -542,7 +538,6 @@ int iommufd_veventq_alloc(struct iommufd_ucmd *ucmd)
+ 
+ out_put_fdno:
+ 	put_unused_fd(fdno);
+-	fput(veventq->common.filep);
+ out_abort:
+ 	iommufd_object_abort_and_destroy(ucmd->ictx, &veventq->common.obj);
+ out_unlock_veventqs:
+diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
+index 3df468f64e7d9e..62a3469bbd37e7 100644
+--- a/drivers/iommu/iommufd/main.c
++++ b/drivers/iommu/iommufd/main.c
+@@ -23,6 +23,7 @@
+ #include "iommufd_test.h"
+ 
+ struct iommufd_object_ops {
++	size_t file_offset;
+ 	void (*destroy)(struct iommufd_object *obj);
+ 	void (*abort)(struct iommufd_object *obj);
+ };
+@@ -71,10 +72,30 @@ void iommufd_object_abort(struct iommufd_ctx *ictx, struct iommufd_object *obj)
+ void iommufd_object_abort_and_destroy(struct iommufd_ctx *ictx,
+ 				      struct iommufd_object *obj)
+ {
+-	if (iommufd_object_ops[obj->type].abort)
+-		iommufd_object_ops[obj->type].abort(obj);
++	const struct iommufd_object_ops *ops = &iommufd_object_ops[obj->type];
++
++	if (ops->file_offset) {
++		struct file **filep = ((void *)obj) + ops->file_offset;
++
++		/*
++		 * A file should hold a users refcount while the file is open
++		 * and put it back in its release. The file should hold a
++		 * pointer to obj in their private data. Normal fput() is
++		 * deferred to a workqueue and can get out of order with the
++		 * following kfree(obj). Using the sync version ensures the
++		 * release happens immediately. During abort we require the file
++		 * refcount is one at this point - meaning the object alloc
++		 * function cannot do anything to allow another thread to take a
++		 * refcount prior to a guaranteed success.
++		 */
++		if (*filep)
++			__fput_sync(*filep);
++	}
++
++	if (ops->abort)
++		ops->abort(obj);
+ 	else
+-		iommufd_object_ops[obj->type].destroy(obj);
++		ops->destroy(obj);
+ 	iommufd_object_abort(ictx, obj);
+ }
+ 
+@@ -493,6 +514,12 @@ void iommufd_ctx_put(struct iommufd_ctx *ictx)
+ }
+ EXPORT_SYMBOL_NS_GPL(iommufd_ctx_put, "IOMMUFD");
+ 
++#define IOMMUFD_FILE_OFFSET(_struct, _filep, _obj)                           \
++	.file_offset = (offsetof(_struct, _filep) +                          \
++			BUILD_BUG_ON_ZERO(!__same_type(                      \
++				struct file *, ((_struct *)NULL)->_filep)) + \
++			BUILD_BUG_ON_ZERO(offsetof(_struct, _obj)))
++
+ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	[IOMMUFD_OBJ_ACCESS] = {
+ 		.destroy = iommufd_access_destroy_object,
+@@ -502,6 +529,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	},
+ 	[IOMMUFD_OBJ_FAULT] = {
+ 		.destroy = iommufd_fault_destroy,
++		IOMMUFD_FILE_OFFSET(struct iommufd_fault, common.filep, common.obj),
+ 	},
+ 	[IOMMUFD_OBJ_HWPT_PAGING] = {
+ 		.destroy = iommufd_hwpt_paging_destroy,
+@@ -520,6 +548,7 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
+ 	[IOMMUFD_OBJ_VEVENTQ] = {
+ 		.destroy = iommufd_veventq_destroy,
+ 		.abort = iommufd_veventq_abort,
++		IOMMUFD_FILE_OFFSET(struct iommufd_veventq, common.filep, common.obj),
+ 	},
+ 	[IOMMUFD_OBJ_VIOMMU] = {
+ 		.destroy = iommufd_viommu_destroy,
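
The iommufd change replaces per-object fput() calls with table-driven teardown: each object type that owns a struct file * records the member's byte offset in its ops entry, and the generic abort path recovers the pointer from that offset and drops the reference synchronously (__fput_sync()) before the object is freed. The BUILD_BUG_ON_ZERO terms in IOMMUFD_FILE_OFFSET are compile-time checks that the named member really is a struct file * and that the common object header sits at offset zero, which also guarantees a real file member can never itself be at offset zero, letting 0 mean "no file". A standalone sketch of the offset trick with illustrative names:

/* Standalone sketch of the file_offset lookup: generic code finds a
 * type-specific member through a per-type offset table, without knowing
 * the concrete struct. Types and names here are illustrative. */
#include <stddef.h>
#include <stdio.h>

struct obj { int type; };		/* common header, offset 0 */

struct fault {
	struct obj common;
	void *filep;			/* stands in for struct file * */
};

struct obj_ops { size_t file_offset; };	/* 0 means "no file member" */

static const struct obj_ops ops_table[] = {
	[0] = { .file_offset = offsetof(struct fault, filep) },
};

static void abort_obj(struct obj *o)
{
	const struct obj_ops *ops = &ops_table[o->type];

	if (ops->file_offset) {
		void **filep = (void *)((char *)o + ops->file_offset);

		if (*filep)
			printf("dropping file ref %p\n", *filep);
	}
}

int main(void)
{
	struct fault f = { .common = { .type = 0 }, .filep = (void *)0x1234 };

	abort_obj(&f.common);
	return 0;
}
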
+diff --git a/drivers/mmc/host/sdhci-cadence.c b/drivers/mmc/host/sdhci-cadence.c
+index a94b297fcf2a34..60ca09780da32d 100644
+--- a/drivers/mmc/host/sdhci-cadence.c
++++ b/drivers/mmc/host/sdhci-cadence.c
+@@ -433,6 +433,13 @@ static const struct sdhci_cdns_drv_data sdhci_elba_drv_data = {
+ 	},
+ };
+ 
++static const struct sdhci_cdns_drv_data sdhci_eyeq_drv_data = {
++	.pltfm_data = {
++		.ops = &sdhci_cdns_ops,
++		.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	},
++};
++
+ static const struct sdhci_cdns_drv_data sdhci_cdns_drv_data = {
+ 	.pltfm_data = {
+ 		.ops = &sdhci_cdns_ops,
+@@ -595,6 +602,10 @@ static const struct of_device_id sdhci_cdns_match[] = {
+ 		.compatible = "amd,pensando-elba-sd4hc",
+ 		.data = &sdhci_elba_drv_data,
+ 	},
++	{
++		.compatible = "mobileye,eyeq-sd4hc",
++		.data = &sdhci_eyeq_drv_data,
++	},
+ 	{ .compatible = "cdns,sd4hc" },
+ 	{ /* sentinel */ }
+ };
+diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c
+index 2b7dd359f27b7d..8569178b66df7d 100644
+--- a/drivers/net/can/rcar/rcar_can.c
++++ b/drivers/net/can/rcar/rcar_can.c
+@@ -861,7 +861,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ {
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct rcar_can_priv *priv = netdev_priv(ndev);
+-	u16 ctlr;
+ 	int err;
+ 
+ 	if (!netif_running(ndev))
+@@ -873,12 +872,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev)
+ 		return err;
+ 	}
+ 
+-	ctlr = readw(&priv->regs->ctlr);
+-	ctlr &= ~RCAR_CAN_CTLR_SLPM;
+-	writew(ctlr, &priv->regs->ctlr);
+-	ctlr &= ~RCAR_CAN_CTLR_CANM;
+-	writew(ctlr, &priv->regs->ctlr);
+-	priv->can.state = CAN_STATE_ERROR_ACTIVE;
++	rcar_can_start(ndev);
+ 
+ 	netif_device_attach(ndev);
+ 	netif_start_queue(ndev);
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 09ae218315d73d..6441ff3b419871 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -812,6 +812,7 @@ static const struct net_device_ops hi3110_netdev_ops = {
+ 	.ndo_open = hi3110_open,
+ 	.ndo_stop = hi3110_stop,
+ 	.ndo_start_xmit = hi3110_hard_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops hi3110_ethtool_ops = {
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 6fcb301ef611d0..53bfd873de9bde 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -768,6 +768,7 @@ static const struct net_device_ops sun4ican_netdev_ops = {
+ 	.ndo_open = sun4ican_open,
+ 	.ndo_stop = sun4ican_close,
+ 	.ndo_start_xmit = sun4ican_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops sun4ican_ethtool_ops = {
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
+index db1acf6d504cf3..adc91873c083f9 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
+@@ -7,7 +7,7 @@
+  *
+  * Copyright (c) 2019 Robert Bosch Engineering and Business Solutions. All rights reserved.
+  * Copyright (c) 2020 ETAS K.K.. All rights reserved.
+- * Copyright (c) 2020-2022 Vincent Mailhol <mailhol.vincent@wanadoo.fr>
++ * Copyright (c) 2020-2025 Vincent Mailhol <mailhol@kernel.org>
+  */
+ 
+ #include <linux/unaligned.h>
+@@ -1977,6 +1977,7 @@ static const struct net_device_ops es58x_netdev_ops = {
+ 	.ndo_stop = es58x_stop,
+ 	.ndo_start_xmit = es58x_start_xmit,
+ 	.ndo_eth_ioctl = can_eth_ioctl_hwts,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops es58x_ethtool_ops = {
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 41c0a1c399bf36..1f9b915094e64d 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -761,6 +761,7 @@ static const struct net_device_ops mcba_netdev_ops = {
+ 	.ndo_open = mcba_usb_open,
+ 	.ndo_stop = mcba_usb_close,
+ 	.ndo_start_xmit = mcba_usb_start_xmit,
++	.ndo_change_mtu = can_change_mtu,
+ };
+ 
+ static const struct ethtool_ops mcba_ethtool_ops = {
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 117637b9b995b9..dd5caa1c302b99 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -111,7 +111,7 @@ void peak_usb_update_ts_now(struct peak_time_ref *time_ref, u32 ts_now)
+ 		u32 delta_ts = time_ref->ts_dev_2 - time_ref->ts_dev_1;
+ 
+ 		if (time_ref->ts_dev_2 < time_ref->ts_dev_1)
+-			delta_ts &= (1 << time_ref->adapter->ts_used_bits) - 1;
++			delta_ts &= (1ULL << time_ref->adapter->ts_used_bits) - 1;
+ 
+ 		time_ref->ts_total += delta_ts;
+ 	}
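
The one-line peak_usb change fixes an integer-promotion bug: with a plain 1, the shift is performed in 32-bit int, so ts_used_bits values of 31 or 32 produce undefined behavior instead of the intended wrap-around mask; 1ULL widens the shift to 64 bits before the result is masked back down. A standalone illustration:

/* Why the literal's width matters when building a wrap mask for an
 * n-bit hardware timestamp counter. With a plain int literal, n >= 31
 * shifts into (or past) the sign bit: undefined behavior in C. */
#include <stdint.h>
#include <stdio.h>

static uint32_t wrap_mask(unsigned int used_bits)
{
	return (uint32_t)((1ULL << used_bits) - 1);	/* safe for used_bits <= 63 */
}

int main(void)
{
	printf("%#x\n", (unsigned int)wrap_mask(24));	/* 0xffffff */
	printf("%#x\n", (unsigned int)wrap_mask(32));	/* 0xffffffff; (1 << 32) - 1 would be UB */
	return 0;
}
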
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 6eb3140d404449..84dc6e517acf94 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -685,18 +685,27 @@ static int gswip_add_single_port_br(struct gswip_priv *priv, int port, bool add)
+ 	return 0;
+ }
+ 
+-static int gswip_port_enable(struct dsa_switch *ds, int port,
+-			     struct phy_device *phydev)
++static int gswip_port_setup(struct dsa_switch *ds, int port)
+ {
+ 	struct gswip_priv *priv = ds->priv;
+ 	int err;
+ 
+ 	if (!dsa_is_cpu_port(ds, port)) {
+-		u32 mdio_phy = 0;
+-
+ 		err = gswip_add_single_port_br(priv, port, true);
+ 		if (err)
+ 			return err;
++	}
++
++	return 0;
++}
++
++static int gswip_port_enable(struct dsa_switch *ds, int port,
++			     struct phy_device *phydev)
++{
++	struct gswip_priv *priv = ds->priv;
++
++	if (!dsa_is_cpu_port(ds, port)) {
++		u32 mdio_phy = 0;
+ 
+ 		if (phydev)
+ 			mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
+@@ -1359,8 +1368,9 @@ static int gswip_port_fdb(struct dsa_switch *ds, int port,
+ 	int i;
+ 	int err;
+ 
++	/* Operation not supported on the CPU port, don't throw errors */
+ 	if (!bridge)
+-		return -EINVAL;
++		return 0;
+ 
+ 	for (i = max_ports; i < ARRAY_SIZE(priv->vlans); i++) {
+ 		if (priv->vlans[i].bridge == bridge) {
+@@ -1829,6 +1839,7 @@ static const struct phylink_mac_ops gswip_phylink_mac_ops = {
+ static const struct dsa_switch_ops gswip_xrx200_switch_ops = {
+ 	.get_tag_protocol	= gswip_get_tag_protocol,
+ 	.setup			= gswip_setup,
++	.port_setup		= gswip_port_setup,
+ 	.port_enable		= gswip_port_enable,
+ 	.port_disable		= gswip_port_disable,
+ 	.port_bridge_join	= gswip_port_bridge_join,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index d2ca90407cce76..8057350236c5ef 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -244,7 +244,7 @@ bnxt_tc_parse_pedit(struct bnxt *bp, struct bnxt_tc_actions *actions,
+ 			   offset < offset_of_ip6_daddr + 16) {
+ 			actions->nat.src_xlate = false;
+ 			idx = (offset - offset_of_ip6_daddr) / 4;
+-			actions->nat.l3.ipv6.saddr.s6_addr32[idx] = htonl(val);
++			actions->nat.l3.ipv6.daddr.s6_addr32[idx] = htonl(val);
+ 		} else {
+ 			netdev_err(bp->dev,
+ 				   "%s: IPv6_hdr: Invalid pedit field\n",
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 5f15f42070c539..e8b37dfd5cc1d6 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -131,7 +131,7 @@ static const struct fec_devinfo fec_mvf600_info = {
+ 		  FEC_QUIRK_HAS_MDIO_C45,
+ };
+ 
+-static const struct fec_devinfo fec_imx6x_info = {
++static const struct fec_devinfo fec_imx6sx_info = {
+ 	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+ 		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+ 		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+@@ -196,7 +196,7 @@ static const struct of_device_id fec_dt_ids[] = {
+ 	{ .compatible = "fsl,imx28-fec", .data = &fec_imx28_info, },
+ 	{ .compatible = "fsl,imx6q-fec", .data = &fec_imx6q_info, },
+ 	{ .compatible = "fsl,mvf600-fec", .data = &fec_mvf600_info, },
+-	{ .compatible = "fsl,imx6sx-fec", .data = &fec_imx6x_info, },
++	{ .compatible = "fsl,imx6sx-fec", .data = &fec_imx6sx_info, },
+ 	{ .compatible = "fsl,imx6ul-fec", .data = &fec_imx6ul_info, },
+ 	{ .compatible = "fsl,imx8mq-fec", .data = &fec_imx8mq_info, },
+ 	{ .compatible = "fsl,imx8qm-fec", .data = &fec_imx8qm_info, },
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index 7c600d6e66ba7c..fa9bb6f2786847 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -1277,7 +1277,8 @@ struct i40e_mac_filter *i40e_add_mac_filter(struct i40e_vsi *vsi,
+ 					    const u8 *macaddr);
+ int i40e_del_mac_filter(struct i40e_vsi *vsi, const u8 *macaddr);
+ bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi);
+-int i40e_count_filters(struct i40e_vsi *vsi);
++int i40e_count_all_filters(struct i40e_vsi *vsi);
++int i40e_count_active_filters(struct i40e_vsi *vsi);
+ struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, const u8 *macaddr);
+ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi);
+ static inline bool i40e_is_sw_dcb(struct i40e_pf *pf)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 26dcdceae741e4..ec1e3fffb59269 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -1241,12 +1241,30 @@ void i40e_update_stats(struct i40e_vsi *vsi)
+ }
+ 
+ /**
+- * i40e_count_filters - counts VSI mac filters
++ * i40e_count_all_filters - counts VSI MAC filters
+  * @vsi: the VSI to be searched
+  *
+- * Returns count of mac filters
+- **/
+-int i40e_count_filters(struct i40e_vsi *vsi)
++ * Return: count of MAC filters in any state.
++ */
++int i40e_count_all_filters(struct i40e_vsi *vsi)
++{
++	struct i40e_mac_filter *f;
++	struct hlist_node *h;
++	int bkt, cnt = 0;
++
++	hash_for_each_safe(vsi->mac_filter_hash, bkt, h, f, hlist)
++		cnt++;
++
++	return cnt;
++}
++
++/**
++ * i40e_count_active_filters - counts VSI MAC filters
++ * @vsi: the VSI to be searched
++ *
++ * Return: count of active MAC filters.
++ */
++int i40e_count_active_filters(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_mac_filter *f;
+ 	struct hlist_node *h;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 7ccfc1191ae56f..59881846e8e3fa 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -448,7 +448,7 @@ static void i40e_config_irq_link_list(struct i40e_vf *vf, u16 vsi_id,
+ 		    (qtype << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT) |
+ 		    (pf_queue_id << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) |
+ 		    BIT(I40E_QINT_RQCTL_CAUSE_ENA_SHIFT) |
+-		    (itr_idx << I40E_QINT_RQCTL_ITR_INDX_SHIFT);
++		    FIELD_PREP(I40E_QINT_RQCTL_ITR_INDX_MASK, itr_idx);
+ 		wr32(hw, reg_idx, reg);
+ 	}
+ 
+@@ -653,6 +653,13 @@ static int i40e_config_vsi_tx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 
+ 	/* only set the required fields */
+ 	tx_ctx.base = info->dma_ring_addr / 128;
++
++	/* ring_len has to be multiple of 8 */
++	if (!IS_ALIGNED(info->ring_len, 8) ||
++	    info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) {
++		ret = -EINVAL;
++		goto error_context;
++	}
+ 	tx_ctx.qlen = info->ring_len;
+ 	tx_ctx.rdylist = le16_to_cpu(vsi->info.qs_handle[0]);
+ 	tx_ctx.rdylist_act = 0;
+@@ -716,6 +723,13 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
+ 
+ 	/* only set the required fields */
+ 	rx_ctx.base = info->dma_ring_addr / 128;
++
++	/* ring_len has to be multiple of 32 */
++	if (!IS_ALIGNED(info->ring_len, 32) ||
++	    info->ring_len > I40E_MAX_NUM_DESCRIPTORS_XL710) {
++		ret = -EINVAL;
++		goto error_param;
++	}
+ 	rx_ctx.qlen = info->ring_len;
+ 
+ 	if (info->splithdr_enabled) {
+@@ -1453,6 +1467,7 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
+ 	 * functions that may still be running at this point.
+ 	 */
+ 	clear_bit(I40E_VF_STATE_INIT, &vf->vf_states);
++	clear_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states);
+ 
+ 	/* In the case of a VFLR, the HW has already reset the VF and we
+ 	 * just need to clean up, so don't hit the VFRTRIG register.
+@@ -2119,7 +2134,10 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 	size_t len = 0;
+ 	int ret;
+ 
+-	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
++	i40e_sync_vf_state(vf, I40E_VF_STATE_INIT);
++
++	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states) ||
++	    test_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states)) {
+ 		aq_ret = -EINVAL;
+ 		goto err;
+ 	}
+@@ -2222,6 +2240,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
+ 				vf->default_lan_addr.addr);
+ 	}
+ 	set_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
++	set_bit(I40E_VF_STATE_RESOURCES_LOADED, &vf->vf_states);
+ 
+ err:
+ 	/* send the response back to the VF */
+@@ -2384,7 +2403,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		}
+ 
+ 		if (vf->adq_enabled) {
+-			if (idx >= ARRAY_SIZE(vf->ch)) {
++			if (idx >= vf->num_tc) {
+ 				aq_ret = -ENODEV;
+ 				goto error_param;
+ 			}
+@@ -2405,7 +2424,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
+ 		 * to its appropriate VSIs based on TC mapping
+ 		 */
+ 		if (vf->adq_enabled) {
+-			if (idx >= ARRAY_SIZE(vf->ch)) {
++			if (idx >= vf->num_tc) {
+ 				aq_ret = -ENODEV;
+ 				goto error_param;
+ 			}
+@@ -2455,8 +2474,10 @@ static int i40e_validate_queue_map(struct i40e_vf *vf, u16 vsi_id,
+ 	u16 vsi_queue_id, queue_id;
+ 
+ 	for_each_set_bit(vsi_queue_id, &queuemap, I40E_MAX_VSI_QP) {
+-		if (vf->adq_enabled) {
+-			vsi_id = vf->ch[vsi_queue_id / I40E_MAX_VF_VSI].vsi_id;
++		u16 idx = vsi_queue_id / I40E_MAX_VF_VSI;
++
++		if (vf->adq_enabled && idx < vf->num_tc) {
++			vsi_id = vf->ch[idx].vsi_id;
+ 			queue_id = (vsi_queue_id % I40E_DEFAULT_QUEUES_PER_VF);
+ 		} else {
+ 			queue_id = vsi_queue_id;
+@@ -2844,24 +2865,6 @@ static int i40e_vc_get_stats_msg(struct i40e_vf *vf, u8 *msg)
+ 				      (u8 *)&stats, sizeof(stats));
+ }
+ 
+-/**
+- * i40e_can_vf_change_mac
+- * @vf: pointer to the VF info
+- *
+- * Return true if the VF is allowed to change its MAC filters, false otherwise
+- */
+-static bool i40e_can_vf_change_mac(struct i40e_vf *vf)
+-{
+-	/* If the VF MAC address has been set administratively (via the
+-	 * ndo_set_vf_mac command), then deny permission to the VF to
+-	 * add/delete unicast MAC addresses, unless the VF is trusted
+-	 */
+-	if (vf->pf_set_mac && !vf->trusted)
+-		return false;
+-
+-	return true;
+-}
+-
+ #define I40E_MAX_MACVLAN_PER_HW 3072
+ #define I40E_MAX_MACVLAN_PER_PF(num_ports) (I40E_MAX_MACVLAN_PER_HW /	\
+ 	(num_ports))
+@@ -2900,8 +2903,10 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 	struct i40e_pf *pf = vf->pf;
+ 	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
+ 	struct i40e_hw *hw = &pf->hw;
+-	int mac2add_cnt = 0;
+-	int i;
++	int i, mac_add_max, mac_add_cnt = 0;
++	bool vf_trusted;
++
++	vf_trusted = test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+ 
+ 	for (i = 0; i < al->num_elements; i++) {
+ 		struct i40e_mac_filter *f;
+@@ -2921,9 +2926,8 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		 * The VF may request to set the MAC address filter already
+ 		 * assigned to it so do not return an error in that case.
+ 		 */
+-		if (!i40e_can_vf_change_mac(vf) &&
+-		    !is_multicast_ether_addr(addr) &&
+-		    !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
++		if (!vf_trusted && !is_multicast_ether_addr(addr) &&
++		    vf->pf_set_mac && !ether_addr_equal(addr, vf->default_lan_addr.addr)) {
+ 			dev_err(&pf->pdev->dev,
+ 				"VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n");
+ 			return -EPERM;
+@@ -2932,29 +2936,33 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf,
+ 		/*count filters that really will be added*/
+ 		f = i40e_find_mac(vsi, addr);
+ 		if (!f)
+-			++mac2add_cnt;
++			++mac_add_cnt;
+ 	}
+ 
+ 	/* If this VF is not privileged, then we can't add more than a limited
+-	 * number of addresses. Check to make sure that the additions do not
+-	 * push us over the limit.
+-	 */
+-	if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+-		if ((i40e_count_filters(vsi) + mac2add_cnt) >
+-		    I40E_VC_MAX_MAC_ADDR_PER_VF) {
+-			dev_err(&pf->pdev->dev,
+-				"Cannot add more MAC addresses, VF is not trusted, switch the VF to trusted to add more functionality\n");
+-			return -EPERM;
+-		}
+-	/* If this VF is trusted, it can use more resources than untrusted.
++	 * number of addresses.
++	 *
++	 * If this VF is trusted, it can use more resources than untrusted.
+ 	 * However to ensure that every trusted VF has appropriate number of
+ 	 * resources, divide whole pool of resources per port and then across
+ 	 * all VFs.
+ 	 */
+-	} else {
+-		if ((i40e_count_filters(vsi) + mac2add_cnt) >
+-		    I40E_VC_MAX_MACVLAN_PER_TRUSTED_VF(pf->num_alloc_vfs,
+-						       hw->num_ports)) {
++	if (!vf_trusted)
++		mac_add_max = I40E_VC_MAX_MAC_ADDR_PER_VF;
++	else
++		mac_add_max = I40E_VC_MAX_MACVLAN_PER_TRUSTED_VF(pf->num_alloc_vfs, hw->num_ports);
++
++	/* VF can replace all its filters in one step, in this case mac_add_max
++	 * will be added as active and another mac_add_max will be in
++	 * a to-be-removed state. Account for that.
++	 */
++	if ((i40e_count_active_filters(vsi) + mac_add_cnt) > mac_add_max ||
++	    (i40e_count_all_filters(vsi) + mac_add_cnt) > 2 * mac_add_max) {
++		if (!vf_trusted) {
++			dev_err(&pf->pdev->dev,
++				"Cannot add more MAC addresses, VF is not trusted, switch the VF to trusted to add more functionality\n");
++			return -EPERM;
++		} else {
+ 			dev_err(&pf->pdev->dev,
+ 				"Cannot add more MAC addresses, trusted VF exhausted its resources\n");
+ 			return -EPERM;
+@@ -3589,7 +3597,7 @@ static int i40e_validate_cloud_filter(struct i40e_vf *vf,
+ 
+ 	/* action_meta is TC number here to which the filter is applied */
+ 	if (!tc_filter->action_meta ||
+-	    tc_filter->action_meta > vf->num_tc) {
++	    tc_filter->action_meta >= vf->num_tc) {
+ 		dev_info(&pf->pdev->dev, "VF %d: Invalid TC number %u\n",
+ 			 vf->vf_id, tc_filter->action_meta);
+ 		goto err;
+@@ -3887,6 +3895,8 @@ static int i40e_vc_del_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 				       aq_ret);
+ }
+ 
++#define I40E_MAX_VF_CLOUD_FILTER 0xFF00
++
+ /**
+  * i40e_vc_add_cloud_filter
+  * @vf: pointer to the VF info
+@@ -3926,6 +3936,14 @@ static int i40e_vc_add_cloud_filter(struct i40e_vf *vf, u8 *msg)
+ 		goto err_out;
+ 	}
+ 
++	if (vf->num_cloud_filters >= I40E_MAX_VF_CLOUD_FILTER) {
++		dev_warn(&pf->pdev->dev,
++			 "VF %d: Max number of filters reached, can't apply cloud filter\n",
++			 vf->vf_id);
++		aq_ret = -ENOSPC;
++		goto err_out;
++	}
++
+ 	cfilter = kzalloc(sizeof(*cfilter), GFP_KERNEL);
+ 	if (!cfilter) {
+ 		aq_ret = -ENOMEM;
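
Several of these i40e changes harden the PF against malformed VF requests; in particular, the queue-config paths now bound-check the VF-supplied ring_len before programming the queue context (a multiple of 8 for TX, a multiple of 32 for RX, and no larger than the hardware descriptor limit). The shape of that check, as a standalone sketch with an illustrative cap:

/* Sketch of the ring-length validation added for VF-supplied queue
 * configs: reject lengths that are not a multiple of the hardware
 * stride or that exceed the descriptor cap. Cap value is illustrative. */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_NUM_DESCRIPTORS 8160	/* illustrative cap */

/* power-of-two alignment check, like the kernel's IS_ALIGNED() */
static bool is_aligned(uint32_t v, uint32_t a) { return (v & (a - 1)) == 0; }

static int check_ring_len(uint32_t ring_len, uint32_t stride)
{
	if (!is_aligned(ring_len, stride) || ring_len > MAX_NUM_DESCRIPTORS)
		return -EINVAL;		/* untrusted VF input: fail the config */
	return 0;
}

int main(void)
{
	printf("%d\n", check_ring_len(512, 8));		/* 0 */
	printf("%d\n", check_ring_len(100, 32));	/* -EINVAL: not 32-aligned */
	return 0;
}
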
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 5cf74f16f433f3..f558b45725c816 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -41,7 +41,8 @@ enum i40e_vf_states {
+ 	I40E_VF_STATE_MC_PROMISC,
+ 	I40E_VF_STATE_UC_PROMISC,
+ 	I40E_VF_STATE_PRE_ENABLE,
+-	I40E_VF_STATE_RESETTING
++	I40E_VF_STATE_RESETTING,
++	I40E_VF_STATE_RESOURCES_LOADED,
+ };
+ 
+ /* VF capabilities */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 442305463cc0ae..21161711c579fb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -21,8 +21,7 @@
+ #include "rvu.h"
+ #include "lmac_common.h"
+ 
+-#define DRV_NAME	"Marvell-CGX/RPM"
+-#define DRV_STRING      "Marvell CGX/RPM Driver"
++#define DRV_NAME	"Marvell-CGX-RPM"
+ 
+ #define CGX_RX_STAT_GLOBAL_INDEX	9
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 5f80b23c5335cd..26a08d2cfbb1b6 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -1326,7 +1326,6 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
+ 
+ free_leaf:
+ 	otx2_tc_del_from_flow_list(flow_cfg, new_node);
+-	kfree_rcu(new_node, rcu);
+ 	if (new_node->is_act_police) {
+ 		mutex_lock(&nic->mbox.lock);
+ 
+@@ -1346,6 +1345,7 @@ static int otx2_tc_add_flow(struct otx2_nic *nic,
+ 
+ 		mutex_unlock(&nic->mbox.lock);
+ 	}
++	kfree_rcu(new_node, rcu);
+ 
+ 	return rc;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index 19664fa7f21713..46d6dd05fb814d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -1466,6 +1466,7 @@ static void fec_set_block_stats(struct mlx5e_priv *priv,
+ 	case MLX5E_FEC_RS_528_514:
+ 	case MLX5E_FEC_RS_544_514:
+ 	case MLX5E_FEC_LLRS_272_257_1:
++	case MLX5E_FEC_RS_544_514_INTERLEAVED_QUAD:
+ 		fec_set_rs_stats(fec_stats, out);
+ 		return;
+ 	case MLX5E_FEC_FIRECODE:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 3b57ef6b3de383..93fb4e861b6916 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -663,7 +663,7 @@ static void del_sw_hw_rule(struct fs_node *node)
+ 			BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) |
+ 			BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS);
+ 		fte->act_dests.action.action &= ~MLX5_FLOW_CONTEXT_ACTION_COUNT;
+-		mlx5_fc_local_destroy(rule->dest_attr.counter);
++		mlx5_fc_local_put(rule->dest_attr.counter);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+index 500826229b0beb..e6a95b310b5554 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+@@ -343,6 +343,7 @@ struct mlx5_fc {
+ 	enum mlx5_fc_type type;
+ 	struct mlx5_fc_bulk *bulk;
+ 	struct mlx5_fc_cache cache;
++	refcount_t fc_local_refcount;
+ 	/* last{packets,bytes} are used for calculating deltas since last reading. */
+ 	u64 lastpackets;
+ 	u64 lastbytes;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+index 492775d3d193a3..83001eda38842a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
+@@ -562,17 +562,36 @@ mlx5_fc_local_create(u32 counter_id, u32 offset, u32 bulk_size)
+ 	counter->id = counter_id;
+ 	fc_bulk->base_id = counter_id - offset;
+ 	fc_bulk->fs_bulk.bulk_len = bulk_size;
++	refcount_set(&fc_bulk->hws_data.hws_action_refcount, 0);
++	mutex_init(&fc_bulk->hws_data.lock);
+ 	counter->bulk = fc_bulk;
++	refcount_set(&counter->fc_local_refcount, 1);
+ 	return counter;
+ }
+ EXPORT_SYMBOL(mlx5_fc_local_create);
+ 
+ void mlx5_fc_local_destroy(struct mlx5_fc *counter)
+ {
+-	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
+-		return;
+-
+ 	kfree(counter->bulk);
+ 	kfree(counter);
+ }
+ EXPORT_SYMBOL(mlx5_fc_local_destroy);
++
++void mlx5_fc_local_get(struct mlx5_fc *counter)
++{
++	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
++		return;
++
++	refcount_inc(&counter->fc_local_refcount);
++}
++
++void mlx5_fc_local_put(struct mlx5_fc *counter)
++{
++	if (!counter || counter->type != MLX5_FC_TYPE_LOCAL)
++		return;
++
++	if (!refcount_dec_and_test(&counter->fc_local_refcount))
++		return;
++
++	mlx5_fc_local_destroy(counter);
++}
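
The mlx5 fix converts the local flow counter to reference counting: the creator holds one reference, mlx5_fc_local_get()/mlx5_fc_local_put() bracket each additional user (the HWS counter action in fs_hws_pools.c), and the final put performs the actual destroy. Together with del_sw_hw_rule() switching from mlx5_fc_local_destroy() to mlx5_fc_local_put(), this closes the window where rule deletion could free a counter the action still referenced. The pattern in miniature:

/* Standalone sketch of the get/put lifetime scheme: the last put frees
 * the object, so no path can free it while another still holds a ref. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct counter {
	atomic_int refs;
};

static struct counter *counter_create(void)
{
	struct counter *c = malloc(sizeof(*c));

	atomic_init(&c->refs, 1);	/* creator's reference */
	return c;
}

static void counter_get(struct counter *c) { atomic_fetch_add(&c->refs, 1); }

static void counter_put(struct counter *c)
{
	if (atomic_fetch_sub(&c->refs, 1) == 1)
		free(c);		/* last reference dropped */
}

int main(void)
{
	struct counter *c = counter_create();

	counter_get(c);			/* e.g. HWS action takes a ref */
	counter_put(c);			/* rule teardown */
	counter_put(c);			/* action release frees it */
	return 0;
}
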
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+index 8e4a085f4a2ec9..fe56b59e24c59c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+@@ -1358,11 +1358,8 @@ mlx5hws_action_create_modify_header(struct mlx5hws_context *ctx,
+ }
+ 
+ struct mlx5hws_action *
+-mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+-				 size_t num_dest,
++mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx, size_t num_dest,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 bool ignore_flow_level,
+-				 u32 flow_source,
+ 				 u32 flags)
+ {
+ 	struct mlx5hws_cmd_set_fte_dest *dest_list = NULL;
+@@ -1400,7 +1397,7 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 				MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ 			dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
+ 			fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+-			fte_attr.ignore_flow_level = ignore_flow_level;
++			fte_attr.ignore_flow_level = 1;
+ 			if (dests[i].is_wire_ft)
+ 				last_dest_idx = i;
+ 			break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+index 47e3947e7b512f..6a4c4cccd64342 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+@@ -572,14 +572,12 @@ static void mlx5_fs_put_dest_action_sampler(struct mlx5_fs_hws_context *fs_ctx,
+ static struct mlx5hws_action *
+ mlx5_fs_create_action_dest_array(struct mlx5hws_context *ctx,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 u32 num_of_dests, bool ignore_flow_level,
+-				 u32 flow_source)
++				 u32 num_of_dests)
+ {
+ 	u32 flags = MLX5HWS_ACTION_FLAG_HWS_FDB | MLX5HWS_ACTION_FLAG_SHARED;
+ 
+ 	return mlx5hws_action_create_dest_array(ctx, num_of_dests, dests,
+-						ignore_flow_level,
+-						flow_source, flags);
++						flags);
+ }
+ 
+ static struct mlx5hws_action *
+@@ -1016,20 +1014,14 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
+ 		}
+ 		(*ractions)[num_actions++].action = dest_actions->dest;
+ 	} else if (num_dest_actions > 1) {
+-		u32 flow_source = fte->act_dests.flow_context.flow_source;
+-		bool ignore_flow_level;
+-
+ 		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ 		    num_fs_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ 			err = -EOPNOTSUPP;
+ 			goto free_actions;
+ 		}
+-		ignore_flow_level =
+-			!!(fte_action->flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+-		tmp_action = mlx5_fs_create_action_dest_array(ctx, dest_actions,
+-							      num_dest_actions,
+-							      ignore_flow_level,
+-							      flow_source);
++		tmp_action =
++			mlx5_fs_create_action_dest_array(ctx, dest_actions,
++							 num_dest_actions);
+ 		if (!tmp_action) {
+ 			err = -EOPNOTSUPP;
+ 			goto free_actions;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+index f1ecdba74e1f46..839d71bd42164f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws_pools.c
+@@ -407,15 +407,21 @@ struct mlx5hws_action *mlx5_fc_get_hws_action(struct mlx5hws_context *ctx,
+ {
+ 	struct mlx5_fs_hws_create_action_ctx create_ctx;
+ 	struct mlx5_fc_bulk *fc_bulk = counter->bulk;
++	struct mlx5hws_action *hws_action;
+ 
+ 	create_ctx.hws_ctx = ctx;
+ 	create_ctx.id = fc_bulk->base_id;
+ 	create_ctx.actions_type = MLX5HWS_ACTION_TYP_CTR;
+ 
+-	return mlx5_fs_get_hws_action(&fc_bulk->hws_data, &create_ctx);
++	mlx5_fc_local_get(counter);
++	hws_action = mlx5_fs_get_hws_action(&fc_bulk->hws_data, &create_ctx);
++	if (!hws_action)
++		mlx5_fc_local_put(counter);
++	return hws_action;
+ }
+ 
+ void mlx5_fc_put_hws_action(struct mlx5_fc *counter)
+ {
+ 	mlx5_fs_put_hws_action(&counter->bulk->hws_data);
++	mlx5_fc_local_put(counter);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+index a2fe2f9e832d26..e6ba5a2129075a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+@@ -727,18 +727,13 @@ mlx5hws_action_create_push_vlan(struct mlx5hws_context *ctx, u32 flags);
+  * @num_dest: The number of dests attributes.
+  * @dests: The destination array. Each contains a destination action and can
+  *	   have additional actions.
+- * @ignore_flow_level: Whether to turn on 'ignore_flow_level' for this dest.
+- * @flow_source: Source port of the traffic for this actions.
+  * @flags: Action creation flags (enum mlx5hws_action_flags).
+  *
+  * Return: pointer to mlx5hws_action on success NULL otherwise.
+  */
+ struct mlx5hws_action *
+-mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+-				 size_t num_dest,
++mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx, size_t num_dest,
+ 				 struct mlx5hws_action_dest_attr *dests,
+-				 bool ignore_flow_level,
+-				 u32 flow_source,
+ 				 u32 flags);
+ 
+ /**
+diff --git a/drivers/net/phy/bcm-phy-ptp.c b/drivers/net/phy/bcm-phy-ptp.c
+index eba8b5fb1365f4..d3501f8487d964 100644
+--- a/drivers/net/phy/bcm-phy-ptp.c
++++ b/drivers/net/phy/bcm-phy-ptp.c
+@@ -597,10 +597,6 @@ static int bcm_ptp_perout_locked(struct bcm_ptp_private *priv,
+ 
+ 	period = BCM_MAX_PERIOD_8NS;	/* write nonzero value */
+ 
+-	/* Reject unsupported flags */
+-	if (req->flags & ~PTP_PEROUT_DUTY_CYCLE)
+-		return -EOPNOTSUPP;
+-
+ 	if (req->flags & PTP_PEROUT_DUTY_CYCLE)
+ 		pulse = ktime_to_ns(ktime_set(req->on.sec, req->on.nsec));
+ 	else
+@@ -741,6 +737,8 @@ static const struct ptp_clock_info bcm_ptp_clock_info = {
+ 	.n_pins		= 1,
+ 	.n_per_out	= 1,
+ 	.n_ext_ts	= 1,
++	.supported_perout_flags = PTP_PEROUT_DUTY_CYCLE,
++	.supported_extts_flags = PTP_STRICT_FLAGS | PTP_RISING_EDGE,
+ };
+ 
+ static void bcm_ptp_txtstamp(struct mii_timestamper *mii_ts,
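
The bcm-phy-ptp hunks move flag policing out of the driver: instead of rejecting unsupported periodic-output flags inside the perout handler, the driver now declares supported_perout_flags and supported_extts_flags in its ptp_clock_info and lets the PTP core refuse anything else before the driver callback runs. A sketch of that core-side check (flag values follow the PTP UAPI header, but treat the sketch as illustrative):

/* Declarative flag validation: the driver advertises what it supports,
 * generic code rejects the rest. */
#include <errno.h>
#include <stdio.h>

#define PTP_PEROUT_DUTY_CYCLE	(1u << 1)
#define PTP_PEROUT_PHASE	(1u << 2)

struct clock_info { unsigned int supported_perout_flags; };

static int core_validate(const struct clock_info *info, unsigned int req_flags)
{
	if (req_flags & ~info->supported_perout_flags)
		return -EOPNOTSUPP;	/* rejected before the driver is called */
	return 0;
}

int main(void)
{
	struct clock_info bcm = { .supported_perout_flags = PTP_PEROUT_DUTY_CYCLE };

	printf("%d\n", core_validate(&bcm, PTP_PEROUT_DUTY_CYCLE));	/* 0 */
	printf("%d\n", core_validate(&bcm, PTP_PEROUT_PHASE));		/* -EOPNOTSUPP */
	return 0;
}
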
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 347c1e0e94d951..4cd1d6c51dc2a0 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -361,6 +361,11 @@ static void sfp_fixup_ignore_tx_fault(struct sfp *sfp)
+ 	sfp->state_ignore_mask |= SFP_F_TX_FAULT;
+ }
+ 
++static void sfp_fixup_ignore_hw(struct sfp *sfp, unsigned int mask)
++{
++	sfp->state_hw_mask &= ~mask;
++}
++
+ static void sfp_fixup_nokia(struct sfp *sfp)
+ {
+ 	sfp_fixup_long_startup(sfp);
+@@ -409,7 +414,19 @@ static void sfp_fixup_halny_gsfp(struct sfp *sfp)
+ 	 * these are possibly used for other purposes on this
+ 	 * module, e.g. a serial port.
+ 	 */
+-	sfp->state_hw_mask &= ~(SFP_F_TX_FAULT | SFP_F_LOS);
++	sfp_fixup_ignore_hw(sfp, SFP_F_TX_FAULT | SFP_F_LOS);
++}
++
++static void sfp_fixup_potron(struct sfp *sfp)
++{
++	/*
++	 * The TX_FAULT and LOS pins on this device are used for serial
++	 * communication, so ignore them. Additionally, provide extra
++	 * time for this device to fully start up.
++	 */
++
++	sfp_fixup_long_startup(sfp);
++	sfp_fixup_ignore_hw(sfp, SFP_F_TX_FAULT | SFP_F_LOS);
+ }
+ 
+ static void sfp_fixup_rollball_cc(struct sfp *sfp)
+@@ -475,6 +492,9 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 	SFP_QUIRK("ALCATELLUCENT", "3FE46541AA", sfp_quirk_2500basex,
+ 		  sfp_fixup_nokia),
+ 
++	// FLYPRO SFP-10GT-CS-30M uses Rollball protocol to talk to the PHY.
++	SFP_QUIRK_F("FLYPRO", "SFP-10GT-CS-30M", sfp_fixup_rollball),
++
+ 	// Fiberstore SFP-10G-T doesn't identify as copper, uses the Rollball
+ 	// protocol to talk to the PHY and needs 4 sec wait before probing the
+ 	// PHY.
+@@ -512,6 +532,8 @@ static const struct sfp_quirk sfp_quirks[] = {
+ 	SFP_QUIRK_F("Walsun", "HXSX-ATRC-1", sfp_fixup_fs_10gt),
+ 	SFP_QUIRK_F("Walsun", "HXSX-ATRI-1", sfp_fixup_fs_10gt),
+ 
++	SFP_QUIRK_F("YV", "SFP+ONU-XGSPON", sfp_fixup_potron),
++
+ 	// OEM SFP-GE-T is a 1000Base-T module with broken TX_FAULT indicator
+ 	SFP_QUIRK_F("OEM", "SFP-GE-T", sfp_fixup_ignore_tx_fault),
+ 
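
The SFP additions are data-driven: new modules get rows in the quirk table keyed on the EEPROM vendor and part strings, and matching rows run fixups at probe time (the new sfp_fixup_potron combines the long-startup wait with masking the repurposed TX_FAULT/LOS pins). A simplified lookup over the two rows added here, with the fixup callbacks reduced to strings:

/* Simplified vendor/part quirk matching, mirroring the two table rows
 * added in this hunk. */
#include <stdio.h>
#include <string.h>

struct quirk {
	const char *vendor, *part;
	const char *fixup;	/* stands in for the fixup callback */
};

static const struct quirk quirks[] = {
	{ "FLYPRO", "SFP-10GT-CS-30M", "rollball" },
	{ "YV",     "SFP+ONU-XGSPON",  "potron (long startup, ignore TX_FAULT/LOS)" },
};

static const struct quirk *find_quirk(const char *vendor, const char *part)
{
	for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++)
		if (!strcmp(quirks[i].vendor, vendor) &&
		    !strcmp(quirks[i].part, part))
			return &quirks[i];
	return NULL;
}

int main(void)
{
	const struct quirk *q = find_quirk("YV", "SFP+ONU-XGSPON");

	puts(q ? q->fixup : "no quirk");
	return 0;
}
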
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f8c5e2fd04dfa0..0fffa023cb736b 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1863,6 +1863,9 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 				local_bh_enable();
+ 				goto unlock_frags;
+ 			}
++
++			if (frags && skb != tfile->napi.skb)
++				tfile->napi.skb = skb;
+ 		}
+ 		rcu_read_unlock();
+ 		local_bh_enable();
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index eee55428749c92..de5005815ee706 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -2093,7 +2093,8 @@ static void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,
+ 		break;
+ 	}
+ 
+-	if (trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
++	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_7000 &&
++	    trans->mac_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ 		len = DIV_ROUND_UP(len, 4);
+ 
+ 	if (WARN_ON(len > 0xFFF || write_ptr >= TFD_QUEUE_SIZE_MAX))
+diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c
+index 1fffeff2190ca8..4eae89376feb55 100644
+--- a/drivers/net/wireless/virtual/virt_wifi.c
++++ b/drivers/net/wireless/virtual/virt_wifi.c
+@@ -277,7 +277,9 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 		priv->is_connected = true;
+ 
+ 	/* Schedules an event that acquires the rtnl lock. */
+-	cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0,
++	cfg80211_connect_result(priv->upperdev,
++				priv->is_connected ? fake_router_bssid : NULL,
++				NULL, 0, NULL, 0,
+ 				status, GFP_KERNEL);
+ 	netif_carrier_on(priv->upperdev);
+ }
+diff --git a/drivers/pinctrl/mediatek/pinctrl-airoha.c b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+index 3fa5131d81e525..1dc26e898d787a 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-airoha.c
++++ b/drivers/pinctrl/mediatek/pinctrl-airoha.c
+@@ -108,6 +108,9 @@
+ #define JTAG_UDI_EN_MASK			BIT(4)
+ #define JTAG_DFD_EN_MASK			BIT(3)
+ 
++#define REG_FORCE_GPIO_EN			0x0228
++#define FORCE_GPIO_EN(n)			BIT(n)
++
+ /* LED MAP */
+ #define REG_LAN_LED0_MAPPING			0x027c
+ #define REG_LAN_LED1_MAPPING			0x0280
+@@ -718,17 +721,17 @@ static const struct airoha_pinctrl_func_group mdio_func_group[] = {
+ 	{
+ 		.name = "mdio",
+ 		.regmap[0] = {
+-			AIROHA_FUNC_MUX,
+-			REG_GPIO_PON_MODE,
+-			GPIO_SGMII_MDIO_MODE_MASK,
+-			GPIO_SGMII_MDIO_MODE_MASK
+-		},
+-		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+ 			GPIO_MDC_IO_MASTER_MODE_MODE,
+ 			GPIO_MDC_IO_MASTER_MODE_MODE
+ 		},
++		.regmap[1] = {
++			AIROHA_FUNC_MUX,
++			REG_FORCE_GPIO_EN,
++			FORCE_GPIO_EN(1) | FORCE_GPIO_EN(2),
++			FORCE_GPIO_EN(1) | FORCE_GPIO_EN(2)
++		},
+ 		.regmap_size = 2,
+ 	},
+ };
+@@ -1752,8 +1755,8 @@ static const struct airoha_pinctrl_func_group phy1_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1816,8 +1819,8 @@ static const struct airoha_pinctrl_func_group phy2_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1880,8 +1883,8 @@ static const struct airoha_pinctrl_func_group phy3_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+@@ -1944,8 +1947,8 @@ static const struct airoha_pinctrl_func_group phy4_led1_func_group[] = {
+ 		.regmap[0] = {
+ 			AIROHA_FUNC_MUX,
+ 			REG_GPIO_2ND_I2C_MODE,
+-			GPIO_LAN3_LED0_MODE_MASK,
+-			GPIO_LAN3_LED0_MODE_MASK
++			GPIO_LAN3_LED1_MODE_MASK,
++			GPIO_LAN3_LED1_MODE_MASK
+ 		},
+ 		.regmap[1] = {
+ 			AIROHA_FUNC_MUX,
+diff --git a/drivers/platform/x86/lg-laptop.c b/drivers/platform/x86/lg-laptop.c
+index 4b57102c7f6270..6af6cf477c5b5b 100644
+--- a/drivers/platform/x86/lg-laptop.c
++++ b/drivers/platform/x86/lg-laptop.c
+@@ -8,6 +8,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <linux/acpi.h>
++#include <linux/bitfield.h>
+ #include <linux/bits.h>
+ #include <linux/device.h>
+ #include <linux/dev_printk.h>
+@@ -75,6 +76,9 @@ MODULE_PARM_DESC(fw_debug, "Enable printing of firmware debug messages");
+ #define WMBB_USB_CHARGE 0x10B
+ #define WMBB_BATT_LIMIT 0x10C
+ 
++#define FAN_MODE_LOWER GENMASK(1, 0)
++#define FAN_MODE_UPPER GENMASK(5, 4)
++
+ #define PLATFORM_NAME   "lg-laptop"
+ 
+ MODULE_ALIAS("wmi:" WMI_EVENT_GUID0);
+@@ -274,29 +278,19 @@ static ssize_t fan_mode_store(struct device *dev,
+ 			      struct device_attribute *attr,
+ 			      const char *buffer, size_t count)
+ {
+-	bool value;
++	unsigned long value;
+ 	union acpi_object *r;
+-	u32 m;
+ 	int ret;
+ 
+-	ret = kstrtobool(buffer, &value);
++	ret = kstrtoul(buffer, 10, &value);
+ 	if (ret)
+ 		return ret;
++	if (value >= 3)
++		return -EINVAL;
+ 
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_GET, 0);
+-	if (!r)
+-		return -EIO;
+-
+-	if (r->type != ACPI_TYPE_INTEGER) {
+-		kfree(r);
+-		return -EIO;
+-	}
+-
+-	m = r->integer.value;
+-	kfree(r);
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_SET, (m & 0xffffff0f) | (value << 4));
+-	kfree(r);
+-	r = lg_wmab(dev, WM_FAN_MODE, WM_SET, (m & 0xfffffff0) | value);
++	r = lg_wmab(dev, WM_FAN_MODE, WM_SET,
++		FIELD_PREP(FAN_MODE_LOWER, value) |
++		FIELD_PREP(FAN_MODE_UPPER, value));
+ 	kfree(r);
+ 
+ 	return count;
+@@ -305,7 +299,7 @@ static ssize_t fan_mode_store(struct device *dev,
+ static ssize_t fan_mode_show(struct device *dev,
+ 			     struct device_attribute *attr, char *buffer)
+ {
+-	unsigned int status;
++	unsigned int mode;
+ 	union acpi_object *r;
+ 
+ 	r = lg_wmab(dev, WM_FAN_MODE, WM_GET, 0);
+@@ -317,10 +311,10 @@ static ssize_t fan_mode_show(struct device *dev,
+ 		return -EIO;
+ 	}
+ 
+-	status = r->integer.value & 0x01;
++	mode = FIELD_GET(FAN_MODE_LOWER, r->integer.value);
+ 	kfree(r);
+ 
+-	return sysfs_emit(buffer, "%d\n", status);
++	return sysfs_emit(buffer, "%d\n", mode);
+ }
+ 
+ static ssize_t usb_charge_store(struct device *dev,
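The lg-laptop hunks above replace open-coded mask-and-shift arithmetic with GENMASK() plus FIELD_PREP()/FIELD_GET() from <linux/bitfield.h>, which derive the shift from the mask at compile time. A minimal userspace sketch of the same packing, assuming only the two masks shown in the patch (the hand-rolled field_prep()/field_get() helpers stand in for the kernel macros):

#include <stdio.h>
#include <stdint.h>

#define FAN_MODE_LOWER 0x03u   /* GENMASK(1, 0) */
#define FAN_MODE_UPPER 0x30u   /* GENMASK(5, 4) */

/* Stand-ins for the kernel's FIELD_PREP()/FIELD_GET(); the shift is
 * recovered from the mask with a count-trailing-zeros builtin.
 */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val << __builtin_ctz(mask)) & mask;
}

static uint32_t field_get(uint32_t mask, uint32_t reg)
{
	return (reg & mask) >> __builtin_ctz(mask);
}

int main(void)
{
	for (uint32_t mode = 0; mode < 3; mode++) {
		uint32_t reg = field_prep(FAN_MODE_LOWER, mode) |
			       field_prep(FAN_MODE_UPPER, mode);
		printf("mode %u -> reg 0x%02x -> read back %u\n",
		       mode, reg, field_get(FAN_MODE_LOWER, reg));
	}
	return 0;
}

Packing both nibbles into a single WM_SET value is what lets the patch drop the earlier read-modify-write sequence of WM_GET followed by two WM_SET calls.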
+diff --git a/drivers/platform/x86/oxpec.c b/drivers/platform/x86/oxpec.c
+index 9839e8cb82ce4c..eb076bb4099bed 100644
+--- a/drivers/platform/x86/oxpec.c
++++ b/drivers/platform/x86/oxpec.c
+@@ -292,6 +292,13 @@ static const struct dmi_system_id dmi_table[] = {
+ 		},
+ 		.driver_data = (void *)oxp_x1,
+ 	},
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "ONEXPLAYER X1Mini Pro"),
++		},
++		.driver_data = (void *)oxp_x1,
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "ONE-NETBOOK"),
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index d3c78f59b22cd9..c9eb8004fcc271 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -46,6 +46,7 @@ static_assert(CQSPI_MAX_CHIPSELECT <= SPI_CS_CNT_MAX);
+ #define CQSPI_DMA_SET_MASK		BIT(7)
+ #define CQSPI_SUPPORT_DEVICE_RESET	BIT(8)
+ #define CQSPI_DISABLE_STIG_MODE		BIT(9)
++#define CQSPI_DISABLE_RUNTIME_PM	BIT(10)
+ 
+ /* Capabilities */
+ #define CQSPI_SUPPORTS_OCTAL		BIT(0)
+@@ -108,6 +109,8 @@ struct cqspi_st {
+ 
+ 	bool			is_jh7110; /* Flag for StarFive JH7110 SoC */
+ 	bool			disable_stig_mode;
++	refcount_t		refcount;
++	refcount_t		inflight_ops;
+ 
+ 	const struct cqspi_driver_platdata *ddata;
+ };
+@@ -735,6 +738,9 @@ static int cqspi_indirect_read_execute(struct cqspi_flash_pdata *f_pdata,
+ 	u8 *rxbuf_end = rxbuf + n_rx;
+ 	int ret = 0;
+ 
++	if (!refcount_read(&cqspi->refcount))
++		return -ENODEV;
++
+ 	writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
+ 	writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
+ 
+@@ -1071,6 +1077,9 @@ static int cqspi_indirect_write_execute(struct cqspi_flash_pdata *f_pdata,
+ 	unsigned int write_bytes;
+ 	int ret;
+ 
++	if (!refcount_read(&cqspi->refcount))
++		return -ENODEV;
++
+ 	writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
+ 	writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
+ 
+@@ -1460,21 +1469,43 @@ static int cqspi_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ 	int ret;
+ 	struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller);
+ 	struct device *dev = &cqspi->pdev->dev;
++	const struct cqspi_driver_platdata *ddata = of_device_get_match_data(dev);
+ 
+-	ret = pm_runtime_resume_and_get(dev);
+-	if (ret) {
+-		dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
+-		return ret;
++	if (refcount_read(&cqspi->inflight_ops) == 0)
++		return -ENODEV;
++
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		ret = pm_runtime_resume_and_get(dev);
++		if (ret) {
++			dev_err(&mem->spi->dev, "resume failed with %d\n", ret);
++			return ret;
++		}
++	}
++
++	if (!refcount_read(&cqspi->refcount))
++		return -EBUSY;
++
++	refcount_inc(&cqspi->inflight_ops);
++
++	if (!refcount_read(&cqspi->refcount)) {
++		if (refcount_read(&cqspi->inflight_ops))
++			refcount_dec(&cqspi->inflight_ops);
++		return -EBUSY;
+ 	}
+ 
+ 	ret = cqspi_mem_process(mem, op);
+ 
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_mark_last_busy(dev);
++		pm_runtime_put_autosuspend(dev);
++	}
+ 
+ 	if (ret)
+ 		dev_err(&mem->spi->dev, "operation failed with %d\n", ret);
+ 
++	if (refcount_read(&cqspi->inflight_ops) > 1)
++		refcount_dec(&cqspi->inflight_ops);
++
+ 	return ret;
+ }
+ 
+@@ -1926,6 +1957,9 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	refcount_set(&cqspi->refcount, 1);
++	refcount_set(&cqspi->inflight_ops, 1);
++
+ 	ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+ 			       pdev->name, cqspi);
+ 	if (ret) {
+@@ -1958,11 +1992,12 @@ static int cqspi_probe(struct platform_device *pdev)
+ 			goto probe_setup_failed;
+ 	}
+ 
+-	pm_runtime_enable(dev);
+-
+-	pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
+-	pm_runtime_use_autosuspend(dev);
+-	pm_runtime_get_noresume(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_enable(dev);
++		pm_runtime_set_autosuspend_delay(dev, CQSPI_AUTOSUSPEND_TIMEOUT);
++		pm_runtime_use_autosuspend(dev);
++		pm_runtime_get_noresume(dev);
++	}
+ 
+ 	ret = spi_register_controller(host);
+ 	if (ret) {
+@@ -1970,13 +2005,17 @@ static int cqspi_probe(struct platform_device *pdev)
+ 		goto probe_setup_failed;
+ 	}
+ 
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_put_autosuspend(dev);
++		pm_runtime_mark_last_busy(dev);
++		pm_runtime_put_autosuspend(dev);
++	}
+ 
+ 	return 0;
+ probe_setup_failed:
+ 	cqspi_controller_enable(cqspi, 0);
+-	pm_runtime_disable(dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))
++		pm_runtime_disable(dev);
+ probe_reset_failed:
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+@@ -1987,7 +2026,16 @@ static int cqspi_probe(struct platform_device *pdev)
+ 
+ static void cqspi_remove(struct platform_device *pdev)
+ {
++	const struct cqspi_driver_platdata *ddata;
+ 	struct cqspi_st *cqspi = platform_get_drvdata(pdev);
++	struct device *dev = &pdev->dev;
++
++	ddata = of_device_get_match_data(dev);
++
++	refcount_set(&cqspi->refcount, 0);
++
++	if (!refcount_dec_and_test(&cqspi->inflight_ops))
++		cqspi_wait_idle(cqspi);
+ 
+ 	spi_unregister_controller(cqspi->host);
+ 	cqspi_controller_enable(cqspi, 0);
+@@ -1995,14 +2043,17 @@ static void cqspi_remove(struct platform_device *pdev)
+ 	if (cqspi->rx_chan)
+ 		dma_release_channel(cqspi->rx_chan);
+ 
+-	if (pm_runtime_get_sync(&pdev->dev) >= 0)
+-		clk_disable(cqspi->clk);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))
++		if (pm_runtime_get_sync(&pdev->dev) >= 0)
++			clk_disable(cqspi->clk);
+ 
+ 	if (cqspi->is_jh7110)
+ 		cqspi_jh7110_disable_clk(pdev, cqspi);
+ 
+-	pm_runtime_put_sync(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
++	if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {
++		pm_runtime_put_sync(&pdev->dev);
++		pm_runtime_disable(&pdev->dev);
++	}
+ }
+ 
+ static int cqspi_runtime_suspend(struct device *dev)
+@@ -2081,7 +2132,8 @@ static const struct cqspi_driver_platdata socfpga_qspi = {
+ 	.quirks = CQSPI_DISABLE_DAC_MODE
+ 			| CQSPI_NO_SUPPORT_WR_COMPLETION
+ 			| CQSPI_SLOW_SRAM
+-			| CQSPI_DISABLE_STIG_MODE,
++			| CQSPI_DISABLE_STIG_MODE
++			| CQSPI_DISABLE_RUNTIME_PM,
+ };
+ 
+ static const struct cqspi_driver_platdata versal_ospi = {
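The cadence-quadspi changes gate every memory operation on two refcounts: refcount is zeroed in remove() so new operations fail fast, and inflight_ops lets remove() decide whether it must wait for operations already running. A rough userspace analogue of that shutdown gate, using C11 atomics in place of the kernel's refcount_t (a sketch of the ordering only, not of the driver):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int alive = 1;     /* plays the role of cqspi->refcount */
static atomic_int inflight = 1;  /* plays the role of cqspi->inflight_ops */

static int exec_op(void)
{
	if (!atomic_load(&alive))
		return -1;                  /* -ENODEV in the driver */
	atomic_fetch_add(&inflight, 1);
	/* ... indirect read/write would run here ... */
	atomic_fetch_sub(&inflight, 1);
	return 0;
}

static void remove_device(void)
{
	atomic_store(&alive, 0);            /* refuse new operations */
	while (atomic_load(&inflight) > 1)
		;                           /* the driver waits for idle instead */
}

int main(void)
{
	printf("before remove: %d\n", exec_op());   /* 0: succeeds */
	remove_device();
	printf("after remove:  %d\n", exec_op());   /* -1: device gone */
	return 0;
}

The extra inflight_ops reference held since probe is why remove() uses refcount_dec_and_test(): the decrement only reaches zero when nothing is in flight, so cqspi_wait_idle() runs just in the contended case.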
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index 1e50675772febb..cc88aaa106da30 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -243,7 +243,7 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 		hwq->sqe_base_addr = dmam_alloc_coherent(hba->dev, utrdl_size,
+ 							 &hwq->sqe_dma_addr,
+ 							 GFP_KERNEL);
+-		if (!hwq->sqe_dma_addr) {
++		if (!hwq->sqe_base_addr) {
+ 			dev_err(hba->dev, "SQE allocation failed\n");
+ 			return -ENOMEM;
+ 		}
+@@ -252,7 +252,7 @@ int ufshcd_mcq_memory_alloc(struct ufs_hba *hba)
+ 		hwq->cqe_base_addr = dmam_alloc_coherent(hba->dev, cqe_size,
+ 							 &hwq->cqe_dma_addr,
+ 							 GFP_KERNEL);
+-		if (!hwq->cqe_dma_addr) {
++		if (!hwq->cqe_base_addr) {
+ 			dev_err(hba->dev, "CQE allocation failed\n");
+ 			return -ENOMEM;
+ 		}
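The ufs-mcq fix is a classic allocation-check bug: dmam_alloc_coherent() returns the CPU virtual address and writes the DMA handle through its out-parameter, so failure must be detected on the returned pointer; the handle is not guaranteed to be written on failure. A hedged sketch of the pattern (alloc_coherent() below is a hypothetical stand-in, not a real API):

#include <stdlib.h>
#include <stdint.h>

/* Hypothetical stand-in for dmam_alloc_coherent(): returns the CPU
 * pointer and writes a bus address through @dma_handle on success.
 * On failure the handle may be left untouched.
 */
static void *alloc_coherent(size_t size, uint64_t *dma_handle)
{
	void *cpu_addr = malloc(size);

	if (cpu_addr)
		*dma_handle = (uint64_t)(uintptr_t)cpu_addr; /* fake bus address */
	return cpu_addr;
}

int main(void)
{
	uint64_t sqe_dma_addr = 0xdeadbeef;  /* stale value from a prior use */
	void *sqe_base_addr = alloc_coherent(4096, &sqe_dma_addr);

	if (!sqe_base_addr)   /* check the returned pointer, as the fix does */
		return 1;
	free(sqe_base_addr);
	return 0;
}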
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index d6daad39491b75..f5bc5387533012 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -737,7 +737,7 @@ void usb_detect_quirks(struct usb_device *udev)
+ 	udev->quirks ^= usb_detect_dynamic_quirks(udev);
+ 
+ 	if (udev->quirks)
+-		dev_dbg(&udev->dev, "USB quirks for this device: %x\n",
++		dev_dbg(&udev->dev, "USB quirks for this device: 0x%x\n",
+ 			udev->quirks);
+ 
+ #ifdef CONFIG_USB_DEFAULT_PERSIST
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 53551aafd47027..c5902cc261e5b0 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -760,10 +760,10 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 	int err;
+ 	int sent_pkts = 0;
+ 	bool sock_can_batch = (sock->sk->sk_sndbuf == INT_MAX);
+-	bool busyloop_intr;
+ 
+ 	do {
+-		busyloop_intr = false;
++		bool busyloop_intr = false;
++
+ 		if (nvq->done_idx == VHOST_NET_BATCH)
+ 			vhost_tx_batch(net, nvq, sock, &msg);
+ 
+@@ -774,10 +774,18 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 			break;
+ 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
+ 		if (head == vq->num) {
+-			/* Kicks are disabled at this point, break loop and
+-			 * process any remaining batched packets. Queue will
+-			 * be re-enabled afterwards.
++			/* Flush batched packets to handle pending RX
++			 * work (if busyloop_intr is set) and to avoid
++			 * unnecessary virtqueue kicks.
+ 			 */
++			vhost_tx_batch(net, nvq, sock, &msg);
++			if (unlikely(busyloop_intr)) {
++				vhost_poll_queue(&vq->poll);
++			} else if (unlikely(vhost_enable_notify(&net->dev,
++								vq))) {
++				vhost_disable_notify(&net->dev, vq);
++				continue;
++			}
+ 			break;
+ 		}
+ 
+@@ -827,22 +835,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
+ 		++nvq->done_idx;
+ 	} while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
+ 
+-	/* Kicks are still disabled, dispatch any remaining batched msgs. */
+ 	vhost_tx_batch(net, nvq, sock, &msg);
+-
+-	if (unlikely(busyloop_intr))
+-		/* If interrupted while doing busy polling, requeue the
+-		 * handler to be fair handle_rx as well as other tasks
+-		 * waiting on cpu.
+-		 */
+-		vhost_poll_queue(&vq->poll);
+-	else
+-		/* All of our work has been completed; however, before
+-		 * leaving the TX handler, do one last check for work,
+-		 * and requeue handler if necessary. If there is no work,
+-		 * queue will be reenabled.
+-		 */
+-		vhost_net_busy_poll_try_queue(net, vq);
+ }
+ 
+ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 9eb880a11fd8ce..a786b0c1940b4c 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -2491,7 +2491,7 @@ static int fbcon_set_font(struct vc_data *vc, const struct console_font *font,
+ 	unsigned charcount = font->charcount;
+ 	int w = font->width;
+ 	int h = font->height;
+-	int size;
++	int size, alloc_size;
+ 	int i, csum;
+ 	u8 *new_data, *data = font->data;
+ 	int pitch = PITCH(font->width);
+@@ -2518,9 +2518,16 @@ static int fbcon_set_font(struct vc_data *vc, const struct console_font *font,
+ 	if (fbcon_invalid_charcount(info, charcount))
+ 		return -EINVAL;
+ 
+-	size = CALC_FONTSZ(h, pitch, charcount);
++	/* Check for integer overflow in font size calculation */
++	if (check_mul_overflow(h, pitch, &size) ||
++	    check_mul_overflow(size, charcount, &size))
++		return -EINVAL;
++
++	/* Check for overflow in allocation size calculation */
++	if (check_add_overflow(FONT_EXTRA_WORDS * sizeof(int), size, &alloc_size))
++		return -EINVAL;
+ 
+-	new_data = kmalloc(FONT_EXTRA_WORDS * sizeof(int) + size, GFP_USER);
++	new_data = kmalloc(alloc_size, GFP_USER);
+ 
+ 	if (!new_data)
+ 		return -ENOMEM;
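The fbcon change replaces the CALC_FONTSZ() macro with explicit overflow-checked arithmetic so that a hostile font cannot wrap the allocation size. check_mul_overflow()/check_add_overflow() are thin wrappers around the compiler builtins, so the same logic can be sketched in userspace (the font dimensions and the header size below are illustrative, not the kernel constants):

#include <stdio.h>
#include <limits.h>

#define FONT_EXTRA_WORDS 4   /* illustrative header size, in ints */

int main(void)
{
	int h = 32, pitch = 4;
	int charcount = INT_MAX / 8;   /* hostile: would wrap the old math */
	int size, alloc_size;

	if (__builtin_mul_overflow(h, pitch, &size) ||
	    __builtin_mul_overflow(size, charcount, &size)) {
		fprintf(stderr, "font size overflows int\n");
		return 1;
	}
	if (__builtin_add_overflow(FONT_EXTRA_WORDS * (int)sizeof(int),
				   size, &alloc_size)) {
		fprintf(stderr, "allocation size overflows int\n");
		return 1;
	}
	printf("kmalloc(%d) would be safe\n", alloc_size);
	return 0;
}

With the old unchecked multiply, the wrapped size would have produced an undersized kmalloc() followed by an out-of-bounds copy of the font data.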
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index a97562f831eb5a..c4428ebddb1da6 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -331,13 +331,14 @@ struct afs_server *afs_use_server(struct afs_server *server, bool activate,
+ void afs_put_server(struct afs_net *net, struct afs_server *server,
+ 		    enum afs_server_trace reason)
+ {
+-	unsigned int a, debug_id = server->debug_id;
++	unsigned int a, debug_id;
+ 	bool zero;
+ 	int r;
+ 
+ 	if (!server)
+ 		return;
+ 
++	debug_id = server->debug_id;
+ 	a = atomic_read(&server->active);
+ 	zero = __refcount_dec_and_test(&server->ref, &r);
+ 	trace_afs_server(debug_id, r - 1, a, reason);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f475b4b7c45782..817d3ef501ec44 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2714,6 +2714,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ 		goto error;
+ 	}
+ 
++	if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) {
++		ret = -EINVAL;
++		goto error;
++	}
++
+ 	if (fs_devices->seeding) {
+ 		seeding_dev = true;
+ 		down_write(&sb->s_umount);
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index e4de5425838d92..6040e54082777d 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -520,14 +520,16 @@ static bool remove_inode_single_folio(struct hstate *h, struct inode *inode,
+ 
+ 	/*
+ 	 * If folio is mapped, it was faulted in after being
+-	 * unmapped in caller.  Unmap (again) while holding
+-	 * the fault mutex.  The mutex will prevent faults
+-	 * until we finish removing the folio.
++	 * unmapped in the caller, or hugetlb_vmdelete_list() skipped
++	 * unmapping it after failing to grab the lock.  Unmap (again)
++	 * while holding the fault mutex.  The mutex will prevent
++	 * faults until we finish removing the folio.  Hold folio
++	 * lock to guarantee no concurrent migration.
+ 	 */
++	folio_lock(folio);
+ 	if (unlikely(folio_mapped(folio)))
+ 		hugetlb_unmap_file_folio(h, mapping, folio, index);
+ 
+-	folio_lock(folio);
+ 	/*
+ 	 * We must remove the folio from page cache before removing
+ 	 * the region/ reserve map (hugetlb_unreserve_pages).  In
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 18b3dc74c70e41..37ab6f28b5ad0e 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -369,7 +369,7 @@ void netfs_readahead(struct readahead_control *ractl)
+ 	return netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ cleanup_free:
+-	return netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	return netfs_put_failed_request(rreq);
+ }
+ EXPORT_SYMBOL(netfs_readahead);
+ 
+@@ -472,7 +472,7 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -532,7 +532,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -699,7 +699,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	return 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(rreq);
+ error:
+ 	if (folio) {
+ 		folio_unlock(folio);
+@@ -754,7 +754,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 	return ret < 0 ? ret : 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
++	netfs_put_failed_request(rreq);
+ error:
+ 	_leave(" = %d", ret);
+ 	return ret;
+diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
+index a05e13472bafb2..a498ee8d66745f 100644
+--- a/fs/netfs/direct_read.c
++++ b/fs/netfs/direct_read.c
+@@ -131,6 +131,7 @@ static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ 
+ 	if (rreq->len == 0) {
+ 		pr_err("Zero-sized read [R=%x]\n", rreq->debug_id);
++		netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ 		return -EIO;
+ 	}
+ 
+@@ -205,7 +206,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	if (user_backed_iter(iter)) {
+ 		ret = netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0);
+ 		if (ret < 0)
+-			goto out;
++			goto error_put;
+ 		rreq->direct_bv = (struct bio_vec *)rreq->buffer.iter.bvec;
+ 		rreq->direct_bv_count = ret;
+ 		rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
+@@ -238,6 +239,10 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	if (ret > 0)
+ 		orig_count -= ret;
+ 	return ret;
++
++error_put:
++	netfs_put_failed_request(rreq);
++	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_read_iter_locked);
+ 
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index a16660ab7f8385..a9d1c3b2c08426 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -57,7 +57,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 			n = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);
+ 			if (n < 0) {
+ 				ret = n;
+-				goto out;
++				goto error_put;
+ 			}
+ 			wreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;
+ 			wreq->direct_bv_count = n;
+@@ -101,6 +101,10 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ out:
+ 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
++
++error_put:
++	netfs_put_failed_request(wreq);
++	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_write_iter_locked);
+ 
+diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
+index d4f16fefd96518..4319611f535449 100644
+--- a/fs/netfs/internal.h
++++ b/fs/netfs/internal.h
+@@ -87,6 +87,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+ void netfs_clear_subrequests(struct netfs_io_request *rreq);
+ void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
++void netfs_put_failed_request(struct netfs_io_request *rreq);
+ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+ 
+ static inline void netfs_see_request(struct netfs_io_request *rreq,
+diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
+index e8c99738b5bbf2..40a1c7d6f6e03e 100644
+--- a/fs/netfs/objects.c
++++ b/fs/netfs/objects.c
+@@ -116,10 +116,8 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)
+ 	netfs_stat_d(&netfs_n_rh_rreq);
+ }
+ 
+-static void netfs_free_request(struct work_struct *work)
++static void netfs_deinit_request(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_request *rreq =
+-		container_of(work, struct netfs_io_request, cleanup_work);
+ 	struct netfs_inode *ictx = netfs_inode(rreq->inode);
+ 	unsigned int i;
+ 
+@@ -149,6 +147,14 @@ static void netfs_free_request(struct work_struct *work)
+ 
+ 	if (atomic_dec_and_test(&ictx->io_count))
+ 		wake_up_var(&ictx->io_count);
++}
++
++static void netfs_free_request(struct work_struct *work)
++{
++	struct netfs_io_request *rreq =
++		container_of(work, struct netfs_io_request, cleanup_work);
++
++	netfs_deinit_request(rreq);
+ 	call_rcu(&rreq->rcu, netfs_free_request_rcu);
+ }
+ 
+@@ -167,6 +173,24 @@ void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace
+ 	}
+ }
+ 
++/*
++ * Free a request (synchronously) that was just allocated but has
++ * failed before it could be submitted.
++ */
++void netfs_put_failed_request(struct netfs_io_request *rreq)
++{
++	int r = refcount_read(&rreq->ref);
++
++	/* New requests have two references (see
++	 * netfs_alloc_request()), and this function is only allowed on
++	 * new request objects.
++	 */
++	WARN_ON_ONCE(r != 2);
++
++	trace_netfs_rreq_ref(rreq->debug_id, r, netfs_rreq_trace_put_failed);
++	netfs_free_request(&rreq->cleanup_work);
++}
++
+ /*
+  * Allocate and partially initialise an I/O request structure.
+  */
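netfs_put_failed_request() leans on the invariant that a freshly allocated request holds exactly two references (the caller's and the cleanup work's), so an unsubmitted request can be torn down immediately instead of going through the asynchronous put path. A compressed userspace sketch of that invariant, assuming the two-reference layout described in the comment above (the real function also runs the full deinit and still RCU-frees the object):

#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

struct request {
	atomic_int ref;
	/* ... buffers, subrequests, trace state ... */
};

static struct request *alloc_request(void)
{
	struct request *rreq = calloc(1, sizeof(*rreq));

	if (rreq)
		atomic_init(&rreq->ref, 2); /* caller ref + cleanup-work ref */
	return rreq;
}

/* Only valid on a request that was never submitted. */
static void put_failed_request(struct request *rreq)
{
	assert(atomic_load(&rreq->ref) == 2); /* WARN_ON_ONCE() upstream */
	free(rreq);  /* synchronous teardown, no async put path */
}

int main(void)
{
	struct request *rreq = alloc_request();

	if (!rreq)
		return 1;
	put_failed_request(rreq); /* e.g. user-iter extraction failed */
	return 0;
}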
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index 8097bc069c1de6..a1489aa29f782a 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -118,7 +118,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
+ 	return creq;
+ 
+ cancel_put:
+-	netfs_put_request(creq, netfs_rreq_trace_put_return);
++	netfs_put_failed_request(creq);
+ cancel:
+ 	rreq->copy_to_cache = ERR_PTR(-ENOBUFS);
+ 	clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
+diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
+index fa622a6cd56da3..5c0dc4efc79227 100644
+--- a/fs/netfs/read_single.c
++++ b/fs/netfs/read_single.c
+@@ -189,7 +189,7 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
+ 	return ret;
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(rreq);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_read_single);
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 0584cba1a04392..dd8743bc8d7fe3 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -133,8 +133,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 
+ 	return wreq;
+ nomem:
+-	wreq->error = -ENOMEM;
+-	netfs_put_request(wreq, netfs_rreq_trace_put_failed);
++	netfs_put_failed_request(wreq);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ 
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index a16a619fb8c33b..8cc39a73faff85 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -28,6 +28,7 @@
+ #include <linux/mm.h>
+ #include <linux/pagemap.h>
+ #include <linux/gfp.h>
++#include <linux/rmap.h>
+ #include <linux/swap.h>
+ #include <linux/compaction.h>
+ 
+@@ -279,6 +280,37 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ }
+ EXPORT_SYMBOL_GPL(nfs_file_fsync);
+ 
++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
++			     loff_t to)
++{
++	struct folio *folio;
++
++	if (from >= to)
++		return;
++
++	folio = filemap_lock_folio(mapping, from >> PAGE_SHIFT);
++	if (IS_ERR(folio))
++		return;
++
++	if (folio_mkclean(folio))
++		folio_mark_dirty(folio);
++
++	if (folio_test_uptodate(folio)) {
++		loff_t fpos = folio_pos(folio);
++		size_t offset = from - fpos;
++		size_t end = folio_size(folio);
++
++		if (to - fpos < end)
++			end = to - fpos;
++		folio_zero_segment(folio, offset, end);
++		trace_nfs_size_truncate_folio(mapping->host, to);
++	}
++
++	folio_unlock(folio);
++	folio_put(folio);
++}
++EXPORT_SYMBOL_GPL(nfs_truncate_last_folio);
++
+ /*
+  * Decide whether a read/modify/write cycle may be more efficient
+  * then a modify/write/read cycle when writing to a page in the
+@@ -353,6 +385,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping,
+ 
+ 	dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%lu), %u@%lld)\n",
+ 		file, mapping->host->i_ino, len, (long long) pos);
++	nfs_truncate_last_folio(mapping, i_size_read(mapping->host), pos);
+ 
+ 	fgp |= fgf_set_order(len);
+ start:
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a32cc45425e287..f6b448666d4195 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -710,6 +710,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 	struct nfs_fattr *fattr;
++	loff_t oldsize = i_size_read(inode);
+ 	int error = 0;
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSSETATTR);
+@@ -725,7 +726,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 		if (error)
+ 			return error;
+ 
+-		if (attr->ia_size == i_size_read(inode))
++		if (attr->ia_size == oldsize)
+ 			attr->ia_valid &= ~ATTR_SIZE;
+ 	}
+ 
+@@ -773,8 +774,12 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ 	}
+ 
+ 	error = NFS_PROTO(inode)->setattr(dentry, fattr, attr);
+-	if (error == 0)
++	if (error == 0) {
++		if (attr->ia_valid & ATTR_SIZE)
++			nfs_truncate_last_folio(inode->i_mapping, oldsize,
++						attr->ia_size);
+ 		error = nfs_refresh_inode(inode, fattr);
++	}
+ 	nfs_free_fattr(fattr);
+ out:
+ 	trace_nfs_setattr_exit(inode, error);
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 0ef0fc6aba3b3c..ae4d039c10d3a4 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -438,6 +438,8 @@ int nfs_file_release(struct inode *, struct file *);
+ int nfs_lock(struct file *, int, struct file_lock *);
+ int nfs_flock(struct file *, int, struct file_lock *);
+ int nfs_check_flags(int);
++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from,
++			     loff_t to);
+ 
+ /* inode.c */
+ extern struct workqueue_struct *nfsiod_workqueue;
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 48ee3d5d89c4ae..6a0b5871ba3b09 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -138,6 +138,7 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len)
+ 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE],
+ 	};
+ 	struct inode *inode = file_inode(filep);
++	loff_t oldsize = i_size_read(inode);
+ 	int err;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE))
+@@ -146,7 +147,11 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len)
+ 	inode_lock(inode);
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+-	if (err == -EOPNOTSUPP)
++
++	if (err == 0)
++		nfs_truncate_last_folio(inode->i_mapping, oldsize,
++					offset + len);
++	else if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~(NFS_CAP_ALLOCATE |
+ 					     NFS_CAP_ZERO_RANGE);
+ 
+@@ -184,6 +189,7 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len)
+ 		.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ZERO_RANGE],
+ 	};
+ 	struct inode *inode = file_inode(filep);
++	loff_t oldsize = i_size_read(inode);
+ 	int err;
+ 
+ 	if (!nfs_server_capable(inode, NFS_CAP_ZERO_RANGE))
+@@ -192,9 +198,11 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len)
+ 	inode_lock(inode);
+ 
+ 	err = nfs42_proc_fallocate(&msg, filep, offset, len);
+-	if (err == 0)
++	if (err == 0) {
++		nfs_truncate_last_folio(inode->i_mapping, oldsize,
++					offset + len);
+ 		truncate_pagecache_range(inode, offset, (offset + len) -1);
+-	if (err == -EOPNOTSUPP)
++	} else if (err == -EOPNOTSUPP)
+ 		NFS_SERVER(inode)->caps &= ~NFS_CAP_ZERO_RANGE;
+ 
+ 	inode_unlock(inode);
+@@ -355,22 +363,27 @@ static int process_copy_commit(struct file *dst, loff_t pos_dst,
+ 
+ /**
+  * nfs42_copy_dest_done - perform inode cache updates after clone/copy offload
+- * @inode: pointer to destination inode
++ * @file: pointer to destination file
+  * @pos: destination offset
+  * @len: copy length
++ * @oldsize: length of the file prior to clone/copy
+  *
+  * Punch a hole in the inode page cache, so that the NFS client will
+  * know to retrieve new data.
+  * Update the file size if necessary, and then mark the inode as having
+  * invalid cached values for change attribute, ctime, mtime and space used.
+  */
+-static void nfs42_copy_dest_done(struct inode *inode, loff_t pos, loff_t len)
++static void nfs42_copy_dest_done(struct file *file, loff_t pos, loff_t len,
++				 loff_t oldsize)
+ {
++	struct inode *inode = file_inode(file);
++	struct address_space *mapping = file->f_mapping;
+ 	loff_t newsize = pos + len;
+ 	loff_t end = newsize - 1;
+ 
+-	WARN_ON_ONCE(invalidate_inode_pages2_range(inode->i_mapping,
+-				pos >> PAGE_SHIFT, end >> PAGE_SHIFT));
++	nfs_truncate_last_folio(mapping, oldsize, pos);
++	WARN_ON_ONCE(invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
++						   end >> PAGE_SHIFT));
+ 
+ 	spin_lock(&inode->i_lock);
+ 	if (newsize > i_size_read(inode))
+@@ -403,6 +416,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 	struct nfs_server *src_server = NFS_SERVER(src_inode);
+ 	loff_t pos_src = args->src_pos;
+ 	loff_t pos_dst = args->dst_pos;
++	loff_t oldsize_dst = i_size_read(dst_inode);
+ 	size_t count = args->count;
+ 	ssize_t status;
+ 
+@@ -477,7 +491,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ 			goto out;
+ 	}
+ 
+-	nfs42_copy_dest_done(dst_inode, pos_dst, res->write_res.count);
++	nfs42_copy_dest_done(dst, pos_dst, res->write_res.count, oldsize_dst);
+ 	nfs_invalidate_atime(src_inode);
+ 	status = res->write_res.count;
+ out:
+@@ -1244,6 +1258,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
+ 	struct nfs42_clone_res res = {
+ 		.server	= server,
+ 	};
++	loff_t oldsize_dst = i_size_read(dst_inode);
+ 	int status;
+ 
+ 	msg->rpc_argp = &args;
+@@ -1278,7 +1293,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
+ 		/* a zero-length count means clone to EOF in src */
+ 		if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE)
+ 			count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset;
+-		nfs42_copy_dest_done(dst_inode, dst_offset, count);
++		nfs42_copy_dest_done(dst_f, dst_offset, count, oldsize_dst);
+ 		status = nfs_post_op_update_inode(dst_inode, res.dst_fattr);
+ 	}
+ 
+diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h
+index 7a058bd8c566e2..1e4dc632f1800c 100644
+--- a/fs/nfs/nfstrace.h
++++ b/fs/nfs/nfstrace.h
+@@ -267,6 +267,7 @@ DECLARE_EVENT_CLASS(nfs_update_size_class,
+ 			TP_ARGS(inode, new_size))
+ 
+ DEFINE_NFS_UPDATE_SIZE_EVENT(truncate);
++DEFINE_NFS_UPDATE_SIZE_EVENT(truncate_folio);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(wcc);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(update);
+ DEFINE_NFS_UPDATE_SIZE_EVENT(grow);
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index bdc3c2e9334ad5..ea4ff22976c5c8 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -2286,6 +2286,9 @@ static void pagemap_scan_backout_range(struct pagemap_scan_private *p,
+ {
+ 	struct page_region *cur_buf = &p->vec_buf[p->vec_buf_index];
+ 
++	if (!p->vec_buf)
++		return;
++
+ 	if (cur_buf->start != addr)
+ 		cur_buf->end = addr;
+ 	else
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 86cad8ee8e6f3b..ac3ce183bd59a9 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -687,7 +687,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	for (i = 0; i < num_cmds; i++) {
+-		char *buf = rsp_iov[i + i].iov_base;
++		char *buf = rsp_iov[i + 1].iov_base;
+ 
+ 		if (buf && resp_buftype[i + 1] != CIFS_NO_BUFFER)
+ 			rc = server->ops->map_error(buf, false);
+diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c
+index 6550bd9f002c27..74dfb6496095db 100644
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -148,7 +148,7 @@ struct smb_direct_transport {
+ 	wait_queue_head_t	wait_send_pending;
+ 	atomic_t		send_pending;
+ 
+-	struct delayed_work	post_recv_credits_work;
++	struct work_struct	post_recv_credits_work;
+ 	struct work_struct	send_immediate_work;
+ 	struct work_struct	disconnect_work;
+ 
+@@ -367,8 +367,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id)
+ 
+ 	spin_lock_init(&t->lock_new_recv_credits);
+ 
+-	INIT_DELAYED_WORK(&t->post_recv_credits_work,
+-			  smb_direct_post_recv_credits);
++	INIT_WORK(&t->post_recv_credits_work,
++		  smb_direct_post_recv_credits);
+ 	INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work);
+ 	INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work);
+ 
+@@ -399,9 +399,9 @@ static void free_transport(struct smb_direct_transport *t)
+ 	wait_event(t->wait_send_pending,
+ 		   atomic_read(&t->send_pending) == 0);
+ 
+-	cancel_work_sync(&t->disconnect_work);
+-	cancel_delayed_work_sync(&t->post_recv_credits_work);
+-	cancel_work_sync(&t->send_immediate_work);
++	disable_work_sync(&t->disconnect_work);
++	disable_work_sync(&t->post_recv_credits_work);
++	disable_work_sync(&t->send_immediate_work);
+ 
+ 	if (t->qp) {
+ 		ib_drain_qp(t->qp);
+@@ -615,8 +615,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 			wake_up_interruptible(&t->wait_send_credits);
+ 
+ 		if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count))
+-			mod_delayed_work(smb_direct_wq,
+-					 &t->post_recv_credits_work, 0);
++			queue_work(smb_direct_wq, &t->post_recv_credits_work);
+ 
+ 		if (data_length) {
+ 			enqueue_reassembly(t, recvmsg, (int)data_length);
+@@ -773,8 +772,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf,
+ 		st->count_avail_recvmsg += queue_removed;
+ 		if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) {
+ 			spin_unlock(&st->receive_credit_lock);
+-			mod_delayed_work(smb_direct_wq,
+-					 &st->post_recv_credits_work, 0);
++			queue_work(smb_direct_wq, &st->post_recv_credits_work);
+ 		} else {
+ 			spin_unlock(&st->receive_credit_lock);
+ 		}
+@@ -801,7 +799,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf,
+ static void smb_direct_post_recv_credits(struct work_struct *work)
+ {
+ 	struct smb_direct_transport *t = container_of(work,
+-		struct smb_direct_transport, post_recv_credits_work.work);
++		struct smb_direct_transport, post_recv_credits_work);
+ 	struct smb_direct_recvmsg *recvmsg;
+ 	int receive_credits, credits = 0;
+ 	int ret;
+@@ -1734,7 +1732,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t)
+ 		goto out_err;
+ 	}
+ 
+-	smb_direct_post_recv_credits(&t->post_recv_credits_work.work);
++	smb_direct_post_recv_credits(&t->post_recv_credits_work);
+ 	return 0;
+ out_err:
+ 	put_recvmsg(t, recvmsg);
+diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
+index 0c70f3a5557505..107b797c33ecf7 100644
+--- a/include/crypto/if_alg.h
++++ b/include/crypto/if_alg.h
+@@ -152,7 +152,7 @@ struct af_alg_ctx {
+ 	size_t used;
+ 	atomic_t rcvused;
+ 
+-	u32		more:1,
++	bool		more:1,
+ 			merge:1,
+ 			enc:1,
+ 			write:1,
+diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h
+index a8a17eeb7d907e..1817df9aceac80 100644
+--- a/include/linux/firmware/imx/sm.h
++++ b/include/linux/firmware/imx/sm.h
+@@ -18,13 +18,43 @@
+ #define SCMI_IMX_CTRL_SAI4_MCLK		4	/* WAKE SAI4 MCLK */
+ #define SCMI_IMX_CTRL_SAI5_MCLK		5	/* WAKE SAI5 MCLK */
+ 
++#if IS_ENABLED(CONFIG_IMX_SCMI_MISC_DRV)
+ int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val);
+ int scmi_imx_misc_ctrl_set(u32 id, u32 val);
++#else
++static inline int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val)
++{
++	return -EOPNOTSUPP;
++}
+ 
++static inline int scmi_imx_misc_ctrl_set(u32 id, u32 val)
++{
++	return -EOPNOTSUPP;
++}
++#endif
++
++#if IS_ENABLED(CONFIG_IMX_SCMI_CPU_DRV)
+ int scmi_imx_cpu_start(u32 cpuid, bool start);
+ int scmi_imx_cpu_started(u32 cpuid, bool *started);
+ int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start, bool boot,
+ 				  bool resume);
++#else
++static inline int scmi_imx_cpu_start(u32 cpuid, bool start)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_cpu_started(u32 cpuid, bool *started)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start,
++						bool boot, bool resume)
++{
++	return -EOPNOTSUPP;
++}
++#endif
+ 
+ enum scmi_imx_lmm_op {
+ 	SCMI_IMX_LMM_BOOT,
+@@ -36,7 +66,24 @@ enum scmi_imx_lmm_op {
+ #define SCMI_IMX_LMM_OP_FORCEFUL	0
+ #define SCMI_IMX_LMM_OP_GRACEFUL	BIT(0)
+ 
++#if IS_ENABLED(CONFIG_IMX_SCMI_LMM_DRV)
+ int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags);
+ int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info);
+ int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector);
++#else
++static inline int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info)
++{
++	return -EOPNOTSUPP;
++}
++
++static inline int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector)
++{
++	return -EOPNOTSUPP;
++}
++#endif
+ #endif
+diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
+index 939e58c2f3865f..fb5f98fcc72692 100644
+--- a/include/linux/mlx5/fs.h
++++ b/include/linux/mlx5/fs.h
+@@ -308,6 +308,8 @@ struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging);
+ void mlx5_fc_destroy(struct mlx5_core_dev *dev, struct mlx5_fc *counter);
+ struct mlx5_fc *mlx5_fc_local_create(u32 counter_id, u32 offset, u32 bulk_size);
+ void mlx5_fc_local_destroy(struct mlx5_fc *counter);
++void mlx5_fc_local_get(struct mlx5_fc *counter);
++void mlx5_fc_local_put(struct mlx5_fc *counter);
+ u64 mlx5_fc_query_lastuse(struct mlx5_fc *counter);
+ void mlx5_fc_query_cached(struct mlx5_fc *counter,
+ 			  u64 *bytes, u64 *packets, u64 *lastuse);
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 439bc124ce7098..1347ae13dd0a1c 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1245,6 +1245,27 @@ static inline struct hci_conn *hci_conn_hash_lookup_ba(struct hci_dev *hdev,
+ 	return NULL;
+ }
+ 
++static inline struct hci_conn *hci_conn_hash_lookup_role(struct hci_dev *hdev,
++							 __u8 type, __u8 role,
++							 bdaddr_t *ba)
++{
++	struct hci_conn_hash *h = &hdev->conn_hash;
++	struct hci_conn  *c;
++
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(c, &h->list, list) {
++		if (c->type == type && c->role == role && !bacmp(&c->dst, ba)) {
++			rcu_read_unlock();
++			return c;
++		}
++	}
++
++	rcu_read_unlock();
++
++	return NULL;
++}
++
+ static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev,
+ 						       bdaddr_t *ba,
+ 						       __u8 ba_type)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 829f0792d8d831..17e5cf18da1efb 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -3013,7 +3013,10 @@ EXPORT_SYMBOL_GPL(bpf_event_output);
+ 
+ /* Always built-in helper functions. */
+ const struct bpf_func_proto bpf_tail_call_proto = {
+-	.func		= NULL,
++	/* func is unused for tail_call; we set it to pass the
++	 * get_helper_proto() check.
++	 */
++	.func		= BPF_PTR_POISON,
+ 	.gpl_only	= false,
+ 	.ret_type	= RET_VOID,
+ 	.arg1_type	= ARG_PTR_TO_CTX,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4fd89659750b25..a6338936085ae8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8405,6 +8405,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno,
+ 		verifier_bug(env, "Two map pointers in a timer helper");
+ 		return -EFAULT;
+ 	}
++	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++		verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
++		return -EOPNOTSUPP;
++	}
+ 	meta->map_uid = reg->map_uid;
+ 	meta->map_ptr = map;
+ 	return 0;
+@@ -11206,7 +11210,7 @@ static int get_helper_proto(struct bpf_verifier_env *env, int func_id,
+ 		return -EINVAL;
+ 
+ 	*ptr = env->ops->get_func_proto(func_id, env->prog);
+-	return *ptr ? 0 : -EINVAL;
++	return *ptr && (*ptr)->func ? 0 : -EINVAL;
+ }
+ 
+ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 1ee8eb11f38bae..0cbc174da76ace 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2289,7 +2289,7 @@ __latent_entropy struct task_struct *copy_process(
+ 	if (need_futex_hash_allocate_default(clone_flags)) {
+ 		retval = futex_hash_allocate_default();
+ 		if (retval)
+-			goto bad_fork_core_free;
++			goto bad_fork_cancel_cgroup;
+ 		/*
+ 		 * If we fail beyond this point we don't free the allocated
+ 		 * futex hash map. We assume that another thread will be created
+diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
+index c716a66f86929c..d818b4d47f1bad 100644
+--- a/kernel/futex/requeue.c
++++ b/kernel/futex/requeue.c
+@@ -230,8 +230,9 @@ static inline
+ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ 			   struct futex_hash_bucket *hb)
+ {
+-	q->key = *key;
++	struct task_struct *task;
+ 
++	q->key = *key;
+ 	__futex_unqueue(q);
+ 
+ 	WARN_ON(!q->rt_waiter);
+@@ -243,10 +244,11 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
+ 	futex_hash_get(hb);
+ 	q->drop_hb_ref = true;
+ 	q->lock_ptr = &hb->lock;
++	task = READ_ONCE(q->task);
+ 
+ 	/* Signal locked state to the waiter */
+ 	futex_requeue_pi_complete(q, 1);
+-	wake_up_state(q->task, TASK_NORMAL);
++	wake_up_state(task, TASK_NORMAL);
+ }
+ 
+ /**
+diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
+index 001fb88a8481d8..edd6cdd9aadcac 100644
+--- a/kernel/sched/ext_idle.c
++++ b/kernel/sched/ext_idle.c
+@@ -75,7 +75,7 @@ static int scx_cpu_node_if_enabled(int cpu)
+ 	return cpu_to_node(cpu);
+ }
+ 
+-bool scx_idle_test_and_clear_cpu(int cpu)
++static bool scx_idle_test_and_clear_cpu(int cpu)
+ {
+ 	int node = scx_cpu_node_if_enabled(cpu);
+ 	struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+@@ -198,7 +198,7 @@ pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, int node, u6
+ /*
+  * Find an idle CPU in the system, starting from @node.
+  */
+-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
++static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+ {
+ 	s32 cpu;
+ 
+@@ -794,6 +794,16 @@ static void reset_idle_masks(struct sched_ext_ops *ops)
+ 		cpumask_and(idle_cpumask(node)->smt, cpu_online_mask, node_mask);
+ 	}
+ }
++#else	/* !CONFIG_SMP */
++static bool scx_idle_test_and_clear_cpu(int cpu)
++{
++	return -EBUSY;
++}
++
++static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
++{
++	return -EBUSY;
++}
+ #endif	/* CONFIG_SMP */
+ 
+ void scx_idle_enable(struct sched_ext_ops *ops)
+@@ -860,8 +870,34 @@ static bool check_builtin_idle_enabled(void)
+ 	return false;
+ }
+ 
+-s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+-			  const struct cpumask *allowed, u64 flags)
++/*
++ * Determine whether @p is a migration-disabled task in the context of BPF
++ * code.
++ *
++ * We can't simply check whether @p->migration_disabled is set in a
++ * sched_ext callback, because migration is always disabled for the current
++ * task while running BPF code.
++ *
++ * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively
++ * disable and re-enable migration. For this reason, the current task
++ * inside a sched_ext callback is always a migration-disabled task.
++ *
++ * Therefore, when @p->migration_disabled == 1, check whether @p is the
++ * current task or not: if it is, then migration was not disabled before
++ * entering the callback, otherwise migration was disabled.
++ *
++ * Returns true if @p is migration-disabled, false otherwise.
++ */
++static bool is_bpf_migration_disabled(const struct task_struct *p)
++{
++	if (p->migration_disabled == 1)
++		return p != current;
++	else
++		return p->migration_disabled;
++}
++
++static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
++				 const struct cpumask *allowed, u64 flags)
+ {
+ 	struct rq *rq;
+ 	struct rq_flags rf;
+@@ -903,7 +939,7 @@ s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ 	 * selection optimizations and simply check whether the previously
+ 	 * used CPU is idle and within the allowed cpumask.
+ 	 */
+-	if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
++	if (p->nr_cpus_allowed == 1 || is_bpf_migration_disabled(p)) {
+ 		if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) &&
+ 		    scx_idle_test_and_clear_cpu(prev_cpu))
+ 			cpu = prev_cpu;
+@@ -1125,10 +1161,10 @@ __bpf_kfunc bool scx_bpf_test_and_clear_cpu_idle(s32 cpu)
+ 	if (!check_builtin_idle_enabled())
+ 		return false;
+ 
+-	if (kf_cpu_valid(cpu, NULL))
+-		return scx_idle_test_and_clear_cpu(cpu);
+-	else
++	if (!kf_cpu_valid(cpu, NULL))
+ 		return false;
++
++	return scx_idle_test_and_clear_cpu(cpu);
+ }
+ 
+ /**
+diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
+index 37be78a7502b32..05e389ed72e4c1 100644
+--- a/kernel/sched/ext_idle.h
++++ b/kernel/sched/ext_idle.h
+@@ -15,16 +15,9 @@ struct sched_ext_ops;
+ #ifdef CONFIG_SMP
+ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
+ void scx_idle_init_masks(void);
+-bool scx_idle_test_and_clear_cpu(int cpu);
+-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags);
+ #else /* !CONFIG_SMP */
+ static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
+ static inline void scx_idle_init_masks(void) {}
+-static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
+-static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+-{
+-	return -EBUSY;
+-}
+ #endif /* CONFIG_SMP */
+ 
+ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index db40ec5cc9d731..9fb1a8c107ec35 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -815,6 +815,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 	unsigned long bitmap;
+ 	unsigned long ret;
+ 	int offset;
++	int bit;
+ 	int i;
+ 
+ 	ret_stack = ftrace_pop_return_trace(&trace, &ret, frame_pointer, &offset);
+@@ -829,6 +830,15 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 	if (fregs)
+ 		ftrace_regs_set_instruction_pointer(fregs, ret);
+ 
++	bit = ftrace_test_recursion_trylock(trace.func, ret);
++	/*
++	 * This can fail because ftrace_test_recursion_trylock() allows one
++	 * nested call. If we are already in a nested call, we skip the probe
++	 * and just return the original return address.
++	 */
++	if (unlikely(bit < 0))
++		goto out;
++
+ #ifdef CONFIG_FUNCTION_GRAPH_RETVAL
+ 	trace.retval = ftrace_regs_get_return_value(fregs);
+ #endif
+@@ -852,6 +862,8 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
+ 		}
+ 	}
+ 
++	ftrace_test_recursion_unlock(bit);
++out:
+ 	/*
+ 	 * The ftrace_graph_return() may still access the current
+ 	 * ret_stack structure, we need to make sure the update of
+diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
+index f9b3aa9afb1784..342e84f8a40e24 100644
+--- a/kernel/trace/fprobe.c
++++ b/kernel/trace/fprobe.c
+@@ -428,8 +428,9 @@ static int fprobe_addr_list_add(struct fprobe_addr_list *alist, unsigned long ad
+ {
+ 	unsigned long *addrs;
+ 
+-	if (alist->index >= alist->size)
+-		return -ENOMEM;
++	/* Previously we failed to expand the list. */
++	if (alist->index == alist->size)
++		return -ENOSPC;
+ 
+ 	alist->addrs[alist->index++] = addr;
+ 	if (alist->index < alist->size)
+@@ -489,7 +490,7 @@ static int fprobe_module_callback(struct notifier_block *nb,
+ 	for (i = 0; i < FPROBE_IP_TABLE_SIZE; i++)
+ 		fprobe_remove_node_in_module(mod, &fprobe_ip_table[i], &alist);
+ 
+-	if (alist.index < alist.size && alist.index > 0)
++	if (alist.index > 0)
+ 		ftrace_set_filter_ips(&fprobe_graph_ops.ops,
+ 				      alist.addrs, alist.index, 1, 0);
+ 	mutex_unlock(&fprobe_mutex);
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index 5d64a18cacacc6..d06854bd32b357 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -230,6 +230,10 @@ static int dyn_event_open(struct inode *inode, struct file *file)
+ {
+ 	int ret;
+ 
++	ret = security_locked_down(LOCKDOWN_TRACEFS);
++	if (ret)
++		return ret;
++
+ 	ret = tracing_check_open_get_tr(NULL);
+ 	if (ret)
+ 		return ret;
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 337bc0eb5d71bf..dc734867f0fc44 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2325,12 +2325,13 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
+ 	if (count < 1)
+ 		return 0;
+ 
+-	buf = kmalloc(count, GFP_KERNEL);
++	buf = kmalloc(count + 1, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	if (copy_from_user(buf, ubuf, count))
+ 		return -EFAULT;
++	buf[count] = '\0';
+ 
+ 	if (!zalloc_cpumask_var(&osnoise_cpumask_new, GFP_KERNEL))
+ 		return -ENOMEM;
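The osnoise fix is the usual copy_from_user() string pitfall: the user buffer carries no terminating NUL, so parsing it as a cpulist string reads past the allocation. Allocating one extra byte and terminating it explicitly closes the hole; the userspace equivalent:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	const char ubuf[] = { '0', '-', '3' };  /* user wrote "0-3", no '\0' */
	size_t count = sizeof(ubuf);
	char *buf = malloc(count + 1);          /* was: malloc(count) */

	if (!buf)
		return 1;
	memcpy(buf, ubuf, count);               /* copy_from_user() analogue */
	buf[count] = '\0';                      /* the added terminator */
	printf("cpulist to parse: \"%s\"\n", buf);
	free(buf);
	return 0;
}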
+diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
+index 2f844c279a3e01..7f24ccc896c649 100644
+--- a/kernel/vhost_task.c
++++ b/kernel/vhost_task.c
+@@ -100,6 +100,7 @@ void vhost_task_stop(struct vhost_task *vtsk)
+ 	 * freeing it below.
+ 	 */
+ 	wait_for_completion(&vtsk->exited);
++	put_task_struct(vtsk->task);
+ 	kfree(vtsk);
+ }
+ EXPORT_SYMBOL_GPL(vhost_task_stop);
+@@ -148,7 +149,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *),
+ 		return ERR_PTR(PTR_ERR(tsk));
+ 	}
+ 
+-	vtsk->task = tsk;
++	vtsk->task = get_task_struct(tsk);
+ 	return vtsk;
+ }
+ EXPORT_SYMBOL_GPL(vhost_task_create);
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 57d4ec256682ce..04f121a756d478 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1576,12 +1576,14 @@ static int damon_sysfs_damon_call(int (*fn)(void *data),
+ 		struct damon_sysfs_kdamond *kdamond)
+ {
+ 	struct damon_call_control call_control = {};
++	int err;
+ 
+ 	if (!kdamond->damon_ctx)
+ 		return -EINVAL;
+ 	call_control.fn = fn;
+ 	call_control.data = kdamond;
+-	return damon_call(kdamond->damon_ctx, &call_control);
++	err = damon_call(kdamond->damon_ctx, &call_control);
++	return err ? err : call_control.return_code;
+ }
+ 
+ struct damon_sysfs_schemes_walk_data {
+diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
+index 1ea711786c522d..8bca7fece47f0e 100644
+--- a/mm/kmsan/core.c
++++ b/mm/kmsan/core.c
+@@ -195,7 +195,8 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 				      u32 origin, bool checked)
+ {
+ 	u64 address = (u64)addr;
+-	u32 *shadow_start, *origin_start;
++	void *shadow_start;
++	u32 *aligned_shadow, *origin_start;
+ 	size_t pad = 0;
+ 
+ 	KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size));
+@@ -214,9 +215,12 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 	}
+ 	__memset(shadow_start, b, size);
+ 
+-	if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) {
++	if (IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) {
++		aligned_shadow = shadow_start;
++	} else {
+ 		pad = address % KMSAN_ORIGIN_SIZE;
+ 		address -= pad;
++		aligned_shadow = shadow_start - pad;
+ 		size += pad;
+ 	}
+ 	size = ALIGN(size, KMSAN_ORIGIN_SIZE);
+@@ -230,7 +234,7 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ 	 * corresponding shadow slot is zero.
+ 	 */
+ 	for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) {
+-		if (origin || !shadow_start[i])
++		if (origin || !aligned_shadow[i])
+ 			origin_start[i] = origin;
+ 	}
+ }
+diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
+index c6c5b2bbede0cc..902ec48b1e3e6a 100644
+--- a/mm/kmsan/kmsan_test.c
++++ b/mm/kmsan/kmsan_test.c
+@@ -556,6 +556,21 @@ DEFINE_TEST_MEMSETXX(16)
+ DEFINE_TEST_MEMSETXX(32)
+ DEFINE_TEST_MEMSETXX(64)
+ 
++/* Test case: ensure that KMSAN does not access shadow memory out of bounds. */
++static void test_memset_on_guarded_buffer(struct kunit *test)
++{
++	void *buf = vmalloc(PAGE_SIZE);
++
++	kunit_info(test,
++		   "memset() on ends of guarded buffer should not crash\n");
++
++	for (size_t size = 0; size <= 128; size++) {
++		memset(buf, 0xff, size);
++		memset(buf + PAGE_SIZE - size, 0xff, size);
++	}
++	vfree(buf);
++}
++
+ static noinline void fibonacci(int *array, int size, int start)
+ {
+ 	if (start < 2 || (start == size))
+@@ -677,6 +692,7 @@ static struct kunit_case kmsan_test_cases[] = {
+ 	KUNIT_CASE(test_memset16),
+ 	KUNIT_CASE(test_memset32),
+ 	KUNIT_CASE(test_memset64),
++	KUNIT_CASE(test_memset_on_guarded_buffer),
+ 	KUNIT_CASE(test_long_origin_chain),
+ 	KUNIT_CASE(test_stackdepot_roundtrip),
+ 	KUNIT_CASE(test_unpoison_memory),
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 090c7ffa515252..2ef5b3004197b8 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3087,8 +3087,18 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
++	/* Check for existing connection:
++	 *
++	 * 1. If it doesn't exist, then it must be the receiver/slave role.
++	 * 2. If it does exist, confirm that it is connecting/BT_CONNECT in the
++	 *    initiator/master case, since there could be a collision where
++	 *    either side is attempting to connect, or something like fuzz
++	 *    testing is trying to destroy the hcon object before it even
++	 *    attempts to connect (e.g. hcon->state == BT_OPEN).
++	 */
+ 	conn = hci_conn_hash_lookup_ba(hdev, ev->link_type, &ev->bdaddr);
+-	if (!conn) {
++	if (!conn ||
++	    (conn->role == HCI_ROLE_MASTER && conn->state != BT_CONNECT)) {
+ 		/* In case of error status and there is no connection pending
+ 		 * just unlock as there is nothing to cleanup.
+ 		 */
+@@ -4391,6 +4401,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ 	bt_dev_dbg(hdev, "num %d", ev->num);
+ 
++	hci_dev_lock(hdev);
++
+ 	for (i = 0; i < ev->num; i++) {
+ 		struct hci_comp_pkts_info *info = &ev->handles[i];
+ 		struct hci_conn *conn;
+@@ -4472,6 +4484,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 	}
+ 
+ 	queue_work(hdev->workqueue, &hdev->tx_work);
++
++	hci_dev_unlock(hdev);
+ }
+ 
+ static void hci_mode_change_evt(struct hci_dev *hdev, void *data,
+@@ -5634,8 +5648,18 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 	 */
+ 	hci_dev_clear_flag(hdev, HCI_LE_ADV);
+ 
+-	conn = hci_conn_hash_lookup_ba(hdev, LE_LINK, bdaddr);
+-	if (!conn) {
++	/* Check for existing connection:
++	 *
++	 * 1. If it doesn't exist, then use the role to create a new object.
++	 * 2. If it does exist, confirm that it is connecting/BT_CONNECT in the
++	 *    initiator/master case, since there could be a collision where
++	 *    either side is attempting to connect, or something like fuzz
++	 *    testing is trying to destroy the hcon object before it even
++	 *    attempts to connect (e.g. hcon->state == BT_OPEN).
++	 */
++	conn = hci_conn_hash_lookup_role(hdev, LE_LINK, role, bdaddr);
++	if (!conn ||
++	    (conn->role == HCI_ROLE_MASTER && conn->state != BT_CONNECT)) {
+ 		/* In case of error status and there is no connection pending
+ 		 * just unlock as there is nothing to cleanup.
+ 		 */
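
Two hardening changes in hci_event.c: the connection-complete handlers now refuse to match an event to an existing master-role hcon that is not in BT_CONNECT (guarding against connect collisions and fuzzer-crafted event orderings), and hci_num_comp_pkts_evt() now runs under hci_dev_lock(). A sketch of the acceptance predicate, with illustrative enums in place of the kernel's types:

#include <stdbool.h>
#include <stdio.h>

enum role  { ROLE_MASTER, ROLE_SLAVE };
enum state { ST_OPEN, ST_CONNECT, ST_CONNECTED };

struct hcon {
	enum role role;
	enum state state;
};

/* Mirrors the new guard: an existing master-role hcon is only matched to
 * the completion event if it is actually mid-connect.
 */
static bool take_over_existing(const struct hcon *conn)
{
	return conn &&
	       !(conn->role == ROLE_MASTER && conn->state != ST_CONNECT);
}

int main(void)
{
	struct hcon fuzzed = { ROLE_MASTER, ST_OPEN };		/* rejected */
	struct hcon normal = { ROLE_MASTER, ST_CONNECT };	/* accepted */

	printf("fuzzed=%d normal=%d\n",
	       take_over_existing(&fuzzed), take_over_existing(&normal));
	return 0;
}
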
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index a25439f1eeac28..7ca544d7791f43 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2594,6 +2594,13 @@ static int hci_resume_advertising_sync(struct hci_dev *hdev)
+ 			hci_remove_ext_adv_instance_sync(hdev, adv->instance,
+ 							 NULL);
+ 		}
++
++	/* If the current advertising instance is set to instance 0x00,
++	 * then we need to re-enable it.
++	 */
++		if (!hdev->cur_adv_instance)
++			err = hci_enable_ext_advertising_sync(hdev,
++							      hdev->cur_adv_instance);
+ 	} else {
+ 		/* Schedule for most recent instance to be restarted and begin
+ 		 * the software rotation loop
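
The hci_sync.c hunk re-enables advertising instance 0x00 on resume; reading the surrounding code, the restart path only walks hdev->adv_instances, and the default instance 0 is not an element of that list, so it needs explicit handling. A toy sketch of that gap, purely illustrative:

#include <stdio.h>

int main(void)
{
	int adv_instances[] = { 1, 2, 3 };	/* the list never holds 0 */
	int cur_adv_instance = 0;		/* default instance */
	int re_enabled = 0;

	for (size_t i = 0; i < sizeof(adv_instances) / sizeof(*adv_instances); i++)
		if (adv_instances[i] == cur_adv_instance)
			re_enabled = 1;	/* covered by the rotation loop */

	/* mirrors: if (!hdev->cur_adv_instance)
	 *              hci_enable_ext_advertising_sync(hdev, 0x00);
	 */
	if (!cur_adv_instance && !re_enabled)
		re_enabled = 1;

	printf("instance 0 re-enabled: %d\n", re_enabled);
	return 0;
}
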
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 50634ef5c8b707..225140fcb3d6c8 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1323,8 +1323,7 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ 	struct mgmt_mode *cp;
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	cp = cmd->param;
+@@ -1351,23 +1350,29 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ 				mgmt_status(err));
+ 	}
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_powered_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp;
++	struct mgmt_mode cp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
+ 		return -ECANCELED;
++	}
+ 
+-	cp = cmd->param;
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	return hci_set_powered_sync(hdev, cp->val);
++	return hci_set_powered_sync(hdev, cp.val);
+ }
+ 
+ static int set_powered(struct sock *sk, struct hci_dev *hdev, void *data,
+@@ -1516,8 +1521,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -1539,12 +1543,15 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 	new_settings(hdev, cmd->sk);
+ 
+ done:
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 	hci_dev_unlock(hdev);
+ }
+ 
+ static int set_discoverable_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	return hci_update_discoverable_sync(hdev);
+@@ -1691,8 +1698,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -1707,7 +1713,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 	new_settings(hdev, cmd->sk);
+ 
+ done:
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_dev_unlock(hdev);
+ }
+@@ -1743,6 +1749,9 @@ static int set_connectable_update_settings(struct hci_dev *hdev,
+ 
+ static int set_connectable_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	BT_DBG("%s", hdev->name);
+ 
+ 	return hci_update_connectable_sync(hdev);
+@@ -1919,14 +1928,17 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 enable = cp->val;
++	struct mgmt_mode *cp;
++	u8 enable;
+ 	bool changed;
+ 
+ 	/* Make sure cmd still outstanding. */
+-	if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
++	cp = cmd->param;
++	enable = cp->val;
++
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+ 
+@@ -1935,8 +1947,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 			new_settings(hdev, NULL);
+ 		}
+ 
+-		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
+-				     cmd_status_rsp, &mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		return;
+ 	}
+ 
+@@ -1946,7 +1957,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);
++	settings_rsp(cmd, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -1960,14 +1971,25 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_ssp_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
++	struct mgmt_mode cp;
+ 	bool changed = false;
+ 	int err;
+ 
+-	if (cp->val)
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	if (cp.val)
+ 		changed = !hci_dev_test_and_set_flag(hdev, HCI_SSP_ENABLED);
+ 
+-	err = hci_write_ssp_mode_sync(hdev, cp->val);
++	err = hci_write_ssp_mode_sync(hdev, cp.val);
+ 
+ 	if (!err && changed)
+ 		hci_dev_clear_flag(hdev, HCI_SSP_ENABLED);
+@@ -2060,32 +2082,50 @@ static int set_hs(struct sock *sk, struct hci_dev *hdev, void *data, u16 len)
+ 
+ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct mgmt_pending_cmd *cmd = data;
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 status = mgmt_status(err);
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
+-				     &status);
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, data))
+ 		return;
++
++	if (status) {
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, status);
++		goto done;
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);
++	settings_rsp(cmd, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+ 	if (match.sk)
+ 		sock_put(match.sk);
++
++done:
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_le_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 val = !!cp->val;
++	struct mgmt_mode cp;
++	u8 val;
+ 	int err;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++	val = !!cp.val;
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
+ 	if (!val) {
+ 		hci_clear_adv_instance_sync(hdev, NULL, 0x00, true);
+ 
+@@ -2127,7 +2167,12 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	u8 status = mgmt_status(err);
+-	struct sock *sk = cmd->sk;
++	struct sock *sk;
++
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
++		return;
++
++	sk = cmd->sk;
+ 
+ 	if (status) {
+ 		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
+@@ -2142,24 +2187,37 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_mesh_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_mesh *cp = cmd->param;
+-	size_t len = cmd->param_len;
++	struct mgmt_cp_set_mesh cp;
++	size_t len;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	len = cmd->param_len;
+ 
+ 	memset(hdev->mesh_ad_types, 0, sizeof(hdev->mesh_ad_types));
+ 
+-	if (cp->enable)
++	if (cp.enable)
+ 		hci_dev_set_flag(hdev, HCI_MESH);
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_MESH);
+ 
+-	hdev->le_scan_interval = __le16_to_cpu(cp->period);
+-	hdev->le_scan_window = __le16_to_cpu(cp->window);
++	hdev->le_scan_interval = __le16_to_cpu(cp.period);
++	hdev->le_scan_window = __le16_to_cpu(cp.window);
+ 
+-	len -= sizeof(*cp);
++	len -= sizeof(cp);
+ 
+ 	/* If filters don't fit, forward all adv pkts */
+ 	if (len <= sizeof(hdev->mesh_ad_types))
+-		memcpy(hdev->mesh_ad_types, cp->ad_types, len);
++		memcpy(hdev->mesh_ad_types, cp.ad_types, len);
+ 
+ 	hci_update_passive_scan_sync(hdev);
+ 	return 0;
+@@ -3867,15 +3925,16 @@ static int name_changed_sync(struct hci_dev *hdev, void *data)
+ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_local_name *cp = cmd->param;
++	struct mgmt_cp_set_local_name *cp;
+ 	u8 status = mgmt_status(err);
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
++	cp = cmd->param;
++
+ 	if (status) {
+ 		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_LOCAL_NAME,
+ 				status);
+@@ -3887,16 +3946,27 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+ 			hci_cmd_sync_queue(hdev, name_changed_sync, NULL, NULL);
+ 	}
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_name_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_cp_set_local_name *cp = cmd->param;
++	struct mgmt_cp_set_local_name cp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	if (lmp_bredr_capable(hdev)) {
+-		hci_update_name_sync(hdev, cp->name);
++		hci_update_name_sync(hdev, cp.name);
+ 		hci_update_eir_sync(hdev);
+ 	}
+ 
+@@ -4048,12 +4118,10 @@ int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip)
+ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct sk_buff *skb = cmd->skb;
++	struct sk_buff *skb;
+ 	u8 status = mgmt_status(err);
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+-		return;
++	skb = cmd->skb;
+ 
+ 	if (!status) {
+ 		if (!skb)
+@@ -4080,7 +4148,7 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ 	if (skb && !IS_ERR(skb))
+ 		kfree_skb(skb);
+ 
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int set_default_phy_sync(struct hci_dev *hdev, void *data)
+@@ -4088,7 +4156,9 @@ static int set_default_phy_sync(struct hci_dev *hdev, void *data)
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	struct mgmt_cp_set_phy_configuration *cp = cmd->param;
+ 	struct hci_cp_le_set_default_phy cp_phy;
+-	u32 selected_phys = __le32_to_cpu(cp->selected_phys);
++	u32 selected_phys;
++
++	selected_phys = __le32_to_cpu(cp->selected_phys);
+ 
+ 	memset(&cp_phy, 0, sizeof(cp_phy));
+ 
+@@ -4228,7 +4298,7 @@ static int set_phy_configuration(struct sock *sk, struct hci_dev *hdev,
+ 		goto unlock;
+ 	}
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_SET_PHY_CONFIGURATION, hdev, data,
++	cmd = mgmt_pending_new(sk, MGMT_OP_SET_PHY_CONFIGURATION, hdev, data,
+ 			       len);
+ 	if (!cmd)
+ 		err = -ENOMEM;
+@@ -5189,7 +5259,17 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ {
+ 	struct mgmt_rp_add_adv_patterns_monitor rp;
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct adv_monitor *monitor = cmd->user_data;
++	struct adv_monitor *monitor;
++
++	/* This is likely the result of hdev being closed: mgmt_index_removed
++	 * is attempting to clean up any pending command, and
++	 * hci_adv_monitors_clear is about to be called, which will take care
++	 * of freeing the adv_monitor instances.
++	 */
++	if (status == -ECANCELED && !mgmt_pending_valid(hdev, cmd))
++		return;
++
++	monitor = cmd->user_data;
+ 
+ 	hci_dev_lock(hdev);
+ 
+@@ -5215,9 +5295,20 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_add_adv_patterns_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct adv_monitor *monitor = cmd->user_data;
++	struct adv_monitor *mon;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	mon = cmd->user_data;
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+-	return hci_add_adv_monitor(hdev, monitor);
++	return hci_add_adv_monitor(hdev, mon);
+ }
+ 
+ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,
+@@ -5484,7 +5575,8 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 			       status);
+ }
+ 
+-static void read_local_oob_data_complete(struct hci_dev *hdev, void *data, int err)
++static void read_local_oob_data_complete(struct hci_dev *hdev, void *data,
++					 int err)
+ {
+ 	struct mgmt_rp_read_local_oob_data mgmt_rp;
+ 	size_t rp_size = sizeof(mgmt_rp);
+@@ -5504,7 +5596,8 @@ static void read_local_oob_data_complete(struct hci_dev *hdev, void *data, int e
+ 	bt_dev_dbg(hdev, "status %d", status);
+ 
+ 	if (status) {
+-		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA, status);
++		mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA,
++				status);
+ 		goto remove;
+ 	}
+ 
+@@ -5786,17 +5879,12 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (err == -ECANCELED)
+-		return;
+-
+-	if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
+-	    cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
+-	    cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_discovery_set_state(hdev, err ? DISCOVERY_STOPPED:
+ 				DISCOVERY_FINDING);
+@@ -5804,6 +5892,9 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int start_discovery_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	return hci_start_discovery_sync(hdev);
+ }
+ 
+@@ -6009,15 +6100,14 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, cmd))
+ 		return;
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	if (!err)
+ 		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
+@@ -6025,6 +6115,9 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ static int stop_discovery_sync(struct hci_dev *hdev, void *data)
+ {
++	if (!mgmt_pending_listed(hdev, data))
++		return -ECANCELED;
++
+ 	return hci_stop_discovery_sync(hdev);
+ }
+ 
+@@ -6234,14 +6327,18 @@ static void enable_advertising_instance(struct hci_dev *hdev, int err)
+ 
+ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct mgmt_pending_cmd *cmd = data;
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 instance;
+ 	struct adv_info *adv_instance;
+ 	u8 status = mgmt_status(err);
+ 
++	if (err == -ECANCELED || !mgmt_pending_valid(hdev, data))
++		return;
++
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
+-				     cmd_status_rsp, &status);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, status);
++		mgmt_pending_free(cmd);
+ 		return;
+ 	}
+ 
+@@ -6250,8 +6347,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
+-			     &match);
++	settings_rsp(cmd, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -6283,10 +6379,23 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_adv_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-	struct mgmt_mode *cp = cmd->param;
+-	u8 val = !!cp->val;
++	struct mgmt_mode cp;
++	u8 val;
+ 
+-	if (cp->val == 0x02)
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	if (!__mgmt_pending_listed(hdev, cmd)) {
++		mutex_unlock(&hdev->mgmt_pending_lock);
++		return -ECANCELED;
++	}
++
++	memcpy(&cp, cmd->param, sizeof(cp));
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	val = !!cp.val;
++
++	if (cp.val == 0x02)
+ 		hci_dev_set_flag(hdev, HCI_ADVERTISING_CONNECTABLE);
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING_CONNECTABLE);
+@@ -8039,10 +8148,6 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ 	u8 status = mgmt_status(err);
+ 	u16 eir_len;
+ 
+-	if (err == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+-		return;
+-
+ 	if (!status) {
+ 		if (!skb)
+ 			status = MGMT_STATUS_FAILED;
+@@ -8149,7 +8254,7 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ 		kfree_skb(skb);
+ 
+ 	kfree(mgmt_rp);
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ }
+ 
+ static int read_local_ssp_oob_req(struct hci_dev *hdev, struct sock *sk,
+@@ -8158,7 +8263,7 @@ static int read_local_ssp_oob_req(struct hci_dev *hdev, struct sock *sk,
+ 	struct mgmt_pending_cmd *cmd;
+ 	int err;
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev,
++	cmd = mgmt_pending_new(sk, MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev,
+ 			       cp, sizeof(*cp));
+ 	if (!cmd)
+ 		return -ENOMEM;
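
The mgmt.c hunks repeatedly apply one pattern: a *_sync() worker re-checks that its mgmt_pending_cmd is still on hdev->mgmt_pending while holding mgmt_pending_lock, snapshots cmd->param into a stack copy, and only then drops the lock, so the slow HCI work never dereferences a param buffer that a concurrent cancel path may have freed. A userspace sketch of that copy-under-lock idea; cmd_sync(), struct cmd and the listed flag are illustrative, not the kernel API:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;

struct cmd {
	bool listed;		/* still on the pending list? */
	unsigned char param[4];
};

static int cmd_sync(struct cmd *cmd)
{
	unsigned char param[4];

	pthread_mutex_lock(&pending_lock);
	if (!cmd->listed) {	/* like __mgmt_pending_listed() */
		pthread_mutex_unlock(&pending_lock);
		return -125;	/* -ECANCELED */
	}
	memcpy(param, cmd->param, sizeof(param));	/* snapshot */
	pthread_mutex_unlock(&pending_lock);

	/* Slow work uses only the snapshot; the original cmd->param may
	 * be freed by a concurrent cancel path from here on.
	 */
	printf("running with val=%u\n", param[0]);
	return 0;
}

int main(void)
{
	struct cmd c = { .listed = true, .param = { 1, 0, 0, 0 } };

	return cmd_sync(&c) ? 1 : 0;
}
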
+diff --git a/net/bluetooth/mgmt_util.c b/net/bluetooth/mgmt_util.c
+index a88a07da394734..aa7b5585cb268b 100644
+--- a/net/bluetooth/mgmt_util.c
++++ b/net/bluetooth/mgmt_util.c
+@@ -320,6 +320,52 @@ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
+ 	mgmt_pending_free(cmd);
+ }
+ 
++bool __mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	struct mgmt_pending_cmd *tmp;
++
++	lockdep_assert_held(&hdev->mgmt_pending_lock);
++
++	if (!cmd)
++		return false;
++
++	list_for_each_entry(tmp, &hdev->mgmt_pending, list) {
++		if (cmd == tmp)
++			return true;
++	}
++
++	return false;
++}
++
++bool mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	bool listed;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++	listed = __mgmt_pending_listed(hdev, cmd);
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	return listed;
++}
++
++bool mgmt_pending_valid(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd)
++{
++	bool listed;
++
++	if (!cmd)
++		return false;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
++
++	listed = __mgmt_pending_listed(hdev, cmd);
++	if (listed)
++		list_del(&cmd->list);
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
++	return listed;
++}
++
+ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ 		       void (*cb)(struct mgmt_mesh_tx *mesh_tx, void *data),
+ 		       void *data, struct sock *sk)
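
mgmt_pending_valid() both checks and unlinks under the lock, which makes it a claim operation: exactly one caller can win, take the command off the list, and become responsible for freeing it with mgmt_pending_free(). A sketch of that claim-by-unlink idea, with a boolean standing in for list membership:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

struct node {
	bool listed;	/* stands in for membership of hdev->mgmt_pending */
};

static bool claim(struct node *n)
{
	bool won;

	pthread_mutex_lock(&lock);
	won = n->listed;
	if (won)
		n->listed = false;	/* unlink: no one else can claim it */
	pthread_mutex_unlock(&lock);
	return won;
}

int main(void)
{
	struct node n = { .listed = true };

	printf("first claim:  %d\n", claim(&n));	/* 1: caller owns it */
	printf("second claim: %d\n", claim(&n));	/* 0: already claimed */
	return 0;
}
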
+diff --git a/net/bluetooth/mgmt_util.h b/net/bluetooth/mgmt_util.h
+index 024e51dd693756..bcba8c9d895285 100644
+--- a/net/bluetooth/mgmt_util.h
++++ b/net/bluetooth/mgmt_util.h
+@@ -65,6 +65,9 @@ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+ 					  void *data, u16 len);
+ void mgmt_pending_free(struct mgmt_pending_cmd *cmd);
+ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd);
++bool __mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
++bool mgmt_pending_listed(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
++bool mgmt_pending_valid(struct hci_dev *hdev, struct mgmt_pending_cmd *cmd);
+ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ 		       void (*cb)(struct mgmt_mesh_tx *mesh_tx, void *data),
+ 		       void *data, struct sock *sk);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index d6420b74ea9c6a..cb77bb84371bd3 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6667,7 +6667,7 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
+ 		return NULL;
+ 
+ 	while (data_len) {
+-		if (nr_frags == MAX_SKB_FRAGS - 1)
++		if (nr_frags == MAX_SKB_FRAGS)
+ 			goto failure;
+ 		while (order && PAGE_ALIGN(data_len) < (PAGE_SIZE << order))
+ 			order--;
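
The skbuff hunk is an off-by-one fix: nr_frags is tested before a new fragment is attached, so bailing at MAX_SKB_FRAGS - 1 wasted the last slot; the correct guard is equality with MAX_SKB_FRAGS. A sketch that counts how many fragments each guard admits (MAX_SKB_FRAGS is commonly 17, though it is configurable upstream):

#include <stdio.h>

#define MAX_SKB_FRAGS 17	/* typical value; configurable upstream */

static int frags_fillable(int bail_at)
{
	int nr_frags = 0;

	for (;;) {
		if (nr_frags == bail_at)	/* the guard from the loop */
			break;
		nr_frags++;			/* one more frag attached */
	}
	return nr_frags;
}

int main(void)
{
	printf("old guard fills %d frags, new guard fills %d (array holds %d)\n",
	       frags_fillable(MAX_SKB_FRAGS - 1),
	       frags_fillable(MAX_SKB_FRAGS),
	       MAX_SKB_FRAGS);
	return 0;
}
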
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index 4397e89d3123a0..423f876d14c6a8 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -2400,6 +2400,13 @@ static int replace_nexthop_single(struct net *net, struct nexthop *old,
+ 		return -EINVAL;
+ 	}
+ 
++	if (!list_empty(&old->grp_list) &&
++	    rtnl_dereference(new->nh_info)->fdb_nh !=
++	    rtnl_dereference(old->nh_info)->fdb_nh) {
++		NL_SET_ERR_MSG(extack, "Cannot change nexthop FDB status while in a group");
++		return -EINVAL;
++	}
++
+ 	err = call_nexthop_notifiers(net, NEXTHOP_EVENT_REPLACE, new, extack);
+ 	if (err)
+ 		return err;
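
The nexthop hunk adds an invariant check: a nexthop that is a member of at least one group may only be replaced by one with the same FDB status, presumably because a group must stay uniformly FDB or non-FDB. A sketch of the predicate, with a simplified struct in place of the rtnl-protected nh_info:

#include <stdbool.h>
#include <stdio.h>

struct nh {
	bool fdb_nh;	/* stands in for rtnl_dereference(nh->nh_info)->fdb_nh */
	bool in_group;	/* stands in for !list_empty(&nh->grp_list) */
};

static bool replace_allowed(const struct nh *old, const struct nh *new)
{
	/* Changing FDB status while grouped would break the group's
	 * all-FDB-or-none invariant.
	 */
	if (old->in_group && new->fdb_nh != old->fdb_nh)
		return false;
	return true;
}

int main(void)
{
	struct nh grouped = { .fdb_nh = true,  .in_group = true };
	struct nh plain   = { .fdb_nh = false, .in_group = false };

	printf("flip FDB while grouped: %d\n", replace_allowed(&grouped, &plain));
	printf("flip FDB ungrouped:     %d\n", replace_allowed(&plain, &grouped));
	return 0;
}
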
+diff --git a/net/smc/smc_loopback.c b/net/smc/smc_loopback.c
+index 3c5f64ca41153f..85f0b7853b1737 100644
+--- a/net/smc/smc_loopback.c
++++ b/net/smc/smc_loopback.c
+@@ -56,6 +56,7 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
+ {
+ 	struct smc_lo_dmb_node *dmb_node, *tmp_node;
+ 	struct smc_lo_dev *ldev = smcd->priv;
++	struct folio *folio;
+ 	int sba_idx, rc;
+ 
+ 	/* check space for new dmb */
+@@ -74,13 +75,16 @@ static int smc_lo_register_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb,
+ 
+ 	dmb_node->sba_idx = sba_idx;
+ 	dmb_node->len = dmb->dmb_len;
+-	dmb_node->cpu_addr = kzalloc(dmb_node->len, GFP_KERNEL |
+-				     __GFP_NOWARN | __GFP_NORETRY |
+-				     __GFP_NOMEMALLOC);
+-	if (!dmb_node->cpu_addr) {
++
++	/* not critical; fail under memory pressure and fall back to TCP */
++	folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC |
++			    __GFP_NORETRY | __GFP_ZERO,
++			    get_order(dmb_node->len));
++	if (!folio) {
+ 		rc = -ENOMEM;
+ 		goto err_node;
+ 	}
++	dmb_node->cpu_addr = folio_address(folio);
+ 	dmb_node->dma_addr = SMC_DMA_ADDR_INVALID;
+ 	refcount_set(&dmb_node->refcnt, 1);
+ 
+@@ -122,7 +126,7 @@ static void __smc_lo_unregister_dmb(struct smc_lo_dev *ldev,
+ 	write_unlock_bh(&ldev->dmb_ht_lock);
+ 
+ 	clear_bit(dmb_node->sba_idx, ldev->sba_idx_mask);
+-	kvfree(dmb_node->cpu_addr);
++	folio_put(virt_to_folio(dmb_node->cpu_addr));
+ 	kfree(dmb_node);
+ 
+ 	if (atomic_dec_and_test(&ldev->dmb_cnt))
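
The smc_loopback hunks switch the DMB backing store from kzalloc() to folio_alloc() at get_order(dmb_len), with a matching folio_put() on the free side (the old code freed with kvfree()), keeping the allocation best-effort under pressure so SMC can fall back to TCP. Note that get_order() rounds up to a power-of-two number of pages, which can over-allocate for odd sizes. A userspace sketch of that rounding, assuming a 4 KiB page:

#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed 4 KiB page */

static int get_order_sketch(unsigned long len)
{
	int order = 0;

	while ((PAGE_SIZE << order) < len)
		order++;
	return order;
}

int main(void)
{
	unsigned long lens[] = { 4096, 8192, 65536, 100000 };

	for (size_t i = 0; i < sizeof(lens) / sizeof(*lens); i++) {
		int order = get_order_sketch(lens[i]);

		printf("len=%lu -> order %d (%lu bytes allocated)\n",
		       lens[i], order, PAGE_SIZE << order);
	}
	return 0;
}
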
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index c7a1f080d2de3a..44b9de6e4e7788 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -438,7 +438,7 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
+ 
+ 	check_tunnel_size = x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+ 			    x->props.mode == XFRM_MODE_TUNNEL;
+-	switch (x->props.family) {
++	switch (x->inner_mode.family) {
+ 	case AF_INET:
+ 		/* Check for IPv4 options */
+ 		if (ip_hdr(skb)->ihl != 5)
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 86337453709bad..4b2eb260f5c2e9 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2578,6 +2578,8 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 
+ 	for (h = 0; h < range; h++) {
+ 		u32 spi = (low == high) ? low : get_random_u32_inclusive(low, high);
++		if (spi == 0)
++			goto next;
+ 		newspi = htonl(spi);
+ 
+ 		spin_lock_bh(&net->xfrm.xfrm_state_lock);
+@@ -2593,6 +2595,7 @@ int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high,
+ 		xfrm_state_put(x0);
+ 		spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ 
++next:
+ 		if (signal_pending(current)) {
+ 			err = -ERESTARTSYS;
+ 			goto unlock;
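
The xfrm_state hunk makes the SPI allocator skip a randomly drawn value of zero, since SPI 0 conventionally means "no SPI" and must not be installed; the new "next" label simply retries the loop. A userspace sketch of the retry shape, using rand() purely for illustration and assuming high - low does not overflow:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t pick_spi(uint32_t low, uint32_t high)
{
	for (int tries = 0; tries < 100; tries++) {
		uint32_t spi = (low == high)
			? low
			: low + (uint32_t)rand() % (high - low + 1);

		if (spi == 0)
			continue;	/* mirrors the new "goto next" */
		return spi;
	}
	return 0;	/* caller treats 0 as allocation failure */
}

int main(void)
{
	printf("picked spi %u\n", pick_spi(0, 10));
	return 0;
}
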
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4819bd332f0390..fa28e3e85861c2 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7298,6 +7298,11 @@ static void cs35l41_fixup_spi_two(struct hda_codec *codec, const struct hda_fixu
+ 	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 2);
+ }
+ 
++static void cs35l41_fixup_spi_one(struct hda_codec *codec, const struct hda_fixup *fix, int action)
++{
++	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 1);
++}
++
+ static void cs35l41_fixup_spi_four(struct hda_codec *codec, const struct hda_fixup *fix, int action)
+ {
+ 	comp_generic_fixup(codec, action, "spi", "CSC3551", "-%s:00-cs35l41-hda.%d", 4);
+@@ -7991,6 +7996,7 @@ enum {
+ 	ALC287_FIXUP_CS35L41_I2C_2,
+ 	ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED,
+ 	ALC287_FIXUP_CS35L41_I2C_4,
++	ALC245_FIXUP_CS35L41_SPI_1,
+ 	ALC245_FIXUP_CS35L41_SPI_2,
+ 	ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED,
+ 	ALC245_FIXUP_CS35L41_SPI_4,
+@@ -10120,6 +10126,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+ 	},
++	[ALC245_FIXUP_CS35L41_SPI_1] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = cs35l41_fixup_spi_one,
++	},
+ 	[ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = cs35l41_fixup_spi_two,
+@@ -11099,6 +11109,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
++	SND_PCI_QUIRK(0x1043, 0x88f4, "ASUS NUC14LNS", ALC245_FIXUP_CS35L41_SPI_1),
+ 	SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
+ 	SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index a0b3679b17b423..1211a2b8a2a2c7 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -826,6 +826,16 @@ static const struct platform_device_id board_ids[] = {
+ 					SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK |
+ 					SOF_ES8336_JD_INVERTED),
+ 	},
++	{
++		.name = "ptl_es83x6_c1_h02",
++		.driver_data = (kernel_ulong_t)(SOF_ES8336_SSP_CODEC(1) |
++					SOF_NO_OF_HDMI_CAPTURE_SSP(2) |
++					SOF_HDMI_CAPTURE_1_SSP(0) |
++					SOF_HDMI_CAPTURE_2_SSP(2) |
++					SOF_SSP_HDMI_CAPTURE_PRESENT |
++					SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK |
++					SOF_ES8336_JD_INVERTED),
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(platform, board_ids);
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index f5925bd0a3fc67..4994aaccc583ae 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -892,6 +892,13 @@ static const struct platform_device_id board_ids[] = {
+ 					SOF_SSP_PORT_BT_OFFLOAD(2) |
+ 					SOF_BT_OFFLOAD_PRESENT),
+ 	},
++	{
++		.name = "ptl_rt5682_c1_h02",
++		.driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
++					SOF_SSP_PORT_CODEC(1) |
++					/* SSP 0 and SSP 2 are used for HDMI IN */
++					SOF_SSP_MASK_HDMI_CAPTURE(0x5)),
++	},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(platform, board_ids);
+diff --git a/sound/soc/intel/common/soc-acpi-intel-ptl-match.c b/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
+index eae75f3f0fa40d..d90d8672ab77d1 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-ptl-match.c
+@@ -21,7 +21,24 @@ static const struct snd_soc_acpi_codecs ptl_rt5682_rt5682s_hp = {
+ 	.codecs = {RT5682_ACPI_HID, RT5682S_ACPI_HID},
+ };
+ 
++static const struct snd_soc_acpi_codecs ptl_essx_83x6 = {
++	.num_codecs = 3,
++	.codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
++static const struct snd_soc_acpi_codecs ptl_lt6911_hdmi = {
++	.num_codecs = 1,
++	.codecs = {"INTC10B0"}
++};
++
+ struct snd_soc_acpi_mach snd_soc_acpi_intel_ptl_machines[] = {
++	{
++		.comp_ids = &ptl_rt5682_rt5682s_hp,
++		.drv_name = "ptl_rt5682_c1_h02",
++		.machine_quirk = snd_soc_acpi_codec_list,
++		.quirk_data = &ptl_lt6911_hdmi,
++		.sof_tplg_filename = "sof-ptl-rt5682-ssp1-hdmi-ssp02.tplg",
++	},
+ 	{
+ 		.comp_ids = &ptl_rt5682_rt5682s_hp,
+ 		.drv_name = "ptl_rt5682_def",
+@@ -29,6 +46,21 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_ptl_machines[] = {
+ 		.tplg_quirk_mask = SND_SOC_ACPI_TPLG_INTEL_AMP_NAME |
+ 					SND_SOC_ACPI_TPLG_INTEL_CODEC_NAME,
+ 	},
++	{
++		.comp_ids = &ptl_essx_83x6,
++		.drv_name = "ptl_es83x6_c1_h02",
++		.machine_quirk = snd_soc_acpi_codec_list,
++		.quirk_data = &ptl_lt6911_hdmi,
++		.sof_tplg_filename = "sof-ptl-es83x6-ssp1-hdmi-ssp02.tplg",
++	},
++	{
++		.comp_ids = &ptl_essx_83x6,
++		.drv_name = "sof-essx8336",
++		.sof_tplg_filename = "sof-ptl-es8336", /* the tplg suffix is added at run time */
++		.tplg_quirk_mask = SND_SOC_ACPI_TPLG_INTEL_SSP_NUMBER |
++					SND_SOC_ACPI_TPLG_INTEL_SSP_MSB |
++					SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER,
++	},
+ 	{},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_ptl_machines);
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 9530c59b3cf4c8..3df537fdb9f1c7 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/hid.h>
+ #include <linux/init.h>
++#include <linux/input.h>
+ #include <linux/math64.h>
+ #include <linux/slab.h>
+ #include <linux/usb.h>
+@@ -55,13 +56,13 @@ struct std_mono_table {
+  * version, we keep it mono for simplicity.
+  */
+ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+-				unsigned int unitid,
+-				unsigned int control,
+-				unsigned int cmask,
+-				int val_type,
+-				unsigned int idx_off,
+-				const char *name,
+-				snd_kcontrol_tlv_rw_t *tlv_callback)
++					  unsigned int unitid,
++					  unsigned int control,
++					  unsigned int cmask,
++					  int val_type,
++					  unsigned int idx_off,
++					  const char *name,
++					  snd_kcontrol_tlv_rw_t *tlv_callback)
+ {
+ 	struct usb_mixer_elem_info *cval;
+ 	struct snd_kcontrol *kctl;
+@@ -78,7 +79,8 @@ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+ 	cval->idx_off = idx_off;
+ 
+ 	/* get_min_max() is called only for integer volumes later,
+-	 * so provide a short-cut for booleans */
++	 * so provide a short-cut for booleans
++	 */
+ 	cval->min = 0;
+ 	cval->max = 1;
+ 	cval->res = 0;
+@@ -108,15 +110,16 @@ static int snd_create_std_mono_ctl_offset(struct usb_mixer_interface *mixer,
+ }
+ 
+ static int snd_create_std_mono_ctl(struct usb_mixer_interface *mixer,
+-				unsigned int unitid,
+-				unsigned int control,
+-				unsigned int cmask,
+-				int val_type,
+-				const char *name,
+-				snd_kcontrol_tlv_rw_t *tlv_callback)
++				   unsigned int unitid,
++				   unsigned int control,
++				   unsigned int cmask,
++				   int val_type,
++				   const char *name,
++				   snd_kcontrol_tlv_rw_t *tlv_callback)
+ {
+ 	return snd_create_std_mono_ctl_offset(mixer, unitid, control, cmask,
+-		val_type, 0 /* Offset */, name, tlv_callback);
++					      val_type, 0 /* Offset */,
++					      name, tlv_callback);
+ }
+ 
+ /*
+@@ -127,9 +130,10 @@ static int snd_create_std_mono_table(struct usb_mixer_interface *mixer,
+ {
+ 	int err;
+ 
+-	while (t->name != NULL) {
++	while (t->name) {
+ 		err = snd_create_std_mono_ctl(mixer, t->unitid, t->control,
+-				t->cmask, t->val_type, t->name, t->tlv_callback);
++					      t->cmask, t->val_type, t->name,
++					      t->tlv_callback);
+ 		if (err < 0)
+ 			return err;
+ 		t++;
+@@ -209,12 +213,11 @@ static void snd_usb_soundblaster_remote_complete(struct urb *urb)
+ 	if (code == rc->mute_code)
+ 		snd_usb_mixer_notify_id(mixer, rc->mute_mixer_id);
+ 	mixer->rc_code = code;
+-	wmb();
+ 	wake_up(&mixer->rc_waitq);
+ }
+ 
+ static long snd_usb_sbrc_hwdep_read(struct snd_hwdep *hw, char __user *buf,
+-				     long count, loff_t *offset)
++				    long count, loff_t *offset)
+ {
+ 	struct usb_mixer_interface *mixer = hw->private_data;
+ 	int err;
+@@ -234,7 +237,7 @@ static long snd_usb_sbrc_hwdep_read(struct snd_hwdep *hw, char __user *buf,
+ }
+ 
+ static __poll_t snd_usb_sbrc_hwdep_poll(struct snd_hwdep *hw, struct file *file,
+-					    poll_table *wait)
++					poll_table *wait)
+ {
+ 	struct usb_mixer_interface *mixer = hw->private_data;
+ 
+@@ -285,7 +288,7 @@ static int snd_usb_soundblaster_remote_init(struct usb_mixer_interface *mixer)
+ 	mixer->rc_setup_packet->wLength = cpu_to_le16(len);
+ 	usb_fill_control_urb(mixer->rc_urb, mixer->chip->dev,
+ 			     usb_rcvctrlpipe(mixer->chip->dev, 0),
+-			     (u8*)mixer->rc_setup_packet, mixer->rc_buffer, len,
++			     (u8 *)mixer->rc_setup_packet, mixer->rc_buffer, len,
+ 			     snd_usb_soundblaster_remote_complete, mixer);
+ 	return 0;
+ }
+@@ -310,20 +313,20 @@ static int snd_audigy2nx_led_update(struct usb_mixer_interface *mixer,
+ 
+ 	if (chip->usb_id == USB_ID(0x041e, 0x3042))
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      !value, 0, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      !value, 0, NULL, 0);
+ 	/* USB X-Fi S51 Pro */
+ 	if (chip->usb_id == USB_ID(0x041e, 0x30df))
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      !value, 0, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      !value, 0, NULL, 0);
+ 	else
+ 		err = snd_usb_ctl_msg(chip->dev,
+-			      usb_sndctrlpipe(chip->dev, 0), 0x24,
+-			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			      value, index + 2, NULL, 0);
++				      usb_sndctrlpipe(chip->dev, 0), 0x24,
++				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++				      value, index + 2, NULL, 0);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -377,17 +380,17 @@ static int snd_audigy2nx_controls_create(struct usb_mixer_interface *mixer)
+ 		struct snd_kcontrol_new knew;
+ 
+ 		/* USB X-Fi S51 doesn't have a CMSS LED */
+-		if ((mixer->chip->usb_id == USB_ID(0x041e, 0x3042)) && i == 0)
++		if (mixer->chip->usb_id == USB_ID(0x041e, 0x3042) && i == 0)
+ 			continue;
+ 		/* USB X-Fi S51 Pro doesn't have one either */
+-		if ((mixer->chip->usb_id == USB_ID(0x041e, 0x30df)) && i == 0)
++		if (mixer->chip->usb_id == USB_ID(0x041e, 0x30df) && i == 0)
+ 			continue;
+ 		if (i > 1 && /* Live24ext has 2 LEDs only */
+ 			(mixer->chip->usb_id == USB_ID(0x041e, 0x3040) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x3042) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x30df) ||
+ 			 mixer->chip->usb_id == USB_ID(0x041e, 0x3048)))
+-			break; 
++			break;
+ 
+ 		knew = snd_audigy2nx_control;
+ 		knew.name = snd_audigy2nx_led_names[i];
+@@ -481,9 +484,9 @@ static int snd_emu0204_ch_switch_update(struct usb_mixer_interface *mixer,
+ 	buf[0] = 0x01;
+ 	buf[1] = value ? 0x02 : 0x01;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-		      usb_sndctrlpipe(chip->dev, 0), UAC_SET_CUR,
+-		      USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT,
+-		      0x0400, 0x0e00, buf, 2);
++			      usb_sndctrlpipe(chip->dev, 0), UAC_SET_CUR,
++			      USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT,
++			      0x0400, 0x0e00, buf, 2);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -529,6 +532,265 @@ static int snd_emu0204_controls_create(struct usb_mixer_interface *mixer)
+ 					  &snd_emu0204_control, NULL);
+ }
+ 
++#if IS_REACHABLE(CONFIG_INPUT)
++/*
++ * Sony DualSense controller (PS5) jack detection
++ *
++ * Since this is a UAC 1 device, it doesn't support jack detection.
++ * However, the controller's hid-playstation driver reports HP & MIC
++ * insert events through a dedicated input device.
++ */
++
++#define SND_DUALSENSE_JACK_OUT_TERM_ID 3
++#define SND_DUALSENSE_JACK_IN_TERM_ID 4
++
++struct dualsense_mixer_elem_info {
++	struct usb_mixer_elem_info info;
++	struct input_handler ih;
++	struct input_device_id id_table[2];
++	bool connected;
++};
++
++static void snd_dualsense_ih_event(struct input_handle *handle,
++				   unsigned int type, unsigned int code,
++				   int value)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_mixer_elem_list *me;
++
++	if (type != EV_SW)
++		return;
++
++	mei = container_of(handle->handler, struct dualsense_mixer_elem_info, ih);
++	me = &mei->info.head;
++
++	if ((me->id == SND_DUALSENSE_JACK_OUT_TERM_ID && code == SW_HEADPHONE_INSERT) ||
++	    (me->id == SND_DUALSENSE_JACK_IN_TERM_ID && code == SW_MICROPHONE_INSERT)) {
++		mei->connected = !!value;
++		snd_ctl_notify(me->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++			       &me->kctl->id);
++	}
++}
++
++static bool snd_dualsense_ih_match(struct input_handler *handler,
++				   struct input_dev *dev)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_device *snd_dev;
++	char *input_dev_path, *usb_dev_path;
++	size_t usb_dev_path_len;
++	bool match = false;
++
++	mei = container_of(handler, struct dualsense_mixer_elem_info, ih);
++	snd_dev = mei->info.head.mixer->chip->dev;
++
++	input_dev_path = kobject_get_path(&dev->dev.kobj, GFP_KERNEL);
++	if (!input_dev_path) {
++		dev_warn(&snd_dev->dev, "Failed to get input dev path\n");
++		return false;
++	}
++
++	usb_dev_path = kobject_get_path(&snd_dev->dev.kobj, GFP_KERNEL);
++	if (!usb_dev_path) {
++		dev_warn(&snd_dev->dev, "Failed to get USB dev path\n");
++		goto free_paths;
++	}
++
++	/*
++	 * Ensure that the VID:PID-matched input device, supposedly owned by
++	 * the hid-playstation driver, belongs to the actual hardware handled
++	 * by the current USB audio device, which implies that input_dev_path
++	 * is a subpath of usb_dev_path.
++	 *
++	 * This verification is necessary when there is more than one identical
++	 * controller attached to the host system.
++	 */
++	usb_dev_path_len = strlen(usb_dev_path);
++	if (usb_dev_path_len >= strlen(input_dev_path))
++		goto free_paths;
++
++	usb_dev_path[usb_dev_path_len] = '/';
++	match = !memcmp(input_dev_path, usb_dev_path, usb_dev_path_len + 1);
++
++free_paths:
++	kfree(input_dev_path);
++	kfree(usb_dev_path);
++
++	return match;
++}
++
++static int snd_dualsense_ih_connect(struct input_handler *handler,
++				    struct input_dev *dev,
++				    const struct input_device_id *id)
++{
++	struct input_handle *handle;
++	int err;
++
++	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++	if (!handle)
++		return -ENOMEM;
++
++	handle->dev = dev;
++	handle->handler = handler;
++	handle->name = handler->name;
++
++	err = input_register_handle(handle);
++	if (err)
++		goto err_free;
++
++	err = input_open_device(handle);
++	if (err)
++		goto err_unregister;
++
++	return 0;
++
++err_unregister:
++	input_unregister_handle(handle);
++err_free:
++	kfree(handle);
++	return err;
++}
++
++static void snd_dualsense_ih_disconnect(struct input_handle *handle)
++{
++	input_close_device(handle);
++	input_unregister_handle(handle);
++	kfree(handle);
++}
++
++static void snd_dualsense_ih_start(struct input_handle *handle)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct usb_mixer_elem_list *me;
++	int status = -1;
++
++	mei = container_of(handle->handler, struct dualsense_mixer_elem_info, ih);
++	me = &mei->info.head;
++
++	if (me->id == SND_DUALSENSE_JACK_OUT_TERM_ID &&
++	    test_bit(SW_HEADPHONE_INSERT, handle->dev->swbit))
++		status = test_bit(SW_HEADPHONE_INSERT, handle->dev->sw);
++	else if (me->id == SND_DUALSENSE_JACK_IN_TERM_ID &&
++		 test_bit(SW_MICROPHONE_INSERT, handle->dev->swbit))
++		status = test_bit(SW_MICROPHONE_INSERT, handle->dev->sw);
++
++	if (status >= 0) {
++		mei->connected = !!status;
++		snd_ctl_notify(me->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++			       &me->kctl->id);
++	}
++}
++
++static int snd_dualsense_jack_get(struct snd_kcontrol *kctl,
++				  struct snd_ctl_elem_value *ucontrol)
++{
++	struct dualsense_mixer_elem_info *mei = snd_kcontrol_chip(kctl);
++
++	ucontrol->value.integer.value[0] = mei->connected;
++
++	return 0;
++}
++
++static const struct snd_kcontrol_new snd_dualsense_jack_control = {
++	.iface = SNDRV_CTL_ELEM_IFACE_CARD,
++	.access = SNDRV_CTL_ELEM_ACCESS_READ,
++	.info = snd_ctl_boolean_mono_info,
++	.get = snd_dualsense_jack_get,
++};
++
++static int snd_dualsense_resume_jack(struct usb_mixer_elem_list *list)
++{
++	snd_ctl_notify(list->mixer->chip->card, SNDRV_CTL_EVENT_MASK_VALUE,
++		       &list->kctl->id);
++	return 0;
++}
++
++static void snd_dualsense_mixer_elem_free(struct snd_kcontrol *kctl)
++{
++	struct dualsense_mixer_elem_info *mei = snd_kcontrol_chip(kctl);
++
++	if (mei->ih.event)
++		input_unregister_handler(&mei->ih);
++
++	snd_usb_mixer_elem_free(kctl);
++}
++
++static int snd_dualsense_jack_create(struct usb_mixer_interface *mixer,
++				     const char *name, bool is_output)
++{
++	struct dualsense_mixer_elem_info *mei;
++	struct input_device_id *idev_id;
++	struct snd_kcontrol *kctl;
++	int err;
++
++	mei = kzalloc(sizeof(*mei), GFP_KERNEL);
++	if (!mei)
++		return -ENOMEM;
++
++	snd_usb_mixer_elem_init_std(&mei->info.head, mixer,
++				    is_output ? SND_DUALSENSE_JACK_OUT_TERM_ID :
++						SND_DUALSENSE_JACK_IN_TERM_ID);
++
++	mei->info.head.resume = snd_dualsense_resume_jack;
++	mei->info.val_type = USB_MIXER_BOOLEAN;
++	mei->info.channels = 1;
++	mei->info.min = 0;
++	mei->info.max = 1;
++
++	kctl = snd_ctl_new1(&snd_dualsense_jack_control, mei);
++	if (!kctl) {
++		kfree(mei);
++		return -ENOMEM;
++	}
++
++	strscpy(kctl->id.name, name, sizeof(kctl->id.name));
++	kctl->private_free = snd_dualsense_mixer_elem_free;
++
++	err = snd_usb_mixer_add_control(&mei->info.head, kctl);
++	if (err)
++		return err;
++
++	idev_id = &mei->id_table[0];
++	idev_id->flags = INPUT_DEVICE_ID_MATCH_VENDOR | INPUT_DEVICE_ID_MATCH_PRODUCT |
++			 INPUT_DEVICE_ID_MATCH_EVBIT | INPUT_DEVICE_ID_MATCH_SWBIT;
++	idev_id->vendor = USB_ID_VENDOR(mixer->chip->usb_id);
++	idev_id->product = USB_ID_PRODUCT(mixer->chip->usb_id);
++	idev_id->evbit[BIT_WORD(EV_SW)] = BIT_MASK(EV_SW);
++	if (is_output)
++		idev_id->swbit[BIT_WORD(SW_HEADPHONE_INSERT)] = BIT_MASK(SW_HEADPHONE_INSERT);
++	else
++		idev_id->swbit[BIT_WORD(SW_MICROPHONE_INSERT)] = BIT_MASK(SW_MICROPHONE_INSERT);
++
++	mei->ih.event = snd_dualsense_ih_event;
++	mei->ih.match = snd_dualsense_ih_match;
++	mei->ih.connect = snd_dualsense_ih_connect;
++	mei->ih.disconnect = snd_dualsense_ih_disconnect;
++	mei->ih.start = snd_dualsense_ih_start;
++	mei->ih.name = name;
++	mei->ih.id_table = mei->id_table;
++
++	err = input_register_handler(&mei->ih);
++	if (err) {
++		dev_warn(&mixer->chip->dev->dev,
++			 "Could not register input handler: %d\n", err);
++		mei->ih.event = NULL;
++	}
++
++	return 0;
++}
++
++static int snd_dualsense_controls_create(struct usb_mixer_interface *mixer)
++{
++	int err;
++
++	err = snd_dualsense_jack_create(mixer, "Headphone Jack", true);
++	if (err < 0)
++		return err;
++
++	return snd_dualsense_jack_create(mixer, "Headset Mic Jack", false);
++}
++#endif /* IS_REACHABLE(CONFIG_INPUT) */
++
+ /* ASUS Xonar U1 / U3 controls */
+ 
+ static int snd_xonar_u1_switch_get(struct snd_kcontrol *kcontrol,
+@@ -856,6 +1118,7 @@ static const struct snd_kcontrol_new snd_mbox1_src_switch = {
+ static int snd_mbox1_controls_create(struct usb_mixer_interface *mixer)
+ {
+ 	int err;
++
+ 	err = add_single_ctl_with_resume(mixer, 0,
+ 					 snd_mbox1_clk_switch_resume,
+ 					 &snd_mbox1_clk_switch, NULL);
+@@ -869,7 +1132,7 @@ static int snd_mbox1_controls_create(struct usb_mixer_interface *mixer)
+ 
+ /* Native Instruments device quirks */
+ 
+-#define _MAKE_NI_CONTROL(bRequest,wIndex) ((bRequest) << 16 | (wIndex))
++#define _MAKE_NI_CONTROL(bRequest, wIndex) ((bRequest) << 16 | (wIndex))
+ 
+ static int snd_ni_control_init_val(struct usb_mixer_interface *mixer,
+ 				   struct snd_kcontrol *kctl)
+@@ -1021,7 +1284,7 @@ static int snd_nativeinstruments_create_mixer(struct usb_mixer_interface *mixer,
+ /* M-Audio FastTrack Ultra quirks */
+ /* FTU Effect switch (also used by C400/C600) */
+ static int snd_ftu_eff_switch_info(struct snd_kcontrol *kcontrol,
+-					struct snd_ctl_elem_info *uinfo)
++				   struct snd_ctl_elem_info *uinfo)
+ {
+ 	static const char *const texts[8] = {
+ 		"Room 1", "Room 2", "Room 3", "Hall 1",
+@@ -1055,7 +1318,7 @@ static int snd_ftu_eff_switch_init(struct usb_mixer_interface *mixer,
+ }
+ 
+ static int snd_ftu_eff_switch_get(struct snd_kcontrol *kctl,
+-					struct snd_ctl_elem_value *ucontrol)
++				  struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.enumerated.item[0] = kctl->private_value >> 24;
+ 	return 0;
+@@ -1086,7 +1349,7 @@ static int snd_ftu_eff_switch_update(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_ftu_eff_switch_put(struct snd_kcontrol *kctl,
+-					struct snd_ctl_elem_value *ucontrol)
++				  struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kctl);
+ 	unsigned int pval = list->kctl->private_value;
+@@ -1104,7 +1367,7 @@ static int snd_ftu_eff_switch_put(struct snd_kcontrol *kctl,
+ }
+ 
+ static int snd_ftu_create_effect_switch(struct usb_mixer_interface *mixer,
+-	int validx, int bUnitID)
++					int validx, int bUnitID)
+ {
+ 	static struct snd_kcontrol_new template = {
+ 		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+@@ -1143,22 +1406,22 @@ static int snd_ftu_create_volume_ctls(struct usb_mixer_interface *mixer)
+ 		for (in = 0; in < 8; in++) {
+ 			cmask = BIT(in);
+ 			snprintf(name, sizeof(name),
+-				"AIn%d - Out%d Capture Volume",
+-				in  + 1, out + 1);
++				 "AIn%d - Out%d Capture Volume",
++				 in  + 1, out + 1);
+ 			err = snd_create_std_mono_ctl(mixer, id, control,
+-							cmask, val_type, name,
+-							&snd_usb_mixer_vol_tlv);
++						      cmask, val_type, name,
++						      &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+ 		for (in = 8; in < 16; in++) {
+ 			cmask = BIT(in);
+ 			snprintf(name, sizeof(name),
+-				"DIn%d - Out%d Playback Volume",
+-				in - 7, out + 1);
++				 "DIn%d - Out%d Playback Volume",
++				 in - 7, out + 1);
+ 			err = snd_create_std_mono_ctl(mixer, id, control,
+-							cmask, val_type, name,
+-							&snd_usb_mixer_vol_tlv);
++						      cmask, val_type, name,
++						      &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+@@ -1219,10 +1482,10 @@ static int snd_ftu_create_effect_return_ctls(struct usb_mixer_interface *mixer)
+ 	for (ch = 0; ch < 4; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Return %d Volume", ch + 1);
++			 "Effect Return %d Volume", ch + 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control,
+-						cmask, val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      cmask, val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1243,20 +1506,20 @@ static int snd_ftu_create_effect_send_ctls(struct usb_mixer_interface *mixer)
+ 	for (ch = 0; ch < 8; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Send AIn%d Volume", ch + 1);
++			 "Effect Send AIn%d Volume", ch + 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control, cmask,
+-						val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+ 	for (ch = 8; ch < 16; ++ch) {
+ 		cmask = BIT(ch);
+ 		snprintf(name, sizeof(name),
+-			"Effect Send DIn%d Volume", ch - 7);
++			 "Effect Send DIn%d Volume", ch - 7);
+ 		err = snd_create_std_mono_ctl(mixer, id, control, cmask,
+-						val_type, name,
+-						snd_usb_mixer_vol_tlv);
++					      val_type, name,
++					      snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1346,19 +1609,19 @@ static int snd_c400_create_vol_ctls(struct usb_mixer_interface *mixer)
+ 		for (out = 0; out < num_outs; out++) {
+ 			if (chan < num_outs) {
+ 				snprintf(name, sizeof(name),
+-					"PCM%d-Out%d Playback Volume",
+-					chan + 1, out + 1);
++					 "PCM%d-Out%d Playback Volume",
++					 chan + 1, out + 1);
+ 			} else {
+ 				snprintf(name, sizeof(name),
+-					"In%d-Out%d Playback Volume",
+-					chan - num_outs + 1, out + 1);
++					 "In%d-Out%d Playback Volume",
++					 chan - num_outs + 1, out + 1);
+ 			}
+ 
+ 			cmask = (out == 0) ? 0 : BIT(out - 1);
+ 			offset = chan * num_outs;
+ 			err = snd_create_std_mono_ctl_offset(mixer, id, control,
+-						cmask, val_type, offset, name,
+-						&snd_usb_mixer_vol_tlv);
++							     cmask, val_type, offset, name,
++							     &snd_usb_mixer_vol_tlv);
+ 			if (err < 0)
+ 				return err;
+ 		}
+@@ -1377,7 +1640,7 @@ static int snd_c400_create_effect_volume_ctl(struct usb_mixer_interface *mixer)
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, snd_usb_mixer_vol_tlv);
++				       name, snd_usb_mixer_vol_tlv);
+ }
+ 
+ /* This control needs a volume quirk, see mixer.c */
+@@ -1390,7 +1653,7 @@ static int snd_c400_create_effect_duration_ctl(struct usb_mixer_interface *mixer
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, snd_usb_mixer_vol_tlv);
++				       name, snd_usb_mixer_vol_tlv);
+ }
+ 
+ /* This control needs a volume quirk, see mixer.c */
+@@ -1403,7 +1666,7 @@ static int snd_c400_create_effect_feedback_ctl(struct usb_mixer_interface *mixer
+ 	const unsigned int cmask = 0;
+ 
+ 	return snd_create_std_mono_ctl(mixer, id, control, cmask, val_type,
+-					name, NULL);
++				       name, NULL);
+ }
+ 
+ static int snd_c400_create_effect_vol_ctls(struct usb_mixer_interface *mixer)
+@@ -1432,18 +1695,18 @@ static int snd_c400_create_effect_vol_ctls(struct usb_mixer_interface *mixer)
+ 	for (chan = 0; chan < num_outs + num_ins; chan++) {
+ 		if (chan < num_outs) {
+ 			snprintf(name, sizeof(name),
+-				"Effect Send DOut%d",
+-				chan + 1);
++				 "Effect Send DOut%d",
++				 chan + 1);
+ 		} else {
+ 			snprintf(name, sizeof(name),
+-				"Effect Send AIn%d",
+-				chan - num_outs + 1);
++				 "Effect Send AIn%d",
++				 chan - num_outs + 1);
+ 		}
+ 
+ 		cmask = (chan == 0) ? 0 : BIT(chan - 1);
+ 		err = snd_create_std_mono_ctl(mixer, id, control,
+-						cmask, val_type, name,
+-						&snd_usb_mixer_vol_tlv);
++					      cmask, val_type, name,
++					      &snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1478,14 +1741,14 @@ static int snd_c400_create_effect_ret_vol_ctls(struct usb_mixer_interface *mixer
+ 
+ 	for (chan = 0; chan < num_outs; chan++) {
+ 		snprintf(name, sizeof(name),
+-			"Effect Return %d",
+-			chan + 1);
++			 "Effect Return %d",
++			 chan + 1);
+ 
+ 		cmask = (chan == 0) ? 0 :
+ 			BIT(chan + (chan % 2) * num_outs - 1);
+ 		err = snd_create_std_mono_ctl_offset(mixer, id, control,
+-						cmask, val_type, offset, name,
+-						&snd_usb_mixer_vol_tlv);
++						     cmask, val_type, offset, name,
++						     &snd_usb_mixer_vol_tlv);
+ 		if (err < 0)
+ 			return err;
+ 	}
+@@ -1626,7 +1889,7 @@ static const struct std_mono_table ebox44_table[] = {
+  *
+  */
+ static int snd_microii_spdif_info(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_info *uinfo)
++				  struct snd_ctl_elem_info *uinfo)
+ {
+ 	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
+ 	uinfo->count = 1;
+@@ -1634,7 +1897,7 @@ static int snd_microii_spdif_info(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	struct snd_usb_audio *chip = list->mixer->chip;
+@@ -1667,13 +1930,13 @@ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_rcvctrlpipe(chip->dev, 0),
+-			UAC_GET_CUR,
+-			USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN,
+-			UAC_EP_CS_ATTR_SAMPLE_RATE << 8,
+-			ep,
+-			data,
+-			sizeof(data));
++			      usb_rcvctrlpipe(chip->dev, 0),
++			      UAC_GET_CUR,
++			      USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN,
++			      UAC_EP_CS_ATTR_SAMPLE_RATE << 8,
++			      ep,
++			      data,
++			      sizeof(data));
+ 	if (err < 0)
+ 		goto end;
+ 
+@@ -1700,26 +1963,26 @@ static int snd_microii_spdif_default_update(struct usb_mixer_elem_list *list)
+ 
+ 	reg = ((pval >> 4) & 0xf0) | (pval & 0x0f);
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			2,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      2,
++			      NULL,
++			      0);
+ 	if (err < 0)
+ 		goto end;
+ 
+ 	reg = (pval & IEC958_AES0_NONAUDIO) ? 0xa0 : 0x20;
+ 	reg |= (pval >> 12) & 0x0f;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			3,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      3,
++			      NULL,
++			      0);
+ 	if (err < 0)
+ 		goto end;
+ 
+@@ -1729,13 +1992,14 @@ static int snd_microii_spdif_default_update(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_microii_spdif_default_put(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	unsigned int pval, pval_old;
+ 	int err;
+ 
+-	pval = pval_old = kcontrol->private_value;
++	pval = kcontrol->private_value;
++	pval_old = pval;
+ 	pval &= 0xfffff0f0;
+ 	pval |= (ucontrol->value.iec958.status[1] & 0x0f) << 8;
+ 	pval |= (ucontrol->value.iec958.status[0] & 0x0f);
+@@ -1756,7 +2020,7 @@ static int snd_microii_spdif_default_put(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_mask_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++				      struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.iec958.status[0] = 0x0f;
+ 	ucontrol->value.iec958.status[1] = 0xff;
+@@ -1767,7 +2031,7 @@ static int snd_microii_spdif_mask_get(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_microii_spdif_switch_get(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	ucontrol->value.integer.value[0] = !(kcontrol->private_value & 0x02);
+ 
+@@ -1785,20 +2049,20 @@ static int snd_microii_spdif_switch_update(struct usb_mixer_elem_list *list)
+ 		return err;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0),
+-			UAC_SET_CUR,
+-			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+-			reg,
+-			9,
+-			NULL,
+-			0);
++			      usb_sndctrlpipe(chip->dev, 0),
++			      UAC_SET_CUR,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
++			      reg,
++			      9,
++			      NULL,
++			      0);
+ 
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+ 
+ static int snd_microii_spdif_switch_put(struct snd_kcontrol *kcontrol,
+-	struct snd_ctl_elem_value *ucontrol)
++					struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct usb_mixer_elem_list *list = snd_kcontrol_chip(kcontrol);
+ 	u8 reg;
+@@ -1883,9 +2147,9 @@ static int snd_soundblaster_e1_switch_update(struct usb_mixer_interface *mixer,
+ 	if (err < 0)
+ 		return err;
+ 	err = snd_usb_ctl_msg(chip->dev,
+-			usb_sndctrlpipe(chip->dev, 0), HID_REQ_SET_REPORT,
+-			USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
+-			0x0202, 3, buff, 2);
++			      usb_sndctrlpipe(chip->dev, 0), HID_REQ_SET_REPORT,
++			      USB_TYPE_CLASS | USB_RECIP_INTERFACE | USB_DIR_OUT,
++			      0x0202, 3, buff, 2);
+ 	snd_usb_unlock_shutdown(chip);
+ 	return err;
+ }
+@@ -2181,6 +2445,7 @@ static const u32 snd_rme_rate_table[] = {
+ 	256000,	352800, 384000, 400000,
+ 	512000, 705600, 768000, 800000
+ };
++
+ /* maximum number of items for AES and S/PDIF rates for above table */
+ #define SND_RME_RATE_IDX_AES_SPDIF_NUM		12
+ 
+@@ -3235,7 +3500,7 @@ static int snd_rme_digiface_enum_put(struct snd_kcontrol *kcontrol,
+ }
+ 
+ static int snd_rme_digiface_current_sync_get(struct snd_kcontrol *kcontrol,
+-				     struct snd_ctl_elem_value *ucontrol)
++					     struct snd_ctl_elem_value *ucontrol)
+ {
+ 	int ret = snd_rme_digiface_enum_get(kcontrol, ucontrol);
+ 
+@@ -3269,7 +3534,6 @@ static int snd_rme_digiface_sync_state_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+ 
+-
+ static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
+ 					struct snd_ctl_elem_info *uinfo)
+ {
+@@ -3281,7 +3545,6 @@ static int snd_rme_digiface_format_info(struct snd_kcontrol *kcontrol,
+ 				 ARRAY_SIZE(format), format);
+ }
+ 
+-
+ static int snd_rme_digiface_sync_source_info(struct snd_kcontrol *kcontrol,
+ 					     struct snd_ctl_elem_info *uinfo)
+ {
+@@ -3564,7 +3827,6 @@ static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
+ #define SND_DJM_A9_IDX		0x6
+ #define SND_DJM_V10_IDX	0x7
+ 
+-
+ #define SND_DJM_CTL(_name, suffix, _default_value, _windex) { \
+ 	.name = _name, \
+ 	.options = snd_djm_opts_##suffix, \
+@@ -3576,7 +3838,6 @@ static int snd_rme_digiface_controls_create(struct usb_mixer_interface *mixer)
+ 	.controls = snd_djm_ctls_##suffix, \
+ 	.ncontrols = ARRAY_SIZE(snd_djm_ctls_##suffix) }
+ 
+-
+ struct snd_djm_device {
+ 	const char *name;
+ 	const struct snd_djm_ctl *controls;
+@@ -3722,7 +3983,6 @@ static const struct snd_djm_ctl snd_djm_ctls_250mk2[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 250mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-450
+ static const u16 snd_djm_opts_450_cap1[] = {
+ 	0x0103, 0x0100, 0x0106, 0x0107, 0x0108, 0x0109, 0x010d, 0x010a };
+@@ -3747,7 +4007,6 @@ static const struct snd_djm_ctl snd_djm_ctls_450[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 450_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-750
+ static const u16 snd_djm_opts_750_cap1[] = {
+ 	0x0101, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
+@@ -3766,7 +4025,6 @@ static const struct snd_djm_ctl snd_djm_ctls_750[] = {
+ 	SND_DJM_CTL("Input 4 Capture Switch", 750_cap4, 0, SND_DJM_WINDEX_CAP)
+ };
+ 
+-
+ // DJM-850
+ static const u16 snd_djm_opts_850_cap1[] = {
+ 	0x0100, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a, 0x010f };
+@@ -3785,7 +4043,6 @@ static const struct snd_djm_ctl snd_djm_ctls_850[] = {
+ 	SND_DJM_CTL("Input 4 Capture Switch", 850_cap4, 1, SND_DJM_WINDEX_CAP)
+ };
+ 
+-
+ // DJM-900NXS2
+ static const u16 snd_djm_opts_900nxs2_cap1[] = {
+ 	0x0100, 0x0102, 0x0103, 0x0106, 0x0107, 0x0108, 0x0109, 0x010a };
+@@ -3823,7 +4080,6 @@ static const u16 snd_djm_opts_750mk2_pb1[] = { 0x0100, 0x0101, 0x0104 };
+ static const u16 snd_djm_opts_750mk2_pb2[] = { 0x0200, 0x0201, 0x0204 };
+ static const u16 snd_djm_opts_750mk2_pb3[] = { 0x0300, 0x0301, 0x0304 };
+ 
+-
+ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+ 	SND_DJM_CTL("Master Input Level Capture Switch", cap_level, 0, SND_DJM_WINDEX_CAPLVL),
+ 	SND_DJM_CTL("Input 1 Capture Switch",   750mk2_cap1, 2, SND_DJM_WINDEX_CAP),
+@@ -3836,7 +4092,6 @@ static const struct snd_djm_ctl snd_djm_ctls_750mk2[] = {
+ 	SND_DJM_CTL("Output 3 Playback Switch", 750mk2_pb3, 2, SND_DJM_WINDEX_PB)
+ };
+ 
+-
+ // DJM-A9
+ static const u16 snd_djm_opts_a9_cap_level[] = {
+ 	0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500 };
+@@ -3865,29 +4120,35 @@ static const struct snd_djm_ctl snd_djm_ctls_a9[] = {
+ static const u16 snd_djm_opts_v10_cap_level[] = {
+ 	0x0000, 0x0100, 0x0200, 0x0300, 0x0400, 0x0500
+ };
++
+ static const u16 snd_djm_opts_v10_cap1[] = {
+ 	0x0103,
+ 	0x0100, 0x0102, 0x0106, 0x0110, 0x0107,
+ 	0x0108, 0x0109, 0x010a, 0x0121, 0x0122
+ };
++
+ static const u16 snd_djm_opts_v10_cap2[] = {
+ 	0x0200, 0x0202, 0x0206, 0x0210, 0x0207,
+ 	0x0208, 0x0209, 0x020a, 0x0221, 0x0222
+ };
++
+ static const u16 snd_djm_opts_v10_cap3[] = {
+ 	0x0303,
+ 	0x0300, 0x0302, 0x0306, 0x0310, 0x0307,
+ 	0x0308, 0x0309, 0x030a, 0x0321, 0x0322
+ };
++
+ static const u16 snd_djm_opts_v10_cap4[] = {
+ 	0x0403,
+ 	0x0400, 0x0402, 0x0406, 0x0410, 0x0407,
+ 	0x0408, 0x0409, 0x040a, 0x0421, 0x0422
+ };
++
+ static const u16 snd_djm_opts_v10_cap5[] = {
+ 	0x0500, 0x0502, 0x0506, 0x0510, 0x0507,
+ 	0x0508, 0x0509, 0x050a, 0x0521, 0x0522
+ };
++
+ static const u16 snd_djm_opts_v10_cap6[] = {
+ 	0x0603,
+ 	0x0600, 0x0602, 0x0606, 0x0610, 0x0607,
+@@ -3916,9 +4177,8 @@ static const struct snd_djm_device snd_djm_devices[] = {
+ 	[SND_DJM_V10_IDX] = SND_DJM_DEVICE(v10),
+ };
+ 
+-
+ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+-				struct snd_ctl_elem_info *info)
++				 struct snd_ctl_elem_info *info)
+ {
+ 	unsigned long private_value = kctl->private_value;
+ 	u8 device_idx = (private_value & SND_DJM_DEVICE_MASK) >> SND_DJM_DEVICE_SHIFT;
+@@ -3937,8 +4197,8 @@ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+ 		info->value.enumerated.item = noptions - 1;
+ 
+ 	name = snd_djm_get_label(device_idx,
+-				ctl->options[info->value.enumerated.item],
+-				ctl->wIndex);
++				 ctl->options[info->value.enumerated.item],
++				 ctl->wIndex);
+ 	if (!name)
+ 		return -EINVAL;
+ 
+@@ -3950,25 +4210,25 @@ static int snd_djm_controls_info(struct snd_kcontrol *kctl,
+ }
+ 
+ static int snd_djm_controls_update(struct usb_mixer_interface *mixer,
+-				u8 device_idx, u8 group, u16 value)
++				   u8 device_idx, u8 group, u16 value)
+ {
+ 	int err;
+ 	const struct snd_djm_device *device = &snd_djm_devices[device_idx];
+ 
+-	if ((group >= device->ncontrols) || value >= device->controls[group].noptions)
++	if (group >= device->ncontrols || value >= device->controls[group].noptions)
+ 		return -EINVAL;
+ 
+ 	err = snd_usb_lock_shutdown(mixer->chip);
+ 	if (err)
+ 		return err;
+ 
+-	err = snd_usb_ctl_msg(
+-		mixer->chip->dev, usb_sndctrlpipe(mixer->chip->dev, 0),
+-		USB_REQ_SET_FEATURE,
+-		USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-		device->controls[group].options[value],
+-		device->controls[group].wIndex,
+-		NULL, 0);
++	err = snd_usb_ctl_msg(mixer->chip->dev,
++			      usb_sndctrlpipe(mixer->chip->dev, 0),
++			      USB_REQ_SET_FEATURE,
++			      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
++			      device->controls[group].options[value],
++			      device->controls[group].wIndex,
++			      NULL, 0);
+ 
+ 	snd_usb_unlock_shutdown(mixer->chip);
+ 	return err;
+@@ -4009,7 +4269,7 @@ static int snd_djm_controls_resume(struct usb_mixer_elem_list *list)
+ }
+ 
+ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+-		const u8 device_idx)
++				   const u8 device_idx)
+ {
+ 	int err, i;
+ 	u16 value;
+@@ -4028,10 +4288,10 @@ static int snd_djm_controls_create(struct usb_mixer_interface *mixer,
+ 	for (i = 0; i < device->ncontrols; i++) {
+ 		value = device->controls[i].default_value;
+ 		knew.name = device->controls[i].name;
+-		knew.private_value = (
++		knew.private_value =
+ 			((unsigned long)device_idx << SND_DJM_DEVICE_SHIFT) |
+ 			(i << SND_DJM_GROUP_SHIFT) |
+-			value);
++			value;
+ 		err = snd_djm_controls_update(mixer, device_idx, i, value);
+ 		if (err)
+ 			return err;
+@@ -4073,6 +4333,13 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		err = snd_emu0204_controls_create(mixer);
+ 		break;
+ 
++#if IS_REACHABLE(CONFIG_INPUT)
++	case USB_ID(0x054c, 0x0ce6): /* Sony DualSense controller (PS5) */
++	case USB_ID(0x054c, 0x0df2): /* Sony DualSense Edge controller (PS5) */
++		err = snd_dualsense_controls_create(mixer);
++		break;
++#endif /* IS_REACHABLE(CONFIG_INPUT) */
++
+ 	case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */
+ 	case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C400 */
+ 		err = snd_c400_create_mixer(mixer);
+@@ -4098,13 +4365,15 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ 		break;
+ 
+ 	case USB_ID(0x17cc, 0x1011): /* Traktor Audio 6 */
+-		err = snd_nativeinstruments_create_mixer(mixer,
++		err = snd_nativeinstruments_create_mixer(/* checkpatch hack */
++				mixer,
+ 				snd_nativeinstruments_ta6_mixers,
+ 				ARRAY_SIZE(snd_nativeinstruments_ta6_mixers));
+ 		break;
+ 
+ 	case USB_ID(0x17cc, 0x1021): /* Traktor Audio 10 */
+-		err = snd_nativeinstruments_create_mixer(mixer,
++		err = snd_nativeinstruments_create_mixer(/* checkpatch hack */
++				mixer,
+ 				snd_nativeinstruments_ta10_mixers,
+ 				ARRAY_SIZE(snd_nativeinstruments_ta10_mixers));
+ 		break;
+@@ -4254,7 +4523,8 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+ 					 struct snd_kcontrol *kctl)
+ {
+ 	/* Approximation using 10 ranges based on output measurement on hw v1.2.
+-	 * This seems close to the cubic mapping e.g. alsamixer uses. */
++	 * This seems close to the cubic mapping e.g. alsamixer uses.
++	 */
+ 	static const DECLARE_TLV_DB_RANGE(scale,
+ 		 0,  1, TLV_DB_MINMAX_ITEM(-5300, -4970),
+ 		 2,  5, TLV_DB_MINMAX_ITEM(-4710, -4160),
+@@ -4338,20 +4608,15 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ 		if (unitid == 7 && cval->control == UAC_FU_VOLUME)
+ 			snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ 		break;
++	}
++
+ 	/* lowest playback value is muted on some devices */
+-	case USB_ID(0x0572, 0x1b09): /* Conexant Systems (Rockwell), Inc. */
+-	case USB_ID(0x0d8c, 0x000c): /* C-Media */
+-	case USB_ID(0x0d8c, 0x0014): /* C-Media */
+-	case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
+-	case USB_ID(0x2d99, 0x0026): /* HECATE G2 GAMING HEADSET */
++	if (mixer->chip->quirk_flags & QUIRK_FLAG_MIXER_MIN_MUTE)
+ 		if (strstr(kctl->id.name, "Playback"))
+ 			cval->min_mute = 1;
+-		break;
+-	}
+ 
+ 	/* ALSA-ify some Plantronics headset control names */
+ 	if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
+ 	    (cval->control == UAC_FU_MUTE || cval->control == UAC_FU_VOLUME))
+ 		snd_fix_plt_name(mixer->chip, &kctl->id);
+ }
+-
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index bd24f3a78ea9db..766db7d00cbc95 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2199,6 +2199,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_SET_IFACE_FIRST),
+ 	DEVICE_FLG(0x0556, 0x0014, /* Phoenix Audio TMX320VC */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x0572, 0x1b08, /* Conexant Systems (Rockwell), Inc. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
++	DEVICE_FLG(0x0572, 0x1b09, /* Conexant Systems (Rockwell), Inc. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x05a3, 0x9420, /* ELP HD USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x05a7, 0x1020, /* Bose Companion 5 */
+@@ -2241,12 +2245,16 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x0b0e, 0x0349, /* Jabra 550a */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0bda, 0x498a, /* Realtek Semiconductor Corp. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x0c45, 0x636b, /* Microdia JP001 USB Camera */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+-	DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
+-		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x0d8c, 0x000c, /* C-Media */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
++	DEVICE_FLG(0x0d8c, 0x0014, /* C-Media */
++		   QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+ 		   QUIRK_FLAG_FIXED_RATE),
+ 	DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
+@@ -2255,6 +2263,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ 	DEVICE_FLG(0x1101, 0x0003, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x12d1, 0x3a07, /* Huawei Technologies Co., Ltd. */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ 	DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+@@ -2293,6 +2303,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY),
+ 	DEVICE_FLG(0x1901, 0x0191, /* GE B850V3 CP2114 audio interface */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x19f7, 0x0003, /* RODE NT-USB */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */
+@@ -2343,6 +2355,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x2a70, 0x1881, /* OnePlus Technology (Shenzhen) Co., Ltd. BE02T */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */
+ 		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */
+@@ -2353,10 +2367,14 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
++	DEVICE_FLG(0x2d99, 0x0026, /* HECATE G2 GAMING HEADSET */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ 	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
++	DEVICE_FLG(0x339b, 0x3a07, /* Synaptics HONOR USB-C HEADSET */
++		   QUIRK_FLAG_MIXER_MIN_MUTE),
+ 	DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */
+@@ -2408,6 +2426,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x2d87, /* Cayin device */
+ 		   QUIRK_FLAG_DSD_RAW),
++	VENDOR_FLG(0x2fc6, /* Comture-inc devices */
++		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3336, /* HEM devices */
+ 		   QUIRK_FLAG_DSD_RAW),
+ 	VENDOR_FLG(0x3353, /* Khadas devices */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 158ec053dc44dd..1ef4d39978df36 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -196,6 +196,9 @@ extern bool snd_usb_skip_validation;
+  *  for the given endpoint.
+  * QUIRK_FLAG_MIC_RES_16 and QUIRK_FLAG_MIC_RES_384
+  *  Set the fixed resolution for Mic Capture Volume (mostly for webcams)
++ * QUIRK_FLAG_MIXER_MIN_MUTE
++ *  Set minimum volume control value as mute for devices where the lowest
++ *  playback value represents muted state instead of minimum audible volume
+  */
+ 
+ #define QUIRK_FLAG_GET_SAMPLE_RATE	(1U << 0)
+@@ -222,5 +225,6 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_FIXED_RATE		(1U << 21)
+ #define QUIRK_FLAG_MIC_RES_16		(1U << 22)
+ #define QUIRK_FLAG_MIC_RES_384		(1U << 23)
++#define QUIRK_FLAG_MIXER_MIN_MUTE	(1U << 24)
+ 
+ #endif /* __USBAUDIO_H */
+diff --git a/tools/testing/selftests/bpf/prog_tests/free_timer.c b/tools/testing/selftests/bpf/prog_tests/free_timer.c
+index b7b77a6b29799c..0de8facca4c5bc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/free_timer.c
++++ b/tools/testing/selftests/bpf/prog_tests/free_timer.c
+@@ -124,6 +124,10 @@ void test_free_timer(void)
+ 	int err;
+ 
+ 	skel = free_timer__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "open_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c
+index d66687f1ee6a8d..56f660ca567ba1 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer.c
+@@ -86,6 +86,10 @@ void serial_test_timer(void)
+ 	int err;
+ 
+ 	timer_skel = timer__open_and_load();
++	if (!timer_skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+index f74b82305da8c8..b841597c8a3a31 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_crash.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_crash.c
+@@ -12,6 +12,10 @@ static void test_timer_crash_mode(int mode)
+ 	struct timer_crash *skel;
+ 
+ 	skel = timer_crash__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
+ 		return;
+ 	skel->bss->pid = getpid();
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+index 1a2f99596916fb..eb303fa1e09af9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+@@ -59,6 +59,10 @@ void test_timer_lockup(void)
+ 	}
+ 
+ 	skel = timer_lockup__open_and_load();
++	if (!skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load"))
+ 		return;
+ 
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_mim.c b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+index 9ff7843909e7d3..c930c7d7105b9f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_mim.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_mim.c
+@@ -65,6 +65,10 @@ void serial_test_timer_mim(void)
+ 		goto cleanup;
+ 
+ 	timer_skel = timer_mim__open_and_load();
++	if (!timer_skel && errno == EOPNOTSUPP) {
++		test__skip();
++		return;
++	}
+ 	if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
+ 		goto cleanup;
+ 
+diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
+index 63ce708d93ed06..e4b7c2b457ee7a 100644
+--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c
+@@ -2,6 +2,13 @@
+ // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu>
+ 
+ #define _GNU_SOURCE
++
++// Needed for linux/fanotify.h
++typedef struct {
++	int	val[2];
++} __kernel_fsid_t;
++#define __kernel_fsid_t __kernel_fsid_t
++
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+@@ -10,20 +17,12 @@
+ #include <sys/mount.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
++#include <sys/fanotify.h>
+ 
+ #include "../../kselftest_harness.h"
+ #include "../statmount/statmount.h"
+ #include "../utils.h"
+ 
+-// Needed for linux/fanotify.h
+-#ifndef __kernel_fsid_t
+-typedef struct {
+-	int	val[2];
+-} __kernel_fsid_t;
+-#endif
+-
+-#include <sys/fanotify.h>
+-
+ static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX";
+ 
+ static const int mark_cmds[] = {
+diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
+index 090a5ca65004a0..9f57ca46e3afa0 100644
+--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c
+@@ -2,6 +2,13 @@
+ // Copyright (c) 2025 Miklos Szeredi <miklos@szeredi.hu>
+ 
+ #define _GNU_SOURCE
++
++// Needed for linux/fanotify.h
++typedef struct {
++	int	val[2];
++} __kernel_fsid_t;
++#define __kernel_fsid_t __kernel_fsid_t
++
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <stdio.h>
+@@ -10,21 +17,12 @@
+ #include <sys/mount.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
++#include <sys/fanotify.h>
+ 
+ #include "../../kselftest_harness.h"
+-#include "../../pidfd/pidfd.h"
+ #include "../statmount/statmount.h"
+ #include "../utils.h"
+ 
+-// Needed for linux/fanotify.h
+-#ifndef __kernel_fsid_t
+-typedef struct {
+-	int	val[2];
+-} __kernel_fsid_t;
+-#endif
+-
+-#include <sys/fanotify.h>
+-
+ static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX";
+ 
+ static const int mark_types[] = {
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index b39f748c25722a..2ac394c99d0183 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -467,8 +467,8 @@ ipv6_fdb_grp_fcnal()
+ 	log_test $? 0 "Get Fdb nexthop group by id"
+ 
+ 	# fdb nexthop group can only contain fdb nexthops
+-	run_cmd "$IP nexthop add id 63 via 2001:db8:91::4"
+-	run_cmd "$IP nexthop add id 64 via 2001:db8:91::5"
++	run_cmd "$IP nexthop add id 63 via 2001:db8:91::4 dev veth1"
++	run_cmd "$IP nexthop add id 64 via 2001:db8:91::5 dev veth1"
+ 	run_cmd "$IP nexthop add id 103 group 63/64 fdb"
+ 	log_test $? 2 "Fdb Nexthop group with non-fdb nexthops"
+ 
+@@ -547,15 +547,15 @@ ipv4_fdb_grp_fcnal()
+ 	log_test $? 0 "Get Fdb nexthop group by id"
+ 
+ 	# fdb nexthop group can only contain fdb nexthops
+-	run_cmd "$IP nexthop add id 14 via 172.16.1.2"
+-	run_cmd "$IP nexthop add id 15 via 172.16.1.3"
++	run_cmd "$IP nexthop add id 14 via 172.16.1.2 dev veth1"
++	run_cmd "$IP nexthop add id 15 via 172.16.1.3 dev veth1"
+ 	run_cmd "$IP nexthop add id 103 group 14/15 fdb"
+ 	log_test $? 2 "Fdb Nexthop group with non-fdb nexthops"
+ 
+ 	# Non fdb nexthop group can not contain fdb nexthops
+ 	run_cmd "$IP nexthop add id 16 via 172.16.1.2 fdb"
+ 	run_cmd "$IP nexthop add id 17 via 172.16.1.3 fdb"
+-	run_cmd "$IP nexthop add id 104 group 14/15"
++	run_cmd "$IP nexthop add id 104 group 16/17"
+ 	log_test $? 2 "Non-Fdb Nexthop group with fdb nexthops"
+ 
+ 	# fdb nexthop cannot have blackhole
+@@ -582,7 +582,7 @@ ipv4_fdb_grp_fcnal()
+ 	run_cmd "$BRIDGE fdb add 02:02:00:00:00:14 dev vx10 nhid 12 self"
+ 	log_test $? 255 "Fdb mac add with nexthop"
+ 
+-	run_cmd "$IP ro add 172.16.0.0/22 nhid 15"
++	run_cmd "$IP ro add 172.16.0.0/22 nhid 16"
+ 	log_test $? 2 "Route add with fdb nexthop"
+ 
+ 	run_cmd "$IP ro add 172.16.0.0/22 nhid 103"
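
A note on the mixer rework above: the per-device switch cases for "lowest playback value is muted" hardware are replaced by the new QUIRK_FLAG_MIXER_MIN_MUTE flag, and affected devices are now declared in quirk_flags_table instead. As a purely illustrative sketch (the 0x1234/0x5678 IDs are made up, not a real entry), a future device would opt in like this:

  DEVICE_FLG(0x1234, 0x5678, /* hypothetical device, IDs made up */
             QUIRK_FLAG_MIXER_MIN_MUTE),

With the flag set, snd_usb_mixer_fu_apply_quirk() marks every control whose name contains "Playback" with cval->min_mute = 1, so the bottom of the volume range is treated as mute rather than as the quietest audible step.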



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02 14:14 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02 14:14 UTC (permalink / raw
  To: gentoo-commits

commit:     fc690cb8a17c5cb6cf2ef08b3ae0f4f251e29ec1
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 14:08:41 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 14:14:18 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fc690cb8

Remove patch crypto: af_alg - Fix incorrect boolean values in af_alg_ctx

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 --
 ...ix-incorrect-boolean-values-in-af_alg_ctx.patch | 43 ----------------------
 2 files changed, 47 deletions(-)

diff --git a/0000_README b/0000_README
index 8a2d4ad0..574e70f5 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
 From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
 Desc:   btrfs: don't allow adding block device of less than 1 MB
 
-Patch:  1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/plain/queue-6.16/crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
-Desc:   crypto: af_alg - Fix incorrect boolean values in af_alg_ctx
-
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch b/1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
deleted file mode 100644
index c505c6a6..00000000
--- a/1402_crypto-af_alg-fix-incorrect-boolean-values-in-af_alg_ctx.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From d0ca0df179c4b21e2a6c4a4fb637aa8fa14575cb Mon Sep 17 00:00:00 2001
-From: Eric Biggers <ebiggers@kernel.org>
-Date: Wed, 24 Sep 2025 13:18:22 -0700
-Subject: crypto: af_alg - Fix incorrect boolean values in af_alg_ctx
-
-From: Eric Biggers <ebiggers@kernel.org>
-
-commit d0ca0df179c4b21e2a6c4a4fb637aa8fa14575cb upstream.
-
-Commit 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in
-af_alg_sendmsg") changed some fields from bool to 1-bit bitfields of
-type u32.
-
-However, some assignments to these fields, specifically 'more' and
-'merge', assign values greater than 1.  These relied on C's implicit
-conversion to bool, such that zero becomes false and nonzero becomes
-true.
-
-With a 1-bit bitfields of type u32 instead, mod 2 of the value is taken
-instead, resulting in 0 being assigned in some cases when 1 was intended.
-
-Fix this by restoring the bool type.
-
-Fixes: 1b34cbbf4f01 ("crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg")
-Cc: stable@vger.kernel.org
-Signed-off-by: Eric Biggers <ebiggers@kernel.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- include/crypto/if_alg.h |    2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
---- a/include/crypto/if_alg.h
-+++ b/include/crypto/if_alg.h
-@@ -152,7 +152,7 @@ struct af_alg_ctx {
- 	size_t used;
- 	atomic_t rcvused;
- 
--	u32		more:1,
-+	bool		more:1,
- 			merge:1,
- 			enc:1,
- 			write:1,
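
For context on the bug addressed by the patch being removed here: a 1-bit bitfield of type u32 stores the assigned value modulo 2, while a bool bitfield stores whether the value is nonzero. The standalone sketch below is illustrative only (it is not part of the patch; 0x8000 is MSG_MORE's customary value, standing in for expressions such as msg->msg_flags & MSG_MORE):

  #include <stdio.h>
  #include <stdbool.h>

  struct ctx_u32  { unsigned int more:1; };  /* the buggy layout  */
  struct ctx_bool { bool         more:1; };  /* the restored type */

  int main(void)
  {
          struct ctx_u32  a = {0};
          struct ctx_bool b = {0};
          unsigned int flags = 0x8000;  /* e.g. MSG_MORE */

          a.more = flags;  /* u32:1 keeps flags % 2   -> 0, flag lost   */
          b.more = flags;  /* bool:1 keeps flags != 0 -> 1, as intended */

          printf("u32:1=%u bool:1=%u\n", (unsigned)a.more, (unsigned)b.more);
          return 0;
  }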



* [gentoo-commits] proj/linux-patches:6.16 commit in: /
@ 2025-10-02 14:17 Arisu Tachibana
  0 siblings, 0 replies; 38+ messages in thread
From: Arisu Tachibana @ 2025-10-02 14:17 UTC (permalink / raw
  To: gentoo-commits

commit:     b9ac67f5e18490b80c5fd88c0709d52f2f9ab86c
Author:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  2 14:16:42 2025 +0000
Commit:     Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Oct  2 14:16:42 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b9ac67f5

Remove 1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch

Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>

 0000_README                                        |  4 --
 ...-allow-adding-block-device-of-less-than-1.patch | 54 ----------------------
 2 files changed, 58 deletions(-)

diff --git a/0000_README b/0000_README
index 574e70f5..d6f79fa1 100644
--- a/0000_README
+++ b/0000_README
@@ -83,10 +83,6 @@ Patch:  1009_linux-6.16.10.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.16.10
 
-Patch:  1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/tree/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
-Desc:   btrfs: don't allow adding block device of less than 1 MB
-
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch b/1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
deleted file mode 100644
index e03e21a2..00000000
--- a/1401_btrfs-don-t-allow-adding-block-device-of-less-than-1.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From b4cbca440070641199920a2d73a6abd95c9a9ec7 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 2 Sep 2025 11:34:10 +0100
-Subject: btrfs: don't allow adding block device of less than 1 MB
-
-From: Mark Harmstone <mark@harmstone.com>
-
-[ Upstream commit 3d1267475b94b3df7a61e4ea6788c7c5d9e473c4 ]
-
-Commit 15ae0410c37a79 ("btrfs-progs: add error handling for
-device_get_partition_size_fd_stat()") in btrfs-progs inadvertently
-changed it so that if the BLKGETSIZE64 ioctl on a block device returned
-a size of 0, this was no longer seen as an error condition.
-
-Unfortunately this is how disconnected NBD devices behave, meaning that
-with btrfs-progs 6.16 it's now possible to add a device you can't
-remove:
-
-  # btrfs device add /dev/nbd0 /root/temp
-  # btrfs device remove /dev/nbd0 /root/temp
-  ERROR: error removing device '/dev/nbd0': Invalid argument
-
-This check should always have been done kernel-side anyway, so add a
-check in btrfs_init_new_device() that the new device doesn't have a size
-less than BTRFS_DEVICE_RANGE_RESERVED (i.e. 1 MB).
-
-Reviewed-by: Qu Wenruo <wqu@suse.com>
-Signed-off-by: Mark Harmstone <mark@harmstone.com>
-Reviewed-by: David Sterba <dsterba@suse.com>
-Signed-off-by: David Sterba <dsterba@suse.com>
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- fs/btrfs/volumes.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
-index f475b4b7c4578..817d3ef501ec4 100644
---- a/fs/btrfs/volumes.c
-+++ b/fs/btrfs/volumes.c
-@@ -2714,6 +2714,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
- 		goto error;
- 	}
- 
-+	if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) {
-+		ret = -EINVAL;
-+		goto error;
-+	}
-+
- 	if (fs_devices->seeding) {
- 		seeding_dev = true;
- 		down_write(&sb->s_umount);
--- 
-2.51.0
-



Thread overview: 38+ messages
2025-08-16  5:21 [gentoo-commits] proj/linux-patches:6.16 commit in: / Arisu Tachibana
2025-10-02 14:17 Arisu Tachibana
2025-10-02 14:14 Arisu Tachibana
2025-10-02 13:42 Arisu Tachibana
2025-10-02 13:30 Arisu Tachibana
2025-10-02 13:25 Arisu Tachibana
2025-10-02  3:28 Arisu Tachibana
2025-10-02  3:28 Arisu Tachibana
2025-10-02  3:12 Arisu Tachibana
2025-09-25 12:02 Arisu Tachibana
2025-09-20  6:29 Arisu Tachibana
2025-09-20  6:29 Arisu Tachibana
2025-09-20  5:31 Arisu Tachibana
2025-09-20  5:25 Arisu Tachibana
2025-09-12  3:56 Arisu Tachibana
2025-09-10  6:18 Arisu Tachibana
2025-09-10  5:57 Arisu Tachibana
2025-09-10  5:30 Arisu Tachibana
2025-09-05 14:01 Arisu Tachibana
2025-09-04 15:46 Arisu Tachibana
2025-09-04 15:33 Arisu Tachibana
2025-08-28 16:37 Arisu Tachibana
2025-08-28 16:01 Arisu Tachibana
2025-08-28 15:31 Arisu Tachibana
2025-08-28 15:19 Arisu Tachibana
2025-08-28 15:14 Arisu Tachibana
2025-08-25  0:00 Arisu Tachibana
2025-08-24 23:09 Arisu Tachibana
2025-08-21  4:31 Arisu Tachibana
2025-08-21  4:31 Arisu Tachibana
2025-08-21  1:07 Arisu Tachibana
2025-08-21  1:00 Arisu Tachibana
2025-08-21  0:27 Arisu Tachibana
2025-08-16  5:54 Arisu Tachibana
2025-08-16  5:54 Arisu Tachibana
2025-08-16  4:02 Arisu Tachibana
2025-08-16  3:07 Arisu Tachibana
2025-07-29  7:43 Arisu Tachibana
